A sender device may receive input data including at least one of text or speech input data during a given period of time. In response, the sender device may use one or more emotion detection modules to analyze the input data received during that period and detect emotional information in it, corresponding to the text or speech input received during the given period of time. The sender device may generate a message data stream that includes both text generated from the text or speech input during the given period of time and emotion data providing the emotional information detected during the same period. A recipient device may then use one or more emotion augmentation modules to process such a message data stream and output an emotionally augmented communication.
An example system and method elicits reviews and opinions from users via an online system or a web crawl. Opinions on topics are processed in real time to determine orientation. Each topic is analyzed sentence by sentence to find a central tendency of user orientation toward a given topic. Automatic topic orientation is used to provide a common comparable rating value between reviewers and potentially other systems on similar topics. Facets of the topics are extracted via a submission/acquisition process to determine the key variables of interest for users.
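The sentence-by-sentence orientation step above can be sketched as follows. This is a minimal illustration assuming a simple word-polarity lexicon (the `POLARITY` table is invented here); the abstract does not specify the actual scoring model, and the central tendency is taken to be the mean.

```python
import re
from statistics import mean

# Hypothetical word-polarity lexicon; a real system would use a learned model.
POLARITY = {"great": 1.0, "good": 0.5, "bad": -0.5, "awful": -1.0}

def sentence_orientation(sentence: str) -> float:
    """Average polarity of lexicon words in one sentence (0.0 if none found)."""
    words = re.findall(r"[a-z']+", sentence.lower())
    scores = [POLARITY[w] for w in words if w in POLARITY]
    return mean(scores) if scores else 0.0

def topic_orientation(review: str) -> float:
    """Central tendency (here, the mean) of per-sentence orientation."""
    sentences = [s for s in re.split(r"[.!?]+", review) if s.strip()]
    return mean(sentence_orientation(s) for s in sentences)
```

For example, a review whose sentences score 1.0 and -0.5 yields an overall orientation of 0.25, a single comparable rating value per topic.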
A social messaging platform includes a labeling module to label a base message according to an aggregate message response parameter, which represents the sentiment of users towards the content of the base message. The labels provide information that can be used to distinguish more nuanced sentiments and the degree of the sentiment users may have towards the base message. The aggregate message response parameter and corresponding labels are determined, in part, by identifying and evaluating icons (e.g., emojis, emoticons) present in one or more response messages posted in response to a base message. The labels, in turn, can be used in a variety of applications including recommending new content to users based on their mood, identifying messages potentially containing toxic content for review, or providing a way for businesses to evaluate public sentiment towards an advertisement and facilitate targeted advertisements to users.
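The icon-based aggregation described above might be sketched as below. The emoji-to-sentiment table and the labeling thresholds are illustrative assumptions, not values from the description.

```python
# Hypothetical emoji-to-sentiment table (single-codepoint emojis for simplicity).
EMOJI_SENTIMENT = {"😀": 1.0, "👍": 1.0, "😢": -0.5, "😡": -1.0}

def aggregate_response_parameter(responses: list[str]) -> float:
    """Mean sentiment over all known icons found in the response messages."""
    scores = [EMOJI_SENTIMENT[ch] for msg in responses for ch in msg
              if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

def label(parameter: float) -> str:
    """Map the aggregate parameter to a coarse sentiment label (assumed cutoffs)."""
    if parameter > 0.3:
        return "positive"
    if parameter < -0.3:
        return "negative"
    return "mixed"
```

A real platform would use a richer icon vocabulary and finer-grained labels to capture degree of sentiment, but the aggregation shape is the same.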
Example systems and methods are described for implementing a swipe-to-like feature. In an example implementation, a list of content items is displayed on a touchscreen display, and, based on detecting input of a first gesture (for example, a swipe gesture) for a first one of the content items in the list, a predetermined first sentiment is associated with the first content item.
Systems and methods for emoji prediction and visual sentiment analysis are provided. An example system includes a computer-implemented method. The method may be used to predict emoji or analyze sentiment for an input image. An example method includes the step of receiving an image. The example method further includes the steps of generating an emoji embedding for the image and generating a sentiment label for the image using the emoji embedding. The emoji embedding may be generated using a machine learning model.
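The two-stage idea can be sketched as follows: an emoji embedding (here, a toy probability vector over a fixed emoji vocabulary standing in for the machine learning model) followed by a sentiment label derived from it. The vocabulary, the polarity weights, and the stub embedder are all assumptions for illustration.

```python
EMOJIS = ["😀", "😢", "😡"]          # assumed emoji vocabulary
POLARITY = [1.0, -0.5, -1.0]        # assumed polarity per emoji

def emoji_embedding(image_pixels: list[int]) -> list[float]:
    """Stand-in for the trained model: overall brightness drives 'happiness'."""
    brightness = sum(image_pixels) / (255 * len(image_pixels))
    return [brightness, (1 - brightness) / 2, (1 - brightness) / 2]

def sentiment_label(embedding: list[float]) -> str:
    """Derive a sentiment label from the emoji embedding."""
    score = sum(p * w for p, w in zip(POLARITY, embedding))
    return "positive" if score > 0 else "negative"
```

The point is the pipeline shape: the sentiment label is computed from the emoji embedding rather than directly from the image.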
An embodiment of the invention discloses an information processing method and device, and electronic equipment. One specific embodiment of the information processing method comprises the steps of: displaying a mood image list in response to a received preset instruction for indicating addition of mood images, wherein the mood image list comprises at least one mood image; determining a target mood image according to a selection operation of a user on the mood images in the mood image list; and adding the target mood image to the user's avatar. According to the information processing method and device, the user's contacts can learn the user's mood more conveniently.
The invention provides a video file processing method and system, a medium, and electronic equipment. The method comprises the steps of: obtaining voice comment information input by a user for a current video file, wherein the voice comment information comprises voice content, voice duration, and comment mood; identifying the content of the current video file and generating a plurality of video scenes; determining the video scene that matches the voice comment information; and outputting the voice content when the video file is played to that video scene. The method makes commenting on videos more engaging and further increases user retention.
Provided in embodiments of the present disclosure are a video processing method and apparatus, a device, and a medium. The method includes: receiving a play trigger operation for a forwarded video, the forwarded video being forwarded from an original video by a forwarding user; obtaining the original video and the forwarding comment information provided by the forwarding user on the original video when forwarding the original video; and playing the original video on a play interface of video works of the forwarding user, and displaying the forwarding comment information. In the embodiments of the present disclosure, when the forwarding user forwards the original video, the original video can be played upon receiving a play trigger operation for the forwarded video, and the forwarding comment information provided by the forwarding user during the forwarding is displayed while the original video plays, instead of only the creator's comment information being displayed as in the related art. This makes it convenient for the user to learn the forwarding user's feelings about the forwarded video, thereby improving the user's sense of experience.
The embodiment of the invention discloses a display control method and device, equipment, and a storage medium. The method comprises the steps of: playing a first video in a first display area in a video stream playing interface of a preset application program; receiving a switching instruction and switching to play a second video in the first display area; and, under the condition that a preset reminding condition is met, changing the display area of the second video from the first display area to a second display area during playing, wherein the size of the second display area is smaller than that of the first display area, and displaying preset prompt information outside the second display area in the video stream playing interface. By combining the reduction in display size with the prompt information, the technical scheme enhances the user's awareness of anti-addiction measures, strengthens the reminding effect, and mitigates the problem that users easily become addicted to watching a video stream. Meanwhile, the abruptness and jarring feeling caused by directly playing a video in a smaller area are avoided, and the user's video watching experience is preserved.
The invention provides a video recommendation method and device, electronic equipment, and a computer readable storage medium. The method comprises the steps of: obtaining sensory information of a user when the user is detected to be watching a video; sending the sensory information to a server; receiving related information of a to-be-recommended video returned by the server, wherein the to-be-recommended video is determined based on the sensory information; and displaying the related information of the to-be-recommended video to the user. In the scheme of the present disclosure, when the user is detected watching a video, the user's sensory information is obtained; this information reflects the user's watching state, including the user's feeling about the currently watched video. Because the to-be-recommended video is determined based on the user's sensory information, the user's watching feeling is taken into account, so the recommended video is more accurate and better matches the user's preferences.
The invention provides a video music matching method and device, electronic equipment, and a computer readable storage medium, and relates to the technical field of video processing. The method comprises the following steps: inputting a to-be-matched video into a pre-trained first neural network model to obtain video features of a specific dimension; inputting the video features of the specific dimension into a predetermined second neural network model to obtain an emotion category of the to-be-matched video; obtaining a plurality of songs corresponding to the emotion category from a song library and extracting the audio features of each song; and calculating the Euclidean distance between the specific-dimension video features of the to-be-matched video and the audio features of each song, taking any song whose Euclidean distance is within a preset range as a recommended song for the video. According to the invention, music can be matched to a video automatically based on the emotion category the video expresses, improving the user experience.
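The final matching step, Euclidean distance between the video's feature vector and each candidate song's audio features, can be sketched as below. The feature extraction itself (the two neural network models) is assumed to have already produced these vectors; the threshold value is an illustrative assumption.

```python
import math

def euclidean(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend_songs(video_features: list[float],
                    song_features: dict[str, list[float]],
                    threshold: float) -> list[str]:
    """Return names of songs whose distance to the video is within threshold."""
    return [name for name, feats in song_features.items()
            if euclidean(video_features, feats) <= threshold]
```

In practice the features would come from the trained models and the "preset range" would be tuned, but the selection logic is this comparison.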
A method and a device for generating stickers are provided. An embodiment of the method includes: extracting an image sequence from a to-be-processed person-containing video, wherein the image sequence comprises target images displaying faces; identifying the emotions of the faces displayed in each of the target images in the image sequence to obtain corresponding identification results, wherein the identification results comprise emotion labels and emotional levels corresponding to the emotion labels; and, based on the emotional levels corresponding to the emotion labels in the identification results for each target image, extracting a video fragment from the person-containing video and using the video fragment as a sticker. The embodiment can thus generate stickers from a given person-containing video based on facial emotion matching.
The invention discloses a reading interaction method, device, and equipment, a server, and a storage medium. The method comprises the following steps: receiving a user face image captured while the user reads; identifying the user's face image and determining the user's state expression; and, when the state expression meets a reading interaction condition, determining interaction content corresponding to the state expression and playing the interaction content. By interacting with the reading user through state expressions during reading, the method lets the user share emotional changes with a virtual reading partner, making an otherwise solitary reading process more engaging and increasing the user's interest and enthusiasm for reading.
The embodiment of the invention provides an image sentiment analysis method and device and electronic equipment, and belongs to the technical field of image processing, and the method comprises the steps: inputting an image into a first network, so as to determine the theme of the image; determining an image theme emotion attribute of the image theme; and inputting the image of which the image theme emotion attribute is determined into a second network to quantify the image theme emotion attribute, with the second network comprising sub-networks corresponding to each theme, and the image of the corresponding theme being input into the sub-network of the corresponding theme. Through the processing scheme of the invention, the emotion of the image theme can be obtained, and the emotion can be quantified.
The embodiment of the invention discloses an interactive data processing method and device, equipment, and a storage medium. The method comprises the steps of: recording the touch time of a touch event when a touch event on a preset type of control is detected on a live display interface; if the touch time exceeds a set time threshold, determining that the touch event is a long-press operation; when the touch event is a long-press operation, generating animation paths at set time intervals, wherein every two adjacent generated animation paths are different; and playing an animation according to each animation path. According to the technical scheme provided by the embodiment of the invention, the interest of the live broadcast room can be increased and user emotion expression can be promoted.
The present invention relates to a sound effect adding method and apparatus, a storage medium, and an electronic device. The method comprises: determining, on the basis of an emotion judgment model, a statement emotion label of each statement of a text to be processed; determining an emotion offset value of said text on the basis of the emotion label type that occurs most frequently among the multiple statement emotion labels; for each paragraph of said text, determining an emotion distribution vector of the paragraph according to the statement emotion label of at least one statement corresponding to the paragraph; determining an emotion probability distribution of the paragraph on the basis of the emotion offset value and the emotion distribution vector corresponding to the paragraph; determining, according to the emotion probability distribution of the paragraph and sound effect emotion labels of multiple sound effects in a sound effect library, a target sound effect matching the paragraph; and adding the target sound effect to an audio position corresponding to the paragraph in an audio file corresponding to said text. Thus, sound effects can be selected and added automatically, and the efficiency of adding sound effects can be improved.
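The selection step can be sketched as follows: combine a paragraph's emotion distribution with a document-level offset, then pick the sound effect whose emotion label matches the most probable emotion. The emotion label set, the way the offset is applied, and the sound effect library are all simplified assumptions.

```python
EMOTIONS = ["happy", "sad", "tense"]  # assumed emotion label set

def emotion_probabilities(distribution: list[float],
                          offset: list[float]) -> list[float]:
    """Apply the offset componentwise and renormalize into a distribution."""
    shifted = [max(d + o, 0.0) for d, o in zip(distribution, offset)]
    total = sum(shifted)
    return [s / total for s in shifted]

def pick_sound_effect(probabilities: list[float],
                      library: dict[str, str]) -> str:
    """library maps sound-effect name -> emotion label; return the first
    sound effect labeled with the paragraph's most probable emotion."""
    top = EMOTIONS[max(range(len(EMOTIONS)), key=probabilities.__getitem__)]
    return next(name for name, lbl in library.items() if lbl == top)
```

For instance, an offset that boosts "sad" can tip a mildly positive paragraph toward a melancholy sound effect, which is the role the document-level offset plays.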
The embodiment of the invention discloses a method and device for video music matching. A specific embodiment of the method comprises the steps of: obtaining a to-be-matched video; inputting the to-be-matched video into a pre-trained video emotion classification model to obtain at least one piece of emotion classification information corresponding to the to-be-matched video and a probability corresponding to the emotion classification information; acquiring a to-be-recalled music information set corresponding to the to-be-matched video, wherein each piece of music information in the set corresponds to at least one emotion label; and generating a recalled music information list by matching the at least one piece of emotion classification information and its probability against the emotion labels corresponding to the music information in the to-be-recalled music information set. According to the embodiment, the emotion dimension information of the video is fully utilized, so that the quality of the video music matching is effectively improved.
The embodiment of the invention discloses a method and device for identifying character emotion. A specific embodiment of the method comprises the steps of: extracting a face image set from a to-be-processed person video; dividing the face image set into at least one face image group based on the matching relationships between the face images, wherein different face image groups correspond to different persons displayed in the person video; for each face image group, performing expression recognition on each face image in the group to obtain a corresponding expression recognition result; and determining the emotion information of the person corresponding to the face image group based on the expression recognition results of the face images in the group. The embodiment thereby realizes character emotion recognition based on facial expressions.
The embodiment of the invention provides an interaction method and device, electronic equipment, and a storage medium. The method comprises the following steps: receiving a first trigger operation acting on a target control corresponding to a target work; and, in response to the first trigger operation, displaying a target visual element at a preset display position corresponding to the target work and controlling the target visual element to move from the preset display position to a first target display position of a target identifier, the target identifier being an identifier of the target publisher of the target work. By adopting this technical scheme, a new interaction mode can be provided through which the user can express emotion, making interaction more engaging and further increasing user interaction frequency and improving the interaction experience.
The embodiment of the invention discloses a music sharing method, system, and device, electronic equipment, and a storage medium. The method comprises the following steps: entering a lyric video template display interface related to a target song when an instruction for displaying a lyric video template is triggered; generating a lyric video according to the user's video editing operations on the lyric video template display interface; and, in response to a video publishing instruction from the user, publishing the lyric video to a target position. The technical scheme addresses the prior-art problems that music sharing modes and scenarios are relatively limited and cannot meet users' need for active, rich, music-related emotional expression. It can satisfy users' music expression needs, help users express music-related emotions more actively and richly, and help a streaming media product achieve wider social media distribution, thereby attracting more high-quality new users.
The invention relates to a background music generation method and device, a readable medium, and electronic equipment, in the technical field of electronic information processing. The method comprises the steps of: obtaining a target text and the target type of the target text; determining a target chord corresponding to the target type; determining a target emotion label of the target text; and generating background music corresponding to the target text according to the target emotion label and the target chord. Background music suited to the target text is generated automatically from the text's type and emotion label, so manual selection is not needed and the limitations of existing music are avoided. This improves background music generation efficiency, expands the application range of background music, and enriches the connotation and expressiveness of the target text.
Embodiments of the present disclosure provide a media collection generation method and apparatus, an electronic device, a storage medium, a computer program product, and a computer program. In the media collection generation method, a plurality of emotion identifiers are displayed in a playing interface for a target piece of media, the emotion identifiers being used for representing preset emotion types; and, in response to a first interaction operation on a target emotion identifier, the target piece of media is added to a target emotion media collection corresponding to the target emotion identifier. The emotion identifiers preconfigured in the playing interface, triggered by means of an interaction operation, implement classification of the target piece of media, so that corresponding emotion media collections are generated and media are classified on the basis of the user's emotions and feelings. The user's experience of a personalized media collection is improved, media collection generation steps and logic are simplified, and media collection generation efficiency is improved.
Implementations generally relate to selecting soundtracks. In some implementations, a method includes determining one or more sound mood attributes of one or more soundtracks, where the one or more sound mood attributes are based on one or more sound characteristics. The method further includes determining one or more visual mood attributes of one or more visual media items, where the one or more visual mood attributes are based on one or more visual characteristics. The method further includes selecting one or more of the soundtracks based on the one or more sound mood attributes and the one or more visual mood attributes. The method further includes generating an association among the one or more selected soundtracks and the one or more visual media items, wherein the association enables the one or more selected soundtracks to be played while the one or more visual media items are displayed.
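One way to realize the selection step is to represent both sound mood attributes and visual mood attributes as attribute-to-weight mappings and score each soundtrack by cosine similarity. This is a hedged sketch; the attribute vocabulary and weighting are invented for illustration and the description does not prescribe a similarity measure.

```python
import math

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse attribute->weight vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_soundtrack(visual_mood: dict[str, float],
                      soundtracks: dict[str, dict[str, float]]) -> str:
    """Return the soundtrack whose sound mood best matches the visual mood."""
    return max(soundtracks,
               key=lambda name: cosine(soundtracks[name], visual_mood))
```

A shared attribute space is what lets sound and visual moods be compared at all; any common vocabulary (calm, energetic, dark, ...) would do.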
Methods, systems, and media for personalizing computerized services based on mood and/or behavior information from multiple data sources are provided. In some implementations, the method comprises: obtaining information associated with an objective of a user of a computing device from multiple data sources; determining that a portion of information from each of the data sources is relevant to the user having the objective, wherein the portion of information is indicative of a physical or emotional state of the user of the computing device; assigning the user of the computing device into a group of users based at least in part on the objective and the portion of information from each of the data sources; determining a target profile associated with the user based at least in part on the objective and the assigned group; generating a current profile for the user of the computing device based on the portion of information from each of the data sources; comparing the current profile with the target profile to determine a recommended action, wherein the recommended action is determined to have a likelihood of impacting the physical or emotional state of the user; determining one or more devices connected to the computing device, wherein each of the one or more devices has one or more device capabilities; and causing the recommended action to be executed on one or more of the computing device and the devices connected to the computing device based on the one or more device capabilities.
A dynamic text-to-speech (TTS) process and system are described. In response to receiving a command to provide information to a user, a device retrieves information and determines user and environment attributes including: (i) a distance between the device and the user when the user uttered the query; and (ii) voice features of the user. Based on the user and environment attributes, the device determines a likely mood of the user and a likely environment in which the user and user device are located. An audio output template matching the likely mood and voice features of the user is selected. The audio output template is also compatible with the environment in which the user and device are located. The retrieved information is converted into an audio signal using the selected audio output template and output by the device.
A given set of videos are sequenced in an aesthetically pleasing manner using models learned from human curated playlists. Semantic features associated with each video in the curated playlists are identified and a first order Markov chain model is learned from curated playlists. In one method, a directed graph using the Markov model is induced, wherein sequencing is obtained by finding the shortest path through the directed graph. In another method a sampling based approach is implemented to produce paths on the digraph. Multiple samples are generated and the best scoring sample is returned as the output. In a third method, a relevance based random walk sampling algorithm is modified to produce a reordering of the playlist.
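The Markov-chain idea can be sketched as below: learn first-order transition counts over semantic features from curated playlists, then order a new set of items by greedily following the most probable transitions. Simple genre tags stand in for the learned semantic features, and the greedy walk is a simplification; the shortest-path and sampling variants described above are not shown.

```python
from collections import Counter

def learn_transitions(playlists: list[list[str]]) -> Counter:
    """Count first-order transitions (a -> b) across all curated playlists."""
    counts = Counter()
    for playlist in playlists:
        for a, b in zip(playlist, playlist[1:]):
            counts[(a, b)] += 1
    return counts

def greedy_sequence(items: list[str], start: str, counts: Counter) -> list[str]:
    """Order items by repeatedly taking the most-counted next transition."""
    remaining, order = set(items) - {start}, [start]
    while remaining:
        nxt = max(remaining, key=lambda x: counts[(order[-1], x)])
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

The shortest-path formulation would instead induce a directed graph with edge weights derived from these counts and solve for an optimal path.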
Described herein are methods and system for analyzing music audio. An example method includes obtaining a music audio track, calculating acoustic features of the music audio track, calculating geometric features of the music audio track in view of the acoustic features, and determining a mood of the music audio track in view of the geometric features.
Emoticons or other images are inserted into text messages during chat sessions without leaving the chat session by entering an input sequence onto an input area of a touchscreen on an electronic device, thereby causing an emoticon library to be presented to a user. The user selects an emoticon, and the emoticon library either closes automatically or closes after the user enters a closing input sequence. The opening and closing input sequences are, for example, any combination of swipes and taps along or on the input area. Users are also able to add content to chat sessions and generate mood messages to chat sessions.
The present invention relates to anticipatory lighting from device screens based on user profiles. Systems, methods, and computer readable storage mediums are provided for determining the mood of a user, deriving an appropriate lighting scheme, and then implementing the lighting scheme on all devices within a predetermined proximity to the user. Furthermore, when the user begins a task, the devices can track the user and use the lighting from the nearby screens to offer functional lighting.
Methods, systems, and media for ambient background noise modification are provided. In some implementations, the method comprises: identifying at least one noise present in an environment of a user having a user device, an activity the user is currently engaged in, and a physical or emotional state of the user; determining a target ambient noise to be produced in the environment based at least in part on the identified noise, the activity the user is currently engaged in, and the physical or emotional state of the user; identifying at least one device associated with the user device to be used to produce the target ambient noise; determining sound outputs corresponding to each of the one or more identified devices, wherein a combination of the sound outputs produces an approximation of one or more characteristics of the target ambient noise; and causing the one or more identified devices to produce the determined sound outputs.
This document describes automated nursing assessments. Automation of the nursing assessment involves a nursing-assessment device that makes determinations of a person’s mood, physical state, psychosocial state, and neurological state. To determine a mood and physical state of a person, video of the person is captured while the person is positioned in front of an everyday object, such as a mirror. The captured video is then processed according to human condition recognition techniques, which produces indications of the person’s mood and physical state, such as whether the person is happy, sad, healthy, sick, vital signs, and so on. In addition to mood and physical state, the person’s psychosocial and neurological state are also determined. To do so, questions are asked of the person. These questions are determined from a plurality of psychosocial and neurological state assessment questions, which include queries regarding how the person feels, what the person has been doing, and so on. The determined questions are asked through audible or visual interfaces of the nursing-assessment device. The person’s responses are then analyzed. The analysis involves processing the received answers according to psychosocial and neurological state assessment techniques to produce indications of the person’s psychosocial and neurological state.
Systems and methods are provided for identifying and rendering content relevant to a user’s current mental state and context. In an aspect, a system includes a state component that determines a state of a user during a current session of the user with the media system based on navigation of the media system by the user during the current session, media items provided by the media system that are played for watching by the user during the current session, and a manner via which the user interacts with or reacts to the played media items. In an aspect, the state of the user includes a mood of the user. A selection component then selects a media item provided by the media provider based on the state of the user, and a rendering component effectuates rendering of the media item to the user during the current session.
Disclosed herein is an “activity assistant” and an “activity assistant user interface” that provides users with dynamically-selected “activities” that are intelligently tailored to the user's world. For example, a graphical UI includes selectable context elements, each of which corresponds to a user-attribute whose value provides a signal to the activity assistant. In response to selecting a parameter associated with at least one of the selectable context elements, a first signal is generated and provided to the activity assistant. In response to providing the signal, one or more activities are populated and ordered based, at least in part, on the signal, and subsequently displayed. The parameters may include a current mood of a user, a current location of the user, associations with other users, and a time during which the user desires to carry out the activity in some examples.
A method includes providing, by an audio playback interface, an initial playlist comprising audio tracks. The method includes receiving a user preference associated with an initial audio track during a listening session, wherein the user preference is indicative of a listening mood of a user and comprises one or more of a user behavior or a natural language input. The method includes generating a representation of the user preference in a joint audio-text embedding space by applying a two-tower model comprising an audio embedding network and a text embedding network. A proximity of two embeddings is indicative of semantic similarity. The method includes training a machine learning model to generate an updated playlist responsive to the listening mood of the user during the listening session. The method includes applying the machine learning model to generate the updated playlist. The method includes substituting the initial playlist with the updated playlist.
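The two-tower idea can be sketched as follows: separate audio and text "towers" map their inputs into a shared space where proximity means semantic similarity. The hand-written feature maps below are toy stand-ins for the trained embedding networks, and the keyword mapping in the text tower is an assumption.

```python
import math

def audio_tower(tempo_bpm: float, energy: float) -> tuple:
    """Toy audio tower: map raw track attributes into the shared space."""
    return (tempo_bpm / 200.0, energy)

def text_tower(preference: str) -> tuple:
    """Toy text tower: assumed mapping from preference keywords to the space."""
    return (0.3, 0.2) if "calm" in preference else (0.8, 0.9)

def rank_tracks(preference: str,
                tracks: dict[str, tuple]) -> list[str]:
    """Order candidate tracks by proximity to the text-preference embedding."""
    query = text_tower(preference)
    return sorted(tracks,
                  key=lambda t: math.dist(audio_tower(*tracks[t]), query))
```

In the real system both towers are learned jointly so that semantically related audio and text land near each other; the ranking step then looks just like this nearest-neighbor query.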
A method for social interacting, including using a portable messaging device for designating, from time to time, a plurality of friends, selecting a mood, sending one or more representations of the selected mood to each of the plurality of designated friends, further selecting an updated mood, and further sending one or more representations of the updated mood to each of the plurality of designated friends, to supersede the previously sent one or more representations of the mood. A user interface is also described and claimed.
Implementations described herein relate to causing emoji(s) that are associated with a given emotion class expressed by a spoken utterance to be visually rendered for presentation to a user at a display of a client device of the user. Processor(s) of the client device may receive audio data that captures the spoken utterance, process the audio data to generate textual data that is predicted to correspond to the spoken utterance, and cause a transcription of the textual data to be visually rendered for presentation to the user via the display. Further, the processor(s) may determine, based on processing the textual data, whether the spoken utterance expresses a given emotion class. In response to determining that the spoken utterance expresses the given emotion class, the processor(s) may cause emoji(s) that are stored in association with the given emotion class to be visually rendered for presentation to the user via the display.
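The final rendering step can be sketched as a lookup: once an emotion class has been predicted for the transcribed utterance, fetch the emoji(s) stored in association with that class. The keyword-match classifier below is a stand-in for the model the description implies, and the emoji store and cue words are assumptions.

```python
# Assumed store of emojis per emotion class.
EMOJI_STORE = {"joy": ["😀", "🎉"], "sadness": ["😢"], "neutral": []}

# Assumed cue words standing in for a learned emotion classifier.
KEYWORDS = {"joy": {"great", "love", "awesome"}, "sadness": {"sad", "miss"}}

def classify_emotion(transcript: str) -> str:
    """Toy classifier: first emotion class whose cue words appear."""
    words = set(transcript.lower().split())
    for emotion, cues in KEYWORDS.items():
        if words & cues:
            return emotion
    return "neutral"

def emojis_for(transcript: str) -> list[str]:
    """Emojis stored in association with the predicted emotion class."""
    return EMOJI_STORE[classify_emotion(transcript)]
```

The textual data itself is still rendered as a transcription; the emoji lookup only augments it when an emotion class is detected.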
Meetings held in virtual environments can allow participants to conveniently express emotions to a meeting organizer and/or other participants. The avatar representing a meeting participant can be enhanced to include an expression symbol selected by that participant. The participant can choose among a set of expression symbols offered for the meeting.
A computing device is described that includes a camera configured to capture an image of a user of the computing device, a memory configured to store the image of the user, at least one processor, and at least one module. The at least one module is operable by the at least one processor to obtain, from the memory, an indication of the image of the user of the computing device, determine, based on the image, a first emotion classification tag, and identify, based on the first emotion classification tag, at least one graphical image from a database of pre-classified images that has an emotional classification that is associated with the first emotion classification tag. The at least one module is further operable by the at least one processor to output, for display, the at least one graphical image.
A device may detect a negative emotion of a user and identify, based on detecting the negative emotion of the user, a task being performed by the user in relation to an item. The device may obtain, based on identifying the task, information to aid the user in performing the identified task in relation to the item. The information may include at least one of information, obtained from a memory associated with the device, in a help document, a user manual, or an instruction manual relating to performing the task in relation to the item; information, obtained from a network, identifying a document relating to performing the task in relation to the item; or information identifying a video relating to performing the task in relation to the item. The device may provide the obtained information to the user.
Systems and methods for capturing the emotion of a user when viewing particular media content. The method, implemented on a computer system having one or more processors and memory, includes detecting display of a media content item, e.g., a video clip, an audio clip, a photo, or a text message. While the media content item is being displayed, a viewer expression (e.g., an emotion) is detected that corresponds to a predefined viewer expression, for example by comparing the captured expression against a database of expressions, and a portion of the media content item (e.g., a scene of the video clip) that corresponds to the viewer's expression is identified. The viewer expression is based on one of a facial expression, a body movement, a voice, or an arm, leg, or finger gesture, and is presumed to be the viewer's reaction to that portion of the media content item.
Described techniques may be utilized to receive a transcription stream including transcribed text that has been transcribed from speech, and to receive a summary request for a summary to be provided on a display of a device. Extracted text may be identified from the transcribed text and in response to the summary request. The extracted text may be processed using a summarization machine learning (ML) model to obtain a summary of the extracted text, and the summary may be displayed on the display of the device. When an image is captured, an augmented summary may be generated that includes the image together with a visual indication of one or more of an emotion, an entity, or an intent associated with the image, the summary, or the extracted text.
Systems and methods for capturing media content in accordance with viewer expression are disclosed. In some implementations, a method is performed at a computer system having one or more processors and memory storing one or more programs for execution by the one or more processors. The method includes: (1) while a media content item is being presented to a user, capturing a momentary reaction of the user; (2) comparing the captured user reaction with one or more previously captured reactions of the user; (3) identifying the user reaction as one of a plurality of reaction types based on the comparison; (4) identifying the portion of the media content item corresponding to the momentary reaction; and (5) storing an association between the identified user reaction and the portion of the media content item.
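Steps (2) through (5) of this method can be sketched as a nearest-neighbour comparison against previously captured reactions, followed by storing the association. The 2-D feature vectors and squared-Euclidean distance below are illustrative assumptions:

```python
# Step (3): classify a captured reaction by its nearest previously
# captured, labeled reaction.
def classify_reaction(captured, labeled_previous):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(
        ((label, distance(captured, vector))
         for vector, label in labeled_previous),
        key=lambda pair: pair[1],
    )
    return label

# Step (2): previously captured reactions of the user (hypothetical).
previous = [([0.9, 0.1], "smile"), ([0.1, 0.9], "frown")]
reaction_type = classify_reaction([0.8, 0.2], previous)

# Step (5): store the association between the identified reaction and
# the portion of the media content item (identifiers are made up).
associations = {("video123", "00:41-00:52"): reaction_type}
```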
The technology relates to methods for detecting and classifying emotions in textual communication and using this information to suggest graphical indicia such as emoji, stickers or GIFs to a user. Two main model types are fully supervised models and few-shot models; models may also be tailored to the back-end (server) side or the client (on-device) side. Server-side models are larger-scale models that can enable higher degrees of accuracy, such as for use cases where models can be hosted on cloud servers where computational and storage resources are relatively abundant. On-device models are smaller-scale models, which enable use on resource-constrained devices such as mobile phones, smart watches or other wearables (e.g., head mounted displays), in-home devices, embedded devices, etc.
In one embodiment, a social networking system, in response to receiving an action request from a user, expands the portion of a social networking web site with which the user interacted to initiate the action request, and populates the expanded portion with object suggestions of the same type as the target object of the action request. In particular embodiments, the object suggestions are based at least in part on the characteristics of the target object of the action request. Such embodiments capitalize on the transitory mood of the user and facilitate and promote the chaining of subsequent action requests.
Systems, methods, and non-transitory computer-readable media can acquire a set of media content items. A mood indication can be acquired. A soundtrack can be identified based on the mood indication. A video content item can be dynamically generated in real-time based on the set of media content items and the mood indication. The video content item can include the soundtrack.
A content item is sent for display on client devices of users of an online system. Information indicating that a first user is currently viewing the content item is received from a client device. A second user connected to the first user is identified. The second user is performing a user interaction with the content item while the first user is currently viewing the content item. An emotion associated with the user interaction is determined. A widget identifying the second user and the emotion is sent for display to the client device. The widget is configured to move across the content item displayed on the client device while the first user is currently viewing the content item. Responsive to receiving from the client device a user interaction with the widget, information is sent for display indicating the second user in a field for receiving comments by the first user.
In one embodiment, a method includes a client device receiving a selection of an emotion capture button. The emotion capture button is associated with an emotion. In response to the receiving the selection of the emotion capture button, the client device captures a video clip designated with a categorization specifying the emotion associated with the selected emotion capture button.
A reactive profile picture brings a profile image to life by displaying short video segments of the target user expressing a relevant emotion in reaction to an action by a viewing user that relates to content associated with the target user in an online system such as a social media web site. The viewing user therefore experiences a real-time reaction in a manner similar to a face-to-face interaction. The reactive profile picture can be automatically generated from either a video input of the target user or from a single input image of the target user.
A client device displays a content item and a first facial expression superimposed on the content item. Concurrently with and separately from displaying the first facial expression, a range of emotion indicators is displayed, each emotion indicator of the range of emotion indicators corresponding to a respective opinion of a range of opinions. A first user input is detected at a display location corresponding to a respective emotion indicator of the range of emotion indicators. In response to detecting the first user input, the first facial expression is updated to match the respective emotion indicator.
Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc. to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or cancelling automatic effect application using a gesture. Effects may also be applied in group settings, and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, the three channels may be sent to an intermediate server, which stitches the three channels together into a single video stream; the single video stream may then be sent to a broadcast server for distribution to the broadcast recipients.
Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion type. Other embodiments are described and claimed.
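A minimal sketch of this flow, assuming a simple in-memory store standing in for the storage and API components, with illustrative per-item emotion tags:

```python
# Stand-in for the storage component plus the API component's
# request/response behavior.
class EmotionStore:
    def __init__(self):
        self._emotion = None

    def store(self, emotion):
        self._emotion = emotion

    def get_emotion(self):
        # What the API component would return to a requesting application.
        return self._emotion

# Content items, each associated with one or more emotion types
# (hypothetical entries).
CONTENT_ITEMS = [
    {"id": 1, "emotions": {"happy", "excited"}},
    {"id": 2, "emotions": {"sad"}},
]

def content_for_emotion(store, items):
    # An application searches among content items for those associated
    # with the identified emotion type.
    emotion = store.get_emotion()
    return [item["id"] for item in items if emotion in item["emotions"]]

store = EmotionStore()
store.store("happy")
```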
In one embodiment, a method includes a computing system receiving a request to create a live streaming channel associated with a television program. The system may determine a plurality of breaks of the television program and their respective start times. The system may identify target users of a social-media network based on their respective user profiles, social-graph data, or activity patterns on the social-media network. The system may create the live streaming channel based on the request. Upon determining that a current time is within a predetermined time window prior to a first start time of a first break of the plurality of breaks, the system may send notifications to the target users, wherein each of the notifications includes a link to the live streaming channel through which live content related to the television program may be streamed.
The present disclosure relates to systems, methods, and devices for augmenting text messages. In particular, the message system augments text messages with emotion information of a user based on characteristics of a keyboard input from the user. For example, one or more implementations involve predicting an emotion of the user based on the characteristics of the keyboard input for a message. One or more embodiments of the message system select a formatting for the text of the message based on the predicted emotion and format the message within a messaging application in accordance with the selected formatting.
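A hypothetical sketch of the described pipeline: keyboard-input characteristics feed an emotion prediction, which selects a formatting for the message text. The features, thresholds, and styles below are invented for illustration:

```python
# Assumption: fast typing with heavy backspacing signals frustration,
# slow typing signals calm. Cutoffs are illustrative.
def predict_emotion(chars_per_second, backspace_ratio):
    if chars_per_second > 8 and backspace_ratio > 0.3:
        return "frustrated"
    if chars_per_second < 2:
        return "calm"
    return "neutral"

# Hypothetical formatting choices per predicted emotion.
FORMAT_BY_EMOTION = {
    "frustrated": {"color": "red", "weight": "bold"},
    "calm": {"color": "blue", "weight": "normal"},
    "neutral": {"color": "black", "weight": "normal"},
}

def format_message(text, chars_per_second, backspace_ratio):
    # Format the message in accordance with the selected formatting.
    style = FORMAT_BY_EMOTION[predict_emotion(chars_per_second, backspace_ratio)]
    return {"text": text, **style}
```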
Systems, methods, and non-transitory computer-readable media can determine one or more chunks for a content item to be captioned. Each chunk can include one or more terms that describe at least a portion of the subject matter captured in the content item. One or more sentiments are determined based on the subject matter captured in the content item. One or more emotions are determined for the content item. At least one emoted caption is generated for the content item based at least in part on the one or more chunks, sentiments, and emotions. The emoted caption can include at least one term that conveys an emotion represented by the subject matter captured in the content item.
In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
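The lookup-table selection this abstract mentions can be sketched directly; the emotions and mask effects below are hypothetical entries, not from the source:

```python
# Hypothetical lookup table mapping identified emotions to masks, each
# mask specifying one or more mask effects.
MASK_TABLE = {
    "happy": {"effects": ["sparkles", "sun_rays"]},
    "angry": {"effects": ["storm_clouds"]},
}

def select_mask(emotion, default_mask=None):
    # Map the identified emotion to a mask; fall back to a default
    # when no entry exists.
    return MASK_TABLE.get(emotion, default_mask)
```

Applying the selected mask would then generate graphical features per its listed effects and incorporate them into the output image.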
Users of a social networking system perform actions on various objects maintained by the social networking system. Some of these actions may indicate that the user has a negative sentiment for an object. To make use of this negative sentiment when providing content to the user, when the social networking system determines a user performs an action on an object, the social networking system identifies topics associated with the object and associates the negative sentiment with one or more of the topics. This association between one or more topics and negative sentiment may be used to decrease the likelihood that the social networking system presents content associated with a topic that is associated with a negative sentiment of the user.
A social networking system infers a sentiment polarity of a user toward content of a page. The sentiment polarity of the user is inferred based on received information about an interaction between the user and the page (e.g., like, report, etc.), and may be based on analysis of a topic extracted from text on the page. The system infers a positive or negative sentiment polarity of the user toward the content of the page, and that sentiment polarity then may be associated with any second or subsequent interaction from the user related to the page content. The system may identify a set of trusted users with strong sentiment polarities toward the content of a page or topic, and may use the trusted user data as training data for a machine learning model, which can be used to more accurately infer sentiment polarity of users as new data is received.
A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system measures the impact of the communications on the behavior and/or sentiment of the users towards the brand owner. For example, the social networking system presents users with surveys after presentation of a communication about an object associated with a brand owner and determines the impact of the communication from the responses to the survey. The impact of the communications may then be reported to the brand owner.
Systems, methods, and non-transitory computer readable media can obtain a conversation of a user in a chat application associated with a system, where the conversation includes one or more utterances by the user. An analysis of the one or more utterances by the user can be performed. A sentiment associated with the conversation can be determined based on a machine learning model, wherein the machine learning model is trained based on a plurality of features including demographic information associated with users.
In one embodiment, a method includes accessing a plurality of communications, each communication being associated with a particular content item and including a text of the communication; calculating, for each of the communications, sentiment-scores corresponding to sentiments, wherein each sentiment-score is based on a degree to which n-grams of the text of the communication match sentiment-words associated with the sentiments; determining, for each of the communications, an overall sentiment for the communication based on the calculated sentiment-scores for the communication; calculating sentiment levels for the particular content item corresponding to the sentiments, each sentiment level being based on a total number of communications determined to have the overall sentiment of the sentiment level; and generating a sentiments-module including sentiment-representations corresponding to overall sentiments having sentiment levels greater than a threshold sentiment level.
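A minimal sketch of the described scoring, assuming unigram matching against illustrative sentiment-word lists (the abstract's n-grams generalize the same idea):

```python
# Illustrative sentiment-word lists, not from the source.
SENTIMENT_WORDS = {
    "positive": {"great", "love", "awesome"},
    "negative": {"awful", "hate", "boring"},
}

def sentiment_scores(text):
    # Each sentiment-score counts how many tokens of the communication's
    # text match the sentiment-words for that sentiment.
    tokens = text.lower().split()
    return {
        sentiment: sum(token in words for token in tokens)
        for sentiment, words in SENTIMENT_WORDS.items()
    }

def overall_sentiment(text):
    scores = sentiment_scores(text)
    return max(scores, key=scores.get)

def sentiment_levels(communications):
    # Sentiment level = total number of communications determined to
    # have that overall sentiment.
    levels = {}
    for text in communications:
        sentiment = overall_sentiment(text)
        levels[sentiment] = levels.get(sentiment, 0) + 1
    return levels
```

Levels above a threshold would then determine which sentiment-representations appear in the generated sentiments-module.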
Systems, methods, and non-transitory computer readable media are configured to determine a likelihood of a rejection of a notification proposed for delivery to a recipient. A delivery determination for the notification can be performed. Subsequently, the notification can be delivered to the recipient based on the delivery determination.
In one embodiment, a method includes accessing a number of content objects associated with a user; and analyzing text, audio, or visual content of each of the content objects as well as any interactions by the user with each of the content objects. The analyzing includes identifying subject matter and user sentiment related to the respective content object. The method also includes inferring, based on the identified subject matter or user sentiment, one or more interests of the user; and modifying, for display on a client device, an online page of the user to incorporate content related to one or more of the inferred interests of the user.
A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system determines a sentiment associated with a communication and may send a report based on the sentiment of the communications towards the brand owner. A request from a brand owner to present one or more response communications to users based on the users' relationship to a communication from a user about the object and the sentiment determined from the communication may be received by the social networking system. Based on the request, the social networking system presents a response communication to one or more users.