In one embodiment, a social networking system, in response to receiving an action request from a user, expands the portion of a social networking web site with which the user interacted to initiate the action request, and populates the expanded portion with object suggestions of the same type as the target object of the action request. In particular embodiments, the object suggestions are based at least in part on the characteristics of the target object of the action request. Such embodiments capitalize on the transitory mood of the user and facilitate and promote the chaining of subsequent action requests.
Systems, methods, and non-transitory computer-readable media can acquire a set of media content items. A mood indication can be acquired. A soundtrack can be identified based on the mood indication. A video content item can be dynamically generated in real-time based on the set of media content items and the mood indication. The video content item can include the soundtrack.
A content item is sent for display on client devices of users of an online system. Information indicating that a first user is currently viewing the content item is received from a client device. A second user connected to the first user is identified. The second user is performing a user interaction with the content item while the first user is currently viewing the content item. An emotion associated with the user interaction is determined. A widget identifying the second user and the emotion is sent for display to the client device. The widget is configured to move across the content item displayed on the client device while the first user is currently viewing the content item. Responsive to receiving from the client device a user interaction with the widget, information is sent for display indicating the second user in a field for receiving comments by the first user.
In one embodiment, a method includes a client device receiving a selection of an emotion capture button. The emotion capture button is associated with an emotion. In response to receiving the selection of the emotion capture button, the client device captures a video clip designated with a categorization specifying the emotion associated with the selected emotion capture button.
A reactive profile picture brings a profile image to life by displaying short video segments of the target user expressing a relevant emotion in reaction to an action by a viewing user that relates to content associated with the target user in an online system such as a social media web site. The viewing user therefore experiences a real-time reaction in a manner similar to a face-to-face interaction. The reactive profile picture can be automatically generated from either a video input of the target user or from a single input image of the target user.
A client device displays a content item and a first facial expression superimposed on the content item. Concurrently with and separately from displaying the first facial expression, a range of emotion indicators is displayed, each emotion indicator of the range of emotion indicators corresponding to a respective opinion of a range of opinions. A first user input is detected at a display location corresponding to a respective emotion indicator of the range of emotion indicators. In response to detecting the first user input, the first facial expression is updated to match the respective emotion indicator.
Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc., to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or cancelling automatic effect application with a gesture. Effects may also be applied in group settings and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, its three channels may be sent to an intermediate server, which stitches them together into a single video stream; the single stream may then be sent to a broadcast server for distribution to the broadcast recipients.
Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for an emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion types. Other embodiments are described and claimed.
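The storage/API/content-selection split described above can be sketched as follows. All class and function names are illustrative assumptions, not the patent's actual implementation:

```python
class EmotionStore:
    """Storage component holding the most recently identified emotion type."""

    def __init__(self):
        self._emotion = None

    def store(self, emotion):
        self._emotion = emotion

    def get_emotion(self):
        """API component: return the identified emotion type on request."""
        return self._emotion


def select_content(store, content_items):
    """Application side: search items, keeping those tagged with the emotion."""
    emotion = store.get_emotion()
    return [item for item, tags in content_items if emotion in tags]


store = EmotionStore()
store.store("joy")
catalog = [("puppy video", {"joy", "surprise"}), ("sad song", {"sadness"})]
print(select_content(store, catalog))  # → ['puppy video']
```

Keeping the store behind a narrow `get_emotion` call mirrors the abstract's separation between the detection/storage side and the applications that query it.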
In one embodiment, a method includes a computing system receiving a request to create a live streaming channel associated with a television program. The system may determine a plurality of breaks of the television program and their respective start times. The system may identify target users of a social-media network based on their respective user profiles, social-graph data, or activity patterns on the social-media network. The system may create the live streaming channel based on the request. Upon determining that a current time is within a predetermined time window prior to a first start time of a first break of the plurality of breaks, the system may send notifications to the target users, wherein each of the notifications includes a link to the live streaming channel through which live content related to the television program may be streamed.
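The pre-break window check is the core timing logic here. A minimal sketch, with an assumed window length and function name:

```python
from datetime import datetime, timedelta


def users_to_notify(now, break_start, window, target_users):
    """Return the target users if `now` falls within `window` before the break."""
    if break_start - window <= now < break_start:
        return target_users
    return []


start = datetime(2024, 1, 1, 20, 30)   # first break's start time (illustrative)
window = timedelta(minutes=5)          # predetermined window (assumed value)

print(users_to_notify(datetime(2024, 1, 1, 20, 27), start, window, ["alice", "bob"]))
# → ['alice', 'bob']
```

In a deployed system the check would run against each of the plurality of breaks; this sketch shows the comparison for a single break only.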
The present disclosure relates to systems, methods, and devices for augmenting text messages. In particular, the message system augments text messages with emotion information of a user based on characteristics of a keyboard input from the user. For example, one or more implementations involve predicting an emotion of the user based on the characteristics of the keyboard input for a message. One or more embodiments of the message system select a formatting for the text of the message based on the predicted emotion and format the message within a messaging application in accordance with the selected formatting.
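The predict-then-format flow can be sketched with a toy heuristic. The thresholds, emotion labels, and formatting table below are assumptions for illustration, not the disclosed model:

```python
def predict_emotion(chars_per_second, backspace_ratio):
    """Toy classifier over keyboard-input characteristics (thresholds assumed)."""
    if chars_per_second > 6 and backspace_ratio < 0.05:
        return "excited"
    if backspace_ratio > 0.3:
        return "hesitant"
    return "neutral"


# Illustrative mapping from predicted emotion to text formatting.
FORMATTING = {
    "excited": {"weight": "bold", "color": "#e0245e"},
    "hesitant": {"weight": "normal", "color": "#657786"},
    "neutral": {"weight": "normal", "color": "#14171a"},
}


def format_for_input(chars_per_second, backspace_ratio):
    """Select message formatting from the predicted emotion."""
    return FORMATTING[predict_emotion(chars_per_second, backspace_ratio)]


print(format_for_input(8.0, 0.02))  # → {'weight': 'bold', 'color': '#e0245e'}
```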
Systems, methods, and non-transitory computer-readable media can determine one or more chunks for a content item to be captioned. Each chunk can include one or more terms that describe at least a portion of the subject matter captured in the content item. One or more sentiments can be determined based on the subject matter captured in the content item. One or more emotions can be determined for the content item. At least one emoted caption can be generated for the content item based at least in part on the one or more chunks, sentiments, and emotions. The emoted caption can include at least one term that conveys an emotion represented by the subject matter captured in the content item.
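Combining a descriptive chunk with an emotion-conveying term might look like the sketch below; the template and emotion vocabulary are assumptions for illustration:

```python
# Assumed mapping from a detected emotion to a term that conveys it.
EMOTION_TERMS = {"joy": "heartwarming", "awe": "breathtaking"}


def emoted_caption(chunks, emotion):
    """Join chunk terms and prepend an emotion-conveying term (toy template)."""
    subject = " ".join(chunks)
    return f"A {EMOTION_TERMS[emotion]} moment: {subject}"


print(emoted_caption(["sunset over the bay"], "awe"))
# → A breathtaking moment: sunset over the bay
```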
In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
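The emotion-to-mask lookup and mask application can be sketched as follows; the table contents and effect names are illustrative assumptions:

```python
# Lookup table mapping an identified emotion to a mask and its effects
# (contents assumed for illustration).
MASK_TABLE = {
    "happy": {"name": "sunshine", "effects": ["sparkle_eyes", "halo"]},
    "angry": {"name": "storm", "effects": ["steam_ears"]},
}


def select_mask(emotion):
    """Select a mask from the lookup table based on the identified emotion."""
    return MASK_TABLE.get(emotion)


def apply_mask(image_label, mask):
    """Stand-in for rendering: record which effects were composited."""
    return {"image": image_label, "applied": list(mask["effects"])}


mask = select_mask("happy")
print(apply_mask("frame_0", mask))
# → {'image': 'frame_0', 'applied': ['sparkle_eyes', 'halo']}
```

A real implementation would generate graphical features from detected facial features; here `apply_mask` only records the effect names to keep the sketch self-contained.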
Users of a social networking system perform actions on various objects maintained by the social networking system. Some of these actions may indicate that the user has a negative sentiment for an object. To make use of this negative sentiment when providing content to the user, the social networking system, upon determining that a user has performed such an action on an object, identifies topics associated with the object and associates the negative sentiment with one or more of the topics. This association between one or more topics and negative sentiment may be used to decrease the likelihood that the social networking system presents content associated with a topic that is associated with a negative sentiment of the user.
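A minimal sketch of the topic-level down-weighting, with assumed names and an assumed penalty value:

```python
negative_topics = {}  # topic -> count of negative-sentiment signals


def record_negative_action(object_topics):
    """Associate the negative sentiment with each of the object's topics."""
    for topic in object_topics:
        negative_topics[topic] = negative_topics.get(topic, 0) + 1


def score_content(base_score, content_topics, penalty=0.5):
    """Decrease a candidate's score for each topic with negative sentiment."""
    score = base_score
    for topic in content_topics:
        if topic in negative_topics:
            score -= penalty * negative_topics[topic]
    return score


record_negative_action({"celebrity gossip", "spoilers"})
print(score_content(3.0, {"spoilers", "movies"}))  # → 2.5
```

Lowering a ranking score, rather than filtering outright, matches the abstract's language of decreasing the likelihood of presentation.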
A social networking system infers a sentiment polarity of a user toward content of a page. The sentiment polarity of the user is inferred based on received information about an interaction between the user and the page (e.g., like, report, etc.), and may be based on analysis of a topic extracted from text on the page. The system infers a positive or negative sentiment polarity of the user toward the content of the page, and that sentiment polarity then may be associated with any second or subsequent interaction from the user related to the page content. The system may identify a set of trusted users with strong sentiment polarities toward the content of a page or topic, and may use the trusted user data as training data for a machine learning model, which can be used to more accurately infer sentiment polarity of users as new data is received.
A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system measures the impact of the communications on the behavior and/or sentiment of the users towards the brand owner. For example, the social networking system presents users with surveys after presentation of a communication about an object associated with a brand owner and determines the impact of the communication from the responses to the survey. The impact of the communications may then be reported to the brand owner.
Systems, methods, and non-transitory computer readable media can obtain a conversation of a user in a chat application associated with a system, where the conversation includes one or more utterances by the user. An analysis of the one or more utterances by the user can be performed. A sentiment associated with the conversation can be determined based on a machine learning model, wherein the machine learning model is trained based on a plurality of features including demographic information associated with users.
In one embodiment, a method includes accessing a plurality of communications, each communication being associated with a particular content item and including a text of the communication; calculating, for each of the communications, sentiment-scores corresponding to sentiments, wherein each sentiment-score is based on a degree to which n-grams of the text of the communication match sentiment-words associated with the sentiments; determining, for each of the communications, an overall sentiment for the communication based on the calculated sentiment-scores for the communication; calculating sentiment levels for the particular content item corresponding to the sentiments, each sentiment level being based on a total number of communications determined to have the overall sentiment of the sentiment level; and generating a sentiments-module including sentiment-representations corresponding to overall sentiments having sentiment levels greater than a threshold sentiment level.
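The claim's pipeline (per-communication scores, overall sentiment, per-item levels, threshold) can be sketched as follows. The word lists are assumptions, and matching is limited to unigrams for brevity:

```python
# Assumed sentiment-word lists (the real system would use larger lexicons
# and n-grams beyond unigrams).
SENTIMENT_WORDS = {
    "positive": {"great", "love", "amazing"},
    "negative": {"awful", "hate", "boring"},
}


def sentiment_scores(text):
    """Score each sentiment by how many tokens match its sentiment-words."""
    tokens = text.lower().split()
    return {s: sum(t in words for t in tokens)
            for s, words in SENTIMENT_WORDS.items()}


def overall_sentiment(text):
    """Pick the top-scoring sentiment, or None if nothing matched."""
    scores = sentiment_scores(text)
    return max(scores, key=scores.get) if any(scores.values()) else None


def sentiments_module(communications, threshold):
    """Count communications per overall sentiment; keep levels above threshold."""
    levels = {}
    for text in communications:
        s = overall_sentiment(text)
        if s:
            levels[s] = levels.get(s, 0) + 1
    return {s: n for s, n in levels.items() if n > threshold}


comms = ["love this, amazing", "so boring", "great stuff", "I love it"]
print(sentiments_module(comms, threshold=1))  # → {'positive': 3}
```

With a threshold of 1, the single negative communication falls below the level cutoff, so only the positive sentiment-representation would appear in the generated module.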
Systems, methods, and non-transitory computer readable media are configured to determine a likelihood of a rejection of a notification proposed for delivery to a recipient. A delivery determination for the notification can be performed. Subsequently, the notification can be delivered to the recipient based on the delivery determination.
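A minimal sketch of the likelihood-then-delivery determination; the feature names, weights, and threshold below are assumptions, not the disclosed model:

```python
import math


def rejection_likelihood(features):
    """Toy model: a weighted sum squashed into [0, 1] (weights assumed)."""
    z = 1.5 * features["recent_dismissals"] - 0.8 * features["topic_affinity"]
    return 1 / (1 + math.exp(-z))


def should_deliver(features, max_rejection=0.5):
    """Delivery determination: deliver only if rejection is unlikely enough."""
    return rejection_likelihood(features) < max_rejection


print(should_deliver({"recent_dismissals": 0, "topic_affinity": 1.0}))  # → True
```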
In one embodiment, a method includes accessing a number of content objects associated with a user; and analyzing text, audio, or visual content of each of the content objects as well as any interactions by the user with each of the content objects. The analyzing includes identifying subject matter and user sentiment related to the respective content object. The method also includes inferring, based on the identified subject matter or user sentiment, one or more interests of the user; and modifying, for display on a client device, an online page of the user to incorporate content related to one or more of the inferred interests of the user.
A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system determines a sentiment associated with a communication and may send a report based on the sentiment of the communications towards the brand owner. The social networking system may receive a request from the brand owner to present one or more response communications to users, based on the users' relationship to a communication about the object and the sentiment determined from that communication. Based on the request, the social networking system presents a response communication to one or more users.