A sender device may receive input data including at least one of text or speech input data during a given period of time. In response, the sender device may use one or more emotion detection modules to analyze the input data received during that period and detect emotional information corresponding to the text or speech input received during the given period of time. The sender device may generate a message data stream that includes both text generated from the text or speech input during the given period of time and emotion data providing emotional information for the same period of time. A recipient device may then use one or more emotion augmentation modules to process such a message data stream and output an emotionally augmented communication.
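The pairing of text with emotion data for the same period could be sketched as follows. This is a minimal illustration, not the described device's implementation: the keyword-based detector, the class names, and the stream layout are all assumptions standing in for the emotion detection modules.

```python
from dataclasses import dataclass, field

@dataclass
class MessageDataStream:
    """Hypothetical stream pairing text with emotion data for one period."""
    text: str                                      # text generated from the input
    emotions: dict = field(default_factory=dict)   # emotion data for the same period

def detect_emotion(text: str) -> dict:
    # Toy keyword lookup standing in for a real emotion detection module.
    keywords = {"great": "joy", "sorry": "sadness", "angry": "anger"}
    found = {label: True for word, label in keywords.items() if word in text.lower()}
    return found or {"neutral": True}

def build_stream(text_input: str) -> MessageDataStream:
    # Pair the text with emotion data detected over the same period of time.
    return MessageDataStream(text=text_input, emotions=detect_emotion(text_input))
```

A recipient's emotion augmentation module would then read both fields of the stream to render the emotionally augmented communication.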
An example system and method elicits reviews and opinions from users via an online system or a web crawl. Opinions on topics are processed in real time to determine orientation. Each topic is analyzed sentence by sentence to find a central tendency of user orientation toward a given topic. Automatic topic orientation is used to provide a common comparable rating value between reviewers and potentially other systems on similar topics. Facets of the topics are extracted via a submission/acquisition process to determine the key variables of interest for users.
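The sentence-by-sentence orientation analysis could look something like the sketch below. The lexicon, the clamped per-sentence score, and the use of the mean as the central tendency are illustrative assumptions, not the system's actual orientation model.

```python
from statistics import mean

# Toy orientation lexicon; a real system would use a trained model.
POS = {"good", "great", "love", "excellent"}
NEG = {"bad", "poor", "hate", "terrible"}

def sentence_orientation(sentence: str) -> float:
    # Score one sentence's orientation, clamped to [-1, 1].
    words = sentence.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, float(score)))

def topic_orientation(review: str) -> float:
    """Analyze a review sentence by sentence and return the central
    tendency (here, the mean) of orientation toward the topic."""
    sentences = [s for s in review.split(".") if s.strip()]
    return mean(sentence_orientation(s) for s in sentences)
```

The resulting number gives the common comparable rating value that can be set against other reviewers or systems on similar topics.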
A social messaging platform includes a labeling module to label a base message according to an aggregate message response parameter, which represents the sentiment of users towards the content of the base message. The labels provide information that can be used to distinguish more nuanced sentiments and the degree of the sentiment users may have towards the base message. The aggregate message response parameter and corresponding labels are determined, in part, by identifying and evaluating icons (e.g., emojis, emoticons) present in one or more response messages posted in response to a base message. The labels, in turn, can be used in a variety of applications including recommending new content to users based on their mood, identifying messages potentially containing toxic content for review, or providing a way for businesses to evaluate public sentiment towards an advertisement and facilitate targeted advertisements to users.
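The icon-based computation of an aggregate message response parameter could be sketched as below. The emoji weights, thresholds, and coarse label names are assumptions for illustration; the platform's actual mapping and its more nuanced labels are not specified above.

```python
# Hypothetical icon-to-sentiment weights (single-codepoint emojis only).
ICON_WEIGHTS = {"😀": 1.0, "👍": 1.0, "😢": -0.5, "😡": -1.0}

def aggregate_response_parameter(responses: list) -> float:
    """Identify icons in response messages and average their weights to
    form an aggregate message response parameter for the base message."""
    icons = [ch for msg in responses for ch in msg if ch in ICON_WEIGHTS]
    if not icons:
        return 0.0
    return sum(ICON_WEIGHTS[i] for i in icons) / len(icons)

def label(parameter: float) -> str:
    # Coarse labels standing in for the labeling module's output.
    if parameter > 0.25:
        return "positive"
    if parameter < -0.25:
        return "negative"
    return "mixed"
```

The label for a base message can then feed the downstream applications mentioned above, such as mood-based recommendation or flagging messages for review.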
Example systems and methods are described for implementing a swipe-to-like feature. In an example implementation, a list of content items is displayed on a touchscreen display and, on detecting input of a first gesture (for example, a swipe gesture) for a first one of the content items in the list, a predetermined first sentiment is associated with the first content item.
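The gesture-to-sentiment association might be structured as in this minimal sketch; the gesture name, the sentiment value, and the in-memory store are illustrative assumptions rather than any platform's actual event API.

```python
SWIPE_RIGHT = "swipe_right"   # assumed name for the first gesture

class ContentList:
    """Hypothetical displayed list that maps items to sentiments."""
    def __init__(self, items):
        self.items = list(items)
        self.sentiments = {}   # content item -> associated sentiment

    def on_gesture(self, index: int, gesture: str):
        # On detecting the first gesture for an item, associate the
        # predetermined first sentiment ("like") with that item.
        if gesture == SWIPE_RIGHT:
            self.sentiments[self.items[index]] = "like"
```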
Systems and methods for emoji prediction and visual sentiment analysis are provided. An example computer-implemented method may be used to predict an emoji or analyze sentiment for an input image. The example method includes the steps of receiving an image, generating an emoji embedding for the image, and generating a sentiment label for the image using the emoji embedding. The emoji embedding may be generated using a machine learning model.
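The receive-embed-label pipeline could be sketched as below. The brightness-based "embedding" is a toy stand-in for the machine learning model, and the two emoji dimensions and label rule are assumptions for illustration only.

```python
def emoji_embedding(image_pixels: list) -> list:
    """Map an image (here, a flat list of pixel intensities in [0, 1])
    to a tiny 'emoji embedding' that a trained model would produce."""
    brightness = sum(image_pixels) / len(image_pixels)
    return [brightness, 1.0 - brightness]   # e.g., [😀-ness, 😢-ness]

def sentiment_label(embedding: list) -> str:
    # Derive the sentiment label from the dominant emoji dimension.
    return "positive" if embedding[0] >= embedding[1] else "negative"

def analyze(image_pixels: list) -> str:
    # Receive an image, generate its emoji embedding, then its label.
    return sentiment_label(emoji_embedding(image_pixels))
```

Using the emoji embedding as the intermediate representation is the key step: the same embedding can serve both emoji prediction and sentiment labeling.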