Patents
Meta
Chaining Connection Requests
- Applicant
- Authors
- N/A
- Title
- Chaining Connection Requests
- Patent Number
- US2013179802
- Publication Date
- 2013
- uri
- https://patents.google.com/patent/US20130179802
- Description
- In one embodiment, a social networking system, in response to receiving an action request from a user, expands the portion of a social networking web site with which the user interacted to initiate the action request, and populates the expanded portion with object suggestions of the same type as the target object of the action request. In particular embodiments, the object suggestions are based at least in part on the characteristics of the target object of the action request. Such embodiments capitalize on the transitory mood of the user and facilitate and promote the chaining of subsequent action requests.
- keywords
- Mood
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Media Content
- Key concepts
- Mood
- Method
- Infer the type of “mood” the user is in based on his or her actions
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
Negative signals for advertisement targeting
- Applicant
- Authors
- N/A
- Title
- Negative signals for advertisement targeting
- Patent Number
- CA2879830
- Publication Date
- 2014
- uri
- https://patents.google.com/patent/CA2879830
- Description
- Users of a social networking system perform actions on various objects maintained by the social networking system. Some of these actions may indicate that the user has a negative sentiment for an object. To make use of this negative sentiment when providing content to the user, when the social networking system determines a user performs an action on an object, the social networking system identifies topics associated with the object and associates the negative sentiment with one or more of the topics. This association between one or more topics and negative sentiment may be used to decrease the likelihood that the social networking system presents content associated with a topic that is associated with a negative sentiment of the user.
- keywords
- Sentiment Analysis
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Key concepts
- Negative Sentiment
- Method
- Method based on users' actions on various objects in the social networking system (see the sketch after this entry)
- Device
- Computer
- Patent objectives
- Determine/Identify User Emotion
- Collections
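The mechanism described above lends itself to a compact illustration. Below is a minimal sketch, assuming invented action names, topic labels, and a penalty weight (the abstract specifies none of these); it only shows the flow the abstract describes: a negative action on an object propagates to the object's topics, which then downrank later content.

```python
# Minimal sketch of negative-signal targeting; action names, topics, and
# the penalty factor are assumptions, not the patent's implementation.
from collections import defaultdict

# Actions assumed to carry negative sentiment (e.g., hiding or reporting).
NEGATIVE_ACTIONS = {"hide", "report", "unfollow"}

class NegativeSignalStore:
    """Associates negative sentiment with topics of objects a user acted on."""

    def __init__(self):
        # user_id -> topic -> count of negative actions
        self.negative_topics = defaultdict(lambda: defaultdict(int))

    def record_action(self, user_id, action, object_topics):
        """If the action signals negative sentiment, tag its object's topics."""
        if action in NEGATIVE_ACTIONS:
            for topic in object_topics:
                self.negative_topics[user_id][topic] += 1

    def score(self, user_id, content_topics, base_score=1.0, penalty=0.5):
        """Decrease the likelihood of presenting content on disliked topics."""
        hits = sum(self.negative_topics[user_id][t] for t in content_topics)
        return base_score * (penalty ** hits)

store = NegativeSignalStore()
store.record_action("u1", "hide", ["golf", "sports"])
print(store.score("u1", ["golf"]))     # 0.5 -> downranked
print(store.score("u1", ["cooking"]))  # 1.0 -> unaffected
```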
Techniques for emotion detection and content delivery
- Applicant
- Authors
- N/A
- Title
- Techniques for emotion detection and content delivery
- Patent Number
- US2015242679
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US20150242679
- Description
- Techniques for emotion detection and content delivery are described. In one embodiment, for example, an emotion detection component may identify at least one type of emotion associated with at least one detected emotion characteristic. A storage component may store the identified emotion type. An application programming interface (API) component may receive a request from one or more applications for emotion type and, in response to the request, return the identified emotion type. The one or more applications may identify content for display based upon the identified emotion type. The identification of content for display by the one or more applications based upon the identified emotion type may include searching among a plurality of content items, each content item being associated with one or more emotion types. Other embodiments are described and claimed.
- keywords
- Emotion
- Research area
- Computer-Mediated Communication
- Computer Vision
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Data collected (source type)
- Image
- Key concepts
- Emotion
- Method
- Based on an image of the user's face, the computing device may analyze facial features and other characteristics to determine one or more emotion characteristics (a minimal API sketch follows this entry)
- Device
- Computer
- Patent objectives
- Determine/Identify User Emotion
- Collections
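Since the abstract centers on an API component that stores an identified emotion type and serves it to applications, a small sketch may help. Everything concrete here is an assumption (the stub detector result, the content items, their emotion tags); only the store/request/search flow follows the abstract.

```python
# Minimal sketch of the emotion-type API component from the abstract.
# The stored emotion is a stand-in; the patent leaves the detection
# mechanism (e.g., facial analysis) unspecified at this level.

CONTENT_ITEMS = [
    {"id": 1, "title": "Upbeat playlist", "emotion_types": {"happy"}},
    {"id": 2, "title": "Comfort recipes", "emotion_types": {"sad", "calm"}},
]

class EmotionAPI:
    """Stores the identified emotion type and serves it to applications."""

    def __init__(self):
        self._emotion_type = None  # storage component

    def store_emotion(self, emotion_type):
        self._emotion_type = emotion_type

    def get_emotion(self):
        """API component: return the identified emotion type on request."""
        return self._emotion_type

def select_content(api):
    """Application side: search content items tagged with the emotion type."""
    emotion = api.get_emotion()
    return [c for c in CONTENT_ITEMS if emotion in c["emotion_types"]]

api = EmotionAPI()
api.store_emotion("sad")  # e.g., inferred upstream from facial analysis
print(select_content(api))  # -> [{'id': 2, ...}]
```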
Pre-implant detection
- Applicant
- Authors
- N/A
- Title
- Pre-implant detection
- Patent Number
- US2015012336
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US20150012336
- Description
- A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system measures the impact of the communications on the behavior and/or sentiment of the users towards the brand owner. For example, the social networking system presents users with surveys after presentation of a communication about an object associated with a brand owner and determines the impact of the communication from the responses to the survey. The impact of the communications may then be reported to the brand owner.
- keywords
- Sentiment Analysis
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Key concepts
- Sentiment
- Method
- Survey
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
Detecting And Responding To Sentiment-Based Communications About A Business On A Social Networking System
- Applicant
- Authors
- N/A
- Title
- Detecting And Responding To Sentiment-Based Communications About A Business On A Social Networking System
- Patent Number
- US2015039524
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US20150039524
- Description
- A social networking system identifies communications about an object associated with a brand owner. For each communication, the social networking system identifies users who generated the communication, users who were exposed to the communication, and users who were not exposed to the communication. The social networking system determines a sentiment associated with a communication and may send a report based on the sentiment of the communications towards the brand owner. A request from a brand owner to present one or more response communications to users based on the users' relationship to a communication from a user about the object and the sentiment determined from the communication may be received by the social networking system. Based on the request, the social networking system presents a response communication to one or more users.
- keywords
- Sentiment Analysis
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Key concepts
- Sentiment
- Method
- The social networking system determines a sentiment associated with each communication about the object and presents response communications based on that sentiment
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
Methods and Systems for Providing User Feedback Using an Emotion Scale
- Applicant
- Authors
- N/A
- Title
- Methods and Systems for Providing User Feedback Using an Emotion Scale
- Patent Number
- US2016357402
- Publication Date
- 2016
- uri
- https://patents.google.com/patent/US20160357402
- Description
- A client device displays a content item and a first facial expression superimposed on the content item. Concurrently with and separately from displaying the first facial expression, a range of emotion indicators is displayed, each emotion indicator of the range of emotion indicators corresponding to a respective opinion of a range of opinions. A first user input is detected at a display location corresponding to a respective emotion indicator of the range of emotion indicators. In response to detecting the first user input, the first facial expression is updated to match the respective emotion indicator.
- keywords
- Emotion
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Data collected (source type)
- Media Content
- Key concepts
- Emotion
- Method
- Display a content item and an emotion scale through a range of facial expressions
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
Systems and methods for generating videos based on selecting media content items and moods
- Applicant
- Authors
- N/A
- Title
- Systems and methods for generating videos based on selecting media content items and moods
- Patent Number
- US2017125059
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US20170125059
- Description
- Systems, methods, and non-transitory computer-readable media can acquire a set of media content items. A mood indication can be acquired. A soundtrack can be identified based on the mood indication. A video content item can be dynamically generated in real-time based on the set of media content items and the mood indication. The video content item can include the soundtrack.
- keywords
- Mood
- Research area
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Human-Computer Interaction & User Experience
- Data collected (source type)
- Image
- Video
- Key concepts
- Mood
- Method
- Selected by the user (a plurality of selectable mood indications are provided) and acquired by the mood processing module (i.e., receive, recognize, identify, fetch, etc.). Then, mood indications can set or indicate the tone or feeling for a video content item to be dynamically generated.
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
Augmenting text messages with emotion information
- Applicant
- Authors
- N/A
- Title
- Augmenting text messages with emotion information
- Patent Number
- US2017147202
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US20170147202
- Description
- The present disclosure relates to systems, methods, and devices for augmenting text messages. In particular, the message system augments text messages with emotion information of a user based on characteristics of a keyboard input from the user. For example, one or more implementations involve predicting an emotion of the user based on the characteristics of the keyboard input for a message. One or more embodiments of the message system select a formatting for the text of the message based on the predicted emotion and format the message within a messaging application in accordance with the selected formatting.
- keywords
- Emotion
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Key concepts
- Emotion
- Method
- Based on characteristics of a keyboard input from the user (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Promote the expression of user's emotion
- Collections
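A toy illustration of the flow: the typing features, thresholds, and formatting table below are invented (the patent does not disclose its predictor), but the pipeline — keyboard characteristics, then predicted emotion, then selected formatting — mirrors the description.

```python
# Minimal sketch, assuming invented keyboard features and thresholds; the
# patent only states that emotion is predicted from keyboard-input
# characteristics and used to pick a text formatting.

def predict_emotion(chars_per_second, backspace_ratio, exclamations):
    """Toy heuristic standing in for the patent's (unspecified) predictor."""
    if chars_per_second > 8 and exclamations > 0:
        return "excited"
    if backspace_ratio > 0.3:
        return "hesitant"
    return "neutral"

# Formatting selected per predicted emotion, as the abstract describes.
FORMATTING = {
    "excited": {"bold": True, "color": "red"},
    "hesitant": {"bold": False, "color": "gray"},
    "neutral": {"bold": False, "color": "black"},
}

def format_message(text, chars_per_second, backspace_ratio):
    emotion = predict_emotion(chars_per_second, backspace_ratio,
                              text.count("!"))
    return {"text": text, "emotion": emotion, **FORMATTING[emotion]}

print(format_message("see you there!!", 10.2, 0.05))
```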
Sentiment-Modules on Online Social Networks
- Applicant
- Authors
- N/A
- Title
- Sentiment-Modules on Online Social Networks
- Patent Number
- US2017220578
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US20170220578
- Description
- In one embodiment, a method includes accessing a plurality of communications, each communication being associated with a particular content item and including a text of the communication; calculating, for each of the communications, sentiment-scores corresponding to sentiments, wherein each sentiment-score is based on a degree to which n-grams of the text of the communication match sentiment-words associated with the sentiments; determining, for each of the communications, an overall sentiment for the communication based on the calculated sentiment-scores for the communication; calculating sentiment levels for the particular content item corresponding to sentiments, each sentiment level being based on a total number of communications determined to have the overall sentiment of the sentiment level; and generating a sentiments-module including sentiment-representations corresponding to overall sentiments having sentiment levels greater than a threshold sentiment level.
- keywords
- Sentiment Analysis
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Key concepts
- Sentiment
- Method
- Calculate sentiment-scores based on the degree to which n-grams of the communication text match sentiment-words associated with the sentiments (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
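The n-gram matching step is concrete enough to sketch. The sentiment lexicon and threshold below are placeholders; the scoring, overall-sentiment, and sentiment-level logic follow the abstract.

```python
# Minimal sketch of the n-gram sentiment scoring in the abstract, with a toy
# sentiment lexicon (assumed); threshold and lexicon are placeholders.

SENTIMENT_WORDS = {
    "joy": {"love", "great", "so happy"},
    "anger": {"hate", "awful", "so angry"},
}

def ngrams(text, n_max=2):
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)}

def sentiment_scores(comment_text):
    """Score each sentiment by how many of its words match the text's n-grams."""
    grams = ngrams(comment_text)
    return {s: len(grams & words) for s, words in SENTIMENT_WORDS.items()}

def overall_sentiment(comment_text):
    scores = sentiment_scores(comment_text)
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def sentiments_module(comments, threshold=2):
    """Keep sentiment-representations whose level exceeds the threshold."""
    levels = {}
    for c in comments:
        s = overall_sentiment(c)
        if s:
            levels[s] = levels.get(s, 0) + 1
    return [s for s, level in levels.items() if level > threshold]

comments = ["I love this", "so happy right now", "great scene", "awful ending"]
print(sentiments_module(comments, threshold=2))  # ['joy']
```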
Profile Suggestions
- Applicant
- Authors
- N/A
- Title
- Profile Suggestions
- Patent Number
- US2017351678
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US20170351678
- Description
- In one embodiment, a method includes accessing a number of content objects associated with a user; and analyzing text, audio, or visual content of each of the content objects as well as any interactions by the user with each of the content objects. The analyzing includes identifying subject matter and user sentiment related to the respective content object. The method also includes inferring, based on the identified subject matter or user sentiment, one or more interests of the user; and modifying, for display on a client device, an online page of the user to incorporate content related to one or more of the inferred interests of the user.
- keywords
- Sentiment Analysis
- Research area
- Sentiment Analysis
- Social Media and User Engagement
- Speech Processing
- Data collected (source type)
- Text
- Audio
- Image
- Video
- Media Content
- Key concepts
- Sentiment
- Method
- Sentiment analysis of a user may be performed by classifying the “polarity” of a given text and/or by analysis of audio including a voice (...), analysis of video to perform facial/gesture recognition and emotion detection, and analysis of biometric sensor data (...)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
Real-time delivery of interactions in online social networking system
- Applicant
- Authors
- N/A
- Title
- Real-time delivery of interactions in online social networking system
- Patent Number
- US2018300042
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180300042
- Description
- A content item is sent for display on client devices of users of an online system. Information indicating that a first user is currently viewing the content item is received from a client device. A second user connected to the first user is identified. The second user is performing a user interaction with the content item while the first user is currently viewing the content item. An emotion associated with the user interaction is determined. A widget identifying the second user and the emotion is sent for display to the client device. The widget is configured to move across the content item displayed on the client device while the first user is currently viewing the content item. Responsive to receiving from the client device a user interaction with the widget, information is sent for display indicating the second user in a field for receiving comments by the first user.
- keywords
- Emotion
- Research area
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Media Content
- Key concepts
- Emotion
- Method
- Determine a type of emotion using emoticons displayed within or adjacent to the content item.
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
Camera with reaction integration
- Applicant
- Authors
- N/A
- Title
- Camera with reaction integration
- Patent Number
- WO2018156212
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/WO2018156212
- Description
- In one embodiment, a method includes a client device receiving a selection of an emotion capture button. The emotion capture button is associated with an emotion. In response to the receiving the selection of the emotion capture button, the client device captures a video clip designated with a categorization specifying the emotion associated with the selected emotion capture button.
- keywords
- Emotion
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Video
- Key concepts
- Emotion
- Method
- Indicate emotion by using an emotion capture button
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
Reactive profile portraits
- Applicant
- Authors
- N/A
- Title
- Reactive profile portraits
- Patent Number
- WO2018191691
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/WO2018191691
- Description
- A reactive profile picture brings a profile image to life by displaying short video segments of the target user expressing a relevant emotion in reaction to an action by a viewing user that relates to content associated with the target user in an online system such as a social media web site. The viewing user therefore experiences a real-time reaction in a manner similar to a face-to-face interaction. The reactive profile picture can be automatically generated from either a video input of the target user or from a single input image of the target user.
- keywords
- Emotion
- Research area
- Computer Vision
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Image
- Video
- Key concepts
- Emotion
- Method
- Based on a video input of the target user or a single input image of the target user
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Promote the expression of user's emotion
- Collections
Media effect application
- Applicant
- Authors
- N/A
- Title
- Media effect application
- Patent Number
- US2018160055
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180160055
- Description
- Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc. to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or cancelling automatic effect application using a gesture. Effects may also be applied in group settings, and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, the three channels may be sent to an intermediate server, which stitches the three channels together into a single video stream; the single video stream may then be sent to a broadcast server for distribution to the broadcast recipients.
- keywords
- Emotion
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Audio
- Image
- Video
- Key concepts
- Emotion
- Method
- Detect an emotional reaction through emotion analysis of information associated with the video
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
Systems and methods for captioning content
- Applicant
- Authors
- N/A
- Title
- Systems and methods for captioning content
- Patent Number
- US2018197098
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180197098
- Description
- Systems, methods, and non-transitory computer-readable media can determine one or more chunks for a content item to be captioned. Each chunk can include one or more terms that describe at least a portion of the subject matter captured in the content item. One or more sentiments are determined based on the subject matter captured in the content item. One or more emotions are determined for the content item. At least one emoted caption is generated for the content item based at least in part on the one or more chunks, sentiments, and emotions. The emoted caption can include at least one term that conveys an emotion represented by the subject matter captured in the content item.
- keywords
- Emotion
- Research area
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Audio
- Image
- Video
- Key concepts
- Emotion
- Method
- Determine chunks describing the subject matter of a content item, determine its sentiments and emotions, and generate an emoted caption from them
- Device
- Computer
- Patent objectives
- Determine/Identify Content Emotion
- Collections
Dynamic mask application
- Applicant
- Authors
- N/A
- Title
- Dynamic mask application
- Patent Number
- US2018182141
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180182141
- Description
- In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
- keywords
- Emotion
- Research area
- Computer Vision
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Image
- Key concepts
- Emotion
- Method
- Method based on graphical features of the identified first object, including facial features
- Device
- Computer
- Patent objectives
- Determine/Identify User Emotion
- Collections
Systems and methods for determining sentiments in conversations in a chat application
- Applicant
- Authors
- N/A
- Title
- Systems and methods for determining sentiments in conversations in a chat application
- Patent Number
- US2018165582
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180165582
- Description
- Systems, methods, and non-transitory computer readable media can obtain a conversation of a user in a chat application associated with a system, where the conversation includes one or more utterances by the user. An analysis of the one or more utterances by the user can be performed. A sentiment associated with the conversation can be determined based on a machine learning model, wherein the machine learning model is trained based on a plurality of features including demographic information associated with users.
- keywords
- Sentiment Analysis
- Research area
- Computer-Mediated Communication
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Media Content
- Key concepts
- Sentiment
- Method
- Machine learning model based on a plurality of features including demographic information associated with users
- Device
- Computer
- Patent objectives
- Determine/Identify User Emotion
- Collections
Integration of Live Streaming Content with Television Programming
- Applicant
- Authors
- N/A
- Title
- Integration of Live Streaming Content with Television Programming
- Patent Number
- US2019208272
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US20190208272
- Description
- In one embodiment, a method includes a computing system receiving a request to create a live streaming channel associated with a television program. The system may determine a plurality of breaks of the television program and their respective start times. The system may identify target users of a social-media network based on their respective user profiles, social-graph data, or activity patterns on the social-media network. The system may create the live streaming channel based on the request. Upon determining that a current time is within a predetermined time window prior to a first start time of a first break of the plurality of breaks, the system may send notifications to the target users, wherein each of the notifications includes a link to the live streaming channel through which live content related to the television program may be streamed.
- keywords
- Emotion
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Media Content
- Key concepts
- Emotion
- Method
- Based on user input indicating a comment or a user emotion, including emoticons
- Device
- Device
- Patent objectives
- Promote the expression of user's emotion
- Collections
Systems and methods for notification send control using negative sentiment
- Applicant
- Authors
- N/A
- Title
- Systems and methods for notification send control using negative sentiment
- Patent Number
- US2019208025
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US20190208025
- Description
- Systems, methods, and non-transitory computer readable media are configured to determine a likelihood of a rejection of a notification proposed for delivery to a recipient. A delivery determination for the notification can be performed. Subsequently, the notification can be delivered to the recipient based on the delivery determination.
- keywords
- Sentiment Analysis
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Media Content
- Key concepts
- Negative Sentiment
- Method
- Determine the likelihood that a proposed notification would be rejected by the recipient and decide whether to deliver it accordingly
- Device
- Computer
- Patent objectives
- Determine/Identify User Emotion
- Collections
Sentiment polarity for users of a social networking system
- Applicant
- Authors
- N/A
- Title
- Sentiment polarity for users of a social networking system
- Patent Number
- US2020286000
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/US20200286000
- Description
- A social networking system infers a sentiment polarity of a user toward content of a page. The sentiment polarity of the user is inferred based on received information about an interaction between the user and the page (e.g., like, report, etc.), and may be based on analysis of a topic extracted from text on the page. The system infers a positive or negative sentiment polarity of the user toward the content of the page, and that sentiment polarity then may be associated with any second or subsequent interaction from the user related to the page content. The system may identify a set of trusted users with strong sentiment polarities toward the content of a page or topic, and may use the trusted user data as training data for a machine learning model, which can be used to more accurately infer sentiment polarity of users as new data is received.
- keywords
- Sentiment Analysis
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Media Content
- Key concepts
- Sentiment
- Method
- Machine learning model based on user data, including interactions between the user and the page (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
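A minimal sketch of the two ideas in the abstract — mapping interactions to a polarity and harvesting trusted users as labeled training data. The interaction-to-polarity weights and the trust threshold are assumptions; the patent does not disclose them.

```python
# Minimal sketch: infer sentiment polarity from page interactions and
# collect trusted-user examples as ML training data. Weights are assumed.

POLARITY = {"like": 1.0, "share": 1.0, "hide": -1.0, "report": -1.0}

def infer_polarity(interactions):
    """Average the polarity signals from a user's interactions with a page."""
    signals = [POLARITY[i] for i in interactions if i in POLARITY]
    return sum(signals) / len(signals) if signals else 0.0

def trusted_training_data(user_interactions, strength=0.8):
    """Users with strong polarities become labeled examples for an ML model."""
    data = []
    for user, interactions in user_interactions.items():
        p = infer_polarity(interactions)
        if abs(p) >= strength:  # "trusted" = strong, consistent polarity
            data.append((user, interactions, 1 if p > 0 else 0))
    return data

users = {"u1": ["like", "share"], "u2": ["hide", "report"], "u3": ["like", "hide"]}
print(trusted_training_data(users))  # u1 labeled positive, u2 negative
```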
Mood-based messaging
- Applicant
- Authors
- N/A
- Title
- Mood-based messaging
- Patent Number
- US2012280951
- Publication Date
- 2012
- uri
- https://patents.google.com/patent/US20120280951
- Description
- A method for social interaction, including using a portable messaging device for designating, from time to time, a plurality of friends, selecting a mood, sending one or more representations of the selected mood to each of the plurality of designated friends, further selecting an updated mood, and further sending one or more representations of the updated mood to each of the plurality of designated friends, to supersede the previously sent one or more representations of the mood. A user interface is also described and claimed.
- keywords
- Mood
- Research area
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Text
- Media Content
- Key concepts
- Mood
- Method
- The portable messaging device enables a user to select a mood descriptor such as “happy”, “sad”, “tired” and “surprised”, and a strength associated therewith, indicating how happy, how sad, how tired, or how surprised the user is
- Device
- Device
- Patent objectives
- Promote the expression of user's emotion
- Collections
Providing help information based on emotion detection
- Applicant
- Authors
- N/A
- Title
- Providing help information based on emotion detection
- Patent Number
- WO2014159612
- Publication Date
- 2014
- uri
- https://patents.google.com/patent/WO2014159612
- Description
- A device may detect a negative emotion of a user and identify, based on detecting the negative emotion of the user, a task being performed by the user in relation to an item. The device may obtain, based on identifying the task, information to aid the user in performing the identified task in relation to the item. The information may include at least one of information, obtained from a memory associated with the device, in a help document, a user manual, or an instruction manual relating to performing the task in relation to the item; information, obtained from a network, identifying a document relating to performing the task in relation to the item; or information identifying a video relating to performing the task in relation to the item. The device may provide the obtained information to the user.
- keywords
- Emotion
- Research area
- Computer Vision
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Speech Processing
- Data collected (source type)
- Audio
- Image
- Video
- Key concepts
- Negative emotion
- Method
- User device may monitor the user visually and/or audibly and may detect a negative emotion of the user based on monitoring the user's facial expression, the user's body language, and audible signals from the user
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
Automatic sequencing of video playlists based on mood classification of each video and video cluster transitions
- Applicant
- Authors
- N/A
- Title
- Automatic sequencing of video playlists based on mood classification of each video and video cluster transitions
- Patent Number
- US9165255
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US9165255
- Description
- A given set of videos are sequenced in an aesthetically pleasing manner using models learned from human curated playlists. Semantic features associated with each video in the curated playlists are identified and a first order Markov chain model is learned from curated playlists. In one method, a directed graph using the Markov model is induced, wherein sequencing is obtained by finding the shortest path through the directed graph. In another method a sampling based approach is implemented to produce paths on the digraph. Multiple samples are generated and the best scoring sample is returned as the output. In a third method, a relevance based random walk sampling algorithm is modified to produce a reordering of the playlist.
- keywords
- Mood
- Research area
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Data collected (source type)
- Audio
- Video
- Key concepts
- Musical Mood
- Method
- Mood descriptors are extracted from adjectives associated with curated and uncurated playlists in a video repository. A classifier is trained from these mood descriptors to generate dimensional mood features for each video in the curated playlists (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify Content Emotion
- Collections
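The Markov-chain learning step can be sketched compactly. Two simplifications to note: mood labels per video are taken as given (the patent derives them from a classifier), and a greedy ordering stands in for the patent's shortest-path and sampling searches over the induced digraph.

```python
# Minimal sketch of learning a first-order Markov chain from curated
# playlists, then sequencing a new set greedily by transition probability.
from collections import Counter, defaultdict

def learn_transitions(curated_playlists):
    """Count mood-to-mood transitions observed in human-curated playlists."""
    counts = defaultdict(Counter)
    for playlist in curated_playlists:
        for a, b in zip(playlist, playlist[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def sequence(videos, moods, transitions):
    """Greedily chain videos so consecutive moods have high learned probability."""
    remaining = list(videos)
    order = [remaining.pop(0)]
    while remaining:
        cur = moods[order[-1]]
        nxt = max(remaining,
                  key=lambda v: transitions.get(cur, {}).get(moods[v], 0.0))
        remaining.remove(nxt)
        order.append(nxt)
    return order

curated = [["calm", "calm", "upbeat", "energetic"],
           ["calm", "upbeat", "energetic", "energetic"]]
trans = learn_transitions(curated)
moods = {"v1": "energetic", "v2": "calm", "v3": "upbeat"}
print(sequence(["v2", "v1", "v3"], moods, trans))  # ['v2', 'v3', 'v1']
```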
Geometric and acoustic joint learning
- Applicant
- Authors
- N/A
- Title
- Geometric and acoustic joint learning
- Patent Number
- US8977374
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US8977374
- Description
- Described herein are methods and system for analyzing music audio. An example method includes obtaining a music audio track, calculating acoustic features of the music audio track, calculating geometric features of the music audio track in view of the acoustic features, and determining a mood of the music audio track in view of the geometric features.
- keywords
- Mood
- Research area
- Sentiment Analysis
- Software Development
- Data collected (source type)
- Audio
- Key concepts
- Musical mood
- Method
- Using geometric and acoustic joint learning to calculate acoustic features of the music audio track. Using machine learning techniques along with the joint acoustic-geometric feature vector to classify a mood
- Device
- Device
- Patent objectives
- Determine/Identify Content Emotion
- Collections
Capturing media content in accordance with a viewer expression
- Applicant
- Authors
- N/A
- Title
- Capturing media content in accordance with a viewer expression
- Patent Number
- WO2015061476
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/WO2015061476
- Description
- Systems and methods for capturing the emotion of a user when viewing particular media content. The method, implemented on a computer system having one or more processors and memory, includes detecting display of a media content item, e.g. a video clip, an audio clip, a photo or text message. While the media content item is being displayed, the viewer expression (e.g. emotion) is detected as corresponding to a predefined viewer expression, i.e. by using a database to compare the expressions with each other, and a portion of the media content item (e.g. the scene of the video clip) that corresponds with the viewer's expression is identified. The viewer expression or emotion is based on one of: a facial expression, a body movement, a voice, or an arm, leg or finger gesture, and is presumed to be a viewer reaction to that portion of the media content item.
- keywords
- Emotion
- Research area
- Computer Vision
- Sentiment Analysis
- Software Development
- Speech Processing
- Data collected (source type)
- Audio
- Video
- Key concepts
- Viewer expressions
- Method
- Expression is based on one of: facial expressions, body movements, a voice, or arm, leg, or finger gestures, and is compared with predefined viewer expressions using a database. A portion of the media content item that corresponds to the viewer's expression is also identified
- Device
- Connected television and Google television device equipped with a camera
- Patent objectives
- Determine/Identify User Emotion
- Collections
Providing user-defined parameters to an activity assistant
- Applicant
- Authors
- N/A
- Title
- Providing user-defined parameters to an activity assistant
- Patent Number
- US9348480
- Publication Date
- 2016
- uri
- https://patents.google.com/patent/US9348480
- Description
- Disclosed herein is an “activity assistant” and an “activity assistant user interface” that provides users with dynamically-selected “activities” that are intelligently tailored to the user's world. For example, a graphical UI includes selectable context elements, each of which corresponds to a user-attribute whose value provides a signal to the activity assistant. In response to selecting a parameter associated with at least one of the selectable context elements, a first signal is generated and provided to the activity assistant. In response to providing the signal, one or more activities are populated and ordered based, at least in part, on the signal, and subsequently displayed. The parameters may include a current mood of a user, a current location of the user, associations with other users, and a time during which the user desires to carry out the activity in some examples.
- keywords
- Mood
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Software Development
- Data collected (source type)
- Text
- Key concepts
- Mood
- Method
- The context panel provides an interactive mechanism for users to provide context signal data that describes a “user context”, including their mood (e.g., “up for anything”, “lazy”, “productive”, “social”, etc.)
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
Identifying and rendering content relevant to a user's current mental state and context
- Applicant
- Authors
- N/A
- Title
- Identifying and rendering content relevant to a user's current mental state and context
- Patent Number
- US9712587
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US9712587
- Description
- Systems and methods are provided for identifying and rendering content relevant to a user’s current mental state and context. In an aspect, a system includes a state component that determines a state of a user during a current session of the user with the media system based on navigation of the media system by the user during the current session, media items provided by the media system that are played for watching by the user during the current session, and a manner via which the user interacts with or reacts to the played media items. In an aspect, the state of the user includes a mood of the user. A selection component then selects a media item provided by the media provider based on the state of the user, and a rendering component effectuates rendering of the media item to the user during the current session.
- keywords
- Mood
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Data collected (source type)
- Media Content
- Key concepts
- Mood
- Method
- Infer the type of “mood” the user is in based on his or her actions (user's navigation of a media provider, content accessed and viewed, manner in which the user interacts with the content)
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
Systems and Methods for Associating Media Content with Viewer Expressions
- Applicant
- Authors
- N/A
- Title
- Systems and Methods for Associating Media Content with Viewer Expressions
- Patent Number
- US2017078743
- Publication Date
- 2017
- uri
- https://patents.google.com/patent/US20170078743
- Description
- Systems and methods for capturing media content in accordance with viewer expression are disclosed. In some implementations, a method is performed at a computer system having one or more processors and memory storing one or more programs for execution by the one or more processors. The method includes: (1) while a media content item is being presented to a user, capturing a momentary reaction of the user; (2) comparing the captured user reaction with one or more previously captured reactions of the user; (3) identifying the user reaction as one of a plurality of reaction types based on the comparison; (4) identifying the portion of the media content item corresponding to the momentary reaction; and (5) storing an association between the identified user reaction and the portion of the media content item.
- keywords
- Emotion
- Research area
- Computer Vision
- Sentiment Analysis
- Software Development
- Speech Processing
- Data collected (source type)
- Audio
- Video
- Key concepts
- Viewer expressions
- Method
- Expression is based on one of: facial expressions, body movements, a voice, or arm, leg, or finger gestures, and is compared with predefined viewer expressions using a database. A portion of the media content item that corresponds to the viewer's expression is also identified (see the sketch after this entry)
- Device
- Connected television and Google television device equipped with a camera
- Patent objectives
- Determine/Identify User Emotion
- Collections
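Steps (1)-(5) of the abstract reduce to a small routine. The feature vectors and Euclidean matching below are stand-ins for the patent's recognition pipeline, and the fixed segment length is an assumption.

```python
# Minimal sketch: classify a captured reaction by comparison with previously
# captured reactions, then associate it with the media portion playing at
# that moment. Features, distance metric, and segment length are assumed.
import math

def nearest_reaction_type(captured, previous):
    """Compare the captured reaction with stored ones; return the closest type."""
    return min(previous,
               key=lambda p: math.dist(captured, p["features"]))["type"]

def associate(captured_features, capture_time, previous, segment_len=5.0):
    reaction = nearest_reaction_type(captured_features, previous)
    # Identify the media portion corresponding to the momentary reaction.
    start = math.floor(capture_time / segment_len) * segment_len
    return {"reaction": reaction, "portion": (start, start + segment_len)}

previous = [{"type": "smile", "features": [0.9, 0.1]},
            {"type": "frown", "features": [0.1, 0.9]}]
print(associate([0.8, 0.2], capture_time=42.3, previous=previous))
# -> {'reaction': 'smile', 'portion': (40.0, 45.0)}
```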
System for and method of accessing and selecting emoticons, content, and mood messages during chat sessions
- Applicant
- Authors
- N/A
- Title
- System for and method of accessing and selecting emoticons, content, and mood messages during chat sessions
- Patent Number
- US2018059885
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180059885
- Description
- Emoticons or other images are inserted into text messages during chat sessions without leaving the chat session by entering an input sequence onto an input area of a touchscreen on an electronic device, thereby causing an emoticon library to be presented to a user. The user selects an emoticon, and the emoticon library either closes automatically or closes after the user enters a closing input sequence. The opening and closing input sequences are, for example, any combination of swipes and taps along or on the input area. Users are also able to add content to chat sessions and generate mood messages to chat sessions.
- keywords
- Mood
- Research area
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Social Media and User Engagement
- Data collected (source type)
- Text
- Media Content
- Key concepts
- Mood messages
- Method
- The user selects an emoticon from the emoticon library presented during chat sessions. Mood messages can also visually enhance text messages
- Device
- Device
- Patent objectives
- Promote the expression of user's emotion
- Collections
Emotion expression in virtual environment
- Applicant
- Authors
- N/A
- Title
- Emotion expression in virtual environment
- Patent Number
- WO2018102007
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/WO2018102007
- Description
- Meetings held in virtual environments can allow participants to conveniently express emotions to a meeting organizer and/or other participants. The avatar representing a meeting participant can be enhanced to include an expression symbol selected by that participant. The participant can choose among a set of expression symbols offered for the meeting.
- keywords
- Emotion
- Research area
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Software Development
- Data collected (source type)
- Media Content
- Key concepts
- Emotional expression
- Method
- Based on the participant's choice among a set of expression symbols
- Device
- Device
- Patent objectives
- Promote the expression of user's emotion
- Collections
Selecting soundtracks
- Applicant
- Authors
- N/A
- Title
- Selecting soundtracks
- Patent Number
- US10489450
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US10489450
- Description
- Implementations generally relate to selecting soundtracks. In some implementations, a method includes determining one or more sound mood attributes of one or more soundtracks, where the one or more sound mood attributes are based on one or more sound characteristics. The method further includes determining one or more visual mood attributes of one or more visual media items, where the one or more visual mood attributes are based on one or more visual characteristics. The method further includes selecting one or more of the soundtracks based on the one or more sound mood attributes and the one or more visual mood attributes. The method further includes generating an association among the one or more selected soundtracks and the one or more visual media items, wherein the association enables the one or more selected soundtracks to be played while the one or more visual media items are displayed.
- keywords
- Mood
- Research area
- Computer Vision
- Sentiment Analysis
- Software Development
- Speech Processing
- Data collected (source type)
- Audio
- Image
- Video
- Key concepts
- Sound and visual mood
- Method
- Using learning and recognition algorithms, sound mood attributes are based on sound characteristics (music key, tempo, volume, rhythm, lyrics), while visual mood attributes are based on visual characteristics (content aspects such as faces and facial objects, and image aspects) that are associated with particular moods or states (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify Content Emotion
- Personalize/Improve with emotion information
- Collections
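The matching step can be illustrated with mood-attribute vectors and cosine similarity. The attribute dimensions and scores below are invented; the patent derives them from the sound and visual characteristics listed above.

```python
# Minimal sketch: match soundtracks to visual media items by comparing
# mood-attribute vectors with cosine similarity. Dimensions are assumed.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

# Mood vectors over assumed dimensions (happy, calm, tense).
SOUNDTRACKS = {"sunny_pop": [0.9, 0.3, 0.0], "dark_drone": [0.0, 0.2, 0.9]}

def select_soundtrack(visual_mood):
    """Pick the soundtrack whose sound mood best matches the visual mood."""
    return max(SOUNDTRACKS, key=lambda s: cosine(SOUNDTRACKS[s], visual_mood))

beach_photos_mood = [0.8, 0.5, 0.1]  # e.g., aggregated from image analysis
print(select_soundtrack(beach_photos_mood))  # -> 'sunny_pop'
```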
Methods, systems, and media for personalizing computerized services based on mood and/or behavior information from multiple data sources
- Applicant
- Authors
- N/A
- Title
- Methods, systems, and media for personalizing computerized services based on mood and/or behavior information from multiple data sources
- Patent Number
- US2019197073
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US20190197073
- Description
- Methods, systems, and media for personalizing computerized services based on mood and/or behavior information from multiple data sources are provided. In some implementations, the method comprises: obtaining information associated with an objective of a user of a computing device from multiple data sources; determining that a portion of information from each of the data sources is relevant to the user having the objective, wherein the portion of information is indicative of a physical or emotional state of the user of the computing device; assigning the user of the computing device into a group of users based at least in part on the objective and the portion of information from each of the data sources; determining a target profile associated with the user based at least in part on the objective and the assigned group; generating a current profile for the user of the computing device based on the portion of information from each of the data sources; comparing the current profile with the target profile to determine a recommended action, wherein the recommended action is determined to have a likelihood of impacting the physical or emotional state of the user; determining one or more devices connected to the computing device, wherein each of the one or more devices has one or more device capabilities; and causing the recommended action to be executed on one or more of the computing device and the devices connected to the computing device based on the one or more device capabilities.
- keywords
- Mood
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Data collected (source type)
- Text
- Audio
- Image
- Video
- Media Content
- Key concepts
- Emotional state
- Method
- Emotional state of the user can be predicted using information from various data sources (e.g., contextual data, social data, general data, etc.) including, among other things, content and information published by the user on a social networking service, biometric and location data, mobile device data, and any other suitable data
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
Dynamic text-to-speech provision
- Applicant
- Authors
- N/A
- Title
- Dynamic text-to-speech provision
- Patent Number
- CN109891497
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/CN109891497
- Description
- A dynamic text-to-speech (TTS) process and system are described. In response to receiving a command to provide information to a user, a device retrieves information and determines user and environment attributes including: (i) a distance between the device and the user when the user uttered the query; and (ii) voice features of the user. Based on the user and environment attributes, the device determines a likely mood of the user, and a likely environment in which the user and user device are located in. An audio output template matching the likely mood and voice features of the user is selected. The audio output template is also compatible with the environment in which the user and device are located. The retrieved information is converted into an audio signal using the selected audio output template and output by the device.
- keywords
- Mood
- Research area
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Software Development
- Speech Processing
- Data collected (source type)
- Audio
- Key concepts
- Mood
- Method
- A mood classifier may predict the likely mood of the user based on the pitch, tone, amplitude, and frequency data of the audio signal (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Collections
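A sketch of the mood-classification step, assuming precomputed voice statistics and invented thresholds; the patent names pitch, tone, amplitude, and frequency as inputs but not the decision rules, so the classifier here is a toy stand-in.

```python
# Minimal sketch of a mood classifier over voice features and the template
# selection it drives. Thresholds and template values are assumptions.

def classify_mood(mean_pitch_hz, pitch_variance, mean_amplitude):
    """Toy rule-based stand-in for the patent's mood classifier."""
    if mean_amplitude > 0.7 and pitch_variance > 500:
        return "excited"
    if mean_pitch_hz < 120 and mean_amplitude < 0.3:
        return "subdued"
    return "neutral"

# Audio output templates matched to the likely mood (assumed values).
TEMPLATES = {
    "excited": {"rate": 1.1, "pitch_shift": +2, "volume": 0.9},
    "subdued": {"rate": 0.9, "pitch_shift": -1, "volume": 0.5},
    "neutral": {"rate": 1.0, "pitch_shift": 0, "volume": 0.7},
}

def select_output_template(mean_pitch_hz, pitch_variance, mean_amplitude):
    return TEMPLATES[classify_mood(mean_pitch_hz, pitch_variance, mean_amplitude)]

print(select_output_template(180.0, 650.0, 0.8))  # excited-style TTS output
```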
Automated nursing assessment
- Applicant
- Authors
- N/A
- Title
- Automated nursing assessment
- Patent Number
- US10376195
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US10376195
- Description
- This document describes automated nursing assessments. Automation of the nursing assessment involves a nursing-assessment device that makes determinations of a person’s mood, physical state, psychosocial state, and neurological state. To determine a mood and physical state of a person, video of the person is captured while the person is positioned in front of an everyday object, such as a mirror. The captured video is then processed according to human condition recognition techniques, which produces indications of the person’s mood and physical state, such as whether the person is happy, sad, healthy, sick, vital signs, and so on. In addition to mood and physical state, the person’s psychosocial and neurological state are also determined. To do so, questions are asked of the person. These questions are determined from a plurality of psychosocial and neurological state assessment questions, which include queries regarding how the person feels, what the person has been doing, and so on. The determined questions are asked through audible or visual interfaces of the nursing-assessment device. The person’s responses are then analyzed. The analysis involves processing the received answers according to psychosocial and neurological state assessment techniques to produce indications of the person’s psychosocial and neurological state.
- keywords
- Mood
- Research area
- Computer-Mediated Communication
- Computer Vision
- Sentiment Analysis
- Software Development
- Data collected (source type)
- Audio
- Image
- Video
- Key concepts
- Mood
- Method
- Mood is determined from a video of the person captured while the person is positioned in front of an everyday object, such as a mirror. The captured video is then processed according to human condition recognition techniques
- Device
- An everyday object, such as a mirror, configured as a nursing-assessment device to include a camera, speakers, microphone, and computing resources.
- Patent objectives
- Determine/Identify User Emotion
- Collections
Graphical image retrieval based on emotional state of a user of a computing device
- Applicant
- Authors
- N/A
- Title
- Graphical image retrieval based on emotional state of a user of a computing device
- Patent Number
- US2019228031
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/US20190228031
- Description
- A computing device is described that includes a camera configured to capture an image of a user of the computing device, a memory configured to store the image of the user, at least one processor, and at least one module. The at least one module is operable by the at least one processor to obtain, from the memory, an indication of the image of the user of the computing device, determine, based on the image, a first emotion classification tag, and identify, based on the first emotion classification tag, at least one graphical image from a database of pre-classified images that has an emotional classification that is associated with the first emotion classification tag. The at least one module is further operable by the at least one processor to output, for display, the at least one graphical image.
- keywords
- Emotion
- Research area
- Computer Vision
- Sentiment Analysis
- Software Development
- Data collected (source type)
- Image
- Video
- Key concepts
- Emotion
- Method
- Based on an image of the user, for example his or her face, the computing device may analyze facial features and other characteristics to determine one or more emotion classification tags (see the sketch after this entry)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Determine/Identify Content Emotion
- Personalize/Improve with emotion information
- Collections
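The retrieval step — from emotion classification tag to pre-classified image database — is easy to sketch. The tag set, the stub classifier, and the database rows are all invented for illustration.

```python
# Minimal sketch of the retrieval step: map an emotion classification tag
# (assumed already produced by facial analysis) to pre-classified images.

PRECLASSIFIED_IMAGES = [
    {"file": "sticker_grin.png", "emotion": "joy"},
    {"file": "sticker_tears.png", "emotion": "sadness"},
    {"file": "sticker_party.png", "emotion": "joy"},
]

def classify_user_image(image_bytes):
    """Stand-in for the on-device facial-feature classifier."""
    return "joy"  # assumed result of analyzing the captured image

def retrieve_images(image_bytes):
    """Return graphical images whose classification matches the user's emotion."""
    tag = classify_user_image(image_bytes)
    return [row["file"] for row in PRECLASSIFIED_IMAGES if row["emotion"] == tag]

print(retrieve_images(b"..."))  # -> ['sticker_grin.png', 'sticker_party.png']
```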
Summary generation for live summaries with user and device customization
- Applicant
- Authors
- N/A
- Title
- Summary generation for live summaries with user and device customization
- Patent Number
- WO2023220201
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/WO2023220201
- Description
- Described techniques may be utilized to receive a transcription stream including transcribed text that has been transcribed from speech, and to receive a summary request for a summary to be provided on a display of a device. Extracted text may be identified from the transcribed text and in response to the summary request. The extracted text may be processed using a summarization machine learning (ML) model to obtain a summary of the extracted text, and the summary may be displayed on the display of the device. When an image is captured, an augmented summary may be generated that includes the image together with a visual indication of one or more of an emotion, an entity, or an intent associated with the image, the summary, or the extracted text.
- keywords
- Emotion
- Research area
- Human-Computer Interaction & User Experience
- Natural Language Processing
- Sentiment Analysis
- Speech Processing
- Data collected (source type)
- Text
- Audio
- Image
- Key concepts
- Emotion
- Method
- Use audio or text from the transcription generator and associate one or more of a pre-defined set of emotions with corresponding text portions (e.g., corresponding words, phrases, sentences, or paragraphs); see the sketch after this entry
- Device
- Device
- Patent objectives
- Personalize/Improve with emotion information
- Collections
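A sketch of associating pre-defined emotions with text portions of the transcription. The keyword lexicon is an assumption; the patent would use a trained model rather than keyword matching.

```python
# Minimal sketch: attach a pre-defined emotion (if any) to each sentence of
# a transcription, then build an augmented summary. Lexicon is invented.

EMOTION_KEYWORDS = {
    "joy": {"glad", "thrilled", "great"},
    "concern": {"worried", "risk", "problem"},
}

def tag_portions(transcribed_text):
    """Attach an emotion (if any) to each sentence of the transcription."""
    tagged = []
    for sentence in transcribed_text.split("."):
        words = set(sentence.lower().split())
        emotion = next((e for e, kw in EMOTION_KEYWORDS.items() if words & kw),
                       None)
        if sentence.strip():
            tagged.append({"text": sentence.strip(), "emotion": emotion})
    return tagged

def augmented_summary(summary_text, image_ref):
    """Pair the summary with visual indications of the detected emotions."""
    portions = tag_portions(summary_text)
    emotions = {p["emotion"] for p in portions if p["emotion"]}
    return {"image": image_ref, "summary": portions, "emotions": sorted(emotions)}

print(augmented_summary("We are thrilled about the launch. There is a risk.",
                        "photo_001.jpg"))
```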
Predictive lighting from device screens based on user profiles
- Applicant
- Authors
- N/A
- Title
- Predictive lighting from device screens based on user profiles
- Patent Number
- CN113950181
- Publication Date
- 2022
- uri
- https://patents.google.com/patent/CN113950181
- Description
- The present invention relates to anticipatory lighting from device screens based on user profiles. Systems, methods, and computer readable storage mediums are provided for determining the mood of a user, deriving an appropriate lighting scheme, and then implementing the lighting scheme on all devices within a predetermined proximity to the user. Furthermore, when the user begins a task, the devices can track the user and use the lighting from the nearby screens to offer functional lighting.
- keywords
- Mood
- Research area
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Data collected (source type)
- Media Content
- Key concepts
- Emotional state
- Method
- Determination of what the user is likely to be doing next and the general mood of the user is based at least in part on information from the user's devices and/or sensors (web activity history, the user's device interactions, and environmental data such as location, time of day, weather, and movement sensor data)
- Device
- Device
- Patent objectives
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
Methods for Emotion Classification in Text
- Applicant
- Authors
- N/A
- Title
- Methods for Emotion Classification in Text
- Patent Number
- US2022292261
- Publication Date
- 2022
- uri
- https://patents.google.com/patent/US20220292261
- Description
- The technology relates to methods for detecting and classifying emotions in textual communication and using this information to suggest graphical indicia such as emoji, stickers or GIFs to a user. Two main types of models are fully supervised models and few-shot models. In addition to fully supervised and few-shot models, other types of models focusing on the back-end (server) side or client (on-device) side may also be employed. Server-side models are larger-scale models that can enable higher degrees of accuracy, such as for use cases where models can be hosted on cloud servers where computational and storage resources are relatively abundant. On-device models are smaller-scale models, which enable use on resource-constrained devices such as mobile phones, smart watches or other wearables (e.g., head mounted displays), in-home devices, embedded devices, etc.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Natural Language Processing
- Sentiment Analysis
- Social Media and User Engagement
- Données collectées (type source)
- Text
- Concepts clés
- Emotion
- Méthode
- Fully supervised and few-shot models; machine-learned models detect and classify direct versus induced emotion, notably based on text and emoticons (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Promote the expression of user's emotion
- Collections
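The patent names fully supervised and few-shot model families without disclosing their internals, so the following is only a minimal sketch of the fully supervised variant: a TF-IDF plus logistic-regression classifier (an assumed model choice) trained on invented toy examples, with a hypothetical emoji lookup standing in for the suggestion step.

```python
# Minimal sketch of a fully supervised text-emotion classifier in the spirit
# of US2022292261; toy data and model choice are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples mapping text to an emotion class.
train_texts = [
    "I can't stop smiling today!", "This is so frustrating...",
    "I miss you so much", "Best news ever, congratulations!",
]
train_labels = ["joy", "anger", "sadness", "joy"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Suggest graphical indicia (emoji) for the predicted emotion class.
EMOJI_FOR = {"joy": "😄", "anger": "😠", "sadness": "😢"}
msg = "congratulations on the great news"
emotion = model.predict([msg])[0]
print(emotion, EMOJI_FOR.get(emotion, ""))
```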
Methods, systems, and media for ambient background noise modification based on mood and/or behavior information
- Applicant
- Auteurs
- N/A
- Titre
- Methods, systems, and media for ambient background noise modification based on mood and/or behavior information
- Patent Number
- US2023092307
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/US20230092307
- Description
- Methods, systems, and media for ambient background noise modification are provided. In some implementations, the method comprises: identifying at least one noise present in an environment of a user having a user device, an activity the user is currently engaged in, and a physical or emotional state of the user; determining a target ambient noise to be produced in the environment based at least in part on the identified noise, the activity the user is currently engaged in, and the physical or emotional state of the user; identifying at least one device associated with the user device to be used to produce the target ambient noise; determining sound outputs corresponding to each of the one or more identified devices, wherein a combination of the sound outputs produces an approximation of one or more characteristics of the target ambient noise; and causing the one or more identified devices to produce the determined sound outputs.
- keywords
- Mood
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Données collectées (type source)
- Text
- Audio
- Image
- Video
- Media Content
- Concepts clés
- Emotional state
- Méthode
- The emotional state of the user can be predicted using information from various data sources (e.g., contextual data, social data, general data, etc.), including, among other things, content and information published by the user on a social networking service, biometric and location data, mobile device data, and any other suitable data
- Dispositif
- Device
- Objectifs du brevet
- Personalize/Improve with emotion information
- Collections
Summary generation for live summaries with user and device customization
- Applicant
- Auteurs
- N/A
- Titre
- Summary generation for live summaries with user and device customization
- Patent Number
- WO2023220201
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/WO2023220201
- Description
- Described techniques may be utilized to receive a transcription stream including transcribed text that has been transcribed from speech, and to receive a summary request for a summary to be provided on a display of a device. Extracted text may be identified from the transcribed text and in response to the summary request. The extracted text may be processed using a summarization machine learning (ML) model to obtain a summary of the extracted text, and the summary may be displayed on the display of the device. When an image is captured, an augmented summary may be generated that includes the image together with a visual indication of one or more of an emotion, an entity, or an intent associated with the image, the summary, or the extracted text.
- keywords
- Emotion
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Natural Language Processing
- Sentiment Analysis
- Speech Processing
- Données collectées (type source)
- Text
- Audio
- Image
- Concepts clés
- Emotion
- Méthode
- Uses audio or text from the transcription generator and associates one or more emotions from a pre-defined set with corresponding text portions (e.g., words, phrases, sentences, or paragraphs); see the sketch following this entry
- Dispositif
- Device
- Objectifs du brevet
- Personalize/Improve with emotion information
- Collections
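The patent does not specify how emotions from the pre-defined set are matched to text portions; the sketch below assumes a simple keyword lexicon as the tagging mechanism, and EMOTION_LEXICON and tag_portions are hypothetical names used only for illustration.

```python
# Illustrative sketch: tag transcript portions with emotions from a
# pre-defined set using an assumed keyword lexicon.
EMOTION_LEXICON = {
    "joy": {"great", "happy", "love"},
    "anger": {"angry", "furious", "hate"},
    "sadness": {"sad", "sorry", "miss"},
}

def tag_portions(transcribed_text: str) -> list[tuple[str, str | None]]:
    """Associate each sentence-like portion with at most one emotion."""
    portions = [p.strip() for p in transcribed_text.split(".") if p.strip()]
    tagged = []
    for portion in portions:
        words = set(portion.lower().split())
        match = next((e for e, kws in EMOTION_LEXICON.items() if words & kws), None)
        tagged.append((portion, match))
    return tagged

print(tag_portions("I love this plan. We are sorry about the delay."))
```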
User-guided adaptive playlisting using joint audio-text embeddings
- Applicant
- Auteurs
- N/A
- Titre
- User-guided adaptive playlisting using joint audio-text embeddings
- Patent Number
- WO2024043931
- Publication Date
- 2024
- uri
- https://patents.google.com/patent/WO2024043931
- Description
- A method includes providing, by an audio playback interface, an initial playlist comprising audio tracks. The method includes receiving a user preference associated with an initial audio track during a listening session, wherein the user preference is indicative of a listening mood of a user and comprises one or more of a user behavior or a natural language input. The method includes generating a representation of the user preference in a joint audio-text embedding space by applying a two-tower model comprising an audio embedding network and a text embedding network. A proximity of two embeddings is indicative of semantic similarity. The method includes training a machine learning model to generate an updated playlist responsive to the listening mood of the user during the listening session. The method includes applying the machine learning model to generate the updated playlist. The method includes substituting the initial playlist with the updated playlist.
- keywords
- Mood
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Natural Language Processing
- Sentiment Analysis
- Software Development
- Données collectées (type source)
- Text
- Audio
- Media Content
- Concepts clés
- Listening mood
- Méthode
- The mood of the user may be inferred by analyzing listen/skip behavior and/or from user-provided natural-language inputs describing their current interests (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Personalize/Improve with emotion information
- Collections
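As a rough illustration of the claimed ranking logic, the sketch below stubs both towers of the two-tower model with fixed vectors; the track names, embeddings, and the embed_text stub are all invented, and only the cosine-proximity re-ranking reflects the described joint audio-text embedding space.

```python
# Sketch of playlist re-ranking in a joint audio-text embedding space.
# The real towers are learned networks; here they are stubbed with fixed
# vectors so the ranking logic is runnable.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical precomputed audio-tower embeddings for candidate tracks.
track_embeddings = {
    "upbeat_pop": np.array([0.9, 0.1, 0.0]),
    "slow_piano": np.array([0.1, 0.9, 0.2]),
    "dark_ambient": np.array([0.0, 0.2, 0.9]),
}

# Stub for the text tower: maps a natural-language mood to the same space.
def embed_text(preference: str) -> np.ndarray:
    return np.array([0.2, 0.8, 0.1]) if "calm" in preference else np.array([0.8, 0.2, 0.0])

pref = embed_text("something calm for reading")
ranked = sorted(track_embeddings, key=lambda t: cosine(track_embeddings[t], pref), reverse=True)
print(ranked)  # proximity in the joint space stands in for semantic similarity
```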
System(s) and method(s) for causing contextually relevant emoji(s) to be visually rendered for presentation to user(s) in smart dictation
- Applicant
- Auteurs
- N/A
- Titre
- System(s) and method(s) for causing contextually relevant emoji(s) to be visually rendered for presentation to user(s) in smart dictation
- Patent Number
- US2024078374
- Publication Date
- 2024
- uri
- https://patents.google.com/patent/US20240078374
- Description
- Implementations described herein relate to causing emoji(s) that are associated with a given emotion class expressed by a spoken utterance to be visually rendered for presentation to a user at a display of a client device of the user. Processor(s) of the client device may receive audio data that captures the spoken utterance, process the audio data to generate textual data that is predicted to correspond to the spoken utterance, and cause a transcription of the textual data to be visually rendered for presentation to the user via the display. Further, the processor(s) may determine, based on processing the textual data, whether the spoken utterance expresses a given emotion class. In response to determining that the spoken utterance expresses the given emotion class, the processor(s) may cause emoji(s) that are stored in association with the given emotion class to be visually rendered for presentation to the user via the display.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Natural Language Processing
- Sentiment Analysis
- Speech Processing
- Données collectées (type source)
- Text
- Audio
- Concepts clés
- Emotion class
- Méthode
- Based on textual data generated from audio data that captures the spoken utterance, the processor may determine a given emotion class from among a plurality of disparate emotion classes (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
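A minimal sketch of the rendering step, assuming a trivial keyword classifier as a stand-in for the patent's emotion-class model; EMOJI_BY_CLASS and classify_utterance are hypothetical names.

```python
# Sketch: once a transcription is classified into an emotion class, look up
# emoji stored in association with that class and render them alongside it.
EMOJI_BY_CLASS = {"joy": ["😀", "🎉"], "sadness": ["😢"], "anger": ["😠"]}

def classify_utterance(text: str) -> str | None:
    """Trivial stand-in for the patent's emotion-class model."""
    lowered = text.lower()
    if any(w in lowered for w in ("great", "awesome")):
        return "joy"
    if any(w in lowered for w in ("sad", "unfortunately")):
        return "sadness"
    return None

transcription = "That is awesome news"
emotion_class = classify_utterance(transcription)
if emotion_class:  # render transcription plus contextually relevant emoji
    print(transcription, "".join(EMOJI_BY_CLASS[emotion_class]))
```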
X
System and method for evaluating sentiment
- Applicant
- X
- Auteurs
- N/A
- Titre
- System and method for evaluating sentiment
- Patent Number
- US2015331563
- Publication Date
- 2015
- uri
- https://patents.google.com/patent/US20150331563
- Description
- An example system and method elicits reviews and opinions from users via an online system or a web crawl. Opinions on topics are processed in real time to determine orientation. Each topic is analyzed sentence by sentence to find a central tendency of user orientation toward a given topic. Automatic topic orientation is used to provide a common comparable rating value between reviewers and potentially other systems on similar topics. Facets of the topics are extracted via a submission/acquisition process to determine the key variables of interest for users.
- keywords
- Sentiment Analysis
- Domaine de recherche
- Natural Language Processing
- Sentiment Analysis
- Données collectées (type source)
- Text
- Concepts clés
- Sentiment
- Méthode
- Opinions are processed by analyzing topics, potentially at multiple granularities of detection (word-by-word, phrase-by-phrase, sentence-by-sentence), using part-of-speech and other natural-language taggers or analyzers, to find a central tendency of user orientation (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Collections
- X
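A toy illustration of sentence-by-sentence orientation with a central tendency, assuming a tiny valence lexicon; the patent's taggers and analyzers are far richer than this word-lookup stand-in.

```python
# Sketch of sentence-level orientation averaged into a central tendency,
# using an assumed toy valence lexicon.
from statistics import mean

VALENCE = {"excellent": 1.0, "good": 0.5, "poor": -0.5, "terrible": -1.0}

def sentence_orientation(sentence: str) -> float:
    scores = [VALENCE[w] for w in sentence.lower().split() if w in VALENCE]
    return mean(scores) if scores else 0.0

review = "The battery is excellent. The screen is poor. Overall good value."
sentences = [s for s in review.split(".") if s.strip()]
orientation = mean(sentence_orientation(s) for s in sentences)
print(round(orientation, 3))  # central tendency across sentences
```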
Augmentation of Communications with Emotional Data
- Applicant
- X
- Auteurs
- N/A
- Titre
- Augmentation of Communications with Emotional Data
- Patent Number
- US2018077095
- Publication Date
- 2018
- uri
- https://patents.google.com/patent/US20180077095
- Description
- A sender device may receive input data including at least one of text or speech input data during a given period of time. In response, the sender device may use one or more emotion detection modules to analyze the input data received during that period to detect emotional information corresponding to the textual or speech input. The sender device may generate a message data stream that includes both the text generated from the textual or speech input during the given period of time and emotion data providing emotional information for the same period. A recipient device may then use one or more emotion augmentation modules to process such a message data stream and output an emotionally augmented communication.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Computer Vision
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Software Development
- Speech Processing
- Données collectées (type source)
- Text
- Audio
- Image
- Video
- Concepts clés
- Emotion
- Méthode
- Emotion detection may use image data or biometric data, including facial recognition, automatically obtained from input devices
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Promote the expression of user's emotion
- Collections
- X
Emoji prediction and visual sentiment analysis
- Applicant
- X
- Auteurs
- N/A
- Titre
- Emoji prediction and visual sentiment analysis
- Patent Number
- US2020073485
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/US20200073485
- Description
- Systems and methods for emoji prediction and visual sentiment analysis are provided. An example system includes a computer-implemented method. The method may be used to predict emoji or analyze sentiment for an input image. An example method includes the step of receiving an image. The example method further includes the steps of generating an emoji embedding for the image and generating a sentiment label for the image using the emoji embedding. The emoji embedding may be generated using a machine learning model.
- keywords
- Sentiment Analysis
- Domaine de recherche
- Computer-Mediated Communication
- Computer Vision
- Sentiment Analysis
- Social Media and User Engagement
- Données collectées (type source)
- Text
- Image
- Media Content
- Concepts clés
- Sentiment
- Méthode
- An emoji embedding may be generated for the image using a machine learning model; a sentiment label is then generated from that embedding (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify Content Emotion
- Collections
- X
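The patent derives the emoji embedding with a learned model; the sketch below only illustrates one plausible second stage, mapping an embedding to a sentiment label by nearest-neighbor lookup against invented labeled anchors.

```python
# Sketch: turn an emoji embedding into a sentiment label by nearest-neighbor
# lookup against labeled anchor embeddings (all vectors are stand-ins).
import numpy as np

ANCHORS = {  # hypothetical emoji anchor vectors with sentiment labels
    "positive": np.array([0.9, 0.1]),   # e.g., joyful emoji cluster
    "negative": np.array([0.1, 0.9]),   # e.g., angry/sad emoji cluster
}

def sentiment_from_embedding(emoji_embedding: np.ndarray) -> str:
    return min(ANCHORS, key=lambda lbl: np.linalg.norm(ANCHORS[lbl] - emoji_embedding))

image_emoji_embedding = np.array([0.8, 0.3])  # stand-in for the model output
print(sentiment_from_embedding(image_emoji_embedding))  # -> "positive"
```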
Systems and methods for swipe-to-like
- Applicant
- X
- Auteurs
- N/A
- Titre
- Systems and methods for swipe-to-like
- Patent Number
- US11010050
- Publication Date
- 2021
- uri
- https://patents.google.com/patent/US11010050
- Description
- Example systems and methods are described for implementing a swipe-to-like feature. In an example implementation, a list of content items is displayed on a touchscreen display, and based on detecting input of a first gesture, such as, for example, a swipe gesture, for a first one of the content items in the list, associating a predetermined first sentiment with the first content item.
- keywords
- Sentiment Analysis
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Données collectées (type source)
- Text
- Media Content
- Concepts clés
- Sentiment
- Méthode
- Based on the touch input, a sentiment indicium (e.g., an emoticon) is associated with the content and displayed
- Dispositif
- Device (Touchscreen)
- Objectifs du brevet
- Determine/Identify User Emotion
- Promote the expression of user's emotion
- Collections
- X
Labeling messages on a social messaging platform using message response information
- Applicant
- X
- Auteurs
- N/A
- Titre
- Labeling messages on a social messaging platform using message response information
- Patent Number
- WO2022256584
- Publication Date
- 2022
- uri
- https://patents.google.com/patent/WO2022256584
- Description
- A social messaging platform includes a labeling module to label a base message according to an aggregate message response parameter, which represents the sentiment of users towards the content of the base message. The labels provide information that can be used to distinguish more nuanced sentiments and the degree of the sentiment users may have towards the base message. The aggregate message response parameter and corresponding labels are determined, in part, by identifying and evaluating icons (e.g., emojis, emoticons) present in one or more response messages posted in response to a base message. The labels, in turn, can be used in a variety of applications including recommending new content to users based on their mood, identifying messages potentially containing toxic content for review, or providing a way for businesses to evaluate public sentiment towards an advertisement and facilitate targeted advertisements to users.
- keywords
- Sentiment Analysis
- Domaine de recherche
- Computer-Mediated Communication
- Natural Language Processing
- Sentiment Analysis
- Social Media and User Engagement
- Données collectées (type source)
- Text
- Audio
- Image
- Video
- Media Content
- Concepts clés
- Sentiment
- Méthode
- The sentiment can be evaluated by identifying icons present in the response messages, such as emojis and emoticons, or by mapping textual content to one or more icons using a natural language processing model (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Promote the expression of user's emotion
- Collections
- X
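A minimal sketch of an aggregate message-response parameter computed from icons found in response messages; the icon weights and the thresholds for the final label are assumptions, not values from the patent.

```python
# Sketch: score a base message by the icons present in its response
# messages and collapse the average into a coarse label.
ICON_SENTIMENT = {"😍": 1.0, "😂": 0.8, "👍": 0.5, "😡": -1.0, "👎": -0.5}

def aggregate_response_parameter(responses: list[str]) -> float:
    scores = [ICON_SENTIMENT[ch] for msg in responses for ch in msg if ch in ICON_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

responses = ["Love this 😍", "😂😂", "hard disagree 😡"]
param = aggregate_response_parameter(responses)
label = "positive" if param > 0.25 else "negative" if param < -0.25 else "mixed"
print(round(param, 2), label)  # label attached to the base message
```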
TikTok
Video file processing method, system, medium and electronic equipment
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Video file processing method, system, medium and electronic equipment
- Patent Number
- CN110267113
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/CN110267113
- Description
- The invention provides a video file processing method and system, a medium and electronic equipment. The method comprises the steps of obtaining voice comment information input by a user for a current video file, wherein the voice comment information comprises voice content, voice duration and comment mood; identifying the content of the current video file and generating a plurality of video scenes; determining the video scene matched with the voice comment information; and, when the video file is played to the matched scene, outputting the voice content. According to the method, interaction becomes more engaging for commenters, and user stickiness (retention) is further increased.
- keywords
- Mood
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Speech Processing
- Données collectées (type source)
- Audio
- Video
- Concepts clés
- Mood
- Méthode
- The voice-comment mood is based on the attitude of the speaking user, such as exclamation, question, or anger. Information about the current video file is also obtained, notably the emotions expressed in the video, which include questions, exclamations, anger, laughter, and the like
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Promote the expression of user's emotion
- Collections
- TikTok
Image emotional semantic analysis method, device and electronic equipment
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Image emotional semantic analysis method, device and electronic equipment
- Patent Number
- CN110378406
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/CN110378406
- Description
- The embodiment of the invention provides an image sentiment analysis method and device and electronic equipment, belonging to the technical field of image processing. The method comprises the steps of: inputting an image into a first network to determine the theme of the image; determining an image-theme emotion attribute for that theme; and inputting the image into a second network to quantify the image-theme emotion attribute, the second network comprising sub-networks corresponding to each theme, with each image being input into the sub-network of its theme. Through this processing scheme, the emotion of the image theme can be obtained and quantified.
- keywords
- Emotion
- Domaine de recherche
- Computer Vision
- Sentiment Analysis
- Software Development
- Données collectées (type source)
- Image
- Concepts clés
- Emotion
- Méthode
- After the theme of the input image is determined by a first network model, the image-theme emotion attribute is determined. This may be accomplished, for example, by associating particular image themes with particular emotion attributes; by a separate module using a lookup table of image themes and emotion attributes; or manually. For each theme, a separate second network quantifies the image emotion attribute (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify Content Emotion
- Collections
- TikTok
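A control-flow sketch of the two-stage pipeline, with both networks stubbed out; the theme names, lookup table, and returned intensity are invented, and only the theme-then-quantify structure follows the description.

```python
# Sketch of the two-stage pipeline: predict a theme, look up its emotion
# attribute, then quantify it with a per-theme scorer. Networks are stubs.
THEME_EMOTION = {"sunset": "serenity", "crowd": "excitement"}  # lookup table

def first_network(image) -> str:          # stand-in theme classifier
    return "sunset"

def second_network_for(theme):            # stand-in per-theme scorer
    return lambda image: 0.87             # quantified emotion intensity

image = object()                          # placeholder for pixel data
theme = first_network(image)
attribute = THEME_EMOTION[theme]
intensity = second_network_for(theme)(image)
print(theme, attribute, intensity)
```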
A kind of processing method of interaction data, device, equipment and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- A kind of processing method of interaction data, device, equipment and storage medium
- Patent Number
- CN109656464
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/CN109656464
- Description
- The embodiment of the invention discloses an interactive data processing method and device, equipment and a storage medium. The method comprises the steps of: recording the touch time of a touch event when a touch event on a preset type of control is detected on a live display interface; if the touch time exceeds a set time threshold, determining that the touch event is a long-press operation; when the touch event is a long-press operation, generating animation paths at set time intervals, wherein every two adjacent generated animation paths are different; and playing an animation along each animation path. According to this technical scheme, the live broadcast room becomes more entertaining and user emotion expression is promoted.
- keywords
- Emotion
- Domaine de recherche
- Haptic
- Human-Computer Interaction & User Experience
- Données collectées (type source)
- Media Content
- Concepts clés
- Emotion
- Méthode
- A long-press operation is identified from the touch duration, and animation paths are then generated at set intervals (see the sketch following this entry)
- Dispositif
- Smartphone, tablet
- Objectifs du brevet
- Determine/Identify User Emotion
- Promote the expression of user's emotion
- Collections
- TikTok
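A sketch of the long-press logic under assumed threshold and interval values; the patent does not disclose concrete numbers or the shape of an animation path, so the tuple format here is purely illustrative.

```python
# Sketch: compare the touch duration against a long-press threshold and
# emit one distinct animation path per elapsed interval.
import random

LONG_PRESS_THRESHOLD = 0.5   # seconds (assumed value)
PATH_INTERVAL = 0.2          # seconds between generated paths (assumed)

def animation_paths(press_duration: float):
    if press_duration <= LONG_PRESS_THRESHOLD:
        return []  # not a long press: no animation
    n_paths = int((press_duration - LONG_PRESS_THRESHOLD) / PATH_INTERVAL) + 1
    rng = random.Random(42)
    # vary the control point so every two adjacent paths differ
    return [("start", (0, 0), "control", (round(rng.random(), 2), round(rng.random(), 2)))
            for _ in range(n_paths)]

for path in animation_paths(press_duration=1.1):
    print(path)
```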
The method and apparatus of personage's emotion for identification
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- The method and apparatus of personage's emotion for identification
- Patent Number
- CN110175565
- Publication Date
- 2019
- uri
- https://patents.google.com/patent/CN110175565
- Description
- The embodiment of the invention discloses a method and device for identifying character emotion. A specific embodiment of the method comprises the steps of extracting a face image set from a to-be-processed person video; dividing the face image set into at least one face image group based on a matching relationship between the face images, with different face image groups corresponding to different persons displayed in the video; for each face image group, performing expression recognition on each face image in the group to obtain a corresponding expression recognition result; and determining the emotion information of the person corresponding to the face image group based on the expression recognition results for each face image in the group. The embodiment realizes character emotion recognition based on facial expressions.
- keywords
- Emotion
- Domaine de recherche
- Computer Vision
- Sentiment Analysis
- Données collectées (type source)
- Image
- Video
- Concepts clés
- Emotion
- Méthode
- The character's emotion information is determined by performing expression recognition on face images extracted from a character video (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Collections
- TikTok
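The patent states that per-group emotion information is determined from the per-image expression results without fixing the aggregation rule; the sketch below assumes a simple majority vote over stubbed recognition outputs.

```python
# Sketch of the aggregation step: per-frame expression results, already
# grouped per person, are combined by majority vote into emotion info.
from collections import Counter

# Hypothetical per-frame expression recognition results per person.
face_image_groups = {
    "person_a": ["happy", "happy", "neutral", "happy"],
    "person_b": ["angry", "angry", "sad"],
}

def person_emotion(expressions: list[str]) -> str:
    return Counter(expressions).most_common(1)[0][0]

for person, expressions in face_image_groups.items():
    print(person, "->", person_emotion(expressions))
```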
Information processing method and device and terminal equipment
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Information processing method and device and terminal equipment
- Patent Number
- CN111970402
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/CN111970402
- Description
- An embodiment of the invention discloses an information processing method and device, and electronic equipment. One specific embodiment of the information processing method comprises the steps of: displaying a mood image list in response to a received preset instruction for indicating addition of mood images, wherein the mood image list comprises at least one mood image; determining a target mood image according to a selection operation of the user on the mood images in the list; and adding the target mood image to the user's avatar. The method makes it easier for the user's contacts to see the user's mood.
- keywords
- Mood
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Données collectées (type source)
- Image
- Media Content
- Concepts clés
- Mood
- Méthode
- The user selects a mood image from the mood image list
- Dispositif
- Device
- Objectifs du brevet
- Promote the expression of user's emotion
- Collections
- TikTok
Video recommendation method and device, electronic equipment and computer-readable storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Video recommendation method and device, electronic equipment and computer-readable storage medium
- Patent Number
- CN111726691
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/CN111726691
- Description
- The invention provides a video recommendation method and device, electronic equipment and a computer-readable storage medium. The method comprises the steps of: obtaining the sensory information of a user when the user is detected watching a video; sending the sensory information to a server; receiving related information of a to-be-recommended video returned by the server; and displaying that information to the user, wherein the to-be-recommended video is determined based on the sensory information. In the scheme of the present disclosure, the sensory information obtained while the user watches a video reflects the user's viewing state, including how the user feels about the currently watched video; because the to-be-recommended video is determined from this information, the recommendations are more accurate and better match the user's state of mind.
- keywords
- Feeling
- Domaine de recherche
- Computer Vision
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Speech Processing
- Données collectées (type source)
- Image
- Audio
- Video
- Concepts clés
- Feeling
- Méthode
- The user's viewing experience can be inferred by combining the facial image and the voice information
- Dispositif
- Device
- Objectifs du brevet
- Personalize/Improve with emotion information
- Collections
- TikTok
Reading interaction method, device, equipment, server and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Reading interaction method, device, equipment, server and storage medium
- Patent Number
- CN111443794
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/CN111443794
- Description
- The invention discloses a reading interaction method, device and equipment, a server and a storage medium. The method comprises the following steps: receiving a user face image captured while the user reads; identifying the face image and determining the user's state expression; and, when the state expression meets a reading interaction condition, determining interaction content corresponding to the state expression and playing it. Through state-expression interaction during reading, the method lets the user and a virtual reading partner share emotional changes, making an otherwise monotonous reading process more engaging and increasing the user's enthusiasm for reading.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Computer Vision
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Données collectées (type source)
- Image
- Video
- Concepts clés
- Emotion
- Méthode
- The state expression is determined from facial images captured while the user reads, using an image recognition algorithm and a preset state-expression library
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
- TikTok
Method and device for video dubbing
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Method and device for video dubbing
- Patent Number
- CN111767431
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/CN111767431
- Description
- The embodiment of the invention discloses a method and device for matching music to video. A specific embodiment of the method comprises the steps of obtaining a to-be-matched video; inputting the to-be-matched video into a pre-trained video emotion classification model to obtain at least one piece of emotion classification information corresponding to the video and a probability for each; acquiring a to-be-recalled music information set corresponding to the video, wherein each piece of music information in the set carries at least one emotion label; and generating a recalled-music information list by matching the video's emotion classification information and probabilities against the emotion tags of the music information in the set. According to the embodiment, the emotion-dimension information of the video is fully utilized, effectively improving the quality of the video-music match.
- keywords
- Emotion
- Domaine de recherche
- Computer Vision
- Sentiment Analysis
- Software Development
- Données collectées (type source)
- Image
- Audio
- Video
- Concepts clés
- Emotion
- Méthode
- Using image features extracted from the video, emotion is detected by comparing similarity with preset classification features for each emotion class; the resulting similarity may serve as the probability for that emotion classification (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify Content Emotion
- Personalize/Improve with emotion information
- Collections
- TikTok
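A sketch of the recall-scoring step, assuming the match score is the summed probability of the video's emotion classes that appear among a track's emotion tags; the emotion distribution and music library are invented.

```python
# Sketch: rank candidate tracks by the summed probability of video emotion
# classes that match each track's emotion tags.
video_emotions = {"joy": 0.7, "excitement": 0.2, "sadness": 0.1}  # model output

music_library = {  # hypothetical to-be-recalled music with emotion tags
    "track_upbeat": {"joy", "excitement"},
    "track_melancholy": {"sadness"},
}

def match_score(tags: set[str]) -> float:
    return sum(p for emo, p in video_emotions.items() if emo in tags)

recall_list = sorted(music_library, key=lambda t: match_score(music_library[t]), reverse=True)
for track in recall_list:
    print(track, round(match_score(music_library[track]), 2))
```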
Background music generation method and device, readable medium and electronic equipment
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Background music generation method and device, readable medium and electronic equipment
- Patent Number
- CN111782576
- Publication Date
- 2020
- uri
- https://patents.google.com/patent/CN111782576
- Description
- The invention relates to a background music generation method and device, a readable medium and electronic equipment, in the technical field of electronic information processing. The method comprises the steps of obtaining a target text and its target type, determining a target chord corresponding to the target type, determining a target emotion label for the target text, and generating background music for the target text according to the target emotion label and the target chord. Background music suited to the target text is thus generated automatically from its type and emotion label, with no manual selection and without being limited to existing music; this improves generation efficiency, broadens the applicability of background music, and enhances the depth and expressiveness of the target text.
- keywords
- Emotion
- Domaine de recherche
- Sentiment Analysis
- Software Development
- Données collectées (type source)
- Text
- Concepts clés
- Emotion
- Méthode
- After the target text is obtained, a target emotion label is selected according to the number of words in the text associated with each emotion. The target chord and the target emotion label may then be used as inputs to a pre-trained music generation model to obtain the background music (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Personalize/Improve with emotion information
- Collections
- TikTok
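A sketch of the label-selection step only, assuming per-emotion vocabularies and a simple word count; the downstream chord handling and the pre-trained music generation model are out of scope here.

```python
# Sketch: count how many words of the target text belong to each emotion's
# vocabulary and pick the dominant emotion as the target label.
EMOTION_VOCAB = {  # assumed per-emotion vocabularies
    "cheerful": {"bright", "laugh", "sunny", "dance"},
    "somber": {"rain", "alone", "grey", "farewell"},
}

def target_emotion_label(text: str) -> str:
    words = text.lower().split()
    counts = {emo: sum(w in vocab for w in words) for emo, vocab in EMOTION_VOCAB.items()}
    return max(counts, key=counts.get)

text = "They dance and laugh under the sunny sky"
print(target_emotion_label(text))  # -> "cheerful"; fed with a chord to the generator
```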
Method and device for generating stickers
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Method and device for generating stickers
- Patent Number
- US2021004574
- Publication Date
- 2021
- uri
- https://patents.google.com/patent/US20210004574
- Description
- A method and a device for generating stickers are provided. An embodiment of the method includes extracting an image sequence from a to-be-processed person-containing video; identifying the emotion of the face displayed in each target image of the sequence to obtain corresponding identification results; and, based on the emotional levels corresponding to the emotion labels in those identification results, extracting a video fragment from the video and using it as a sticker. The image sequence comprises target images displaying faces, and the identification results comprise emotion labels and their corresponding emotional levels. The embodiment can thus generate stickers from a given person-containing video based on facial emotion matching.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Computer Vision
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Données collectées (type source)
- Image
- Video
- Concepts clés
- Emotion
- Méthode
- Based on face detection, target images in the image sequence are determined. Convolutional neural networks then identify the emotion of the face displayed in each target image, yielding an emotion label and an emotional level (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Personalize/Improve with emotion information
- Collections
- TikTok
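A sketch of fragment selection under an assumed emotion-level threshold: the span from the first to the last frame whose level clears the threshold is cut out as the sticker; the per-frame recognition results are stubbed.

```python
# Sketch: select the sticker fragment from per-frame (label, level) results
# by keeping the span whose emotion level exceeds an assumed threshold.
frames = [("neutral", 0.2), ("happy", 0.55), ("happy", 0.9), ("happy", 0.8), ("neutral", 0.1)]
LEVEL_THRESHOLD = 0.5  # assumed cut-off

def sticker_span(results, threshold):
    indices = [i for i, (_, level) in enumerate(results) if level >= threshold]
    return (indices[0], indices[-1]) if indices else None

span = sticker_span(frames, LEVEL_THRESHOLD)
print(span)  # -> (1, 3): extract this fragment from the video as the sticker
```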
Interaction method, interaction device, electronic equipment and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Interaction method, interaction device, electronic equipment and storage medium
- Patent Number
- CN112764612
- Publication Date
- 2021
- uri
- https://patents.google.com/patent/CN112764612
- Description
- The embodiment of the invention provides an interaction method and device, electronic equipment and a storage medium. The method comprises the following steps: receiving a first trigger operation acting on a target control corresponding to a target work; and, in response to the first trigger operation, displaying a target visual element at a preset display position corresponding to the target work and controlling the target visual element to move from the preset display position to a first target display position of a target identifier, the target identifier being an identifier of the publisher of the target work. This scheme provides a new interaction mode through which the user can express a new emotion, making interaction more engaging and further increasing interaction frequency and improving the interaction experience.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Données collectées (type source)
- Media Content
- Concepts clés
- Emotion
- Méthode
- Based on user input such as a like control, a collection control, or a comment control
- Dispositif
- Device
- Objectifs du brevet
- Promote the expression of user's emotion
- Collections
- TikTok
Music sharing method and device, electronic equipment and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Music sharing method and device, electronic equipment and storage medium
- Patent Number
- CN113050857
- Publication Date
- 2021
- uri
- https://patents.google.com/patent/CN113050857
- Description
- The embodiment of the invention discloses a music sharing method, system and device, electronic equipment and a storage medium. The method comprises the following steps: entering a lyric-video-template display interface related to a target song when an instruction for displaying a lyric video template is triggered; generating a lyric video according to the user's editing operations on the template display interface; and, in response to a video publishing instruction from the user, publishing the lyric video to a target position. The technical scheme addresses the prior-art problems that music sharing modes and scenarios are limited and cannot meet users' needs for active, rich, music-related emotional expression; it amplifies users' musical expression, helps them express music-related emotions more actively and richly, and helps a streaming-media product achieve wider social-media distribution and thereby acquire more high-quality new users.
- keywords
- Emotion
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Social Media and User Engagement
- Software Development
- Données collectées (type source)
- Text
- Image
- Video
- Media Content
- Concepts clés
- Emotion
- Méthode
- Users may express their emotions by editing a lyric video template associated with a target song. The system also provides controls for liking, commenting, forwarding, adding, publishing, and the like
- Dispositif
- Device
- Objectifs du brevet
- Promote the expression of user's emotion
- Collections
- TikTok
Display control method, device, equipment and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Display control method, device, equipment and storage medium
- Patent Number
- CN113923499
- Publication Date
- 2022
- uri
- https://patents.google.com/patent/CN113923499
- Description
- The embodiment of the invention discloses a display control method and device, equipment and a storage medium. The method comprises the steps of playing a first video in a first display area of the video-stream playing interface of a preset application program, receiving a switching instruction and, when a preset reminding condition is met, switching to play a second video in the first display area; during playback, the display area of the second video changes from the first display area to a smaller second display area, and preset prompt information is displayed outside the second display area in the playing interface. Combining the reduction in display size with the prompt information strengthens the user's awareness of anti-addiction measures and enhances the reminding effect, addressing the problem that users easily become addicted to watching a video stream. Meanwhile, the abruptness and jarring feeling of directly playing a video in a smaller area are avoided, preserving the user's viewing experience.
- keywords
- Feeling
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Social Media and User Engagement
- Software Development
- Données collectées (type source)
- Media Content
- Concepts clés
- Feeling
- Méthode
- By reducing the display area of the switched video and displaying prompt information in the video-stream playing interface, the user's anti-addiction awareness is enhanced while abrupt or jarring transitions are avoided, preserving the user's viewing experience
- Dispositif
- Device
- Objectifs du brevet
- N/A
- Collections
- TikTok
Video processing method and apparatus, device and medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Video processing method and apparatus, device and medium
- Patent Number
- EP4145837
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/EP4145837
- Description
- Provided in embodiments of the present disclosure are a video processing method and apparatus, a device, and a medium. The method includes: receiving a play trigger operation for a forwarded video, the forwarded video being forwarded from an original video by a forwarding user; obtaining the original video and the forwarding comment information provided by the forwarding user on the original video when forwarding it; and playing the original video on a play interface of the forwarding user's video works while displaying the forwarding comment information. In the embodiments of the present disclosure, when a play operation is received for the forwarded video, the original video is played and the forwarding user's comment information is displayed alongside it, instead of only displaying the creator's comment information as in the related art. This makes it convenient for the user to know the forwarding user's feelings about the forwarded video, thereby improving the user experience.
- keywords
- Feeling
- Domaine de recherche
- Computer-Mediated Communication
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Social Media and User Engagement
- Software Development
- Données collectées (type source)
- Text
- Concepts clés
- Feeling
- Méthode
- Based on user input indicating a comment or a user emotion
- Dispositif
- Device
- Objectifs du brevet
- Promote the expression of user's emotion
- Collections
- TikTok
Sound effect adding method and apparatus, storage medium, and electronic device
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Sound effect adding method and apparatus, storage medium, and electronic device
- Patent Number
- US2023259712
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/US20230259712
- Description
- The present invention relates to a sound effect adding method and apparatus, a storage medium, and an electronic device. The method comprises: determining, on the basis of an emotion judgment model, a statement emotion label for each statement of a text to be processed; determining an emotion offset value of the text on the basis of the most frequent type among the statement emotion labels; for each paragraph of the text, determining an emotion distribution vector of the paragraph according to the statement emotion labels of the statements in the paragraph; determining the emotion probability distribution of the paragraph on the basis of the emotion offset value and the emotion distribution vector; determining, according to the paragraph's emotion probability distribution and the sound-effect emotion labels of multiple sound effects in a sound effect library, a target sound effect matching the paragraph; and adding the target sound effect at the audio position corresponding to the paragraph in the audio file corresponding to the text. Sound effects can thus be selected and added automatically, improving the efficiency of adding sound effects.
- keywords
- Emotion
- Domaine de recherche
- Natural Language Processing
- Sentiment Analysis
- Software Development
- Speech Processing
- Données collectées (type source)
- Text
- Audio
- Concepts clés
- Emotion
- Méthode
- Input may be the to-be-processed text or a voiced audio file corresponding to it. Statement emotions are determined by analyzing word semantics, the reciter's voice emotion, and the semantics of the speech (see the sketch following this entry)
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify Content Emotion
- Personalize/Improve with emotion information
- Collections
- TikTok
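A sketch of the matching pipeline; the offset weight (0.2) and the blend-then-normalize formula are assumptions, since the patent describes the offset and the per-paragraph distribution without giving exact arithmetic.

```python
# Sketch: majority statement label -> emotion offset, per-paragraph
# distribution blended with the offset, then the sound effect whose
# emotion label has the highest paragraph probability is selected.
from collections import Counter

EMOTIONS = ["joy", "fear", "sadness"]

def emotion_offset(statement_labels):
    dominant = Counter(statement_labels).most_common(1)[0][0]
    return {e: (0.2 if e == dominant else 0.0) for e in EMOTIONS}  # assumed weight

def paragraph_distribution(paragraph_labels, offset):
    counts = Counter(paragraph_labels)
    total = sum(counts.values())
    dist = {e: counts.get(e, 0) / total + offset[e] for e in EMOTIONS}
    norm = sum(dist.values())
    return {e: v / norm for e, v in dist.items()}

all_labels = ["joy", "fear", "fear", "sadness", "fear"]   # whole text
para_labels = ["fear", "fear", "sadness"]                  # one paragraph
dist = paragraph_distribution(para_labels, emotion_offset(all_labels))

sound_library = {"creaking_door": "fear", "rainfall": "sadness", "chimes": "joy"}
best = max(sound_library, key=lambda s: dist[sound_library[s]])
print(best)  # add this effect at the paragraph's audio position
```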
Media collection generation method and apparatus, electronic device, and storage medium
- Applicant
- TikTok
- Auteurs
- N/A
- Titre
- Media collection generation method and apparatus, electronic device, and storage medium
- Patent Number
- WO2023165368
- Publication Date
- 2023
- uri
- https://patents.google.com/patent/WO2023165368
- Description
- Embodiments of the present disclosure provide a media collection generation method and apparatus, an electronic device, a storage medium, a computer program product, and a computer program. In the method, a plurality of emotion identifiers representing preset emotion types are displayed in a playing interface for a target piece of media; in response to a first interaction operation on a target emotion identifier, the target piece of media is added to the target emotion media collection corresponding to that identifier. The preconfigured emotion identifiers, triggered by an interaction operation, classify the target piece of media and thereby generate corresponding emotion media collections, so that media are classified on the basis of the user's emotions and feelings. This improves the experience of personalized media collections, simplifies the generation steps and logic, and improves generation efficiency.
- keywords
- Emotion
- Domaine de recherche
- Human-Computer Interaction & User Experience
- Sentiment Analysis
- Software Development
- Données collectées (type source)
- Audio
- Image
- Video
- Media Content
- Concepts clés
- Emotion
- Méthode
- A plurality of emotion identifiers representing preset emotion types are displayed in the playback interface of the target media. When the user triggers a target identifier, the media is classified and added to the corresponding emotion media collection, reflecting the user's emotional response
- Dispositif
- Device
- Objectifs du brevet
- Determine/Identify User Emotion
- Determine/Identify Content Emotion
- Personalize/Improve with emotion information
- Collections
- TikTok