-
Applicant
-
Google
-
Authors
-
N/A
-
Title
-
System(s) and method(s) for causing contextually relevant emoji(s) to be visually rendered for presentation to user(s) in smart dictation
-
Patent Number
-
US20240078374
-
Publication Date
-
2024
-
URI
-
https://patents.google.com/patent/US20240078374
-
Description
-
Implementations described herein relate to causing emoji(s) that are associated with a given emotion class expressed by a spoken utterance to be visually rendered for presentation to a user at a display of a client device of the user. Processor(s) of the client device may receive audio data that captures the spoken utterance, process the audio data to generate textual data that is predicted to correspond to the spoken utterance, and cause a transcription of the textual data to be visually rendered for presentation to the user via the display. Further, the processor(s) may determine, based on processing the textual data, whether the spoken utterance expresses a given emotion class. In response to determining that the spoken utterance expresses the given emotion class, the processor(s) may cause emoji(s) that are stored in association with the given emotion class to be visually rendered for presentation to the user via the display.
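The description outlines a pipeline of transcription, emotion-class detection, and emoji lookup. A minimal sketch of that flow is shown below; the classifier, the keyword lists, and the emoji store are all hypothetical stand-ins (the patent does not specify how the emotion classes are modeled), and real implementations would use a trained model rather than keyword matching.

```python
# Hypothetical sketch of the described flow: given a transcript of a spoken
# utterance, detect an emotion class (if any) and return the emojis stored
# in association with that class. All names and word lists are illustrative.

# Emojis "stored in association with" each emotion class (assumed mapping).
EMOJI_STORE = {
    "joy": ["😀", "🎉"],
    "sadness": ["😢"],
    "anger": ["😠"],
}

# Toy keyword lists standing in for a real emotion classifier.
KEYWORDS = {
    "joy": {"great", "happy", "awesome"},
    "sadness": {"sad", "sorry", "unfortunately"},
    "anger": {"angry", "furious", "hate"},
}

def classify_emotion(text):
    """Pick the emotion class whose keyword list best matches the
    transcript, or None when no emotion class is expressed."""
    tokens = set(text.lower().split())
    scores = {cls: len(tokens & words) for cls, words in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def emojis_for_utterance(transcript):
    """Return emojis associated with the detected emotion class, or an
    empty list, so the caller can decide whether to render anything."""
    emotion = classify_emotion(transcript)
    return EMOJI_STORE.get(emotion, []) if emotion else []
```

In the patent's terms, `emojis_for_utterance` would be invoked on the textual data produced by speech recognition, and a non-empty result would trigger the visual rendering step on the client device's display.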
-
keywords
-
Emotion
-
Research Field
-
Computer-Mediated Communication
-
Human-Computer Interaction & User Experience
-
Natural Language Processing
-
Sentiment Analysis
-
Speech Processing
-
Data Collected (source type)
-
Text
-
Audio
-
Key Concepts
-
Emotion class
-
Method
-
Based on textual data generated from audio data capturing the spoken utterance, the processor(s) may determine a given emotion class from among a plurality of disparate emotion classes.
-
Device
-
Device
-
Patent Objectives
-
Determine/Identify User Emotion
-
Personalize/Improve with emotion information