Systems and methods for capturing the emotion of a user viewing particular media content. The method, implemented on a computer system having one or more processors and memory, includes detecting display of a media content item, e.g., a video clip, an audio clip, a photo, or a text message. While the media content item is being displayed, a viewer expression (i.e., an emotion) is detected that corresponds to a predefined viewer expression, for example by comparing the captured expression against a database of predefined expressions, and a portion of the media content item (e.g., a scene of the video clip) corresponding to the viewer expression is identified. The viewer expression is based on one of a facial expression, a body movement, a voice, or an arm, leg, or finger gesture, and is presumed to be the viewer's reaction to that portion of the media content item.
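The matching step above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the predefined expression set, the `ExpressionEvent` record, and the `record_reaction` helper are all assumed names.

```python
from dataclasses import dataclass

# Assumed set of predefined viewer expressions for illustration.
PREDEFINED_EXPRESSIONS = {"smile", "frown", "laugh", "thumbs_up"}

@dataclass
class ExpressionEvent:
    expression: str        # detected expression label, e.g. a facial expression
    media_position: float  # playback position (seconds) when it was detected

def record_reaction(detected: str, playback_position: float, log: list) -> bool:
    """If the detected expression matches a predefined one, associate it
    with the portion of the media item being displayed at that moment."""
    if detected in PREDEFINED_EXPRESSIONS:
        log.append(ExpressionEvent(detected, playback_position))
        return True
    return False

reactions = []
record_reaction("laugh", 42.5, reactions)    # matches -> logged
record_reaction("neutral", 43.0, reactions)  # no predefined match -> ignored
```

The log then ties each recognized expression back to the media portion (here, a playback timestamp) that presumably provoked it.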
A dynamic text-to-speech (TTS) process and system are described. In response to receiving a command to provide information to a user, a device retrieves the information and determines user and environment attributes, including: (i) the distance between the device and the user when the user uttered the query; and (ii) voice features of the user. Based on these attributes, the device determines a likely mood of the user and the likely environment in which the user and the device are located. An audio output template matching the likely mood and voice features of the user, and compatible with that environment, is selected. The retrieved information is converted into an audio signal using the selected audio output template and output by the device.
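The template-selection logic can be sketched roughly as below. All thresholds, mood labels, and template fields are illustrative assumptions, not the described system's actual parameters.

```python
# Crude illustrative heuristics for mood and environment, keyed on the two
# attributes named in the description: voice features and user distance.

def infer_mood(pitch_hz: float, speech_rate_wps: float) -> str:
    # Higher pitch and faster speech read as "excited" in this toy rule.
    if pitch_hz > 220 and speech_rate_wps > 3.0:
        return "excited"
    if speech_rate_wps < 1.5:
        return "calm"
    return "neutral"

def infer_environment(distance_m: float) -> str:
    # A nearby user suggests a private setting; a distant one, a shared room.
    return "private" if distance_m < 1.0 else "shared"

# Assumed audio output templates, indexed by (mood, environment).
TEMPLATES = {
    ("excited", "private"): {"voice": "bright", "volume": 0.6},
    ("calm", "private"): {"voice": "soft", "volume": 0.4},
    ("neutral", "shared"): {"voice": "clear", "volume": 0.8},
}

def select_template(distance_m: float, pitch_hz: float, rate_wps: float) -> dict:
    mood = infer_mood(pitch_hz, rate_wps)
    env = infer_environment(distance_m)
    # Fall back to a default template when no exact match exists.
    return TEMPLATES.get((mood, env), {"voice": "clear", "volume": 0.5})

tpl = select_template(distance_m=2.5, pitch_hz=180.0, rate_wps=2.0)
```

A distant, neutral-sounding user here selects the louder "shared room" template; the retrieved information would then be synthesized with that template's voice and volume settings.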
Meetings held in virtual environments can allow participants to conveniently express emotions to a meeting organizer and/or other participants. The avatar representing a meeting participant can be enhanced to include an expression symbol selected by that participant. The participant can choose among a set of expression symbols offered for the meeting.
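A minimal sketch of the selection mechanism, under assumed names: the meeting offers a fixed symbol set, and each participant's avatar carries the symbol that participant picks from it.

```python
# Expression symbols offered for this meeting (assumed example set).
MEETING_SYMBOLS = ["applaud", "agree", "question", "laugh"]

avatars: dict[str, str] = {}  # participant -> currently displayed symbol

def select_symbol(participant: str, symbol: str) -> bool:
    """Attach the chosen symbol to the participant's avatar, but only if
    it belongs to the set offered for the meeting."""
    if symbol in MEETING_SYMBOLS:
        avatars[participant] = symbol
        return True
    return False

select_symbol("alice", "applaud")  # offered -> avatar updated
select_symbol("bob", "pizza")      # not offered -> rejected
```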
Described herein are methods and systems for analyzing music audio. An example method includes obtaining a music audio track, calculating acoustic features of the music audio track, calculating geometric features of the music audio track in view of the acoustic features, and determining a mood of the music audio track in view of the geometric features.
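The three-stage pipeline (acoustic features, then geometric features, then mood) can be sketched as below. The specific features, the geometric construction, and the classification rule are all illustrative stand-ins, not the described method.

```python
import math

def acoustic_features(samples: list[float]) -> dict:
    # Toy stand-ins for acoustic features: mean energy and zero-crossing rate.
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return {"energy": energy, "zcr": zcr}

def geometric_features(acoustic: dict) -> dict:
    # Geometric view of the acoustic features: treat them as a 2-D point
    # and take its distance from the origin and its angle.
    e, z = acoustic["energy"], acoustic["zcr"]
    return {"radius": math.hypot(e, z), "angle": math.atan2(z, e)}

def mood(geometric: dict) -> str:
    # Crude illustrative rule: a larger feature radius reads as "energetic".
    return "energetic" if geometric["radius"] > 1.0 else "mellow"

loud = [0.9, -0.9, 0.9, -0.9]
quiet = [0.01, -0.01, 0.01, -0.01]
loud_mood = mood(geometric_features(acoustic_features(loud)))
quiet_mood = mood(geometric_features(acoustic_features(quiet)))
```

Each stage consumes only the previous stage's output, mirroring the "in view of" chaining in the description.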