A sender device may receive input data including at least one of text or speech input during a given period of time. In response, the sender device may use one or more emotion detection modules to analyze the input data received during that period and detect emotional information corresponding to the text or speech input received during the given period of time. The sender device may generate a message data stream that includes both text generated from the text or speech input during the given period of time and emotion data providing emotional information for the same period of time. A recipient device may then use one or more emotion augmentation modules to process such a message data stream and output an emotionally augmented communication.
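A minimal sketch of such a message data stream is shown below, pairing text with emotion data for the same period of time. The class names, fields, and example values are hypothetical; the disclosure does not specify a concrete schema.

```python
# Hypothetical schema for a message data stream that carries both text and
# emotion data for the same time period (all names are illustrative).
from dataclasses import dataclass, field
from typing import List


@dataclass
class EmotionData:
    label: str         # e.g. "joy", "frustration"
    confidence: float  # detector confidence in [0, 1]


@dataclass
class MessageSegment:
    start_ms: int                  # start of the time period
    end_ms: int                    # end of the time period
    text: str                      # text generated from the text/speech input
    emotions: List[EmotionData] = field(default_factory=list)


@dataclass
class MessageDataStream:
    segments: List[MessageSegment] = field(default_factory=list)

    def append(self, segment: MessageSegment) -> None:
        self.segments.append(segment)


# Example: a sender device packaging one period of input
stream = MessageDataStream()
stream.append(MessageSegment(
    start_ms=0, end_ms=2500,
    text="I can't believe we won!",
    emotions=[EmotionData(label="excitement", confidence=0.87)],
))
```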
The present disclosure relates to systems, methods, and devices for augmenting text messages. In particular, the message system augments text messages with emotion information of a user based on characteristics of a keyboard input from the user. For example, one or more implementations involve predicting an emotion of the user based on the characteristics of the keyboard input for a message. One or more embodiments of the message system select a formatting for the text of the message based on the predicted emotion and format the message within a messaging application in accordance with the selected formatting.
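The following sketch illustrates one way keyboard-input characteristics could drive emotion prediction and format selection. The feature names (typing speed, backspace rate, key-press duration), thresholds, and formatting values are assumptions for illustration, not values from the disclosure.

```python
# Rule-based stand-in for an emotion predictor driven by keyboard-input
# characteristics; feature names and thresholds are illustrative assumptions.
from typing import Dict


def predict_emotion(features: Dict[str, float]) -> str:
    if features.get("backspace_rate", 0.0) > 0.3 and features.get("chars_per_sec", 0.0) > 6.0:
        return "frustrated"
    if features.get("chars_per_sec", 0.0) > 7.0:
        return "excited"
    if features.get("avg_keypress_ms", 0.0) > 250:
        return "tired"
    return "neutral"


# Mapping from predicted emotion to a message formatting (illustrative only)
FORMAT_FOR_EMOTION = {
    "excited":    {"bold": True,  "color": "#d81b60", "emoji": "🎉"},
    "frustrated": {"bold": False, "color": "#b71c1c", "emoji": "😤"},
    "tired":      {"bold": False, "color": "#607d8b", "emoji": "😴"},
    "neutral":    {"bold": False, "color": "#000000", "emoji": ""},
}


def format_message(text: str, features: Dict[str, float]) -> Dict[str, object]:
    emotion = predict_emotion(features)
    fmt = FORMAT_FOR_EMOTION[emotion]
    return {"text": f"{text} {fmt['emoji']}".strip(), "emotion": emotion, **fmt}


print(format_message("On my way", {"chars_per_sec": 7.5, "backspace_rate": 0.1}))
```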
This document describes automated nursing assessments. Automation of the nursing assessment involves a nursing-assessment device that determines a person's mood, physical state, psychosocial state, and neurological state. To determine the person's mood and physical state, video of the person is captured while the person is positioned in front of an everyday object, such as a mirror. The captured video is then processed according to human condition recognition techniques, which produce indications of the person's mood and physical state, such as whether the person is happy or sad, whether the person is healthy or sick, the person's vital signs, and so on. In addition to mood and physical state, the person's psychosocial and neurological states are also determined. To do so, questions are asked of the person. These questions are selected from a plurality of psychosocial and neurological state assessment questions, which include queries regarding how the person feels, what the person has been doing, and so on. The selected questions are asked through audible or visual interfaces of the nursing-assessment device, and the person's responses are then analyzed. The analysis processes the received answers according to psychosocial and neurological state assessment techniques to produce indications of the person's psychosocial and neurological state.
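As a rough illustration of the question-selection and response-analysis step, the sketch below selects questions from a pool and maps free-text answers to a coarse psychosocial indication. The question pool, keyword list, and state labels are hypothetical; actual psychosocial and neurological assessment techniques would be substantially more involved.

```python
# Illustrative question-and-answer analysis step; all content is hypothetical.
from typing import Dict, List

PSYCHOSOCIAL_QUESTIONS: List[str] = [
    "How are you feeling today?",
    "What have you been doing this week?",
    "Have you spoken with friends or family recently?",
]

NEGATIVE_KEYWORDS = {"sad", "alone", "tired", "nothing", "worried"}


def analyze_responses(responses: Dict[str, str]) -> Dict[str, str]:
    """Map free-text answers to a coarse psychosocial indication."""
    hits = sum(
        1 for answer in responses.values()
        for word in answer.lower().split()
        if word.strip(".,!?") in NEGATIVE_KEYWORDS
    )
    indication = "follow-up recommended" if hits >= 2 else "no concern detected"
    return {"negative_signals": str(hits), "psychosocial_indication": indication}


answers = dict(zip(
    PSYCHOSOCIAL_QUESTIONS,
    ["I feel a bit tired and alone", "Mostly nothing", "Not this week"],
))
print(analyze_responses(answers))
```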
A given set of videos is sequenced in an aesthetically pleasing manner using models learned from human-curated playlists. Semantic features associated with each video in the curated playlists are identified, and a first-order Markov chain model is learned from the curated playlists. In one method, a directed graph is induced using the Markov model, and a sequencing is obtained by finding the shortest path through the directed graph. In another method, a sampling-based approach is implemented to produce paths on the directed graph; multiple samples are generated, and the best-scoring sample is returned as the output. In a third method, a relevance-based random walk sampling algorithm is modified to produce a reordering of the playlist.
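The sketch below illustrates the general idea of the sampling-based variant: a first-order Markov chain over semantic features is learned from curated playlists, random orderings of the input videos are sampled, and the ordering scored highest under the model is returned. The feature strings, smoothing constant, and sample count are assumptions, and the scoring is simplified relative to the methods described above.

```python
# Simplified sampling-based sequencing under a learned first-order Markov model.
import math
import random
from collections import defaultdict
from typing import Dict, List, Tuple


def learn_markov(playlists: List[List[str]], alpha: float = 0.1) -> Dict[Tuple[str, str], float]:
    """Learn smoothed first-order transition probabilities between semantic features."""
    counts: Dict[str, Dict[str, float]] = defaultdict(lambda: defaultdict(float))
    vocab = {f for pl in playlists for f in pl}
    for pl in playlists:
        for a, b in zip(pl, pl[1:]):
            counts[a][b] += 1.0
    probs: Dict[Tuple[str, str], float] = {}
    for a in vocab:
        total = sum(counts[a].values()) + alpha * len(vocab)
        for b in vocab:
            probs[(a, b)] = (counts[a][b] + alpha) / total
    return probs


def score(order: List[str], probs: Dict[Tuple[str, str], float]) -> float:
    """Log-likelihood of an ordering under the transition model."""
    return sum(math.log(probs.get((a, b), 1e-9)) for a, b in zip(order, order[1:]))


def sequence_videos(features: List[str], probs, n_samples: int = 200) -> List[str]:
    """Sample random orderings and keep the best-scoring one."""
    best, best_score = list(features), -math.inf
    for _ in range(n_samples):
        cand = random.sample(features, len(features))
        s = score(cand, probs)
        if s > best_score:
            best, best_score = cand, s
    return best


curated = [["sunset", "beach", "party"], ["city", "party", "sunset"]]
model = learn_markov(curated)
print(sequence_videos(["party", "beach", "sunset"], model))
```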
The invention relates to a background music generation method and device, a readable medium, and an electronic device, in the technical field of electronic information processing. The method comprises obtaining a target text and a target type of the target text, determining a target chord corresponding to the target type, determining a target emotion label of the target text, and generating background music corresponding to the target text according to the target emotion label and the target chord. Because background music suited to the target text is generated automatically from the text's type and emotion label, manual selection is not needed and the limitations of existing music are avoided; this improves the efficiency of background music generation, broadens its range of application, and enhances the connotation and expressive force of the target text.
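A minimal sketch of combining a type-derived chord with an emotion-derived tempo to produce a simple note sequence follows. The type-to-chord and emotion-to-tempo tables, the arpeggiation pattern, and the note representation are illustrative assumptions only.

```python
# Illustrative combination of a target chord (from text type) and tempo
# (from emotion label) into a simple note sequence; all tables are assumed.
from typing import Dict, List, Tuple

CHORD_FOR_TYPE: Dict[str, List[str]] = {
    "fairy_tale": ["C", "E", "G"],   # C major triad
    "news":       ["A", "C", "E"],   # A minor triad
}

TEMPO_FOR_EMOTION: Dict[str, int] = {  # beats per minute
    "joyful": 132,
    "calm":   76,
    "sad":    60,
}


def generate_background_music(text_type: str, emotion: str) -> List[Tuple[str, float]]:
    """Return (note, duration-in-seconds) pairs arpeggiating the target chord."""
    chord = CHORD_FOR_TYPE.get(text_type, ["C", "E", "G"])
    bpm = TEMPO_FOR_EMOTION.get(emotion, 90)
    beat = 60.0 / bpm
    # Arpeggiate the chord up and back down, repeated for two bars
    pattern = chord + chord[::-1][1:]
    return [(note, beat) for _ in range(2) for note in pattern]


print(generate_background_music("fairy_tale", "joyful"))
```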