The embodiment of the invention provides an interaction method and device, an electronic device, and a storage medium. The method comprises the following steps: receiving a first trigger operation acting on a target control corresponding to a target work; and, in response to the first trigger operation, displaying a target visual element at a preset display position corresponding to the target work, and controlling the target visual element to move from the preset display position to a first target display position of a target identifier, the target identifier being an identifier of a target publisher of the target work. By adopting this technical scheme, the embodiment of the invention provides a new interaction mode for the user, so that the user can express a new emotion through that interaction mode; this increases the enjoyment of interaction and further improves the user's interaction frequency and interaction experience.
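As a rough, self-contained sketch of the trigger-display-animate flow described above (the `Point` type, the `lerp` helper, and the frame loop are illustrative assumptions, not part of the claimed method):

```python
# Minimal sketch of the described flow, assuming a hypothetical 2D UI model;
# none of these names come from the abstract.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def lerp(a: Point, b: Point, t: float) -> Point:
    """Linear interpolation between two display positions."""
    return Point(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t)

def on_first_trigger(preset: Point, identifier_pos: Point, steps: int = 5) -> None:
    """On a trigger operation on a work's target control, show the target
    visual element at the preset position, then move it toward the display
    position of the publisher's identifier."""
    element = preset  # the element first appears at the preset display position
    for i in range(1, steps + 1):
        element = lerp(preset, identifier_pos, i / steps)
        print(f"frame {i}: element at ({element.x:.1f}, {element.y:.1f})")

# Example: the element flies from the work's control to the publisher's avatar.
on_first_trigger(Point(100, 400), Point(40, 60))
```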
A social messaging platform includes a labeling module that labels a base message according to an aggregate message response parameter, which represents the sentiment of users towards the content of the base message. The labels provide information that can be used to distinguish more nuanced sentiments and the degree of sentiment users may have towards the base message. The aggregate message response parameter and corresponding labels are determined, in part, by identifying and evaluating icons (e.g., emojis, emoticons) present in one or more response messages posted in response to the base message. The labels, in turn, can be used in a variety of applications, including recommending new content to users based on their mood, identifying messages that potentially contain toxic content for review, or providing a way for businesses to evaluate public sentiment towards an advertisement and to target advertisements to users.
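A minimal sketch of how an aggregate response parameter might be derived from icons in response messages; the emoji scores, thresholds, and label names below are illustrative assumptions, not values from the patent:

```python
# Illustrative icon-sentiment table; scores are invented for this sketch.
EMOJI_SENTIMENT = {"😀": 1.0, "👍": 1.0, "😢": -0.5, "😡": -1.0}

def aggregate_response_parameter(responses: list[str]) -> float:
    """Average the sentiment scores of known icons across all responses."""
    scores = [EMOJI_SENTIMENT[ch] for msg in responses for ch in msg
              if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

def label_base_message(responses: list[str]) -> str:
    """Map the aggregate parameter onto a coarse sentiment label."""
    p = aggregate_response_parameter(responses)
    if p >= 0.5:
        return "strongly positive"
    if p > 0:
        return "positive"
    if p <= -0.5:
        return "strongly negative"
    return "negative" if p < 0 else "neutral"

print(label_base_message(["great post 😀", "👍", "😡"]))  # positive
```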
Embodiments of the present disclosure provide a media collection generation method and apparatus, an electronic device, a storage medium, a computer program product, and a computer program. In the media collection generation method, a plurality of emotion identifiers are displayed in a playing interface for a target piece of media, the emotion identifiers representing preset emotion types; and, in response to a first interaction operation on a target emotion identifier, the target piece of media is added to a target emotion media collection corresponding to the target emotion identifier. Emotion identifiers preconfigured in the playing interface and triggered by an interaction operation thus classify the target piece of media, so that corresponding emotion media collections are generated and a generated emotion media collection classifies media on the basis of the user's emotion and feeling. This improves the user's experience of personalized media collections, simplifies the steps and logic of media collection generation, and improves generation efficiency.
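A minimal sketch of the add-to-collection interaction, assuming a hypothetical set of preset emotion types and an invented handler name; nothing here is taken verbatim from the disclosure:

```python
# Sketch of the first interaction operation: tapping an emotion identifier
# adds the currently playing media to the matching emotion collection.
from collections import defaultdict

PRESET_EMOTIONS = ["happy", "sad", "relaxed", "excited"]  # hypothetical set
emotion_collections: dict[str, list[str]] = defaultdict(list)

def on_emotion_identifier_tapped(media_id: str, emotion: str) -> None:
    """Add the playing media to the collection that corresponds to the
    tapped emotion identifier."""
    if emotion not in PRESET_EMOTIONS:
        raise ValueError(f"unknown emotion identifier: {emotion}")
    emotion_collections[emotion].append(media_id)

on_emotion_identifier_tapped("video_123", "happy")
print(emotion_collections["happy"])  # ['video_123']
```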
Exemplary embodiments relate to the application of media effects, such as visual overlays, sound effects, etc., to a video conversation. A media effect may be applied as a reaction to an occurrence in the conversation, such as in response to an emotional reaction detected by emotion analysis of information associated with the video. Effect application may be controlled through gestures, such as applying different effects with different gestures, or cancelling automatic effect application using a gesture. Effects may also be applied in group settings and may affect multiple users. A real-time data channel may synchronize effect application across multiple participants. When broadcasting a video stream that includes effects, the stream's three constituent channels may be sent to an intermediate server, which stitches them together into a single video stream; the single video stream may then be sent to a broadcast server for distribution to the broadcast recipients.
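A hedged sketch of gesture-controlled effect application as described above; the gesture-to-effect mapping and the cancel gesture are invented for illustration and are not specified by the source:

```python
# Hypothetical mapping from detected gestures to media effects.
GESTURE_EFFECTS = {
    "thumbs_up": "confetti_overlay",
    "heart_hands": "hearts_overlay",
    "wave": "sparkle_sound_effect",
}

def handle_gesture(gesture: str, pending_auto_effect: str | None) -> str | None:
    """Apply the effect mapped to a gesture, or cancel a pending
    automatically triggered effect with a dedicated cancel gesture."""
    if gesture == "swipe_away":
        return None  # the cancel gesture suppresses automatic application
    return GESTURE_EFFECTS.get(gesture, pending_auto_effect)

print(handle_gesture("thumbs_up", None))               # confetti_overlay
print(handle_gesture("swipe_away", "hearts_overlay"))  # None (cancelled)
```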
A method and a device for generating stickers are provided. An embodiment of the method includes: extracting an image sequence from a person-containing video to be processed, the image sequence comprising target images that display faces; recognizing the emotion of the face displayed in each target image of the image sequence to obtain corresponding recognition results, the recognition results comprising emotion labels and emotional levels corresponding to the emotion labels; and, based on the emotional levels corresponding to the emotion labels in the recognition results for each target image, extracting a video fragment from the person-containing video and using the video fragment as a sticker. The embodiment can thus extract a video fragment from a given person-containing video to serve as a sticker based on facial emotion, achieving emotion-based sticker generation.
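A sketch of how a high-emotion fragment might be selected from per-frame recognition results; the threshold, the per-frame data format, and the function name are assumptions, not the claimed procedure:

```python
# Select a run of frames whose emotional level exceeds a threshold and
# use its bounds as the sticker clip; all parameters are illustrative.
def select_sticker_fragment(frame_results, level_threshold=0.8):
    """Given per-frame (emotion_label, emotional_level) results, return
    the (start, end) frame indices of the first high-emotion run."""
    start = None
    for i, (_label, level) in enumerate(frame_results):
        if level >= level_threshold and start is None:
            start = i              # a run of strong emotion begins
        elif level < level_threshold and start is not None:
            return (start, i - 1)  # the run ends; use it as the clip
    return (start, len(frame_results) - 1) if start is not None else None

frames = [("neutral", 0.2), ("happy", 0.9), ("happy", 0.95), ("neutral", 0.3)]
print(select_sticker_fragment(frames))  # (1, 2)
```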
The embodiment of the invention discloses a method and device for matching music to a video. A specific embodiment of the method comprises the steps of: obtaining a video to be matched; inputting the video into a pre-trained video emotion classification model to obtain at least one piece of emotion classification information corresponding to the video and a probability corresponding to each piece of emotion classification information; acquiring a to-be-recalled music information set corresponding to the video, wherein each piece of music information in the set corresponds to at least one emotion label; and generating a recalled music information list by matching the emotion classification information and corresponding probabilities of the video against the emotion labels of the music information in the to-be-recalled set. According to this embodiment, the emotion-dimension information of both the video and the candidate music is fully utilized, so that the degree of match in video-music matching is effectively improved.
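A sketch of the recall-and-rank step under a simple assumed scoring rule (the summed probability of video emotions that appear among a track's labels); the classifier output format and the rule itself are assumptions, not the patented method:

```python
# Rank candidate tracks by overlap between the video's predicted emotions
# and each track's emotion labels; all data below is illustrative.
def rank_music(video_emotions: dict[str, float],
               music_catalog: dict[str, set[str]]) -> list[str]:
    """Score each track by the summed probability of the video emotions
    that appear among the track's emotion labels, then sort descending."""
    def score(labels: set[str]) -> float:
        return sum(p for emo, p in video_emotions.items() if emo in labels)
    return sorted(music_catalog, key=lambda t: score(music_catalog[t]),
                  reverse=True)

video_emotions = {"joyful": 0.7, "calm": 0.2, "sad": 0.1}  # model output
catalog = {"track_a": {"joyful", "upbeat"}, "track_b": {"sad", "calm"}}
print(rank_music(video_emotions, catalog))  # ['track_a', 'track_b']
```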
A client device displays a content item and a first facial expression superimposed on the content item. Concurrently with, and separately from, displaying the first facial expression, a range of emotion indicators is displayed, each emotion indicator corresponding to a respective opinion in a range of opinions. A first user input is detected at a display location corresponding to a respective emotion indicator of the range. In response to detecting the first user input, the first facial expression is updated to match the respective emotion indicator.
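A minimal sketch of updating the superimposed facial expression to match a selected emotion indicator; the indicator names and the index-based display location are hypothetical:

```python
# Hypothetical range of emotion indicators, ordered from negative to positive.
EMOTION_INDICATORS = ["angry", "sad", "neutral", "happy", "delighted"]

class ContentOverlay:
    """Facial expression shown superimposed on a content item."""
    def __init__(self) -> None:
        self.expression = "neutral"

    def on_user_input(self, display_location: int) -> None:
        """Update the facial expression to match the emotion indicator
        at the touched display location (an index into the range)."""
        self.expression = EMOTION_INDICATORS[display_location]

overlay = ContentOverlay()
overlay.on_user_input(3)
print(overlay.expression)  # happy
```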