TikTok
- Title
- TikTok
Contents
- Information processing method and device and terminal equipment
An embodiment of the invention discloses an information processing method and device, and electronic equipment. One specific embodiment of the method comprises: displaying a mood image list in response to receiving a preset instruction indicating that a mood image should be added, the list containing at least one mood image; determining a target mood image according to the user's selection from the list; and adding the target mood image to the user's avatar. The method makes it easier for the user's contacts to learn the user's current mood.
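The final step above amounts to compositing the chosen mood image onto the avatar. A minimal sketch with Pillow, assuming the avatar and mood images are hypothetical local PNG files:

```python
from PIL import Image

# Hypothetical list of mood images shown to the user.
MOOD_IMAGES = ["happy.png", "sad.png", "angry.png"]

def add_mood_to_avatar(avatar_path: str, selected_index: int,
                       out_path: str = "avatar_with_mood.png") -> None:
    """Overlay the selected mood image onto the lower-right corner of the avatar."""
    avatar = Image.open(avatar_path).convert("RGBA")
    mood = Image.open(MOOD_IMAGES[selected_index]).convert("RGBA")

    # Scale the mood badge to roughly a third of the avatar's width.
    size = max(1, avatar.width // 3)
    mood = mood.resize((size, size))

    # Paste with the mood image's own alpha channel as the mask.
    position = (avatar.width - size, avatar.height - size)
    avatar.paste(mood, position, mood)
    avatar.save(out_path)

# Example: the user picked the first mood image from the displayed list.
# add_mood_to_avatar("avatar.png", selected_index=0)
```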
- Video file processing method, system, medium and electronic equipment
The invention provides a video file processing method and system, a medium and electronic equipment. The method comprises: obtaining voice comment information input by a user for the current video file, the voice comment information comprising voice content, voice duration and comment mood; identifying the content of the current video file and generating a plurality of video scenes; determining the video scene that matches the voice comment information; and outputting the voice content when playback of the video file reaches that scene. The method makes interaction more engaging for commenters and can further increase user stickiness.
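A minimal sketch of the scene-matching and playback-time output described above, with hypothetical scene mood tags standing in for the result of video content identification:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    start: float          # seconds
    end: float
    mood_tags: set = field(default_factory=set)
    voice_comments: list = field(default_factory=list)

def attach_comment(scenes, voice_content: str, duration: float, mood: str):
    """Attach a voice comment to the scene whose mood tags best match the comment mood."""
    best = max(scenes, key=lambda s: (mood in s.mood_tags, s.end - s.start))
    best.voice_comments.append((voice_content, duration))
    return best

def on_playback_tick(scenes, position: float):
    """Return the voice comments that should be output at the current playback position."""
    for scene in scenes:
        if scene.start <= position < scene.end:
            return scene.voice_comments
    return []

scenes = [Scene(0, 12, {"calm"}), Scene(12, 30, {"funny", "excited"})]
attach_comment(scenes, "This part cracked me up!", duration=3.0, mood="funny")
print(on_playback_tick(scenes, position=15.0))   # -> [('This part cracked me up!', 3.0)]
```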
- Video processing method and apparatus, device and medium
Provided in embodiments of the present disclosure are a video processing method and apparatus, a device, and a medium. The method includes: receiving a play trigger operation for a forwarded video, the forwarded video having been forwarded from an original video by a forwarding user; obtaining the original video and the forwarding comment information that the forwarding user provided on the original video when forwarding it; and playing the original video on the play interface of the forwarding user's video works while displaying the forwarding comment information. In these embodiments, when a play operation is received for the forwarded video, the original video is played and the forwarding user's comment is displayed alongside it, instead of only the creator's comment information being shown as in the related art. This makes it easy for viewers to learn how the forwarding user felt about the video, improving the user experience.
- Display control method, device, equipment and storage medium
The embodiment of the invention discloses a display control method and device, equipment and a storage medium. The method comprises: playing a first video in a first display area of the video-stream playing interface of a preset application; receiving a switching instruction and, when a preset reminding condition is met, switching to play a second video in the first display area; during playback, changing the display area of the second video from the first display area to a second display area that is smaller than the first; and displaying preset prompt information outside the second display area in the playing interface. By combining the shrinking of the display size with the prompt information, the scheme strengthens the user's awareness of the anti-addiction reminder and mitigates the risk of users becoming addicted to watching a video stream, while avoiding the abruptness and jarring effect of directly playing a video in a smaller area, thereby preserving the user's viewing experience.
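One plausible reading of the reminding condition is accumulated watch time. The sketch below, with made-up area sizes and threshold, shrinks the player and shows a prompt outside it once that condition is met:

```python
import time

WATCH_LIMIT_SECONDS = 45 * 60        # hypothetical anti-addiction threshold
FULL_AREA = (0, 0, 1080, 1920)       # (x, y, width, height) of the first display area
REDUCED_AREA = (135, 480, 810, 1440) # smaller second display area

class PlayerController:
    def __init__(self):
        self.session_start = time.monotonic()
        self.display_area = FULL_AREA
        self.prompt = None

    def reminder_due(self) -> bool:
        return time.monotonic() - self.session_start >= WATCH_LIMIT_SECONDS

    def on_switch_to_next_video(self):
        """Called when the user switches to the next video in the stream."""
        if self.reminder_due():
            # Shrink the player and show the prompt outside the reduced area.
            self.display_area = REDUCED_AREA
            self.prompt = "You have been watching for a while. Time for a break?"
        return self.display_area, self.prompt
```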
- Video recommendation method and device, electronic equipment and computer-readable storage medium
The invention provides a video recommendation method and device, electronic equipment and a computer-readable storage medium. The method comprises: obtaining the user's sensory information when the user is detected to be watching a video; sending the sensory information to a server; receiving information about the to-be-recommended video returned by the server, the recommendation being determined based on the sensory information; and displaying that information to the user. Because the sensory information reflects the user's viewing state, including how the user feels about the video currently being watched, recommendations that take this feeling into account are more accurate and better aligned with the user's preferences.
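A toy sketch of how sensory information could drive the recommendation, assuming the viewer's reaction has already been reduced to emotion scores and each candidate video carries emotion tags (both hypothetical):

```python
# Hypothetical emotion categories inferred from the viewer's reaction (e.g. facial expression).
EMOTIONS = ["joy", "calm", "sadness", "excitement"]

def score_candidate(reaction: dict, video_tags: dict) -> float:
    """Dot product between the viewer's reaction profile and a candidate video's emotion tags."""
    return sum(reaction.get(e, 0.0) * video_tags.get(e, 0.0) for e in EMOTIONS)

def recommend(reaction: dict, candidates: dict, top_k: int = 3):
    ranked = sorted(candidates.items(),
                    key=lambda item: score_candidate(reaction, item[1]),
                    reverse=True)
    return [video_id for video_id, _ in ranked[:top_k]]

reaction = {"joy": 0.8, "excitement": 0.6}          # sensed while watching
candidates = {
    "vid_a": {"joy": 0.9, "excitement": 0.4},
    "vid_b": {"sadness": 0.7},
    "vid_c": {"calm": 0.5, "joy": 0.3},
}
print(recommend(reaction, candidates, top_k=2))      # -> ['vid_a', 'vid_c']
```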
- Video dubbing method and device, electronic equipment and computer readable storage medium
The invention provides a video music-matching method and device, electronic equipment and a computer-readable storage medium, and relates to the technical field of video processing. The method comprises: inputting the video to be scored into a pre-trained first neural network model to obtain video features of a specific dimension; inputting those video features into a predetermined second neural network model to obtain the emotion category of the video; obtaining a plurality of tracks corresponding to that emotion category from a song library and extracting audio features from each track; and computing the Euclidean distance between the video features and each track's audio features, taking the tracks whose distance falls within a preset range as recommended tracks for the video. Based on the emotion category expressed by the video, music can be matched to it automatically, improving the user experience.
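A sketch of the final matching step, assuming the video features and per-track audio features have already been produced by the two models and the audio feature extractor; numpy supplies the Euclidean distance:

```python
import numpy as np

def recommend_tracks(video_feature: np.ndarray,
                     track_features: dict,
                     max_distance: float) -> list:
    """Return track ids whose audio features lie within `max_distance` of the video feature,
    sorted from closest to farthest. Features are assumed to share the same dimension."""
    scored = []
    for track_id, audio_feature in track_features.items():
        distance = np.linalg.norm(video_feature - audio_feature)
        if distance <= max_distance:
            scored.append((distance, track_id))
    return [track_id for _, track_id in sorted(scored)]

# Toy 4-dimensional features standing in for the outputs of the two models
# and of the audio feature extractor described in the abstract.
video_feature = np.array([0.2, 0.9, 0.1, 0.4])
tracks_in_emotion_category = {
    "song_1": np.array([0.1, 0.8, 0.2, 0.5]),
    "song_2": np.array([0.9, 0.1, 0.7, 0.0]),
}
print(recommend_tracks(video_feature, tracks_in_emotion_category, max_distance=0.5))
# -> ['song_1']
```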
- Method and device for generating stickers
A method and a device for generating stickers are provided. An embodiment of the method includes: extracting an image sequence from a person-containing video to be processed, the image sequence comprising target images in which faces are displayed; identifying the emotion of the face shown in each target image to obtain a corresponding identification result, each result comprising an emotion label and an emotion level for that label; and, based on the emotion levels corresponding to the emotion labels across the target images, extracting a video fragment from the video and using it as a sticker. The embodiment can thus extract a sticker clip from a given person-containing video based on facial emotion.
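A sketch of one way to turn per-frame emotion labels and levels into a sticker clip: find the longest contiguous run of frames whose level for a target label stays above a threshold (the threshold and frame labels are illustrative):

```python
def best_sticker_span(frame_emotions, target_label: str, level_threshold: float):
    """Return (start_frame, end_frame) of the longest contiguous run of frames whose
    emotion label matches `target_label` with a level above `level_threshold`."""
    best, current_start = None, None
    # Append a sentinel so the final run is closed.
    for i, (label, level) in enumerate(frame_emotions + [("", 0.0)]):
        if label == target_label and level >= level_threshold:
            if current_start is None:
                current_start = i
        elif current_start is not None:
            if best is None or (i - current_start) > (best[1] - best[0]):
                best = (current_start, i - 1)
            current_start = None
    return best

# One (label, level) pair per sampled frame, as produced by the emotion recognizer.
frames = [("neutral", 0.6), ("happy", 0.7), ("happy", 0.9), ("happy", 0.8), ("neutral", 0.5)]
print(best_sticker_span(frames, "happy", level_threshold=0.65))   # -> (1, 3)
```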
- Reading interaction method, device, equipment, server and storage medium
The invention discloses a reading interaction method, device and equipment, a server and a storage medium. The method comprises: receiving a face image captured while the user is reading; identifying the face image and determining the user's state expression; and, when the state expression meets a reading interaction condition, determining and playing the interaction content corresponding to that expression. By interacting with the reader through state expressions, the method lets a virtual reading companion share in the user's emotional changes while reading, making an otherwise monotonous reading process more engaging and boosting the user's enthusiasm for reading.
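A toy sketch of the expression-to-interaction mapping, with hypothetical expressions, interaction content, and a confidence threshold standing in for the reading interaction condition:

```python
# Hypothetical mapping from a recognized state expression to interaction content.
INTERACTIONS = {
    "smiling": "That made me smile too! Shall I bookmark this passage?",
    "frowning": "This part is a bit heavy. Want a short summary so far?",
    "yawning": "Feeling tired? I can read the next page aloud for you.",
}

def maybe_interact(expression: str, confidence: float, threshold: float = 0.8):
    """Return interaction content when the expression is confident enough to react to."""
    if confidence >= threshold and expression in INTERACTIONS:
        return INTERACTIONS[expression]
    return None

print(maybe_interact("yawning", confidence=0.92))
# -> "Feeling tired? I can read the next page aloud for you."
```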
- Image emotional semantic analysis method, device and electronic equipment
The embodiment of the invention provides an image sentiment analysis method and device and electronic equipment, belonging to the technical field of image processing. The method comprises: inputting an image into a first network to determine the theme of the image; determining the emotion attribute of that image theme; and inputting the image, with its theme emotion attribute determined, into a second network to quantify the attribute, the second network comprising one sub-network per theme, with each image routed to the sub-network of its corresponding theme. Through this scheme, the emotion of the image theme can be obtained and quantified.
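A sketch of the theme-routing idea using small stand-in networks (the abstract does not describe the real first and second networks beyond their roles); a per-theme head quantifies the emotion of the predicted theme:

```python
import torch
import torch.nn as nn

THEMES = ["landscape", "portrait", "food"]

class ThemeRoutedEmotionModel(nn.Module):
    """First network picks a theme; a per-theme sub-network quantifies that theme's emotion."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.theme_classifier = nn.Linear(feature_dim, len(THEMES))   # stand-in "first network"
        self.emotion_heads = nn.ModuleDict({                          # one sub-network per theme
            theme: nn.Sequential(nn.Linear(feature_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for theme in THEMES
        })

    def forward(self, image_features: torch.Tensor):
        theme_idx = self.theme_classifier(image_features).argmax(dim=-1).item()
        theme = THEMES[theme_idx]
        emotion_score = self.emotion_heads[theme](image_features)
        return theme, emotion_score

model = ThemeRoutedEmotionModel()
features = torch.randn(1, 128)   # stand-in for extracted image features
theme, score = model(features)
print(theme, float(score))
```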
- Interaction data processing method, device, equipment and storage medium
The embodiment of the invention discloses an interactive data processing method and device, equipment and a storage medium. The method comprises: recording the touch time of a touch event when a touch event on a preset type of control is detected on a live-streaming display interface; determining that the touch event is a long-press operation if the touch time exceeds a set time threshold; generating animation paths at set time intervals while the long-press operation continues, every two adjacent generated animation paths being different; and playing an animation along each animation path. The technical scheme makes the live room more engaging and encourages users to express their emotions.
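A minimal sketch of the long-press detection and interval-based path generation, with an illustrative threshold, interval, and random control-point paths; the check only guarantees that adjacent paths differ, as the abstract requires:

```python
import random
import time

LONG_PRESS_THRESHOLD = 0.5   # seconds
PATH_INTERVAL = 0.2          # generate a new animation path at this interval

def is_long_press(touch_down: float, touch_up: float) -> bool:
    return (touch_up - touch_down) >= LONG_PRESS_THRESHOLD

def generate_paths(press_duration: float, rng: random.Random):
    """Generate one animation path per interval of the press; adjacent paths differ."""
    paths, previous = [], None
    for _ in range(int(press_duration / PATH_INTERVAL)):
        path = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(4)]  # 4 control points
        while path == previous:                                            # adjacent paths must differ
            path = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(4)]
        paths.append(path)
        previous = path
    return paths

down = time.monotonic()
up = down + 1.1                     # simulated 1.1 s press
if is_long_press(down, up):
    print(len(generate_paths(up - down, random.Random(0))))   # -> 5 paths
```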
- Sound effect adding method and apparatus, storage medium, and electronic device
The present invention relates to a sound effect adding method and apparatus, a storage medium, and an electronic device. The method comprises: determining, on the basis of an emotion judgment model, a statement emotion label for each statement of the text to be processed; determining an emotion offset value for the text based on the emotion label type that occurs most frequently among the statement emotion labels; for each paragraph of the text, determining an emotion distribution vector from the statement emotion labels of the statements in that paragraph; determining the paragraph's emotion probability distribution from the emotion offset value and its emotion distribution vector; selecting, according to that probability distribution and the sound-effect emotion labels of the sound effects in a sound effect library, a target sound effect matching the paragraph; and adding the target sound effect at the audio position corresponding to the paragraph in the audio file corresponding to the text. Sound effects can thus be selected and added automatically, improving the efficiency of adding sound effects.
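A toy sketch of the selection pipeline: derive an offset from the most frequent label, build a per-paragraph distribution, and pick the library sound effect whose emotion label scores highest (the emotion set, offset weight, and effect library are all illustrative):

```python
from collections import Counter

EMOTIONS = ["happy", "sad", "tense", "neutral"]

def emotion_offset(sentence_labels):
    """Offset vector that boosts the label occurring most often across the whole text."""
    most_common = Counter(sentence_labels).most_common(1)[0][0]
    return {e: (0.2 if e == most_common else 0.0) for e in EMOTIONS}   # 0.2 is an illustrative weight

def paragraph_distribution(paragraph_labels, offset):
    counts = Counter(paragraph_labels)
    total = sum(counts.values())
    raw = {e: counts.get(e, 0) / total + offset[e] for e in EMOTIONS}
    norm = sum(raw.values())
    return {e: v / norm for e, v in raw.items()}                       # emotion probability distribution

def pick_sound_effect(distribution, effect_labels):
    """Choose the library sound effect whose emotion label has the highest probability."""
    return max(effect_labels, key=lambda name: distribution[effect_labels[name]])

all_labels = ["happy", "happy", "sad", "tense", "happy"]
paragraph = ["sad", "tense"]
dist = paragraph_distribution(paragraph, emotion_offset(all_labels))
effects = {"rain.wav": "sad", "drums.wav": "tense", "chimes.wav": "happy"}
print(pick_sound_effect(dist, effects))
```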
- Method and device for video dubbing
The embodiment of the invention discloses a method and device for matching music to video. A specific embodiment of the method comprises: obtaining the video to be scored; inputting it into a pre-trained video emotion classification model to obtain at least one piece of emotion classification information and the probability corresponding to each; acquiring the set of candidate music information to be recalled for the video, each piece of music information corresponding to at least one emotion tag; and generating a recalled-music list by matching the video's emotion classification information and probabilities against the emotion tags of the candidate music. The embodiment makes full use of the emotion-dimension information of the video, effectively improving how well the selected music matches it.
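A sketch of the recall step, assuming the emotion classification model's output is already a class-to-probability mapping and each candidate track carries emotion tags (all values illustrative):

```python
def recall_music(video_emotions: dict, candidates: dict, top_k: int = 3):
    """Rank candidate tracks by the total probability mass of the video's emotion classes
    that also appear among each track's emotion tags."""
    def score(tags):
        return sum(video_emotions.get(tag, 0.0) for tag in tags)
    ranked = sorted(candidates.items(), key=lambda item: score(item[1]), reverse=True)
    return [(track, score(tags)) for track, tags in ranked[:top_k]]

# Output of the (assumed) video emotion classification model: class -> probability.
video_emotions = {"joyful": 0.7, "nostalgic": 0.2, "calm": 0.1}
# Candidate music set to be recalled: track -> emotion tags.
candidates = {
    "track_upbeat": {"joyful", "energetic"},
    "track_lofi": {"calm", "nostalgic"},
    "track_dark": {"tense"},
}
print(recall_music(video_emotions, candidates, top_k=2))
# roughly: [('track_upbeat', 0.7), ('track_lofi', 0.3)]
```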
- Method and apparatus for identifying a person's emotion
The embodiment of the invention discloses a method and device for identifying character emotion. A specific embodiment of the method comprises: extracting a set of face images from the person video to be processed; dividing the face image set into at least one face image group based on the matching relationships between the face images, different groups corresponding to different persons shown in the video; for each face image group, performing expression recognition on each face image in the group to obtain a corresponding expression recognition result; and determining the emotion information of the person corresponding to the group based on the expression recognition results of its face images. The embodiment thus realizes character emotion recognition based on facial expressions.
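A sketch of the grouping-then-aggregation idea, assuming face embeddings and per-face expression labels are already available; faces are grouped greedily by cosine similarity and each group's emotion is a majority vote (threshold and data illustrative):

```python
import numpy as np
from collections import Counter

def group_faces(embeddings, threshold: float = 0.8):
    """Greedily assign each face embedding to an existing group whose first member it
    resembles (cosine similarity above `threshold`), otherwise start a new group."""
    groups = []   # list of lists of face indices
    for i, emb in enumerate(embeddings):
        for group in groups:
            ref = embeddings[group[0]]
            cos = float(np.dot(emb, ref) / (np.linalg.norm(emb) * np.linalg.norm(ref)))
            if cos >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

def person_emotions(groups, expressions):
    """Majority-vote expression label per face group (i.e. per person)."""
    return [Counter(expressions[i] for i in group).most_common(1)[0][0] for group in groups]

embeddings = [np.array([1.0, 0.0]), np.array([0.95, 0.1]), np.array([0.0, 1.0])]
expressions = ["happy", "happy", "surprised"]     # per-face expression recognition results
groups = group_faces(embeddings)
print(person_emotions(groups, expressions))       # -> ['happy', 'surprised']
```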
- Interaction method, interaction device, electronic equipment and storage medium
The embodiment of the invention provides an interaction method and device, electronic equipment and a storage medium. The method comprises: receiving a first trigger operation acting on a target control corresponding to a target work; and, in response to the first trigger operation, displaying a target visual element at a preset display position corresponding to the target work and controlling it to move from that position to a first target display position of a target identifier, the target identifier being the identifier of the target work's publisher. This technical scheme offers users a new interaction mode through which they can express a new emotion, making interaction more engaging and increasing both interaction frequency and the quality of the interaction experience.
- Music sharing method and device, electronic equipment and storage medium
The embodiment of the invention discloses a music sharing method, system and device, electronic equipment and a storage medium. The method comprises: entering a lyric-video template display interface related to a target song when an instruction to display a lyric-video template is triggered; generating a lyric video according to the user's editing operations on that interface; and, in response to a video publishing instruction from the user, publishing the lyric video to a target position. The technical scheme addresses the problem that existing music sharing modes and scenes are relatively limited and cannot satisfy users' desire for active, rich, music-related emotional expression. It gives users a richer way to express emotions around music and helps a streaming media product spread further on social media, attracting more high-quality new users.
- Background music generation method and device, readable medium and electronic equipment
The invention relates to a background music generation method and device, a readable medium and electronic equipment, in the technical field of electronic information processing. The method comprises: obtaining a target text and the target type of that text; determining the target chord corresponding to the target type; determining the target emotion label of the target text; and generating background music for the target text according to the target emotion label and the target chord. Background music suited to the target text is generated automatically from its type and emotion label, without manual selection and without being limited to existing music, which improves generation efficiency, widens the applicability of background music, and enriches the text's connotation and expressiveness.
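A toy sketch of the lookup-style generation: the text type selects a chord progression and the emotion label selects tempo and register (all tables and values are made up for illustration):

```python
# Hypothetical lookup tables: text type -> chord progression, emotion label -> tempo/register.
CHORDS_BY_TYPE = {
    "poem": ["Cmaj7", "Am7", "Fmaj7", "G7"],
    "story": ["C", "G", "Am", "F"],
    "news": ["Dm", "Am", "Dm", "Am"],
}
STYLE_BY_EMOTION = {
    "joyful": {"bpm": 120, "octave": 5},
    "melancholic": {"bpm": 72, "octave": 3},
    "calm": {"bpm": 90, "octave": 4},
}

def generate_background_music(text_type: str, emotion_label: str, bars: int = 8):
    """Return a toy 'score': one chord per bar plus tempo/register chosen from the emotion."""
    progression = CHORDS_BY_TYPE[text_type]
    style = STYLE_BY_EMOTION[emotion_label]
    score = [progression[i % len(progression)] for i in range(bars)]
    return {"bpm": style["bpm"], "octave": style["octave"], "bars": score}

print(generate_background_music("poem", "calm", bars=4))
# -> {'bpm': 90, 'octave': 4, 'bars': ['Cmaj7', 'Am7', 'Fmaj7', 'G7']}
```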
- Media collection generation method and apparatus, electronic device, and storage medium
Embodiments of the present disclosure provide a media collection generation method and apparatus, an electronic device, a storage medium, a computer program product, and a computer program. In the method, a plurality of emotion identifiers are displayed in the playing interface of a target piece of media, each identifier representing a preset emotion type; and, in response to a first interaction operation on a target emotion identifier, the target piece of media is added to the target emotion media collection corresponding to that identifier. Because the emotion identifiers are preconfigured in the playing interface and triggered by an interaction operation, the target media is classified and the corresponding emotion media collections are generated on the basis of the user's own emotions and feelings. This improves the experience of personalized media collections, simplifies the steps and logic of creating a collection, and improves generation efficiency.
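A minimal sketch of the underlying bookkeeping: a fixed set of emotion identifiers, each backing a media collection that grows when its identifier is tapped (identifier names are illustrative):

```python
from collections import defaultdict

# Preset emotion types backing the emotion identifiers shown in the playing interface.
EMOTION_IDENTIFIERS = ["moved", "relaxed", "hyped", "nostalgic"]

class EmotionCollections:
    """Maps each emotion identifier to the media collection it accumulates."""
    def __init__(self):
        self.collections = defaultdict(list)

    def on_identifier_tapped(self, media_id: str, emotion: str):
        if emotion not in EMOTION_IDENTIFIERS:
            raise ValueError(f"unknown emotion identifier: {emotion}")
        if media_id not in self.collections[emotion]:
            self.collections[emotion].append(media_id)

library = EmotionCollections()
library.on_identifier_tapped("video_123", "relaxed")
library.on_identifier_tapped("video_456", "relaxed")
print(library.collections["relaxed"])    # -> ['video_123', 'video_456']
```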