Bringing Portraits to Life
- Authors
- Hadar Averbuch-Elor, Daniel Cohen-Or, Johannes Kopf, Michael Cohen
- Number of authors
- 4
- Title
- Bringing Portraits to Life
- Year of publication
- 2017
- Reference (APA)
- Averbuch-Elor, H., Cohen-Or, D., Kopf, J., & Cohen, M. F. (2017). Bringing portraits to life. ACM Transactions on Graphics, 36(6), 1‑13. https://doi.org/10.1145/3130800.3130818
- Abstract
- We present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. We use a driving video (of a different subject) and develop means to transfer the expressiveness of the subject in the driving video to the target portrait. In contrast to previous work that requires an input video of the target face to reenact a facial performance, our technique uses only a single target image. We animate the target image through 2D warps that imitate the facial transformations in the driving video. As warps alone do not carry the full expressiveness of the face, we add fine-scale dynamic details which are commonly associated with facial expressions such as creases and wrinkles. Furthermore, we hallucinate regions that are hidden in the input target face, most notably in the inner mouth. Our technique gives rise to reactive profiles, where people in still images can automatically interact with their viewers. We demonstrate our technique operating on numerous still portraits from the internet.
- Keywords
- face animation, facial reenactment
- URL
- https://research.facebook.com/file/265001855227077/elor2017_bringingportraits-1.pdf
- DOI
- https://doi.org/10.1145/3130800.3130818
- Article accessibility
- Open access
- Field
- Computer Vision, Computational Photography & Intelligent Cameras
- Content type (theoretical / applied / methodological)
- Applied
- Method
- The method uses a driving video and 2D warps to transfer the driving subject's expressiveness to the target portrait, adding fine-scale dynamic details (creases, wrinkles) and hallucinating hidden regions such as the mouth interior. It is designed to run in real time, enabling interactive "reactive profiles" for viewers.
"Our method takes a single target image of a neutral face in frontal pose and generates a video that expresses various emotions."
"We extract and track facial and non-facial features in the driving video (colored in red and yellow, respectively) and compute correspondences to the target image. To generate the animated target frames, we perform a 2D warping to generate a coarse target frame, followed by a transfer of hidden regions (i.e., the mouth interior) and fine-scale details."
- Use case
- N/A
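The coarse warping step described above can be sketched as follows. This is a minimal illustration only: a single global affine fit stands in for the paper's more elaborate dense 2D warp field, and all function names are hypothetical.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts -> dst_pts (N x 2 arrays)."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])      # N x 3 homogeneous coords
    # Solve A @ M = dst_pts for the 3 x 2 affine matrix M
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M

def transfer_displacement(target_pts, drive_neutral_pts, drive_frame_pts):
    """Move target landmarks by the driving video's landmark displacements,
    after aligning the driving face to the target via the affine fit."""
    M = fit_affine(drive_neutral_pts, target_pts)  # align driving -> target
    def apply(pts):
        return np.hstack([pts, np.ones((len(pts), 1))]) @ M
    # Displacement of the driving landmarks, expressed in target coordinates
    return target_pts + (apply(drive_frame_pts) - apply(drive_neutral_pts))
```

The moved landmarks would then drive an image warp (e.g., a piecewise or dense warp) to produce the coarse animated frame, before the mouth interior and fine-scale details are transferred.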
- Objectives of the article
- The objective is to "present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions", to demonstrate its effectiveness through a user study, and to discuss its potential applications in various domains.
- Research question(s) / Hypotheses / Conclusion
- The research question is whether a still portrait can be animated in real time by transferring the expressiveness of a driving subject, convincingly enough to look real.
- The hypothesis is that by using a driving video and 2D warps, it is possible to transfer the expressiveness of the driving subject to the target portrait and create a realistic and interactive animation.
- The conclusions are that the proposed technique is effective in animating still portraits and transferring the expressiveness of a driving subject to the target portrait in real-time. The user study shows that the animations are perceived as realistic and engaging, and the technique has potential applications in various domains, such as entertainment, education, and communication.
- Theoretical framework / cited authors
- The theoretical framework of the article draws on computer vision, facial expression analysis, and machine learning. The main authors cited include Kai Li, Feng Xu, Jue Wang, Qionghai Dai, Yebin Liu, Zicheng Liu, Ying Shan, Zhengyou Zhang, Iacopo Masi, Anh Tuan Tran, Jatuporn Toy Leksut, Tal Hassner, Gérard G. Medioni, and Maja Pantic.
- Key concepts
- Portrait animation, Facial expression synthesis, 2D warps, Emotion transfer
- Data collected (type, source)
- "The participants were presented with 24 randomly selected videos, eight of which are real. They were asked to rate them based on how real the animation looks." Ratings ranged over a five-point scale: very likely fake, likely fake, could equally be real or fake, likely real, very likely real.
- Definition of emotions
- Categorical emotions
- Scale of experiment (number of accounts)
- 24 videos rated by 30 users
- Associated technologies
- Computer vision, Facial expression analysis, Machine learning
- Mention of ethics
- No
- Communicative purpose
- The proposed technique has potential applications in various domains, such as entertainment, education, and communication.