Deep-Learning-Based Facial Retargeting Using Local Patches
- Abstract
- In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion so that the semantics of the original facial motions are preserved after retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters.
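The three-module pipeline described in the abstract can be sketched in broad strokes as follows. This is a minimal illustrative skeleton, not the authors' implementation: the class names, the patch regions, and the placeholder logic inside each module are all assumptions made for illustration. In the actual method, the Reenactment Module and Weight Estimation Module are learned deep networks.

```python
# Hypothetical sketch of the three-module retargeting pipeline from the
# abstract. All names and internals are illustrative placeholders, not the
# authors' API: real modules would be learned neural networks.
from dataclasses import dataclass
from typing import List

@dataclass
class Patch:
    region: str        # e.g. "left_eye", "mouth" (assumed region names)
    pixels: list       # flattened patch pixels (placeholder representation)

class PatchExtractionModule:
    """Extracts local patches (eyes, mouth, ...) from a source video frame."""
    REGIONS = ["left_eye", "right_eye", "mouth"]

    def extract(self, frame: list) -> List[Patch]:
        # A real implementation would crop regions around facial landmarks.
        return [Patch(region=r, pixels=frame) for r in self.REGIONS]

class ReenactmentModule:
    """Re-enacts each source patch as a corresponding target-character patch."""
    def reenact(self, patch: Patch) -> Patch:
        # Placeholder for a learned image-to-image translation network.
        return Patch(region=patch.region, pixels=patch.pixels)

class WeightEstimationModule:
    """Estimates per-frame animation parameters from the re-enacted patches."""
    def estimate(self, patches: List[Patch]) -> List[float]:
        # Placeholder: one dummy weight per patch region, clamped to [0, 1].
        return [min(1.0, float(len(p.pixels) > 0)) for p in patches]

def retarget_sequence(frames: List[list]) -> List[List[float]]:
    """Runs the pipeline frame by frame to build a full animation sequence."""
    extractor = PatchExtractionModule()
    reenactor = ReenactmentModule()
    estimator = WeightEstimationModule()
    weights_per_frame = []
    for frame in frames:
        patches = [reenactor.reenact(p) for p in extractor.extract(frame)]
        weights_per_frame.append(estimator.estimate(patches))
    return weights_per_frame
```

Per the abstract, the key design point is locality: because each patch is re-enacted independently, the semantics of a local expression (e.g. an eye blink) can be mapped onto a target face whose global proportions differ greatly from the source.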
- Author(s)
- Sunjin Jung (정선진)
- Issued Date
- 2025-02-01
- Type
- Article
- Keyword
- Computer graphics applications
- DOI
- 10.1111/cgf.15263
- URI
- http://repository.sungshin.ac.kr/handle/2025.oak/8671
- Publisher
- WILEY
- ISSN
- 0167-7055
Appears in Collections:
- Department of Computer Engineering > Academic Papers
- Open Access & License
- File List
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.