Speed-Aware Audio-Driven Speech Animation using Adaptive Windows

Abstract
We present a novel method that can generate realistic speech animations of a 3D face from audio using multiple adaptive windows. In contrast to previous studies that use a fixed-size audio window, our method accepts an adaptive audio window as input, reflecting the audio speaking rate to use consistent phonemic information. Our system consists of three parts. First, the speaking rate is estimated from the input audio using a neural network trained in a self-supervised manner. Second, the appropriate window size that encloses the audio features is predicted adaptively based on the estimated speaking rate. Another key element lies in the use of multiple audio windows of different sizes as input to the animation generator: a small window to concentrate on detailed information and a large window to consider broad phonemic information near the center frame. Finally, the speech animation is generated from the multiple adaptive audio windows. Our method can generate realistic speech animations from in-the-wild audio at any speaking rate, e.g., fast raps, slow songs, as well as normal speech. We demonstrate via extensive quantitative and qualitative evaluations, including a user study, that our method outperforms state-of-the-art approaches.
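The adaptive-window idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the base rate, and the base window lengths below are illustrative assumptions. The sketch scales two window lengths inversely with the estimated speaking rate (so fast speech uses shorter windows covering a comparable amount of phonemic content) and extracts a small and a large feature window centered on the current frame:

```python
import numpy as np

def adaptive_window_sizes(speaking_rate, base_rate=12.0, base_small=8, base_large=32):
    """Illustrative: scale window lengths inversely with the estimated
    speaking rate (e.g., phonemes/sec), so faster speech uses shorter
    windows that enclose similar phonemic content.
    base_rate / base_small / base_large are hypothetical defaults."""
    scale = base_rate / max(speaking_rate, 1e-6)
    small = max(2, int(round(base_small * scale)))
    large = max(small + 2, int(round(base_large * scale)))
    return small, large

def extract_windows(features, center, sizes):
    """Slice multiple audio-feature windows centered on one frame,
    clamping indices at the sequence edges."""
    n = len(features)
    windows = []
    for size in sizes:
        half = size // 2
        idx = np.clip(np.arange(center - half, center - half + size), 0, n - 1)
        windows.append(features[idx])
    return windows

# Usage: a fast speaker (24 units/sec vs. the assumed base of 12)
# gets windows half the base length.
features = np.arange(100)  # stand-in for per-frame audio features
small, large = adaptive_window_sizes(24.0)
windows = extract_windows(features, center=50, sizes=(small, large))
```

Both windows are centered on the same frame, matching the paper's use of a small window for detail and a large window for broad phonemic context.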
Author(s)
정선진, Yeongho Seol, Kwanggyoon Seo, Hyeonho Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh
Issued Date
2025-02-01
Type
Article
Keyword
Computer graphics applications
DOI
10.1145/3691341
URI
http://repository.sungshin.ac.kr/handle/2025.oak/8758
Publisher
Association for Computing Machinery (ACM)
ISSN
0730-0301
Appears in Collections:
Department of Computer Engineering > Academic Papers
Access and License
  • Access type: Open
File List
  • No related files exist.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.