An Acoustic Signal Based Language Independent Lip Synchronization Method and Its Implementation via Extended LPC
(Genişletilmiş DTK Yardımı ile Akustik İşaret Tabanlı, Dilden Bağımsız Bir Ses-Dudak Şekli Eşleştirme Yöntemi ve Gerçeklemesi)


Cankurtaran H. S., BOYACI A., YARKAN S.

28th Signal Processing and Communications Applications Conference (SIU 2020), Gaziantep, Türkiye, 5-7 October 2020

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/siu49456.2020.9302377
  • City: Gaziantep
  • Country: Türkiye
  • Keywords: formant frequency, linear predictive coding, lip sync
  • İstanbul Ticaret Üniversitesi Affiliation: Yes

Abstract

Processing human speech with digital technologies gives rise to several important fields of research. Speech-to-text and lip-syncing are among the prominent examples. In this regard, audio-visualization of acoustic signals, real-time visual aids for people with disabilities, and text-free animation applications are just a few of the relevant use cases. Therefore, in this study, a language-independent lip-sync method based on extended linear predictive coding is proposed. The proposed method operates on the baseband electrical signal acquired by a standard single-channel off-the-shelf microphone and exploits the statistical characteristics of acoustic signals produced by human speech. In addition, the proposed method is implemented on an embedded system and tested, and its performance is evaluated. Results are given along with discussions and future directions.
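As a rough illustration of the kind of acoustic front end the abstract describes, the sketch below estimates formant frequencies of a single speech frame using standard LPC analysis (autocorrelation method with Levinson-Durbin, followed by root finding). It is a minimal Python example, not the authors' extended-LPC implementation; the frame length, LPC order rule of thumb, and low-frequency cutoff are assumptions made for illustration only.

    # Minimal LPC-based formant estimation for one speech frame (illustrative only;
    # not the extended-LPC method of the paper). Frame length, LPC order, and the
    # low-frequency cutoff below are assumptions.
    import numpy as np

    def lpc_coefficients(frame, order):
        # Autocorrelation method solved with the Levinson-Durbin recursion.
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err
            new_a = a.copy()
            new_a[i] = k
            new_a[1:i] = a[1:i] + k * a[i - 1:0:-1]
            a, err = new_a, err * (1.0 - k * k)
        return a

    def estimate_formants(frame, fs, order=None):
        # Roots of the LPC polynomial above the real axis map to resonance
        # (formant) frequencies via their angles on the unit circle.
        if order is None:
            order = 2 + fs // 1000              # common rule of thumb for LPC order
        a = lpc_coefficients(frame * np.hamming(len(frame)), order)
        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]       # keep one root per conjugate pair
        freqs = np.angle(roots) * fs / (2.0 * np.pi)
        return np.sort(freqs[freqs > 90.0])     # drop near-DC roots (assumed cutoff)

    if __name__ == "__main__":
        fs = 16000                              # single-channel microphone signal
        t = np.arange(int(0.025 * fs)) / fs     # 25 ms analysis frame
        frame = np.sin(2 * np.pi * 700 * t)     # toy vowel-like tone near 700 Hz
        print(estimate_formants(frame, fs))     # one estimate should sit near 700 Hz

In a lip-sync pipeline of the kind the abstract outlines, such per-frame estimates would then be mapped to mouth shapes; that mapping step is specific to the paper and is not reproduced here.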