Communications of the ACM

ACM TechNews

Deepfake Audio Has a Tell

Deepfaked audio often results in vocal tract reconstructions that resemble drinking straws, rather than biological vocal tracts.

Credit: Logan Blue et al.

Researchers at the University of Florida can detect audio deepfakes by measuring acoustic and fluid dynamic distinctions between organic and synthetic voice samples.

The researchers inverted techniques normally used to model a vocal tract and replicate the sounds a person makes, instead estimating the shape of the speaker's vocal tract from a segment of speech.

In contrast, applying the same process to deepfaked audio samples can yield model vocal tract shapes that do not occur in people.

"By estimating the anatomy responsible for creating the observed speech, it's possible to identify whether the audio was generated by a person or a computer," the researchers explain.
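The kind of inversion described above builds on classic speech-processing ideas. Below is a minimal sketch, not necessarily the team's exact method: estimate linear-prediction (LPC) reflection coefficients from a speech frame, then map them to the relative cross-sectional areas of a lossless concatenated-tube vocal tract model (a Wakita-style inversion, where each area ratio is `(1 - k) / (1 + k)`). All function names, parameters, and the synthetic test signal are illustrative assumptions.

```python
import numpy as np

def lpc_reflection(signal, order=8):
    """Estimate LPC reflection (PARCOR) coefficients for one frame
    using the autocorrelation method and Levinson-Durbin recursion."""
    x = signal * np.hamming(len(signal))               # taper the frame
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + ki * a_prev[i - 1:0:-1]
        a[i] = ki
        err *= 1.0 - ki * ki
    return k

def tube_areas(k, lip_area=1.0):
    """Map reflection coefficients to relative cross-sectional areas of
    a concatenated lossless-tube vocal tract: A[i+1] = A[i]*(1-k)/(1+k)."""
    areas = [lip_area]
    for ki in k:
        areas.append(areas[-1] * (1.0 - ki) / (1.0 + ki))
    return np.array(areas)

# Demo on a crude synthetic /a/-like vowel: a ~100 Hz impulse train
# passed through three damped resonators at typical /a/ formants.
fs = 8000
t = np.arange(2048)
vowel = (t % 80 == 0).astype(float)
for f, bw in [(730, 60), (1090, 70), (2440, 110)]:
    r_pole = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * f / fs
    a1, a2 = -2 * r_pole * np.cos(theta), r_pole ** 2
    out = np.zeros_like(vowel)
    for i in range(len(vowel)):
        out[i] = vowel[i] \
            - a1 * (out[i - 1] if i > 0 else 0.0) \
            - a2 * (out[i - 2] if i > 1 else 0.0)
    vowel = out

k = lpc_reflection(vowel, order=8)
areas = tube_areas(k)   # relative area profile, lips toward glottis
```

For a detector like the one described, the resulting area profile would then be checked for physiological plausibility: a natural utterance should yield areas within human anatomical ranges, while a synthetic voice can produce tube shapes (the "drinking straws" of the caption) no real tract could assume.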

From Ars Technica


Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA

