Deepfake Software Fools Voice Authentication With 99% Success Rate

Creating a fake voice to trick authentication systems has never been so easy or effective.

Computer scientists at the University of Waterloo figured out how to successfully fool voice authentication systems 99% of the time using deepfake voice creation software.

Andre Kassis, a computer security and privacy PhD candidate at Waterloo and lead author of the study, explains how voice authentication works:

"When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server ... For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted."

The Waterloo team beat the authentication by using machine learning-enabled deepfake software to generate a copy of a target's voice. The software needs only five minutes of recorded audio to learn to produce a convincing fake. Even the spoofing countermeasures employed by voice authentication systems don't flag the fake voice, because a program written by the team removes markers from the deepfake audio that "betray it is computer-generated."
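The sketch below is a hypothetical illustration of the two-stage check such systems typically apply: a spoofing countermeasure followed by the voiceprint match. The `is_spoofed` detector and the overall structure are assumptions for illustration, not the specific countermeasures the Waterloo team tested; a deepfake succeeds only when it passes both stages.

```python
import numpy as np

def is_spoofed(audio: np.ndarray) -> bool:
    """Hypothetical spoofing countermeasure: flags artifacts that
    typically betray computer-generated audio."""
    raise NotImplementedError("stand-in for a real countermeasure model")

def grant_access(stored_voiceprint: np.ndarray, audio: np.ndarray) -> bool:
    # Stage 1: reject audio that the countermeasure flags as synthetic.
    if is_spoofed(audio):
        return False
    # Stage 2: verify the voiceprint (see the authenticate() sketch above).
    return authenticate(stored_voiceprint, audio)
```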

Fake voices fooling voice authentication is nothing new; hackers were demonstrating it at Black Hat back in 2018. But the success rate of this latest method is sure to raise concern for any service relying on voice authentication for security.

With just six attempts, the Waterloo research team managed a 99% success rate. The success rate does vary, though, based on the voice authentication system being targeted. However, Kassis says the spoofing countermeasures these systems employ today are "critically flawed."

The ease with which voice authentication can be fooled led Waterloo computer science professor Urs Hengartner to state, "we hope that companies relying on voice authentication as their only authentication factor will consider deploying additional or stronger authentication measures."

