AI Also Tricks Us With Voices: The Human Ability To Detect Audio “Deep Fakes” Is Unreliable


The Risks Of Audio Deep Fakes

Audio deepfakes, generated through artificial intelligence, represent a growing threat in the digital world. These fakes can imitate a real person’s voice or even generate entirely new voices, posing serious challenges for cybersecurity and copyright protection.

A Dangerous Deception

Some criminals have used these audio deepfake tools to deceive bankers into authorizing fraudulent money transfers. In one shocking case, a fraudster managed to trick a bank manager and steal $35 million using a cloned voice in a phone call.

The Proliferation Of Audio ‘Deep Fakes’

Unlike video or image deepfakes, audio deepfakes have largely gone unnoticed. However, their potential to destroy reputations and enable cyberattacks is just as worrying. These deepfakes are generated by machine-learning models that analyze recorded or leaked audio samples of a person’s voice, meaning anyone can create a convincing imitation of someone else’s speech.

A Revealing Study

A study conducted by University College London looked at people’s ability to detect voice fakes. The results were surprising: participants could correctly identify deepfake speech only 73% of the time. Even those who were shown examples of synthetic voices beforehand as training did not improve their detection ability.

The Challenge Of Discerning Reality From Fiction

Cybersecurity experts warn that telling fact from fiction will only become harder as audio deepfake techniques improve. Rather than training people to spot these fakes, they argue, it is more effective to focus efforts on developing more sophisticated automatic detectors.
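To make the idea of an automatic detector concrete, the sketch below shows roughly what a minimal one could look like: it summarizes each audio clip with MFCC features and trains a simple real-versus-synthetic classifier. This is only an illustration, not how any particular detector mentioned by the experts works; it assumes the librosa and scikit-learn libraries and a labelled corpus of genuine and cloned clips (for example, a public anti-spoofing dataset such as ASVspoof), and real-world systems rely on far more sophisticated models.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def mfcc_features(path, sr=16000, n_mfcc=20):
    # Summarize a clip as the mean and standard deviation of its MFCCs.
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_detector(real_paths, fake_paths):
    # real_paths / fake_paths: lists of labelled audio files from a corpus
    # of genuine and synthetic speech (not provided here).
    X = np.array([mfcc_features(p) for p in real_paths + fake_paths])
    y = np.array([0] * len(real_paths) + [1] * len(fake_paths))  # 1 = synthetic
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf

A simple linear classifier on handcrafted features like this is easy to fool; the point is only to show the shape of the problem, which researchers attack with much larger models trained directly on spectrograms.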

The Game Of Cat And Mouse

Some experts are skeptical about the effectiveness of pitting AI against AI, arguing that generators and detectors will always be racing to outdo one another. Others, however, emphasize the importance of investing resources in improving audio deepfake detectors.

Attacks Directed At Public Figures

While ordinary users are also vulnerable to audio deepfakes, the most sophisticated attacks target public figures, for whom many voice recordings are publicly available. It is crucial that everyone be aware of these audio fakes and take precautions when consuming digital content.

