
Study finds that humans can successfully detect artificially generated speech only 73% of the time

A recent study has revealed that humans can accurately detect artificially generated speech only 73 percent of the time, regardless of whether they are English or Mandarin speakers.

Researchers at University College London conducted the study, utilizing a text-to-speech algorithm trained on publicly available datasets in English and Mandarin. The algorithm was used to generate 50 deepfake speech samples in each language for the experiment.

Deepfake AI is a form of generative artificial intelligence that produces synthetic media designed to resemble the voice or appearance of real individuals.

The researchers played the generated speech samples to 529 participants to evaluate their ability to distinguish between real and fake speech. The results showed that the participants could only identify the fake speech with 73 percent accuracy. However, there was a slight improvement after participants received training to recognize different aspects of deepfake speech.

The study is notable as the first to assess human ability to detect artificially generated speech in a language other than English.

Kimberly Mai, the study's first author, noted that training individuals to spot deepfakes does not necessarily improve their performance. The study also found that current automated detectors are not entirely reliable, particularly when conditions in the test audio change, such as a different speaker.

Dr. Karl Jones, Head of Engineering at Liverpool John Moores University, warned that the UK’s justice system is ill-equipped to counter the use of deepfakes, describing deepfake speech as close to the perfect crime because it is so difficult to detect.

Sam Gregory, the Executive Director of Witness, an international nonprofit organization, highlighted a “detection equity gap” where crucial parties like journalists, fact-checkers, civil society members, and election officials lack access to detection tools. He emphasized the need for investment in supporting intermediaries to combat the deepfake threat effectively.

Overall, the study’s findings underscore the challenges in detecting deepfake speech and emphasize the urgency for improved and reliable automated detectors, as well as strategies to mitigate the threat posed by deepfake content to organizations and society at large.
