Deepfake audio technology has become a growing cybersecurity threat, especially as artificial intelligence tools make their way onto the dark web as a service. Security experts are warning of a rise in threat actors offering voice cloning-as-a-service (VCaaS), which could streamline deepfake-based fraud.
What is voice cloning-as-a-service?
The technology works by creating voice models that mimic an individual’s voice: voice recordings of the intended target are fed into the deepfake audio software, which generates a synthetic voice. The area is tipped to be one of the leading topics of conversation at cybersecurity events this year.
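Conceptually, the enrol-then-synthesize pipeline these services advertise can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about any real vendor's implementation: the function names are hypothetical, and the per-position averaging is a toy stand-in for the neural speaker encoder and text-to-speech decoder that actual tools use.

```python
from statistics import mean

def extract_embedding(samples):
    """Toy stand-in for a neural speaker encoder: reduce the
    enrolment waveforms to a single per-position average."""
    return [mean(values) for values in zip(*samples)]

def synthesize(text, embedding):
    """Toy stand-in for a TTS decoder conditioned on the speaker
    embedding; real tools would emit audio, not a dict."""
    return {"text": text, "speaker_embedding": embedding}

# "Enrolment": short recordings of the target,
# represented here as tiny fixed-length toy waveforms.
target_recordings = [
    [0.5, 0.25, -0.25, 0.0],
    [0.25, 0.75, -0.75, 0.5],
]

voice = extract_embedding(target_recordings)
fake_utterance = synthesize("Please approve the transfer", voice)
```

The point of the sketch is the workflow, not the maths: once a service has enrolled a handful of recordings, any attacker-chosen text can be rendered in the target's voice on demand.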
The cloned voice can then be used in online fraud, such as completing voice-based multi-factor authentication with banks and other businesses. One of the biggest recorded threats posed by voice cloning-as-a-service, however, is how much more effective it can make business email compromise (BEC)-style attacks.
News of the rising threat of voice cloning-as-a-service
In its latest report, titled I Have No Mouth and I Must Do Crime, Recorded Future's analysis of threat intelligence gathered from the cybercrime underground found that the dark web is increasingly becoming a hub for out-of-the-box voice cloning platforms.
Some of these services are not expensive either: a few voice cloning platforms were recorded as free to use as long as the consumer has a registered account, while others cost as little as $5 per month, according to their vendors.
As a result, cybercrime involving voice phishing, impersonation, and call-back scams is becoming more feasible for fraudsters. In the chatter observed by Recorded Future, these forms of cybercrime were frequently mentioned in conjunction with such voice cloning tools.
Advancements in voice cloning cybercrimes
With legitimate audio tools making waves around the world, cybercriminals are taking advantage of the technology to enhance their fraudulent activities. For instance, tools used in audiobook voiceovers, film and television dubbing, voice acting, and advertising are a perfect medium for cybercriminals to obtain full voice samples of their target individuals.
An example of this was seen in ElevenLabs’ Prime Voice AI software, a browser-based text-to-speech tool that lets users upload custom voice samples. The service is currently restricted to paying customers, but according to the report this has only encouraged more dark web innovation.
“It has led to an increase in references to threat actors selling paid accounts to ElevenLabs – as well as advertising VCaaS offerings. These new restrictions have opened the door for a new form of commodified cybercrime that needs to be addressed in a multi-layered way,” the report stated.
The current limitations of deepfake technologies
The threat of deepfake voice technologies becoming a global cybersecurity issue is very apparent. At present, however, these technologies are limited in how they produce voice samples: they are typically only able to generate one-off samples of the target’s voice, most of which cannot be used in real-time, extended conversations.
However, as we have seen over the last year, AI technologies are quickly advancing and new threats to cybersecurity are appearing each day. That is why it is important that an industry-wide approach is discussed and implemented to tackle the threat before it escalates, Recorded Future argued.
“Risk mitigation strategies need to be multidisciplinary, addressing the root causes of social engineering, phishing and vishing, disinformation, and more. Voice cloning technology is still leveraged by humans with specific intentions – it does not conduct attacks on its own,” the report concluded.
“Therefore, adopting a framework that educates employees, users, and customers about the threats it poses will be more effective in the short-term than fighting abuse of the technology itself – which should be a long-term strategic goal.”