Although it's far from perfect, artificial intelligence (AI) holds enormous potential in the 21st century. With applications ranging from next-gen video gaming to "smart" home appliances and highly sophisticated medical devices, AI could revolutionize daily life within the next few years.
But AI is still in its infancy. Because the technology is imperfect, it remains highly susceptible to exploitation – especially when it falls into the wrong hands. Unfortunately, that's exactly what happened to a British energy company in 2019.
Investigating the Crime
The crime occurred on August 6, 2019. According to reports, that's when the managing director of the British energy company received a phone call ordering him to wire more than $240,000 to an unknown account in Hungary. Although the managing director later described the request as "rather strange," he complied anyway. He was speaking to his boss, after all.
Only it wasn't his boss on the other end. It was a deepfake voice – a computer-generated imitation driven by highly sophisticated AI. Making matters worse, a deepfake audio or video file can mimic its subject so closely that it's nearly impossible to distinguish from the real thing – as this incident made painfully clear.
Once the transaction – totaling 220,000 euros – was complete, the funds were moved from the account in Hungary to another in Mexico, and from there scattered across numerous accounts and locations. At the time of this writing, no suspects have been named and none of the funds have been recovered.
Even more troubling, the attackers didn't develop this deepfake on their own. They used commercially available software to generate the mimicked voice. Many products offer such a service, and the most sophisticated attackers can further refine their fake voices with machine learning algorithms, making them all the more convincing.
According to Symantec, a prominent cybersecurity firm, this isn't the first time a scammer has mimicked an executive's voice in an attempt to defraud a business. There have been at least three reported cases of similar attacks, although the specific victims haven't been identified.
Fighting Back
While many software solutions make it possible to mimic another person's voice, some developers are fighting back. Google is actively pursuing next-gen systems capable of immediately recognizing spoofed or mimicked speech. At the same time, however, Google is also responsible for some of the most sophisticated AI systems to date. It's difficult to say how the two sides will interact in a live setting.
Charlotte Stanton, director of the Carnegie Endowment for International Peace's Silicon Valley office, summed it up by saying: "There’s a tension in the commercial space between wanting to make the best product and considering the bad applications that product could have. Researchers need to be more cautious as they release technology as powerful as voice-synthesis technology, because clearly it’s at a point where it can be misused."