
The Federal Bureau of Investigation (FBI) has sounded the alarm on a new scam leveraging artificial intelligence (AI) voice technology. The campaign, which emerged last month, uses deepfake audio to impersonate high-ranking U.S. government officials and deceive victims into clicking malicious links.
Ars Technica reported that the FBI’s Internet Crime Complaint Center (IC3) described the scam as an attempt to gain access to personal accounts by building rapport and trust with the target through deepfake voice messages and text messages. Deepfake technology can replicate an individual’s voice and speech patterns with alarming accuracy, making the fakes difficult to detect without specialized analysis.
The FBI reports that cybercriminals typically suggest moving the conversation to an alternative messaging platform before luring victims into clicking malicious links, thereby gaining access to their devices. This isn’t the first instance of such deception; previous cases include a deepfake voice scam impersonating the CEO of password manager LastPass and a deepfake robocall that mimicked former President Joe Biden’s voice for election interference.
To combat these threats, the FBI advises recipients of suspicious messages to verify the sender’s identity directly and to refrain from opening any questionable links or attachments. However, cybersecurity experts caution that the urgent tone of these scam messages often pushes victims to act impulsively. While there is no foolproof way to prevent these scams, the most important defense is staying aware that anyone can fall victim.