FBI Warning For All iPhone, Android Users—Hang Up Now, Use This Code
By Davey Winder, Senior Contributor.
Hang up and use a secret code to combat AI smartphone attacks, FBI says.
Update, March 22, 2025: This story, originally published March 20, has been updated with details of a new law enforcement warning from Europol about the changing DNA of criminality and the use of AI attacks, along with further information about the ongoing AI cyber threats facing Gmail users and the FBI warning, issued in response, to use a secret code.
There has been no shortage of AI-powered security threat warnings in recent weeks, from code that can compromise your Chrome password manager credentials to critical AI attacks that cost hackers as little as $5 to create. But it’s the deepfake attacks hitting smartphone users, despite the best defensive efforts of Google and others, that are of most concern. Indeed, these ongoing attacks are so convincing that security experts, including the FBI, have issued warnings to the public: hang up now and create a secret code by way of protection. Here’s what you need to know and do.
Although you might immediately think of face-swapping videos when it comes to deepfake attacks, that is far from the complete threat picture. If you want to see how good you are at spotting a deepfake face, there’s a quick test you can take, but be warned: it’s much more challenging than you might think. Voice fakes, driven by AI, really started gaining public attention after I wrote a viral article in 2024 about a security expert who was almost fooled, with potentially very costly consequences.
Adrianus Warmenhoven, a cybersecurity expert at NordVPN, has now added to the voices warning iPhone and Android users about the threat. “Phone scammers increasingly use voice cloning tools for their fraudulent activities because this kind of software has become more affordable and effective over time,” Warmenhoven told me. A common approach, and one that is seen in ongoing attacks currently, is to use this deepfake audio to “approach family members of the individual they are impersonating,” Warmenhoven said, “and extort money by simulating an emergency.”
Referencing an October 2024 report from Truecaller and The Harris Poll, America Under Attack: The Shifting Landscape of Spam and Scam Calls in America, Warmenhoven pointed to a startling statistic: in the U.S. alone, across the previous 12 months, the total number of phone scam victims exceeded 50 million, with losses estimated at $452 per victim. “As deepfakes dramatically change the landscape of scam phone calls,” Warmenhoven warned, “it is crucial to ensure that everyone in the family understands what voice cloning is, how it works, and how it could be used in scams, such as impersonating a family member to request money or personal information.”
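To put those per-victim figures in perspective, a quick back-of-the-envelope calculation using the two numbers cited above (50 million victims, $452 average loss each) suggests the scale of aggregate losses. This is an illustration derived from the report's headline figures, not a number the report itself states:

```python
# Rough aggregate-loss estimate from the Truecaller/Harris Poll figures
# cited in this article. Both inputs are the report's headline numbers;
# the total is a simple derived illustration.
victims = 50_000_000          # phone scam victims in the U.S., previous 12 months
avg_loss_usd = 452            # estimated average loss per victim

total_loss_usd = victims * avg_loss_usd
print(f"Estimated aggregate loss: ${total_loss_usd / 1e9:.1f} billion")
# → Estimated aggregate loss: $22.6 billion
```

Even allowing for wide error bars on survey-based estimates, the order of magnitude makes clear why voice-cloning scams are attracting so much law enforcement attention.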
“Deepfakes will become unrecognizable,” Siggi Stefnisson, cyber safety chief technical officer at trust-based security platform Gen, whose brands include Norton and Avast, warned. “AI will become sophisticated enough that even experts may not be able to tell what’s authentic.”
Confirming that the DNA of organized crime is going through a period of change, Catherine De Bolle, the executive director of Europol, said that criminal networks are now technology-driven enterprises that are “more adaptable and more dangerous than ever before.” The new European Serious Organised Crime Threat Assessment issued the stark warning that crime is being accelerated by AI and emerging technologies. “AI is fundamentally reshaping the organized crime landscape,” it said. By rapidly exploiting the accessibility, adaptability and sophistication of today’s AI, threat actors have added a powerful attack tool to their arsenal. “These technologies automate and expand criminal operations, making them more scalable and harder to detect,” the assessment warned.
AI is increasingly used in online fraud schemes, Europol said, driven by social engineering attacks that can result in access to vast amounts of data, including stolen personal information. “Nearly all forms of serious and organized crime have a digital footprint,” the assessment stated, “whether as a tool, target or facilitator.”
“The value of AI is that it makes things faster,” Evan Dornbush, a former NSA cybersecurity expert, told me, “not more creative or inventive or persistent.” Dornbush is not wrong. Attackers can create sophisticated and believable messages in double-quick time and, most importantly, keep tweaking them automatically so every iteration is more believable than the last. “But speed is irrelevant if we cannot disrupt the attacker’s profit potential,” Dornbush concluded. “AI is decreasing the costs for criminals, and the community needs novel ways to either decrease their payouts, increase their operating budgets, or both.”
“Breaking this new criminal code means dismantling the systems that allow these networks to thrive,” De Bolle confirmed, “targeting their finances, disrupting their supply chains and staying ahead of their use of technology.”
As I reported Dec. 7, 2024, the Federal Bureau of Investigation has also been warning the public of such attacks. Indeed, the FBI went as far as to issue public service alert number I-120324-PSA addressing this very subject. Both the FBI and Warmenhoven recommend the same mitigation, as brutal and startling as it sounds: hang up and create a secret code known only to your close family and friends.
Warmenhoven also advised that people should be cautious about the content of their social media postings. “Social media is the largest publicly available resource of voice samples for cybercriminals,” Warmenhoven warned. This means that everyone should be wary of what they post in terms of how it could be used to negatively impact their security “through the rise of deepfakes, voice cloning, and other scams enabled by AI tools.”
To mitigate the risk of these sophisticated and increasingly dangerous AI attacks against iPhone and Android users, the FBI said that people should hang up immediately if they get a call claiming to be from a family member or close friend asking for money in this fashion, and then verify the caller’s identity themselves by direct means. The FBI also advised that everyone should create a secret word or phrase, known only to you and your close contacts, and use it to verify a caller claiming to be someone in trouble, no matter how convincing they sound. And convincing they will be: the deepfake call will be based on public audio clips, from social media videos, for example, fed through AI tooling to produce, in effect, that person saying anything that is typed in.
