Yes, your iPhone is more secure than Android devices, but no, it’s not impervious to attack. A new report has suddenly warned Apple users that a dangerous trojan is stealing facial recognition data from iPhones to target users’ bank accounts.
It’s “an exceptionally rare occurrence,” Group-IB acknowledges, as it warns that “a new sophisticated mobile Trojan specifically aimed at iOS users” has been found collecting “facial recognition data, identity documents, and intercepting SMS.”
That it’s a variant of previously detected Android malware will be cold comfort for iPhone users convinced that what happens on their iPhone, stays on their iPhone.
The malware doesn’t compromise the iPhone’s own FaceID data—that remains fully locked down. Instead, it’s designed to steal facial captures from the device, which are then fed into an “AI-driven face-swapping service to create deepfakes.” These are combined with intercepted SMS messages to enable “cybercriminals to gain unauthorized access to the victim’s banking account.”
Group-IB describes this as “a new… previously unseen… technique of monetary theft.” The researchers say that the attack has mainly focused on Asia-Pacific—so far.
Initially, the hackers relied on Apple’s TestFlight—which distributes beta software—to distribute the malware. When that avenue was shut down, they turned to social engineering techniques to trick users into installing an MDM profile on their phones that “allowed the threat actor to gain complete control over the victim’s device.”
The focus of the malware is facial identity verification, whether that’s via a dedicated app or as part of a seemingly sophisticated online banking sign-in. It illustrates a clear risk as we use facial data to confirm ever more transactions.
While so-called liveness and anti-spoofing technologies have been designed to combat exactly this, the growing ease of creating deepfakes undermines those protections. Combine a deepfake with a secondary form of identification, say an intercepted SMS one-time passcode, and an account’s defenses can be bypassed with relative ease. A single piece of malware targeting both facial data and SMS provides this in a one-stop shop.
Group-IB says that the social engineering techniques made use of popular messaging apps to encourage installs of seemingly official government apps or clicks on fake web pages. These were used to capture information from the user and their device.
This report will grab headlines because it combines an attack on iPhones with harvesting facial data. The fact that it’s not Apple’s proprietary FaceID capture won’t stop the concerns. The issue, though, is not a general one. If iPhone users don’t succumb to the social engineering in the first place, by installing the apps or clicking on the fraudulent web pages, then the attack does not work. That’s one key takeaway here.
The other key takeaway, of course, is that iPhone users need to be just as careful as their Android counterparts when it comes to what they install, click on and open on their phones. If you don’t implicitly trust the sender or the developer, just leave it alone.
I have approached Apple for comment on the Group-IB report.
While the attack has focused primarily on Asia thus far, this appears to signal a new attack vector that is applicable everywhere. And Android users need to be even more concerned—the same threat previously discovered on Android devices was more dangerous and able to capture more user data.
The threat from deepfake technology is generating increasingly alarming headlines given the realism of the output, the ease of use and the ready availability. This is only going to get worse. The cyber theme of 2024 is fast becoming AI-centric (1, 2), with a raft of new threats that simply did not exist before.
This warning represents the (fake) shape of things to come.