The iPhone versus Android battle is about to escalate—perhaps as never before. And if you’re a premium Samsung user, then a new statement suddenly released by Apple should give you plenty of reasons to think about making the switch.
In a land far, far away, Apple made a huge statement this week that has serious implications for the battle about to intensify between its iPhone and its Android alternatives, which, at the premium end of the market, largely means Samsung.
“Some companies regularly scan personal information in the cloud,” Apple said, “to monetize the information of their users. Apple does not. We have chosen a very different path—one that prioritizes the security and privacy of our users.”
The statement was made in Australia, yet another market looking at pushing a surveillance burden onto tech providers to flag bad behaviors across their user bases.
The country’s eSafety proposals to combat terrorism/radicalization and child sexual abuse have the same apparent contradiction we have seen elsewhere—maintaining user security and privacy while monitoring (in some way) user content.
“Scanning every user’s privately stored iCloud data would pose serious security and privacy problems,” Apple warned in the statement published in The Guardian. “Scanning for particular content opens the door for bulk surveillance of communications and storage systems.”
The statement is notable for two reasons. The risk Apple is highlighting is that by technically breaking the sanctity of end-to-end encryption, technology providers lose their primary defense against scope creep. Put simply, what starts with CSAM or radicalization moves on to political dissent or sexual freedoms.
“Tools of mass surveillance,” Apple warned, “have widespread negative implications for freedom of opinion and expression and, by extension, democracy as a whole. For example, awareness that the government may compel a provider to watch what people are doing raises the serious risk of chilling legitimate associational, expressive, political freedoms, and economic activity.”
This is notable because it is the exact argument made by me and others when Apple proposed device-side scanning for CSAM back in 2021: “Apple will be pressed into expanding its CSAM screening to look for other content at the insistence of governments where Apple sells its devices. In the past Apple has rebuffed such requests because they’re technically impossible. Well, that has suddenly changed.”
The backlash against Apple’s plans being so un-Apple was relentless, and ultimately the iPhone maker changed its mind and let its device-side scanning plans drop away. Instead, it opted for Communication Safety features to warn when content may be harmful and to provide outlets for minors to seek help. A year or so later, Apple’s launch of end-to-end encryption across most iCloud storage was a huge shift in the opposite direction, and its biggest security boost in years.
Putting the irony of this welcome clarity to one side, the other reason this privacy confirmation from Apple is notable is that private on-device processing versus open cloud processing is about to be a headline differentiator between Apple and Google on the ecosystem front. And that means Apple versus Samsung on the device front.
Google scans photos and other content stored in its cloud for CSAM and other prohibited material. It also uses the cloud for other types of classification, relying on its heavier processing power to categorize content and support its apps and services. Right now, this is also making headlines, as Gemini (née Bard) is rolled out across Google’s ecosystem, along with a raft of warnings that user content created while engaging with this AI will likely be stored in the cloud and may be subject to human review.
Apple doesn’t want to do this. Its approach is to run AI on its devices, inside the end-to-end encryption bubble where that applies, and to prevent itself from having any access to content stored cloud-side. This is much harder than relying on the economies of scale of cloud processing, but if anyone can pull it off, Apple can, runs the theory.
As I reported last month, Apple appears to be evaluating the performance of device-side generative AI against leading cloud-side alternatives. And so, we have a parallel universe approaching, where iOS pushes its inevitable generative AI built around device privacy and security, and Google pushes its alternative built around privacy warnings and guidance notifications as to what data is used and stored online.
This is the next privacy nightmare in waiting, and it will polarize the Android versus iOS debate in much the same way as we have seen with privacy labels and encryption.
In my view, from a device perspective, iPhones and their Samsung alternatives have never been closer in terms of performance, features and functionality. But the AI tidal wave will create a new level of differentiation not seen before. And if I were planning to drop between $1,000 and $2,000 on a smartphone, I’d want my privacy fully built in.
I would not want an ecosystem that could wrongly flag content or introduce any risk of cloud-side data compromises outside my encryption bubble. I’d want clarity as to the deep-seated philosophy around the protection and sovereignty of my content. And I’d want a technical assurance that cannot be compromised.
And to this we can add the comparative chaos of Android security updates versus Apple’s equivalents, and the materially more dangerous malware attacks—some of which have specifically targeted Samsung devices.
And so, in 2024, as generative AI looks to change our smartphones forever, the case for iPhones at the premium end of the market has never been stronger—this despite Google’s AI head start and its fast-tracking of AI feature releases. Perhaps that’s why last year’s top seven best-selling premium devices were all iPhones.