Update: Republished on March 24 with new warnings for Google users and new concerns as the serious risks from these upgrades become clear.

There’s a new battle taking place on your computers and your phones that will shape how you use technology for years to come. Google is leading the charge, though it’s not alone, and Gmail will likely change more than any other platform. That means serious decisions for its 3 billion users, who are well advised to think before clicking “yes.”

We’re talking AI and the breakneck speed with which new tools are being stitched into the platforms and services we all use daily. Apple may have been hit with an unintended slowdown, but not Google and Microsoft. There’s no stopping them.

Take your Chrome search history as an example. It’s surprisingly personal: what you search for on the web and how you phrase those searches. But AI will be let loose on that history, if you let it, using it to get to know you better so it can help you more. This isn’t an executive assistant, though; it’s a technology platform owned by the world’s most valuable marketing machine. Buyer beware, as they say.

Or how about Microsoft’s new (and seemingly automated) opt-in to having its own Copilot AI let loose on OneDrive? “Do you want Microsoft Copilot sniffing your OneDrive files?” PC World asks. “Too late. Allowing AI to sniff your cloud files may seem a little creepy, but Microsoft says it will only work with your authorization.”

And so we come to Gmail, and Google’s confirmation on Thursday that “Gmail is rolling out a smarter search feature powered by AI to show you the most relevant results, faster.” No doubt this is useful, factoring in how you engage with emails and senders to serve up better results and ease the pain of email search. “If you’ve ever struggled with finding information in your overflowing inbox,” Google says, “you’re not alone.”

But again this is AI set loose on your personal information. I asked Google about the privacy implications and was assured that “our priority is respecting our users’ privacy while giving them choice and control over their data. To that end, this particular tool is one of the ‘smart features’ that users can control in their personalization settings.”

There’s no suggestion that your data is being siphoned off to train models or enhance marketing profiles, but it is being analyzed. As Android Police has just warned, “if you think Google’s terms of service are reasonable, you may still want to stop Google from storing your conversations in Gemini. The AI landscape is evolving rapidly, and legislators are slow to keep up with the ethical and legal ramifications of generative AI.”

Users now need to decide on their own red lines. For me, there’s a big difference between auditable on-device AI analysis and what’s done in the cloud, however reassuring the privacy policies might be. There’s a major difference between can’t and won’t, as Amazon’s recent change to its own local versus cloud processing makes clear.

Android Police recommends “turning off AI training now. It won’t impact your Gemini experience and acts as insurance against any changes to Gemini’s terms of service.” The good news is that “you only need to turn AI training off on one device to disable it across all devices where you’re signed into Gemini.” The bad news is that a variety of privacy policies govern different platforms and services. It’s worth double-checking the settings for any AI you use that has access to private content, such as emails.

As ESET’s Jake Moore has warned, “any data that we share online—even in private channels—has the potential of being stored, analyzed and even shared with third parties. When information is a premium and even seen as a currency of its own, AI models can be designed to delve more deeply into users divulging vast amounts of personal information. Data sharing can ultimately create security and privacy issues in the future and many users are simply unaware of the risks.”

I’ve argued before that Gmail and email more widely need to catch up with the on-device processing being applied to other platforms, and here’s another good reason why that’s becoming so critical. On-device processing has become a selling point for new messaging and app security features. The same should be true for email.

It’s no coincidence that Apple is struggling to make AI work where others are not. As Wired says, “Apple’s approach to this stuff is likely not close to the norm. You’ll need to be comfortable handing over large amounts of data to make Alexa work its best, while OpenAI’s Sam Altman seems happy to destroy entire categories of jobs at the altar of progress. But Tim Cook and Apple? A cleaner, more positive image has for decades been part of the company’s appeal, and that includes a very clear focus on privacy.”

This new Gmail tool is dubbed “most relevant” search and is rolling out across personal Google accounts. Google says “it can be accessed on the web and in the official Gmail app for Android and iOS.” You can toggle back and forth between legacy “recent” and AI “relevant” results. Business users will get this as well, but not for some time.

It’s interesting that this change is being timed differently for home and business users. There are increasing concerns across enterprises about proprietary and sensitive data leaking out via AI prompts, with little if any governance in place.

According to GlobalData (via Verdict), almost three-quarters of businesses are worried “about the privacy and data integrity risks of artificial intelligence (AI), which is slowing adoption of the technology.” The researchers also warn that “59% of businesses [are] lacking confidence in adopting the technology for their organizations. Only a fifth (21%) of respondents reported high or very high adoption of AI within their organizations.”

Just as with home users, this space is moving so quickly that users are failing to grasp the security and privacy implications of what’s taking place on their computers and phones. The result is that the default CISO position needs to become “no” until the right governance and controls are available. Exciting new features are neatly presented, but they front a vast ecosystem of data capture and analysis, and we must rely on policies to protect our most sensitive data from leakage or abuse. We have seen some; we will see more.

Harmonic Security warns that “generative AI tools have become integral to modern workflows, promising efficiency and innovation, [but] these benefits come [with] significant risks related to data security. Despite their potential, many organizations hesitate to fully adopt AI tools due to concerns about sensitive data being inadvertently shared and possibly used to train these systems. Organizations risk losing their competitive edge if they expose sensitive data. Yet at the same time, they also risk losing out if they don’t adopt GenAI and fall behind.” Maybe that’s one explanation for Google’s staggered timing.

In the last year we have seen one Gmail/Workspace AI upgrade after another. This won’t stop. And so it will become ever more important for users to be clear about what they’re agreeing to, how it works, and what opt-ins and opt-outs are available.
