Google’s Cloud Next 2024 has drawn to a close, but the news stories keep coming. One that hasn’t made many headlines, however, could well turn out to be the most important, at least from a user-security perspective: the use of AI large language models to protect Gmail users from harm.
Google Announces AI-Powered Gmail Security Evolution
I couldn’t attend Cloud Next this year, but my Google insiders have been keeping me informed of the most important updates nonetheless. And I’m glad they did. Otherwise, I might well have missed a major security update for Gmail and Google Drive users.
The main problem being addressed is that generative AI has become so good, so quickly, that it has “dramatically lowered the barrier to attacks,” according to Google, which admits this has led to “a spike in higher quality phishing at scale.” As you might imagine, getting access to Gmail and Drive accounts is high on attackers’ agendas, given the goldmine of readily actionable data they contain. The solution, Google says, was conceptually simple albeit technically challenging: “We built custom LLMs to help fight back.” First deployed in late 2023, these LLMs are now “yielding big results,” Google says.
These custom LLMs are trained “on a diet of the latest, most terrible spam and phishing” content, because what LLMs are uniquely good at is identifying semantically similar material, an idea sketched in code after the figures below. Given Google Workspace’s user base of some 3 billion people, “the results are very impactful—and the LLMs will only get better at this as we go,” a Google spokesperson says.
- 20% more spam is blocked in Gmail using LLMs
- 1000% more user-reported Gmail spam is reviewed each day
- 90% faster response time dealing with new spam and phishing attacks in Drive
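Google hasn’t published how its custom LLMs work under the hood, but the core idea, flagging messages that sit semantically close to known phishing examples, can be sketched in a few lines. The example below is purely illustrative: it uses the open-source sentence-transformers library rather than Google’s proprietary models, and the sample messages and the 0.5 threshold are my own assumptions, not Google’s.

```python
# A toy illustration of semantic-similarity phishing detection.
# Google's custom LLMs are proprietary; this sketch uses the open-source
# sentence-transformers library to show the general principle: new messages
# that embed close to known phishing examples get flagged.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical examples of known-bad and incoming messages.
known_phishing = [
    "Your account is locked. Verify your password here immediately.",
    "You have won a prize! Click this link to claim your reward.",
]
incoming = "Urgent: confirm your login details or your mailbox will be closed."

# Embed everything into the same vector space.
bad_vecs = model.encode(known_phishing, normalize_embeddings=True)
msg_vec = model.encode(incoming, normalize_embeddings=True)

# Cosine similarity (dot product of normalized vectors) against known phish.
similarity = float(np.max(bad_vecs @ msg_vec))
print(f"Closest phishing similarity: {similarity:.2f}")
if similarity > 0.5:  # threshold chosen arbitrarily for illustration
    print("Flag for review: semantically similar to known phishing.")
```

The advantage of this approach over keyword filters is that reworded scams still land near their ancestors in embedding space, which is presumably why Google keeps feeding its models that “diet” of the freshest attack samples.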
The Positive Side Of The AI Security Fence
Although there has been plenty of talk in recent months about the Google Gemini LLM, not all of it has been full of praise; quite the opposite, in fact. My colleague Zak Doffman, a highly respected Forbes contributor on privacy matters, recently warned of concerns regarding Google’s AI-powered message helpers. While Zak’s concerns come from the right place, real-world knowledge of AI privacy implications, many commentators have simply jumped on the ‘AI is evil’ bandwagon. It’s comforting, therefore, to be able to report on something focused on generative AI LLMs but from the positive side of the security fence.
As well as detecting twice as much malware as bog-standard third-party antivirus and security products, according to Google, these AI-powered defenses stop 99.9% of spam. Although that’s a pretty impressive number, a Google spokesperson told me that “inside Google Workspace, we’re very focused on innovating to tackle that last 0.1%.”
10 Million Paying Customers To Be Offered New AI Security Tooling
Alongside these built-in security advances for Gmail and Drive, which reach more than 3 billion Google Workspace users and 10 million paying customers, Google has also announced an optional new AI security add-on. To address a common request from Workspace customers, protecting confidential information in files, Google has built a tool to automatically classify and protect such sensitive data. “Protecting obvious confidential information is straightforward,” Google says, “but safeguarding unexpectedly sensitive data is very hard.” The reason so many customers ask for help is that many of them currently perform this task manually. The new AI tooling will “find these hidden pockets of sensitive data, and make recommendations for added protections, which can automatically be implemented with a few simple clicks,” Google says. As for pricing, I am told the tooling can be fine-tuned for the needs of every customer at a cost of $10 per user per month and can be added to most Workspace plans.
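Google hasn’t detailed how the add-on draws the line it describes, but the distinction between “obvious” and “unexpectedly sensitive” data is easy to illustrate. The sketch below is a hypothetical Python example of the straightforward, rule-based tier; the pattern names and regexes are my own, not Google’s.

```python
# Illustrative only: Google has not published how its classifier works.
# Rule-based scanning covers the "obvious" confidential data Google
# mentions; context-dependent cases are where an AI model would take over.
import re

OBVIOUS_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS-style key shape
}

def scan_obvious(text: str) -> list[str]:
    """Return labels for patterns that plainly look confidential."""
    return [label for label, pat in OBVIOUS_PATTERNS.items() if pat.search(text)]

doc = "Invoice for J. Smith, SSN 123-45-6789, card 4111 1111 1111 1111."
print(scan_obvious(doc))  # ['ssn', 'credit_card']
```

The point of the sketch is what it misses: an unlabeled spreadsheet of salaries matches no pattern at all, which is exactly the “unexpectedly sensitive” tier where an AI classifier earns its keep.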