Close Menu
Alpha Leaders
  • Home
  • News
  • Leadership
  • Entrepreneurs
  • Business
  • Living
  • Innovation
  • More
    • Money & Finance
    • Web Stories
    • Global
    • Press Release
What's On
In a continent cracking down on immigration, Spain embraces migrants

In a continent cracking down on immigration, Spain embraces migrants

14 December 2025
AI (or AI People) Are Time’s Person Of The Year

AI (or AI People) Are Time’s Person Of The Year

14 December 2025
EU indefinitely freezes Russian assets to prevent Hungary and Slovakia from vetoing support Ukraine

EU indefinitely freezes Russian assets to prevent Hungary and Slovakia from vetoing support Ukraine

14 December 2025
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
Alpha Leaders
newsletter
  • Home
  • News
  • Leadership
  • Entrepreneurs
  • Business
  • Living
  • Innovation
  • More
    • Money & Finance
    • Web Stories
    • Global
    • Press Release
Alpha Leaders
Home » Gmail, Outlook, Apple Mail Warning—AI Attack Nightmare Is Coming True
Innovation

Gmail, Outlook, Apple Mail Warning—AI Attack Nightmare Is Coming True

Press RoomBy Press Room16 March 20258 Mins Read
Facebook Twitter Copy Link Pinterest LinkedIn Tumblr Email WhatsApp
Gmail, Outlook, Apple Mail Warning—AI Attack Nightmare Is Coming True

Republished on March 16th with additional security industry analysis on the threat from semi-autonomous AI attacks and a new GenAI attack warning.

Email users have been warned for some time that AI attacks and hacks will ramp up this year, becoming ever harder to detect. And while this will include frightening levels of deepfake sophistication, it will also enable more attackers to conduct more attacks, with AI operating largely independently, “carrying out attacks.” That has always been the nightmare scenario and it is suddenly coming true, putting millions of you at risk.

We know this, but seeing is believing. A new video and blog from Symantec has just shown how a new AI agent ot operator can be deployed to conduct a phishing attack. “Agents have more functionality and can actually perform tasks such as interacting with web pages. While an agent’s legitimate use case may be the automation of routine tasks, attackers could potentially leverage them to create infrastructure and mount attacks.”

The security team has warned of this before, that “while existing Large Language Model (LLM) AIs are already being put to use by attackers, they are largely passive and could only assist in performing tasks such as creating phishing materials or even writing code. At the time, we predicted that agents would eventually be added to LLM AIs and that they would become more powerful as a result, increasing the potential risk.”

Now there’s a proof of concept. It’s rudimentary but will quickly become more advanced. The sight of an AI agent hunting the internet and LinkedIn to find a target’s email address and then for website advice on crafting malicious scripts, before writing its own lure should put fear into all of us. There’s no limit to how far this will go.

“We’ve been monitoring usage of AI by attackers for a while now,” Symantec’s Dick O’Brien explained to me. “While we know they’re being used by some actors, we’ve been predicting that the advent of AI agents could be the moment that AI-assisted attacks start to pose a greater threat, because an agent isn’t passive, it can do things as opposed to generate text or code. Our goal was to see if an agent could carry out an an attack end to end with no intervention from us other than the initial prompt.”

As SlashNext’s J Stephen Kowski’s told me, “the rise of AI agents like Operator shows the dual nature of technology — tools built for productivity can be weaponized by determined attackers with minimal effort. This research highlights how AI systems can be manipulated through simple prompt engineering to bypass ethical guardrails and execute complex attack chains that gather intelligence, create malicious code, and deliver convincing social engineering lures.”

Even the inbuilt security is ludicrously lightweight. “Our first attempt failed quickly as Operator told us that it was unable to proceed as it involves sending unsolicited emails and potentially sensitive information,” Symantec said, with its POC showing how this was easily overcome. “This could violate privacy and security policies. However, tweaking the prompt to state that the target had authorized us to send emails bypassed this restriction, and Operator began performing the assigned tasks.”

The agent used was from OpenAI, but this will be a level playing field and it’s the nature of the capability that matters not the identity of the AI developer. Perhaps the most notable aspect of this attack is that when the Operator fails to find the target’s email address online, and so successfully deduces what the address would likely be from others within the same organization that it could find.

Black Duck’s Andrew Bolster warned me that “as AI-driven tools are given more capabilities via systems such as OpenAI Operator or Anthropic’s Computer Use, the challenge of ‘constraining’ LLMs comes into clearer focus,” adding that “examples like this demonstrate the trust-gap in underlying LLMs guardrails that supposedly prevent ‘bad’ behavior, whether established through reinforcement, system prompts, distillation or other methods; LLM’s can be ’tricked’ into bad behavior. In fact, one could consider this demonstration as a standard example of social engineering, rather than exploiting a vulnerability. The researchers simply put on a virtual hi-vis jacket and acted to the LLM like they were “supposed” to be there.”

“Agents such as Operator demonstrate both the potential of AI and some of the possible risks,” Symantec warns. “The technology is still in its infancy, and the malicious tasks it can perform are still relatively straightforward compared to what may be done by a skilled attacker. However, the pace of advancements in this field means it may not be long before agents become a lot more powerful. It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”

And that really is the nightmare scenario. “We were a little surprised that it actually worked for us on day one,” O’Brien told me, given it’s the first agent to launch.

Guy Feinberg from Oasis Security agrees, telling me “AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions.”

“Organizations need to implement robust security controls that assume AI will be used against them,” warns Kowski. “The best defense combines advanced threat detection technologies that can identify behavioral anomalies with proactive security measures that limit what information is accessible to potential attackers in the first place.”

This week we have also seen a report into “Microsoft Copilot Spoofing” as a new “phishing vector,” with users not yet trained on how to detect these new attacks. That’s one of the reasons AI fueled attacks are much more likely to hit their targets. You can expect to see continuous reports as this new threat landscape shapes up.

And that warning has been reinforced by a second report this weekend into AI-fueled attacks, painting a frightening picture as to what’s coming next. While “most traditional GenAI tools have various guardrails in place to combat attempts to use them for malicious purposes,” says the research team at Tenable, “cybercriminal usage of tools like OpenAI’s ChatGPT and Google’s Gemini have been documented by both OpenAI (“Disrupting malicious uses of AI by state-affiliated threat actors”) and Google (“Adversarial Misuse of Generative AI”). OpenAI recently removed accounts of Chinese and North Korean users caught using ChatGPT for malicious purposes.”

Notwithstanding Symantec’s POC highlighting how easily some of these mainstream GenAI guardrails can be bypassed, Tenable warns that “with the recent open source release of DeepSeek’s local LLMs, like DeepSeek V3 and DeepSeek R1, we anticipate cybercriminals will seek to utilize these freely accessible models.”

As regards DeepSeek’s R1, the team says it wanted “to evaluate its malicious software, or malware generation capability, under two scenarios: a keylogger and a simple ransomware… Our initial test focused on creating a Windows keylogger: a compact, C++-based implementation compatible with the latest Windows version. The ideal outcome would include features such as evasion from Windows Task Manager and mechanisms to conceal or encrypt the keylogging file, making detection more difficult. We also evaluated DeepSeek’s ability to generate a simple ransomware sample.”

The team then tasked the tool with helping develop a ransomware attack. “DeepSeek was able to identify potential issues when planning the development of this simple ransomware, such as file permissions, handling large files, performance and anti-debugging techniques. Additionally, DeepSeek was able to identify some potential challenges in implementation, including the need for testing and debugging.”

“The bottom line,” says Feinberg, “is that you can’t stop attackers from manipulating AI, just like you can’t stop them from phishing employees. The solution is better governance and security for all identities—human and non-human alike.”

The answer, he says is to assign permission to AI in the same way as you would do people — treat them the same. And that included identity-based governance and security, and an assumption that AI will be tricked into making mistakes.

“Manipulation Is inevitable,” Feinberg warns. “Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. Security teams need visibility.”

Frightening enough for now, but Tenable warns that “we believe that DeepSeek is likely to fuel further development of malicious AI-generated code by cybercriminals in the near future,” given that they have already used the tool “to create a keylogger that could hide an encrypted log file on disk as well as develop a simple ransomware executable.”

Two very different uses of AI tools to either craft or even execute attacks. One thing is already clear though — we are not yet ready for this.

ai attack android warning apple mail warning email warning Gmail warning google warning iphone warning outlook warning phishing windows warning
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link

Related Articles

AI (or AI People) Are Time’s Person Of The Year

AI (or AI People) Are Time’s Person Of The Year

14 December 2025
Illinois Latest State To Approve ‘Right To Die’ Legislation

Illinois Latest State To Approve ‘Right To Die’ Legislation

13 December 2025
Early Buzz For ‘Highguard,’ The Game Awards Closer, Is Quite Poor

Early Buzz For ‘Highguard,’ The Game Awards Closer, Is Quite Poor

13 December 2025
Apple Confirms iPhone Attacks—All Users Must Update Now

Apple Confirms iPhone Attacks—All Users Must Update Now

13 December 2025
Samsung Galaxy S26 Release Date: What’s Happening In May?

Samsung Galaxy S26 Release Date: What’s Happening In May?

13 December 2025
Google’s Play Update—Bad News For Most Samsung Users

Google’s Play Update—Bad News For Most Samsung Users

13 December 2025
Don't Miss
Unwrap Christmas Sustainably: How To Handle Gifts You Don’t Want

Unwrap Christmas Sustainably: How To Handle Gifts You Don’t Want

By Press Room27 December 2024

Every year, millions of people unwrap Christmas gifts that they do not love, need, or…

Walmart dominated, while Target spiraled: the winners and losers of retail in 2024

Walmart dominated, while Target spiraled: the winners and losers of retail in 2024

30 December 2024
John Summit went from working 9 a.m. to 9 p.m. in a ,000 job to a multimillionaire DJ—‘I make more in one show than I would in my entire accounting career’

John Summit went from working 9 a.m. to 9 p.m. in a $65,000 job to a multimillionaire DJ—‘I make more in one show than I would in my entire accounting career’

18 October 2025
Stay In Touch
  • Facebook
  • Twitter
  • Pinterest
  • Instagram
  • YouTube
  • Vimeo
Latest Articles
Big 12 in advanced talks for deal with RedBird-backed fund

Big 12 in advanced talks for deal with RedBird-backed fund

14 December 20250 Views
U.S., Mexico strike deal to settle Rio Grande water dispute

U.S., Mexico strike deal to settle Rio Grande water dispute

14 December 20250 Views
At least 2 killed and several more hurt in shooting at Brown University with suspect still at large

At least 2 killed and several more hurt in shooting at Brown University with suspect still at large

13 December 20250 Views
Danish intelligence report warns of US economic leverage and military threat under Trump

Danish intelligence report warns of US economic leverage and military threat under Trump

13 December 20250 Views
About Us
About Us

Alpha Leaders is your one-stop website for the latest Entrepreneurs and Leaders news and updates, follow us now to get the news that matters to you.

Facebook X (Twitter) Pinterest YouTube WhatsApp
Our Picks
In a continent cracking down on immigration, Spain embraces migrants

In a continent cracking down on immigration, Spain embraces migrants

14 December 2025
AI (or AI People) Are Time’s Person Of The Year

AI (or AI People) Are Time’s Person Of The Year

14 December 2025
EU indefinitely freezes Russian assets to prevent Hungary and Slovakia from vetoing support Ukraine

EU indefinitely freezes Russian assets to prevent Hungary and Slovakia from vetoing support Ukraine

14 December 2025
Most Popular
Microsoft’s AI boss calls Elon Musk a ‘bulldozer’ with ‘superhuman capabilities’

Microsoft’s AI boss calls Elon Musk a ‘bulldozer’ with ‘superhuman capabilities’

14 December 20250 Views
Big 12 in advanced talks for deal with RedBird-backed fund

Big 12 in advanced talks for deal with RedBird-backed fund

14 December 20250 Views
U.S., Mexico strike deal to settle Rio Grande water dispute

U.S., Mexico strike deal to settle Rio Grande water dispute

14 December 20250 Views
© 2025 Alpha Leaders. All Rights Reserved.
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Type above and press Enter to search. Press Esc to cancel.