Alpha Leaders
Innovation

Could Poor AI Literacy Cause Bad Personal Decisions?

By Press Room | 14 August 2025 | 5 Mins Read

A recent article in Ars Technica revealed that a man ended up in an emergency room after switching from household salt (sodium chloride) to sodium bromide after using an AI tool. Nate Anderson wrote, “His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies…. But the bigger problem was that the man appeared to be suffering from a serious case of ‘bromism,’” an ailment caused by excessive bromine. Reading this made me wonder whether poor critical thinking skills and low AI literacy could actually cause people to make bad or even harmful decisions.

As a weather and climate scientist, I am particularly aware of the misinformation and disinformation circulating widely. People think the Earth is flat or that scientists can steer hurricanes. National Weather Service offices field calls from people with wacky theories about geoengineering, groundhogs, and so forth. My fear is that a lack of understanding of Generative AI might make things worse and even cause harm, as it did in the bromism case.

Even in my own circle of intelligent friends and family members, it is clear to me that some people have a very limited understanding of AI. They are familiar with large language model tools like ChatGPT, Gemini, Grok, Copilot, and others, and they assume that is all AI is. Those tools certainly are AI, but there is more to AI than that. Ironically, I encounter a version of this assumption in my own professional field. People see meteorologists on television, and because that is the most visible type of meteorologist, they assume all meteorologists are on television. In fact, the majority of meteorologists do not work in the broadcast industry at all, but I digress.

Let’s define AI. According to Digital.gov, “Artificial intelligence (AI) is an emerging technology where machines are programmed to learn, reason, and perform in ways that simulate human intelligence. Although AI technology took a dramatic leap forward, the ability of machines to automate manual tasks has been around for a long time.”

Popular AI tools like ChatGPT and Gemini are examples of generative artificial intelligence, or GenAI. A Congressional website noted, “Generative artificial intelligence (GenAI) refers to AI models, in particular those that use machine learning (ML) and are trained on large volumes of data, that are able to generate new content.” Other types of AI models may classify data, synthesize information, or even make decisions. AI, for example, is used in automated vehicles and is even integrated into emerging generations of weather forecast models. The website went on to say, “GenAI, when prompted (often by a user inputting text), can create various outputs, including text, images, videos, computer code, or music.” Many people are using GenAI large language models, or LLMs, daily without this context, which brings me back to the salt case in Ars Technica.

Nate Anderson continued, “…. It’s not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.” Doctors replicated his search and found that bromide is mentioned, but with proper context noting that it is not suitable for all uses. AI hallucination happens when LLMs produce factually incorrect, outlandish, or unsubstantiated information. This case, however, seems to have been more about context and critical thinking, or the lack thereof.

As a weather expert, I have learned over the years that assumptions about how the public consumes information can be flawed. You would be surprised at how many ways “30% chance of rain” or “tornado watch” is interpreted. Context matters. In my discipline, we have a problem with “social mediarology”: people post single-run hurricane models and snowstorm forecasts two weeks out for clicks, likes, and shares. Most credible meteorologists understand the context of that information, but someone receiving it on TikTok or YouTube may not. Without context, critical thinking skills, or an understanding of LLMs, bad information is likely to be consumed and spread.

University of Washington linguist Emily Bender studies LLMs and has consistently warned that language models are simply unverified text synthesis machines. In fact, she recently argued that the first “L” in LLM should stand for “limited,” not “large.” Her scholarship is important to consider as we plunge deeper into the Generative AI pool.

To be clear, I am an advocate of proper, ethical use of AI. The climate scientist in me keeps an eye on its energy and water consumption as well, but I believe we will find a solution to that problem. Microsoft, for example, has explored underwater data centers. AI is here; that ship has sailed. It is important, however, that people understand its strengths, weaknesses, opportunities, and threats. People fear what they don’t understand.

