Artificial intelligence chatbots are transforming industries and reshaping how people interact with technology. But as their adoption soars, glaring cracks in their design and training are emerging, revealing the potential for major harm from poorly trained AI systems.

Earlier this month, a Michigan college student received a chilling, out-of-the-blue message from a chatbot during their conversation:

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

The case adds to a growing spate of incidents, from spreading misinformation to generating misleading, offensive or harmful outputs, and underscores the need for regulation and ethical guardrails in the dizzying race for AI-powered solutions.

The Risks of Unchecked AI: Garbage In, Garbage Out

Robert Patra, a technologist specializing in data monetization, analytics and AI-driven enterprise transformation, points to two scenarios that amplify chatbot risks: open-ended bots designed to answer anything, and context-specific bots lacking fallback mechanisms for queries beyond their scope.

In one instance, Patra’s team developed a chatbot for a Fortune 10 supply chain ecosystem. While trained on proprietary organizational data, the chatbot faced two critical limitations during beta testing: hallucinations — producing incorrect responses when queries exceeded its training scope — and the absence of human fallback mechanisms. “Without a mechanism to hand off complex queries to human support, the system struggled with escalating conversations appropriately,” explains Patra.
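The handoff Patra describes can be sketched in a few lines: when the system is not confident in its answer, the query goes to a human rather than being answered anyway. The snippet below is a minimal, illustrative sketch only; answer_with_confidence and escalate_to_agent are hypothetical stand-ins, not the system Patra’s team actually built.

```python
"""Minimal sketch of a human-handoff fallback for a domain chatbot.

Illustrative only: answer_with_confidence() and escalate_to_agent()
are hypothetical stand-ins for a model call and a ticketing integration.
"""

import random

CONFIDENCE_THRESHOLD = 0.75  # below this, the bot should not answer alone


def answer_with_confidence(query: str) -> tuple[str, float]:
    # Hypothetical model call: returns a draft answer and a confidence score.
    return f"Draft answer to: {query}", random.uniform(0.0, 1.0)


def escalate_to_agent(query: str) -> str:
    # Hypothetical handoff: open a support ticket and return its ID.
    return f"TICKET-{random.randint(1000, 9999)}"


def handle_query(query: str) -> str:
    answer, confidence = answer_with_confidence(query)
    if confidence < CONFIDENCE_THRESHOLD:
        ticket = escalate_to_agent(query)  # route to a human instead of guessing
        return f"I'm not certain about this one. A support agent will follow up ({ticket})."
    return answer


if __name__ == "__main__":
    print(handle_query("Where is purchase order 4512 right now?"))
```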

Tom South, Director of Organic and Web at Epos Now, warns that poorly trained systems — especially those built on social media data — can generate unexpected, harmful outputs. “With many social media networks like X [formerly Twitter] allowing third parties to train AI models, there’s a greater risk that poorly trained programs will be vulnerable to issuing incorrect or unexpected responses to queries,” South says.

Microsoft’s Tay in 2016 is a prime example of chatbot training gone awry — within 24 hours of its launch, internet trolls manipulated Tay into spouting offensive language. Lars Nyman, CMO of CUDO Compute, calls this phenomenon a “mirror reflecting humanity’s internet id” and warns of the rise of “digital snake oil” if companies neglect rigorous testing and ethical oversight.

Hallucinations: When AI Gets It Wrong

Large language models (LLMs) like Google Gemini are essentially advanced text predictors, explains Dr. Peter Garraghan, CEO of Mindgard and Professor of Computer Science at Lancaster University. Yet, when trained on vast internet datasets, these systems can produce nonsensical or harmful outputs, such as Gemini’s infamous “Please die” response.

“As Gemini’s training included diverse internet content, it likely encountered phrases such as ‘please die’ in its dataset. This means specific user inputs can unintentionally or deliberately trigger outputs based on such associations,” says Garraghan.

LLMs hallucinate because errors compound over iterations, says Jo Aggarwal, co-founder and CEO of Wysa.

“Each time an LLM generates a word, there is potential for error, and these errors auto-regress or compound, so when it gets it wrong, it doubles down on that error exponentially,” she says.
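A back-of-the-envelope calculation shows why that compounding matters: if each token is generated correctly with probability p, a response of n tokens is entirely correct with probability p^n, which decays quickly even when p is close to 1. The toy model below is purely illustrative and is not how any particular LLM is evaluated.

```python
# Toy illustration of per-token errors compounding over a response.
# p is an assumed per-token probability of being "right"; the exponential
# decay, not the exact numbers, is the point.

def prob_fully_correct(p_per_token: float, n_tokens: int) -> float:
    return p_per_token ** n_tokens

for n in (10, 50, 200, 500):
    print(f"{n:>4} tokens at 99% per-token accuracy -> "
          f"{prob_fully_correct(0.99, n):.1%} chance of a fully correct response")
# e.g. 200 tokens at 99% per-token accuracy -> roughly a 13% chance
```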

Dan Balaceanu, co-founder of DRUID AI, highlights the need for rigorous testing and fine-tuning, attributing the problem to the varying quality of the training data and algorithms used from model to model.

“If this data is biased, incorrect or flawed, the AI model may learn incorrect patterns, which can lead to the technology being ill-prepared to answer certain questions. Consistency is key, as is making sure that the training data used is always accurate, timely and of the highest quality.”

Biases can also infiltrate through underrepresentation and overrepresentation of certain groups, skewed content or even the biases of annotators labeling the data, says Nikhil Vagdama, co-founder of Exponential Science. For instance, chatbots trained on historical datasets that predominantly associate leadership with men may perpetuate gender stereotypes.

“Techniques like reinforcement learning can reinforce patterns that align with biased outcomes,” he says. “The algorithms might also assign disproportionate weight to certain data features, leading to skewed outputs. If not carefully designed, these algorithms can unintentionally prioritise biased data patterns over more balanced ones.”
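The leadership example can be made concrete with a toy experiment: a naive frequency-based “model” trained on skewed data simply reproduces the skew in its predictions. The dataset below is fabricated for illustration only and does not describe any real system.

```python
# Toy illustration of representation bias: whatever association dominates
# the (fabricated) training data becomes the model's prediction.

from collections import Counter

training_examples = (
    [("man", "leader")] * 90 + [("man", "assistant")] * 10 +
    [("woman", "leader")] * 20 + [("woman", "assistant")] * 80
)

counts = Counter(training_examples)

def predicted_role(gender: str) -> str:
    # Pick whichever role co-occurs with this gender most often in training.
    return max(("leader", "assistant"), key=lambda role: counts[(gender, role)])

print(predicted_role("man"))    # -> "leader"
print(predicted_role("woman"))  # -> "assistant": the skew becomes the prediction
```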

Additionally, geopolitical and corporate motivations can compound these risks. John Weaver, Chair of the AI Practice Group at McLane Middleton, points to Chinese chatbots trained on state-approved narratives.

“Depending on the context, the misinformation could be annoying or harmful,” says Weaver. “An individual who manages a database of music information and creates a chatbot to help users navigate it may instruct the chatbot to disfavor Billy Joel. Generally speaking, that’s more annoying than harmful — unless you’re Billy Joel.”

Weaver also references a notable 2022 incident involving Air Canada’s chatbot, which mistakenly offered a passenger a bereavement discount it wasn’t authorized to provide.

“Trained with the wrong data — even accidentally — any chatbot could provide harmful or misleading responses. Not out of malice, but out of simple human error — ironically, the type of mistake that many hope AI will help to eliminate.”

Power and Responsibility

Wysa co-founder Aggarwal emphasizes the importance of creating a safe and trustworthy space for users, particularly in sensitive domains like mental health.

“To build trust with our users and help them feel comfortable sharing their experiences, we add non-LLM guardrails both in the user input as well as the chatbot output,” Aggarwal explains. “This ensures the overall system works in a more deterministic manner as far as user safety and clinical protocols are concerned. These include using non-LLM AI to classify user statements for their risk profile, and taking potentially high-risk statements to a non-LLM approach.”
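The layered design Aggarwal describes can be pictured as a simple router: a deterministic screen runs first, and anything it flags never reaches the generative model. In the illustrative sketch below, a crude keyword check stands in for Wysa’s actual risk classifier, and the helper functions are hypothetical.

```python
# Minimal sketch of a layered guardrail: a deterministic, non-LLM screen
# decides whether a message is routed to scripted content or to the model.
# The keyword list and helpers are illustrative stand-ins only.

HIGH_RISK_PHRASES = ("hurt myself", "end my life", "no reason to live")


def classify_risk(message: str) -> str:
    text = message.lower()
    return "high" if any(p in text for p in HIGH_RISK_PHRASES) else "low"


def scripted_crisis_response() -> str:
    # Deterministic, pre-reviewed content, never generated on the fly.
    return "It sounds like you're going through a lot. Here are resources that can help..."


def generate_llm_reply(message: str) -> str:
    # Hypothetical call to the generative model for ordinary conversation.
    return f"(LLM reply to: {message})"


def respond(message: str) -> str:
    if classify_risk(message) == "high":
        return scripted_crisis_response()  # high-risk input never reaches the LLM
    return generate_llm_reply(message)


print(respond("I had a rough day at work"))
print(respond("I feel like there's no reason to live"))
```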

“Chatbots hold immense potential to transform industries,” says Patra. “But their implementation demands a balance between innovation and responsibility.”

Why do chatbots go rogue? “It’s a mix of poor guardrails, human mimicry and a truth no one likes to admit: AI reflects us,” adds Nyman. “A poorly trained chatbot can magnify our biases, humor, and even our darker impulses.”
