The deployment of Generative AI (GenAI) is often highlighted in current discussions as a means to achieve significant productivity gains in business. This is nothing new: any tool a company adopts has the primary mission of supporting its performance. That expectation sets a high bar for GenAI, one it often meets only with additional human oversight, which eats into the very productivity gains it promises.
Although the function of GenAI tools is supposedly to ‘generate’ productivity, this promise is hindered by the blurred line between potential and hope, and jeopardized by the confusion between vision and hype.
On one hand, the organizational, technical, and socio-technical challenges of human oversight, transparency, explainability, technical robustness, security, privacy, data governance, non-discrimination, and fairness are nothing less than business performance requirements, and on each of them hope has yet to be matched by demonstrated potential.
On the other hand, some factors can be mistaken for vision but are ultimately about surface appeal, creating excitement that produces more distraction than direction toward business performance: the hype.
Here are six down-to-earth watch-outs about GenAI that leaders should keep on their radar to avoid these two traps: mistaking hope for potential, and hype for vision.
1) There can be no performance without ensuring accuracy and precision—GenAI is no exception.
GenAI’s value depends on its accuracy (how closely its outputs align with reliable references) and its precision (how consistently relevant its responses are). When these two criteria are not met, many business use cases are at significant risk, even the simplest scenarios, such as a chatbot handling customer reservations, as the Air Canada case showed. Inaccuracy and imprecision (not to mention hallucinations) undermine trust, and mistrust, as Bhaskar Chakravorti has highlighted, is among the most critical barriers to GenAI adoption in business settings.
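To make the distinction concrete, here is a minimal sketch of how a team might track both properties before putting a chatbot in front of customers. Everything in it is an illustrative assumption rather than a real evaluation harness: the `ask_chatbot` stub, the reference answers, the crude text-similarity metric, and the 0.8 threshold. Accuracy is scored against trusted reference answers; precision is approximated by how consistently the system answers the same question across repeated runs.

```python
# Sketch: separating accuracy from precision when evaluating a GenAI assistant.
# All names here (ask_chatbot, the reference answers, the 0.8 threshold) are
# illustrative assumptions, not a production evaluation harness.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; a real harness would use a stronger metric."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def ask_chatbot(question: str) -> str:
    """Stand-in for a GenAI call; replace with a real model invocation."""
    return "Refunds must be requested within 90 days of the flight."


# Accuracy: how closely outputs align with reliable references.
references = {
    "What is the refund window?": "Refunds must be requested within 90 days of the flight.",
}
accuracy_scores = [similarity(ask_chatbot(q), ref) for q, ref in references.items()]
accuracy = sum(accuracy_scores) / len(accuracy_scores)

# Precision (consistency): how stable the answers are across repeated runs of
# the same question; divergent answers signal unreliable behavior.
runs = [ask_chatbot("What is the refund window?") for _ in range(5)]
pairwise = [
    similarity(runs[i], runs[j])
    for i in range(len(runs))
    for j in range(i + 1, len(runs))
]
consistency = sum(pairwise) / len(pairwise)

print(f"accuracy ~ {accuracy:.2f}, consistency ~ {consistency:.2f}")
# A deployment gate might require both scores to clear a threshold, e.g. 0.8.
```

Even a gate this crude makes the watch-out operational: if either score dips, the use case is not ready for customers, however impressive the demo.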
2) Anyone claiming that a GenAI is unbiased is either promoting the most biased one, unknowingly holding a biased view, or possibly both at the same time.
GenAI’s biggest limitation is the misconception that it can be unbiased, a belief reinforced by the way some GenAI chatbots respond when asked. Yet a GenAI’s assurance of neutrality calls that very neutrality into question: in asserting that it is not biased, a GenAI chatbot inadvertently reveals the very bias it seeks to deny. A recent study by researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT. That ChatGPT can, even occasionally, claim to be unbiased when asked is paradoxically the most profound bias of all.
3) What is predictable is anything but special; only the unpredictable truly is. And being special sometimes means going off-pattern.
Unlike human creativity, which emerges from personal experiences, emotions, and purpose, GenAI operates within the patterns it has learned and cannot break free of them. As a result, AI-generated content, however sophisticated, lacks the originality and unpredictability that define human innovation and creativity. This difference between creating something new and merely generating variations of what already exists is crucial, as highlighted in a recent study published in Science Advances. AI remains unable to venture into the uncharted territories of thought that mark human ingenuity.
4) Confusing humans with AI began when we started referring to people as ‘Human Resources’, but humans are beings, not resources.
Labeling people as “resources” primes us to value efficiency over empathy and outputs over humanity. This mindset has led us to equate human worth with productivity, seeing ourselves as cogs in a machine, easily replaceable by more efficient mechanisms, thereby setting the stage for fears of GenAI replacing human roles. This fear, compounded by insufficient attention to the human factors of GenAI deployment (support, training, and adoption), acts as a brake on the productivity gains promised by AI, leading to a state of “inacceleration.”
5) The race toward AGI is a new gold rush, and as with the original, not everyone will get richer—or more precisely, smarter.
Those who herald GPT as merely an appetizer before the main course of artificial general intelligence may not be the ones who bring it to life. Just as the electric lightbulb was not invented by perfecting the candle, large language models (LLMs) are unlikely to evolve into something beyond sophisticated chatbots. Among other things, LLMs lack any form of embodiment or interaction with the physical world, cannot reason about cause and effect, struggle to maintain context over long conversations, and falter at tasks that require understanding long-term dependencies or managing multiple streams of information simultaneously.
6) The future of AI does not lie in more Artificial Intelligence, but in more Integrity—with the latter guiding the former.
As Warren Buffett famously said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.” This wisdom raises the question: as we begin to ‘hire’ powerful intelligent machines to perform tasks traditionally done by humans, how do we ensure they possess something akin to what we call integrity? It is integrity that ensures our creations do not become instruments of oppression, discrimination, or alienation. Our ability to create an environment where AI uplifts the very essence of humanity will shape our future.
Let’s strive for potential, not just hope, and build a human-centered vision, not just hype, by prioritizing Artificial Integrity, not just Artificial Intelligence.