In recent months, artificial intelligence has taken technology’s center stage. Professionals ranging from marketers to teachers to artists have experimented with creating content and images with generative AI, and businesses across industries are scrambling to incorporate AI to enhance productivity, cut costs and even improve worker safety.
Yet while we hear a lot about how AI is making life and work easier, its growing use also comes with complications. For example, experts warn of the proliferation of deepfake videos, audio and images, and high-profile media companies have been embarrassed by publishing low-quality and even inaccurate AI-generated content. Below, 20 members of Forbes Technology Council detail some of the ways AI is, or may soon be, making life more difficult for everyone and how these headaches can be addressed.
1. AI Assistants Often Misunderstand Prompts
AI assistants are great, but sometimes they misunderstand us, leading to frustrating loops of repetition and irrelevant answers. Clearer communication guidelines and improved natural language processing could make interactions smoother and less annoying. – Wen Shaw, Cooby
2. LLMs Struggle To Connect Different Knowledge Domains
Large language models such as ChatGPT have issues connecting knowledge from different domains, such as physics and math. Asking questions that connect different domains will most likely lead to wrong answers and conclusions that are purely hallucinatory and deeply incorrect. In the future, generative AI will likely include domain intersections in addition to the simple predictions used today. – Ondrej Krehel
3. It’s Difficult To Track Down The Human Behind Anomalous Actions
AI-enabled hyper-automation helps enhance the speed, efficiency and precision of work operations. However, problems start when systems do not behave as expected. In an AI environment driven by bots and agents, tracing the human who triggered a task that leads to an anomalous action is essential. However, attribution and traceability are arduous, which can make fixing issues nearly impossible. – Vishwas Manral, Precize Inc.
4. Content Generated By AI Must Be Fact-Checked
Workers have benefited from shortcuts thanks to AI. But these shortcuts usually require a new layer of legwork. While AI makes it easier to reduce expert workloads and compose day-to-day communications, wariness of AI-generated content persists. Every worker using AI must now become an “editor” to ensure accuracy and a “human touch,” lest the authenticity of their work be questioned. – Rujul Zaparde, Zip
5. AI-Generated Content Lacks ‘Human Style’
AI’s responses lack “human” authenticity and ignore individual style, making communication generic, dry and artificial. In the future, AI could become more “personal” and able to adapt to individual conversational habits by being trained on private data, such as emails or text messages. For now, editing AI-generated text before sending it is the best way to mitigate the issue and maintain authentic communication. – Victor Shilo, EastBanc Technologies
6. Over-Automated Service Can Frustrate Customers
Over-automating customer service and making it difficult to access human help is a pitfall that needs to be avoided. Feeding chatbots user context, enabling them to carry out actions specific to the user's situation, and giving users the ability to chat with a human or request a human call are great remedies. – Zehra Cataltepe, TAZI AI
7. AI-Enabled Health Research Isn’t Trusted By Patients
There is a danger of erroneous healthcare information being disseminated now that “Dr. Search Engine” is being joined by “Dr. ChatGPT.” AI holds promise to help healthcare providers find answers faster, but most consumers are concerned with the lack of transparency into and vetting of the information used. Having LLMs trained on accurate, trusted healthcare information is required to build trust with patients. – Yaw Fellin, Wolters Kluwer Health
8. Too Many People Overestimate AI’s Abilities
The common belief, “No matter the question, AI is the answer!” sets the wrong expectations and misdirects people when they’re making assumptions or decisions—including both consumers and IT professionals. Continuously educate your staff on the AI capabilities relevant to their field, measure the value of AI initiatives and fail fast. – Yuri Gubin, DataArt Solutions, Inc.
9. Many Companies Are Pursuing AI Without A Real Plan
Every industry is looking at how to use AI to gain a competitive edge, but organizations need clarity about the purpose and potential benefits of implementing AI before they begin. Without a firm data foundation, AI quickly becomes a liability, not an asset. To address specific business challenges and prevent mishaps, companies need to make sure the data they use will support the desired outcome. – Kevin Campbell, Syniti
10. Information Provided Without Context Leads To More Work
AI makes life more difficult when the data it produces is inaccurate. If the output doesn’t include source references, it may result in users spending more time researching the outcome than they would have spent producing the answer themselves. Only using models that include sources and context that a user can trust is vital to improving this situation. – Jamelle Brown
11. Brands May Choose To Block The Use Of Their IP
Brands—as seen by the recent New York Times lawsuit against OpenAI—will become more protective of their intellectual property, raising questions about how companies monetize their assets and where AI fits into the equation. This also has the potential to limit the data available to AI models, keeping some brands and content out of “AI-powered discourse.” It will be a tough balance to strike. – David Talby, John Snow Labs
12. Consumers Are Becoming Disenchanted With AI For Its Own Sake
Companies are rushing to brand themselves as “AI,” leading to hype that sets unrealistic expectations for consumers. As a result, consumers are becoming disenchanted with the concept of AI. Clear education on what AI is included in your technology and how it is safely applied to improve the user experience is important in distinguishing your product from others. – Sarah Lackey, Open Lending
13. Public Trust In Legitimate Media May Be Damaged
As AI-generated content becomes more realistic and deepfakes become more convincing, there’s a risk that we will see an erosion of trust in legitimate images, audio recordings and written documentation. This crisis of credibility could be detrimental to productive dialogue when it comes to matters of accountability, decision making and reconciliation that rely on shared perceptions of reality. – Merav Yuravlivker, Data Society
14. Unnoticed Errors May Be Difficult To Correct
While AI streamlines many tasks, overreliance on AI can introduce challenges. Without human auditing, errors may go unnoticed, requiring extensive time for correction. Human involvement in sampling AI-controlled quality outputs is essential. This hybrid approach ensures AI efficiency while maintaining human oversight to catch errors promptly, enhancing overall quality and productivity. – Christopher Rogers, Carenet Health
15. AI Models May Lose Accuracy Over Time
AI models face a challenge: Once released, they may “drift” and lose accuracy and effectiveness over time. This poses risks for companies that use bots for customer interactions, as they can produce irrelevant or inaccurate responses and lead to customer frustration. Companies must continuously test their bots to detect any early signs of model drift and prevent potential disruptions in performance. – Alok Kulkarni, Cyara Solutions Corp.
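The continuous testing described above can be made concrete. Below is a minimal, hypothetical sketch of one common drift check: comparing the distribution of a model's recent prediction scores against a baseline window using the Population Stability Index (PSI). All names, data and thresholds here are illustrative, not taken from any vendor's tooling.

```python
# Hypothetical drift check: compare recent model scores (in [0, 1]) against a
# baseline window using the Population Stability Index (PSI).
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of scores in [0, 1]."""
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        n = len(sample)
        # Smooth empty bins to avoid division by zero and log(0).
        return [max(c / n, 1e-6) for c in counts]
    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
baseline = [0.1 * (i % 10) + 0.05 for i in range(1000)]             # spread evenly
shifted = [min(0.05 * (i % 10) + 0.55, 0.99) for i in range(1000)]  # scores drifted high
print(round(psi(baseline, baseline), 3))  # near zero: no drift
print(round(psi(baseline, shifted), 3))   # well above 0.25: flag for retraining
```

In practice a check like this would run on a schedule against live traffic, alerting the team when the index crosses an agreed threshold so the bot can be retested or retrained before customers notice degraded answers.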
16. Consumers Can Wind Up In ‘Information Silos’
AI can make it hard for people to access different kinds of news or ideas because algorithms keep showing them the same type of content. Making it easier for people to change their settings can help them see a wider variety of information. – Margarita Simonova, ILoveMyQA
17. The Collection Of Personal Data Raises Privacy Concerns
AI’s collection of personal data raises privacy and security worries. Tighter rules, transparent practices and educating consumers can reduce risks. Bias in decision making stemming from skewed data is another concern. Diverse data, ongoing monitoring and human oversight are vital. Addressing these challenges up front promotes responsible AI development, fostering trust. – Anand Logani, EXL
18. Phishing Attacks Are Becoming More Sophisticated
AI is significantly advancing phishing and social engineering attacks, making them smarter, more personalized and trickier to detect. As AI learns and grows, these threats will only grow, pushing us to stay one step ahead with better security and awareness. The future demands even more robust defenses through the active use of “good” AI. – Hitesh Bhardwaj, Cloud4C
19. The Volume Of Automated Sales Calls And Emails May Grow
We may see a rise in automated sales calls and emails, since AI will allow us to multiply such communications exponentially. Targeted sales strategies have the potential to become annoying to consumers. On the flip side, AI can also help reduce the noise level by filtering out messages that aren’t generating responses or delivering results. – Kevin Parikh, Avasant
20. Online Interactions Are Becoming Increasingly Inauthentic
Unfortunately, since AI has hit the scene, online content and interactions in professional forums seem to be becoming increasingly inauthentic. People want to engage with and learn from real people with real stories. Those who are able to stay true to their voice, creativity and messaging will differentiate their content and offerings. – Alicia Roach, eQ8