Seeking to tap into the power of artificial intelligence, many companies are working on interactive AI systems. Such systems can provide a variety of valuable benefits, from enabling always-on customer service to enhancing existing products to helping employees more quickly, easily and efficiently perform tasks.
There are many cost, competitive and productivity advantages that a well-designed interactive AI system can bring—but developing one comes with complications companies must be prepared for. Below, 20 members of Forbes Technology Council detail some of the challenges that come with building a robust, safe and well-functioning interactive AI system and how they can be solved.
1. Framing The Problem To Be Solved
In my experience, the biggest issue organizations face when developing an AI system is framing the problem to be solved. Too often, AI is implemented on top of old, bloated processes with the aim of making a business function more efficiently. To deliver real impact, a more effective mindset is, “How do we use this technology to solve an issue that has been impossible to address before?” – Mark Cameron, Alyve Consulting
2. Maintaining Internal Capabilities And Stakeholder Alignment
A key challenge in developing an interactive AI system is ensuring you have the internal capabilities needed to build and maintain it, including data, infrastructure and expertise. Additionally, aligning stakeholders’ and users’ expectations is crucial. Misalignment can lead to dissatisfaction and underutilization of the AI system. – Fabiana Clemente, YData
3. Minimizing Hallucinations
One challenge in developing an interactive AI system is minimizing hallucinations, where the AI generates inaccurate or nonsensical outputs. You can minimize hallucinations and ensure more accurate, relevant responses by leveraging proprietary knowledge and building robust workflows with continuous feedback from end users. – Rick Zhuang, Firework
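Grounding responses in proprietary knowledge, as described above, is often implemented with retrieval-augmented generation. The sketch below is illustrative only: a production system would use embeddings and a vector store, while here a simple keyword-overlap retriever and a context-constrained prompt (the `knowledge_base`, `retrieve_context` and `build_prompt` names are assumptions) show the basic idea.

```python
# Illustrative sketch: ground answers in a proprietary knowledge base so the
# model cannot stray beyond known facts. Real systems use embedding search.

knowledge_base = [
    "Our premium plan includes 24/7 phone support.",
    "Refunds are processed within 5 business days.",
    "The mobile app supports iOS 15 and later.",
]

def retrieve_context(query: str, top_k: int = 2) -> list[str]:
    """Rank knowledge-base entries by keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(doc.lower().split())), doc)
        for doc in knowledge_base
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str) -> str:
    """Constrain the model to answer ONLY from retrieved context."""
    context = "\n".join(retrieve_context(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
```

Pairing retrieval with an explicit "answer only from context" instruction, plus the continuous end-user feedback the quote mentions, is what keeps responses anchored to verifiable material.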
4. Ensuring Relevant Responses
A key challenge in building an interactive AI system is ensuring relevant responses. With users making wide-ranging requests, there must be intelligent orchestration to grasp true intent and optimize inputs and outputs. Without such orchestration, interactions quickly become hit-or-miss. Orchestration will help the system develop trustworthy, contextualized responses and create seamless user experiences. – Alex Saric, Ivalua
5. Creating A Natural, Conversational Model
Creating natural, coherent, conversational AI requires advanced language models trained on massive knowledge bases and interactive environments. Curated training data and infrastructure enable rapid feedback and the safe exploration of cutting-edge techniques such as commonsense reasoning and open-domain Q&A. This iterative process unlocks the next era of interactive AI experiences. – Karan Jain, NayaOne
6. Establishing Trust In The Results
Trust is a major challenge. The best way to build trust is to ensure that humans are in the loop. In machine learning systems, human-in-the-loop approaches are crucial for establishing trust in advance of full automation. Humans must treat GenAI outputs as if they were created by a junior employee, reviewing, validating and correcting these outputs as necessary. – Lalitha Rajagopalan, ORO Labs, Inc.
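The "review it like a junior employee's work" principle above can be operationalized by routing outputs to a human review queue instead of auto-publishing them. This is a hedged sketch of one common workflow; the confidence threshold and queue structure are assumptions, not a prescribed design.

```python
# Illustrative human-in-the-loop routing: outputs below a confidence
# threshold are queued for human review rather than released automatically.

REVIEW_THRESHOLD = 0.85
review_queue: list[dict] = []

def route_output(text: str, confidence: float) -> str:
    """Auto-approve confident outputs; queue the rest for a human."""
    if confidence >= REVIEW_THRESHOLD:
        return text  # released without intervention
    review_queue.append({"draft": text, "confidence": confidence})
    return "Pending human review."
```

As trust in a given task grows, the threshold can be lowered gradually, moving toward the full automation the quote describes.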
7. Implementing Guardrails To Ensure Accuracy
Accuracy isn’t just an internal development challenge; it’s a hot topic for anyone discussing AI. Hallucinations are always a concern with interactive AI, but with the right guardrails in place, those hallucinations can be mitigated. Multiple layers of guardrails can ensure heightened accuracy, and implementing those guardrails should be just as big of a priority as AI development itself. – Frank Fawzi, IntelePeer
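The layered-guardrails idea above can be sketched as checks on both sides of the model call: one layer screens the incoming request, another validates the response before it is released. The topic list, source-citation rule and function names here are illustrative assumptions; real deployments typically combine policy classifiers, allow-lists and citation checks.

```python
# Illustrative two-layer guardrail around a model call: screen the input,
# then validate the output before releasing it to the user.

BLOCKED_TOPICS = {"medical diagnosis", "legal advice"}

def input_guardrail(user_message: str) -> bool:
    """Layer 1: refuse requests on topics the system must not handle."""
    text = user_message.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def output_guardrail(response: str, allowed_sources: list[str]) -> bool:
    """Layer 2: only release answers that cite an approved source."""
    return any(src in response for src in allowed_sources)

def guarded_reply(user_message: str, model_reply: str, sources: list[str]) -> str:
    if not input_guardrail(user_message):
        return "I can't help with that topic."
    if not output_guardrail(model_reply, sources):
        return "I couldn't verify that answer, so I'd rather not guess."
    return model_reply
```

Each layer is simple on its own; it is the stacking of independent checks that makes the overall system harder for a hallucination to slip through.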
8. Minimizing Bias
Algorithmic discrimination is one of the main challenges when developing an interactive AI system. To minimize bias, companies need to explore creating privacy-focused AI models that leverage high-performance computing power and enable real-time, precise decision-making without necessarily compromising user privacy. – Justin Lie, SHIELD AI Technologies
9. Managing Access Rights
Managing access rights for AI accounts is difficult. Moreover, if AI relies on faulty data sources or training, it could lead to serious problems. Another challenge is that large and sensitive data models may require human verification, which could take a lot of time. And if hackers break into the communication between users and AI systems, they could easily carry out insider-style attacks. – Thangaraj Petchiappan, iLink Digital
10. Ensuring Compliance With Ethical And Legal Standards
The biggest challenge is earning and keeping users’ trust. Ensure your system’s information adheres to legal restrictions and moral guidelines, providing accurate data. Train AI models on diverse, legally acquired datasets. Handle user data with care, avoiding its use for model training, and ensure users retain ownership of their data. This approach helps maintain trust and complies with legal and ethical standards. – Slava Podmurnyi, Visartech Inc.
11. Creating A Model That Can Understand And Respond To A Range Of Inputs
Ensure the AI model can understand and respond appropriately to a wide range of user inputs, including ambiguous or poorly phrased queries. This challenge, in the domain of natural language understanding (NLU), can be addressed by training the AI model with large and diverse datasets. This broad coverage ensures the model can handle the various ways users might phrase their queries. – Andy Boyd, Appfire
12. Remaining Sensitive To Varying Cultural Norms
The big challenge is cultural sensitivity. Different cultures have unique norms, making it difficult for an AI system, often trained on broad or universal values, to cater to all appropriately. This can be overcome by creating sovereign solutions tailored to specific regions and using local datasets to train the AI. This approach ensures that the system is aligned with the cultural context. – Andre Reitenbach, Gcore
13. Ensuring Data Privacy And Security
One emerging issue in developing an interactive AI system is ensuring data privacy and security. This challenge can be addressed through robust encryption protocols, stringent access controls and continuous vulnerability monitoring. Additionally, integrating privacy-by-design principles during development ensures proactive data protection, preempting potential issues before they manifest. – Christopher Rogers, Carenet Health
14. Sanitizing Inputs
A company must sanitize the inputs to its AI system or risk the exposure of sensitive data or the system behaving in unpredictable ways. Using techniques such as paraphrasing user inputs ensures that users aren’t hijacking your model through the inclusion of malicious prompts. You should also be checking and validating outputs to ensure they match expected targets and behaviors. – Matt Dickson, Eclipse Telecom
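The input-sanitization and output-validation steps above can be sketched as pre- and post-checks around the model. The regex patterns below cover a few well-known prompt-injection phrasings and are illustrative only, not an exhaustive defense; paraphrasing user inputs, as the quote suggests, is a complementary technique.

```python
import re

# Illustrative sketch: flag common prompt-injection phrasing before text
# reaches the model, and check outputs for leaked internal-prompt markers.

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def validate_output(output: str) -> bool:
    """Reject outputs that leak markers of internal prompts."""
    return "system prompt" not in output.lower()
```

Pattern matching catches only the crude attempts; checking outputs against expected targets, as the quote notes, is what backstops the cases that slip past the input filter.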
15. Emphasizing The Need For Human Verification
People tend to assume AI systems are correct, and therefore, they don’t edit or override the results. This could be because they’re busy, lazy or don’t feel confident questioning the output. But AI is imperfect and still requires human verification for many tasks. Companies need to prepare for this and be explicit about the level of intervention needed to get accurate results. – David Talby, John Snow Labs
16. Optimizing Timing For Customer-Facing Tools
A challenge in developing interactive AI is ensuring it’s well-timed. Consider chatbots: They often pop up at random, offering help or suggestions that may be wholly irrelevant to you. Particularly in industries such as commerce, you want to engineer your AI around key moments in the buying journey. It should interact with consumers when they’re actually open to receiving support or guidance. – Raj De Datta, Bloomreach
17. Fostering Personalization
The best types of interactive AI systems are ones that foster personalization. Accomplishing this starts with feeding your algorithms the right data and, even more importantly, the right context. Training systems on the wrong data or too much data inherently creates bottlenecks. Tapping into the right datasets directly leads to customized and intuitive interactive AI systems. – Gleb Polyakov, Nylas
18. Translating LLM Outputs Into Structured Formats
A major challenge in developing an interactive AI system is that large language models output natural language, while most systems use formats such as JSON or XML. Overcoming this requires a conversion layer to translate LLM outputs into structured formats. Utilizing robust prompt engineering techniques in this layer can accurately map and transform responses, ensuring seamless integration with existing systems. – Hashim Hayat, Walturn
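A conversion layer like the one described above often pairs a prompt instruction ("respond only with JSON") with a tolerant parser on the receiving end. This is a minimal sketch under that assumption: it pulls the first JSON object out of free-form model text so downstream systems get a structured payload.

```python
import json
import re

def extract_json(model_output: str) -> dict:
    """Pull the first {...} object out of free-form model text."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```

In practice the layer would also validate the parsed object against a schema and retry the model call on failure; the parsing step shown here is the seam where natural language meets the JSON or XML the rest of the stack expects.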
19. Ensuring Transparency
Most AI models today operate as “black boxes,” offering very little visibility into how decisions are made. This can lead to biased or unsafe decisions. As far as possible, companies leveraging AI in their products should prioritize transparency, including providing information about how the AI functions and how they manage user data. – Mike Britton, Abnormal Security
20. Classifying And Protecting The Data
An interactive AI system needs to be customized for the user and purpose, and this requires contextual data. When developing custom AI models using retrieval-augmented generation, properly managing user access and enforcing data permissions is crucial for security. This starts with classifying data, then identifying and enforcing access permissions that align with the business purpose of the system and the user. – Claude Mandy, Symmetry Systems Inc.
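The classify-then-enforce approach above can be sketched as permission-aware retrieval: each document carries a classification label, and the retrieval step filters by the user's entitlements before any text reaches the model. The labels, roles and clearance mapping below are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative permission-aware retrieval for a RAG pipeline: documents are
# classified, and only text the user's role is cleared for reaches the model.

@dataclass
class Document:
    text: str
    classification: str  # e.g. "public", "internal", "restricted"

ROLE_CLEARANCE = {
    "customer": {"public"},
    "employee": {"public", "internal"},
    "finance_admin": {"public", "internal", "restricted"},
}

def retrieve_for_user(docs: list[Document], role: str) -> list[str]:
    """Return only document text the role is cleared to see."""
    allowed = ROLE_CLEARANCE.get(role, set())
    return [d.text for d in docs if d.classification in allowed]
```

Filtering before generation matters because anything placed in the model's context can surface in its answer; enforcing permissions at retrieval time keeps restricted data out of responses by construction.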