As organizations explore ways to harness artificial intelligence, including the large language models that power generative AI, it’s essential to be prepared for both “misfires” and security risks. AI tools’ capacity for bias and returning false or misleading information (a.k.a. “hallucinating”) necessitates careful training and prompting. Further, enterprise AI use cases often rely on interfacing with essential systems and accessing sensitive data, so robust security controls are critical.
It’s important to be ever-mindful that no model is flawless and that AI’s many benefits can come with serious security vulnerabilities attached. Below, 20 members of Forbes Technology Council share tips to help organizations account for the abilities, limitations and security challenges of AI.
1. Prioritize Preventing Data Leaks And Attacks
To protect AI models, prioritize a strategy to prevent data leaks and adversarial attacks, incorporating strong data governance, anonymization, encryption and strict access controls. Include adversarial resilience training and model monitoring, and integrate security throughout the AI lifecycle. Utilize explainable AI for enhanced transparency and manipulation detection. – Jennifer Gold, Apollo Information Systems
2. Protect Your APIs
AI models that have access to APIs and integrations open up new attack vectors. A recent research paper details how LLMs with API access have been used to propagate worms. You can limit the impact of such attacks by making sure your APIs are scoped to a minimum dataset. Don’t use admin or super-user APIs, and put additional safety checks in place on any APIs that are accessible to LLMs. – Mohammad Nasrullah, Integry
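As a rough illustration of this kind of scoping, here is a minimal Python sketch of least-privilege gating for LLM tool calls; the action allowlist and the fetch_record stub are hypothetical stand-ins for a real integration layer.

```python
# Minimal sketch of least-privilege gating for LLM tool calls. The
# ALLOWED_ACTIONS allowlist and fetch_record() are hypothetical
# stand-ins for a real integration layer.

ALLOWED_ACTIONS = {
    # action name -> fields the LLM is allowed to read
    "lookup_order": {"order_id", "status", "ship_date"},
}

def fetch_record(action: str, params: dict) -> dict:
    # Stand-in for a real read-only, non-admin API client (hypothetical)
    return {"order_id": params["order_id"], "status": "shipped",
            "ship_date": "2024-05-01", "customer_email": "pii@example.com"}

def call_api_for_llm(action: str, params: dict) -> dict:
    """Run an LLM-requested action only if it is allowlisted, returning
    only the minimum dataset that action is scoped to."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action not permitted for LLM: {action}")
    record = fetch_record(action, params)
    allowed = ALLOWED_ACTIONS[action]
    # Strip everything outside the scope (here, customer_email)
    return {k: v for k, v in record.items() if k in allowed}

print(call_api_for_llm("lookup_order", {"order_id": "A123"}))
# {'order_id': 'A123', 'status': 'shipped', 'ship_date': '2024-05-01'}
```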
3. Secure The AI Supply Chain
Businesses must secure their AI supply chain to ensure AI is being delivered safely and securely. Most organizations today lack the visibility needed to see, know and manage their AI systems so they can identify security risks, deploy safer applications and become more resilient to attacks. Adopting a secure-by-design approach to AI can help businesses protect themselves from unique AI supply chain vulnerabilities. – Ian Swanson, Protect AI
4. Ensure Confidential Data Isn’t Exposed
It’s crucial to ensure generative AI models do not inadvertently expose or reveal confidential data used during training or prompting. Data governance protocols such as encryption, anonymization and access controls can protect data and maintain compliance. Monitoring and auditing AI system activities can also prevent security breaches, ensuring confidentiality and the integrity of data in the AI life cycle. – Asif Hasan, Quantiphi
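For one small piece of such a protocol, here is a minimal sketch of redacting obvious PII before text is sent to a model or retained for training; the patterns are purely illustrative, and production systems typically rely on a dedicated PII-detection service rather than hand-rolled regexes.

```python
import re

# Minimal sketch: redact obvious PII before it reaches a model or a
# training corpus. These patterns are illustrative, not exhaustive.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```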
5. Be Wary Of Input Injection Attacks And Output Manipulation
If the adoption of enterprise AI and GenAI tools such as ChatGPT also includes data input or data training by the organization—such as customization for customer use cases—then input injection attacks and manipulation of outputs could be a concern. This is similar to boundary condition manipulation in any computer-based system, which attackers use to force false outputs. – Ondrej Krehel
6. Think Through The Consequences Of Model Hallucinations
Depending on the use cases to which a tool such as ChatGPT is applied, there are two types of concerns. The first is the ways the outputs of a technology such as ChatGPT can impact the business. For example, what are the impacts and consequences in the event the model hallucinates or misrepresents the business’s brand? The second is the potential exposure of sensitive information (whether business or personal). – Fabiana Clemente, YData
7. Be Cognizant Of Copyright Concerns
Businesses venturing into enterprise AI must navigate copyright concerns, a challenge even for giants such as Google. These concerns arise when AI systems use content without proper authorization, potentially resulting in outputs that closely mirror someone else’s original work. To prevent this, companies must adhere to copyright laws and ensure their AI systems are trained on noninfringing data sources. – Gabriel Lopez, Orange Loops
8. Establish Precise Rules For Providing Corporate Data To Free Tools
Keep in mind that the operators of free AI tools receive whatever data users voluntarily give them, so it’s important to establish precise rules for providing corporate data to such tools. The situation is different if a company uses an AI product developed specifically for its needs; in that case, precise conditions for the use of the data can be agreed upon with the supplier. – Filip Dvorak, Filuta AI
9. Understand The Information Customer-Facing Chatbots May Expose
Guard your brand’s reputation. By their very nature, LLMs in their current architecture will always hallucinate. You can try to limit this through fine-tuning, retraining and adding or adjusting guardrails, but there is only so much you can do. Businesses should take this into account, especially when exposing solutions to external customers. Given the right prompt, chatbots will lie, promote competitor solutions, misbehave and so on. – Pawel Rzeszucinski, Team Internet Group PLC
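As a simple illustration of an output-side guardrail, the sketch below filters a chatbot reply against a hypothetical competitor list before it reaches the customer. Real deployments layer several such checks (classifiers, allowlists, human review) rather than relying on a single keyword filter.

```python
# Minimal sketch of an output guardrail for a customer-facing chatbot.
# The competitor names and fallback message are hypothetical.

COMPETITOR_NAMES = {"acme corp", "globex"}
FALLBACK = "I can only help with questions about our own products."

def guard_reply(model_reply: str) -> str:
    """Return the model's reply unless it mentions a competitor."""
    lowered = model_reply.lower()
    if any(name in lowered for name in COMPETITOR_NAMES):
        return FALLBACK
    return model_reply

print(guard_reply("You might prefer Globex for that."))  # -> fallback text
```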
10. Confirm Your AI Partners Are Ethical
As the founder of a generative music startup working with large brands, I can tell you that procurement and legal teams are all over the copyright implications. No one wants to get sued for AI-related infringement, whether it’s in marketing content or elsewhere. Executives should make sure their AI partners are ethical, properly licensing or owning all training data and subsequent outputs. – Antony Demekhin, Tuney
11. Guard Against AI Model Poisoning
The risk of AI model poisoning, which refers to the intentional manipulation of a model’s training data or learning process to skew its outputs toward a desired hypothesis or induce biased behavior, should be a security concern. This could lead to suboptimal decisions, unintended discrimination against some user groups and/or the exposure of sensitive information, resulting in violations of security protocols. – Amit Jain, Roadz
12. Safeguard The AI Models Themselves
A business should consider safeguarding the AI models themselves, which can be targeted by adversaries to extract sensitive information or manipulate their behavior. To maintain security and enforce privacy programmatically, a business might employ techniques such as model encryption, differential privacy and federated learning. – Lindsey Witmer Collins, WLCM “Welcome” App Studio
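To make one of these techniques concrete, here is a minimal sketch of a differential-privacy primitive: releasing an aggregate statistic with calibrated Laplace noise. The epsilon value is an illustrative privacy budget; full DP training (such as DP-SGD), model encryption and federated learning involve considerably more machinery.

```python
import random

# Minimal sketch of one differential-privacy primitive: releasing an
# aggregate statistic with calibrated Laplace noise. epsilon is an
# illustrative privacy budget, not a recommended setting.

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Noisy count; the sensitivity of a counting query is 1."""
    scale = 1.0 / epsilon
    # The difference of two iid exponentials is Laplace-distributed
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

print(dp_count(1042))  # close to the true count, with plausible deniability
```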
13. Know Where The Model Is Hosted And Who Can Access It
Ask yourself, “Where is the model being hosted, and who has access to it?” Several customers have asked us to rewrite our existing OpenAI functionality (which uses OpenAI’s API directly) to use the Microsoft Azure OpenAI platform instead. This allows them to keep all of their data inside their secure Azure infrastructure versus going over the public internet. – Adam Sandman, Inflectra Corporation
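For reference, that switch can look roughly like this with the openai Python SDK (v1+); the endpoint, key variable, API version and deployment name below are placeholders to be replaced with your own Azure resource's values.

```python
import os
from openai import AzureOpenAI  # openai SDK v1+

# Endpoint, key variable, API version and deployment name are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # check your resource's supported versions
)

# Requests now go to your Azure endpoint rather than api.openai.com;
# note that "model" refers to your Azure deployment name.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```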
14. Moderate GenAI-Created Content Closely
Generative AI enables the creation of new and unique content, whether an image, text or video. Content moderation is a key requirement to make sure the application does not output illegal, inappropriate or otherwise undesirable content. This will help a company avoid lawsuits that could arise from such content being used for unwanted or inappropriate purposes. – Vishwas Manral, Precize Inc.
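One possible moderation gate, sketched below, screens generated text through OpenAI's moderation endpoint before it is displayed or published; other vendors and open-source classifiers fill the same role, and image or video content needs its own moderation pipeline.

```python
from openai import OpenAI

# One possible moderation gate, using OpenAI's moderation endpoint as
# an example. Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

def is_publishable(generated_text: str) -> bool:
    """Screen generated content before it is displayed or published."""
    result = client.moderations.create(input=generated_text)
    return not result.results[0].flagged
```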
15. Be Prepared To Counter Jailbreaking Attempts
Data security is a concern while AI models are being built, but there is a greater risk businesses should account for: jailbreaking. This entails bypassing restrictions to gain greater control, tricking an LLM into ignoring safety protocols or the content filters designed to prevent harmful responses. – Ankit Virmani, Google Inc.
16. Plan A Process For Filtering And Sanitizing Inputs
Adversaries will try to manipulate AI-driven systems with injection-like attacks, embedding malicious instructions in legitimate user inputs. Since AI language models operate within natural conversation, distinguishing data from instructions is more challenging than in a familiar structured query language injection. Plan an adaptive process for filtering and sanitizing inputs to enhance security. – Ilia Sotnikov, Netwrix
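A first pass at such filtering might look like the sketch below. The phrase list is purely illustrative; because injections arrive in natural language, real defenses combine heuristics like this with trained classifiers and strict output-side controls.

```python
import re

# Minimal sketch of a first-pass input filter for prompt-injection
# attempts. The phrase list is illustrative, not exhaustive.

SUSPICIOUS_PHRASES = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(user_text: str) -> str:
    """Reject inputs that match known injection heuristics."""
    for phrase in SUSPICIOUS_PHRASES:
        if re.search(phrase, user_text, re.IGNORECASE):
            raise ValueError("Input rejected: possible prompt injection")
    return user_text
```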
17. Establish Employee Training On Using AI Models
Revealing intellectual property and having generative AI “learn” your trade secrets should be top of mind when implementing AI models. Training employees on proper usage and the protection of sensitive IP will always revolve around countering social engineering and ensuring adherence to policy. Today, there’s little training on how to use AI and on what it could do with the data given to it during the course of a session. – Tom Roberto, SG Network Services
18. Comply With Applicable Regulations
Data privacy, security and confidentiality are paramount, especially for global organizations. Compliance with regulations such as the General Data Protection Regulation and the California Consumer Privacy Act is crucial to maintaining customers’ trust (the intensity of concern varies by industry and applicable regulations). Enterprise AI and GenAI models heavily rely on provided data, emphasizing the need for tailored governance and controls to meet an organization’s unique needs. – Geetha Kommepalli, Skillsoft
19. Test Response Accuracy Thoroughly
To reduce liability risks from AI “hallucinations,” test AI response accuracy thoroughly. Use layered validation checks, feedback loops for model refinement and version control for safe rollbacks. Use simulated real-world scenarios, bias and sensitivity analyses, and third-party audits to assess model accuracy. These measures ensure AI-generated responses are accurate, reducing misinformation. – Ravi Soni, Amazon Web Services
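As a minimal sketch of such validation, the harness below replays a golden set of test cases against the model and reports an accuracy ratio to track across versions; ask_model and the test cases are hypothetical, and a real suite would live in version control alongside the prompts it validates.

```python
# Minimal sketch of an accuracy regression test for an AI assistant.
# ask_model is a hypothetical wrapper around whatever model you call;
# the golden set below is illustrative.

GOLDEN_SET = [
    {"prompt": "What is our refund window?", "must_contain": "30 days"},
    {"prompt": "Do we ship internationally?", "must_contain": "yes"},
]

def run_accuracy_suite(ask_model) -> float:
    """Return the fraction of golden-set cases the model answers correctly."""
    passed = 0
    for case in GOLDEN_SET:
        reply = ask_model(case["prompt"]).lower()
        if case["must_contain"].lower() in reply:
            passed += 1
    return passed / len(GOLDEN_SET)  # track this ratio across model versions
```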
20. Track The Metadata Regarding GenAI’s Usage
If you use LLMs such as ChatGPT in business processes, tracking the metadata regarding where generative AI has been used is important. Most organizations aren’t using ChatGPT in a well-defined way; individuals are using it without oversight and without acknowledging that they’re doing so. If we plan on adopting this new technology—and we should—we have to be transparent about when and how it’s being used. – Lewis Wynne-Jones, ThinkData Works
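A lightweight way to start is to log usage metadata on every LLM call, as in the sketch below; the field names and log destination are illustrative.

```python
import json
import time
import uuid

# Minimal sketch of usage-metadata logging for LLM calls. Field names
# and the log destination are illustrative; the point is that every
# generative-AI interaction leaves an auditable record.

def log_llm_usage(user: str, model: str, purpose: str, prompt_chars: int):
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "purpose": purpose,            # e.g., "draft marketing copy"
        "prompt_chars": prompt_chars,  # size only; avoid logging raw prompts
    }
    with open("llm_usage.log", "a") as f:
        f.write(json.dumps(record) + "\n")

log_llm_usage("j.doe", "gpt-4", "summarize meeting notes", 1532)
```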