Interos, a company providing supply chain resilience and risk management software, emailed me to say that there was a category of supply chain risk everyone seemed to be ignoring – AI risk.
Companies use risk management software, like the Interos solution, to monitor and analyze supplier risk events in real time. These are big data platforms that monitor news sources and assorted databases from governments, financial institutions, ESG NGOs, and other sources to detect when an adverse event has occurred or may be about to occur.
It is well known that ChatGPT can hallucinate. Almost all supply chain software companies are talking about how they are incorporating generative AI into their solutions and how this could improve user interfaces. Most argue that when the UI is trained with the company’s own data, the risk of hallucination is small.
But what Interos is talking about is different: not just the risk of generative AI hallucinations, but the risks associated with all forms of AI. These include data poisoning and model corruption, which, Interos argues, pose significant challenges for organizations integrating AI into their operations. They are right; I don’t hear anyone else discussing this risk.
I interviewed Ted Krantz, the new CEO of Interos, to learn more. Mr. Krantz argues that with Cloud-based architecture, componentized software, and embedded analytics, there are significant flows of information across ERP and supply chain platforms. That information comes from inside the platform applications and increasingly from outside sources, like Interos. These platforms need high-fidelity signals that can be trusted.
The life cycle path of the data, Mr. Krantz continued, includes an input stage, the model, and the output. All three of those checkpoints have challenges and opportunities.
Garbage In, Garbage Out
Garbage in, garbage out refers to the data integrity problem on the input side. Solutions, particularly those that leverage public data, like risk management applications, are very reliant on the quality of the signal. Is the information from these websites factual? Is it fictional? “So, there’s a corruption component at the input level that all of us have to struggle through.” This is true whether it’s a generative AI application or a more traditional form of artificial intelligence.
Some algorithms can help clean data. These can be as simple as logic that validates a ZIP code field: a US ZIP code should have five digits, so a four-digit entry is wrong. Or data cleaning tools can suggest that both “P&G” and “Procter & Gamble” probably refer to the same company.
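These two kinds of checks – a format rule and a name-reconciliation rule – can be sketched in a few lines. This is a hypothetical illustration, not code from any vendor’s product; the alias table stands in for far more sophisticated entity-resolution logic.

```python
import re

# A US ZIP code is five digits, optionally followed by a four-digit
# extension (ZIP+4).
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def zip_is_valid(value: str) -> bool:
    """Return True if the value looks like a well-formed US ZIP code."""
    return bool(ZIP_RE.match(value.strip()))

# A tiny alias map standing in for real entity-resolution logic, which
# would typically use fuzzy matching and curated reference data.
COMPANY_ALIASES = {
    "p&g": "Procter & Gamble",
    "procter & gamble": "Procter & Gamble",
}

def canonical_company(name: str) -> str:
    """Map a known alias to its canonical name; otherwise pass through."""
    return COMPANY_ALIASES.get(name.strip().lower(), name.strip())
```

Simple rules like these catch the easy errors; as the next paragraphs make clear, most input corruption is far messier and still needs human review.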
However, numerous other input errors can occur, particularly surrounding something as complex as a global supply chain. “This gets really complicated, really fast,” Mr. Krantz continues. Interos provides risk scoring across six different types of risk. “Each one of those independent risk factors has individual, unique variables that can require manual intervention and scrubbing to adjust for and correct.” There could be changes on the regulatory front; for example, over 15,000 companies were added to the US restricted entities list in 2023 and 2024.
Or perhaps a piece of legislation’s go-live date is delayed, which changes the scoring. “It’s a hornet’s nest of literally countless potential corruptions at the input level that you’re constantly adjusting to. So, the primary point here is that for the foreseeable future, we need a team that is constantly checking the data.” The team, the CEO explains, is constantly “banging” on the system to try to detect errors as well as interacting with customers who think some of the scores may be wrong. “This is endless. It’s a beast. People don’t just get replaced here. They actually get put in more strategic positions.”
The Risk of AI Model Corruption
The AI models can also become corrupted. “The complexity at the model level is the orchestration of the private signals, the company signals, which signals are superseded by others, and getting that calibration correct.” For Interos, the AI model calculates the Interos risk score on a 0 to 100 scale. Green, yellow, and red indicators, along with maps and monitoring capabilities, are attached to the scores.
At the model level, one set of issues surrounds how that score is framed. “What are the variable weightings associated with that score?” How much weight should be given to each variable that makes up the score? “Much like at the input level, we need a team around this that constantly calibrates how the score should be calculated.” And like at the input level, ongoing collaboration with clients is necessary to ensure that the scoring mechanism is accurate. There is always a “human in the loop. If anyone’s saying that they don’t have that, they’re just not being truthful.” For example, online news items can generate event data. But to create real-time maps surrounding that risk often requires humans to tune the algorithm.
Whatever score is generated, “customers are naturally going to challenge that score. We have to have a way for us to frame the integrity of the Interos score that is strictly an unbiased industry purview based on the data that we see.” For example, a customer might see a cyber risk score of 70. Sophisticated customers seek to understand how that score was generated and argue that if the score were calibrated differently, their score would be higher. And that customer might be right. There has to be an element of collaboration around the risk model.
One thing Interos is working toward is giving customers the ability to weight the parameters themselves. For example, a supplier’s risk score might be based partly on the FICO creditworthiness score generated by the Fair Isaac Corporation. In the future, Interos customers may decide to give that variable either more or less weight.
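The kind of customer-adjustable weighting described here can be sketched simply. The factor names, scores, and weights below are hypothetical and are not Interos’s actual model; the point is only that changing a weight changes the composite score, which is why calibration and customer collaboration matter.

```python
# A minimal sketch of a weighted 0-100 composite risk score with
# customer-adjustable variable weights. All names and values are
# hypothetical.

def risk_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(factors[k] * weights[k] for k in factors) / total_weight

factors = {"cyber": 70.0, "financial": 85.0, "geopolitical": 60.0}

# Default: every factor counts equally.
default_weights = {"cyber": 1.0, "financial": 1.0, "geopolitical": 1.0}

# A customer who trusts the financial signal less can down-weight it,
# shifting the composite score.
custom_weights = {"cyber": 1.0, "financial": 0.5, "geopolitical": 1.0}
```

With the default weights the composite is the plain average of the three factors; halving the financial weight pulls the composite toward the other two signals. Real models layer many more variables, and, as Mr. Krantz notes, some are too complex to hand over to customers right away.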
Interos’s CEO points out that the risk models have different levels of complexity. Some of the data, like FICO scores, is “quasi-historical.” In some cases, like predicting storm paths and which suppliers might be impacted, it is a real-time prediction. For more complex models, Interos needs to move more slowly before allowing customers to change the parameters.
The Risk that AI Outputs Can Be Corrupt
Finally, AI outputs can be corrupted. One risk here is the potential loss of intellectual property. This is the idea that a hacker or malign government might be able to find an entry point to a company’s application and view, corrupt, or lock up the data. All enterprise software suppliers need robust cybersecurity in place.
Interos just published a report called 5 Supply Chain Predictions You Need to Know in 2025. Interos predicts that traditional cyber attacks – malware, ransomware, phishing, etc. – will continue in 2025, but they warn that we need to be on the lookout for more disruptions to the physical infrastructure that is foundational to our digital world. Geopolitical rivalries underpin the potential for significant cyber disruptions to the hardware and software we depend on to make our world go round. Increasingly, enterprise applications run on public clouds. An attack that takes down a public cloud platform does not affect just one company; it affects numerous companies.
In summary, AI risks are much greater than the risks associated with Generative AI hallucinations.