The hype and excitement around artificial intelligence are giving way to a more substantive question: can it actually help people and organizations achieve greater success? Will AI deliver superior customer experiences, enrich people’s work, and create entrepreneurial opportunities? Or is it just the latest shiny new thing?
When done right, AI can be a powerful tool for wowing customers, pleasing employees, and launching new ventures. The catch is doing it right, which means doing it in an ethical and trustworthy manner.
Trust and ethics in AI are what make business leaders nervous. For example, at least 72% of executives responding to a recent survey from the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations.
Successful AI is, and will always be, a people-centric process: boosting people in their work, delivering products and services to customers, and keeping things running smoothly. “AI technology is still in its early stages, and we have to assume that human input and oversight will continue to be crucial in developing responsible AI,” said Jeremy Barnes, vice president of ServiceNow.
While the level of human involvement required may change as AI continues to evolve, “I don’t believe it will ever be a fully hands-off process,” said Barnes. “Continuous improvement in AI requires regular monitoring and updates, relying on user research and human expertise for valuable insights and feedback. This ensures AI systems can evolve and adapt effectively and ethically.”
As with everything else in life, trust in AI needs to be earned. That trust is likely to keep improving, but it’s something that will evolve over years. Right now, trust is possible, but only under very specific and controlled circumstances, said Doug Ross, US chief technology officer at Capgemini Americas.
“Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data.”
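To make the idea concrete, here is a minimal sketch in Python of one kind of guardrail Ross describes: scanning generated text for sensitive-data patterns before it leaves the system. The pattern set and the `apply_guardrail` function are illustrative assumptions, not a real guardrail library; production systems use dedicated PII-detection and bias-evaluation tooling.

```python
import re

# Illustrative sensitive-data patterns. A real deployment would use a
# vetted PII detector, not two hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
}

def apply_guardrail(model_output: str) -> tuple[str, list[str]]:
    """Redact matches and report which pattern types fired."""
    findings = []
    text = model_output
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, findings

safe_text, flags = apply_guardrail("Contact jane@example.com, SSN 123-45-6789.")
# flags == ["ssn", "email"] -> escalate to a human reviewer rather than
# returning the response silently.
```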
At this time, generative AI use cases require significant human oversight, agreed Miranda Nash, group vice president for applications development and strategy at Oracle. “For example, generative AI embedded in business processes helps users with first drafts of employee performance summaries, financial narrative reports, and customer service summaries.”
The key word here is “help,” Nash continued. “The responsibilities of end users haven’t changed. They still need to review and edit to ensure their work is accurate. In situations where AI accuracy has been validated with months or even years of observation, a human may only be needed for exception handling.”
The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that’s being generated may be unique, depending on how the output is being presented, there’s always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”
It’s important, then, that “AI decisions should be used as just one input into a human-governed orchestration of the overall decision-making process,” said Ross.
How can AI best be directed toward ethical and trustworthy use? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”
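One concrete way teams operationalize that kind of compliance is an audit trail that records each AI-assisted decision alongside any human intervention. The sketch below is a minimal illustration under assumed field names (`DecisionRecord`, `record_decision`), not any particular vendor’s schema or a reference to a specific regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """Enough context for a compliance reviewer to reconstruct a decision."""
    model_version: str
    input_summary: str
    output: str
    policy_checks_passed: bool
    reviewer: Optional[str] = None  # set when a human intervenes
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list = []

def record_decision(record: DecisionRecord) -> None:
    """Append to the audit trail; a production system would persist this."""
    audit_log.append(record)
```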
It’s also important to “foster a culture of collaboration between humans and AI systems,” Rambarran said. “Encouraging interdisciplinary teams composed of domain experts, data scientists, and AI engineers to all work together to solve complex problems effectively is vital.”
Scoreboards and dashboards can facilitate this process, said Ross. “We can also segment decisions into low, medium, and high-risk categories. High-risk decisions should be routed to a human for review and approval.”
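As a rough illustration of that routing idea, the sketch below classifies each decision into a tier and sends high-risk items to a human review queue. The thresholds, tier names, and `route_decision` helper are illustrative assumptions; real risk scoring would come from the organization’s own governance policy.

```python
def risk_tier(impact_score: float) -> str:
    """Map a numeric impact score in [0, 1] onto low/medium/high tiers."""
    if impact_score >= 0.7:
        return "high"
    if impact_score >= 0.3:
        return "medium"
    return "low"

def route_decision(decision: dict) -> str:
    """Route a decision based on its risk tier."""
    tier = risk_tier(decision["impact_score"])
    if tier == "high":
        return "human_review_queue"  # requires explicit human approval
    if tier == "medium":
        return "spot_check_sample"   # audited after the fact
    return "auto_approve"

route_decision({"impact_score": 0.85})  # -> "human_review_queue"
```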
AI won’t progress beyond the shiny-new-object phase without the governance, ethics, and trust that will enable acceptance and innovation from all quarters. We’re all in this together.