People are wondering how to make AI as safe as possible.
To that end, we have a raft of terms like “responsible AI”, “ethical AI” and “explainable AI”.
Importantly, they’re not synonyms: each of these terms addresses its own facet of how to work cautiously on technologies that have so much potential, for good or ill.
Some companies feel that working slowly and steadily will address many of the biggest concerns. Here’s part of a statement from OpenAI, which I think is important given that firm’s reputation in the AI age:
“Prior to releasing any new system we conduct rigorous testing, engage external experts for feedback, work to improve the model’s behavior with techniques like reinforcement learning with human feedback, and build broad safety and monitoring systems. … We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”
Ok, so what else goes into this on a practical level?
I recently hosted Cansu Canca, who spoke about some of what safe AI requires today. Canca runs an AI Ethics Lab at Northeastern University.
First, she talked about cars: how they have improved our lives while also being a great source of danger. Hundreds of thousands of people die in auto accidents every year, yet it would be hard to do without the advances and benefits this mode of transportation has brought us.
“Do we want the technology?” she asked, citing the pros and cons, and relating this question to AI. “Do we want the technology to be a part of our lives? The right question is not ‘should we or should we not have cars’ but the right question is: ‘how can we make sure the cars are safer?’”
She cited safety features, crash tests and infrastructure as examples of this kind of assurance around the rules of the road.
“We have infrastructure just to make sure we can make use of the cars, but don’t perish in high numbers,” she said.
In an analogy to AI, Canca described how research shows that women, minorities and marginalized groups tend to get fewer options or opportunities in systems driven by AI models.
The problem, she said, is that a lot of this inequity is baked in: it’s in the training data, and only gets magnified by the model.
“AI doesn’t get to make good decisions,” she said. “And AI discriminates efficiently.”
Citing examples like disparate medical care and inequity in hiring outcomes, she said this type of unfairness is “what can happen” but “not a natural consequence” of using AI.
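To make the “discriminates efficiently” point concrete, here is one minimal way an outcome like hiring decisions might be checked for disparate impact across groups. This is an illustrative sketch only: the data, column names and the 80% rule-of-thumb threshold are my assumptions, not a method Canca prescribed.

```python
# Minimal sketch: checking a model's decisions for disparate impact.
# The dataframe, column names, and the 80% threshold are illustrative assumptions.
import pandas as pd

# Hypothetical hiring-model output: one row per applicant,
# with a group attribute and the model's yes/no decision.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Selection rate per group: fraction of applicants the model approved.
rates = results.groupby("group")["selected"].mean()

# Disparate impact ratio: lowest selection rate divided by the highest.
# A ratio below ~0.8 (the "four-fifths rule") is a common red flag.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcomes differ sharply across groups; examine the training data.")
```

A check like this doesn’t fix anything by itself, but it shows how quickly inequity baked into training data surfaces as a measurable gap in outcomes.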
For the kinds of guardrails we need, Canca pointed to two major things: infrastructure and process.
“Every AI system should ideally go through a structured lifecycle and workflow,” she said, endorsing impact analysis, risk mitigation and ‘ethics by design’ as tools. “(There’s a) need to have a governance system in place in (an) organization.”
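One way an organization might operationalize “every AI system goes through the workflow” is a simple gate that blocks deployment until each stage she mentions has been signed off. The sketch below is a hypothetical illustration of such a governance check, not a description of any particular lab’s tooling.

```python
# Hypothetical governance gate: deployment is allowed only after every
# lifecycle stage has been reviewed and signed off.
# Stage names follow the talk; the data structure itself is an assumption.
from dataclasses import dataclass, field

REQUIRED_STAGES = ["impact_analysis", "risk_mitigation", "ethics_by_design_review"]

@dataclass
class AISystemRecord:
    name: str
    completed_stages: set = field(default_factory=set)

    def sign_off(self, stage: str) -> None:
        self.completed_stages.add(stage)

    def ready_to_deploy(self) -> bool:
        # Deployment stays blocked until all required stages are complete.
        return all(stage in self.completed_stages for stage in REQUIRED_STAGES)

# Usage: a resume-screening model that has skipped its ethics review.
system = AISystemRecord("resume-screener-v2")
system.sign_off("impact_analysis")
system.sign_off("risk_mitigation")
print(system.ready_to_deploy())  # False: ethics_by_design_review is missing
```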
All of these guardrails, she suggested, will help, because in her view regulation is unlikely to do the job by itself.
“There will be many questions that are not answered by the law,” Canca said. “Investors are key players in incentivizing.”
Investors, she noted, will be looking to apply some criteria to their investments, which she characterized this way:
“Do (the companies) have the governance structure in place? Do they have the expertise, do they have a workflow? Can they say that every AI system goes through the workflow?”
All of this is important, and NIST’s U.S. Artificial Intelligence Safety Institute, under U.S. Secretary of Commerce Gina Raimondo, is bringing together over 200 partners to, in the agency’s words, “develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.” This type of thinking is likely to be a part of that conversation.

The institute’s work, along with corresponding efforts like the White House’s executive order on AI, is big news in the tech world as we look at what is likely to keep our AI advancements on the rails.

Anyway, that’s some of what we’re hearing right now: “we’re working on it.” In reality, though, everybody will have to work on it together, which is why I am encouraging so much dialogue around AI, in classes, in conferences, and elsewhere.