Should organizations consider creating dedicated roles for executives in charge of responsible AI? It’s a move that makes sense, but shouldn’t this be part of the chief information officers’ or newly ordained chief AI officers’ roles anyway?
Either way, the need for responsible AI is urgent. Erroneous recommendations, hallucinations, and privacy violations are endemic. The most pressing concern is the algorithmic bias that is pervasive within AI models across the globe, according to Arnab Chakraborty, recently appointed chief responsible AI officer for Accenture.
“As technology becomes smarter, threats will bring greater challenges,” he says. “Because AI learns from the datasets it is trained on, it is quite possible that these datasets contain inadvertent biases related to demographics, such as race, gender, and even income.”
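To make the kind of dataset bias Chakraborty describes concrete: one simple, widely used check is to compare a model’s positive-outcome rates across demographic groups. The sketch below is purely illustrative and is not drawn from Accenture’s work; the function, data, and group labels are all hypothetical.

```python
# Illustrative sketch only: measure the "demographic parity gap" -- the spread
# in positive-prediction rates across groups. Hypothetical names and data.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for aligned 0/1 predictions and group labels."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: a loan-approval model that favors group "A" over group "B".
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.2}
print(gap)    # 0.6 -- a gap this large would warrant investigation
```

A single metric like this is only a starting point, but it shows how an inadvertent skew in training data surfaces as a measurable disparity in model behavior.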
We asked Chakraborty why a dedicated role such as chief responsible AI officer is needed these days, and how he is helping pave the way for similar roles across organizations grappling with responsible and ethical AI deployments.
There are distinct roles a chief responsible AI officer assumes versus those of a chief AI officer. “Responsible AI is a C-suite agenda, and stakeholders need to understand its importance and use,” he states. “The chief AI officer manages AI strategy, R&D, and the use of AI for the organization and its clients. The chief responsible AI officer ensures that the deployment of AI is fair, robust, and explainable, and that it improves efficiencies, does right by people, and brings value in ways that are transparent and responsible.”
At the same time, the chief responsible AI officer will be in a position to help raise awareness and educate C-suite colleagues about the urgency of responsible AI. “At present, leaders throughout the C-suite might assume the role of an AI officer, yet we aspire to a future where all leaders more broadly embody the essence of a responsible AI officer,” Chakraborty says. A dedicated executive can “create a layer of objectivity, oversight and focus to operationalize responsible AI.”
Deploying AI ethically means “moving beyond metrics like financial performance to a holistic perspective, factoring in the cause and effect of AI on society and people,” he urges. “The number one question leaders should ask themselves is whether they have AI standards and principles established for their organization, and if they are linked to their people, organizational goals and values.”
A key part of this task is defining responsible AI. Chakraborty frames it as “taking intentional actions to design, deploy and use AI to create value and build trust by protecting against the potential risks of AI.”
Importantly, he adds, “any implementation of AI should be human-centered by design. Responsible AI should be fair, with no unwanted bias or unintended negative consequences. It should be secure, enabled by compliance, data privacy and cybersecurity, but it should not just be a compliance issue. It should be a full C-suite strategy.”
A chief responsible AI officer’s mandate is to understand “why AI does what it does, and the ever-evolving capabilities of the technology,” he adds. “It is important to monitor and audit AI, especially when it comes to ethical judgments, because as it stands, ethical judgments in our society are built on a structure of values and principles. AI automates these judgments devoid of ethical and moral sensitivities. Combating such unintended risks requires a strong responsible AI framework, making it a key priority across boardrooms.”
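On the monitoring-and-auditing point, one minimal building block, again purely illustrative rather than any specific framework Chakraborty or Accenture uses, is to log every model decision alongside its inputs so that judgments can be reviewed after the fact. All names below are hypothetical.

```python
# Illustrative sketch only: append an audit record for each model decision
# so it can be reviewed later. Not any organization's actual framework.
import json
import time

def audited_predict(model, features, log_path="audit_log.jsonl"):
    """Call the model, append a timestamped audit record, and return the prediction."""
    prediction = model(features)
    record = {"timestamp": time.time(), "features": features, "prediction": prediction}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction

# Hypothetical usage with a stand-in "model".
approve_if_high_income = lambda f: int(f["income"] > 50_000)
print(audited_predict(approve_if_high_income, {"income": 64_000}))  # 1, and logged
```

Real audit trails add much more (model versions, explanations, reviewer sign-off), but the principle is the same: decisions that cannot be reconstructed cannot be audited.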
Inevitably, of course, legal is going to get involved in the AI discussion, if it isn’t already. “As global regulations take effect, in-house legal teams will become crucial in leading the way for AI adoption and providing effective legal advice that enhances efficiency, sets guardrails for the use of AI, and safeguards organizations,” Chakraborty says.