Alpha Leaders
Innovation

An Approach For Leadership Teams To Balance AI Risk Versus Opportunity

By Press Room · 20 February 2024 · 5 Mins Read

Andrew Duncan, CEO and Managing Partner, Infosys Consulting.

AI is accelerating at a pace never seen with any previous technological advance. Gartner predicts that by 2026, generative AI will drive a 400% increase in the use of other AI technologies. ChatGPT is already making advanced technology accessible to millions who lack programming skills. Enterprise generative AI models promise to optimize costs, generate new revenue streams and improve experiences end to end across the value chain.

But security and governance are only just catching up with the explosion in AI. Generative AI models aren't infallible: they can produce inaccurate or fabricated answers, known as hallucinations, and they carry significant intellectual property and copyright risks. Just as employees can use generative AI to amplify their work, so can cybersecurity threat actors. We therefore all share a responsibility to ensure the responsible use of AI. Here's my advice on how to make strides toward responsible-by-design AI.

1. Prioritize compliance.

Generative AI is evolving rapidly—and so are the associated risks. Publicly available generative AI tools are trained on large amounts of public data and aren't designed to comply with copyright laws and data governance. Therefore, information fed into these tools won't necessarily remain private and confidential. As a result, many organizations have developed, or are looking to develop, their own generative AI tools, using proprietary data to ensure greater reliability, confidentiality and compliance.

Even so, appropriate data governance must be embedded at the heart of the organization. Risk leaders must have a view of which teams use what data, how and in which models. Reliability, compliance, security and privacy only occur when AI tools, services and processes are built to robust standards and operate consistently according to their original design. Regular testing should be part of the ongoing maintenance process, and compliance teams must ensure AI meets regulatory requirements, doesn’t perpetuate bias and is continuously monitored for accuracy.
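The visibility requirement above—knowing which teams use what data, and in which models—can be sketched as a minimal registry. Everything here (class names, fields, the PII flag) is a hypothetical illustration, not a reference to any specific governance tool:

```python
from dataclasses import dataclass, field

@dataclass
class DataUseRecord:
    """One row in a hypothetical data-governance registry."""
    team: str
    dataset: str
    model: str
    contains_pii: bool
    last_reviewed: str  # ISO date of the most recent compliance review

@dataclass
class GovernanceRegistry:
    records: list[DataUseRecord] = field(default_factory=list)

    def register(self, record: DataUseRecord) -> None:
        self.records.append(record)

    def datasets_for_team(self, team: str) -> list[str]:
        """Which datasets a given team currently feeds into AI models."""
        return sorted({r.dataset for r in self.records if r.team == team})

    def pii_exposures(self) -> list[DataUseRecord]:
        """Records a compliance team might prioritize for review."""
        return [r for r in self.records if r.contains_pii]

registry = GovernanceRegistry()
registry.register(DataUseRecord("marketing", "customer_emails", "campaign-llm", True, "2024-01-15"))
registry.register(DataUseRecord("support", "ticket_archive", "helpdesk-bot", False, "2024-02-01"))

print(registry.datasets_for_team("marketing"))  # ['customer_emails']
print(len(registry.pii_exposures()))            # 1
```

Even a toy registry like this makes the compliance questions answerable on demand rather than reconstructed after an incident.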

Auditing that places people at the center of the review process can also help build trust and transparency. Review teams can monitor and consider how AI processes comply with local regulations and whether they reflect biases or pose ethical questions.

2. Establish responsible data practices.

The risk of bias in AI is a top concern for executives—even more so than insufficient subject matter knowledge.

If a machine learning algorithm is trained on historical data that reflects racial, religious or gender biases, for example, it can produce outputs that perpetuate those biases. Training datasets must be representative of society. It’s only possible to make fair, automated decisions if datasets include a representative cross-section of society that doesn’t exclude minority populations. Including a human factor in review and audit processes adds a layer of critical thinking that makes monitoring for inaccuracies or biases reflexive and consistent.
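As a sketch of the point about representative datasets, a simple pre-training check can compare each group's share of the training data against its share of a reference population. The tolerance threshold and group labels below are illustrative assumptions, not a fairness standard:

```python
from collections import Counter

def representation_gaps(labels, reference_shares, tolerance=0.05):
    """Flag groups whose share of the dataset deviates from a reference
    population share by more than `tolerance` (absolute difference)."""
    counts = Counter(labels)
    total = len(labels)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# Illustrative data: group "B" is underrepresented relative to the reference.
sample = ["A"] * 80 + ["B"] * 20
reference = {"A": 0.6, "B": 0.4}
print(representation_gaps(sample, reference))
# {'A': (0.8, 0.6), 'B': (0.2, 0.4)}
```

A check like this catches only one narrow failure mode (sampling skew); it complements, rather than replaces, the human review and audit processes described above.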

It’s also essential to take an inclusive approach to hiring and expanding the diversity of AI teams so that organizations can make sure that their AI models are representative of the society in which they operate.

3. Continuously monitor for accuracy.

Generative AI models are trained to look at patterns and predict the next pixel, word or sound. They don't necessarily understand the context of what they're responding to, and they're dependent on the quality of the prompts. As large language models mature and demands accelerate, generative AI models will continue to learn, and the incidence of errors will decrease.

Nevertheless, it’s essential to use humans to monitor output and verify the quality of the content generated. Just as you would review a colleague’s work for accuracy, appropriateness and usefulness, you should do the same for generative AI-led content. The same goes for using contextual prompts to minimize the possibility of error and increase the quality of the output. Mastering the art of communicating with AI is foundational to influencing the model’s output quality, performance and response consistency.
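The human-review step described above can be sketched as a simple gate: generated content only becomes publishable after a named reviewer approves it. The structure is illustrative; the field names and statuses are assumptions, not a standard workflow:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeneratedDraft:
    """A piece of AI-generated content awaiting human review."""
    prompt: str
    output: str
    status: str = "pending"          # pending -> approved / rejected
    reviewer: Optional[str] = None
    note: Optional[str] = None

def review(draft: GeneratedDraft, reviewer: str, approve: bool, note: str = "") -> GeneratedDraft:
    """Record a human verdict; nothing ships while status is 'pending'."""
    draft.status = "approved" if approve else "rejected"
    draft.reviewer = reviewer
    draft.note = note
    return draft

def publishable(draft: GeneratedDraft) -> bool:
    return draft.status == "approved"

draft = GeneratedDraft(prompt="Summarize Q3 results for the newsletter",
                       output="Revenue grew 12% quarter over quarter...")
assert not publishable(draft)  # blocked until a human signs off
review(draft, reviewer="editor@example.com", approve=True,
       note="Checked figures against the source report")
assert publishable(draft)
```

The design choice is that "pending" is the default: a draft that nobody has looked at can never reach publication by omission.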

4. Put humans at the center with an AI governance council.

Good governance is essential to the responsible use and successful adoption of AI—and placing people at the center of your AI strategy can ensure greater oversight, integrity and confidence.

Establishing an AI governance committee that encompasses IT, legal, compliance and regulatory leaders, supply chain experts and marketing executives can help balance opportunity and risk and ensure that ethics remain central to your AI strategy. Designating a group of individuals responsible for ethics promotes accountability and empowers them to take a holistic look at where and how AI is applied and what pitfalls and risks may exist, and to debate AI's strategic intent and impact.

The governance council would be responsible for verifying AI outputs, evaluating and rigorously examining standards and processes, and monitoring for biases while ensuring a global and inclusive perspective.

Things to keep in mind.

No enterprise technology transformation is entirely risk-free. But leadership teams must get familiar with managing and mitigating AI-generated risks while prioritizing value creation. Generative AI is a promising leap in achieving a new horizon of the digital-first workforce, and it’s an exciting opportunity to embed responsible design into every corner of your organization.

In fact, in my view, businesses must get this right. It’s up to all of us to responsibly shape the future of AI to ensure it enriches society and amplifies the potential of every individual it touches.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Andrew Duncan
