Three years ago today, OpenAI unassumingly released a chatbot that would redefine the relationship between humans and machines. Three years is just over 1,000 days (1,096, to be precise). After launching on Nov. 30, 2022, ChatGPT reached one million users within five days. Within two months it hit 100 million users, becoming the fastest-growing consumer application in history. Today, over 800 million people use it weekly.
But here’s what most business leaders miss: these first 1,000 days were about laying the foundation. The real AI revolution, the one that reshapes competitive dynamics, eliminates entire job categories, and creates trillion-dollar markets, starts now. The next 1,000 days will determine whether your organization captures value from AI or becomes the value captured.
AI Capabilities: What the Benchmarks Actually Tell Us
It’s tempting to lead with dramatic headlines: AI models now score above 120 on standardized IQ tests, approaching “gifted” levels by human standards. According to Tracking AI, which administers the Mensa Norway IQ test to AI models, Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.1 scored 120, Google’s Gemini 3 Pro Preview scored 123, and xAI’s Grok 4 Expert Mode scored 126. For context, average human IQ falls between 90 and 110 (Einstein’s was estimated at 160).
But these numbers mislead more than they illuminate. IQ tests measure narrow pattern-recognition abilities on specific problem types. A model scoring 126 on verbal reasoning can also confidently fabricate legal precedents, miss obvious ethical red flags a child would catch, and fail catastrophically on tasks slightly outside its training distribution. The gap between benchmark performance and reliable real-world judgment remains vast.
The more honest framing: AI systems are becoming remarkably capable at specific cognitive tasks while remaining brittle in ways we don’t fully understand. When ChatGPT launched, GPT-3.5 scored roughly 85 on these same tests – below average human performance. Fifteen months later, Claude 3 crossed the threshold of average human intelligence. The trajectory is undeniable. But trajectory isn’t destiny, and benchmark performance isn’t wisdom.
What this means for business leaders: AI can now perform many cognitive tasks that knowledge workers get paid to do, such as logical reasoning, pattern recognition, and information synthesis. But the question isn’t whether AI can think. It’s whether you can deploy AI in ways that capture its capabilities while managing its limitations. That’s a governance challenge, not a technology one.
The Economics: Abundance and Its Paradox
Both operational leaders and independent researchers agree that AI’s growth is exponential. Amin Vahdat, Google Cloud vice president and head of AI infrastructure, recently told employees that the company must double its AI serving capacity every six months to keep up with soaring global demand. Epoch AI’s research confirms training compute is expanding at approximately 4x annually, doubling every six months. For perspective: Moore’s Law doubled transistors every two years. AI compute is scaling four times faster.
The cost collapse is equally dramatic. Achieving GPT-3.5-level performance became 280 times cheaper between November 2022 and October 2024 – from $20 to $0.07 per million tokens. Mary Meeker’s 2025 AI report provides historical context: it took the light bulb roughly 80 years to become cheap enough for mass adoption. AI inference achieved a comparable cost compression in roughly one year. Nvidia’s Blackwell GPU uses 105,000 times less energy per token than its 2014 predecessor.
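For readers who want to sanity-check the arithmetic, the figures above are internally consistent. A quick back-of-the-envelope sketch (the dollar figures and growth rates are simply the article’s quoted numbers, not independent measurements):

```python
import math

# Inference cost collapse: GPT-3.5-level performance, Nov 2022 vs
# Oct 2024, in dollars per million tokens (figures as quoted above).
cost_2022 = 20.00
cost_2024 = 0.07
ratio = cost_2022 / cost_2024
print(f"Cost reduction: ~{ratio:.0f}x")  # ~286x, quoted as "280 times cheaper"

# Compute scaling: 4x per year is the same as doubling every six
# months, since 2 ** (12 / 6) == 4. Moore's Law doubled roughly every
# 24 months, so AI compute's doubling period is four times shorter.
ai_doubling_months = 12 / math.log2(4)  # 6.0
print(f"AI compute doubles every {ai_doubling_months:.0f} months vs 24 for Moore's Law")
```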
Here’s the paradox reshaping competitive dynamics: while inference costs collapse, training costs are exploding exponentially. GPT-3 cost roughly $4.6 million to train in 2020. GPT-4 exceeded $100 million. But the acceleration is breathtaking: Anthropic CEO Dario Amodei revealed that billion-dollar training runs are already underway, with $10 billion models expected by 2026 and $100 billion training clusters by 2027. Epoch AI projects that if current trends continue, the largest AI supercomputer in 2030 will cost $200 billion, require 2 million chips, and consume 9 gigawatts of power – equivalent to nine nuclear reactors running simultaneously.
Jevons Paradox in the AI Era
There’s an uncomfortable economic truth most efficiency narratives ignore. In 1865, economist William Stanley Jevons observed that as coal engines became more efficient, total coal consumption increased rather than decreased. Efficiency made coal useful for more applications, driving aggregate demand far beyond what efficiency gains saved.
AI is following the same pattern. As inference costs plummet, organizations don’t simply do the same work cheaper – they deploy AI in entirely new contexts. Customer service automation expands to include edge cases previously handled by humans. Code generation extends from simple functions to entire applications. Content creation scales from drafts to personalized variations for every customer segment.
But Jevons paradox applies to labor too, in ways most commentary gets wrong. The historical pattern with transformative technologies isn’t simple replacement but reconfiguration. AI may enable entirely new categories of cognitive work we haven’t imagined yet, performed by humans in collaboration with AI systems in ways we can’t currently predict. The common framing “AI eliminates jobs” may be too pessimistic about human adaptability while being dangerously optimistic about the transition period. The transitions will be hard. Planning as if they’ll be smooth is a mistake.
AI Agents: From Assistants To Operators
The AI agent market is projected to grow from $7.8 billion in 2025 to $52.6 billion by 2030. Gartner predicts that 15% of work decisions will be made autonomously by agentic AI by 2028, up from 0% in 2024.
This is the shift from AI as assistant to AI as operator. Agents don’t just answer questions – they book meetings, process invoices, manage supply chains. Research suggests the length of tasks AI agents can complete autonomously has been doubling every seven months. The implication: within five years, agents could handle many tasks currently requiring human effort. Not augment. Handle.
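A seven-month doubling compounds dramatically over five years. A minimal extrapolation sketch (the seven-month doubling period is the research figure cited above; the five-year horizon is illustrative, and the sketch assumes the trend simply continues, which is far from guaranteed):

```python
# Compound the reported seven-month doubling of autonomous task
# completion over a five-year horizon. Purely illustrative.
doubling_months = 7
years_ahead = 5
doublings = (years_ahead * 12) / doubling_months  # ~8.6 doublings
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {growth:.0f}x longer tasks")
```

Even if the true doubling period were twice as long, the compounding over five years would still be transformative – which is the point of the assistant-to-operator shift.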
This will quickly shift from a technology issue to a governance imperative. When an AI agent makes a consequential error – and it will – who is accountable? When autonomous systems make decisions that affect customers, employees, or communities, what oversight exists? When AI operates at a speed and scale that makes human review impractical, how do you maintain meaningful control?
These aren’t compliance questions. They’re existential questions for organizations that want to deploy AI at scale without destroying the trust that makes deployment possible.
The Four Moats That Matter When Intelligence Is Everywhere
If cognitive capability becomes cheap and universally accessible, what creates sustainable advantage? Most analyses focus on technology. That’s dangerously wrong. When everyone has access to the same foundation models, competitive advantage shifts to four durable moats: data, brand, people, and distribution. But how you build these moats matters as much as whether you build them.
Data: Flywheels That Improve, Not Just Accumulate
Generic proprietary data is losing value. We all know that AI foundation models have trained primarily on public data. The new data moat isn’t about volume – it’s about flywheel effects where user interactions continuously improve performance in ways competitors can’t replicate.
Healthcare outcomes data becomes more predictive with every patient. Legal case histories sharpen precedent matching with every filing. Financial transaction patterns detect fraud more accurately with every transaction.
But here’s what most data strategies miss: a flywheel that improves accuracy is different from one that entrenches bias. A feedback loop that enhances reliability is different from one that optimizes engagement at the cost of user welfare. The question isn’t just whether your data creates compounding returns – it’s whether those returns make your AI systems more trustworthy over time or less. Most organizations can’t answer that question. That’s a problem.
Brand: Trust as the Differentiator That Earns Its Keep
When everyone has access to the same AI capabilities, who customers trust to deploy AI on their behalf becomes what matters. An AI agent making autonomous decisions carries your brand’s reputation with every action – booking flights, approving expenses, communicating with customers, making recommendations.
The question shifts from “which AI is smartest?” to “whose AI do I trust with my business?”
But trust isn’t a marketing exercise. It’s an operational reality that requires building AI systems that are genuinely helpful while avoiding harms you may not anticipate. Organizations that establish trusted AI deployment – transparent about capabilities, accountable for errors, consistent in judgment – build brand equity that compounds over time. Those that deploy AI recklessly destroy it in minutes.
The brands that win the next 1,000 days won’t be those that claim trustworthiness. They’ll be those that demonstrate it under pressure.
People: The Judgment Layer You Can’t Automate
According to McKinsey’s 2025 State of AI, organizations achieving meaningful AI impact share one characteristic: they’ve fundamentally redesigned workflows, not just implemented tools.
The humans who orchestrate AI systems, exercise judgment at critical decision points, and take accountability for outcomes are becoming exponentially more valuable. Not the humans who do the cognitive work – the humans who direct it, knowing when to trust AI output and when to override it, where to deploy autonomous agents and where to maintain human control, how to design feedback loops that improve performance over time.
This isn’t just competitive advantage – it’s how you avoid catastrophic failures. AI systems are capable enough to be dangerous when deployed without oversight and unreliable enough to require human judgment at critical points. The judgment layer isn’t optional. It’s what separates organizations that scale AI successfully from those that scale AI disasters.
Most organizations are cutting headcount. The smart ones are redeploying talent to the judgment layer.
Distribution: Reaching Customers Before Competitors Can
The fourth moat is often overlooked in AI discussions, but it may be the most decisive. In a world where any company can access frontier AI capabilities, organizations with existing customer relationships, embedded workflows, and trusted distribution channels have an asymmetric advantage.
Salesforce can deploy AI agents across 150,000 customer relationships overnight. A startup with identical technology cannot. Adobe can embed generative AI into creative workflows already used by millions. A new entrant must first convince those millions to switch. Microsoft’s Copilot reaches customers through products they already pay for and depend on daily.
Distribution compounds with the other three moats. Customer interactions generate proprietary data. Trusted distribution reinforces brand. Existing relationships provide the context that makes human judgment more valuable.
Three Questions For The Next 1,000 Days
As we enter the next phase of AI development, the strategic questions have changed. It’s no longer about whether to adopt AI. It’s about whether your organization can navigate the transformation already underway.
First: Which of your moats is AI eroding – and which is it strengthening? Every competitive advantage you have today falls into one of four categories. Information asymmetry is collapsing – if your moat depended on knowing things others didn’t, AI is destroying it. Cognitive labor arbitrage is disappearing – if your margin came from doing knowledge work cheaper, that margin is evaporating. But proprietary data flywheels can strengthen. Brand trust can become more valuable. Human judgment at critical decision points commands premium pricing. Distribution advantages compound. Honestly assess which category each of your advantages falls into. Then ask whether you’re building the moats that will matter or defending the ones that won’t.
Second: Where does human judgment matter most? When 15% of work decisions can be made autonomously by 2028, the humans who exercise judgment at critical points and take accountability for outcomes become exponentially more valuable. Where in your organization is that judgment layer? Are you investing in it or cutting it?
If current trajectories hold, we may be 2-4 years from AI systems that can perform most cognitive tasks at or above human level. Not “eventually.” Not “in our lifetimes.” Possibly before the end of this decade. This isn’t a timeline that allows for leisurely strategic planning. It’s not a timeline that allows institutions to gradually adapt. The organizations that will navigate this transition successfully are the ones building capabilities now – not the ones waiting for certainty that will never arrive.
Third: What’s your governance strategy – not for compliance, but for trust? AI models with remarkable capabilities are available for pennies per query. The organizations that thrive won’t be those with the smartest AI. They’ll be those who figured out how to deploy AI responsibly at scale – maintaining meaningful human oversight when autonomous systems make consequential decisions, building the trust that makes customers choose their AI over commodity alternatives, designing feedback loops that make their systems more reliable over time rather than more opaque.
What’s Actually at Stake
The first 1,000 days taught us what AI could do. The next 1,000 days will determine whether we capture those capabilities for broadly shared benefit or stumble into consequences we didn’t anticipate.
The competitive framing – who wins, who loses, which companies capture value – is real but incomplete. If we develop AI systems that can perform most cognitive tasks at human level or beyond, we’re not just talking about market share. We’re talking about potentially compressing a century of progress in biology, medicine, and science into a decade. We’re talking about tools that could help solve problems that have resisted human effort for generations. We’re also talking about risks that could be catastrophic if we get deployment wrong.
The winners won’t be companies with the best AI. They’ll be companies that built the moats that matter when AI becomes abundant – proprietary data flywheels, earned trust, human judgment layers, and distribution that reaches customers before competitors can – while maintaining the governance structures that make large-scale deployment sustainable.
Three years in, most organizations are still treating AI as a feature. The next three years belong to those who understand it’s the foundation – and who build on that foundation responsibly.