In 1942, Isaac Asimov introduced a visionary framework, the Three Laws of Robotics, that has influenced both science fiction and real-world ethical debates surrounding artificial intelligence. Yet, more than 80 years later, these laws demand an urgent revisit and revamp to address a fundamentally transformed world, one in which humans coexist intimately with AI-empowered robots. Central to this revision is the need for a fourth foundational law rooted in hybrid intelligence, a blend of human natural intelligence and artificial intelligence, aimed explicitly at bringing out the best in and for people and planet.
Asimov’s original Three Laws are elegantly concise:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While insightful, these laws presuppose a clear hierarchy and a simplified, somewhat reductionist relationship between humans and robots. Today’s reality, however, is distinctly hybrid, characterized by interwoven interactions and mutual dependencies between humans and advanced, learning-capable robots. Consequently, relying solely on Asimov’s original triad is insufficient.
The essential question we must ask is: Are Asimov’s laws still relevant, and if so, how can we adapt them to serve today’s intertwined, complex society?
Why Revise The Three Laws?
Asimov’s laws assume humans are entirely in charge, capable of foresight, wisdom, and ethical consistency. In reality, human decision-makers often grapple with biases, limited perspectives, and inconsistent ethical standards. Thus, robots and AI systems reflect — and amplify — the strengths and weaknesses of their human creators. The world does not exist in binaries of human versus robot but in nuanced hybrid intelligence ecosystems where interactions are reciprocal, dynamic, and adaptive.
AI today is increasingly embedded in our daily lives, from healthcare and education to shopping, environmental sustainability, and governance. Algorithms influence what we buy, write, read, think about, and look at. Directly and indirectly, they have begun to shape every step of the decision-making process, and with it our behavior. Gradually, this is altering societal norms that had long been taken for granted. For example, AI-generated artworks were once considered less valuable than those made by humans; that perception is shifting, partly because AI's performance in this domain has vastly improved. The integration of AI is also influencing our perception of ethical values: what was considered cheating in 2022 is increasingly accepted as a given.
In the near future, multimodal AI-driven agentic robots will not merely execute isolated tasks; they will be present throughout the decision-making process, anticipating human intent and executing off-screen what may not yet have fully matured in the human mind.
If these complex interactions continue without careful ethical oversight, the potential for unintended consequences multiplies exponentially. And neither humans nor machines alone are sufficient to address the dynamic that has been set in motion.
The Imperative Of Hybrid Intelligence
Hybrid intelligence arises from the complementarity of natural and artificial intelligences. Hybrid intelligence (HI) is more than the sum of natural intelligence (NI) and artificial intelligence (AI): it brings out the best in both and creates added value that allows us not just to do more of the same, but to do something entirely new. It is the only path to adequately address an ever faster evolving hybrid world and the multifaceted challenges that characterize it.
Humans possess creativity, compassion, intuition, and moral reasoning, whereas AI-empowered robots offer consistency, data analysis, speed, and scalability, combined with superhuman stamina and immunity to many of the physiological factors the human organism struggles to cope with, from lack of sleep to the need for love. A synthesis of these strengths constitutes the core of hybrid intelligence.
Consider climate change as a tangible example. Humans understand and empathize with ecological loss and social impact, while AI systems excel at predictive modeling, data aggregation, and identifying efficient solutions. Merging these distinct yet complementary capabilities can significantly enhance our capacity to tackle global crises, offering solutions that neither humans nor AI alone could devise.
Introducing The 4th Law
To secure a future in which every being has a fair chance to thrive, we need all the assets we can muster, including hybrid intelligence. On this premise, an addition to Asimov's triad is required, a Fourth Law that may serve as the foundational bedrock for revisiting and applying the original three in an AI-saturated society:
A robot must be designed and deployed by human decision-makers explicitly with the ambition to bring out the best in and for people and the planet.
This 4th law goes beyond mere harm reduction; it proactively steers technological advancement toward universally beneficial outcomes. It repositions ethical responsibility squarely onto humans — not just engineers, but policymakers, business leaders, educators, and community stakeholders — to collectively shape the purpose and principles underlying AI development, and by extension AI-empowered robotics.
The Ethical Shift: From Reductionist Self-interest To Collective Flourishing
Historically, technological innovation has often been driven by reductionist self-interest, emphasizing efficiency, profit, and competitive advantage at the expense of broader social and environmental considerations. Hybrid intelligence, underpinned by the proposed fourth law, shifts the narrative from individualistic to collective aspirations. It fosters a world where technological development and ethical stewardship move hand-in-hand, enabling long-term collective flourishing.
This shift requires policymakers and leaders to prioritize systems thinking over isolated problem-solving. It is time to ask: How does a specific AI or robotic implementation affect the broader ecosystem, including human health, social cohesion, environmental resilience, and ethical governance? Only by integrating these considerations into decision-making processes from the outset can we ensure that technology genuinely benefits humanity and the environment it depends on.
Practical Application of Asimov@4
Implementing the 4th law means embedding explicit ethical benchmarks into AI design, development, testing, and deployment. These benchmarks should emphasize transparency, fairness, inclusivity, and environmental sustainability. For example, healthcare robots must be evaluated not merely by efficiency metrics but also by their ability to enhance patient well-being, dignity, and autonomy. Likewise, environmental robots should prioritize regenerative approaches that sustain ecosystems rather than short-term fixes that yield unintended consequences.
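As a purely illustrative sketch of what "explicit benchmarks" could look like in practice (the dimension names, threshold, and class are hypothetical, not an established standard), such criteria can be encoded as a concrete evaluation rubric rather than left as implicit good intentions:

```python
from dataclasses import dataclass, field

@dataclass
class EthicalBenchmark:
    """Hypothetical rubric; dimensions mirror those named above."""
    scores: dict = field(default_factory=dict)  # dimension -> 0.0..1.0

    # Not an annotated field, so the dataclass treats it as a constant.
    REQUIRED = ("transparency", "fairness", "inclusivity", "sustainability")

    def passes(self, threshold: float = 0.7) -> bool:
        # Every dimension must be assessed AND meet the threshold;
        # an unassessed dimension fails the review outright.
        return all(self.scores.get(d, 0.0) >= threshold for d in self.REQUIRED)

# Usage: a healthcare robot reviewed on well-being-oriented criteria.
review = EthicalBenchmark(scores={
    "transparency": 0.9,
    "fairness": 0.8,
    "inclusivity": 0.75,
    "sustainability": 0.7,
})
print(review.passes())  # True: all four dimensions clear the 0.7 bar
```

The design choice worth noting is that a missing dimension counts as a failure, making the review gate explicit instead of optional.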
Educational institutions and corporate training programs must cultivate double literacy — equipping future designers, users, and policymakers with literacy in both natural and artificial intelligences. Double literacy enables individuals to critically evaluate, ethically engage with, and innovatively apply AI technologies within hybrid intelligence frameworks.
Put differently, the 4th law calls for prosocial AI: AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet. Social benefit is pursued as a priority, rather than as a collateral effect of the pursuit of commercial success. That requires humans who are fluent in double literacy.
Building A Sustainable Hybrid Future
The rapid integration of AI into our social fabric demands immediate and proactive ethical revision. Written over eight decades ago, Asimov's laws provide an essential starting point for today; their adaptation to contemporary reality requires a holistic lens. The 4th law explicitly expands their scope and grounds them in humanity's collective responsibility to design AI systems that nurture our best selves and sustain our shared environment.
In a hybrid era, human decision-makers (each of us) do not have the luxury of reductionist self-interest. Revisiting and revamping Asimov's laws through the lens of hybrid intelligence is not just prudent; it is imperative for our collective survival.