This is a unique moment in the development of artificial intelligence.

When historians go back and sort through the last few years, and the years to come, it might be hard to pinpoint the critical-mass moment when we vaulted into the future on the wings of LLMs. But to me, it’s telling that we are talking so much about systems, products, and opportunities that would have been unimaginable just a decade ago. Image creation, for instance. Even in the oughts, in the early years of the new millennium, you still had to make your own pretty pictures and charts and graphs. Not anymore. Voice cloning, realistic texting companions, robots running races… the list goes on and on.

Amid this rapid set of developments, some of those closest to the industry are warning that we need to steer a certain trajectory to make sure that AI is safe. One such person is Yann LeCun, Meta’s former chief AI scientist, who has appeared on stage at multiple Imagination in Action events and gets top billing on many panels and conferences where he discusses innovation.

Right now, LeCun is in the news for arguing that AI needs “guardrails” – specific hardwired principles to keep these systems aligned with our intentions. What he’s calling for is two-fold: first, that the systems be able to represent “empathy,” or compassion; and second, that AI entities be deferential to human authority. That second point speaks to the way a breakout force can escape the food chain of the natural world: humans did this aeons ago, with weaponry and protective systems that effectively eliminated our natural predators. The idea, presumably, is that we now face a new potential predator that must be neutralized in a different way.

To wit, LeCun said this:

“Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans.”

That word, instinct, helps to explain the deep-level motivations that do, in a real sense, guide behavior. Hopefully we haven’t lost ours, as humans, and hopefully we can help AIs form theirs.

Mother, Do You Think They’ll Like This Song?

Reporting on LeCun’s comments notes that he’s speaking in the wake of input from Geoffrey Hinton, who is often called the “godfather of AI” but has since distanced himself, to a certain extent, from his brainchild.

Hinton’s own comments go right to the core of how we see human-to-human interactions, and by extension, those we will have with humanoid AI.

He asks us to imagine if AI could be like our mothers.

“The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,” Hinton reportedly said. “If it’s not going to parent me, it’s going to replace me. …These super-intelligent caring AI mothers, most of them won’t want to get rid of the maternal instinct because they don’t want us to die.”

In Supplication

Unfortunately, this goal seems to fly in the face of the hubris on display in our modern societies – with superpowers and domestic populations alike armed to the teeth against each other, what chance do we have of internalizing the right instinct and bonding with a more powerful partner?

On the other hand, ascribing maternal roles to AI seems like a positive thing on its face – but is it the right thing, at the end of the day?

Ultimately, those aspirations that LeCun and Hinton mention (empathy, etc.) are objectives for us, too.

A Safer Road

It’s also sobering that these comments come at a time when a jury has just brought a leading maker of self-driving technology to heel, with roughly $200 million in damages over a fatality involving its driver-assistance software: ruling on the death of Naibel Benavides Leon, who was struck by a Tesla operating on Autopilot, the jury found that technology makers bear responsibility, to an extent, when a lack of guardrails has real and tragic consequences.

It’s a powerful lesson: to build correctly, we have to deliberate not only on market principles but on greater ones, too – we need a long-term picture of how society is going to work with these AGIs and agentic systems in play. AI is now able to “do things for you,” and so, what sorts of things will it be doing?

I’m reminded, again, of the proposal by my colleague Dr. John Sviokla that AI could provide individual tutors for humans, to help them work through various kinds of critical thinking, and the suggestion from other quarters that one human priority should be to hire an army of philosophers to keep us nicely in the lane when it comes to AI development.

Plumbing the Depths of Human Intelligence

Here’s an interesting resource from Selmer Bringsjord and Konstantine Arkoudas of the Rensselaer Polytechnic Institute (RPI) in Troy, NY, writing in 2007 about the foundations of AI research. They cite earlier work in suggesting:

“The fundamental goal of AI research is not merely to mimic intelligence or produce some clever fake. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.”

“This ‘theoretical conception’ of the human mind as a computer has served as the bedrock of most strong-AI research to date,” Bringsjord and Arkoudas write. “It has come to be known as the computational theory of the mind; we will discuss it in detail shortly. On the other hand, AI engineering that is itself informed by philosophy, as in the case of the sustained attempt to mechanize reasoning, discussed in the next section, can be pursued in the service of both weak and strong AI.”

There’s a lot more in here – on speculation, logic, mechanistic thought, and more – to sink your teeth into. Similarly, quite a few MIT researchers are working at the junction of neuroscience, AI, and biological modeling, trying to come to a more informed perspective on what the future will look like. And perhaps, as Paul Simon sings, the mother and child reunion is only a motion away.
