Looking at new roundups from established sources in IT can show us where we are on the road to artificial general intelligence, or AGI. For those unfamiliar with the term, AGI starts to approach what Ray Kurzweil called “The Singularity,” and what I think could be described this way: new, multi-purpose AIs start doing things better than us, conquering multi-step tasks, taking initiative, driving a design process, and generally working unsupervised in any number of roles.

That’s front and center in a brand-new survey of current tech trends that IBM is releasing for 2026 under the banner “The trends that will shape AI and tech in 2026.” Staff writer Annabelle Nicoud takes us through a dizzying array of advances that will certainly grab headlines in the year to come.

One is quantum computing and the obvious boost it offers to overall digital brainpower. The advantages of quantum over classical computing are evident, and I’ve seen IBM focus squarely on this through 2024 and 2025.

But one of the more interesting components here is the emergence of what Nicoud calls “super-agents.”

What does that mean?

The Super-Agent at Work

Well, agentic AI means AI can do tasks. The first agents were the most primitive: they could, say, create a poem, a song, or an image, or use a website to make some transaction, but they were, in a sense, specialists. They were single-use.

The idea with super-agents is that they will be able to move in a digital environment, analyze a system, navigate processes, and even supply their own insights. Not only supply their own insights, but work on the basis of those insights, in ways that don’t really require any substantial human supervision.

So you might wake up to find that AI has written a poem or a song, or done some transaction, without your express request. That’s part of the whole ball of wax.

And how do the super-agents do this?

A major part of the answer is that the super-agent isn’t really just one agent: it’s a series of agents working together to accomplish broader goals. IBM Chief Architect of AI Open Innovation Gabe Goodhart explains:

“If you go to ChatGPT, you are not talking to an AI model … You are talking to a software system that includes tools for searching the web, doing all sorts of different individual scripted programmatic tasks, and most likely an agentic loop.”

That’s true for the super-agent, too.
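Goodhart’s “agentic loop” can be sketched in a few lines: a model proposes an action, the system executes the matching tool, and the observation feeds back in until the model declares it is done. Everything here, the stand-in model and the tool name alike, is illustrative, not IBM’s actual implementation.

```python
# Minimal agentic loop: the "model" picks a tool, the loop runs it,
# and the observation is fed back until the model says it is done.
# fake_model is a hard-coded stand-in for a real LLM.

def fake_model(history):
    """Stand-in for an LLM: decides the next action from the transcript."""
    if not history:
        return ("search_web", "IBM 2026 tech trends")
    return ("finish", f"Summarized {len(history)} observation(s).")

TOOLS = {
    "search_web": lambda q: f"3 results for '{q}'",
}

def agentic_loop(max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = fake_model(history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))  # observation goes back to the model
    return "step budget exhausted"

print(agentic_loop())  # → Summarized 1 observation(s).
```

The point of the loop is that the tools, not the model, do the scripted work; the model just decides what happens next.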

“We’ve moved past the era of single-purpose agents,” says Chris Hay, an IBM engineer. “We’re seeing the rise of what I call the ‘super agent’. In 2026, I see agent control planes and multi-agent dashboards becoming real. You’ll kick off tasks from one place, and those agents will operate across environments—your browser, your editor, your inbox—without you having to manage a dozen separate tools.”
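Hay’s “agent control plane” idea, one place to kick off tasks that are then routed to agents owning different environments, can be sketched roughly like this. The class and environment names are my own, purely for illustration.

```python
# Toy "agent control plane": one entry point routes tasks to agents
# that each own a different environment (browser, editor, inbox).
# Agent behavior is stubbed; nothing here reflects a real product.

class Agent:
    def __init__(self, environment):
        self.environment = environment

    def run(self, task):
        return f"[{self.environment}] done: {task}"

class ControlPlane:
    def __init__(self):
        self.agents = {}

    def register(self, name, agent):
        self.agents[name] = agent

    def kick_off(self, name, task):
        # One dashboard call; the right agent handles its own environment.
        return self.agents[name].run(task)

plane = ControlPlane()
plane.register("browser", Agent("browser"))
plane.register("inbox", Agent("inbox"))

print(plane.kick_off("inbox", "draft a reply"))  # → [inbox] done: draft a reply
```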

What is Objective-Validation Protocol?

I came across a strange term about halfway through Nicoud’s survey of a panoply of experts describing these super-agents and how they work. It’s “objective-validation protocol,” which, absent context, I would guess refers to a protocol for accomplishing the objective of validation, not, say, a protocol for validating an objective. Here’s the passage where I originally came across this syntax-fail, in which Nicoud quotes Ismael Faro, VP of Quantum and AI at IBM Research.

“Software practice will evolve from vibe coding to Objective-Validation Protocol,” said Faro in an interview with IBM Think. “The users are going to define goals and validate while collections of agents autonomously execute, extending the idea of human-in-the-loop, requesting human approval at critical checkpoints.”
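Faro’s pattern, users define goals and validate while agents execute, pausing for human approval at critical checkpoints, can be illustrated with a short sketch. The step names and the approval function are hypothetical stand-ins, not anything Faro describes concretely.

```python
# Sketch of "define goals, validate at checkpoints": steps run
# autonomously, but steps marked critical block until an approver
# signs off. The approver is a plain function here for demo purposes.

def run_with_checkpoints(steps, approve):
    log = []
    for name, critical in steps:
        if critical and not approve(name):
            log.append(f"HALTED at {name}")  # human withheld approval
            break
        log.append(f"done: {name}")
    return log

steps = [
    ("gather requirements", False),
    ("generate code", False),
    ("deploy to production", True),   # critical checkpoint
]

# Approve everything except deploys, as a cautious human reviewer might.
log = run_with_checkpoints(steps, approve=lambda s: "deploy" not in s)
print(log)
# → ['done: gather requirements', 'done: generate code', 'HALTED at deploy to production']
```

The design choice worth noting is that autonomy is the default and human review is the exception, which is exactly the inversion of today’s “human drives, AI assists” workflow.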

Then Nicoud adds this:

“This shift will enable the emergence of agentic runtimes to run complex workflows with a control mechanism, and move agent behavior from static, code-bound outputs to dynamic adaptation, enabled by policy-driven schemas that balance flexibility and control.”

If you think that looks a little like word salad, you’re not alone. It’s implied, I think, that the “control mechanism” is provided by an engineering team, although that’s unclear. Runtimes run complex workflows? I guess.

For more on “objective-validation protocol,” I googled and came up with this from the U.S. Food and Drug Administration:

“The objective of a validation protocol is to demonstrate that a process consistently produces a product meeting predetermined specifications. It is a critical step in ensuring that pharmaceutical processes, equipment, or systems consistently produce products meeting predefined quality standards. A validation protocol should be written before carrying out a validation activity and should be prepared by the qualified person of the concerned department and approved before implementation. It should contain parts such as protocol approval, objective, scope, reason for validation, revalidation criteria, responsibilities, reference documents, procedure, deviations, conclusions, and change control.”

That, to me, is almost gibberish. Here’s how GPT simplifies it.

A validation protocol is a plan.
It proves a process works the same way every time.
It must be approved before testing.
It lists the tests, pass rules, roles, and how to handle issues or changes.

Better?
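The simplified plan above reads almost like a schema, so for illustration only, here it is as a Python data structure. The field names are mine, loosely following the parts the FDA text lists (objective, scope, procedure, pass criteria, prior approval); this is a sketch, not a compliance tool.

```python
# The validation-protocol parts expressed as a simple schema.
# Field names are illustrative, not taken from any regulation.

from dataclasses import dataclass, field

@dataclass
class ValidationProtocol:
    objective: str
    scope: str
    procedure: list = field(default_factory=list)   # the tests to run
    pass_criteria: dict = field(default_factory=dict)
    approved: bool = False                          # must be approved before execution

    def ready_to_run(self):
        # "Approved before implementation" and at least one test defined.
        return self.approved and bool(self.procedure)

protocol = ValidationProtocol(
    objective="process yields product within spec",
    scope="tablet line 3",
    procedure=["run 3 consecutive batches", "assay each batch"],
    pass_criteria={"assay": "98-102% of label claim"},
)
print(protocol.ready_to_run())  # → False (not yet approved)
protocol.approved = True
print(protocol.ready_to_run())  # → True
```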

Not to go down a rabbit hole here, but Faro also posits that the above will lead to something called an “Agentic Operating System (AOS),” which will, in Faro’s words, “standardize orchestration, safety, compliance and resource governance across agent swarms.”

“With disciplined attention to security, resource management, compliance and operational excellence, enterprises can leverage expert system agents to reclaim leadership in mission-critical computing,” he said.

AI Among Us

Whether you’re validating an objective or pursuing an objective to validate, it’s likely that AI is increasingly going to be doing it for you. In reality, there’s not much, at this point, that you can’t automate. Do LLMs hallucinate? Yes, but that’s where techniques like ‘mixture of experts’ (MoE) and multi-model cross-checking come in. A human board does the same thing.
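The board analogy, several independent voices catching one member’s error, can be illustrated with simple majority voting across answers. To be clear, this is a toy ensemble, not the mixture-of-experts architecture used inside some LLMs; the canned answers are invented for the demo.

```python
# Majority vote across several independent "experts" (stubbed here as
# canned answers). Illustrates the board analogy: two experts agree,
# one hallucinates, and the vote suppresses the outlier.

from collections import Counter

def board_vote(answers):
    """Return the most common answer; ties go to the first-seen answer."""
    return Counter(answers).most_common(1)[0][0]

answers = ["Paris", "Paris", "Lyon"]  # one expert hallucinates
print(board_vote(answers))  # → Paris
```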

Stay tuned.
