News of a prestigious award in robotics science says a lot about how we are fitting what we've learned about artificial intelligence into our world.

The Board of Directors of City Trusts announced November 18 that three individuals received the 2024 John Scott Award for their work in the field.

One was Takeo Kanade, who has contributed greatly to computer vision and is credited with developing one of the first facial recognition programs.

Another is Daniela Rus, who directs MIT CSAIL and has done groundbreaking work on robot autonomy and artificial intelligence decision-making.

The third is Vijay Kumar, much of whose work has focused on multi-agent robotics and distributed systems.

I could go over the many credentials and bona fides of each of these lifetime professionals. Kanade holds a Ph.D. in electrical engineering and founded the Digital Human Research Center in Tokyo. Rus earned her Ph.D. in computer science from Cornell, and in addition to directing MIT CSAIL, she has written authoritative books on the future of AI. Kumar has a Ph.D. from Ohio State University and runs robotics programs at Penn, with various fellowships and affiliations, including the IEEE.

But what's perhaps most interesting about this latest round of awards is how it reflects the research so many are doing into the most useful pillars of AI and their applications to industry.

How Computers ‘See’

All three of these awards relate to the science of computer vision and how that supports more capable AI.

It starts with the system's ability to see what's around it: to take in visual data much as the human eye does, and relay that data to the artificial intelligence system the way the eye delivers its information to the brain.

Then the digital system has to make sense of those inputs, which is easier said than done.
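The two stages described above, capturing visual data and then making sense of it, can be sketched in miniature. This is a purely illustrative toy (not any of the awardees' actual systems): the "image" is just a grid of brightness values, and the interpretation step is a simple threshold-and-count.

```python
# Toy two-stage "computer vision" pipeline: sense, then interpret.
# Purely illustrative; real systems use cameras and trained models.

def sense(scene):
    """Stage 1: 'see' the scene as a grid of brightness values (0-255)."""
    return [[pixel for pixel in row] for row in scene]

def interpret(image, threshold=128):
    """Stage 2: make sense of the pixels, e.g. flag dark regions as obstacles."""
    dark = sum(1 for row in image for p in row if p < threshold)
    total = sum(len(row) for row in image)
    return "obstacle ahead" if dark / total > 0.25 else "path clear"

scene = [
    [200, 210, 90, 80],
    [205, 215, 85, 75],
    [198, 220, 95, 70],
]
print(interpret(sense(scene)))  # the dark right half trips the threshold
```

The hard part in practice is entirely in stage two; replacing that threshold with a model that truly understands a scene is the work these researchers have spent careers on.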

However, we’ve seen computer vision evolve quickly and decisively over the past few years. Optical character recognition led to systems being able to read handwritten checks for automatic ATM deposits, which eliminated millions of hours of labor in the banking industry.
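The kind of character recognition behind check reading can be hinted at with a tiny template matcher: compare an unknown digit bitmap against stored templates and pick the closest. Real OCR relies on models trained on far richer data; the templates and "scan" below are invented for illustration.

```python
# Minimal template-matching "OCR" sketch: classify a 3x5 bitmap digit
# by counting pixel disagreements against known templates.
# Templates and input are hypothetical, for illustration only.

TEMPLATES = {
    "1": ["010",
          "110",
          "010",
          "010",
          "111"],
    "7": ["111",
          "001",
          "010",
          "010",
          "010"],
}

def distance(a, b):
    """Count mismatched pixels between two bitmaps."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def recognize(bitmap):
    """Return the digit whose template is closest to the bitmap."""
    return min(TEMPLATES, key=lambda d: distance(bitmap, TEMPLATES[d]))

scan = ["111",
        "001",
        "010",
        "011",   # one pixel of noise
        "010"]
print(recognize(scan))  # prints "7" despite the noisy pixel
```

Tolerating noise like that stray pixel, at scale and on messy handwriting, is what made automated check reading commercially viable.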

But that was just the start. Now individuals like Rus are doing more work on how vision systems will run autonomous vehicles or, in Kumar's case, operate cooperative fleets of aerial drones and other agent-driven hardware.

The Road Ahead

As mentioned, a lot of the application of computer vision leads toward making self-driving vehicles more viable, which is a big job. It's one of the most prominent test cases for this technology, both because it's very complex and because it has to be right 100% of the time. You could say that self-driving vehicle systems are mission-critical because they bear directly on human safety.

I’ve been privileged to be affiliated with the work that Rus is doing at the MIT lab with others, and in some ways, I have a front row seat to this kind of pioneering. I know the speed with which these researchers moved from simple rote computer vision tied to avoidance of physical obstacles, to being able to label training data in ways that show the computer more than just whether an obstacle is directly in front of it.

In other words, the nuance and sophistication of computer vision has increased rapidly, so that now we can truly build vehicles that navigate intelligently.
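The shift described above, from a binary "obstacle or not" signal to richer labeled training data, can be sketched with two hypothetical label formats and a toy planner. Both label schemes and the planner logic are invented for illustration; real autonomous-driving datasets and planners are far more detailed.

```python
# Sketch of how training labels can grow richer over time.
# Both label formats below are hypothetical, for illustration only.

# Early-style label: just "is something in front of me?"
binary_label = {"obstacle": True}

# Richer label: what the object is, where it is, and whether it moves,
# which lets a planner navigate around it rather than merely stop.
rich_label = {
    "class": "pedestrian",
    "distance_m": 12.5,
    "bearing_deg": -8.0,
    "moving": True,
}

def plan(label):
    """Toy planner: richer labels unlock more nuanced decisions."""
    if "class" not in label:
        return "stop" if label["obstacle"] else "go"
    if label["class"] == "pedestrian" and label["distance_m"] < 20:
        return "slow and yield"
    return "steer around"

print(plan(binary_label))  # binary data only supports stop/go
print(plan(rich_label))    # richer data supports nuanced behavior
```

The point is not the code but the contrast: what a vehicle can decide is bounded by what its training data can express.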

Operating on the Human Body

What has computer vision brought the medical world?

One major contribution is the ability of artificial intelligence systems to perform radiology evaluation and diagnosis.

Using probabilistic models and training data, these programs have been able to assist doctors in remarkable ways.
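The "probability" at work here can be illustrated with the textbook Bayes-rule calculation that underlies many diagnostic aids: updating the chance of disease after a positive imaging finding. The prevalence, sensitivity, and specificity figures below are made up for illustration.

```python
# Bayes' rule, the textbook probability tool behind many diagnostic
# aids: update the chance of disease after a positive finding.
# All numbers below are invented for illustration.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding) via Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# A rare condition (1% prevalence) with a fairly accurate detector.
p = posterior(prior=0.01, sensitivity=0.9, specificity=0.95)
print(f"{p:.2%}")  # even a 'good' test leaves substantial uncertainty
```

This is why such systems assist rather than replace radiologists: a positive flag on a rare condition still warrants human judgment and follow-up.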

However, they're also being used in all of the equipment that goes into minimally invasive surgery.

In particular, Kanade's work has been notably helpful in building robotic cameras and tools that can move inside the body's tissues and blood vessels.

Again, we can see how many dangerous, labor-intensive and invasive surgical procedures that once required inpatient stays have become routine outpatient procedures.

AI Research Governance

This one is a little different, but it’s critically important to all of us as we move into the AI age.

In Rus’s books, ‘The Heart and the Chip’ and ‘The Mind’s Mirror’ in particular, she goes into the obligation to make sure that research is proceeding on the right vectors – that we are working on systems that will help, not hinder, the goals of humanity.

“I hope that more people will deepen their understanding of AI in a way that is relevant to their life and work,” Rus writes. “Our world leaders and lawmakers would be well served by having a broad understanding of how AI works if they are going to oversee the economic, societal and political impacts of the technology, along with concerns around bias and data security.”

In other words, technologies that are so powerful need the right kind of framing in order to turn out the results that we’re looking for as people. That doesn’t necessarily just happen by magic, and whether it’s a swarm of drones, a medical robot or a self-driving car, all of these implementations have to be rigorously monitored for the end goal and the end result.

I'm immensely pleased that a member of the MIT faculty is featured in this list of experts, and that the board saw fit to give the John Scott Award to these three amazing professionals. It also shows a lot about what's important in the halls of learning, and where a lot of good people are working to improve our future.
