At the corner of Iris Ave and Folsom in North Boulder, my Model 3 Tesla was driving itself when it came upon two human drivers in the lane just in front of us. All three of us were turning left — two flesh-and-blood drivers and my robot — when the two humans violated a basic traffic rule by swinging wide and turning into the right-hand lane.

The Tesla hugged the inside lane, as the driver manual indicates is the proper rule of the road. I wished my teenage son had been watching.

Milo is 15, with a learner’s permit. It is my fantasy that when he gets his license, he will develop the memory, rote behaviors and mundane habits that have shown themselves possible in the short life span of self-driving cars. On the other hand, my son is less likely than the Tesla’s software to suddenly disengage and just stop steering altogether (requiring me to take over).

The machine and the adolescent each have brains still under development: a human mind governed by millions of years of evolutionary biology, and algorithms shaped by decades of engineering. Seen through the lens of cognition and neuroscience, the contrast says a lot about the next generation of drivers.

For now, only the Tesla (not my son) has been involved in wrecks. The federal government in April reported that Tesla’s Autopilot technology was involved in 956 crashes between January 2018 and August 2023, including 29 fatalities. The National Highway Traffic Safety Administration report found that “Autopilot’s system controls may be insufficient for a driver assistance system that requires constant supervision by a human driver.”

Also recently, Elon Musk concluded talks in Beijing to clear the way with regulators to bring Autopilot to Chinese roads. Lots of other companies are developing their own versions — General Motors, BMW, Mercedes, Lincoln, Kia, and others — most of which take some control in limited situations, like on the highway.

In short, don’t doubt that these cars will be on the road soon enough, as surely as my son will have his license within a year. He inherits a daunting task; statistically speaking, driving is the most dangerous regular activity most of us will do in our lives.

After years of reporting on driver safety, I can say assuredly why we face such risk: a mismatch between the capacity of the human brain and the complexity of the roadway, which presents an onslaught of fast-changing stimulation, inputs and risk. Cars, pedestrians and cyclists dive in and out of our frame; our brains fatigue, get distracted, miss an input. We are human, with biologically limited attention and cognition, piloting a car that becomes a missile at highway speeds.

From this perspective, I watch my son learn in our Toyota Highlander. To stay focused, he prefers the radio off and the volume of parental commentary low. His solemn task shows in his tight grip on the wheel, the forward lean of his body as if to get a little more connected to what’s happening outside the windshield. When he gets in a groove, I ask him to identify the various inputs around him: the car in his blind spot, the cyclist turning right without signaling, the pedestrian looking down at a phone. He gets tired. Driving safely takes effort. For all the control and adolescent glory that comes with taking the wheel, sometimes, he’d just as soon not.

When it comes to supervising our leased Tesla, there is a single aspect of the technology that, to me, highlights the study in contrasts: On the screen, where the map is displayed, an animation shows the surrounding inputs the Tesla picks up with its multiple cameras and sensors. Cars materialize around us, intersections appear as we approach, dotted with the presence of cyclists or pedestrians. It seemingly sees everything, everywhere, all at once, processing multiple streams of information in parallel. For example, when the Tesla “sees” something on the right, it does not do so at the expense of seeing something to the left; my son can only look one way at a time.

The algorithm drives at night, and in the rain. It is exceedingly rule-driven. In fact, its strict adherence has frustrated other drivers — and my teen passengers — by rigorously following the speed limit. One teen passenger said to me: “It’s sus,” meaning suspect, “because it only goes 20 miles an hour.” We were in a school zone.

“Even though automated vehicles are not perfect, they work surprisingly well and are getting better all the time,” I was told by David Strayer, a cognitive neuroscientist at the University of Utah and one of the world’s foremost experts on driver distraction. “We really need to focus on relative risk,” he added, meaning computers can save many more lives than they risk taking, largely because of their cognitive advantage.

“They don’t get distracted. They don’t get fatigued. They don’t get drunk or high. They speed only if the driver tells them to.”

They do glitch, though. On occasion over the few months I’ve been trying the technology, the Tesla’s software has suddenly disengaged, requiring me to immediately seize control. I feel like an attendant at a high-speed Disney ride that inexplicably jumps the tracks and heads for the Burger Hut. It is jarring. Take over! Save us all!

To be fair, the system warns me constantly to keep my hands on the wheel and be prepared to take over. Sometimes, the auto-driving disengages because I tug too hard on the wheel, suggesting I’m inclined to take over. Other times when it bugs out, who knows what happened inside that mysterious algorithm? Did a one and a zero get crossed?

In its latest report on Tesla crashes, the federal government said the car continues to rely too much on human supervision, and, also, that humans aren’t always on the job. Crash outcomes can be severe “when a driver is disengaged with the Tesla vehicle operating in Autopilot and the vehicle encounters a circumstance outside of Autopilot’s object or event detection response capabilities.”

What a potent statement: the car and the human must pay attention when neither of their brains is yet fully suited for the job.

Still, for the moment, I’d trust my son to drive me home before I’d trust the Tesla; I just don’t know when the car will check out, or why, or what I could possibly do to discourage the glitch.

Soon enough, humans will relinquish the controls. When that happens, I’m not sure it will be because the robots have virtually limitless cognition and will keep us safer, true though that may be. The real reason that robots will take the wheel is because humans have better things to do than drive. Stream a show, stretch our legs, nap. At that point, when machines rule the road, I can tell my teen to go ahead and watch TikTok behind the wheel, although I guess at that point I wouldn’t mind the development of self-watching social media.
