Two recent stories are generating excitement that Tesla may be on the verge of having working self-driving. First was the Feb 22 deployment of Tesla robotaxis serving the public in Austin with, as Elon Musk said, “no safety monitor in the car,” and which FSD chief Ashok Elluswamy called “a few unsupervised vehicles.” Second was the announcement by two different groups of coast-to-coast drives across the USA using Tesla FSD with no interventions.
These two events have led many to declare that Tesla has “done it,” but a deeper examination suggests this is quite unlikely. In fact, the results are consistent with Tesla still needing to improve by a factor of 100 to 1,500 to have actually done it. That gap is extreme, almost incredible, but the principles behind it are fairly simple.
The Vacant Robotaxis
If, as Elluswamy says, these vehicles are actually running “unsupervised,” that would be a big achievement indeed, but this does not appear to be the case, and Musk’s more limited “in the car” language hints at why. Passengers taking rides in these few vehicles have reported one or two Tesla “chase cars” following behind them, and it’s hard to see what a chase car would do other than supervise.
They definitely should be. I don’t believe any company, including Waymo, Zoox, May, Cruise, Baidu or any of the others, deployed its first cars with nobody in them without remote supervision in place, and most of them admit that. It would be foolish not to, crazy to let your new baby go out in the world on its own and just hope it comes back fine. When it’s just a small number of cars, you’re going to watch, and you’re going to have the same emergency stop button Tesla has given its right-seat safety drivers, just available remotely. Moving the supervision out of the car and to a remote location is a step and a good sign of progress, but it’s not the much bigger step that going truly unsupervised is.
Chinese companies actually are required to do this under the terms of their test permits, and as they mature, their permits are modified to dictate just how many remote staff they must have per vehicle. It starts at 1:1 (full-time remote monitoring) and graduates to two vehicles per staffer and beyond.
Once it’s not 1:1, it is no longer remote supervision but remote part-time assistance, and it never goes away; the ratio just gets larger. Waymo let the ratio get too large and ran out of remote assistance staff during a recent power outage in San Francisco.
But for now, Tesla will be at 1:1, and Zoox probably is also still there. Most companies don’t talk much about this, nor do they make a big deal of what actually is a big deal: dropping below full-time monitoring. In the public perception, getting the human out of the car is the biggest deal, and it’s important, but it’s not the big cliff.
Tesla, it is likely, has more than remote monitoring. Based on job ads Tesla has run, and photos of their control rooms, they appear to be building remote driving, where a remote staffer actually has a wheel and pedals and can remotely drive the car. This is not new; several companies have built it. One, a German company called Vay, operates remotely driven cars in a few cities, including Las Vegas. A remote driver does not just do remote supervision of a driving system; they can take over and drive the car in tough situations, and the technology is designed to work even over poor-quality data networks. If Tesla is using chase cars, they probably don’t have remote driving in operation yet. Tesla has been known to use remote operation for its Optimus robots, and for not disclosing in advance that they are doing so.
Cross-Country “Cannonball” Runs
In December, a driver named David Moss completed a 2,700-mile coast-to-coast drive on Tesla FSD, something Elon Musk famously predicted would be common by 2017. Alex Roy, a well-known driver and automotive podcaster from the “Autonocast,” did a similar trip last week on the famous “Cannonball Run” route (in reverse), also without intervention. Tesla FSD recently added the ability to drive to charging stations and park in the stalls, so the humans only had to plug in; the cars did all the driving.
Impressive. Or is it? For some time, the team at FSD Community Tracker has been attempting to gather crowdsourced statistics on how often Tesla FSD needs a “critical intervention.” While their group is small, and the determination of a critical intervention is somewhat subjective, this is the only data we have; Tesla refuses to release its own, far superior data. For version 14.2.2.3, the tracker reports about 1,400 miles per critical disengagement on city streets, and about three times that on highways. (Non-critical events are reported far more frequently.)
Since the Cannonball Run route is almost entirely highway, a 2,700-mile trip is not at all remarkable. In fact, even if the vehicles were having a critical event every 1,000 miles on the highway, Teslas drive so many miles that a 3,000-mile run without such an event would be completed many times each day, if people were attempting them. Based on the FSD Tracker numbers, there is nothing surprising about receiving multiple anecdotal reports of very long perfect drives. Since robotaxi trips average around 10 miles, you would expect each robotaxi to complete about 140 flawless trips in a row between rare applications of the emergency stop button by a safety driver or chase car.
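If critical events arrive roughly at random along the road, a quick back-of-envelope calculation shows why long clean runs are expected. This is a sketch using the FSD Community Tracker figures above; the Poisson (memoryless) model of event arrivals is my assumption:

```python
import math

# FSD Community Tracker figures cited above (v14.2.2.3) -- assumptions
MILES_PER_CRITICAL_HIGHWAY = 3 * 1400   # ~4,200 mi between highway events
MILES_PER_CRITICAL_CITY = 1400          # city-street figure
TRIP_MILES = 10                          # typical robotaxi trip length

def p_clean(distance, mean_interval):
    """Chance of driving `distance` miles with no critical event,
    modeling events as a Poisson process with the given mean interval."""
    return math.exp(-distance / mean_interval)

print(f"P(2,700-mile highway run, no critical event): "
      f"{p_clean(2700, MILES_PER_CRITICAL_HIGHWAY):.0%}")    # ~53%
print(f"P(one 10-mile city trip is flawless): "
      f"{p_clean(TRIP_MILES, MILES_PER_CRITICAL_CITY):.1%}")  # ~99.3%
print(f"Expected flawless trips between events: "
      f"{MILES_PER_CRITICAL_CITY / TRIP_MILES:.0f}")          # 140
```

In other words, at the tracker’s reported rates, roughly half of all attempted coast-to-coast highway runs would succeed with no critical event at all.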
Alex Roy’s run is actually the more impressive feat, because it’s less of an anecdote announced after the fact. They set out to do it and would have reported either success or failure, which removes the survivorship bias. That’s different from a situation where one person reports a very long trip and you don’t see the 20 reports of people who didn’t pull it off.
So perhaps Tesla FSD can now do a 3,000-mile freeway trip a decent fraction of the time. That sounds great, but it’s actually not so impressive. Waymo’s auditors have reported Waymos going not 1,500 miles but 2.3 million miles between “liability events.” (A liability event will be rarer than a critical disengagement, but sadly we don’t get identical metrics.) If the two metrics were the same, Tesla would need to improve by a factor of about 1,500 to reach Waymo’s level. Even if you judge there are 15 critical disengagements for every liability event (where you do damage to something or someone), Tesla still needs to improve 100-fold to get there.
Waymo’s pretty good, of course. Elon Musk has declared he wants to be “twice as good” as human drivers. That’s an easier bar. Human drivers have a police-reported crash about every 500,000 miles, and a minor crash about every 150,000 miles. So maybe Tesla only needs to get 20 times better.
People have a hard time with this math. They see the car make 200 perfect drives in a row, and conclude it’s “almost there” when that statistic actually means it’s only 1% of the way.
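The gap arithmetic above can be laid out explicitly. This is a sketch using the figures cited in this piece; the 15:1 ratio of critical disengagements to liability events is the illustrative assumption made above, not a measured number:

```python
# Figures cited above; the 15:1 conversion factor is an assumption.
tesla_miles_per_critical = 1500          # rough FSD Tracker-scale figure
waymo_miles_per_liability = 2_300_000    # audited Waymo figure

# If the two metrics measured the same thing:
naive_gap = waymo_miles_per_liability / tesla_miles_per_critical
print(round(naive_gap))     # 1533 -- the "factor of 1,500"

# Allowing 15 critical disengagements per actual liability event:
adjusted_gap = naive_gap / 15
print(round(adjusted_gap))  # 102 -- the "100-fold" improvement still needed
```

Either way the conclusion holds: a long string of clean drives is what you would see at 1% of the goal, not at the finish line.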
Of course, Tesla’s private internal FSD build could be much better than this. Indeed, it could be “almost there,” because you can have a supervised car that’s 99.9% of the way there and ready to drop supervision tomorrow, and also a supervised car that’s only 0.1% of the way there. It’s very hard for outsiders to tell the difference. Tesla has the data, though, and the public wonders: if it’s so great, why not release it? Instead of keeping it close to the vest, they should shout it from the rooftops. They don’t shy away from boasts and bold predictions, not in the slightest. Why are they shy about data? Why do they keep doing tricks that mislead, like declaring the cars unsupervised while chase cars follow them, tele-operating Optimus robots without disclosure, or publishing misleading crash statistics? This hasn’t inspired trust and confidence.