
Tesla Admits It Still Needs Human Drivers for Robotaxi — And Claims That's Better Than Waymo | Taha Abbasi

In a stunning regulatory filing that’s sending shockwaves through the autonomous vehicle industry, Taha Abbasi breaks down Tesla’s February 13 submission to the California Public Utilities Commission (CPUC) — a document that quietly admits Tesla’s “Robotaxi” service still relies on both in-car human drivers and domestic remote operators. But here’s the twist: Tesla argues this multi-layered human supervision model is actually superior to Waymo’s fully driverless approach.
The filing, submitted in CPUC Rulemaking 25-08-013, reveals the operational reality behind Tesla’s ride-hailing ambitions and provides fascinating insight into two fundamentally different philosophies of autonomous transportation. As Taha Abbasi has been tracking for months, the gap between marketing language and operational reality in the robotaxi space is often wider than the public realizes.
Tesla’s Two-Layer Human Safety Net
The CPUC filing makes clear that Tesla’s current ride-hailing service operates with what amounts to a belt-and-suspenders approach to safety. Tesla’s vehicles, operated under a CPUC charter-party carrier (TCP) permit, use FSD (Supervised), which is classified as a Level 2 Advanced Driver Assistance System (ADAS). By definition, this requires a licensed human driver behind the wheel at all times, actively monitoring the road and ready to take over at any moment.
But Tesla doesn’t stop there. The company also employs domestically located remote operators in both Austin and the San Francisco Bay Area. These operators hold DMV-mandated U.S. driver’s licenses, undergo extensive background checks and drug and alcohol testing, and receive specialized training. This creates two complete layers of human oversight: one in the car and one monitoring remotely.
Compare this to Waymo’s approach: no driver in the vehicle at all. Waymo uses remote assistance operators who can provide guidance in ambiguous situations — construction zones, unusual road conditions — but the vehicle fundamentally drives itself. Waymo’s operators don’t control the car; they confirm whether it’s safe to proceed in edge cases.
The San Francisco Blackout: Tesla’s Ace in the Hole
Tesla’s most compelling argument centers on the December 20, 2025, San Francisco power outage. During that blackout, Waymo’s autonomous vehicles overwhelmed the company’s remote assistance teams with requests for confirmation at darkened intersections. Vehicles stopped in traffic lanes and intersections, creating gridlock.
Tesla points out that its ADAS-equipped TCP vehicles “were not impacted by the outage and completed all rides that day without interruption.” The reason is straightforward: human drivers behind the wheel navigated the situation the way any competent driver would — using judgment, visual assessment, and common sense.
As Taha Abbasi has previously analyzed, this incident exposes a genuine vulnerability in fully driverless systems: they can be overwhelmed by scenarios outside their training data, and when they fail, there’s no human backup in the vehicle to seamlessly take over.
The Philosophical Divide
What makes this filing remarkable is that Tesla isn’t embarrassed about having human drivers. Instead, the company is reframing the entire debate. Tesla’s implicit argument is that the safest path to full autonomy isn’t removing humans immediately, but gradually reducing their role as the AI improves. The human driver isn’t a crutch — it’s a feature.
This is a significant strategic pivot from Elon Musk’s previous messaging, which emphasized the timeline for fully unsupervised robotaxis. The CPUC filing suggests Tesla’s actual operational strategy is more measured: use human-supervised vehicles to build the service, collect more real-world data, and transition to full autonomy when the technology — and the data — justify it.
Taha Abbasi sees this as potentially the most honest assessment Tesla has given of its autonomous driving timeline. The company with 8 billion FSD miles is saying, in effect, “We’re not ready to remove the human yet, but our approach is safer than the alternative while we get there.”
What This Means for the Industry
The implications extend far beyond Tesla and Waymo. Every company pursuing autonomous ride-hailing must answer a fundamental question: is it safer to deploy fully driverless vehicles with remote oversight, or supervised vehicles with in-car human drivers? Tesla’s filing argues the latter, at least for now.
For investors and consumers watching the robotaxi race, this filing provides clarity. Tesla’s path to unsupervised autonomy will be gradual, data-driven, and — if this filing is any indication — more conservative than the hype suggests. That’s not necessarily bad news. As the vision-only debate continues, having a human safety net while accumulating billions more miles of data may ultimately prove to be the winning strategy.
The robotaxi future is coming. The question was never if, but how — and Tesla just told us its answer: carefully, with humans in the loop, and with data to prove every step forward.
About the Author: Taha Abbasi is a technology executive, CTO, and applied frontier tech builder. Read more on Grokpedia | YouTube: The Brown Cowboy | tahaabbasi.com


