I strongly believe LIDAR is the way to go and that Elon's vision-only move was extremely "short-sighted" (heheh). There are many reasons, but the one that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason.
This is because the vision system thinks there is something obstructing its view when in reality it is usually bright sunlight -- and sometimes, absolutely nothing that I can see.
The wipers are, of course, the most harmless way this goes wrong. The more dangerous type is when it phantom-brakes at highway speeds with no warning on a clear road and a clear day. I've had multiple other scary incidents of different types (swerving back and forth at exits is a fun one), but phantom braking is the one that happens quasi-regularly. Twice when another car was right behind me.
As an engineer, this speaks volumes to me about what's going on in the computer vision system, and it's pretty scary. Basically, the system detects patterns that it infers as its vision being obstructed, and so it is programmed to brush away some (non-existent) debris. Like, it thinks there could be a physical object where there is none. If this were an LLM, you would call it a hallucination.
But if it's hallucinating crud on a windshield, it can also hallucinate objects on the road. And it could be doing it every so often! So maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking. And those filters are pretty damn good -- I mean, the technology is impressive -- but they can probabilistically fail, resulting in things that we've already seen, such as phantom braking, or worse, driving through actual things.
This raises so many questions: What other things is it hallucinating? And how many hardcoded guardrails are in place against these edge cases? And what else can it hallucinate against which there are no guardrails yet?
And why not just use LIDAR that can literally see around corners in 3D?
Engineering reliability is primarily achieved through redundancy.
There is none with Musk's "vision only" approach. Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan -- the car is effectively driving blind.
Driving is a dynamic activity that involves a lot more than just vision. Safe automated driving can use all the help it can get.
I agree with everything you're saying; but even outside of Tesla, I'd just like to remind people that LIDAR as a complement to vision isn't at all straightforward. Sensor fusion adds real complexity in calibration, time sync, and modeling.
Both LIDAR and vision have edge cases where they fail. So you ideally want both, but then the challenge is reconciling disagreements with calibrated, probabilistic fusion. People seem to be under the mistaken impression that vision is dirty input and LIDAR is somehow clean, when in reality both are noisy inputs with different strengths and weaknesses.
I guess my point is: Yes, 100% bring in LIDAR, I believe the future is LIDAR + vision. But when you do that, early iterations can regress significantly from vision-only until the fusion is tuned and calibration is tight, because you have to resolve contradictory data. Ultimately the payoff is higher robustness in exchange for more R&D and development workload (i.e. more cost).
The same reason why Tesla needed vision-only to work (cost & timeline) is the same reason why vision+LIDAR is so challenging.
The primary benefit of multiple sensor fusion from a safety standpoint isn't an absolute decrease in errors.
It's the ability to detect sensor disagreements at all.
With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).
If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
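To make that concrete, here's a minimal sketch of the idea in Python (thresholds, names, and the fallback are all invented for illustration, not from any real AV stack): plain disagreement between two modality estimates, with no fusion at all, is already a usable safety signal.

```python
# Minimal sketch: cross-modality disagreement as a safety signal.

def ranges_disagree(vision_dist_m: float, lidar_dist_m: float,
                    abs_tol_m: float = 2.0, rel_tol: float = 0.15) -> bool:
    """True if the two distance-to-object estimates diverge by more than
    an absolute or relative tolerance (both tolerances made up here)."""
    diff = abs(vision_dist_m - lidar_dist_m)
    return diff > max(abs_tol_m, rel_tol * min(vision_dist_m, lidar_dist_m))

def control_step(vision_dist_m: float, lidar_dist_m: float) -> str:
    # No fusion needed for this check: mere disagreement is the signal.
    if ranges_disagree(vision_dist_m, lidar_dist_m):
        return "SAFE_MODE"  # e.g. slow down, widen gaps, alert the stack
    return "NOMINAL"

print(control_step(40.0, 41.5))  # NOMINAL
print(control_step(40.0, 25.0))  # SAFE_MODE
```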
But you'd think that the budget config of the Boeing 737 MAX would have taught us that tying safety-critical systems to single sources of truth is a bad idea... (in that case, a critical modality fed by a single physical sensor)
> With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).
"A man with a watch always knows what time it is. If he gains another, he is never sure"
Most safety critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which is right?
EDIT:
> If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop. What do you do?
> Two is kinda useless: if they disagree, which is right?
They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
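For the Gaussian case this is just inverse-variance weighting. A toy sketch (assuming independent Gaussian errors, which real systems only approximate):

```python
# Toy joint-probability fusion of two noisy range estimates with
# independent Gaussian errors. The fused variance is always smaller than
# the better sensor's variance: two sensors beat one, no third vote needed.

def fuse(mu1, var1, mu2, var2):
    """Product of two Gaussians: inverse-variance weighted mean."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * mu1 + w2 * mu2) / (w1 + w2), 1.0 / (w1 + w2)

# Vision says 42 m (sigma 3 m); lidar says 40 m (sigma 0.5 m).
mu, var = fuse(42.0, 3.0**2, 40.0, 0.5**2)
print(f"fused: {mu:.2f} m, sigma {var**0.5:.2f} m")  # ~40.05 m, sigma ~0.49 m
```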
> You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
Any realistic system would see them long before your eyes do. If you are so worried, override the AI in the moment.
> They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
Lots of safety critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems.
It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved.
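And a 2-of-3 voter really is about as simple as safety logic gets. A toy version (real voters handle timing windows, typed channels, and persistent-fault bookkeeping; this is just the shape of the thing):

```python
# Toy 2-of-3 voter in the spirit of classic redundant flight computers.

def vote(a: float, b: float, c: float, tol: float = 0.5):
    """Return an agreed value if at least two channels match within tol,
    else None (total disagreement -> fail safe)."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2.0  # agreement: the outlier is outvoted
    return None

print(vote(101.0, 100.8, 57.3))  # ~100.9, bad channel outvoted
print(vote(101.0, 88.0, 57.3))   # None -> drop to safe mode
```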
You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains.
> If you are so worried, override the AI in the moment.
This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind.
This might seem like an edge case, but edge cases matter when failure might kill somebody.
Isn't the historical voting pattern more of a legacy thing, dictated by the limited edge compute of the past, than necessarily a best practice?
In many domains I see a tendency to oversimplify decision-making algorithms for the convenience of human understanding (e.g. voting rather than developing a joint probability distribution in this case; supply chain and manufacturing in particular seem to love rules of thumb) rather than use the better algorithms that modern compute enables for higher performance, safety, etc.
This is an interesting question where I do not know the answer.
I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind:
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them.
We're trying to build vehicles that are totally autonomous, though. How do you grab the wheel of the new Waymos without steering wheels? Especially if you're in the back seat staring at Candy Crush.
Waymos are safer, and drive more defensively than humans. There is no way a Waymo is going to drive aggressively enough to get itself into the trolley problem.
This situation isn't going to happen unless the vehicle was traveling at unsafe speeds to begin with.
Cars can stop in quite a short distance. The only way this could happen is if the pedestrian was obscured behind an object until the car was dangerously close. A safe system will recognize potential hiding spots and slow down preemptively - good human drivers do this.
"Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds.
I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop.
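The driver's-ed numbers bear this out. As a rough check, assuming a ~1.5 s reaction time and ~8 m/s² of dry-road braking (textbook-ish figures, not measurements):

```latex
d_{\text{stop}} = v\,t_{\text{react}} + \frac{v^2}{2a}
\approx 27.8 \times 1.5 + \frac{27.8^2}{2 \times 8}
\approx 42 + 48 \approx 90\ \text{m}
\qquad (v = 100\ \text{km/h} \approx 27.8\ \text{m/s})
```

That's most of a football field, and on a wet road the braking term roughly doubles.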
> > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
> This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
> What do you do?
Go into your failure mode. At least you have a check to indicate a possible issue with 2 signals.
Exactly my point. That you know the systems disagree is a benefit, compared to a single system.
People are underweighting the alternative single system hypothetical -- what does a Tesla do when its vision-only system erroneously thinks a pedestrian is one lane over?
> This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
This is why good redundant systems have at least 3... in your scenario, without a tie-breaker, all you can do is guess at random which one to trust.
That's a good point, but people do need to keep in mind that many engineered systems with three points of reference have three identical points of reference. That's why it works so well: a common frame of reference (i.e. you can compare via simple voting).
For example jet aircraft commonly have three pitot static tubes, and you can just compare/contrast the data to look for the outlier. It works, and it works well.
If you tried to do that with e.g. LIDAR, vision, and radar with no common point of reference, solving for trust/resolving disagreements is an incredibly difficult technical challenge. Other variations (e.g. two vision + one LIDAR) do not really make it much easier either.
Tie-breaking during sensor fusion is a billion+ dollar problem, and will always be.
> If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
Also, this is probably when Waymo calls up a human assistant in a developing-country callcentre.
Saw that happen a week ago, actually. Non-sensor problem, but a Waymo made a slow right turn too wide, approached the left turning lane of cars, then safed itself by stopping, then remote assistance came online and extricated it.
> The same reason why Tesla needed vision-only to work (cost & timeline)
But vision only hasn't worked --- not as promised, not after a decade's worth of timeline. And it probably won't any time soon either --- for valid engineering reasons.
Engineering 101 --- *needing* something to work doesn't make it possible or practical.
The complexity argument rings hollow to me. It’s a bit like saying distributed databases are complex because you have to deal with CAP guarantees. Yes, but people still develop them because it has real benefits.
It was maybe a valid argument 10 years ago, but in 2025 many companies have shown sensor fusion works just fine. I mean, Waymo has clocked 100M+ miles, so it works. The AV industry has moved on to more interesting problems, while Tesla and Musk are still stuck in the past arguing about sensor choices.
Well, it's more like sensor fusion plus extensive human remote intervention, it seems: https://www.nytimes.com/interactive/2024/09/03/technology/zo... . Mind you, if it takes both LiDAR and call-centre workers to make self-driving work in 2025 and for the foreseeable future, that makes Tesla's old ambition to achieve it with neither look all the more hopeless.
Tesla robotaxis have teleoperation [1], which is worse than remote assistance others use because the operators have direct control. Yet they cannot fully remove safety personnel from the car.
> but then the challenge is reconciling disagreements with calibrated, probabilistic fusion
I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver.
If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance.
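In code, "just another input" really is about that unceremonious. A sketch (PyTorch, with made-up feature sizes; not anyone's actual architecture):

```python
# Sketch: each modality gets an encoder, everything is concatenated, and
# training decides the weighting. Shapes and names are invented.
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    def __init__(self, cam_feat=256, lidar_feat=128, extra=4, out=3):
        super().__init__()
        self.cam_enc = nn.Sequential(nn.Linear(1024, cam_feat), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Linear(512, lidar_feat), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(cam_feat + lidar_feat + extra, 128), nn.ReLU(),
            nn.Linear(128, out))  # e.g. steer, throttle, brake

    def forward(self, cam, lidar, extra):
        z = torch.cat([self.cam_enc(cam), self.lidar_enc(lidar), extra], dim=-1)
        return self.head(z)

net = FusionPolicy()
print(net(torch.randn(1, 1024), torch.randn(1, 512), torch.randn(1, 4)).shape)
# torch.Size([1, 3])
```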
It's really hard to accept cost as the reason when Tesla is preparing a trillion-dollar package. I suppose that can be reconciled if one considers the venture to be a vehicle (ha!) to shovel as much money as possible from investors and buyers into Elon's pockets; I imagine the prospect of being the world's first trillionaire is appealing.
Your comments on sensor fusion seem to describe the weird results of 2 informal ADAS (lidar, vision, lidar + vision, lidar + vision + 4d imaging radar, etc.) “tournaments” conducted earlier this year. There was an earlier HN post about it <https://news.ycombinator.com/item?id=44694891> with a comment noting “there was a wide range of crash avoidance behavior even between the same car likely due to the machine learning, and that also makes explaining the differences hard. Hopefully someone with more background on ADAS systems can watch and post what they think.”
Notably, sensor confusion is also an “unsolved” problem in humans, eg vision and vestibular (inner ear) conflicts possibly explaining motion sickness/vertigo <https://www.nature.com/articles/s44172-025-00417-2>
Urban Scenarios: “a massive, complex roundabout and another segment of road with a few unsignaled intersections and a long straight...The first four tests incorporated portions of this huge roundabout, which would be complex for human drivers, but in situations for which there is quite an obvious solution: don’t hit that car/pedestrian in front of you” <https://electrek.co/2025/07/29/another-huge-chinese-self-dri...>
I think you hit the nail on the head - obviously, once Tesla have saturated the potential of vision, they should bring in LiDAR if it can reasonably be added from a hardware point of view. Their current arguments make this clear - it would be surface-level thinking to add LiDAR and the kitchen sink now, complicating the system's evolution and axing scalability.
But we're far from plateauing on what can be done with vision - humans can drive quite well with essentially just sight, so we're nowhere near exhausting what can be done with it.
> Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan
So like a human driver. Problem is, automatic drivers need to be substantially better than humans to be accepted.
Just imagine if Tesla had subsidized passive LIDAR on every car they ship to collect data. Wow, that dataset would be crazy, and it would even improve their vision models by providing ground truth to train on. He's such a moron
Ummm, is it actually active with ADAS anywhere? Certainly not in the US.
>The EX90's LiDAR enhances ADAS features like collision mitigation and lane-keeping, which are active and assisting drivers. However, full autonomy (Level 3) is not yet available, as the Ride Pilot feature is still under development and not activated.
That's likely closer to reality now, but that's not counting the cost for R&D to add it to the car, any additional costs that come with it besides the LIDAR hardware, plus the added cost to install it.
All of that combined is probably closer to $1k than to $140.
And, again, that's - what - 10 years after Tesla originally made the decision to go vision only.
It wasn't a terrible idea at the time, but they should've pivoted at some point.
They could've had a massive lead in data if they pivoted as late as 3 years ago, when the total cost would probably be under $2.5k, and that could've led to a positive feedback loop, cause they'd probably have a system better than Waymo by now.
Instead, they've got a pile of garbage, and no path to improve it substantially.
Not all LIDARs are equal. Just because BYD is spending $140 on a LIDAR system does not mean it's the same quality as the Waymo system reported to cost $75k almost a decade ago, or, especially, the same quality as the ones in use today.
They might be!
But I doubt it.
I don't know enough about Tesla's cameras, but it's not implausible to think there are LIDARs of low enough quality that you'd be better off with a good quality camera for your sensor.
Again, I doubt this is the case with BYD's LIDARs.
But it's still worth pointing out, I think.
My point is, BYD's LIDAR system costing $x is only one small part of the conversation.
I would say a $140 LIDAR system that’s currently being used in production cars [somewhere] is better than a $0 non-existent LIDAR system. Pair a cheap LIDAR system with some nice cameras and perhaps you can make up much of the difference in software.
It goes very slow and it doesn't need to work with high resolution or long distances. It has plenty of time to average out noise.
Solid-state LIDAR is still a fairly new thing. LIDAR sensors were big, clunky, and expensive back when Tesla started their Autopilot/FSD program.
I googled a bit and found a DFR1030 solid-state LIDAR unit for 267 DKK (for one). It has a field of view of 108 degrees and an angular resolution of 0.6 degrees. It has an angle error of 3 degrees and a max distance of 300mm. It can run at 7.5-28 Hz.
Clearly fine for a floor-cleaning robot or a toy. Clearly not good enough for a car (which would need several of them).
Even if Tesla wasn't using LIDAR, I think they did still use radar and ultrasonic detection for a while, which I'm sure contributed to their models some.
LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this influences how they, and regulators, will think about self driving cars.
It's done in a roundabout way. Usually with a variation of "you had a bad experience because you are using the tool incorrectly, get good at prompting".
That's a response to 'I don't get good results with LLMs and therefore conclude that getting good results with them is not possible'. I have never seen anyone claim that they make no mistakes if you prompt them correctly.
"LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything."
You take issue with my response of:
"loads of DEVs on here will claim LLMs are infallible"
You're not really making sense. I'm not straw-manning anything, as I'm directly discussing the statement made. What exactly are you presuming I'm throwing a straw man over?
It's entirely valid to say "there are loads of supposed experts that don't see this point, and you're expecting the general public to?". That's clearly my statement.
You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part.
Do pay better attention please. And your entire last sentence is way over the line. We're not on reddit.
> And why not just use LIDAR that can literally see around corners in 3D?
Based on what I've read over the years: it costs too much for a consumer vehicle, it creates unwanted "bumps" in the vehicle visual design, and the great man said it wasn't needed.
Yes, those reasons are not for technology or safety. They are based on cost, marketing, and personality (of the CEO and fans of the brand).
Lidar is being manufactured in China at a volume of millions a year by RoboSense, Huawei, and Hesai. BOM cost is on the order of a few hundred dollars - slightly more than automotive radar. The situation is a lot different in 2025 than in 2017.
I’ve always wondered about LiDAR - how can multiple units sweep a scene at the same time (as would be the case for multiple cars driving close together, all using lidar)? One unit can’t distinguish return signals between itself and other units, can it?
I think it can - I think it might encode something in the beam, use a slightly different wavelength or even just pulse the laser so that the returns don't overlap if timed right.
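The pulse-coding version of that is a standard correlation trick. A toy sketch (a bipolar pseudorandom code standing in for a real emitter's modulation; actual automotive lidars combine coding, wavelength, and timing):

```python
# Each unit matched-filters for its own pseudorandom signature, so another
# unit's echo barely registers.
import numpy as np

rng = np.random.default_rng(0)
my_code = rng.choice([-1.0, 1.0], size=64)   # this unit's signature
other = rng.choice([-1.0, 1.0], size=64)     # a different car's code

signal = np.zeros(1000)
signal[300:364] += my_code                   # my echo arrives at sample 300
signal[500:564] += other                     # interfering echo at sample 500
signal += rng.normal(0, 0.5, size=1000)      # receiver noise

corr = np.correlate(signal, my_code, mode="valid")
print(int(np.argmax(corr)))                  # ~300: my return, theirs ignored
```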
>why not just use LIDAR that can literally see around corners in 3D?
LIDAR requires line-of-sight (LoS) and hence cannot see around corners, but RADAR probably can.
It's interesting to note that the all-time 2nd most popular post on Tesla is from 9 years ago, on its full self-driving hardware (2nd only to the controversial Cybertruck) [1].
>Elon's vision-only move was extremely "short-sighted"
Elon's vision was misguided because some of the technologists at the time, including him, seem to have really, truly believed that AGI was just around the corner (pun intended). Now most of the tech people have given up on the AGI claim, blaming the blurry definition of AGI, but for me the truly killer AGI application has always been fully autonomous level 5 driving with only human-level sensor perception, minus the LIDAR and RADAR. But the goal is so complex that I really, truly believe it will not be achieved in the foreseeable future.
[1] All Tesla Cars Being Produced Now Have Full Self-Driving Hardware (2016 - 1090 comments):
I rented a Tesla a while back and drove from the Bay to Death Valley. On clear roads with no hazards whatsoever, the car hit the brakes at highway speeds. It scared the bejeesus out of me! I was completely put off by the auto drive, and it derailed my plans to buy a Tesla.
The around-corners thing: when I saw demos of it seeing vehicles the driver can't even see... I wanted it for my non-self-driving car... it's just too big of an advantage to skimp out on.
I use FSD in my Model S daily to commute from SF to Palo Alto, along with most of my other Bay Area driving. It does a better job currently than most people, and it drives me 95% of the time now. I haven't had the phantom braking.
I'm in a 2025 with HW4, but its dramatic improvement over the last couple of years (I previously had a 2018 Model 3) increased my confidence that Elon was right to focus on vision. It wasn't until late last year that I found myself using it more than not; now I use it almost every drive, point to point (Cupertino to SF), and it just does it.
I think people are generally sleeping on how good it is, and the politicization means people are undervaluing it for stupid reasons. I wouldn't consider a non-Tesla because of this (unless it was a stick-shift sports car, but that's for different reasons).
Their lead is so crazy far ahead it's weird to see this reality and then see the comments on hn that are so wrong. Though I guess it's been that way for years.
The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.
If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off.
I'd really encourage the people here with a bias to dismiss to ignore the comments and just go try it out for yourself in real life.
This is extremely narrow minded.
As another commenter pointed out, you are driving on easy mode in terms of environment and where a majority of the training was done.
This is not a general solution, it is an SF one... at best.
Most humans also don't get in accidents or have problems with phantom braking within the timeframe that you mentioned.
Oh please - people excuse and dismiss major accomplishments; you could send a skyscraper to Mars and people on HN would still be calling you a fraud.
The Bay Area has massive traffic and complex interchanges; SF has tight, difficult roads with heavy fog. Sometimes there's heavy rain on 280. 17 is also non-trivial.
What Tesla has done is not trivial and roads outside the bay are often easier.
People can ignore this to serve their own petty cognitive bias, but others reading their comments should go look at it for themselves.
Autopilot is not FSD. It's akin to a regular carmaker's automatic braking system and lane-keep assist. If you're seeing it used dangerously, that's user error.
HW4 is really a game changer. I was absolutely floored by HW4 FSD during a recent test drive. Tesla is accomplishing some truly groundbreaking technical achievements here. But you wouldn't know it through all the Elon Musk noise (pro and con). I'd encourage anyone to take a test drive and put FSD through its paces. I went in with a super critical mindset and walked away stunned.
It also makes this horrible kind of sense that Elon would see them both as admirable, this idea that you're the only person who matters. Ordinary people exist only for you to exploit them, and have no intrinsic worth.
What's oddest about the wiper tech is that we've had automated wipers since at least the '70s. As a kid, my neighbor's Cadillac had them.
tl;dr: you can use optics to determine if there's rain on a surface, from below, without having to use any fancy cameras or anything, just a light source and light sensor.
If you're into this sort of thing, you can buy these sensors and use them as a rain sensor, either as binary "yes its rained" or as a tipping bucket replacement: https://rainsensors.com
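The detection logic is about as simple as it sounds. A toy version (the `read_ir` callable is a hypothetical stand-in for whatever ADC the sensor exposes; thresholds are made up):

```python
# Toy optical rain detector: an IR emitter bounces light off the inside of
# the glass; water on the outside lets light escape, so reflected intensity
# drops below the dry baseline.

DRY_BASELINE = 0.90   # reflected fraction with dry glass (made up)
RAIN_DROP = 0.15      # drop we'll call "wet" (made up)

def is_raining(read_ir) -> bool:
    return read_ir() < DRY_BASELINE - RAIN_DROP

print(is_raining(lambda: 0.88))  # dry-ish: False
print(is_raining(lambda: 0.60))  # wet glass leaks IR out: True
```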
Camera-only might work better if you used regular digital cameras along with more advanced cameras, like event-based cameras that send pixels as soon as they change brightness and have microsecond latency, and/or Single Photon Avalanche Diode (SPAD) sensors, which can detect single photons. Having the same footage from all 3 of these would enable some fascinating training options.
"There are many reasons but that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason."
Self-starting wipers use some kind of current/voltage measurement on the windshield, right - unrelated to self-driving? It's been around longer than Tesla - or are you just saying it's another random failure?
This and that. FUD this, FUD that. Tesla have communicated clearly why "adding" LiDAR isn't an improvement for a system with goals as high as theirs. Remember, no vision system yet is as good as humans are with vision, so obviously there's a lot to do with vision still.
Check this for a reference of how well Tesla's vision-only fares against the competition, where many have LiDAR. Keep it simple wins the game.
https://www.youtube.com/watch?v=0xumyEf-WRI
Is this like the same BS as Elon on an investor call recently?
One analyst asked about the reliability of Tesla’s cameras when confronting sun glare, fog, or dust. Musk claimed that the company’s vision system bypasses image processing and instead uses direct photon counting to account for “noise” like glare or dust.
This... is horseshit. Photon counting is not something you can do with a regular camera or any camera installed on a Tesla. A photon-counting camera doesn't produce imagery that is useful for vision. Even beyond that, it requires a closed environment so that you can, you know, count them in a controlled manner, not an open outside atmosphere.
It's bullshit. And Elon knows it. He just thinks that you are too stupid to know it and instead think "Oh, yeah, that makes sense, what an awesome idea, why is only Tesla doing this?" and are wowed by Elon's brilliance.
Sterling Anderson was the first autopilot director, and he was fired for insisting on Lidar. Elon sued Sterling Anderson, then hired the bootlick Karpathy to help him grease chumps.
The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to ultimately have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case.
I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.
If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off. His model is all this sort of first principles thinking, it's why his companies pull off things like starship. I wouldn't bet against it.
If humans had radar they would reverse into obstacles less often, and not be blindsided or T-boned as readily so long as their radar could still reach the object moving rapidly in their direction.
Elon is being foolish and weirdly anthropomorphic.
At that time Lidar was too expensive and ugly to be putting in every car. Robust Lidar for SAE level 4 autonomous vehicles is still not cheap and still pretty ugly.
Not too long ago there were over 5 dozen startups in the automotive grade lidar space. Lidar is now much cheaper and smaller, but still very conspicuous and still too expensive to be putting in every vehicle.
For decisions of this scale (ie, tens of years of development time, committing multiple products to a single unproven technology), the CEO really should be involved. Maybe they’ll just decide to take the recommendation of the SMEs, but it’s hard for me to imagine Elon had no say in it.
1. A human has a lot more options to deal with things like sun glare. We can move our head, use shade, etc. And when it comes to certain aspects of dynamic range, the human eye is still better than cameras. And most of all, if we lose nearly all vision, we are intelligent enough to simulate the behaviour of most objects around us and react safely for the next few seconds.
2. Human intelligence is much deeper than machine vision. We can predict a lot of things that machine visions have no hope to achieve without some kind of incredibly advanced multi-modal model which is probably many years out.
The most important thing is that Tesla/Elon absolutely had no way to know, and no reason to believe (other than as a way to rationalise a dangerously risky bet) that machine vision would be able to solve all these issues in time to make good on their promise.
Not only do we have options to deal with it, we understand that it's a vision artefact, and not something real. We understand objects don't vanish or appear out of nowhere. We understand the glare isn't reality but is obstructing our view of reality. We immediately understand we're therefore dealing with incomplete information and compensate for that. Including looking for other ways to see around the obstruction or fill in the gaps. Without even thinking about it, often.
The human brain is the result of literal billions of years of evolution, across trillions of organisms. The "just" in your "just a matter of sufficient learning data and training" is doing a lot of work.
If you have cheat codes, then why not just use them instead of insisting on principle that our eyes are good enough? We see Waymo use the cheat codes, oh no. We also only have binocular vision, so I guess Tesla is already okay with superhuman cheat codes.
We not only use our vision when driving but also our other senses. We can tell the sun is shining at us because it warms our skin. This all happens subconsciously.
Humans are vastly superior drivers in general; it's just that 50% of humans are bad drivers.
I think Elon's prediction was that LIDAR was too expensive and will stay too expensive.
In a sense he was right: LIDAR prices did not drop, and I wonder why that is.
$200 to enable better FSD vs a decade of struggle to get FSD only partially working with $20 cameras. Which one do you think is more expensive overall?
The fact that we still do not have a significant number of cars with LIDAR on our streets somewhat proves which approach the auto industry considers viable for business.
I am much more curious about the next ten years. If we can bring down the cost of a LIDAR unit into parity with camera systems[1], I think I know the answer. But I thought that 10 years ago and it did not happen so I wonder what is the real roadblock to make LIDAR cheap.
[1] Which it won't replace, of course. What it will change is that it makes the LIDAR a regular component, not an exceptionally expensive component.
You won't get rich with a $30,200 robotaxi; you won't even have a viable business. The game is the mass market, and there the usual unit of currency is not cents, it's tenths of cents.
Investments into re-engineering production to bring down cost is done when there is a market large enough for said product.
True self-driving is still a baby that needs to grow and cannot yet compete against an adult human with 30+ years of experience. As self-driving actually matures to that level, the market will grow.
Why do all of you think prices haven't come down? I can buy an AT128 from Hesai for a few hundred dollars in volume. It's higher performance than any spinning lidar I could buy in 2017.
In what universe is 'a few hundred dollars is way too much' for implementing full self-driving on an autonomous vehicle that moves like, and at the speeds of and in the spaces of, an automobile?
A two to four ton vehicle that can accelerate like a Ferrari and go over 100 mph, fully self-driving, and 'a few hundred dollars is way too much'.
Disagree. Even as they are dialing back the claims, which may or may not affect how people use the vehicles. These things respond too quickly for flaky senses based on human sensoriums.
Are you referring to Autopilot or FSD? Phantom braking has been a solved problem since the release of V12 FSD. As soon as a vision-based car is safer than a human, its flaws don't matter, because it will save lives.
Lidar is great for object detection. But it's not great for interpreting the objects.
It will stop you crashing into a traffic light. But it won't be able to tell the color of the light. It won't see the stripes on the road. It won't be able to tell signs apart. It won't enable AIs to make sense of the complex traffic situations.
And those complex traffic situations are the main challenge for autonomous driving. Getting the AIs to do the right things before they get themselves into trouble is key.
Lidar is not a silver bullet. It helps a little bit, but not a whole lot. It's great when the car has to respond quickly to get it out of a situation that it shouldn't have been in to begin with. Avoiding that requires seeing and understanding and planning accordingly.
Meanwhile, the competition who is using LiDAR has FSD cars. You're understating the importance of this sensor.
You can train a DL model to act like a LiDAR based on only camera inputs (the data collection is easy if you already have LiDAR cars driving around). If they could get this to work reliably, I'm sure the competition would do it and ditch the LiDAR, but they don't, so that tells us something.
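That camera-mimics-LIDAR setup is essentially sparse depth supervision. A sketch of one training step (PyTorch; the tiny conv net is a placeholder for a real monocular depth model):

```python
# Lidar returns, projected into the image, act as sparse ground-truth
# depth labels for a camera-only depth network.
import torch
import torch.nn as nn

depth_net = nn.Sequential(  # placeholder for a real depth architecture
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(depth_net.parameters(), lr=1e-4)

def train_step(image, lidar_depth, lidar_mask):
    """image: (B,3,H,W); lidar_depth: (B,1,H,W) projected point cloud;
    lidar_mask: 1 where a lidar return landed in that pixel."""
    pred = depth_net(image)
    # Only supervise pixels that actually have a lidar measurement
    # (assumes at least one return in the batch).
    loss = ((pred - lidar_depth).abs() * lidar_mask).sum() / lidar_mask.sum()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

img = torch.rand(2, 3, 64, 96)
depth = torch.rand(2, 1, 64, 96) * 80             # metres
mask = (torch.rand(2, 1, 64, 96) < 0.05).float()  # lidar is sparse
print(train_step(img, depth, mask))
```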
Isn’t this article about Tesla admitting their system is as good as it’s going to get? They’re changing their definition of FSD to pretty much “current state”.
Researchers had this knowledge in 2007, when the only cars to finish the DARPA Urban challenge were equipped with Velodyne 3D LIDAR. Elon Musk sent us back a decade by using his platform to ignorantly convince everyone it was possible with just camera alone.
For anyone who understands sensor fusion and the Kalman filter, read this and ask yourself if you trust Elon Musk to direct the sensor strategy on your autonomous vehicle: https://www.threads.com/@mdsnprks/post/DN_FhFikyUE
For anyone wondering, to a sensors engineer the above post is like saying 1 + 1 = 0 -- the truth (and science) is the exact opposite of what he's claiming.
I think you might be under-estimating the importance of not hitting things.
If you look at the statistics on fatal car accidents, 85%+ involve collisions with stationary objects or other road users.
Nobody's suggesting getting rid of machine vision or ML - just that if you've got an ML+vision system that gets in 1 serious accident per 200,000 miles, adding LIDAR could improve that to 1 serious accident per 2,000,000 miles.
Because LIDAR can detect the object at the beginning of the perception pipeline, whereas camera can only detect the object after an expensive and time consuming ML inference process. By the time the camera even knows there's an object (if it does at all) the LIDAR would have had the car hitting its brakes. When you're traveling 60 MPH, milliseconds matter.
Just to put numbers on it: at 60 mph (88 ft/s), 10 ms is just under a foot. I don't think that matters too much, but if we're talking 200 ms, that's nearly 18 ft, which is substantial. I have no idea how long the ML pipeline is, though.
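The conversion, for reference:

```latex
d = v\,\Delta t, \qquad 60\ \text{mph} = 88\ \text{ft/s}
\;\Rightarrow\; d(10\,\text{ms}) \approx 0.9\ \text{ft}, \qquad
d(200\,\text{ms}) \approx 17.6\ \text{ft}
```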
It's an extra sensor you'd add into the mix, you'd still have cameras. Like the radar sensors. I think the reason Teslas don't have it, is because the sensor hardware was expensive a few years back. I assume they are much cheaper now.
Tesla have also backed themselves into a corner by declaring that older models are hardware capable of FSD, so they can't easily add LIDAR to new models without being liable for upgrading/refunding previously sold models.
I've defended some of Musk because I think what he did for Twitter was completely necessary (showing Jay Bhattacharya that the old regime had put him on a trends blacklist, and all the other people who got banned for no reason). But things like this (vision-only as opposed to multiple telemetry) are alarming, especially since Tesla has already been accused of killing people through crashes, and it makes it kind of amazing that he's in charge of something like SpaceX (are we about to witness a fatal incident in space?)
Oh good lord, why? This is a solved problem, why would they waste their time on it. Wait, I think I know the answer: Elon's famous (at SpaceX) for saying the most reliable part is no part. So maybe this is a consequence of that.
The mistakes you describe are the issues of the AI system controlling the car, not of the cameras themselves. If you were watching the camera feed and teleoperating the vehicle, no way you'd phantom brake at a sudden bit of glare.
Going from cameras to the human model, every morning on my way to work humans suddenly slam their brakes for the sun in their eyes: if you can't see, you can't see. I think it's another good example why cameras are not enough alone.
As someone who worked in this space, you are absolutely right, but also kind of wrong - at least in my opinion.
The cold hard truth is that LIDARs are a crutch, they're not strictly necessary. We know this because humans can drive without a LIDAR, however they are a super useful crutch. They give you super high positional accuracy (something that's not always easy to estimate in a vision-only system). Radars are also a super useful crutch because they give really good radial velocity. (Little anecdote: when we finally got the radars working properly at work, it made a massive difference to our car's ability to follow other cars (ACC) comfortably.)
Yes machine learning vision systems hallucinate, but so do humans. The trick for Tesla would be to get it good enough that it hallucinates less than humans do (they're nowhere near yet - humans don't hallucinate very often).
It's also worth adding that, last I checked, the state of the art for object detection is early fusion, where you chuck the LIDAR and radar point clouds into a neural net along with the camera input, so it's not like you'd necessarily have the classical-methods guardrails with the lidar anyway.
Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018. Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.
The hard truth is there is no reason to limit machines to only the tools humans are biologically born with. Cars always have crutches that humans don't possess. For example, wheels.
In a true self-driving utopia, all of the cars are using multiple methods to observe the road and drive (vision, lidar, GPS, etc) AND they are all communicating with each other silently, constantly, about their intentions and status.
The "lidar is a crutch" excuse is such a fraud. Musk is doing it so he can make more money, because it's cheaper. Thats it. Just another sociopath billionaire cutting corners at the expense of safety.
The reason this is clear is because, except for a brief period in late 2022, Teslas have included some combination of radar and ultrasonic sensors. [0]
This information is out of date. LIDAR costs are 10x less than they were a decade ago, and still falling.
Turns out, when there's demand for LIDAR in this form factor, people invest in R&D to drive costs down and set up manufacturing facilities to achieve economies of scale. Wow, who could have predicted this‽
Cost is relative. LIDAR maybe be expensive relative to a camera or two but it’s very inexpensive compared to hiring a full time driver. Crashes aren’t particularly cheap either. Neither are insurance premiums.
Huawei has a self-driving system that uses three lidars, which cost $250 each (plus vision, radar, and ultrasound). It appears to work about as well as FSD. Here's the Out of Spec guys riding around on it in China for an hour:
It's not 2010 anymore. They will asymptotically reach approximately twice the price of a camera, since they need both a transmit and a receive optical path. Right now the cheapest of the good LiDARs are around 3-4x that. So we're getting close, and we're already within the realm of large-scale commercial viability.
You know what used to be expensive? Cameras. Then people started manufacturing them for the mass market and the cost went down.
You know what else used to be expensive? Structured light sensors. They cost $$$$ in 2009. Then Microsoft started manufacturing the Kinect for a mass market, and in 2010 price went down to $150.
You know what's happened to LIDAR in the past decade? You guessed it, costs have come massively down because car manufacturers started buying more, and costs will continue to come down as they reach mass market adoption.
The prohibitive cost of LIDAR coming down was always just a matter of time. A "visionary" like Musk should have been able to see that. Instead he thought he could outsmart everyone by using a technology that was not suited for the job, and he made the wrong bet.
The point of engineering is to make something that’s economically viable, not to slap together something that works. Making something that works is easy, making something that works and can be sold at scale is hard.
That's simply not true. Engineering can exist outside industry. "Stuff costs money" is not a governing aspect of these kinds of things.
FOSS is the obvious counterexample to your absurdly firm stance, but so are many artistic pursuits that use engineering techniques and principles, etc.
There is no market for such a thing. At that price point, you get a personal chauffeur. That’s what rich people do and he can do stuff that a self driving system never can.
And the rich people who don't want a chauffeur like driving the car. They will buy a $10M car no problem, but they want driving that car to be fun because that's what they were paying for. They don't want you to make the driving more automatic and less interesting.
> they're not strictly necessary. We know this because humans can drive without a LIDAR
and propellers on a plane are not strictly necessary because birds can fly without them? The history of machines show that while nature can sometimes inspire the _what_ of the machine, it is a very bad source of inspiration for the _how_.
> The cold hard truth is that LIDARs are a crutch, they're not strictly necessary. We know this because humans can drive without a LIDAR, however they are a super useful crutch.
Crutch for what? AI does not have human intelligence yet and let’s stop pretending it does. There is no shame in that as the word crutch implies.
That's because our stereoscopic vision has infinitely more dynamic range, focusing speed and processing power w.r.t. a computer vision system. Periphery vision is very good at detecting movement, and central view can process tremendous amount of visual data without even trying.
Even a state of the art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and shall be present in any car.
This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only.
Their cams have better dynamic range than your eyes, given they can just run multi-exposure while you have to squint in sunlight. And the focal point is infinite for driving.
You’re not considering them even though they have the best adas on the market lmao suit yourself
I don’t work in this field so take the grain of salt first.
Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B?
Humans can be confused in a number of ways. So can AI. The difference is that we know pretty well how humans get confused. AI gets confused in novel and interesting ways.
I suspect it helps engineering the system. If you have 30 different sensors, how do you design a system that accounts for seemingly random combinations of them disagreeing with an observation in real time, if a priori you don't know the weight of their observation in that particular situation? For humans, for example, you know that in most cases seeing something in a car is more important than smelling something. But what if one of your eyes sees a pedestrian and the other sees the shadow of a bird?
Also don’t forget that as a human you can move your head any which way, and also draw on your past experiences driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.
LIDARs have the advantage that they allow detecting solid objects that have not been detected by a vision-only system. For example, some time ago, a Tesla crashed into an overturned truck, likely because it didn't detect it as an obstacle.
A system that's only based on cameras is only as good as its ability to recognize all road hazards, with no fall back if that fails. With LIDAR, the vehicle might not know what's the solid object in front of the vehicle using cameras, but it knows that it's there and should avoid running into it.
Solid objects that aren't too dark or too shiny, that is. Lidar is very bad at detecting mirrored surfaces or non-reflecting structures that absorb the particular frequency in use. The back ends of trucks hauling liquid are particularly bad. Block out the bumper/wheels, say by a slight hill, and that polished cone is invisible to lidar.
LIDAR works by measuring the time it takes for light to return, so I don't understand how an object can be too reflective. Objects that absorb the specific wavelength the LIDAR uses are an obvious problem.
Too reflective, like a flat mirror, will send the light off in a random direction rather than back at the detector. Worse yet, things like double reflections can result in timing errors as some of the signal follows a longer path. You want a target that is nicely reflective but not so shiny that you get any double reflections. The ideal is a matte surface painted the same color as the laser.
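For reference, the ranging math being described assumes a single bounce:

```latex
r = \frac{c\,\Delta t}{2}
```

A specular surface breaks that assumption in one of two ways: the pulse never comes back (a dropout), or it comes back after extra bounces with an inflated \(\Delta t\), which reads as a phantom point farther away than the real surface.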
> Yes machine learning vision systems hallucinate, but so do humans.
When was the last time you had full attention on the road and a reflection of light made you super confused and suddenly drive crazy? When was the last time you experienced objects behaving erratically around you, jumping in and out of place, and perhaps morphing?
Well there is strong anecdotal evidence of exactly this happening.
We were somewhere around Barstow on the edge of the desert when the drugs began to take hold. I remember saying something like, “I feel a bit lightheaded; maybe you should drive . . .”And suddenly there was a terrible roar all around us and the sky was full of what looked like huge bats, all swooping and screeching and diving around the car, which was going about 100 miles an hour with the top down to Las Vegas. And a voice was screaming: “Holy Jesus! What are these goddamn animals?” [0]
[0] Thompson, Hunter S., "Fear and Loathing in Las Vegas"
One hopes so. Many of the comments assume an ideal human driver, whereas real human drivers are frequently tired, distracted, intoxicated, or just crazy.
Many accidents are caused by low-angle light dazzle. It's part of why high beams aren't meant to be used off a dual carriageway.
When was the last time you saw a paper bag blown across the street and mistook it for a cat or a fox? (Did you even notice your mistake, or do you still think it was an animal?)
Do you naturally drive faster on wide streets and slower on narrow streets, because the distance to the side of the road changes your subconscious feeling of how fast you're going? Do you even know, or are you limited to your memories rather than a dashcam whose footage can be reviewed later?
etc.
Now don't get me wrong, AI today is, I think, worse than humans at safe driving; but I'm not sure how much of that is that AI is more hallucinate-y than us vs. how much of it is that human vision system failures are a thing we compensate for (or even actively make use of) in the design of our roads, and the AI just makes different mistakes.
If the internal representation of Tesla Autopilot is similar to what the UI displays, i.e. the location of the car w.r.t. everything else, and we had a human whose internal representation was similar -- everything jumping around in consciousness -- we'd be insane to allow him to drive.
Self-driving is probably “AI-hard” as you’d need extensive “world knowledge” and be able to reason about your environment and tolerate faulty sensors (the human eyes are super crappy with all kinds of things that obscure it, such as veins and floaters).
Also, if the Waymo UI accurately represents what it thinks is going on “out there” it is surprisingly crappy. If your conscious experience was like that when you were driving you’d think you had been drugged.
I agree that if Tesla's representation of what their system is seeing is accurate, it's a bad system.
The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp
I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception?
Most of us don't realise how much we misperceive, because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.
> Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018.
The correct move for Tesla would have been to split the difference and add LIDAR to some subset of their fleet, ideally targeted in the most difficult to debug environments.
Somewhat like Google/Waymo are doing with their Jaguars.
Tesla did, in fact, use "ground truth vehicles" - vehicles that were owned and operated by Tesla itself, and had high performance LIDARs installed. They were used to collect the data to train the "vision-only" system and verify its performance.
Reportedly, they no longer use this widely - but they still have some LIDAR-equipped "scout vehicles" they send into certain environments to collect extra data.
I want my self-driving car to be a better driver than any human. Sure we can drive without LIDAR, but just look up the amount of accidents caused by humans.
Humans cause roughly one fatal accident per 100 million miles. (They have no backup driver they can disengage to.) Now just look up how many disengagements per million miles Tesla has.
I had taken for granted that the cameras in the Tesla might be equivalent to human vision, but now I'm realizing that's probably laughable. I'm reading it's 8 cameras at 30fps, and it sounds like the car's bus can only process about 36fps in total (so 36fps, not the 8x30 = 240fps theoretically available from the cameras if they had a better memory bus). It also seems plausible you would need at least 10,000 FPS to fully match human vision (especially taking into account that humans turn their heads, which in a CV situation could be analogous to the algorithm having 32x30 = 960 FPS available but typically only processing 140 frames in a given second from cameras pointing in a specific direction).
So maybe LIDAR isn't necessary but also if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision I doubt it would be cheaper than LIDAR to get the same result.
Mostly human vision is just violently different from a camera, but you could interpret that as a mix of better and worse.
One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad - this is compensated for by changing what we're looking at: when we care about details, we can just look directly at them, and the resolution is better right in the centre of the picture.
You might not have the classical guardrails, but you are providing the neural net with a lot more information. Even humans are starting to find it useful to get inputs from other sensor types in their cars.
I agree that Tesla may have made the right hardware decision when they started with this. It was probably a bad idea to lock themselves into that path by over-promising.
At least for one marine creature, whose name I forget, the answer is yes. Said creature dissolves its brain the moment it finds a place to attach and call home.
Can't think of the name at the moment either, but I'm pretty sure it only does so because it would be pointless to make any further decisions after attaching itself - it simply has no means to act on anything after that... attaching is the only thing it 'does' in its life... after that, its only job, and only ability, is to be. Chose the wrong spot to attach and call home? Brains wouldn't make a bit of difference (unless regretting its one life choice is somehow useful during this stage of just being, stuck on the spot).
Musk's argument "Humans don't have LIDAR, therefore LIDAR is useless" has always seemed pretty dumb to me. It ignores the possibility that LIDAR might be superhuman with superhuman performance. And we also know you can get superhuman performance on certain tasks with insect-scale brains. Musk's just spewing stoner marketing crap that stoners think is deep, not actual engineering savvy.
(and that's not even addressing that human vision is fundamentally a weird sensory mess full of strange evolutionary baggage that doesn't even make sense except for genetic legacy)
Musk's argument also ignores the intelligence of humans. The worst-case upper bound for reaching human-level driving performance without LIDAR is for AI to reach human-level intelligence. Perhaps that isn't required, but until we see self-driving Teslas performing as well as humans, we won't know. The worst-case scenario is that Tesla unsupervised self-driving is as far away as AGI.
The big promise of autonomous self-driving was that it would be done safer than humans.
The assumption was that with similar sensors (or practically worse - digital cameras score worse than eyeballs in many concrete metrics), ‘AI’ could be dramatically better than humans.
At least with Tesla’s experience (and with some fudging based on things like actual fatal accident data) it isn’t clear that is actually what is possible. In fact, the systems seem to be prone to similar types of issues that human drivers are in many situations - and are incredibly, repeatedly, dumb in some situations many humans aren’t.
Waymo has gone full LiDAR/RADAR/Visual, and has had a much better track record. But their systems cost so much (or at least used to), that it isn’t clear the ‘replace every driver’ vision would ever make sense.
And that is before the downward pressure on the labor market started to happen post-COVID, which hurts the economics even more.
The current niche of taxis kinda makes sense - centrally maintained and capitalized taxis with outsourced labor have been a viable model for a long time; it lets them control/restrict the operating environment (important to avoid those bad edge cases!), and lets them continue to gather more and more data to identify and address the statistical outliers.
They are still targeting areas with good climates and relatively sane driving environments because even with all their models and sensors, heavy snow/rain, icy roads, etc. are still a real problem.
This whole "But Waymo can't work in bad climates" thing is very dubious. At some point it is too dangerous to be driving an automobile. "But Waymo should also be dangerous" is the wrong lesson.
When the argument was that Phoenix is too pleasant, I could buy that. Most places aren't Phoenix. But SF and LA are both much more like a reasonable place other humans live. It rains, but not always; it's misty, but not always. Snow I do accept as a thing -- lots of places humans live have some snow -- but these cities don't really have snow.
However, for ice: when I watch one of those "ha, most drivers can't make this turn on the ice" videos, I'm not thinking "I bet Waymo wouldn't be able to do this" -- I'm thinking "that's a terrible idea, nobody should be attempting it". There's a big difference between "can it drive on a road with some lying snow?" and "can it drive on ice?".
You know how I can tell you haven’t actually lived in a bad climate?
Both SF and LA climates are super cushy compared to say, Northern Michigan. Or most of the eastern seaboard. Or even Kansas, Wyoming, etc. in the winter.
In those climates, if you don't drive in what you're calling 'nobody should be attempting it' weather, you starve to death in your house over the winter. Because many months are just like that.
Self-driving has a very similar issue with the vast majority of, say, Asia, because similarly, "this is crazy, no one should be driving in these conditions" is the norm there. So if it can't keep up, it's useless.
Eastern and far Northern Europe has a lot of kinda similar stuff going on.
Self driving cars are easy if you ignore the hard parts.
In India, I’ve had to deal with Random Camel, missing (entire) road section that was there yesterday, 5 different cars in 3 lanes (plus 3 motorcycles) all at once, many cattle (and people) wandering in the road at day and night, and the so common it’s boring ‘people randomly going the wrong way on the road’. If you aren’t comfortable bullying other drivers sometimes to make progress or avoid a dangerous situation, you’re not getting anywhere anytime soon.
All in a random mix of flooding, monsoon rain, super hot temperatures, construction zones, fog, super heavy fireworks smoke, etc. etc.
Hell, even in the US I’ve had to drive through wildfires and people setting off fireworks on the road (long story, safety reasons). The last thing I would have wanted was the car freezing or refusing.
Is that super safe? Not really. But life is not super safe. And a car that won’t help me live my life is useless to me.
Such an AI would, of course, be a dangerous asshole on, say, LA roads. Even more than the existing locals.
This idea that they're somehow ignoring the hard parts is very silly. The existing human drivers in San Francisco manage to kill maybe 20 or so people per year so apparently it's not so "easy" that the human drivers can do it without killing anybody.
I live in the middle of a city, so, no, in terrible weather just like great weather I walk to the store, no need to "starve to death" even if conditions are too treacherous for people to sensibly drive cars. Because I'm an old man, and I used to live somewhere far from a city, I have had situations where you can't use a car to go fetch groceries because even if you don't care about safety the car can't go up an icy hill, it loses traction, gravity takes over, you slide back down (and maybe wreck the car).
So why do you think they’re only those cities? Because I’m hearing nothing from you that goes beyond ‘nuh uh’ so far.
Because as an old man who has actually lived in all these places - and also has ridden in Waymos before and has had friends on the Waymo team in the past, your comments seem pretty ridiculous.
Unlike Phoenix the choice of SF and LA seems to me like a PR choice. SF is where lots of tech nerds live and work, LA is one half of the country's media. I'd imagine that today if you're at all interested in this stuff and live in LA or SF you have ridden Waymo whereas when it was in a Phoenix suburb that's a very niche thing to go do unless you happened to live there.
A lot of the large population centres in the US are in these what you're calling "super cushy" zones where there's not much snow let alone ice. More launches in cities in Florida, Texas, California will address millions more people but won't mean more ice AFAIK. So I guess for you the most interesting announcement is probably New York, since New York certainly does have real snow. 2026 isn't that long, although I can imagine that maybe a President who thinks he's entitled to choose the Mayor of New York could mess that up.
As to the "But people in some places are crazy drivers" I saw that objection from San Francisco before it was announced. "Oh they'll never try here, nobody here drives properly. Can you imagine a Waymo trying to move anywhere in the Mission?". So I don't have much time for that.
This impulse to limit robots to the capacities, and especially the form factors, of humans has severely limited our path to progress and a more convenient life.
Robots are supposed to make up for our limitations by doing things we can't do, not do the things we can already do, but differently. The latter only serves to replace humans, not augment them.
"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate".
> Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.
This was only plausible to people who had no experience in robotics, autonomy, and vision systems.
Everyone knew LIDAR was the enabling technology thanks to the 2007 DARPA Urban challenge.
But the ignoramus Elon Musk decided he knew better and spent the last decade-plus trashing the robotics industry. He set us back as far as safety protocols in research and development, caused the first death due to robotic cars, deployed them on public roads without the consent of the public by throwing his massive wealth around, lied consistently for a DECADE about the capabilities of these machines, defrauded customers and shareholders while becoming richer and richer, all to finally admit defeat while still maintaining that Tesla's future growth story is robotics. The nerve of this fucking guy.
This looks to me like they are acknowledging that their claims were premature, possibly due to claims of false advertising, but are otherwise carrying forward as they were.
Maybe they'll reach level 4 or higher automation, and will be able to claim full self driving, but like fusion power and post-singularity AI, it seems to be one of those things where the closer we get to it, the further away it is.
I was about to say this. Elon would go on stage and say something like “and this is something we can do today”, or “coming next year” in 2018. The crowd goes wild, the stock price shoots up.
The first time could be an honest mistake, but after a certain point we have to assume that it’s just a lie to boost the stock price.
I know. That’s my point; he just goes on stage and lies, the stock price goes up and it doesn’t appear to correct itself despite the boost being based on a lie.
I'm not sure it's fraud because there was always the fine print. But a company selling a car with a feature called Full Self Driving that does not in fact fully self drive, well, that's a company I don't buy from. Unfortunately others don't seem as offended and happily pay for the product, encouraging further b.s. marketing hype culture.
Just like politicians, it seems there are no repercussions for CEOs lying, as long as it's fleecing the peons and not the elite.
Not in the US. There's a whole bureaucracy of advertising boards where a false-advertising case can be heard and appealed before anyone with legal authority would even look at it, which pretty much never happens. Even then, it's a tort, so punishment beyond fines is pretty much nonexistent.
Not gonna happen as long as Musk is CEO. He's hard over on a vision-only approach without lidar or radar, and it won't work. Companies like Waymo that use these sensors and understand sensor fusion are already eating Tesla's lunch. Tesla will never catch up with vision alone.
While I don't think vision-only is hopeless (it works for human drivers) the cameras on Teslas are not at all reliable enough for FSD. They have little to no redundancy and only the forward facing camera can (barely) clean itself. Even if they got their vision-only FSD to work nicely it'll need another hardware revision to resolve this.
I feel like our AI research in the physical world falls so far behind language-level AI that our reasoning might be clouded.
Compare a Boston Dynamics robot and a cat. They are on absolutely different levels in their bodies and their ability to control those bodies.
I have no doubt that cameras-only could eventually work for AI cars, but at the same time I feel that this kind of AI is not there yet. If we want autonomous cars, it might be possible, but we need to equip them with as many sensors as necessary, not set artificial boundaries.
But lidar is basically a cheat code, whether or not optical is sufficient. Why wait for end stage driving AI? Why not use cheat codes and wait for cheaper technology later?
I honestly think Tesla is past the point where lidar would provide significant benefits. I've tried FSD for a month or two and it can see everything but just drives like an idiot. Lidar isn't going to help it merge properly, change lanes smoothly, take left turns at lights without blocking traffic, etc.
Check out what the Tesla park assist visualization shows now. It's vision based and shows a 3D recreation of the world around the car. You can pan around to see what's there and how far away it is. It's fun to play around with in drive thrus, garages, etc. just to see what it sees.
It should help for disambiguating scenarios that lead to phantom stops or not stopping on time, which has killed Tesla drivers before, such as by driving full speed into the back of a truck with some glare.
Maybe? I don't remember the cases but there is some confusion with autopilot (cruise control) vs. FSD sometimes. Autopilot is a completely different system and nobody should be surprised if it leads to crashes when misused.
Behavioral and pattern analysis is always in full overdrive when I drive. I drive in Africa; people never follow rules. Red lights at crossings mean go for bikers, and when there are no lights you can't just give right of way, or you'll never move. When nearing intersections, people accelerate so they can cross before you -- and it's a long line, and they know you have right of way, so they accelerate to scare you into stopping. Amateurs freeze and hold up the line for a very long time, usually until a traffic officer shows up to unblock it (multiply this by every intersection). For you to get anywhere, you have to play the same game and get close enough that they aren't sure you'll stop, and would hit you and have to pay. So at crossings you're constantly in near misses until they realize you're not going to stop, so they do. Everyone is skilled enough to do this daily. Your senses, your risk analysis, your spider sense are fully overclocked most of the time. And then there are all the other crazy things that happen: the insane lane zig-zagging people do, bikers out of nowhere at night with no lights, memorizing all the potholes on every road in the city because they aren't illuminated at night so you can drive at 80-120 km/h, etc.
So no, it's not just your eyes. Lots of sensors, memory, processing, memory/mapping are required.
It's not just hearing. I can "feel" the road in the seat of my pants, the pull of the steering wheel, etc. I have a vestibular system that knows about relative velocities and changes, and which coordinates with my other senses, and more. This all allows me to take in far more than what my eyes see or my ears hear, and to build the correct intuitions and muscle memories to get good at driving and adapt to new driving environments.
Even within the sense of vision alone, there are features Tesla doesn't properly try to replicate -- depth perception, for example (it does it very differently to humans).
Binocular depth perception stops being useful somewhere around 10 meters. Your brain is mostly driving using the “computed” depth perception based on the flat image it’s getting. Same way Tesla is getting a depth map.
Provable by one-eyed people being able to drive just fine, as could you with one eye covered.
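For what it's worth, dense depth from a single flat image is a bog-standard computer-vision task now. A minimal sketch using the open-source MiDaS model; the model choice and the input file are illustrative assumptions, not anything Tesla is known to use:

    # Minimal monocular-depth sketch with the open-source MiDaS model,
    # just to show dense depth from one flat image is a standard task.
    import cv2
    import torch

    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)  # any dashcam frame
    with torch.no_grad():
        pred = midas(transform(img))              # relative inverse-depth map
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic"
    ).squeeze()
    print(depth.shape)                            # per-pixel depth from a single camera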
Eyes are not cameras they are extensions of the brain. That people can drive with one eye is not a "proof of concept" that cars should be able to drive with one camera. You'll need a human brain to go along with it. Unfortunately for Tesla, they seem to be short on supply of those at the moment.
So your assertion is that a human with access to arbitrarily good camera feeds could not drive a car at level 5? That something magical is happening because the eyes are close topographically to the brain? Sounds implausible.
> your assertion is that a human with access to arbitrarily good camera feeds could not drive a car at level 5?
No. I live in snow country. Folks with vestibular issues are advised to pull over in snowstorms because sometimes the only indication that you have perpendicular velocity and are approaching a slide off the road or spin is that sense. My Subaru has on more than one occasion noticed a car before I did based on radar.
Vision only was a neat bet. But it will cost Tesla first-to-market status generally, and especially in cities, where regulators should have fair scepticism about a company openly trying to do self-driving on the cheap.
How does the human consume the arbitrarily good camera feeds?
> That something magical is happening because the eyes are close topographically to the brain?
It sounds to me like you need to study what eyes actually are. It's not about proximity or magic: they are part of your brain, and we're only beginning to understand their complexities. Eyes are not just sensory organs, so the analogy to cameras is way off. They discern edges, motion, color, and shapes, and correct errors, before your brain is even aware.
In robotics, we only get this kind of information after the camera image has been sent through a perception pipeline, often incurring a round trip through some sort of AI and a GPU at this point.
> Sounds implausible.
Musk just spent billions of dollars and the better part of a decade trying to prove the conjecture that "cameras are sufficient", and now he's waving the white flag. So however implausible it sounds, it's now more implausible than ever that cameras alone are sufficient.
Sounds horrible. I can understand that stopping you from cycling, but if you could have managed to sit in a car, would you have been able to drive it? I can imagine that inner ear issues can sometimes affect vision too as my wife suffered from positional vertigo for a while and I could see her eyes flicking rapidly when she was getting dizzy. (I did find a helpful YouTube video about a sequence of positions to put the sufferer through which basically helps to remove the otoliths from the ear canal).
I am not 100% sure which “sense” this would be, but when I drive I can “feel” the texture of the road and intuit roughly how much traction I have. I’m not special, every driver does this, consciously or not.
I am not saying that you couldn’t do this with hardware, I am quite confident you could actually, but I am just saying that there are senses other than sight and sound at play here.
Whilst that might feel reassuring -- that you're getting tactile feedback -- I doubt there are many situations, apart from driving on snow and ice, where it's of much use. Fair enough if you're aiming for a lap record round a track, but otherwise you shouldn't be anywhere near the limit of traction of your tyres.
I'm using inertial senses from my inner ear. I feel the suspension through the seat. I feel feedback through the steering wheel. I can feel the g forces pulling on my body.
Yes, but in what specific circumstances do they change your driving behaviour? If you weren't able to feel the suspension through your seat, how would your driving become less safe?
Pretty much all of them. The difference between driving a car and playing a video game is remarkable.
But that's sort of besides the point: why would you not use additional data when the price of the sensors are baked into the feature that you're selling?
One quick obvious example, they put tactile features on the road specifically so you can feel them. Little bumps on lane markers. Rumble strips on the boundaries. Obvious features like that.
While it doesn't often snow or ice up here (it does sometimes), it does rain a good bit from time to time. You can usually feel your car start to hydroplane and lose traction well before anything else goes wrong. It's an important thing to feel but you wouldn't know it's happening if you're going purely on vision.
You can often feel when there's something wrong with your car. Vibrations due to alignment or balance issues. Things like that.
Those are quick examples off the top of my head. I'm sure there are more.
Of course, all these things can be tracked with extra sensors, I'm not arguing humans are entirely unique in being able to sense these things. But they are important bits of feedback to operate your car safely in a wide range of conditions that you probably will encounter, and should be accounted for in the model.
As for auditory feedback: while some drivers don't have sound input available to them (whether they're deaf or their music is too loud or whatever), sound is absolutely a useful input to have. You may hear emergency vehicles you cannot see. You may hear honking alerting you to something weird going on in a particular direction. You may hear issues with your car. Those rumble strips are also tuned to be loud when cars run over them. You can hear big wind gusts and understand that they are the source of weird forces pushing the car around, as opposed to something else making your car behave strangely. So sure, one can drive a car without sound, but it's not better without it.
For sure, but my phone camera sees better than I do. Cars can make use of better camera sensors and have more than two of them. You can't just extrapolate the conclusion that human vision bad = vision sensors bad.
Such utter drivel. A camera is not the equivalent of human eyes and sensory processing, let alone an entire human being engaging with the physical world.
Cameras are better than human eyes. Much better. There are areas in which they are worse, but that's completely outweighed by the fact that you are not limited to two of them and they can have a 360 degree field of vision.
What garbage. The human eye has about 20 stops of dynamic range. Cameras of the size that are in a Tesla are at about 12 stops. That's a lot of data they don't get, for just one thing. Human eyes can also adjust focal distance multiple times a second, which camera lenses have a harder time doing.
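And because stops are powers of two, that 8-stop gap compounds fast. A quick sanity check of the arithmetic, taking the stop counts above as given:

    # Stops are powers of two, so the 8-stop gap compounds fast. The stop
    # counts are taken from the comment above, as assumptions.
    eye_stops, cam_stops = 20, 12
    print(f"eye:    {2**eye_stops:,}:1 contrast")    # 1,048,576:1
    print(f"camera: {2**cam_stops:,}:1 contrast")    # 4,096:1
    print(f"gap:    {2**(eye_stops - cam_stops)}x")  # 256x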
The best cameras are surely better than most peoples' eyes these days.
Sensory processing is not matched, sure, but IMO how a human drives is more involved than it needs to be. We only have two eyes and they both look in the same direction. We need to continuously look around to track what's around us. It demands a lot of attention from us that we may not always have to spare, especially if we're distracted.
The front camera Tesla is using is very good with this. You can drive with the sun shining directly into it and it will still detect everything 99% of the time, at least with my older model 3. Way better than me stuck looking at the pavement directly in front the car.
AFAIK there is also more than one front camera. Why would anyone try to do it all with one or two camera sensors like humans do it?
It's important to remember that the cameras Tesla are using are optimized for everything but picture quality. They are not just taking flagship phone camera sensors and sticking them into cars. That's why their dashcam recordings look so bad (to us) if you've ever seen them.
Well, Teslas use low cost consumer cameras. Not DSLRs. Bad framerate, bad resolution and bad dynamic range. Very far from human vision and easily blinded and completely washed out by sudden shifts in light.
I’m consistently surprised by how my Tesla can see a traffic light with the sun directly behind it. They seem to have solved the washout problem in practice.
Because human vision has very little in common with camera vision and is a far more advanced sensor, on a far more advanced platform (ability to scan and pivot etc), with a lot more compute available to it.
I don't think it's a sensors issue - if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).
Who cares? They don't need that. The cameras can have continuous attention on a 360 degree field of vision. That's like saying a car can never match a human at bipedal running speed.
The human sensor (eye) isn't more advanced in its ability to capture data -- and in fact cameras can have a wider range of frequencies.
But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.
Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.
Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.
LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.
Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.
And what driver wouldn't want extra senses, if they could actually meaningfully be used? The goal is to drive well on public roads, not some "Hands Tied Behind My Back" competition.
The human processing unit understands semantics much better than the Tesla's processing unit. This helps avoid what humans would consider stupid mistakes, but which might be very tricky for Teslas to reliably avoid.
Cost, weight, and reliability. The best part is no part.
No part costs less; it also doesn't break, doesn't need to be installed, doesn't need to be stocked on every dealership's shelf, and no supplier can hold up production over it. It doesn't add wires (complexity and size) to the wiring harness or clog up the CAN bus message queue (LIDAR is a lot of data). It doesn't need a dedicated, engineered mounting location that further constrains other systems and crash safety. Not to mention the electricity used -- a premium resource in an electric vehicle of limited range.
That's all off the top of my head. I'm sure there's even better reasons out there.
These are all good points. But that just seems like it adds cost to the car. A manufacturer could have an entry-level offering with just a camera and a high-end offering with LIDAR that costs extra for those who want the safest car they can afford. High-end cars already have so many more components and sensors than entry-level ones. There is a price point at which the manufacturer can make them reliable, supply spare parts & training, and increase the battery/engine size to compensate for the weight and power draw.
We already have that. Tesla FSD is the cheap camera-only option and Waymo is the expensive LIDAR option that costs ~$150K (last I heard). You can't buy a Waymo, though, because the price is not practical for an individually owned vehicle. But eventually I'm sure you will be able to.
LIDAR does not add $150K to the cost. Dramatically customizing a production car, and adding everything it needs costs $150K. Lidar can be added for hundreds of dollars per car.
> Lidar can be added for hundreds of dollars per car.
Surprisingly, many production vehicles have a manufacturer profit under one thousand dollars. So that LIDAR would eat a significant portion of the margin on the vehicle.
They charge $3000 for the hours of labor to take apart the car, pull the old camera out, put the new camera in, and put the car back together, not for the camera. You can argue that $3000 is excessive, but to compare it to the cost of the camera itself is dishonest.
Chimpanzees are better than humans given a reward structure they understand. The next battlefield evilution: chimpanzees hooked up with intravenous cocaine modules, running around with .50 cals.
I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.
I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.
that's (usually) because our reflexes are slow (compared to a computer), or we are distracted by other things (talking, phone, tiredness, sights, etc. etc.), not because we misinterpret what we see
But, they're changing the meaning of FSD to FSD (Supervised). That means they no longer make any promises about unsupervised FSD in the future. They'll of course say that they keep working on it and that things are progressing, but they don't have to deliver anymore. Just like they tell people getting into accidents that they should have kept their hands on the wheel, or else it's their own responsibility.
It is strange how Elon and Tesla get a pass on this. Tesla has contributed to the deaths of more people than Theranos. I guess he didn't rip off rich investors -- except maybe the ones who died in their Teslas.
Perhaps it's that cars are more sacred than healthcare.
> This looks to me like they are acknowledging that their claims were premature, possibly due to claims of false advertising, but are otherwise carrying forward as they were.
> Maybe they'll reach level 4 or higher automation
There is little to suggest that Tesla is any closer to level 4 automation than Nabisco is. The Dojo supercomputer that was going to get them there? Never existed.
What does Waymo lack in your opinion to not be considered "full self driving"?
The persistent problem seems to be severe weather, but the gap between the weather a human shouldn't drive in and weather a robot can't drive in will only get smaller. In the end, the reason to own a self-driven vehicle may come down to how many severe weather days you have to endure in your locale.
Waymo is very restricted in the locations it drives (limited parts of a limited set of cities, and I think still no freeways), and uses remote operators to make decisions in unusual situations and when it gets stuck. This article from last year has quite a bit of information: https://arstechnica.com/cars/2024/05/on-self-driving-waymo-i...
Waymo never allows a remote human to drive the car. If it gets stuck, a remote operator can assess the situation and tell the car where it should go, but all driving is always handled locally by the onboard system in the vehicle.
Interesting that Waymo now operates just fine in SF fog, and is expanding to Seattle (rain) and Denver (snow and ice).
The person you're replying to never claimed otherwise. However, while decision support is not directly steering and accelerating/braking the car, I am just going to assert it is still driving the car, at least for how it actually matters in this discussion.
And the best estimate is that these interventions are "uncommon" -- on the order of one per tens of thousands of miles -- but that isn't rare.
A system that requires a "higher level" handler is not full self driving.
I think the important part is that the remote person doesn't need to be alert, and make real time decisions within seconds. As I understand it, the remote driver is usually making decisions with the car stationary. I'd imagine that any future FSD car with no steering wheel would probably have a screen for the driver to make those kind of decisions.
There's a simple test I find useful to determine who's driving:
If the vehicle has a collision, who's ultimately responsible? That person (or computer) is the driver.
If a Waymo hits a pole for example, the software has a bug. It wasn't the responsibility of a remote assistant to monitor the environment in real time and prevent the accident, so we call the computer the driver.
If we put a safety driver in the seat and run the same software that hits the same pole, it was the human who didn't meet their responsibility to prevent the accident. Therefore, they're the driver.
In that case, it sounds like "full self driving" is more of an academic concept that is probably past its due date. Waymo and Apollo Go are determining what the actual requirements are for an ultra-low-labor automated taxi service by running them successfully.
Geofencing and occasional human override meets the definition of "Level 4 self driving". Especially when it's a remote human override.
But is Level 4 enough to count as "Full Self Driving"? I'd argue it really depends on how big the geofence area is, and how rare interventions are. A car that can drive on 95% of public roads might as well be FSD from the perspective of the average drive, even if it falls short of being Level 5 (which requires zero geofencing and zero human intervention).
> and uses remote operators to make decisions in unusual situations and when it gets stuck.
This is why it's limited in markets and areas of service: connectivity for this sort of thing matters. Your robotaxi crashing because the human backup lost 5G connectivity is gonna be a real, real bad look. No one is talking about their intervention stats. If they were good, I would assume someone would publish them for marketing reasons.
> Your robotaxi crashing cause the human backup lost 5g connectivity is gonna be a real real bad look.
Waymo navigates autonomously 100% of the time. The human backup's role is limited to selecting the best option if the car has stopped due to an obstacle it's not sure how to navigate.
> NO one is talking about their intervention stats.
Interventions are a term of art, i.e. it has a specific technical meaning in self-driving. A human taking timely action to prevent a bad outcome the system was creating, not taking action to get unstuck.
> IF they were good I would assume that someone would publish them for marketing reasons.
I think there's an interesting lens to look at it in: remote interventions are massively disruptive, the car goes into a specific mode and support calls in to check in with the passenger.
It's baked into UX judgement, it's not really something a specific number would shed more light on.
If there was a significant problem with this, it would be well-known given the scale they operate at now.
All cars were once restricted in the locations they could drive. EVs are restricted today. I don't see why universal access is a requirement for a commercially viable autonomous taxi service, which is what Waymo is currently. And the need for human operators seems obvious for any business, no matter how autonomous, let alone a business operating in a cutting edge and frankly dangerous space.
It's by definition in terms of how these things are counted.
L4 is "full autonomy, but in a constrained environment."
L5 is the holy grail: as good as or better than human in every environment a human could take a car (or, depending on who's doing the defining: every road a human could take a car on. Most people don't say L5 and mean "full Canyonero").
> or, depending on who's doing the defining: every road a human could take a car on.
That's a distinction without a difference. Forest service and BLM roads are "roads", but they can be completely impassable or 100% erased by nature (and I say this as a former Jeep Wrangler owner); they aren't always located where a map thinks they are, and sometimes absolutely nothing differentiates them from the surrounding nature -- for example, a left turn into a desert dry wash can be a "road" while a right turn is not.
Actual "full" autonomous driving is crazy hard. Like, by definition you get into territory where some vehicles and some drivers just can't make it through, but it's still a road(/"environment"). And some people will live at the end of those roads.
It initially seems mad that a human inside the box can outperform the "finest" efforts of a multi-zillion-dollar company. The human has all their sensors inside the box, most of them stymied by the non-transparent parts. Bad weather makes it worse.
However, look at the sensors and compute being deployed on cars. It's all minimums and cost focused -- basically MVP, with deaths as a costed variable in an equation.
A car could have cameras with views everywhere, plus LIDAR, RADAR, even a form of SONAR if it can be useful, microwave and way more. Accelerometers and all sorts too, all feeding into a model.
As a driver, I've come up with strategies such as "look left, listen right". I'm British so drive on the left and sit on the right side of my car. When turning right and I have the window wound down, I can watch the left for a gap and listen for cars to the right. I use it as a negative and never a positive - so if I see a gap on the left and I hear a car to my right, I stay put. If I see a gap to the left but hear no sound on my right, I turn my head to confirm that there is a space and do a final quick go/no go (which involves another check left and right). This strategy saves quite a lot of head swings and if done properly is safe.
I now drive an EV, one year so far: a SAIC MG4, with cameras on all four sides that I can't record from but can use. It has lane assist (lateral control, which craps out on many A-road sections but is fine on motorway-class roads) and cruise control that keeps a safe distance from other vehicles (it works well on most roads and very well on motorways; there are restrictions).
Recently I was driving and a really heavy rain shower hit as I was overtaking a lorry. I immediately dived back into lane one, behind the lorry, and put cruise on. I could just see the edge white line, so I dealt with left/right and the car sorted out forward/backward. I can easily deal with both, but it's quite nice to be able to carefully delegate responsibilities.
For a couple decades you couldn't even bring your cell phone anywhere in the world and use it. Transformational technologies don't have to be available universally and simultaneously to be viable. Even when the gas car was created you couldn't use it anywhere that didn't have gasoline and paved roads, plus a mechanic and access to parts.
We once had no gas stations, now we have 150,000 (in the US). If the commercial need is there, building out connectivity is an unlikely impediment. Starlink et al. can solve this everywhere except when there's severe weather, a problem Waymo shares, which is starting to make me think the Upper Midwest might be waiting a very long time for self-driving cars.
I answered the question 'What does Waymo lack in your opinion to not be considered "full self driving"?' And clearly it's not full self-driving if it can't drive on literally 99.99% of roads in the world. Any argument to the contrary is just ridiculous.
That's the real issue. If "can navigate roads" is enough then we've had full self-driving for a while. There needs to be some base level of general purpose capability or it's just a neat regional curiosity.
Oh gosh sorry, I do try to contribute positively to HN and write quality comments. I'll expand: I've been in circumstances where I've been rented a company car in a foreign country, felt that I was a good driver, but struggled. The road signs are different and can be confusing, the local patterns and habits of drivers can be totally different from what you're accustomed to. I don't doubt that lots of humans could drive most roads - but I think the average driver would struggle, and have a much higher rate of accidents than a local.
Germany, Italy, India all stand out as examples to me. The roads and driving culture is very different, and can be dangerous to someone who is used to driving on American suburban streets.
I really do stand by my comment, and apologize for the 'low quality' nature of it. I meant to suggest that we set the bar far higher for AI than we do for people, which is in general a good thing. But still - I would say that by this definition of 'full self driving', it wouldn't be met very well by many or most human drivers.
I've driven all over the planet except for Asia and Africa. So far, no real problem and I think most drivers would adapt within a day or two. Greece, Panama and Colombia stand out as somewhat more exciting. Switching to left hand driving in the UK also wasn't a big problem but you do have to pay more attention.
Of course I may have simply been lucky, but given that my driving license is valid in many countries it seems as though humanity has determined this is mostly a solved problem. When someone says "Put a Waymo on random road in the world, can it drive it?" they mean: I would expect a human to be able to drive on a random road in the world. And they likely could. Can a Waymo do the same?
I don't know the answer to that one. But if there is one thing that humans are pretty good at it is adaptation to circumstances previously unseen. I am not sure if a Waymo could do the same but it would be a very interesting experiment to find out.
American suburban streets are not representative of driving in most parts of the world. I don't think the bar of 'should be able to drive most places where humans can drive' is all that high and even your average American would adapt pretty quickly to driving in different places. Source: I know plenty of Americans and have seen them drive in lots of countries. Usually it works quite well, though, admittedly, seeing them in Germany was kind of funny.
"Am I hallucinating or did we just get passed by an old lady? And we're doing 85 Mph?"
That's experience: you learned, and survived to tell the tale. It's almost as though you are capable of learning how to deal with an unfamiliar environment, and of failing safe!
I'm a Brit and have driven across most of Europe, US/CA and a few other places.
Southern Italy eg around Napoli is pretty fraught - around there I find that you need to treat your entire car as an indicator: if you can wedge your car into a traffic stream, you will be let in, mostly without horns blaring. If you sit and wait, you will go grey haired eventually.
In Germania, speed is king. I lived there in the 70s-90s as well as being a visitor recently. The autobahns are insane if you stray out of lane one, the rest of the road system is civilised.
France - mostly like driving around the UK, apart from their weird right-hand-side-of-the-road thing! Le Périphérique is just as funky as the M25, and the Place de la Concorde is a right old laugh. The rest of the country that I have driven is very civilised.
Europe to the right of Italy is pretty safe too. I have to say that across the entirety of Europe, that road signage is very good. The one sign that might confuse any non-European is the white and yellow diamond (we don't have them in the UK). It means that you have priority over an implied "priority to the right". See https://driveeurope.co.uk/2013/02/27/priority-to-the-right/ for a decent explanation.
Roundabouts were invented in the US. In the UK, when you are actually on a roundabout, you have right of way. However, everyone will behave as though "priorité à droite" applies, and there will often be a stand-off -- it's hilarious!
In the UK, when someone flashes their headlights at you it generally means "I have seen you and will let you in". That generally surprises foreigners (I once gave a lift to a prospective employee candidate from Poland and he was absolutely aghast at how polite our roads seemed to be). Don't always assume that you will be given space but we are pretty good at "after you".
That reminds me. I was in the UK on some trip and watched two very polite English people crash into each other when after multiple such 'after you' exchanges they both simultaneously thought screw it and accelerated into each other. Fortunately only some bent metal.
My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
My last dozen ridesharing experiences only had a single driver that wasn't actively hazardous on the road. One of them was so bad that I actually flagged him on the service.
My Waymo experiences, by contrast, have all been uniformly excellent.
I suspect that Waymo is already better than the median human driver (anecdata suggests that's a really low bar)--and it just keeps getting better.
> My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
Those two aren't really related are they? That's one locality and a specific kind of driver. If you picked a random road there is a pretty small chance that road would be one like the one where Waymo is currently rolled out, and where your ridesharing drivers are representative of the general public, they likely are not.
I use self-driving every single day in Boston and I haven’t needed to intervene in about 8 months. Most interventions are due to me wanting to go a different route.
Based on the rate of progress alone I would expect functional vision-only self-driving to be very close. I expect people will continue to say LIDAR is required right up until the moment that Tesla is shipping level 4/5 self-driving.
I would like to get my experience more in line with yours. I can go a few miles without intervention, but that's about it, before it does something that will result in damage if I don't take over. I'm envious that some people can go months when I can't go a full day.
Where are you driving?! If the person you're replying to has gone 8 months in Boston without having to intervene, I'm impressed. Boston is the craziest place to drive that I've ever driven.
Pro tip if you get stuck in a warren of tiny little back streets in the area. Latch on to the back of a cab; they're generally on their way to a major road to get their fare where they're going and they usually know a good way to get to one. I've pulled this trick multiple times around city hall, Government Center, the old state house, etc.
Or when. Driving during peak commute hours really makes you a sardine in a box and it's harder for there to be intervene-worthy events just by nature of dense traffic.
Same experience in a mix of city/suburban/rural driving, on a HW3 car. Seeing my car drive itself through complex scenarios without intervention, and then reading smart people saying it can’t without hardware it doesn’t have, gives major mental whiplash.
On a scale from "student driver" to "Safelite guy (or any other professional who drives around as part of their job) running late", how does it handle Storrow and similar?
Like does it get naively caught in stopped traffic for turns it could lane change out or does it fucking send it?
> Based on the rate of progress alone I would expect functional vision-only self-driving to be very close.
So close yet so far, which is ironically the problem vision based self-driving has. No concrete information just a guess based on the simplest surface data.
Normally the board of directors would fire any CEO who destroyed as much of the company's value as Musk has. But Tesla's board is full of Musk sycophants and family members who refuse to stand up to him.
SEC and FTC would be obvious candidates who historically would do this. States also have the ability to prosecute this via UDAP (unfair and deceptive practices) laws.
Tesla being the only major domestic EV manufacturer + Musk historically not wading into politics + Musk/Tesla being widely popular for a time is probably why no one has gone after him. Not sure how this changes going forward with Musk being a very polarizing figure now.
The previous two administrations (Trump I and Biden) being somewhat anti-Tesla or anti-Musk was some part of what prompted Musk to get into politics in the first place. Given the Biden admin's hostility, I would have expected the SEC and FTC to have been directed to do all they could against him within bounds, and so my first guess would be that they did, in fact, do everything justifiable.
While I didn't look long for a more neutral source, Teslarati has a good list of the prompts of the shift from Musk being anti-Trump and pro-Biden, to giving up on Biden, to supporting Trump: https://www.teslarati.com/former-tesla-exec-confirms-wsj-rep...
There were apparently also other considerations not associated with Tesla for his turn (transgender child, etc), but my read on all this is that Musk saw staying out of politics didn't mean politics would stay away from him. Given that Trump II is also now somewhat anti-Musk, it's not clear to me that he succeeded in avoiding a longer-term axe for Tesla (Neuralink/Solarcity/SpaceX/Boring...) from politicians. We'll see.
Maybe that’s what happens in late stage capitalism. The billionaires get so powerful that they become untouchable. He’s already shown that he uses his fortune to steer political outcomes.
They made tons of money on the Scam of the Decade™ from Oct 2016 (See their "Driver is just there for legal reasons" video) to Apr 2024 (when they officially changed it to Supervised FSD) and now its not even that.
SpaceX is a success despite Elon. Maybe setting an extremely lofty goal helped somewhat but Gwynne Shotwell and all the actual engineers at SpaceX deserve the credit for their success.
How is it despite Elon? I don't know the history too well.
I agree that the team deserves most of the success. I think that's the case in general. At best, a CEO puts down good framing/structure, that's it. ICs do the actual innovative work.
Small correction: LiDAR can’t literally see around corners — it’s still a line-of-sight sensor. What it can do is build an extremely precise 3D point cloud of what it can see, in all lighting conditions, and with far less susceptibility to “hallucinations” from things like glare, shadows, or visual artifacts that trip up purely vision-based systems.
The problem you’re describing — phantom braking, random wiper sweeps — is exactly what happens when the perception system’s “eyes” (cameras) feed imperfect data into a “brain” (compute + AI) that has no independent cross-check from another modality. Cameras are amazing at recognizing texture and color but they’re passive sensors, easily fooled by lighting, contrast, weather, or optical illusions. LiDAR adds active depth sensing, which directly measures distance and object geometry rather than inferring it.
But LiDAR alone isn’t the endgame either. The real magic happens in sensor fusion — combining LiDAR, radar, cameras, GNSS, and ultrasonic so each sensor covers the others’ blind spots, and then fusing data at the perception level. This reduces false positives, filters out improbable hazards before they trigger braking, and keeps the system robust in edge cases.
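To make "fusing data at the perception level" concrete, here's a toy sketch of the simplest version: inverse-variance weighting of two independent range estimates. The sensor variances are illustrative assumptions, not real spec values:

    # Toy measurement-level fusion: inverse-variance weighting of two
    # independent range estimates. Variances are illustrative assumptions
    # (vision depth is noisy at range; lidar's direct measurement is not).
    def fuse(z1, var1, z2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

    cam_z, cam_var = 48.0, 25.0       # camera estimate: 48 m, sigma ~5 m
    lidar_z, lidar_var = 45.2, 0.04   # lidar estimate: 45.2 m, sigma ~0.2 m

    z, var = fuse(cam_z, cam_var, lidar_z, lidar_var)
    print(f"fused: {z:.2f} m (var {var:.3f})")  # hugs lidar while it's healthy
    # When rain or absorptive surfaces degrade lidar, its variance grows
    # and the same formula automatically leans back toward the camera.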
And there’s another piece that rarely gets mentioned in these debates: connected infrastructure. If the vehicle can also receive data from roadside units, traffic signals, and other connected objects (V2X), it doesn’t have to rely solely on its onboard sensors. You’re effectively extending the vehicle’s situational awareness beyond its physical line of sight.
Vision-only autonomy is like trying to navigate with one sense while ignoring the others. LiDAR + fusion + connectivity is like having multiple senses and a heads-up from the world around you.
I don't need self-driving cars that can navigate alleys in Florence, Italy, and also parkways in New England. Here is what we really need: put transponders into the roadway on freeways and use those for navigation and lane positioning. Then you would be responsible for getting onto the freeway and off at your exit, but could take a nap in between. This would be something done by the DOT, supported by all car makers, and benefiting everyone. LIDAR could be used for obstacle detection but not for navigation. And whoever figures out how to do the transponders, lands a government contract, and gets at least one major car manufacturer on board would make bank.
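To illustrate the lane-positioning half of the idea, a toy sketch of the geometry (all coordinates invented for illustration; a real system would need surveyed transponder maps and robust ranging):

    # Toy lane positioning from two surveyed transponders. All coordinates
    # are made up; "car" stands in for a position fixed by transponder ranging.
    import math

    t1, t2 = (0.0, 0.0), (30.0, 0.5)   # centerline transponders (map data)
    car = (12.0, 1.8)                   # car position from ranging

    dx, dy = t2[0] - t1[0], t2[1] - t1[1]
    # Signed perpendicular distance from the t1->t2 centerline segment:
    offset = ((car[0] - t1[0]) * dy - (car[1] - t1[1]) * dx) / math.hypot(dx, dy)
    print(f"lateral offset: {offset:+.2f} m")   # controller steers this to zero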
We live in an area with sort of challenging roads, and I strongly disagree.
There’s an increasing number of drivers that can barely drive on the freeways. When they hit our area they cannot even stay on their side of the road, slow down for blind curves (when they’re on the wrong side of the road!), maintain 50% the normal speed of other drivers, etc. I won’t order uber or lyft anymore because I inevitably get one of these people as my driver (and then watch them struggle on straight stretches of freeway).
Imagine how much worse this will get when they start exclusively using lane keeping on easy roads. It’ll go from “oh my god I have to work the round wheel thingy and the foot levers at the same time!” to “I’ve never steered this car at speeds above 11”.
I’d much rather self driving focused on driving safely on challenging roads so that these people don’t immediately flip their cars (not an exaggeration; this is a regular occurrence!) when the driver assistance disables itself on our residential street.
I don’t think addressing this use case is particularly hard (basically no pedestrians, there’s a double yellow line, the computer should be able to compute stopping distance and visibility distance around blind curves, typical speeds are 25mph, suicidal deer aren’t going to be the computer’s fault anyway), but there’s not much money in it. However, if you can’t drive our road, you certainly cannot handle unexpected stuff in the city.
You describe it as challenging but it sounds like realistically it is just badly designed roads. But fixing that aside, nothing really stops you from outfitting secondary roads with transponders in principle. In practice, it is easier to start with freeways because (a) they are much more uniform, (b) the impact of an accident at freeways speeds is much more deadly, (c) no pedestrians, bicycles, etc., and (d) the federal government has control over the freeways (it is a complex relationship but ultimately the feds have a say) which means they can mandate installing the transponders and pay for it. Once the system functions there it can be expanded until every driveway and parking spot is outlined.
We already have transponders on freeways. They’re technically passive reflectors, but they reflect a high proportion of incident EM waves, in the visible spectrum, and exist between lanes on every major road in the US. Also known as white paint.
Following roads and lane markers and signs and signals is the "easy" part of autonomous driving. You could do everything you say and it wouldn't result in something that is any better than the current state of the art. Dealing with others on the road is the main problem (weather comes in close second). Your solution solves nothing, I'm afraid.
Physics prevents detected objects from jumping unrealistically. Current systems seem not to account for that at all, reacting to objects which appear and disappear spontaneously. Sensor fusion is exactly the solution to this: use a variety of sensors as input to reliably identify actual obstacles. To fake all the sensors at once you'd need to fake vision, lidar, and transponder locations simultaneously.
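One standard way to encode "objects can't jump" is a Kalman filter with a Mahalanobis gate: a detection whose innovation is statistically implausible under the motion model gets coasted past rather than braked for. A minimal single-axis sketch, with all tuning values as made-up assumptions (no claim that any production stack does exactly this):

    # Constant-velocity Kalman filter plus a Mahalanobis gate that ignores
    # physically implausible detections instead of reacting to them.
    import numpy as np

    dt = 0.1                                  # assumed 10 Hz frame rate
    F = np.array([[1, dt], [0, 1]])           # state: [position, velocity]
    H = np.array([[1, 0]])                    # we only measure position
    Q = np.diag([0.01, 0.1])                  # process noise (assumed)
    R = np.array([[0.5]])                     # measurement noise (assumed)

    x, P = np.array([0.0, 15.0]), np.eye(2)   # track starts at 15 m/s

    def step(x, P, z, gate=9.0):              # gate ~ chi-squared cutoff
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        innov = z - H @ x_pred
        S = H @ P_pred @ H.T + R
        if float(innov @ np.linalg.inv(S) @ innov) > gate:
            return x_pred, P_pred, False      # implausible jump: coast past it
        K = P_pred @ H.T @ np.linalg.inv(S)
        return x_pred + K @ innov, (np.eye(2) - K @ H) @ P_pred, True

    for z in [1.5, 3.0, 40.0, 6.1]:           # 40.0 plays the phantom object
        x, P, ok = step(x, P, np.array([z]))
        print(f"z={z:5.1f} accepted={ok} est_pos={x[0]:.2f}")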
My experience working at an automotive supplier suggests that Tesla engineers must have always known this, and that the real strategy was to provide the best ADAS experience with the cheapest sensor architecture. They certainly did achieve that goal.
There were aspirations that the bottom up approach would work with enough data, but as I learned about the kind of long tail cases that we solved with radar/camera fusion, camera-only seemed categorically less safe.
An easy edge case: a self-driving system cannot be inoperable due to sunlight or fog.
A more HN-worthy consideration: calculate the angular pixel resolution required to accurately range and classify an object 100 meters away (roughly the distance needed to stop safely from 80mph). Now add a second camera for stereo, and calculate the camera-to-camera extrinsic sensitivity you'd need to stay within to keep error sufficiently low in all temperature/road conditions.
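A rough worked version, with every figure (target size, pixel budget, field of view, baseline, matching error) an illustrative assumption:

    # Rough worked numbers for the challenge above. Every figure is an
    # illustrative assumption, not a real spec.
    import math

    target = 0.5          # m, cross-section of the object to classify
    distance = 100.0      # m, ballpark stopping distance from ~80mph
    px_needed = 10        # pixels across the object for classification

    ang_deg = math.degrees(2 * math.atan(target / (2 * distance)))  # ~0.29 deg
    px_per_deg = px_needed / ang_deg                                # ~35 px/deg
    hfov = 50.0           # assumed horizontal field of view, degrees
    print(f"~{px_per_deg * hfov:.0f} px horizontal for a {hfov:.0f} deg lens")

    # Stereo depth error: dZ ~ Z^2 * disparity_error / (focal_px * baseline)
    f_px = px_per_deg * 180 / math.pi   # focal length in px (small-angle approx)
    baseline = 0.2        # m between the two cameras (assumed)
    disp_err = 0.25       # px of combined matching + calibration error (assumed)
    dz = distance**2 * disp_err / (f_px * baseline)
    print(f"depth error at {distance:.0f} m: ~{dz:.1f} m")  # ~6 m from 1/4 px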
The answer is: screw that, I should just add a long range radar.
There are just so many considerations showing you need a multi-modality solution, and using human biology as a what-about-ism doesn't translate to currently available technology.
Lidar is the first thing brought up in these discussions, but lidar isn't that great a sensor. It does one thing well, and that's measure distance. A visual sensor can be measured along the axes of spatial resolution (x, y, z), temporal resolution (fps), and dynamic range (bit depth); you could add things like light frequency and phase, etc. Lidar is quite poor in all of these except the spatial z dimension -- measuring distance, as mentioned. Compared to a cheap camera, the fps is very low; the spatial resolution is a pathetic 128 lines in the vertical, more in the horizontal, but nowhere near megapixels. Finally, the dynamic range is one bit (something is there or not).
Lidars use near infrared and are just as susceptible to problems with natural fog (not the theatrical fog in that Mark Rober video) and rain.
Multiple cameras can do good enough depth estimation with modern neural networks. But cameras are vastly better at making sense of the world. You can’t read a sign with lidar.
Lidars have been reporting per-point intensity values for quite a while. The dynamic range is definitely not 1 bit.
Many lidar visualization tools will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line lidar, you can often read a typical US speed-limit sign at ~50 meters in this view.
One of the shower thoughts I've had is why don't we start equipping cars with UWB tech. UWB can identify itself, two UWB nodes can measure short range distances between each other (around 30m) with fairly decent accuracy and directionality.
Sure, it wouldn't replace any other sensing tech, but if my car has UWB and another car has UWB, they can telegraph where they are and what their intentions are a lot faster, and in a "cleaner" manner, than using a camera to watch the rear indicator for illumination.
Feels like Musk should step down from the CEO role. The company hasn’t really delivered on its big promises: no real self-driving, Cybertruck turned into a flop, the affordable Tesla never materialized. Model S was revolutionary, but Model 3 is basically a cheaper version of that design, and in the last decade there hasn’t been a comparable breakthrough. Innovation seems stalled.
At this point, Tesla looks less like a disruptive startup and more like a large-cap company struggling to find its next act. Musk still runs it like a scrappy startup, but you can’t operate a trillion-dollar business with the same playbook. He’d probably be better off going back to building something new from scratch and letting someone else run Tesla like the large company it already is.
This is not a heavily researched comment, but it seems to me that the Model 3 is relatively affordable, at least compared to the options available at the time. It depends on your point of comparison: there is a lot of competition, for sure. The Model 3 was successful to a good degree, don't you think? I mean, we should put a number on it so we're not just comparing feels. The Model Y sells well too -- or did, at least until the DOGE insanity.
Here's some heavy research for you -- Model 3 is competing with the likes of BMW, Audi etc. That's not considered the "affordable" tier. It's called luxury. Here's a comparison:
I will charitably interpret “heavy research” as a joke.
It is hard to interpret the smugness above in a positive light. It is unhelpful to you and to everyone here.
If you want to compare an electric car against combustion-engine vehicle, go ahead, but that isn’t a key decision point for what we’re talking about.
The TrueCar web page table does not account for the $7,500 federal tax credit for EVs. I note that it ends soon — September 30 — only to head off a potential zinger comment (which would be irrelevant to the overall point).
All in all, it is notable that ~2 minutes asking a modern large language model for various comparisons is more helpful than this conversation with another human (presumably). If we're going to advocate for the importance of humanity, it seems to me we should start demonstrating that we can at least act like we deserve it. I view HN primarily as a place to learn and help others, not a place for snarky comments.
A better modern comparison showing less expensive EVs would mention the Nissan Leaf or Chevy Equinox or others. The history is interesting and worth digging into. To mention one aspect: the Leaf had a ~7 year head start but the Tesla caught up in sales by ~2018 and became the best-selling EV — even at a higher price point. So this undermines any claim that Tesla wasn’t doing something right from the POV of customer perception.
I don’t need to “be right” in this particular comment — I welcome corrections — I’m more interested in error correction and learning.
The model 3 is 1.5x more expensive than the cheapest car on the list, and it’s not obviously better than other things in its price range.
Here are some brands that have delivered more affordable EVs than Tesla: Kia, Hyundai, Chevy, Cooper, Nissan.
Note that all of these cost about 2x more than international competitors.
On top of that, Ford’s upcoming platform is targeting $30K midsize pickup trucks. Presumably, most other manufacturers have similar things in their pipelines.
Tesla is already behind most of its competitors, and does not seem to have anything new in the pipeline, so the gap is likely to expand.
They’ve clearly failed to provide affordable EVs. They’ve been beaten to market by a half dozen companies in the North American market, and that’s with trade barriers blocking foreign companies that are providing cars for less than half these prices.
> No vehicle sales to having the best selling vehicles in the world
They have the best selling model in the world (their Model Y). But their total sales of all models are way behind many other car companies.
These car companies sell more cars each year than Tesla (ordered by total sales): Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, and Geely.
Toyota and Volkswagen each sell more cars in a year than Tesla has sold over its lifetime, and Hyundai-Kia's annual sales are about the same as Tesla's lifetime sales.
By revenue rather than units, these companies sell more per year: Volkswagen, Toyota, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, and SAIC Motor. (Edit: I accidentally left out Hyundai-Kia.)
If I had to guess, I’d say the original Tesla founders had a greater influence than Musk. His track record, frankly, is unimpressive. He’s been promising full self-driving “next year” since 2016, yet it’s still nowhere close. Aside from the Model S and X, there hasn’t been a major innovation under his watch. The real groundbreaking work likely came before him. His reign? Far from remarkable. Each year has been a cycle of overpromising (often outright lying) and underdelivering. As for Tesla’s stock? Well, markets can stay irrational far longer than most people can remain solvent.
Tarpenning and Eberhard left Tesla in 2008 and 2007 but somehow they had a greater influence? They contributed no money, nearly tanked the company but somehow were more important.
"His track record is unimpressive"... I can see why you say that, I mean, took Tesla from almost nothing to a trillion dollar company. Started the most prolific rocket and satellite company in history (but hey, it's only rocket science right?), provides internet to places that it never even had the possibility of getting to, and providing untold millions the chance to get on the internet.
He also started a company that is giving the paralyzed the ability to use a computer by controlling it with their brain, and that is working to restore sight to the blind.
Totally unimpressive. There are so many people who have done these things /s
I don't hate Tesla. I've owned two of them, and still have one. I just think there is severe underperformance, and a lot of hallucination, going on at the company.
FSD has been a complete lie since the beginning. Any reasonable person who followed the saga (and the name "FSD") can tell you that. It was Mobileye in 2015-2016, which worked quite well for what it was, followed by the unfulfilled "FSD next year" promise every year since.
Fool me once, shame on you; fool me twice, shame on me.
Daily reminder that Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company. Given Tesla's fundamentals (the types of assets they own, the logistics they've built out), the Powerwall and Megapack are closer to Tesla's core product than the cars are. (And they also make a bunch of other battery-ish things that have no consumer names, just MIL-SPEC procurement codes.)
Yes, right now car sales make up 78% of Tesla's revenue. But cars have 17% margins. The energy-storage division, currently at 10% of revenue, has more like 30% margins. And the car sales are falling as the battery sales ramp up.
The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries (and things like military UAV batteries) under large enterprise B2B contracts. Which is why Tesla is pushing the "car narrative" less and less over time, seeming to fade into B2C irrelevancy — all their marketing and sales is gradually pivoting to B2B outreach.
> Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company.
> The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries
This seems like revisionist history. They called their company Tesla Motors, not Tesla Energy, after all.
This is a blog post from the founder and CEO about their first energy play. It seems clear that their first energy product was an unintended byproduct of the Roadster; they worried about it being a distraction from their core car business, but decided to go ahead because they saw it as a way to strengthen that business.
> Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company
Are we still doing this in 2025?
Uber is not a taxi company it’s a transportation company! Just wait until they roll out buses!
Juicero is not a fruit squeezing company it’s an end to end technology powered nourishment platform!
And so on. Save it for the VC PowerPoints.
Tesla is a car company. Maybe some day it’ll be defined by some other lines of business too. Maybe one day they’ll even surpass Yamaha.
Nope. Don't even own a car. Military-industrial-complexes are just my special interest. And apparently Musk's, too. (What do grid-scale batteries, rockets, data-satellite constellations, and tunnel boring machines have in common? They're all products/services that can be — and already are being — sold to multiple allied nations' militaries. AFAICT, this is 90% of the reason Trump can't fully cut ties with the guy.)
This nazi-saluting manchild has been purposefully lying about self-driving for close to 10 years now, with self-driving coming "next year" every year. How is this legal and not false advertising?
Kinda wish we as consumers had some way to fight back against this obvious bullshit, since lord knows the government won't do anything.
Like if a company comes out with a new transportation technology and calls it "teleportation", but in fact is just a glorified trebuchet, they shouldn't be allowed to use a generic term with a well-understood meaning fraudulently. But no, they'll just call it "Teleportation™" with a patented definition of their glorified trebuchet, and apparently that's fine and dandy.
Our current infrastructure isn’t compatible with lidar. We were consulted to fix it, but governments have no idea how to approach this problem so it won’t happen for years.
This article makes no sense to me. They aren't changing the meaning of anything for consumers, it's only defining it for the purpose of the compensation milestone.
Am I the only one that noticed most of the targets are in nominal dollars, not inflation adjusted? Trump’s already prosecuting Fed leadership because they’re refusing to print money for him. Elon’s worked with him enough to understand where our monetary policy is headed.
Honest question: did Tesla in the past promise that FSD would be unsupervised? My based-on-nothing memory is that they weren't promising that you wouldn't have to sit in the driver's seat, or that your steering wheel would collect dust. Arguing against myself: they did talk about Teslas going off to park themselves and returning, but that's a fairly limited use case. Maybe in the robotaxi descriptions?
My memory was more that you'd be able to get into (the driver's seat of) your Tesla in downtown Los Angeles, tell it you want to go to the Paris hotel in Vegas, and expect generally not to have to do anything to get there. But not guaranteed nothing.
Is Full just a catch word for actually not full now?
Full Speed USB is 12Mbps, nobody wants a Full Speed USB data transfer.
Full Self Driving requires supervision. Clearly, even Tesla understands the implication of their name, or they wouldn't have renamed it Full Self Driving Supervised... They should probably have been calling it Supervised Self Driving since the beginning.
Autopilot on a plane requires a pilot in the cockpit. Does that make it "not auto"?
I get that there are many who rush to defend Musk/Tesla. I'm not one of them.
I was just caught off guard by the headline. To me, changing “Full Self-Driving” to “Full Self-Driving (Supervised)” doesn't merit the headline "Tesla changes meaning of ‘Full Self-Driving’, gives up on promise of autonomy".
Again, to me "Full Self-Driving" never meant you would retro-fit your Tesla to remove the steering wheel, nor even set it for someplace and go to sleep. To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
As others have pointed out, Tesla/Musk sometimes claimed more than that, but the vast majority of their statements re: FSD hew closer to what I said above. At least I think so -- no one yet has posted something where claims of more than the above are explicit and in the majority.
> Autopilot on a plane requires a pilot in the cockpit. Does that make it "not auto"?
Autopilot in a plane generally maintains heading and altitude. It can do that with or without a pilot in the cockpit, and you hear about incidents from time to time where the pilot is incapacitated and the autopilot keeps the heading and altitude until the fuel runs out. Keeping heading and altitude is insufficient to operate a plane, of course. Tesla's choice of the word Autopilot was also problematic, because the larger market of drivers doesn't necessarily understand the limitations of aviation autopilot, and many people thought the system was more capable than it actually is. An aviation-style autopilot wouldn't be much help on the road anyway: maintaining heading isn't useful when roads are not completely straight, and maintaining speed is sometimes useful, but that's been called cruise control for decades. (Some flight automation systems can do waypoints, and autoland is a thing, but AFAIK it's not all put together where you put the whole thing in at once and chill, nor would that be a good idea.)
> To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
I mean, that's sort of what the product is, although there are real safety concerns about humans' ability to context-switch and intervene properly. I see how that's supervised self-driving, but not how it's full self-driving.
If I paid 90% of your invoice and said paid in full, that doesn't make it paid in full.
Thx for this! The one that stands out is the guy who says "my grandmother who doesn't speak the language and doesn't drive" -- that speaks to unsupervised. That said, most of the rest don't seem incompatible with "supervised".
To be clear, this is obviously a reframing from the implications Musk has made. But I still don't see adding "supervised" to the description as that big a shift for most of the use cases that have been presented in the past.
In 2016 Musk said you’d be able to drive from LA to NYC without touching the steering wheel once “within 2 years”.
He’s been making untrue statements about Tesla FSD for a decade.
Yeah, I know he said this. Setting aside the two years aspect, which obviously didn't pan out/was a lie if we're being harsh, I don't see this language as incompatible with the current change -- they're adding "supervised" to the description. He didn't say you'd be able to go to sleep in the back seat. "able to" is not the same as "guaranteed to." Believe me, I'm not a fan, but I just don't see this language as that big of a shift.
The [2016 Tesla promotional] video carries a tagline saying: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”
Well there you go. It seems clear that most of what Tesla has said is compatible with the application of the word "supervised" without really changing the meaning much. But a few statements, and the overall implication, very much contradict that.
So what does it mean for Tesla's "Robotaxi"? Is that being shut down?
It's pathetic. The Austin Robotaxi demo had a "safety monitor" in the front passenger seat, with an emergency stop button. But there were failures where the safety monitor had to stop the vehicle, get out, walk around the car, get into the driver's seat, and drive manually. So now the "safety monitor" sits in the driver's seat.[1] It's just Uber now.
Do you have to tip the "safety monitor"?
And for this, Musk wants the biggest pay package in history?
The article is wrong. What happened is that they added some highway-capable ridehail vehicles, and only those vehicles have the safety person in the drivers seat. Frederic (the author) lives in Canada, he doesn't have access to any recent version of FSD.
"In a visible sign of its shifting posture from daredevil innovation to cautious compliance, Tesla this week relocated its robotaxi safety monitors, employees who supervise the autonomous software’s performance and can take over the vehicle’s operation at any moment, from the passenger seat to the driver’s seat."
And this one in Electrek.[2]
The state of Texas recently enacted regulations for self-driving cars that require more reporting. Tesla's Robotaxi with a driver is not a self-driving car under Texas law.
Musk claims Tesla will remove the safety driver by the end of the year. Is there a prediction market on that?
Seeking Alpha: [1] "In sharp contrast, Tesla FSD largely remains a beta product that still requires drivers' attention. Even their Robotaxi service, which likely launched more out of necessity to stay in the headlines as opposed to actual readiness, requires a full-time human to monitor the vehicle. Independent data consistently shows that Tesla requires far more human intervention on a per mile basis than Waymo. This is not surprising as it seems FSD errors are becoming commonplace, with numerous examples of Tesla's FSD mode swerving into oncoming traffic, misinterpreting construction zones, and even failing to recognize pedestrians continuing to pile up."
What I don't understand about this is that in my experience being driven around in friends' Teslas, it's already there. It really seems like legalese vs. technical capability. The damn thing can drive with no input and even find a parking spot and park itself. I mean, where are we even moving the goalposts at this point? Because there have been some accidents it's not valid? The question is how that compares to the accident rate of human drivers, not whether there should be an expectation of zero accidents ever.
The word "driving" has multiple, partially overlapping meanings. You're using it in a very informal sense to mean "I don't have to touch the controls much". Power to you for using whatever definitions you feel like.
Other people, most importantly your local driving laws, use driving as a technical term to refer to tasks done by the entity that's ultimately responsible for the safety of the entire system. The human remains the driver in this definition, even if they've engaged FSD. They are not in a Waymo. If you're interested in specific technical verbiage, you should look at SAE J3016 (the infamous "levels" standard), which many vehicle codes incorporate.
One of the critical differences between your informal definition and the technical one is whether you can stop paying attention to the road and remain safe. With your definition, it's possible to have a system where you're not "driving", but you still have a responsibility to react instantaneously to dangerous road events after hours of inaction. Very few humans can reliably do that. It's not a great way to communicate the responsibilities people have in a safety-critical task they do every day.
I don't understand why that is. They literally do nothing. The car drives itself. Parks itself. Does everything itself. The fact you have to engage with the wheel every now and then is because of regulation, not because the tech isn't there, imo. Really, to me there is zero difference between the Waymo and Tesla experience save for regulatory decisions that prevent the Tesla from being truly hands-free, eyes-shut.
The difference is liability. If you're riding a Waymo, you are not at all liable for what the vehicle does. If there is a collision, you don't need to exchange your insurance info or name or anything else (regardless of who is at fault). You are not allowed to be in the drivers seat.
Tesla has chosen to not (yet) assume that liability, and leave that liability to the driver and requires a driver in the drivers seat. But someone in the drivers seat can override the steering wheel accidentally and cause a collision, so they likely will require the drivers seat to be empty to assume liability (or disable all controls, which is only possible on a steer by wire vehicle, and the only such vehicle in the world is Cybertruck).
Tesla has not asked for regulatory approval for level 4 or 5. When they do, it'll be interesting to see how governments react.
It makes sense why they wouldn't from a game theory standpoint. Why not shift liability? Waymo would too if they could set up such a structure in a way that makes sense. It is a little different for a cab, where a 13-year-old could call one on mom's cellphone, vs. a car you buy outright that is registered to a licensed driver who pays for the insurance on it.
Still, my point is all this has nothing to do with the tech. It is all regulatory/legal checkers.
Because being a passenger in a driverless vehicle is a much better user experience than being a driver. You can be on a zoom call, sleep, watch a movie or TV show or scroll TikTok, get some work done on your computer, wear a VR headset and be in a different world, etc etc. Tesla would make a lot more money, and could charge a lot more for FSD.
They aren't doing that yet because they aren't ready yet. It's why they still have humans in the robotaxi service.
There are no doubts in my mind that they will do it probably next year. The latest version of FSD on the new cars is very, very impressive.
As I explained in the previous post, the crucial difference is
you still have a responsibility to react instantaneously to dangerous road events after hours of inaction.
There are no regulatory barriers impeding Tesla outside a small handful of states (i.e. California). The fact that you still have to supervise it is an intentional aspect of the system design to shift responsibility away from Tesla.
I don't know how to communicate this any more clearly to you, but I'm only talking about the safety design of the system. No legal or regulatory issues are involved.
That recent Bloomberg report showed Tesla FSD was an order of magnitude safer than human drivers in the U.S. People on Reddit tried to discount it because "FSD is not actually FSD because you have to tap the wheel every now and then, so those miles don't count," but really that is just a regulatory constraint, not some real technical issue. The car drives itself, its drivers don't pay attention at all, and it's getting these numbers.
The lesson here is to wait for a chill SEC and friendly DOJ before you recant your fraudulent claims, because then they won’t be found to be fraudulent
"Full Self Driving (Supervised)." In other words: you can take your mind off the road as long as you keep your mind on the road. Classic.
Tesla is kind of a joke in the FSD community these days. People working on this problem a lot longer than Musk's folk have been saying for years that their approach is fundamentally ignoring decades of research on the topic. Sounds like Tesla finally got the memo. I mostly feel sorry for their engineers (both the ones who bought the hype and thought they'd discover the secret sauce that a quarter-century-plus of full-time academic research couldn't find and the old salts who knew this was doomed but soldiered on anyway... but only so sorry, since I'm sure the checks kept clearing).
Until very recently I worked in the FSD community, and I wouldn't say I viewed it as a joke. I don't know if I believed they would get to level 5 without any lidar, but it's pretty good for what's available in the consumer market.
That's what I mean. Nobody I know thought there'd be a chance of getting to L4 (much less L5) without LIDAR. They doomed the goal from the gate and basically lied to people for years about the technological possibilities to pad their bottom line.
It's two steps from selling snake-oil, basically. Not that L4 or L5 are impossible, but people who knew the problem domain looked at how they were approaching it hardware-wise and went "... uhuh."
> In other words: you can take your mind off the road as long as you keep your mind on the road.
They literally did this with Summon. "Have your car come to you while dealing with a fussy child" - buried far further down the page in light grey, "pay full attention to the vehicle at all times" (you know, other than your "fussy child").
One problem might be that American driving is not exactly... well great, is it? Roads are generally too straight and driving tests too soft. And for some weird reason, many US drivers seem to have a poor sense of situational awareness.
The result is it looks like many drivers are unaware of the benefits of defensive driving. Take that all into account and safe 'full self driving' may be tricky to achieve?
Tesla's share price is all based on the Greater Fool Theory in the short run.
In the long run some of those promises might materialise. But who cares! Portfolio managers and retail investors want some juicy returns - share price volatility is welcomed.
Electric car + active battery management were what I cared about at the time of purchase. Also, I am biased against GM and Ford due to experiences with their cars in the 80s and 90s.
I doubt I'm the only one.
(In retrospect, the glass roof was not practical in Canada and I will look elsewhere in the future)
It needs to be known that Fred Lambert pushes out so much negative Tesla press that it's reasonable to say he's on a crusade. And not a very fact-based one.
Like with this. No, Tesla hasn't communicated any as such. Everyone knows FSD is late. But Robotaxi shows it is very meaningfully progressing towards true autonomy. And for example crushed the competition (not literally) in a recent very high-effort test in avoiding crashes on a highway with obstacles that were hard to read for almost all the other systems: https://www.youtube.com/watch?v=0xumyEf-WRI
> But Robotaxi shows it is very meaningfully progressing towards true autonomy.
What? They literally just moved the in car supervisor from the passenger seat to the driver seat. That's not a vote of confidence.
And I don't think you can glean anything. There are fewer than 20 Robotaxis in Austin, and they spend their time giving rides to influencers so they can make YT videos where even they have scary moments.
At best he is skilled at sales and marketing --- maybe even management. At worst, he is a con artist.
The real problem for Musk and others like him is that while it is certainly possible to fool some of the people some of the time, most will *eventually* come to realize the lack of credibility and stop accepting the BS.
Musk has firmly established a pattern of over promising and under delivering. DOGE and FSD are just two examples --- and there is more of the same in his pipeline.
You have been voted down, but this is proven. He has lied about his education. He never even enrolled at Stanford, and his undergraduate degree was basically a general-studies business degree.
It disagrees, yes, but it does not correct what was stated. And that appointment couldn't possibly be political could it?
Musk has lied, time and time again, about his education. He has never worked as an engineer. People have commented that he barely understands how to run simple Python scripts.
This is clickbait from a publication that's had it out for Tesla for nearly a decade.
Tesla is pivoting messaging toward what the car can do today. You can believe that FSD will deliver L4 autonomy to owners or not -- I'm not wading into that -- but this updated web site copy does not change the promises they've made prior owners, and Tesla has not walked back those promises.
The most obvious tell of this is the unsupervised program in operation right now in Austin.
Marketing choice of words aside, it's already really good now to the point that it probably does 95% of my driving. Once in a while it chooses the wrong lane and very rarely I will have to intervene, but it's always getting better. If they just called it "Advanced Driver Assist" or something, and politics weren't such an emotional trigger, it would be hailed as a huge achievement.
Yeah, Tesla did themselves no favors with how they initially marketed FSD, and all the missed timelines amplified the brand cost of that. I'm glad to see them focus on what it can do today. Better to underpromise and overdeliver etc.
As an aside, it's wild how different the perspective is between the masses and the people who experience the bleeding edge here. "The future is here, it's just not evenly distributed," indeed.
Yeah I think their early success with Tesla Vision was faster than expected, it went to their heads, and they underestimated the iteration and fine tuning needed to solve the edge cases. It's difficult to predict how many reps it will take to solve an intricate problem. That's not to excuse their public timeline -- their guidance was naive and IMO irresponsible -- but I don't think it was in bad faith.
> They have not given up on unsupervised autonomy. They are operating unsupervised autonomy in Austin TX as I type this!
Setting aside calling a driver in the driver's seat "unsupervised"... that's exactly the point. People paid for this, and they are revoking their promise of delivering it, instead re-focusing on (attempting) operating it themselves.
I'd have no objection to this if they offered buy-backs on the vehicles in the field, but that seems unlikely.
I would like to understand what population feels they were fleeced. The FSD available on their cars with HW3 (some as old as 2017?) is quite impressive when you consider what the capabilities were back then. Sure, it won’t be as good as a 2025 Juniper Model Y. But who are the people that bought FSD in the early days and are unhappy and how big of a population is that? Is this the main thing people are upset about?
Or are people upset about the current state of autonomous vehicles like Waymo (which has been working for Years!) and the limited launch of Robotaxi?
I haven't closely followed which rides have drivers where, and what is driven by Tesla vs what is regulatory -- but I thought some "drivers" were still in the passenger seat in Austin?
At any rate, I don't think they are revoking their prior promises. I expect them to deliver L4 autonomy to owners as previously promised. With that said, I'm glad they are ceasing that promise to new customers and focusing on what the car does today, given how wrong their timelines have been. I agree it's shitty if they don't deliver that, and that they should offer buybacks if they find themselves in that position.
Yeah, they never said this. This article smells like anti-Elon FUD. "Elon is a dummy, everything he tries will fail, replace him with someone who isn't so controversial and supports the proper politics for a powerful global figure" and repeat in 100 minor internet blogs until the money to write these articles runs out.
I don't read the article (besides the clickbait headline and the author's "take") as Tesla "giving up". No marketing is changing, no plans for taxi services are changing. This is about the company's famously captured board giving their beloved CEO flexibility on how to meet their ambitious-sounding targets, by using vague language in the definitions. This way if Tesla fails to hit 10 million $100/month FSD subscriptions, they could conceivably come up with a cheaper more limited subscription and get Elon his pay.
The big win of multiple modalities is the ability to detect sensor disagreements at all.
With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).
If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
But you'd think the budget config of the Boeing 737 MAX would have taught us that tying safety-critical systems to single sources of truth is a bad idea... (in that case, a critical modality hung off a single physical sensor)
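A toy sketch of that "drop into maximum safety" idea, with invented thresholds and a single scalar range per modality (real systems would do this per-track, with uncertainty):

    def plan(camera_range_m, lidar_range_m, tol_m=2.0):
        if abs(camera_range_m - lidar_range_m) <= tol_m:
            return "nominal"      # modalities agree; plan normally
        return "max_safety"       # gross disagreement; slow down, widen margins

    print(plan(42.0, 41.3))  # -> nominal
    print(plan(42.0, 12.0))  # -> max_safety: one modality is badly wrong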
"A man with a watch always knows what time it is. If he gains another, he is never sure"
Most safety critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which is right?
EDIT:
> If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
What do you do?
They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
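A minimal sketch of what "building the joint distribution" means for the two-lane pedestrian scenario, with made-up confusion matrices standing in for each sensor's calibrated error model:

    # Hypotheses: pedestrian is in lane 0 (ours) or lane 1 (the other).
    prior = {0: 0.5, 1: 0.5}

    # P(sensor says s | truth t), keyed (s, t). Invented numbers; in a real
    # system you'd calibrate these from logged data.
    p_vision = {(0, 0): 0.90, (1, 0): 0.10,
                (0, 1): 0.20, (1, 1): 0.80}
    p_lidar  = {(0, 0): 0.95, (1, 0): 0.05,
                (0, 1): 0.05, (1, 1): 0.95}

    vision_says, lidar_says = 0, 1   # the disagreement in question

    post = {t: prior[t] * p_vision[(vision_says, t)] * p_lidar[(lidar_says, t)]
            for t in (0, 1)}
    z = sum(post.values())
    print({t: round(p / z, 3) for t, p in post.items()})
    # -> {0: 0.191, 1: 0.809}: the sensor modeled as more lane-reliable gets
    #    more weight, and the output is a confidence, not just a winner.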
> You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
Any realistic system would see them long before your eyes do. If you are so worried, override the AI in the moment.
Lots of safety critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems.
It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved.
You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains.
> If you are so worried, override the AI in the moment.
This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind.
This might seem like an edge case, but edge cases matter when failure might kill somebody.
1. https://space.stackexchange.com/questions/9827/if-the-space-...
This is completely different from systems that cover different domains, like vision and lidar.
In many domains I see a tendency to oversimplify decision-making algorithms for the convenience of human understanding (e.g., voting rather than developing a joint probability distribution in this case; supply chain and manufacturing in particular seem to love rules of thumb) rather than using the better algorithms that modern compute enables for higher performance, safety, etc.
I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind:
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them.
Cars can stop in quite a short distance. The only way this could happen is if the pedestrian was obscured behind an object until the car was dangerously close. A safe system will recognize potential hiding spots and slow down preemptively - good human drivers do this.
"Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds.
I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop.
> This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
> What do you do?
Go into your failure mode. At least you have a check to indicate a possible issue with 2 signals.
People are underweighting the alternative single system hypothetical -- what does a Tesla do when its vision-only system erroneously thinks a pedestrian is one lane over?
This is why good redundant systems have at least 3... in your scenario, without a tie-breaker, all you can do is guess at random which one to trust.
For example jet aircraft commonly have three pitot static tubes, and you can just compare/contrast the data to look for the outlier. It works, and it works well.
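That pattern fits in a few lines (threshold and units are illustrative):

    def vote(readings, max_dev=5.0):
        """readings: three values of the SAME quantity (e.g. airspeed, knots)."""
        consensus = sorted(readings)[1]   # median of three
        outliers = [i for i, r in enumerate(readings) if abs(r - consensus) > max_dev]
        return consensus, outliers

    print(vote([251.0, 250.0, 249.5]))  # -> (250.0, [])   all healthy
    print(vote([251.0, 250.0, 180.0]))  # -> (250.0, [2])  sensor 2 iced over

Note the trick only works because all three sensors measure the same quantity in the same units.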
If you tried to do that with e.g. LIDAR, vision, and radar with no common point of reference, solving for trust and resolving disagreements is an incredibly difficult technical challenge. Other variations (e.g. two vision + one LIDAR) do not really make it much easier either.
Tie-breaking during sensor fusion is a billion+ dollar problem, and will always be.
Also, this is probably when Waymo calls up a human assistant in a developing-country callcentre.
But vision only hasn't worked --- not as promised, not after a decade's worth of timeline. And it probably won't any time soon either --- for valid engineering reasons.
Engineering 101 --- *needing* something to work doesn't make it possible or practical.
It was maybe a valid argument 10 years ago, but in 2025 many companies have shown sensor fusion works just fine. I mean, Waymo has clocked 100M+ miles, so it works. The AV industry has moved on to more interesting problems, while Tesla and Musk are still stuck in the past arguing about sensor choices.
The old ambition is dead.
[1] https://electrek.co/2025/05/16/tesla-robotaxi-fleet-powered-...
I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver.
If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance.
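In sketch form (PyTorch-style; every size and name here is a placeholder, not anyone's actual architecture):

    import torch
    import torch.nn as nn

    # "Another sensor is just another input": per-modality encoders,
    # concatenate features, one head, train end to end.
    class FusionNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.cam_enc   = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())  # stand-in for a CNN
            self.lidar_enc = nn.Sequential(nn.Linear(512, 256), nn.ReLU())   # stand-in for a point-cloud net
            self.head      = nn.Linear(512, 3)

        def forward(self, cam_feat, lidar_feat):
            fused = torch.cat([self.cam_enc(cam_feat), self.lidar_enc(lidar_feat)], dim=-1)
            return self.head(fused)

    net = FusionNet()
    print(net(torch.randn(8, 1024), torch.randn(8, 512)).shape)  # torch.Size([8, 3])

And ablating a modality (dropping or zeroing its encoder and retraining) is exactly how you'd measure whether a sensor contributes meaningfully, per the point above.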
It's really hard to accept cost as the reason when Tesla is preparing a trillion-dollar pay package. I suppose that can be reconciled if one considers the venture to be a vehicle (ha!) to shovel as much money as possible from investors and buyers into Elon's pockets; I imagine the prospect of being the world's first trillionaire is appealing.
Infra-red of a few different wavelengths as well as optical light ranges seems like it'd give a superior result?
Notably, sensor confusion is also an “unsolved” problem in humans, eg vision and vestibular (inner ear) conflicts possibly explaining motion sickness/vertigo <https://www.nature.com/articles/s44172-025-00417-2>
The results of both tournaments: <https://carnewschina.com/2025/07/24/chinas-massive-adas-test...> Counterintuitively, vision scored best (Tesla Model X)
The videos are fascinating to watch (subtitles are available): Tournament 1 (36 cars, 6 Highway Scenarios): <https://www.youtube.com/watch?v=0xumyEf-WRI> Tournament 2 (26 cars, 9 Urban Scenarios): <https://www.youtube.com/watch?v=GcJnNbm-jUI>
Highway Scenarios: “tests...included other active vehicles nearby to increase complexity and realism”: <https://electrek.co/2025/07/26/a-chinese-real-world-self-dri...>
Urban Scenarios: “a massive, complex roundabout and another segment of road with a few unsignaled intersections and a long straight...The first four tests incorporated portions of this huge roundabout, which would be complex for human drivers, but in situations for which there is quite an obvious solution: don’t hit that car/pedestrian in front of you” <https://electrek.co/2025/07/29/another-huge-chinese-self-dri...>
But we're far from plateauing on what can be done with vision - humans can drive quite well with essentially just sight, so we're far from exhausting what can be done with it.
So like a human driver. Problem is, automatic drivers need to be substantially better than humans to be accepted.
>The EX90's LiDAR enhances ADAS features like collision mitigation and lane-keeping, which are active and assisting drivers. However, full autonomy (Level 3) is not yet available, as the Ride Pilot feature is still under development and not activated.
All of that combined is probably closer to $1k than to $140.
And, again, that's - what - 10 years after Tesla originally made the decision to go vision only.
It wasn't a terrible idea at the time, but they should've pivoted at some point.
They could've had a massive lead in data if they pivoted as late as 3 years ago, when the total cost would probably be under $2.5k, and that could've led to a positive feedback loop, cause they'd probably have a system better than Waymo by now.
Instead, they've got a pile of garbage, and no path to improve it substantially.
They might be!
But I doubt it.
I don't know enough about Tesla's cameras, but it's not implausible to think there are LIDARs of low enough quality that you'd be better off with a good quality camera for your sensor.
Again, I doubt this is the case with BYDs cameras.
But it's still worth pointing out, I think.
My point is, BYD's LIDAR system costing $x is only one small part of the conversation.
Solid-state LIDAR is still a fairly new thing. LIDAR sensors were big, clunky, and expensive back when Tesla started their Autopilot/FSD program.
I googled a bit and found a DFR1030 solid-state LIDAR unit for 267 DKK (for one). It has a field of view of 108 degrees and an angular resolution of 0.6 degrees. It has an angle error of 3 degrees and a max distance of 300mm. It can run at 7.5-28 Hz.
Clearly fine for a floor-cleaning robot or a toy. Clearly not good enough for a car (which would need several of them).
LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this influences how they, and regulators, will think about self driving cars.
And the general public?! No way. Most are completely unaware of the foibles of LLMs.
No they don't. You're making a straw man rather than trying to put forth an actual argument in support of your view.
If you feel you can't support your point, then don't try to make it.
I responded to this parent comment:
"LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything."
You take issue with my response of:
"loads of DEVs on here will claim LLMs are infallible"
You're not really making sense. I'm not straw-manning anything, as I'm directly discussing the statement made. What exactly are you presuming I'm throwing a straw man over?
It's entirely valid to say "there are loads of supposed experts that don't see this point, and you're expecting the general public to?". That's clearly my statement.
You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part.
Do pay better attention please. And your entire last sentence is way over the line. We're not on reddit.
The irony of telling someone not to be rude while being absolutely insufferable. Peak redditor behavior.
Please provide examples. Thank you!
Based on what I've read over the years: it costs too much for a consumer vehicle, it creates unwanted "bumps" in the vehicle visual design, and the great man said it wasn't needed.
Yes, those reasons are not for technology or safety. They are based on cost, marketing, and personality (of the CEO and fans of the brand).
https://opg.optica.org/oe/fulltext.cfm?uri=oe-31-2-2013&id=5...
LIDAR requires line-of-sight (LoS) and hence cannot see around corners, but RADAR probably can.
It's interesting to note that the all-time 2nd most popular post on Tesla is from 9 years ago, on its full self-driving hardware (2nd only to the controversial Cybertruck) [1].
>Elon's vision-only move was extremely "short-sighted"
Elon's vision was misguided because some of the technologists at the time, including him, seem to have truly believed that AGI was just around the corner (pun intended). Now most tech people have given up on the AGI claim, blaming the blurry definition of AGI, but for me the true killer AGI application has always been full autonomous level 5 driving with only human-level sensor perception, minus the LIDAR and RADAR. But that goal is so complex that I truly believe it will not be achieved in the foreseeable future.
[1] All Tesla Cars Being Produced Now Have Full Self-Driving Hardware (2016 - 1090 comments):
https://news.ycombinator.com/item?id=12748863
I rented a Tesla a while back and drove from the Bay Area to Death Valley. On clear roads with no hazards whatsoever, the car hit the brakes at highway speeds. It scared the bejeesus out of me! I was completely put off by the auto-drive, and it derailed my plans to buy a Tesla.
I'm in a 2025 with HW4, and its dramatic improvement over the last couple of years (I previously had a 2018 Model 3) increased my confidence that Elon was right to focus on vision. It wasn't until late last year that I found myself using it more often than not; now I use it on almost every drive, point to point (Cupertino to SF), and it handles it.
I think people are generally sleeping on how good it is, and the politicization means people are undervaluing it for stupid reasons. I wouldn't consider a non-Tesla because of this (unless it was a stick-shift sports car, but that's for different reasons).
Their lead is so crazy far ahead it's weird to see this reality and then see the comments on hn that are so wrong. Though I guess it's been that way for years.
The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.
If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off.
I'd really encourage people here with a bias to dismiss it to ignore the comments and just go try it out for yourself in real life.
This is not a general solution, it is an SF one... at best.
Most humans also don't get in accidents or have problems with phantom braking within the timeframe that you mentioned.
The Bay Area has massive traffic and complex interchanges; SF has tight, difficult roads with heavy fog. Sometimes there's heavy rain on 280. Highway 17 is also nontrivial.
What Tesla has done is not trivial and roads outside the bay are often easier.
People can ignore this to serve their own petty cognitive bias, but others reading their comments should go look at it for themselves.
Here outside of Los Angeles, about an hour east, they do not do well at all on their 'auto-pilot.'
Your area has the benefit of being one of the primary training areas, and thus the dataset for your area is good.
Try that here. I'll be more than happy to watch you piss yourself as the Tesla tries to take you into the HOV lane THROUGH THE BARRIERS.
But what is the point of using it everywhere if you still need to pay attention to the road and keep your hands on the steering wheel?
It’ll be nice when that’s not required anymore, but even today it’s way more comfortable.
The filters introduce the problem of incorrectly deleting something that really is there.
tl;dr: you can use optics to determine if there's rain on a surface, from below, without having to use any fancy cameras or anything, just a light source and light sensor.
If you're into this sort of thing, you can buy these sensors and use them as a rain sensor, either as binary "yes its rained" or as a tipping bucket replacement: https://rainsensors.com
Careful. HN takes a dim view of puns.
But Tesla didn't do this.
Self-starting wipers use some kind of current/voltage measurement on the windshield, right -- unrelated to self-driving? That's been around longer than Tesla. Or are you just saying it's another random failure?
Check this for a reference of how well Tesla's vision-only fares against the competition, where many have LiDAR. Keep it simple wins the game. https://www.youtube.com/watch?v=0xumyEf-WRI
One analyst asked about the reliability of Tesla’s cameras when confronting sun glare, fog, or dust. Musk claimed that the company’s vision system bypasses image processing and instead uses direct photon counting to account for “noise” like glare or dust.
This... is horseshit. Photon counting is not something you can do with a regular camera, or any camera installed on a Tesla. A photon-counting camera doesn't produce imagery that is useful for vision. Even beyond that, it requires a closed environment so that you can, you know, count the photons in a controlled manner -- not an open outside atmosphere.
It's bullshit. And Elon knows it. He just thinks that you are too stupid to know it and instead think "Oh, yeah, that makes sense, what an awesome idea, why is only Tesla doing this?" and are wowed by Elon's brilliance.
It wasn't Elon's but Karpathy's.
Musk's model is all this sort of first-principles thinking; it's why his companies pull off things like Starship. I wouldn't bet against it.
Elon is being foolish and weirdly anthropomorphic.
It’s already better at X-rays and radiology in many cases.
Everything you are talking about is just a matter of sufficient learning data and training.
The most important thing is that Tesla/Elon absolutely had no way to know, and no reason to believe (other than as a way to rationalise a dangerously risky bet) that machine vision would be able to solve all these issues in time to make good on their promise.
(For those who don't want to click through: "LIDAR is a fool's errand, and anyone relying on LIDAR is doomed.")
I am much more curious about the next ten years. If we can bring the cost of a LIDAR unit down to parity with camera systems[1], I think I know the answer. But I thought that 10 years ago and it did not happen, so I wonder what the real roadblock is to making LIDAR cheap.
[1] Which it won't replace, of course. What it will change is that it makes the LIDAR a regular component, not an exceptionally expensive component.
Anything except the lowest end car will cost $20K or more, so $200 is one percent of that price.
True self-driving is still a baby that needs to grow; it cannot yet compete against an adult human with 30+ years of experience. As self-driving actually matures to that level, the market will grow.
A two to four ton vehicle that can accelerate like a Ferrari and go over 100 mph, fully self-driving, and 'a few hundred dollars is way too much'.
Disagree, even as they dial back the claims -- which may or may not affect how people use the vehicles. These things respond too quickly for flaky senses modeled on human sensoriums.
Supervised FSD is already safer than a human.
And those complex traffic situations are the main challenge for autonomous driving. Getting the AIs to do the right things before they get themselves into trouble is key.
Lidar is not a silver bullet. It helps a little bit, but not a whole lot. It's great when the car has to respond quickly to get it out of a situation that it shouldn't have been in to begin with. Avoiding that requires seeing and understanding and planning accordingly.
You can train a DL model to act like a LiDAR based on only camera inputs (the data collection is easy if you already have LiDAR cars driving around). If they could get this to work reliably, I'm sure the competition would do it and ditch the LiDAR, but they don't, so that tells us something.
For anyone who understands sensor fusion and the Kalman filter, read this and ask yourself if you trust Elon Musk to direct the sensor strategy on your autonomous vehicle: https://www.threads.com/@mdsnprks/post/DN_FhFikyUE
For anyone wondering: to a sensors engineer, the above post is like saying 1 + 1 = 0 -- the truth (and the science) is the exact opposite of what he's saying.
If you look at the statistics on fatal car accidents, 85%+ involve collisions with stationary objects or other road users.
Nobody's suggesting getting rid of machine vision or ML - just that if you've got an ML+vision system that gets in 1 serious accident per 200,000 miles, adding LIDAR could improve that to 1 serious accident per 2,000,000 miles.
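The back-of-the-envelope arithmetic behind that kind of jump, assuming the added modality fails mostly independently of vision (the numbers are illustrative):

    # Back-of-envelope redundancy arithmetic (illustrative numbers only).
    # If a second, mostly-independent modality catches 90% of the situations
    # the first one mishandles, the residual failure rate drops 10x.
    vision_failure_per_mile = 1 / 200_000     # hypothetical vision-only rate
    fraction_caught_by_lidar = 0.9            # assumption: independent failure modes
    combined = vision_failure_per_mile * (1 - fraction_caught_by_lidar)
    print(f"1 per {1/combined:,.0f} miles")   # -> 1 per 2,000,000 miles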
edit: no, it was ultrasonic sensors. But this was likely object detection, and now it's gone.
He's doing the exact same thing and worse to people he doesn't like.
In any case, thanks, TIL!
> this tells me volumes about what's going on in the computer vision system
Emphasis:
> computer vision system
The cold hard truth is that LIDARs are a crutch; they're not strictly necessary. We know this because humans can drive without LIDAR. However, they are a super useful crutch: they give you super high positional accuracy (something that's not always easy to estimate in a vision-only system). Radars are also a super useful crutch because they give really good radial velocity. (Little anecdote: when we finally got the radars working properly at work, it made a massive difference to our car's ability to follow other cars, ACC, in a comfortable way.)
Yes, machine learning vision systems hallucinate, but so do humans. The trick for Tesla would be to get it good enough that it hallucinates less than humans do (they're nowhere near yet -- humans don't hallucinate very often).
It's also worth adding that last I checked the state of the art for object detection is early fusion where you chuck the LIDAR and Radar point clouds into a neural net with the camera input so it's not like you'd necessarily have the classical methods guardrails with the Lidar anyway.
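For the curious, a toy sketch of the early-fusion shape being described, with LiDAR and radar returns rasterized into extra image-aligned channels (shapes and channel choices are purely illustrative):

    # Sketch of "early fusion": project LiDAR/radar returns into the camera
    # frame, rasterize them as extra image channels, and feed everything
    # through one network. Not any particular production stack.
    import torch
    import torch.nn as nn

    class EarlyFusionBackbone(nn.Module):
        def __init__(self):
            super().__init__()
            # 3 RGB channels + 1 LiDAR depth raster + 1 radar radial-velocity raster
            self.net = nn.Sequential(
                nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )

        def forward(self, rgb, lidar_depth, radar_vel):
            x = torch.cat([rgb, lidar_depth, radar_vel], dim=1)  # fuse at the input
            return self.net(x)  # detection/planning heads would attach downstream

    fused = EarlyFusionBackbone()(
        torch.rand(1, 3, 128, 128),  # camera frame
        torch.rand(1, 1, 128, 128),  # sparse depth raster (0 where no return)
        torch.rand(1, 1, 128, 128),  # radar radial-velocity raster
    )
    print(fused.shape)  # torch.Size([1, 64, 64, 64])

The point being: once everything goes through one learned backbone, there is no separate classical LIDAR pipeline acting as a guardrail.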
Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018. Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.
The hard truth is there is no reason to limit machines to only the tools humans are biologically born with. Cars always have crutches that humans don't possess. For example, wheels.
In a true self-driving utopia, all of the cars are using multiple methods to observe the road and drive (vision, lidar, GPS, etc) AND they are all communicating with each other silently, constantly, about their intentions and status.
Why limit cars to what humans can do?
The reason this is clear is because, except for a brief period in late 2022, Teslas have included some combination of radar and ultrasonic sensors. [0]
[0] https://en.m.wikipedia.org/wiki/Tesla_Autopilot_hardware
Turns out, when there's demand for LIDAR in this form factor, people invest in R&D to drive costs down and set up manufacturing facilities to achieve economies of scale. Wow, who could have predicted this‽
https://www.youtube.com/watch?v=VuDSz06BT2g
Western countries might not be smart enough to keep R&D because Wall Street sees it as a cost center.
You know what else used to be expensive? Structured light sensors. They cost $$$$ in 2009. Then Microsoft started manufacturing the Kinect for a mass market, and in 2010 price went down to $150.
You know what's happened to LIDAR in the past decade? You guessed it, costs have come massively down because car manufacturers started buying more, and costs will continue to come down as they reach mass market adoption.
The prohibitive cost of LIDAR coming down was always just a matter of time. A "visionary" like Musk should have been able to see that. Instead he thought he could outsmart everyone with a technology that was not suited for the job, and he made the wrong bet.
This should be expected when someone who is *not* an experienced engineer starts making engineering decisions.
FOSS is the obvious counterexample to your absurdly firm stance, but so are many artistic pursuits that use engineering techniques and principles, etc.
and propellers on a plane are not strictly necessary because birds can fly without them? The history of machines shows that while nature can sometimes inspire the _what_ of a machine, it is a very bad source of inspiration for the _how_.
Crutch for what? AI does not have human intelligence yet, and let's stop pretending it does. There is no shame in that, though the word "crutch" implies there is.
If a sensor provides additional data, why not use it? Sure, humans can drive without lidars, but why limit the AI to human-like sensors?
Why even call it a crutch? IMO It's an advantage over human sensors.
That's because our stereoscopic vision has vastly more dynamic range, faster focusing, and more processing power behind it than a computer vision system. Peripheral vision is very good at detecting movement, and the central view can process a tremendous amount of visual data without even trying.
Even a state-of-the-art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and should be present in any car.
This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only.
You’re not considering them even though they have the best ADAS on the market, lmao. Suit yourself.
https://m.youtube.com/watch?v=2V5Oqg15VpQ
Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B?
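For what it's worth, the textbook answer to the contradiction question is to weight each measurement by its uncertainty, as in the Kalman measurement update mentioned upthread. A 1-D sketch with made-up numbers:

    # The textbook answer to "sensor A contradicts sensor B": weight each
    # measurement by its inverse variance, as in a Kalman update.
    # 1-D sketch: fusing two range estimates for the distance to a lead car.
    def fuse(z_a, var_a, z_b, var_b):
        """Inverse-variance weighted fusion of two scalar measurements."""
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        z = (w_a * z_a + w_b * z_b) / (w_a + w_b)
        var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
        return z, var

    # Camera says 42 m but is noisy at range; LiDAR says 49 m with tight error bars.
    print(fuse(z_a=42.0, var_a=25.0, z_b=49.0, var_b=0.04))  # ~ (48.99, 0.04)

A real filter would also gate on the innovation: if the two sensors disagree by many sigma, you flag a fault instead of blindly averaging.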
“3.6 roentgen, not great, not terrible.”
Probably comes down to lidar (and AI) failure modes.
Also don’t forget that as a human you can move your head any which way, and also draw on your past experiences driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.
A system that's only based on cameras is only as good as its ability to recognize all road hazards, with no fall back if that fails. With LIDAR, the vehicle might not know what's the solid object in front of the vehicle using cameras, but it knows that it's there and should avoid running into it.
This is a good example of why sensor fusion is good.
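A toy sketch of that "it's there, whatever it is" property -- pure geometry, no classification, with all thresholds invented for illustration:

    # LiDAR gives you geometry even when classification fails. If enough
    # returns sit inside the corridor the car is about to sweep through,
    # treat it as an obstacle, whatever it is.
    import numpy as np

    def obstacle_ahead(points, lane_half_width=1.5, max_range=40.0, min_hits=8):
        """points: (N, 3) array of (x forward, y left, z up) in metres."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        in_corridor = (
            (x > 0.5) & (x < max_range)        # in front of the bumper
            & (np.abs(y) < lane_half_width)    # within our lane
            & (z > 0.2) & (z < 3.0)            # above road surface, below overpasses
        )
        return in_corridor.sum() >= min_hits   # require a cluster, not lone noise

    cloud = np.random.uniform([-5, -10, 0], [60, 10, 4], size=(5000, 3))
    print(obstacle_ahead(cloud))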
When was the last time you had full attention on the road and a reflection of light made you super confused and suddenly drive crazy? When was the last time you experienced objects behaving erratically around you, jumping in and out of place, and perhaps morphing?
When was the last time you saw a paper bag blown across the street and mistook it for a cat or a fox? (Did you even notice your mistake, or do you still think it was an animal?)
Do you naturally drive faster on wide streets and slower on narrow streets, because the distance to the side of the road changes your subconscious feeling of how fast you're going? Do you even know, or are you limited to your memories rather than a dashcam whose footage can be reviewed later?
etc.
Now don't get me wrong, AI today is, I think, worse than humans at safe driving; but I'm not sure how much of that is that AI is more hallucinate-y than us vs. how much of it is that human vision system failures are a thing we compensate for (or even actively make use of) in the design of our roads, and the AI just makes different mistakes.
Self-driving is probably “AI-hard” as you’d need extensive “world knowledge” and be able to reason about your environment and tolerate faulty sensors (the human eyes are super crappy with all kinds of things that obscure it, such as veins and floaters).
Also, if the Waymo UI accurately represents what it thinks is going on “out there” it is surprisingly crappy. If your conscious experience was like that when you were driving you’d think you had been drugged.
The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp
And is also emphasised by driving safety awareness videos: https://www.youtube.com/watch?v=LRFMuGBP15U
I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception?
Most of us don't realise how much we misperceive because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.
The correct move for Tesla would have been to split the difference and add LIDAR to some subset of their fleet, ideally targeted in the most difficult to debug environments.
Somewhat like Google/Waymo are doing with their Jaguars.
Don't LIDAR 100% of Teslas, but add it to >0%.
Reportedly, they no longer use this widely - but they still have some LIDAR-equipped "scout vehicles" they send into certain environments to collect extra data.
So maybe LIDAR isn't necessary, but if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision, I doubt it would be cheaper than LIDAR to get the same result.
One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad. This is compensated for by changing what we're looking at: when we care about details, we can just look directly at them, and the resolution is better right in the centre of the picture.
I agree that Tesla may have made the right hardware decision when they started with this. It was probably a bad idea to lock themselves into that path by over-promising.
(and that's not even addressing that human vision is fundamentally a weird sensory mess full of strange evolutionary baggage that doesn't even make sense except for genetic legacy)
The assumption was that with similar sensors (or practically worse - digital cameras score worse than eyeballs in many concrete metrics), ‘AI’ could be dramatically better than humans.
At least with Tesla’s experience (and with some fudging based on things like actual fatal accident data) it isn’t clear that is actually what is possible. In fact, the systems seem to be prone to similar types of issues that human drivers are in many situations - and are incredibly, repeatedly, dumb in some situations many humans aren’t.
Waymo has gone full LiDAR/RADAR/Visual, and has had a much better track record. But their systems cost so much (or at least used to), that it isn’t clear the ‘replace every driver’ vision would ever make sense.
And that is before the downward pressure on the labor market started to happen post-COVID, which hurts the economics even more.
The current niche of Taxis kinda makes sense - centrally maintained and capitalized Taxis with outsourced labor has been a viable model for a long time, it lets them control/restrict the operating environment (important to avoid those bad edge cases!), and lets them continue to gather more and more data to identify and address the statistical outliers.
They are still targeting areas with good climates and relatively sane driving environments because even with all their models and sensors, heavy snow/rain, icy roads, etc. are still a real problem.
When the argument was Phoenix is too pleasant I could buy that. Most places aren't Phoenix. But SF and LA are both much more like a reasonable place other humans live. It rains, but not always, it's misty, but not always. Snow I do accept as a thing, lots of places humans live have some snow, these cities don't really have snow.
However for ice when I watch one of those "ha, most drivers can make this turn in the ice" videos I'm not thinking "I bet Waymo wouldn't be able to do this" I'm thinking "That's a terrible idea, nobody should be attempting it". There's a big difference between "Can it drive on a road with some laying snow?" and "Can it drive on ice?".
Both SF and LA climates are super cushy compared to say, Northern Michigan. Or most of the eastern seaboard. Or even Kansas, Wyoming, etc. in the winter.
In those climates, if you don’t drive in what you’re calling ‘nobody should be attempting it’ weather, you starve to death in your house over the winter, because many months are just like that.
Self driving has a very similar issue with the vast majority of, say, Asia, because there, “this is crazy, no one should be driving in these conditions” is the norm. So if it can’t keep up, it’s useless.
Eastern and far Northern Europe has a lot of kinda similar stuff going on.
Self driving cars are easy if you ignore the hard parts.
In India, I’ve had to deal with Random Camel, missing (entire) road section that was there yesterday, 5 different cars in 3 lanes (plus 3 motorcycles) all at once, many cattle (and people) wandering in the road at day and night, and the so common it’s boring ‘people randomly going the wrong way on the road’. If you aren’t comfortable bullying other drivers sometimes to make progress or avoid a dangerous situation, you’re not getting anywhere anytime soon.
All in a random mix of flooding, monsoon rain, super hot temperatures, construction zones, fog, super heavy fireworks smoke, etc. etc.
Hell, even in the US I’ve had to drive through wildfires and people setting off fireworks on the road (long story, safety reasons). The last thing I would have wanted was the car freezing or refusing.
Is that super safe? Not really. But life is not super safe. And a car that won’t help me live my life is useless to me.
Such an AI would, of course, be a dangerous asshole on, say, LA roads. Even more than the existing locals.
I live in the middle of a city, so, no, in terrible weather just like great weather I walk to the store; no need to "starve to death" even if conditions are too treacherous for people to sensibly drive cars. Because I'm an old man who used to live somewhere far from a city, I have had situations where you can't use a car to go fetch groceries: even if you don't care about safety, the car can't get up an icy hill; it loses traction, gravity takes over, and you slide back down (and maybe wreck the car).
Because, speaking as an old man who has actually lived in all these places - and who has ridden in Waymos and had friends on the Waymo team in the past - your comments seem pretty ridiculous.
A lot of the large population centres in the US are in these what you're calling "super cushy" zones where there's not much snow let alone ice. More launches in cities in Florida, Texas, California will address millions more people but won't mean more ice AFAIK. So I guess for you the most interesting announcement is probably New York, since New York certainly does have real snow. 2026 isn't that long, although I can imagine that maybe a President who thinks he's entitled to choose the Mayor of New York could mess that up.
As to the "But people in some places are crazy drivers" I saw that objection from San Francisco before it was announced. "Oh they'll never try here, nobody here drives properly. Can you imagine a Waymo trying to move anywhere in the Mission?". So I don't have much time for that.
Robots are supposed to make up for our limitations by doing things we can't do, not do the things we can already do, but differently. The latter only serves to replace humans, not augment them.
This was only plausible to people who had no experience in robotics, autonomy, and vision systems.
Everyone knew LIDAR was the enabling technology thanks to the 2007 DARPA Urban challenge.
But the ignoramus Elon Musk decided he knew better and spent the last decade-plus trashing the robotics industry. He set us back as far as safety protocols in research and development, caused the first death due to a robotic car, deployed them on public roads without the consent of the public by throwing around his massive wealth, lied consistently for a DECADE about the capabilities of these machines, and defrauded customers and shareholders while becoming richer and richer, all to finally admit defeat while he still maintains that Tesla's future growth story lies in robotics. The nerve of this fucking guy.
Maybe they'll reach level 4 or higher automation, and will be able to claim full self driving, but like fusion power and post-singularity AI, it seems to be one of those things where the closer we get to it, the further away it is.
Others are in prison for far less.
The first time could be an honest mistake, but after a certain point we have to assume that it’s just a lie to boost the stock price.
Just like politicians, it seems there are no repercussions for CEOs lying as long as it's fleecing the peons and not the elite.
Compare Boston Dynamics' robots with a cat: they are on absolutely different levels in their bodies and their ability to manipulate those bodies.
I have no doubt that cameras alone could eventually work for AI cars, but at the same time I feel that this kind of AI is not there yet. If we want autonomous cars, it might be possible, but we need to equip them with as many sensors as necessary, not set artificial boundaries.
Check out what the Tesla park assist visualization shows now. It's vision based and shows a 3D recreation of the world around the car. You can pan around to see what's there and how far away it is. It's fun to play around with in drive thrus, garages, etc. just to see what it sees.
I guess you don't drive? You use more senses than just vision when driving a car.
You also do use your ears when driving.
Provable by one-eyed people being able to drive just fine, as could you with one eye covered.
No. I live in snow country. Folks with vestibular issues are advised to pull over in snowstorms because sometimes the only indication that you have perpendicular velocity and are approaching a slide off the road or spin is that sense. My Subaru has on more than one occasion noticed a car before I did based on radar.
Vision only was a neat bet. But it will cost Tesla first-to-market status generally, and especially in cities, where regulators should have fair scepticism about a company openly trying to do self-driving on the cheap.
> That something magical is happening because the eyes are close topographically to the brain?
It sounds to me like you need to study what eyes actually are. It's not about proximity or magic: they are a part of your brain, and we're only beginning to understand their complexities. Eyes are not just sensory organs, so the analogy to cameras is way off. They are able to discern edges, motion, color, and shapes, and to correct errors, before your brain is even aware.
In robotics, we only get this kind of information after the camera image has been sent through a perception pipeline, often incurring a round trip through some sort of AI and a GPU at this point.
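To make that concrete, here is roughly what the front of such a pipeline looks like; what the retina does for free, the robot computes explicitly, stage by stage (a minimal sketch, not a production pipeline):

    # What the retina does "for free", a robot computes explicitly.
    # Minimal sketch of the front of a perception pipeline with OpenCV:
    import cv2
    import numpy as np

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # colour -> luminance
    edges = cv2.Canny(gray, 100, 200)                 # edge extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # ...and only after further (often GPU/NN) stages do you get "that's a car, 40 m out".
    print(f"{len(contours)} candidate shapes this frame")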
> Sounds implausible.
Musk just spent billions of dollars and the better part of a decade trying to prove the conjecture that "cameras are sufficient", and now he's waving the white flag. So however implausible it sounds, it's now more implausible than ever that cameras alone are sufficient.
What is up with hn today? Was there a mass stroke?
Deaf drivers (may include drivers playing loud music too) don't, unless they're somehow tasting the other vehicles.
Nature's accelerometers.
I've had mine go bad, and it wasn't fun.
Just sayin'...
I was unable to stand up.
It all came out OK, in the end, but it was touch-and-go for a while.
Not quite a Lotus Position, but I used the Epley Maneuver on her which immediately lessened her symptoms: https://en.wikipedia.org/wiki/Epley_maneuver
Even driving with mild vertigo could be difficult because you want to restrict your head movement.
Source: my dad gets Benign paroxysmal positional vertigo (BPPV)
I am not saying that you couldn’t do this with hardware, I am quite confident you could actually, but I am just saying that there are senses other than sight and sound at play here.
But that's sort of beside the point: why would you not use additional data when the price of the sensors is baked into the feature you're selling?
While it doesn't often snow or ice up here (it does sometimes), it does rain a good bit from time to time. You can usually feel your car start to hydroplane and lose traction well before anything else goes wrong. It's an important thing to feel but you wouldn't know it's happening if you're going purely on vision.
You can often feel when there's something wrong with your car. Vibrations due to alignment or balance issues. Things like that.
Those are quick examples off the top of my head. I'm sure there are more.
Of course, all these things can be tracked with extra sensors, I'm not arguing humans are entirely unique in being able to sense these things. But they are important bits of feedback to operate your car safely in a wide range of conditions that you probably will encounter, and should be accounted for in the model.
As for auditory feedback: while some drivers don't have sound input available to them (whether they're deaf or their music is too loud or whatever), sound is absolutely a useful input to have. You may hear emergency vehicles you cannot see. You may hear honking alerting you to something weird going on in a particular direction. You may hear issues with your car. Rumble strips are also tuned to be loud when cars run over them. You can hear big wind gusts and understand they are the source of the weird forces pushing the car around, as opposed to other things making your car behave strangely. So sure, one can drive a car without sound, but it's not better without it.
The problem is clearly a question of the fidelity of the vision and our ability to slave a decision maker and mapper to it.
Sure, for some definition of "works"...
https://www.iihs.org/research-areas/fatality-statistics/deta...
Sensory processing is not matched, sure, but IMO how a human drives is more involved than it needs to be. We only have two eyes and they both look in the same direction. We need to continuously look around to track what's around us. It demands a lot of attention from us that we may not always have to spare, especially if we're distracted.
Not on all metrics, especially not simultaneously. The dynamic range of human eyes, for example, is extremely high.
AFAIK there is also more than one front camera. Why would anyone try to do it all with one or two camera sensors like humans do?
It's important to remember that the cameras Tesla is using are optimized for everything but picture quality. They are not just taking flagship phone camera sensors and sticking them into cars. That's why their dashcam recordings look so bad (to us), if you've ever seen them.
Try matching a cat's eye on those metrics. And it is much simpler than the human one.
But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.
Thus the Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (it happens to my Model 3 as well), whereas the human brain knows that there's no need to do that.
Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.
LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.
Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.
No part costs less, never breaks, doesn't need to be installed or stocked on every dealership's shelf, and can't let a supplier hold up production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also does not need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.
That's all off the top of my head. I'm sure there's even better reasons out there.
But LIDAR would probably be wired more directly to the computer and use a packet protocol, rather than the CAN bus.
it has to do with the processing of information and decision-making, not data capture
Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].
[0] https://en.wikipedia.org/wiki/Ghost_riding
He said consumers could just buy the car and it would come with an update. It didn't.
This is a scam, end of story.
7 years of it.
I think you mean "securities fraud", at gargantuan scale at that. Theranos and Nikola were nowhere near that scale.
Perhaps it's that cars are more sacred than healthcare.
Delusionaly generous take. Perhaps even zealotry.
There is little to suggest that Tesla is any closer to level 4 automation than Nabisco is. The Dojo supercomputer that was going to get them there? Never existed.
The persistent problem seems to be severe weather, but the gap between the weather a human shouldn't drive in and weather a robot can't drive in will only get smaller. In the end, the reason to own a self-driven vehicle may come down to how many severe weather days you have to endure in your locale.
Interesting that Waymo now operates just fine in SF fog, and is expanding to Seattle (rain) and Denver (snow and ice).
A system that requires a "higher level" handler is not full self driving.
If the vehicle has a collision, who's ultimately responsible? That person (or computer) is the driver.
If a Waymo hits a pole for example, the software has a bug. It wasn't the responsibility of a remote assistant to monitor the environment in real time and prevent the accident, so we call the computer the driver.
If we put a safety driver in the seat and run the same software that hits the same pole, it was the human who didn't meet their responsibility to prevent the accident. Therefore, they're the driver.
Which is why an autonomous car company that is responsible and prioritizes safety would never call their SAE Level 4 vehicle "full self-driving".
And that's why it's so irresponsible and dangerous for Tesla to continue using that marketing hype term for their SAE Level 2 system.
But is Level 4 enough to count as "Full Self Driving"? I'd argue it really depends on how big the geofence area is, and how rare interventions are. A car that can drive on 95% of public roads might as well be FSD from the perspective of the average drive, even if it falls short of being Level 5 (which requires zero geofencing and zero human intervention).
California granted Waymo the right to operate on highways and freeways in March 2024.
https://www.reddit.com/r/waymo/comments/1gsv4d7/waymo_spotte...
> and uses remote operators to make decisions in unusual situations and when it gets stuck.
This is why it's limited in markets and areas of service: connectivity for this sort of thing matters. Your robotaxi crashing because the human backup lost 5G connectivity is going to be a real, real bad look. No one is talking about their intervention stats. If they were good, I would assume someone would publish them for marketing reasons.
Waymo navigates autonomously 100% of the time. The human backup's role is limited to selecting the best option if the car has stopped due to an obstacle it's not sure how to navigate.
Interventions are a term of art, i.e. it has a specific technical meaning in self-driving. A human taking timely action to prevent a bad outcome the system was creating, not taking action to get unstuck.
> IF they were good I would assume that someone would publish them for marketing reasons.
I think there's an interesting lens to look at it through: remote interventions are massively disruptive; the car goes into a specific mode and support calls in to check on the passenger.
It's baked into the UX judgement; it's not really something a specific number would shed more light on.
If there was a significant problem with this, it would be well-known given the scale they operate at now.
Are they? Did you mean Autonomous Vehicles?
L4 is "full autonomy, but in a constrained environment." L5 is the holy grail: as good as or better than human in every environment a human could take a car (or, depending on who's doing the defining: every road a human could take a car on. Most people don't say L5 and mean "full Canyonero").
That's a distinction without a difference. Forest service and BLM roads are "roads" but can be completely impassable or 100% erased by nature (and I say this as a former Jeep Wrangler owner), they aren't always located where a map thinks they are, and sometimes absolutely nothing differentiates them from the surrounding nature -- for example, left turn into a desert dry wash can be a "road" and right not.
Actual "full" autonomous driving is crazy hard. Like, by definition you get into territory where some vehicles and some drivers just can't make it through, but it's still a road(/"environment"). And some people will live at the end of those roads.
It initially seems mad that a human inside the box can outperform the "finest" efforts of a multi-zillion-dollar company. The human has all their sensors inside the box, most of them stymied by the non-transparent parts. Bad weather makes it worse.
However, look at the sensors and compute being deployed on cars. It's all minimums and cost-focused - basically MVP, with deaths as a costed variable in an equation.
A car could have cameras with views everywhere for optical, plus LIDAR, RADAR, even a form of SONAR if it can be useful, microwave, and way more. Accelerometers and all sorts too, all feeding into a model.
As a driver, I've come up with strategies such as "look left, listen right". I'm British so drive on the left and sit on the right side of my car. When turning right and I have the window wound down, I can watch the left for a gap and listen for cars to the right. I use it as a negative and never a positive - so if I see a gap on the left and I hear a car to my right, I stay put. If I see a gap to the left but hear no sound on my right, I turn my head to confirm that there is a space and do a final quick go/no go (which involves another check left and right). This strategy saves quite a lot of head swings and if done properly is safe.
I now drive an EV - one year so far - an SAIC MG4, with cameras on all four sides that I can't record from but can use. It has lane assist (lateral control, which craps out on many A-road sections but is fine on motorway-class roads) and cruise control that will keep a safe distance from other vehicles (it works well on most roads and very well on motorways; there are restrictions).
Recently I was driving and a really heavy rain shower hit as I was overtaking a lorry. I immediately dived back into lane one, behind the lorry, and put cruise on. I could just see the edge white line, so I dealt with left/right and the car sorted out forward/backward. I can easily deal with both, but it's quite nice to be able to carefully hand over responsibilities.
I answered the question "What does Waymo lack, in your opinion, to not be considered 'full self driving'?" And clearly it's not full self-driving if it can't drive on literally 99.99% of roads in the world. Any argument to the contrary is just ridiculous.
Germany, Italy, India all stand out as examples to me. The roads and driving culture are very different, and can be dangerous to someone who is used to driving on American suburban streets.
I really do stand by my comment, and apologize for the 'low quality' nature of it. I meant to suggest that we set the bar far higher for AI than we do for people, which is in general a good thing. But still - I would say that by this definition of 'full self driving', it wouldn't be met very well by many or most human drivers.
Of course I may have simply been lucky, but given that my driving license is valid in many countries it seems as though humanity has determined this is mostly a solved problem. When someone says "Put a Waymo on random road in the world, can it drive it?" they mean: I would expect a human to be able to drive on a random road in the world. And they likely could. Can a Waymo do the same?
I don't know the answer to that one. But if there is one thing that humans are pretty good at it is adaptation to circumstances previously unseen. I am not sure if a Waymo could do the same but it would be a very interesting experiment to find out.
American suburban streets are not representative of driving in most parts of the world. I don't think the bar of 'should be able to drive most places where humans can drive' is all that high and even your average American would adapt pretty quickly to driving in different places. Source: I know plenty of Americans and have seen them drive in lots of countries. Usually it works quite well, though, admittedly, seeing them in Germany was kind of funny.
"Am I hallucinating or did we just get passed by an old lady? And we're doing 85 Mph?"
That's experience: you learned, and survived to tell the tale. It's almost as though you are capable of learning how to deal with an unfamiliar environment, and of failing safe!
I'm a Brit and have driven across most of Europe, US/CA and a few other places.
Southern Italy eg around Napoli is pretty fraught - around there I find that you need to treat your entire car as an indicator: if you can wedge your car into a traffic stream, you will be let in, mostly without horns blaring. If you sit and wait, you will go grey haired eventually.
In Germania, speed is king. I lived there in the 70s-90s as well as being a visitor recently. The autobahns are insane if you stray out of lane one, the rest of the road system is civilised.
France - mostly like driving around the UK, apart from their weird right-hand-side-of-the-road thing! Le Périphérique is just as funky as the M25 and the Place de la Concorde is a right old laugh. The rest of the country that I have driven is very civilised.
Europe to the right of Italy is pretty safe too. I have to say that across the entirety of Europe, road signage is very good. The one sign that might confuse any non-European is the white and yellow diamond (we don't have them in the UK). It means that you have priority over an implied "priority to the right". See https://driveeurope.co.uk/2013/02/27/priority-to-the-right/ for a decent explanation.
Roundabouts were invented in the US. In the UK, when you are actually on a roundabout you have right of way. However, everyone will behave as though "priorité à droite" applies, and there will often be a stand-off - it's hilarious!
In the UK, when someone flashes their headlights at you it generally means "I have seen you and will let you in". That generally surprises foreigners (I once gave a lift to a prospective employee candidate from Poland and he was absolutely aghast at how polite our roads seemed to be). Don't always assume that you will be given space but we are pretty good at "after you".
I don't agree.
My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
My last dozen ridesharing experiences only had a single driver that wasn't actively hazardous on the road. One of them was so bad that I actually flagged him on the service.
My Waymo experiences, by contrast, have all been uniformly excellent.
I suspect that Waymo is already better than the median human driver (anecdata suggests that's a really low bar)--and it just keeps getting better.
> My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
Those two aren't really related, are they? That's one locality and a specific kind of driver. If you picked a random road, there is a pretty small chance it would be like the ones where Waymo is currently rolled out, and your ridesharing drivers likely aren't representative of the general public anyway.
Based on the rate of progress alone I would expect functional vision-only self-driving to be very close. I expect people will continue to say LIDAR is required right up until the moment that Tesla is shipping level 4/5 self-driving.
Pro tip if you get stuck in a warren of tiny little back streets in the area. Latch on to the back of a cab; they're generally on their way to a major road to get their fare where they're going and they usually know a good way to get to one. I've pulled this trick multiple times around city hall, Government Center, the old state house, etc.
Like, does it get naively caught in stopped traffic for turns it could lane-change out of, or does it fucking send it?
So close yet so far, which is ironically the problem vision-based self-driving has: no concrete information, just a guess based on the simplest surface data.
https://www.nytimes.com/2025/05/13/business/tesla-stock-sale...
https://www.afr.com/technology/life-changing-wealth-stopped-...
Please, post numbers to back this up… please…
Probably Tesla being the only major domestic EV manufacturer + Musk historically not wading into politics + Musk/Tesla being widely popular for a time is why no one has gone after him. Not sure how this changes going forward with Musk being a very polarizing figure now.
Yeah, historically, as in: before many people here were born. It's been so long since SEC and FTC did such things.
I'm curious why you think this. I would be pretty shocked if, despite Musk's disgusting personality, they weren't also bought in.
While I didn't look long for a more neutral source, Teslarati has a good list of the prompts of the shift from Musk being anti-Trump and pro-Biden, to giving up on Biden, to supporting Trump: https://www.teslarati.com/former-tesla-exec-confirms-wsj-rep...
There were apparently also other considerations not associated with Tesla for his turn (transgender child, etc), but my read on all this is that Musk saw staying out of politics didn't mean politics would stay away from him. Given that Trump II is also now somewhat anti-Musk, it's not clear to me that he succeeded in avoiding a longer-term axe for Tesla (Neuralink/Solarcity/SpaceX/Boring...) from politicians. We'll see.
like this one: https://www.bbc.com/news/technology-53418069
For as long as we can’t understand AI systems as well as we understand normal code, first principles thinking is out of reach.
It may be possible to get FSD another way but Elon’s edge is gone here.
I agree that the team deserves most of the success. I think that's the case in general. At best, a CEO puts down good framing/structure, that's it. ICs do the actual innovative work.
The problem you’re describing — phantom braking, random wiper sweeps — is exactly what happens when the perception system’s “eyes” (cameras) feed imperfect data into a “brain” (compute + AI) that has no independent cross-check from another modality. Cameras are amazing at recognizing texture and color but they’re passive sensors, easily fooled by lighting, contrast, weather, or optical illusions. LiDAR adds active depth sensing, which directly measures distance and object geometry rather than inferring it.
But LiDAR alone isn’t the endgame either. The real magic happens in sensor fusion — combining LiDAR, radar, cameras, GNSS, and ultrasonic so each sensor covers the others’ blind spots, and then fusing data at the perception level. This reduces false positives, filters out improbable hazards before they trigger braking, and keeps the system robust in edge cases.
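One toy illustration of how cross-sensor corroboration suppresses phantom events, treating each modality as an (imperfectly) independent witness -- the numbers and the threshold are invented:

    # Before hard braking, require that the braking hypothesis is
    # corroborated across modalities instead of trusting one sensor.
    def brake_decision(p_cam, p_lidar, p_radar, threshold=0.9):
        """Naive-Bayes-style combination of per-sensor obstacle probabilities,
        assuming (imperfectly) independent failure modes."""
        odds = 1.0
        for p in (p_cam, p_lidar, p_radar):
            p = min(max(p, 1e-6), 1 - 1e-6)  # clamp away from 0/1
            odds *= p / (1 - p)
        p_fused = odds / (1 + odds)
        return p_fused > threshold, p_fused

    # Camera hallucinates an obstacle (0.8) but LiDAR and radar see open road:
    print(brake_decision(0.8, 0.05, 0.10))  # -> (False, ~0.02): no phantom brake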
And there’s another piece that rarely gets mentioned in these debates: connected infrastructure. If the vehicle can also receive data from roadside units, traffic signals, and other connected objects (V2X), it doesn’t have to rely solely on its onboard sensors. You’re effectively extending the vehicle’s situational awareness beyond its physical line of sight.
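Sketching the V2X idea in a few lines -- the message fields here are made up for illustration, loosely inspired by the SAE J2735 Basic Safety Message rather than any real codec:

    # Other vehicles and roadside units broadcast state the cameras
    # can't see yet. Hypothetical message schema, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SafetyMessage:
        sender_id: str
        lat: float
        lon: float
        speed_mps: float
        hard_braking: bool

    def hazards_beyond_line_of_sight(messages):
        """Hazards we were told about but cannot yet see (e.g. around a corner)."""
        return [m for m in messages if m.hard_braking]

    inbox = [
        SafetyMessage("truck-17", 48.1374, 11.5755, 22.0, hard_braking=True),
        SafetyMessage("car-03",   48.1380, 11.5760,  0.0, hard_braking=False),
    ]
    print(hazards_beyond_line_of_sight(inbox))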
Vision-only autonomy is like trying to navigate with one sense while ignoring the others. LiDAR + fusion + connectivity is like having multiple senses and a heads-up from the world around you.
There’s an increasing number of drivers that can barely drive on the freeways. When they hit our area they cannot even stay on their side of the road, slow down for blind curves (when they’re on the wrong side of the road!), maintain 50% the normal speed of other drivers, etc. I won’t order uber or lyft anymore because I inevitably get one of these people as my driver (and then watch them struggle on straight stretches of freeway).
Imagine how much worse this will get when they start exclusively using lane keeping on easy roads. It’ll go from “oh my god I have to work the round wheel thingy and the foot levers at the same time!” to “I’ve never steered this car at speeds above 11”.
I’d much rather self driving focused on driving safely on challenging roads so that these people don’t immediately flip their cars (not an exaggeration; this is a regular occurrence!) when the driver assistance disables itself on our residential street.
I don’t think addressing this use case is particularly hard (basically no pedestrians, there’s a double yellow line, the computer should be able to compute stopping distance and visibility distance around blind curves, typical speeds are 25mph, suicidal deer aren’t going to be the computer’s fault anyway), but there’s not much money in it. However, if you can’t drive our road, you certainly cannot handle unexpected stuff in the city.
Just kidding.
Wait, no! Please. No!
How do I delete this???
There were aspirations that the bottom up approach would work with enough data, but as I learned about the kind of long tail cases that we solved with radar/camera fusion, camera-only seemed categorically less safe.
easy edge case: a self-driving system cannot be inoperable due to sunlight or fog.
a more hackernews-worthy consideration: calculate the angular pixel resolution required to accurately range and classify an object 100 meters away (roughly the distance needed to safely stop if you're traveling 80 mph). Now add a second camera for stereo and calculate the camera-to-camera extrinsic sensitivity you'd need to stay within to keep error sufficiently low in all temperature/road-condition scenarios.
The answer is: screw that, I should just add a long range radar.
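Running the rough numbers behind that conclusion, under illustrative assumptions (1920 px sensor, 50° horizontal FOV, 20 cm stereo baseline):

    # Back-of-envelope stereo ranging error at 100 m. Geometry is illustrative.
    import math

    width_px, hfov_deg, baseline_m, range_m = 1920, 50.0, 0.20, 100.0
    f_px = (width_px / 2) / math.tan(math.radians(hfov_deg / 2))  # ~2059 px focal length

    disparity_px = f_px * baseline_m / range_m                    # ~4 px at 100 m
    # Depth error grows with range squared: dZ = Z^2 / (f * B) * d(disparity)
    err_per_half_pixel_m = range_m**2 / (f_px * baseline_m) * 0.5

    print(f"focal length ~ {f_px:.0f} px, disparity at 100 m ~ {disparity_px:.1f} px")
    print(f"0.5 px of disparity/calibration error ~ {err_per_half_pixel_m:.0f} m of depth error")

Roughly a dozen metres of depth error at 100 m from half a pixel of calibration drift: exactly why a sensor that measures range directly looks so attractive.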
there are just so many considerations that show you need a multi-modality solution, and using human biology as a what-about-ism doesn't translate to currently available technology.
Much Lidar visualization software will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line Lidar you can often read a typical US speed limit sign at ~50 meters in this view.
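Something like this, if you want to see the trick in a few lines (toy data; real tools do the same thing with the actual intensity channel):

    # Scatter the points in 2D and let the colormap carry the intensity
    # channel -- retroreflective sign paint pops out of the cloud.
    import numpy as np
    import matplotlib.pyplot as plt

    pts = np.random.rand(2000, 4)   # stand-in cloud: x, y, z, intensity
    pts[::50, 3] = 1.0              # a few high-intensity "sign" returns

    plt.scatter(pts[:, 0], pts[:, 1], c=pts[:, 3], cmap="inferno", s=2)
    plt.colorbar(label="return intensity")
    plt.show()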
Sure, it wouldn't replace any other sensing tech, but if my car has UWB and another car has UWB, they can telegraph where they are and what their intentions are a lot faster and more cleanly than using a camera to watch the rear indicator for illumination.
At this point, Tesla looks less like a disruptive startup and more like a large-cap company struggling to find its next act. Musk still runs it like a scrappy startup, but you can’t operate a trillion-dollar business with the same playbook. He’d probably be better off going back to building something new from scratch and letting someone else run Tesla like the large company it already is.
https://www.truecar.com/compare/bmw-3-series-vs-tesla-model-...
It is hard to interpret the smugness above in a positive light. It is unhelpful to you and to everyone here.
If you want to compare an electric car against combustion-engine vehicle, go ahead, but that isn’t a key decision point for what we’re talking about.
The TrueCar web page table does not account for a $7,500 federal tax credit for EVs. I recognize it ends soon — September 30 — if only to head off a potential zinger comment (which would be irrelevant to the overall point).
All in all, it is notable that ~2 minutes asking a modern large language model for various comparisons is more helpful than this conversation with another human (presumably). If we’re going to advocate for the importance of humanity, it seems to me we should start demonstrating that we deserve it. I view HN primarily as a place to learn and help others, not a place for snarky comments.
A better modern comparison showing less expensive EVs would mention the Nissan Leaf or Chevy Equinox or others. The history is interesting and worth digging into. To mention one aspect: the Leaf had a ~7 year head start but the Tesla caught up in sales by ~2018 and became the best-selling EV — even at a higher price point. So this undermines any claim that Tesla wasn’t doing something right from the POV of customer perception.
I don’t need to “be right” in this particular comment — I welcome corrections — I’m more interested in error correction and learning.
https://www.edmunds.com/electric-car/articles/cheapest-elect...
The Model 3 is 1.5x the price of the cheapest car on the list, and it’s not obviously better than other things in its price range.
Here are some brands that have delivered more affordable EVs than Tesla: Kia, Hyundai, Chevy, Mini, Nissan.
Note that all of these cost about 2x more than international competitors.
On top of that, Ford’s upcoming platform is targeting $30K midsize pickup trucks. Presumably, most other manufacturers have similar things in their pipelines.
Tesla is already behind most of its competitors and does not seem to have anything new in the pipeline, so the gap is likely to widen.
They’ve clearly failed to provide affordable EVs. They’ve been beaten to market by a half dozen companies in the North American market, and that’s with trade barriers blocking foreign companies that are providing cars for less than half these prices.
They are still profitable, have very little debt, and have a ton of money in the bank.
Every company has hits and misses. Bezos started before Musk and still hasn't gotten his rockets into orbit.
They have the best selling model in the world (their Model Y). But their total sales of all models are way behind many other car companies.
These car companies sell more cars each year than Tesla (ordered by total sales): Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, and Geely.
Toyota and Volkswagen each sell more cars in a year than Tesla has sold over its lifetime, and Hyundai-Kia's annual sales are about the same as Tesla's lifetime sales.
By revenue rather than units, these companies sell more per year: Volkswagen, Toyota, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, and SAIC Motor. (Edit: I accidentally left out Hyundai-Kia)
"His track record is unimpressive"... I can see why you say that, I mean, took Tesla from almost nothing to a trillion dollar company. Started the most prolific rocket and satellite company in history (but hey, it's only rocket science right?), provides internet to places that it never even had the possibility of getting to, and providing untold millions the chance to get on the internet.
Started a company that is giving the paralyzed the ability to use a computer controlling their brain, and is working to restore sight to the blind.
Totally unimpressive. There are so many people who have done these things /s
That's contradicted here:
> The two co-founders funded the company until early 2004, when Elon Musk led the company's $6.5 million Series A financing
https://en.wikipedia.org/wiki/Marc_Tarpenning
https://www.forbes.com/sites/quora/2014/12/29/how-much-equit...
Your claim was
> They contributed no money
Where did you get this from?
Tesla also licensed the design from tZero before the series-A I believe. And Eberhard was sort of keeping tZero afloat:
https://www.youtube.com/watch?v=88KHfX_kPIY
FSD has been a complete lie since the beginning. Any reasonable person who followed the saga (and the name "FSD") can tell you that. It was Mobileye in 2015-2016, which worked quite well for what it is, followed by an unfulfilled "FSD next year" promise every year since.
Fool me once, shame on you; fool me twice, shame on me.
Yes, right now car sales make up 78% of Tesla's revenue. But cars have 17% margins. The energy-storage division, currently at 10% of revenue, has more like 30% margins. And the car sales are falling as the battery sales ramp up.
The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries (and things like military UAV batteries) under large enterprise B2B contracts. Which is why Tesla is pushing the "car narrative" less and less over time, seeming to fade into B2C irrelevancy — all their marketing and sales is gradually pivoting to B2B outreach.
> The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries
This seems like revisionist history. They called their company Tesla Motors, not Tesla Energy, after all.
This is a blog post from the founder and CEO about their first energy play. It seems clear that their first energy product was an unintended byproduct of the Roadster, they worried about it being a distraction from their core car business, but they decided to go ahead with it because they saw it as a way to strengthen their car business.
https://web.archive.org/web/20090814225814/http://www.teslam...
Are we still doing this in 2025?
Uber is not a taxi company it’s a transportation company! Just wait until they roll out buses!
Juicero is not a fruit squeezing company it’s an end to end technology powered nourishment platform!
And so on. Save it for the VC PowerPoints.
Tesla is a car company. Maybe some day it’ll be defined by some other lines of business too. Maybe one day they’ll even surpass Yamaha.
Like if a company comes out with a new transportation technology and calls it "teleportation", but in fact is just a glorified trebuchet, they shouldn't be allowed to use a generic term with a well-understood meaning fraudulently. But no, they'll just call it "Teleportation™" with a patented definition of their glorified trebuchet, and apparently that's fine and dandy.
I am still bitter about the hoverboard.
I know because I bought it in March 2019 on a Model 3. (I got it because I thought it would help my elderly parents who mostly used the car.)
7500 euros completely down the drain. It still can’t even read highway speed signs. A five-year-old would be a safer driver than Tesla’s joke FSD.
They do have the audacity to send me NPS surveys on the car’s “Teslaversary.” Maybe they could guess by now that it’s a big fat zero.
However, I’m not sure that’s necessary. They lost the Tesla Roof class action suit, so it’s clearly possible to sue them.
Am I the only one who noticed most of the targets are in nominal dollars, not inflation-adjusted? Trump’s already prosecuting Fed leadership because they’re refusing to print money for him. Elon’s worked with him enough to understand where our monetary policy is headed.
My memory was more that you'd be able to get into (the driver's seat of) your Tesla in downtown Los Angeles, tell it you want to go to the Paris hotel in Vegas, and expect generally not to have to do anything to get there. But not guaranteed nothing.
Full Speed USB is 12 Mbps; nobody wants a Full Speed USB data transfer.
Full Self Driving requires supervision. Clearly, even Tesla understands the implication of their name, or they wouldn't have renamed it Full Self Driving Supervised... They should probably have been calling it Supervised Self Driving since the beginning.
I get that there are many who rush to defend Musk/Tesla. I'm not one of them.
I was just caught off guard by the headline. To me, changing “Full Self-Driving” to “Full Self-Driving (Supervised)” doesn't merit the headline "Tesla changes meaning of ‘Full Self-Driving’, gives up on promise of autonomy".
Again, to me "Full Self-Driving" never meant you would retro-fit your Tesla to remove the steering wheel, nor even set it for someplace and go to sleep. To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
As others have pointed out, Tesla/Musk sometimes claimed more than that, but the vast majority of their statements re: FSD hew closer to what I said above. At least I think so -- no one yet has posted something where claims of more than the above are explicit and in the majority.
Autopilot in a plane generally maintains heading and altitude. It can do that with or without a pilot in the cockpit, and you hear about incidents from time to time where the pilot is incapacitated and the autopilot keeps the heading and altitude until the fuel runs out. Keeping heading and altitude is insufficient to operate a plane, of course. Tesla's choice of the word Autopilot was also problematic, because the larger market of drivers doesn't necessarily understand the limitations of aviation autopilot, and many people thought the system was more capable than it actually is. An aviation-style autopilot wouldn't be much help on the road: maintaining heading that way isn't actually useful when roads are not completely straight, and maintaining speed is sometimes useful, but that's been called cruise control for decades. (Some flight automation systems can do waypoints, and autoland is a thing, but AFAIK it's not all put together so you can punch the whole thing in at once and chill; nor would that be a good idea.)
> To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
I mean, that's sort of what the product is, although there's real safety concerns about ability for humans to context switch and intervene properly. I see how that's supervised self-driving, but not how it's full self-driving.
If I paid 90% of your invoice and said paid in full, that doesn't make it paid in full.
https://youtu.be/B4rdISpXigM
To be clear, this is obviously a reframing from the implications Musk has made. But I still don't see adding "supervised" to the description as that big a shift for most of the use cases that have been presented in the past.
https://www.reuters.com/technology/tesla-video-promoting-sel...
It's pathetic. The Austin Robotaxi demo had a "safety monitor" in the front passenger seat, with an emergency stop button. But there were failures where the safety driver had to stop the vehicle, get out, walk around the car, get into the drivers's seat, and drive manually. So now the "safety monitor" sits in the driver's seat.[1] It's just Uber now.
Do you have to tip the "safety monitor"?
And for this, Musk wants the biggest pay package in history?
[1] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...
"In a visible sign of its shifting posture from daredevil innovation to cautious compliance, Tesla this week relocated its robotaxi safety monitors, employees who supervise the autonomous software’s performance and can take over the vehicle’s operation at any moment, from the passenger seat to the driver’s seat."
And this one in Electrek.[2]
The state of Texas recently enacted regulations for self-driving cars that require more reporting. Tesla's Robotaxi with a driver is not a self-driving car under Texas law.
Musk claims Tesla will remove the safety driver by the end of the year. Is there a prediction market on that?
[1] https://gizmodo.com/tesla-robotaxi-2000653821
[2] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...
[1] https://seekingalpha.com/article/4818639-tesla-robotaxi-ambi...
Other people, most importantly your local driving laws, use driving as a technical term to refer to tasks done by the entity that's ultimately responsible for the safety of the entire system. The human remains the driver in this definition, even if they've engaged FSD. They are not in a Waymo. If you're interested in specific technical verbiage, you should look at SAE J3016 (the infamous "levels" standard), which many vehicle codes incorporate.
One of the critical differences between your informal definition and the technical one is whether you can stop paying attention to the road and remain safe. With your definition, it's possible to have a system where you're not "driving", but you still have a responsibility to react instantaneously to dangerous road events after hours of inaction. Very few humans can reliably do that. It's not a great way to communicate the responsibilities people have in a safety-critical task they perform every day.
I don't understand why that is. They literally do nothing. The car drives itself. Parks itself. Does everything itself. The fact you have to engage with the wheel every now and then is because of regulation, not because the tech isn't there, imo. Really, to me there is zero difference between the Waymo and Tesla experience, save for regulatory decisions that prevent the Tesla from being truly hands-free, eyes-shut.
Tesla has chosen to not (yet) assume that liability; it leaves the liability with the driver and requires a driver in the driver's seat. But someone in the driver's seat can accidentally override the steering wheel and cause a collision, so they will likely require the driver's seat to be empty before assuming liability (or disable all controls, which is only possible on a steer-by-wire vehicle, and the only such vehicle in the world is the Cybertruck).
Tesla has not asked for regulatory approval for level 4 or 5. When they do, it'll be interesting to see how governments react.
Still, my point is that all of this has nothing to do with the tech. It is all regulatory/legal box-checking.
Because being a passenger in a driverless vehicle is a much better user experience than being a driver. You can be on a zoom call, sleep, watch a movie or TV show or scroll TikTok, get some work done on your computer, wear a VR headset and be in a different world, etc etc. Tesla would make a lot more money, and could charge a lot more for FSD.
They aren't doing that yet because they aren't ready yet. It's why they still have humans in the robotaxi service.
I have no doubt they will do it, probably next year. The latest version of FSD on the new cars is very, very impressive.
In 2016 Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...
It was a lie then and remains a lie now.
Sure. Does Tesla take responsibility and accept liability for the vehicle's driving?
It was his idea, his decision to build the architecture, and he led the entire vision team during all of this.
Yet he remains free from any of this fallout and is still widely considered an ML god.
https://youtu.be/3SypMvnQT_s?si=FDmyA6amWnDpMPEj
Tesla’s autonomous driving claims might be coming to an end [video]
https://news.ycombinator.com/item?id=45133607
Tesla is kind of a joke in the FSD community these days. People who have worked on this problem a lot longer than Musk's folks have been saying for years that Tesla's approach fundamentally ignores decades of research on the topic. Sounds like Tesla finally got the memo. I mostly feel sorry for their engineers (both the ones who bought the hype and thought they'd discover the secret sauce that a quarter-century-plus of full-time academic research couldn't find, and the old salts who knew this was doomed but soldiered on anyway)... but only so sorry, since I'm sure the checks kept clearing.
It's two steps from selling snake-oil, basically. Not that L4 or L5 are impossible, but people who knew the problem domain looked at how they were approaching it hardware-wise and went "... uhuh."
They literally did this with Summon. "Have your car come to you while dealing with a fussy child" - and buried much further down the page in light grey, "pay full attention to the vehicle at all times" (you know, other than to your "fussy child").
The result is that many drivers appear unaware of the benefits of defensive driving. Take all of that into account, and safe 'full self driving' may be tricky to achieve.
In the long run some of those promises might materialise. But who cares! Portfolio managers and retail investors want some juicy returns - share price volatility is welcomed.
Electric car + active battery management were what I cared about at the time of purchase. Also, I am biased against GM and Ford due to experiences with their cars in the 80s and 90s.
I doubt I'm the only one.
(In retrospect, the glass roof was not practical in Canada and I will look elsewhere in the future)
Besides, it's hot in summer and cold in winter. I just see no benefit; it's another made-for-California feature.
Is life absurd?
Is hope a solution to absurdity?
Like with this. No, Tesla hasn't communicated any such thing. Everyone knows FSD is late. But Robotaxi shows it is very meaningfully progressing towards true autonomy. For example, it crushed the competition (not literally) in a recent, very high-effort test of avoiding crashes on a highway with obstacles that were hard to read for almost all the other systems: https://www.youtube.com/watch?v=0xumyEf-WRI
What? They literally just moved the in-car supervisor from the passenger seat to the driver's seat. That's not a vote of confidence.
And I don't think you can glean anything. There are fewer than 20 Robotaxis in Austin, and they spend their time giving rides to influencers so they can make YT videos where even they have scary moments.
At best he is skilled at sales and marketing --- maybe even management. At worst, he is a con artist.
The real problem for Musk and others like him is that while it is certainly possible to fool some of the people some of the time, most will *eventually* come to realize the lack of credibility and stop accepting the BS.
Musk has firmly established a pattern of over promising and under delivering. DOGE and FSD are just two examples --- and there is more of the same in his pipeline.
You have been voted down, but this is proven. He has lied about his education. He never even enrolled at Stanford, and his undergraduate degree was basically a general-studies business degree.
Musk has lied, time and time again, about his education. He has never worked as an engineer. People have commented that he barely understands how to run simple Python scripts.
Tesla is pivoting messaging toward what the car can do today. You can believe that FSD will deliver L4 autonomy to owners or not -- I'm not wading into that -- but this updated web site copy does not change the promises they've made prior owners, and Tesla has not walked back those promises.
The most obvious tell of this is the unsupervised program in operation right now in Austin.
As an aside, it's wild how different the perspective is between the masses and the people who experience the bleeding edge here. "The future is here, it's just not evenly distributed," indeed.
Lol it has been strategic manipulation right the way through. Right out of an Industrial Organisation textbook.
> Tesla has changed the meaning of “Full Self-Driving”, also known as “FSD”, to give up on its original promise of delivering unsupervised autonomy.
They have not given up on unsupervised autonomy. They are operating unsupervised autonomy in Austin TX as I type this!
Setting aside calling a driver in the driver's seat "unsupervised"... that's exactly the point. People paid for this, and they are revoking their promise of delivering it, refocusing instead on (attempting to) operate it themselves.
I'd have no objection to this if they offered buy-backs on the vehicles in the field, but that seems unlikely.
Or are people upset about the current state of autonomous vehicles like Waymo (which has been working for Years!) and the limited launch of Robotaxi?
At any rate, I don't think they are revoking their prior promises. I expect them to deliver L4 autonomy to owners as previously promised. That said, I'm glad they've stopped making that promise to new customers and are focusing on what the car does today, given how wrong their timelines have been. I agree it's shitty if they don't deliver, and that they should offer buybacks if they find themselves in that position.
Nope, they gave up on that and moved them to the driver's seat.
FWIW, Tesla disputes this claim: https://x.com/robotaxi/status/1963436732575072723
That's not a fact, it's a conclusion drawn from all the other facts in the article.
Did you find the facts that support this conclusion to be false?