Tesla engineers focused on making "Full Self-Driving" work for Elon

I don’t think that is remotely true. Ok, I believe you don’t want his version of it, sure. However, FSD as currently envisioned by the SAE (the Society of Automotive Engineers) interoperates with human drivers on the same roads at the same time. So it largely needs to be predictable to humans who may not even know it is operating, and it needs to avoid cars driven by humans. Adding in other FSD cars that are doing their best not to hit any cars, human driven or otherwise, doesn’t make the job any harder.

The whole problem space is hard, and there is no guarantee that just because we are 50% or 80% or 93% of the way there we will ever get to the SAE’s Level 5 goals, but I don’t believe operating on the same road as both humans and other SAE L5 FSDs is any harder than driving with humans alone.

Like “the car ahead of me may see something I don’t and slam on the brakes”, or “the car ahead of me is driven by a meat bag who may hit the brakes because he sneezed, or got a bee inside the car, or some other stupid meat bag stuff”, and “the car ahead of me might see a shadow and its dumb AI model might get confused and hit the brakes” are all basically the same situation: “the car in front of me may slow unpredictably, so make sure we have enough room to also slow before hitting it, or clear space to the left or right”.
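That “enough room” reasoning is basically just stopping-distance arithmetic. A minimal sketch, assuming simple constant-deceleration physics; the reaction time, deceleration figures, and function names here are all made up for illustration, not anything from a real FSD stack:

```python
def stopping_distance(speed_mps: float, reaction_time_s: float, decel_mps2: float) -> float:
    """Distance covered while reacting plus distance to brake to a stop."""
    reaction_dist = speed_mps * reaction_time_s
    braking_dist = speed_mps ** 2 / (2.0 * decel_mps2)
    return reaction_dist + braking_dist

def required_gap(my_speed_mps: float, lead_speed_mps: float) -> float:
    """Gap needed so that, if the car ahead slows unpredictably (sneeze, bee,
    phantom braking, or a real hazard), we can still stop without hitting it.
    Worst case: assume the lead car can stop much harder than we can."""
    my_stop = stopping_distance(my_speed_mps, reaction_time_s=0.5, decel_mps2=6.0)
    lead_stop = stopping_distance(lead_speed_mps, reaction_time_s=0.0, decel_mps2=9.0)
    return max(my_stop - lead_stop, 2.0)  # never closer than a couple of metres

if __name__ == "__main__":
    mph = 60
    mps = mph * 0.44704
    print(f"At {mph} mph, keep at least {required_gap(mps, mps):.1f} m of clear road")
```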


My fear is that if different manufacturers all have different algorithms for FSD (Volvo might make its cars keep more distance from the car in front - I have an anecdote about their British chief designer’s view of their early adaptive cruise control and the M25 on a Monday morning), then one FSD car may not expect another to behave differently from what its own algorithm tells it. But you are correct that the far bigger issue is meatbag self-driving and algorithm self-driving being on the roads together. I do believe there will have to be some common standards about certain algorithms (perhaps not least so meatbag drivers can get ‘muscle memory’ familiarity with the things algorithm-controlled cars might do). But maybe instantaneous car-to-car networking and communications will enable everyone to co-exist safely?
(“I am about to change lanes - keep clear” or “I have a meatbag at the wheel - do not assume predictability” etc.)
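For illustration only, that sort of car-to-car intent broadcast might look something like this - an entirely hypothetical message format, with field names I’ve invented, not any real V2X standard:

```python
from dataclasses import dataclass
from enum import Enum
import json

class Intent(Enum):
    LANE_CHANGE_LEFT = "lane_change_left"
    LANE_CHANGE_RIGHT = "lane_change_right"
    HARD_BRAKE = "hard_brake"
    HUMAN_AT_WHEEL = "human_at_wheel"  # "do not assume predictability"

@dataclass
class V2VMessage:
    vehicle_id: str
    intent: Intent
    lane: int
    speed_mps: float

    def to_json(self) -> str:
        return json.dumps({
            "vehicle_id": self.vehicle_id,
            "intent": self.intent.value,
            "lane": self.lane,
            "speed_mps": self.speed_mps,
        })

# e.g. broadcast just before a lane change:
msg = V2VMessage("CAR-1234", Intent.LANE_CHANGE_LEFT, lane=2, speed_mps=31.0)
print(msg.to_json())
```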

I agree with this, especially. I am very sceptical about ever getting to Level 5 - I don’t see FSD at Level 5 ever being achieved in a mixed meatbag/algorithm environment without having to accept a certain level of ‘attrition’ (which, granted, may be less than the human level of accidents).


I completely agree.

I’m going to start here, maybe this is where we are most at cross purposes.

I see FSD5 having fewer accidents, causing less damage, and causing fewer deaths per million miles than human drivers as a success. Doing so by a statistically significant margin, even more so.

Like I find it hard to understand the viewpoint “if FSD kills a single person per ten million miles - or ever - it is a failure, compared to humans who kill 1.33 people per million miles”. I get that if FSD kills one person per million miles it isn’t much of an improvement, and shouldn’t be where we brush off our hands and declare victory…but that is still 0.33 lives saved per million miles, or basically one life per three million miles.

Unless having FSD increases the number of miles people drive by more than about a third (and I admit there is some possibility that FSD5 will encourage people to travel more miles by car per day/year), that is a net win in the number of lives saved.
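Just to put numbers on it (the 1.33 figure is the one quoted above; the fleet mileage is an arbitrary illustration):

```python
# Even a modest improvement in fatalities per million miles is a net saving,
# unless total miles driven grow enough to cancel it out.
human_rate = 1.33   # deaths per million miles (figure quoted in the post)
fsd_rate = 1.00     # hypothetical FSD rate, per million miles

miles_millions = 100.0  # an arbitrary amount of fleet driving, in millions of miles
saved = (human_rate - fsd_rate) * miles_millions
print(f"Lives saved over {miles_millions:.0f}M miles: {saved:.0f}")

# Break-even if FSD encourages extra driving:
# human_rate * M = fsd_rate * M * growth  =>  growth = human_rate / fsd_rate
print(f"Still a net win until mileage grows by a factor of {human_rate / fsd_rate:.2f}")
```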

How is that not good?

I absolutely accept that I may have accidentally misrepresented what you said (as in misunderstood it, then tried to talk about it, and now you can see what I misunderstood). If so I would appreciate it if you let me know what I got wrong about your position. Also acceptable is letting me know what you think is wrong with mine.

Well, we want FSD cars to operate with existing human-driven cars, and fleet turnover is about 20 years, so the cars driven a decade from now will have to put up with cars that are a decade old right now. So if FSD is going to be viable in a decade, it has to deal with a lot of human-driven cars that have no extra signaling, and the FSD cars can’t send any “extra” signaling that isn’t visible or audible to a normal human.

So “this car might move left very soon” could be indicated by a blinking light on the left side of the car, but if we put, say, an orange U-shaped light on it somewhere, we can’t really expect anyone to know what that means. We are going to need to stick to things like “red lights on both sides of the car mean it is slowing down” and “all lights blinking means something is wrong, and the car might be stopped or about to stop”.
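In other words, whatever the planner “intends”, the only outward vocabulary is the set of signals meatbags already know how to read. A toy illustration (the intent names and the mapping are mine, purely for illustration):

```python
# Whatever the car plans to do, it can only announce it with signals
# that human drivers already understand.
HUMAN_LEGIBLE_SIGNALS = {
    "merge_left": "left turn indicator",
    "merge_right": "right turn indicator",
    "slowing": "brake lights (red, both sides)",
    "disabled_or_stopping": "hazard lights (all indicators flashing)",
}

def signal_for(intent: str) -> str:
    # Anything without an established human-readable signal gets nothing:
    # an orange U-shaped lamp would mean nothing to the drivers around us.
    return HUMAN_LEGIBLE_SIGNALS.get(intent, "no signal available")

print(signal_for("merge_left"))                 # -> "left turn indicator"
print(signal_for("yielding_to_fsd_platoon"))    # -> "no signal available"
```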

FSD cars are also trained on what existing drivers do, so they don’t get training data that includes a bunch of special signaling from other cars, because even if it existed, the humans the training models are built from don’t understand any of that stuff and wouldn’t react to it.

So we have two forces keeping us away from FSD cars communicating with each other in ways that meatbag-driven cars don’t. One is that it will confuse, and kill, meatbags later, and two is that we have no way to train them now to behave significantly unlike meatbags.

I mean, it isn’t that there is no desire for anyone to work on special “no meatbag driving” systems, because you do get cool but terrifying results: if you can control entry/exit times to within 10 ms in an intersection, you can “flow together” cars going all 4 ways through an intersection without needing to stop or slow much below 60 MPH. That would save time and energy, and be wicked cool…but you aren’t going to have a human drive through that intersection. You might not even be able to get humans who have been driving for themselves for decades to go through in an FSD car without a heart attack.
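Very roughly, that kind of scheme works by having each car reserve a time window through the conflict zone. Here’s a toy sketch - the 10 ms slot size is from my comment above, and everything else (the class, method names, and the single shared conflict zone) is invented for illustration:

```python
SLOT_MS = 10  # scheduling granularity: the "within 10 ms" figure above

class IntersectionManager:
    """Toy reservation-style intersection: grant a crossing only if every
    time slot it needs through the conflict zone is still free."""

    def __init__(self):
        self.reserved = set()  # indices of reserved 10 ms slots

    def request(self, arrival_ms: int, crossing_ms: int) -> bool:
        start = arrival_ms // SLOT_MS
        end = (arrival_ms + crossing_ms) // SLOT_MS + 1
        needed = set(range(start, end))
        if needed & self.reserved:
            return False          # conflict: adjust speed and ask again
        self.reserved |= needed
        return True

mgr = IntersectionManager()
print(mgr.request(arrival_ms=1000, crossing_ms=800))  # True: window granted
print(mgr.request(arrival_ms=1500, crossing_ms=800))  # False: overlaps the first car
```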

I think because of how FSDs get trained on existing data, all cars will be assumed to be unpredictable. Which might be a bonus for FSD/FSD interactions: both will be cautious, and if one gets a flat tire the other won’t have been doing something that would normally be OK between FSDs, like drafting an inch and a half behind the other car’s bumper. So no accident just to save a little fuel (or charge).

On the other hand, maybe it will be something like a tic-tac-toe or chess game, where if you assume perfect play from an imperfect player you win slightly less often (and in slightly more turns) than if you assume they are imperfect. Maybe there is a similar thing here, where an FSD assuming all other cars are meatbags is overly cautious about them and does something that gets it into trouble, whereas if it assumed the other FSDs were much more predictable it would have better overall performance (in an important metric like accidents per million miles).
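To make that concrete, here’s a toy version of the trade-off - all the numbers and the function are invented, just to show the shape of it:

```python
# The gap we keep depends on how unpredictable we assume the lead car to be.
def chosen_gap_m(speed_mps: float, assume_predictable: bool) -> float:
    # A predictable (FSD) lead car is assumed to brake smoothly and telegraph it;
    # an unpredictable (meatbag) lead car gets the full worst-case margin.
    reaction_s = 0.3 if assume_predictable else 1.0
    margin = 1.2 if assume_predictable else 2.0
    return speed_mps * reaction_s * margin

speed = 30.0  # m/s, roughly motorway speed
print(f"Gap assuming a predictable lead car:    {chosen_gap_m(speed, True):.0f} m")
print(f"Gap assuming an unpredictable lead car: {chosen_gap_m(speed, False):.0f} m")
# Smaller gaps raise throughput and cut drag, but only pay off if the
# "predictable" assumption is actually true of the car in front.
```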

In the main I don’t disagree with much of what you say, and I accept your criticism/feedback re some of my comments. But standardisation/commonality of training/models/algorithms (whatever, etc.) is surely also a good thing if it improves predictability - both for other FSDs and for meatbag drivers.

If I’m at the wheel, drafting an inch and a half from my rear bumper - or any non-safe distance that would apply if both drivers were human (“always observe the two-second rule”) - will never “normally be ok”, even with full FSD in both cars. I appreciate you noted an extreme example to make the point. Nevertheless, my throwaway comment about Volvo programming a different ‘safe’ distance to maintain than that programmed by other manufacturers is pertinent here. There should be a mandated, standard ‘safe’ distance, for predictability (= safety) reasons. I suspect the same ought to apply to many other elements of a vehicle’s driving behaviour. That’s really what underlies my comment about “the same version” - some standardisation, not allowing a “free-market” free-for-all.

We absolutely need to avoid: “Buy Swiftie Autos - they’ll get you there faster” (because their FSD model is inherently less safe and optimised for speed not safety). Imagine a van manufacturer specialising in vans for courier deliveries going this route and the buyers saying “yeah - we’ll have some of that and who gives a fuck about the safety of our human ‘riders’ / employees?” That’s all a gross exaggeration merely to emphasise that this is currently an insufficiently regulated competitive market with no standards and little to stop individual manufacturers tweaking their models for different ‘optimisations’.

Hey-ho - I think we’ve burnt this one out.
But for entertainment, back in the 1990s I was chatting with the head of design at Volvo and we got onto emerging/future tech such as adaptive cruise control. (I think Mercedes might have gone public with its development or even started to sell it - too long ago to remember.) But his very tongue-in-cheek comment was that the average Swedish Volvo driver had never been in the outside lane of the M25 at 7.30am on a Monday morning, and the chances were that if they programmed a safe distance (for the average Swedish Volvo driver) into their adaptive cruise control, you’d end up stationary or going in reverse, given the propensity of every M25 driver in the middle lane to pull into any gap at all between two cars to get into the outside lane - the sort of gap the Volvo would maintain being a perfect opportunity for all those cowboy van / BMW drivers to do just that. (In the end, of course, their solution allows the driver to set a range of distances to maintain, and the set distance increases or decreases within its own range according to speed. I find it very effective and have it on nearly all the time.)

What happens when you rush your engineers:
