Driver takes video of another driver sleeping behind the wheel of a self-driving Tesla

Why is the video for this the egg by hurstgog or however you spell it?

See that’ll be a problem for me. When I see things like this while driving I usually become all like:


Bet you were Grateful not to be Dead.


People fall asleep at the wheel all the time. If you’re going to do it anyway, this is a much better option.


The end of that footage should have included a shot of the license plate.

I’m surprised that there’s little/no mention of the “driver taking a video” as way more dangerous than sleeping in a Tesla. The Tesla can take care of itself, there’s more danger in the driver waking up and freaking out than staying asleep.

Have you used one of these systems yourself? Or is your conclusion based on the power of pure reason?

Same question to you.

This reminds me to mention, it’s not like people don’t fall asleep behind the wheel of cars without self-driving systems. The difference is, those people tend to die.

On the one hand, I thought that was the point of having a self-driving car. So you could read, sleep, game, and/or masturbate with impunity.

On the other hand, is the tech ready for no driver?

On a third hand, if I was going to die in a car crash, I would rather be asleep.

That’s a fun little story idea.

Nah, just kill them and then send them on their way on a trip across the country.

Is it possible he was in a Tesla as well? Otherwise, yeah, it’s pretty distracted driving.

From the CDC:

The National Highway Traffic Safety Administration estimates that drowsy driving was responsible for 72,000 crashes, 44,000 injuries, and 800 deaths in 2013. However, these numbers are underestimated, and up to 6,000 fatal crashes each year may be caused by drowsy drivers.

Assuming these folks didn’t end up in an accident, the driver automation, in this case, may well have prevented a serious accident, and possibly death of the driver, passenger, or others in the process.

Given that the car may have been capable of lane changes on its own (depending on settings), should the car have pulled off the road and stopped? Perhaps. Does more need to be done to detect and react to drowsy drivers in general, autonomy or no? Absolutely.

But there’s a non-zero chance the vehicle autonomy, in this case, may well have prevented injury or death.


There’s actually decades of research done on driver attentiveness and safety - these systems fly in the face of all the conclusions of that research and make a number of existing problems much worse.

One can also look at research done around autopilot systems in airplanes, which will dump control back on human pilots in extreme situations - there’s a move towards keeping control in the hands of the autopilot in such situations, even if it generally doesn’t fly as well as a human, just because the human pilots tend to become confused when suddenly handed back control and have a much higher likelihood of crashing in those situations (compared to having been in control the whole time).

So… yeah, it’s a well known problem that’s been thoroughly researched, and you can look it up.

Except that we know that, in general, safety features can increase reckless behavior, and specifically with Teslas, that the autopilot causes people to do things they otherwise wouldn’t (including deliberately taking a nap). We also know that the less demanding driving is, the more likely drivers are to “zone out” or fall asleep. So it’s highly likely they were only asleep because they were in a car with active “autopilot.”

(Given the number of publicized accidents involving Tesla drivers ignoring the road while on autopilot, it would be interesting to see accident rates. I’d be willing to bet they’re actually worse than those of other modern vehicles with comparable safety features but no autopilot.)


As soon as autonomous cars started being a possibility, it occurred to me that eventually there would be cars driving around with dead passengers, and this would become a whole genre of ghost story.


I disagree with that assessment, speaking as someone who has done both stupidly early commutes and 26-hour road trips that I have attempted in one go, sometimes successfully and sometimes not. 72,000 crashes attributed to drowsiness while driving strongly suggests to me that this was a driver who was drowsy when they got behind the wheel in the first place. That seems a lot more likely than a driver who says, "Hrm, well, I'm sleepy, so let me take a nap on the way to work" as a deliberate act.

And yes, in my case, I had an amazing warning system in my vehicle. My partner, who cannot drive herself but had no problem saying “Ok, your head has nodded every 10 seconds for the last few minutes, and you’re drifting in your lane. We are stopping at the next hotel”.


I look forward to it.


Probably, but drowsiness doesn’t inevitably lead to being asleep - there are things that can be done to make it less likely, and things that make it more likely. My point here is that by being behind the wheel of a car with “autopilot” engaged, they radically increased their chances that they would end up asleep, even if it wasn’t a deliberate choice to take a nap. (Though they did make a series of deliberate choices that made it more likely.)

As a culture, we’re weirdly relaxed about driving while tired (because driving is so necessary and ubiquitous, and being tired so unavoidable) that we ignore how incredibly dangerous it is, and just accept that people are driving around so impaired they might as well be shit-faced drunk. As a result, people also tend to ignore best practices for keeping tiredness from turning into drowsiness, since those practices are inconvenient or uncomfortable. Semi-autonomous systems make it much easier to ignore best practices.


So I take it your answer to the first part of my question (which, let’s recall, was “have you used one of these systems yourself”) is “no”. For what my anecdote is worth, I’ve driven one quite a few thousand miles and my own experience of the system is exactly the opposite of what you speculate will be the case. Anecdote is famously not data, of course. But speculation is even more not data.

(It’s also not just me who reports increased attentiveness and decreased fatigue while using autosteer + TACC. It’s a common theme among Tesla drivers. No, this isn’t a controlled study, but again, it beats the Power Of Pure Reason as a basis for forming an opinion.)

Tesla publishes safety figures. As someone once told me, you can look it up.

I’m not particularly impressed by the “zomg Tesla driver asleep at wheel” stories because at this point it’s quite clear that some corollary of the “if it bleeds, it leads” principle is at work, where auto accidents and incidents are worthy of headlines… but only if the auto is a Tesla. Selection bias makes for good click rates but bad science.

Even if your speculation is true, you focus on the hand that (so you say) taketh away to the exclusion of the hand that giveth. The latter is that the failure mode of a car with Autopilot engaged that isn’t receiving driver input is to put on the hazard lights and come to a stop while avoiding obstacles. The failure mode of a car with no guidance systems that isn’t receiving driver input is often more fatal, for the driver and for bystanders. See, for example, the Grateful Dead rollover story upthread.

Agreed. And in fact there are several companies doing just that. Tesla definitely needs to get on board:

Also completely agree, as I am guilty of this very way of thinking myself (I wanted to limit how long my pets were in carriers in our 26 hour roadtrip, for example). Autonomy should make it more difficult to let things reach the point of dangerous behaviour, without question, and I don’t think that’s been enough of the focus to date.


Guess what? Tesla does use a system based on steering wheel input, as I’ve italicized in your Wikipedia excerpt above. Have done since they introduced the feature, though they keep tweaking sensitivity (generally to increase it).

That’s not what I was referring to at all. AFAIAA, Teslas do not currently detect “drowsy” driving at all, just lack of input (i.e., asleep state). Many of the systems linked in that post specifically detect driving patterns (weaving, or droopy eyelids / excessive blinking) that would suggest a driver needs rest.


Highway driving especially gets monotonous, making it easy to fall asleep. My ex-wife didn’t like me using cruise control for this reason. I would imagine that with auto-assist, the lack of need for input would make falling asleep even more likely.

Here is a tip I have for drowsy drivers: get something to eat or suck on, like gobstoppers or mints or something that will last a while. Maybe nuts. At least in my experience, if you are chewing or sucking on candy, it seems your body has a switch that says, “Don’t fall asleep, we are eating, and we don’t want to choke.”

This is just a personal example of something that perks me up when driving; I am sure it is not foolproof. YMMV. Please don’t hurt yourselves.

And here’s to the future where cars are smart enough that we can nap.


I see, thanks. Yes, although Tesla doesn’t disclose what their algorithm is, there’s no reason to believe it’s any more sophisticated than “there was some input”, although I believe it has to be purposeful input, to thwart the old “hang a weight on one side of the steering wheel” ploy. Agreed that it would be desirable for the algorithm to be fancier than that, as long as they can do it without throwing too many false positives, which can have negative consequences that are not immediately obvious.
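To make the “purposeful input” idea concrete, here’s a toy sketch of why a weight hung on the wheel can be told apart from a human hand. Tesla’s actual algorithm is not public, so the approach (variance of steering torque) and all thresholds here are invented for illustration only:

```python
# Toy illustration: a static weight applies nearly constant torque,
# while a human hand produces small, varying corrections.
# The variance threshold is made up; real systems are undocumented.
from statistics import pstdev

def looks_purposeful(torque_samples, var_threshold=0.05):
    """True if steering torque varies enough to suggest a human hand."""
    return pstdev(torque_samples) > var_threshold

weight_on_wheel = [0.30, 0.30, 0.31, 0.30, 0.30]   # near-constant torque
human_hand = [0.10, 0.25, -0.05, 0.18, 0.02]       # fidgety corrections

# looks_purposeful(weight_on_wheel) -> False (fails the check)
# looks_purposeful(human_hand) -> True
```

A real implementation would of course face the false-positive problem mentioned above: a relaxed driver resting a steady hand on the wheel looks a lot like a weight.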

Since I brought up the consequences of false positives, consider: what do you do about it when you decide the driver is too drowsy?

You probably go through a series of alerts with the penultimate step being sirens and flashy lights to wake them up. (The final step is, if they don’t respond to sirens and flashy lights, you declare an emergency and do the best you can to bring the vehicle safely to a stop.)

Once they’ve responded to your alert, what do you do then? At that point maybe you can turn off the automation. Now you have a driver you believe is drowsy, who you’re forcing to manually guide the vehicle, instead of having your not-drowsy automation do it. Maybe the driver has the opportunity and good sense to pull off to the side of the road and catch a nap. Maybe they can’t or don’t.

If your drowsiness detector is reliable, that’s about the best you can do, so you live with it. If it’s not reliable, and you’re making non-impaired drivers put up with sirens, flashy lights, and being put into “autopilot jail” because of a deficiency in your system, then you have a bunch of annoyed customers who are driving their cars manually instead of with benefit of assistance from your automation. Which, let us recall, is presumed to make driving safer when used correctly, so if you’re disabling it unnecessarily you’re making them less, not more, safe.
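The escalation ladder described above can be sketched as a small state machine. To be clear, the stage names, strike count, and reset behavior here are all hypothetical; this is just a model of the logic in the preceding paragraphs, not Tesla’s actual implementation:

```python
# Hypothetical sketch of the alert-escalation logic described above.
# Stage names and thresholds are invented; the real algorithm is not public.

STAGES = ["visual_nag", "chime", "siren_and_lights", "emergency_stop"]

class DrowsinessEscalator:
    def __init__(self):
        self.stage = 0    # index into STAGES
        self.strikes = 0  # times the driver escalated to siren level

    def tick(self, driver_responded):
        """Advance one alert cycle; return the action the car should take."""
        if driver_responded:
            # Driver acknowledged: reset the ladder. If they reached the
            # siren level several times, lock them out ("autopilot jail").
            locked_out = self.strikes >= 3
            self.stage = 0
            return "disable_automation" if locked_out else "resume_monitoring"
        if self.stage == len(STAGES) - 1:
            # No response even to sirens and lights: declare an emergency
            # and bring the vehicle safely to a stop.
            return "emergency_stop"
        self.stage += 1
        if STAGES[self.stage] == "siren_and_lights":
            self.strikes += 1
        return STAGES[self.stage]
```

Note the tradeoff baked into the `driver_responded` branch: the same reset that spares alert drivers from nuisance lockouts is what lets a genuinely drowsy driver keep riding the ladder until the strike count catches them.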

Almost all the foregoing, as it turns out, describes the existing Tesla system, with the only caveat being the algorithm they use to determine attentiveness.

Edited to add: come to think of it, you can even argue that Tesla does provide some measure of drowsiness detection, in that if it has to escalate to the siren-and-lights level several times in a row, it does put the driver in “autopilot jail”. If you accept that “driver didn’t notice the lower-intensity alerts” is a proxy for “driver isn’t alert” then it follows that this qualifies as the kind of system we’re talking about. Of course, nothing is so good that it can’t be made better.

You would imagine that, yes. As I’ve mentioned upthread, my experience in four+ years of using such a system is exactly contrary to what intuition suggests, however. I can only speculate as to the reasons, but my guess is that because the car is handling most of the lowest-level fine-motor work – the most monotonous part of driving, in fact – I get less fatigued and am freed to spend more of my attention on overall supervision. It certainly seems to pan out that way. This is why I’ve asked several people “have you used it?” It’s easy to form an opinion about a system one hasn’t used, based on simply reasoning about it. Often, reality matches our reasoning. Sometimes it doesn’t, though. That’s why we need data before drawing conclusions.