That’s one minute of Yakety Sax time, which corresponds to three minutes of human time.
Shirley, there’s a Yakkity Shanai?
I don’t have the situational awareness to drive in an environment like that.
Texting and driving and India don’t mix.
Drivers in Mumbai must pay attention on crowded streets filled with scooters, taxis, and pedestrians.
Exactly. Just because you’re in a big self-powered wheeled metal box capable of travelling fifty times faster in the milieu it was designed for doesn’t afford you any special privileges in one it wasn’t.
I have seen worse (in Cambodia, for example).
Anyway, show me an autonomous vehicle that can navigate through that, and I’ll be impressed!
(I’d be impressed even by driving on the highways, but don’t tell that to our future self-driving overlords, or they may become quite smug.)
Looks familiar. (This is Delhi)
Current self-driving vehicles are good at spotting pedestrians and other moving obstacles, especially when travelling relatively slowly, as this vehicle is doing. I think it would actually do pretty well at not running anyone over. I suspect it would travel more slowly and wouldn’t be as nimble as the surrounding human drivers, though.
Wouldn’t it get stuck or confused if road signals were less clear?
Like missing or washed out. I mean both vertical signs and horizontal road markings (I don’t know if those are the right terms, though).
The Google autonomous cars in Mountain View don’t really need visual street signs. They know where to stop based on GPS and other geo-location data. So when they approach an intersection, they start watching their radar really closely. I have a friend who rode in one during the summer. It saw a little kid on a Razor scooter through a hedge. The hedge was completely opaque to visible light. The car stopped before the kid even entered its line of sight; the kid then turned the corner into view and crossed the street before reaching the nearby intersection. Those cars drive like old grannies with psychic powers. They can see far beyond human vision, and they’re programmed to be so cautious as to be infuriating, at least if you were actually paying attention to their driving instead of playing cribbage in the back seat with a friend.
Would it be so hard to mount the camera a little higher? Worst letterboxed movie ever.
I guess they integrate GPS and map data with maybe inertial navigation and environmental clues to pinpoint the precise position.
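The kind of fusion guessed at above is often done with a Kalman filter: a dead-reckoned (inertial/odometry) prediction is blended with a noisy GPS fix, weighted by relative confidence. Here is a minimal one-dimensional sketch; all the function names, variances, and measurement values are made up for illustration, not taken from any real system.

```python
# Minimal 1-D Kalman filter sketch: fuse a dead-reckoning (inertial/odometry)
# prediction with a noisy GPS fix to estimate position along a road.
# All numbers here are illustrative, not real sensor specifications.

def kalman_step(pos, var, odometry_delta, odometry_var, gps_pos, gps_var):
    # Predict: advance the estimate by the measured motion; the motion
    # sensor's noise adds to our uncertainty.
    pos += odometry_delta
    var += odometry_var

    # Update: blend in the GPS fix, weighted by how uncertain we are
    # relative to the GPS. High gain means "trust the GPS more".
    gain = var / (var + gps_var)
    pos += gain * (gps_pos - pos)
    var *= (1.0 - gain)
    return pos, var

# Start with a rough map-matched position (meters along the street)
# and large uncertainty; each step moves ~1 m and takes one GPS fix.
pos, var = 0.0, 25.0
for gps in [1.2, 2.9, 4.1]:  # noisy GPS fixes (hypothetical values)
    pos, var = kalman_step(pos, var, 1.0, 0.1, gps, 9.0)
print(round(pos, 2), round(var, 2))
```

The uncertainty shrinks with each fused measurement, which is the point: neither sensor alone is trusted, but the combination converges on a precise position.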
I wonder how they are programmed to deal with data inconsistency, such as wrong maps or road detours.
Besides stopping and handing control back to the human operator, that is.
I’m under the impression that Google mapped the routes these cars normally take, to the square millimeter, at high sample rates, for a few years, and that’s the data they run on. It’s very complete, but impractical.
You need to be a bit of a fatalist to drive there, really. The process isn’t so much skill as it is waiting to see what happens.
The most incredible thing I’ve seen is that in traffic jams people actually get out of their cars to discuss with one another how they should fix it…