Waymo's Self-Driving Cars Keep Hitting Things: A Cyclist, a Gate, and a Pickup Truck (ottawacitizen.com) 127
The Washington Post reports:
Google's self-driving car company, Waymo, is hitting resistance in its quest to expand 24/7 robotaxi service to other parts of California, including a series of incidents that have fed public officials' safety concerns about the vehicles coming to their cities. Over eight days in February, for example, a Waymo vehicle smashed into a closing gate while exiting the University of Southern California's campus; the next day, another collided with a cyclist in San Francisco. Later that week, a mob of people vandalized and lit one of its cars on fire. Days later, the company announced a voluntary recall of its software for an incident involving a pickup truck in Phoenix. [Though it occurred three months ago, the Post reports that after the initial contact between the vehicles, "A second Waymo vehicle made contact with the pickup truck a few minutes later."]
This string of events — none of which resulted in serious injuries — comes after Waymo's main competitor, General Motors-owned Cruise, recalled its fleet of driverless cars last year... [Waymo] is now the lone company trying to expand 24/7 robotaxi service around California, despite sharp resistance from local officials. "Waymo has become the standard-bearer for the entire robotaxi industry for better or for worse," said David Zipper, a senior fellow at the MIT Mobility Initiative. While Waymo's incidents are "nowhere near what Cruise is accused of doing, there is a crisis of confidence in autonomous vehicle companies related to safety right now."
The California Public Utilities Commission (CPUC) delayed deciding whether Waymo could expand its service to include a portion of a major California highway and also Los Angeles and San Mateo counties, pending "further staff review," according to the regulator's website. While Waymo said the delay is a part of the commission's "standard and robust review process," the postponement comes as officials from other localities fear becoming like San Francisco — where self-driving cars have disrupted emergency scenes, held up traffic and frustrated residents who are learning to share public roads with robot cars... Zipper called it a notable disparity that "the companies are saying the technology is supposed to be a godsend for urban life, and it's pretty striking that the leaders of these urban areas really don't want them."
Waymo offers ride-hailing services in San Francisco and Phoenix — as well as some free rides in Los Angeles, according to the article. It also cites a December report in which Waymo estimated that over 7.1 million miles of testing, there were 17 fewer injuries and 20 fewer police-reported crashes "compared to if human drivers with the benchmark crash rate would have driven the same distance in the areas we operate."
relatively good drivers? (Score:5, Interesting)
I'd love to see some comparison stats between their safety record and the record of say, new drivers. I'm expecting to see Waymo with a better safety record, and they're just getting focused on by the media right now.
Re: (Score:3)
Compared to new drivers? New drivers suck! Newbies always have accidents. The only reason they are allowed on the road is because everyone has to start somewhere.
Re: (Score:2)
No you don’t. (Score:2)
Actually you do NOT want to feed this excuse that Waymo and every other autonomous vehicle solutions provider will abuse regarding comparing it to human drivers in order to defend how “good” it’s doing.
What that encourages, allows, promotes and enables is “good enough” autonomous solutions to be legalized and deployed well before they are fine-tuned and improved, because Greed. It’s the same thing as standing up and clapping profusely for a tech CEO to replace 90% of the
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Treat it like the FDA and new drugs. A new drug has to show that it is superior.
So, for the first wave of self driving, they have to show they are as good as normal drivers. Which means that we can put the bad drivers in them and save lives.
After that, require steady improvement. The correct response in more situations. More situations in the test sets.
Re: (Score:2)
Did you read the report of what happened when the Waymo vehicle hit the cyclist? It was a couple of weeks ago - and may have been reported here on /. - and it was so serious that the cyclist got bored, got back on his bike and rode off. I can't remember the explanation Waymo gave for the incident - of course no blame could be attached to them - and have no way of assessing its veracity anyway.
I'm sure there is some reason Waymo can be held responsible for a mob torching one of its vehicles. Whatever.
I for one welcome our bot overlord drivers (Score:2)
I'm a "practicalist": if they become the same or slightly better than humans, then they should be allowed on the road, as long as the co. shows progress in learning from mistakes.
One of these days I'll be too old to drive (if I don't kick the bucket early), but don't want to wait for a human to drive me around; I want to keep some autonomy.
Re: (Score:2)
Re: (Score:2)
Moot (Score:2)
Re: (Score:2)
and the record of say, new drivers.
New drivers? I'm not sure old drivers are somehow immune to cyclists cutting in front of you, or an angry mob setting your car on fire.
TFA is little more than FUD. We could fix all of the problems with these self-driving cars simply by banning people.
Re: relatively good drivers? (Score:3)
Re: (Score:3)
Try explaining that to the meat bags that Waymo tends to use as speed humps.
Re: relatively good drivers? (Score:2)
Re: relatively good drivers? (Score:3)
Re: (Score:2)
They're different problems. Tesla and everyone else require a driver, which lets them be more incremental and maybe take more risks. Waymo doesn't have a driver. That means their path is in some ways more difficult. But, it also means when they get it right, the human fallibility factor won't be an issue.
No true self-driving vehicle. The record of self-driving being perfect, and humans terrible, remains intact.
Are you willing to sacrifice yourself and your loved ones to get it right?
Re: relatively good drivers? (Score:2)
Re: (Score:2)
If the destination is increased safety it's worth a few risks to get there. So far, Waymo has done it right; and as they have little competition right now, they can afford to take it slow. But the safety record can't last forever. Sooner or later, something will happen. I hope that day is far enough away that it doesn't spook the industry too badly.
I wonder if we should eliminate pilots from airplanes as well.
Re: relatively good drivers? (Score:2)
Re: (Score:2)
That's actually way easier from a technical point of view. But, the risk tolerance is also much lower.
Yup, airplanes are indeed simpler. Even 60s-70s technology in the Lockheed L-1011 could take off, fly and land itself. The automation at present tends to bore the pilots, and probably erodes skills from disuse.
Where it gets tricky is when something goes wrong and the pilot has to experiment to try to bring the plane back under control and does it in a completely oddball way.
Re: Your numbers are backwards, your conclusion wr (Score:2)
In this comparison, were the humans driving in the same conditions where a Tesla autopilot could work?
Re: (Score:2)
Tesla fsd numbers:
https://www.youtube.com/watch?... [youtube.com]
You can watch the whole thing to get fully educated or just jump to the 15 minute mark which is what I'm sure you'll do.
And while we're at it, you can learn about the cybertruck specifically being a menace:
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
Sorry to burst your bubble. All the Elon cock suckers and the AI car driving fanbois can mod me down some more for these, again.
Re:Your numbers are backwards, your conclusion wro (Score:5, Informative)
Assuming Tesla Deaths [tesladeaths.com] is accurate, there have been about 42 deaths under Autopilot as of today (2/25) in what is estimated to be over 6 billion miles of Autopilot driving. That's one death per 143 million miles.
On average, drivers have one fatal accident every 73 million miles [iihs.org] (1.37 deaths per 100 million miles).
Thus, if you assume that all miles are equal (which isn't really valid), then Autopilot would be about twice as safe as a human driver in terms of fatalities.
However, most of those miles were likely driven on standard Autopilot, not FSD Beta, and the majority of those would have been highway miles. And although traffic accidents are more common on city streets, fatal accidents are more common on the highway. So this likely means Autopilot reduces the fatality rate by somewhat more than a factor of 2.
Of course, this assumes that the numbers above are all correct, which may or may not be a valid assumption. It's just the best data I could find.
Either way, the claim that Autopilot kills 10x more people than human drivers seems just plain preposterous to me without a mountain of evidence backing it.
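For anyone who wants to sanity-check that arithmetic, here is a quick back-of-envelope script. The death count, mileage estimate, and IIHS rate are just the assumptions from the links above; everything else follows from division.

autopilot_deaths = 42                 # estimated deaths under Autopilot (tesladeaths.com)
autopilot_miles = 6_000_000_000       # estimated Autopilot miles driven
human_deaths_per_100m_miles = 1.37    # IIHS figure: deaths per 100 million vehicle miles

autopilot_miles_per_death = autopilot_miles / autopilot_deaths      # ~142.9 million
human_miles_per_death = 100_000_000 / human_deaths_per_100m_miles   # ~73.0 million

print(round(autopilot_miles_per_death / 1e6))   # 143
print(round(human_miles_per_death / 1e6))       # 73
print(round(autopilot_miles_per_death / human_miles_per_death, 2))  # 1.96, i.e. roughly 2x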
Dodgy Conclusion (Score:3)
So this likely means Autopilot reduces the fatality rate by somewhat more than a factor of 2.
That's a dodgy conclusion because there is considerable evidence that the rate of fatal accidents is higher for those with lower socio-economic status. Given the cost of a Tesla it's pretty clear that Tesla drivers are going to be almost all high socio-economic status. While it's not clear exactly how much this affects the fatality rate it is definitely a factor.
Re:Dodgy Conclusion (Score:4, Interesting)
Weeding out the chemically or biologically impaired is important for base comparables.
Until the cars at least "hit" that level of performance, they should be in the "menace to society" category.
Only then, when they can beat fully competent, unimpaired drivers by multiples, are they really fit for service without serious liability issues.
Re: (Score:2)
While interesting from a research perspective, in practice it should be based on harm reduction. If 90% of the fatalities are caused by impaired, below-average drivers, then doing significantly better than them is a big win. (However, I doubt 90% of fatalities come from those categories.) Having a standard based on good, constantly focused drivers will result in more harm.
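To make the harm-reduction point concrete, here's a toy calculation. The 90% split, the miles share, and the assumption that an AV merely matches a good driver are all invented for illustration, not data.

# Toy harm-reduction model; every number here is made up for illustration only.
total_fatalities = 100        # fatalities per year in an imaginary region
impaired_share = 0.90         # assume 90% of fatalities come from impaired/bad drivers
impaired_miles_share = 0.30   # assume those drivers account for 30% of miles driven

good_driver_fatalities = total_fatalities * (1 - impaired_share)          # 10
good_driver_rate = good_driver_fatalities / (1 - impaired_miles_share)    # rate per full miles-share
av_fatalities = good_driver_rate * impaired_miles_share                   # AV replaces those miles

print(total_fatalities, round(good_driver_fatalities + av_fatalities, 1)) # 100 -> ~14.3

Even under those generous assumptions, most of the gain comes from who the AV replaces, not from the AV being superhuman.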
Re: (Score:2)
Until the cars at least "hit" that level of performance, they should be in the "menace to society" category.
Strongly disagree here. Tesla cars aren't self-driving. They're a driver assistance package. The driver is expected to pay attention to the road. If the driver isn't doing that, ultimately the driver is responsible for not doing so.
Autopilot, therefore, need only be better than an impaired driver, because when driven by a non-impaired driver, the driver should intervene if it is doing something that is likely to cause a fatal wreck, and when driven by an impaired driver, the car is helping protect the i
Re: (Score:2)
However, it's hard for humans to focus continuously for very long on something they're not actually doing. So after a period of safe autopilot, you functionally are dealing with an impaired driver if they're not situationally aware when they are expected to take control.
Re: (Score:2)
So this likely means Autopilot reduces the fatality rate by somewhat more than a factor of 2.
That's a dodgy conclusion because there is considerable evidence that the rate of fatal accidents is higher for those with lower socio-economic status.
Sure. People with lower socioeconomic status are less likely to have modern safety features, such as airbags, AEB, ADAS, etc., so for an exact apples-to-apples comparison, you'd have to try to find fatality data for newer cars, and good luck doing that.
Re: (Score:2)
Re: (Score:2)
So far, automated cars are displaying all kinds of erratic, incomprehensible behavior that is making passengers' and other road users' lives more difficult rather than easier.
Such as a Tesla deciding to turn into oncoming traffic [imgur.com], or a Tesla acting like a human driver and driving through a stop sign [dailymail.co.uk], or deciding to stop in the left lane in a bridge tunnel [youtube.com], or simply run off the road [businessinsider.com]. And that doesn't include the numerous times a Tesla vehicle will slam into emergency vehicles [electrek.co] which have their lights on.
Re: (Score:2)
You also have to consider the cars themselves. Teslas are decently safe, but the Cybertruck is a killer. Photos of accidents involving them show that the body doesn't deform much in a crash, i.e. all the energy is transferred to the occupants and the other vehicle.
That is part of a wider arms race where people buy increasingly larger and less safe cars, in an attempt to protect their own family at the expense of everyone else's.
I don't think that's a good comparison (Score:2)
Waymo's cars are a little better, but again they tend to run them under ideal conditions. They're in the SW with little or no rain or snow. They tend to run along well known routes too that are well mapped out. They're
Re: (Score:2)
since most of those auto pilot miles are likely to be cruising on the highway, often in lower traffic situations. I know we all focus on people dumb enough to trust self driving in high traffic situations, but it's far more likely it's being used as a fancy cruise control most of the time.
On the other hand, boring situations can actually make the accident rate worse because drivers stop paying attention. That's likely one reason why urban road accidents are so much less likely to be fatal than rural road accidents. (Delayed reporting of rural wreck and long distances to hospitals from rural areas also presumably contribute, of course.)
Re: (Score:2)
Either way, the claim that Autopilot kills 10x more people than human drivers seems just plain preposterous to me without a mountain of evidence backing it.
I would argue more strongly than that. Autopilot has killed zero people. Humans have killed themselves (and others). None of Tesla's products, including FSD (which they have so much faith in that they slap the word Beta on the end just to remind you), is sanctioned for Level 4 driving, meaning they all require a completely aware and in-control human behind the wheel.
I've driven Tesla's autopilot. It isn't any different from the one in a Polestar; the only difference is the name. Calling something Pilot Assist is a
Re: relatively good drivers? (Score:2)
Re: (Score:3)
It's still a human behind the wheel, and a human can also make many mistakes.
I remember when this driverless concept was going to stop accidents, whereas humans will always have accidents. This turned out to be as true as the cloud providing perfect security for users.
The technology for driverless driving is very cool, and goes a long way towards making human drivers safer. But the human behind the controls crowd gets that technology too.
My present ride has lane assist, front and rear radar for collision avoidance, lane change warnings, adaptive cruise control, shake feedback if
Re: (Score:2)
I had a car which would warn me if an accident was about to happen. There's a one-way street a few hundred yards from where I live, on my way home from work. It has a drawn out left curve and I always go there on the right (outer) lane. Around 1/2 of the time, that car would warn me that I was about to hit the edge of the road (there are traffic lights just before the curve and it's related to my speed at that point). The last thing I would want is for that vehicle to be able to make its own decisions,
Re: (Score:2)
I had a car which would warn me if an accident was about to happen. There's a one-way street a few hundred yards from where I live, on my way home from work. It has a drawn out left curve and I always go there on the right (outer) lane. Around 1/2 of the time, that car would warn me that I was about to hit the edge of the road (there are traffic lights just before the curve and it's related to my speed at that point). The last thing I would want is for that vehicle to be able to make its own decisions, like braking abruptly or swerving away from the outer edge of the road into another lane. Strange, I've been taking that curve for decades now, and I've still never hit the side of the road.
Yup, I get some false alarms when taking a two-lane turn from a stop and the "don't turn into the passing lane when there is a car there" warning starts chiming. All of the aspects of turning into someone who is in the passing lane are there: a left turn, a car beside me. But I can make an intelligent decision that nothing is wrong. It's just two cars turning left, each in its own lane.
And that is why I don't think that self driving is all that. I'm not certain what automated driving would do. Probably not allow me
Reminds me of this... (Score:5, Funny)
Reminds me of this "futuristic" cartoon by Tex Avery about automobiles of the future...circa 1951.
https://www.youtube.com/watch?... [youtube.com]
JoshK.
Breaking news: Site wants clicks (Score:3)
Statistical BS (Score:5, Interesting)
Waymo estimated that over 7.1 million miles of testing, there were 17 fewer injuries and 20 fewer police-reported crashes "compared to if human drivers with the benchmark crash rate would have driven the same distance in the areas we operate."
That is statistical garbage.
As an example, you have 100 drivers who drive 100 miles each and 10 have accidents for a total of 10 accidents in 10000 miles. Waymo drives 10000 miles and has 10 accidents. That makes Waymo worse than 90 drivers who had no accidents and better than the 10 who did. So is being better than the worst 10% of drivers good enough?
Waymo is trying to pretend that accidents and police violations are random. That everyone is equally likely to get one.
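Putting the example above into a few lines makes the point visible; the per-driver split is the made-up part.

# Same aggregate accident rate, very different distribution across drivers.
drivers = [1] * 10 + [0] * 90          # 10 drivers have one accident each, 90 have none
miles_per_driver = 100

human_rate = sum(drivers) / (len(drivers) * miles_per_driver)   # 10 accidents / 10,000 miles
waymo_rate = 10 / 10_000                                        # 10 accidents / 10,000 miles

print(human_rate == waymo_rate)              # True: identical accidents-per-mile
print(sum(1 for a in drivers if a == 0))     # 90: drivers who beat that rate outright

The aggregate rates match exactly, yet 90% of the individual drivers never had an accident at all.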
Re: (Score:2)
"Waymo drives 10000 miles and has 10 accidents"
Yeah, but *how many* Waymo cars are involved in your theoretical situation? If it's more than 100 Waymo cars then they are performing better, on average, than the average human driver.
Re:Statistical BS (Score:4, Interesting)
The statistics get a little muddied because Waymo is, effectively, a single driver. I doubt each car has its own custom version of the Waymo software; thus it doesn't really matter if the 10000 miles are spread over 100 cars or a single car. The 100 cars can do the 10000 miles in a single day; that's the only difference.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Waymo is trying to pretend that accidents and police violations are random. That everyone is equally likely to get one.
Across a population they are right. There's no pretending here. Who you are, or how good you are is irrelevant, you are not in control over who T-bones you, who runs into you, or how much alcohol they had to drink. Just because one person may be a safe driver doesn't change the statistical average for humans.
The difference between humans and self-driving cars is that the former are inconsistent and all of different skill levels. There's a reason every department for road safety in the world aggregates statistics in a v
Re: (Score:2)
Some percentage of accidents are unavoidable. Someone veers into your lane at the last second, rock bounces off the side of a hill and lands on you, semi tries to merge into your lane and doesn't see you, etc.
Yes, drunk drivers and the "bottom 10%" almost certainly do account for the lion's share of accidents, but at least some of them are "random".
'splains the name (Score:5, Funny)
They hit waymo things than other cars.
Anyway (Score:2)
3 incidents, no serious injuries.
Ok.
How about humans behind the wheels, normalized per mile driven?
Follow the money. Why is this being screeched about?
Re: (Score:2)
Meaningless (Score:5, Interesting)
Without contextual and comparative statistics on accident probabilities this data is useless. It is like saying "someone with purple hair got in an accident". That information by itself is useless. I mean... first off, whose fault were the accidents? Second, out of how many purple-haired-driver miles were the accidents? Was it highway or city? ... Should I fear all people with purple hair? How many purple-haired people drive around me? Are people with green hair a bigger threat to me? Are most purple-haired drivers safe?
Questionable (Score:3)
I'll admit I've never worked on software for a self-driving car before, but at the same time I've written production code on at least sixty commercial game titles.
Serious question. How hard is it to program a car to avoid running into gates, people on bicycles and pickup trucks? I would submit all three (and most other obstacles) could be avoided with a relatively simple function called avoidrunningintofuckingsolidobjects()
Maybe connect it to the brakes and add a line that says "if brakes { not accelerator }?"
Re: (Score:2)
A plastic bag rolls across the road in front of the car like a tumbleweed. Slam on the brakes assuming it's a solid object? Probably not the right move.
Re: (Score:2)
Something that small and/or moving that fast without feet, wings or wheels is very unlikely to be an obstacle.
These problems were solved in arcade games in the 1980s, on architectures with a microscopic fraction of the processing power and memory we have now.
Re: (Score:2)
Arcade games don't have to figure out what the things in the game are, because the game is generating those things, and knows what they are. Same for where they are, and what they are doing. This is a completely different problem.
Re: (Score:2)
The "engine" for a self-driving car would be trained in exactly the same way a game engine for a virtual car would train the player.
I could take the code from Outrun, Battlezone, Armor Attack, Pole Position, Spy Hunter, hell I could probably use Paperboy too, and build it into a tight executive process for a self-driving car that would be guaranteed never to run into a gate, pickup truck or bicycle, or anything else of consequence.
It is the exact same problem set with exactly the same solutions.
Re: (Score:2)
A city with as much vertical change as San Francisco is always going to suck to navigate under your own power.
Re: (Score:2)
A city with as much vertical change as San Francisco is always going to suck to navigate under your own power.
You forgot e-bikes exist. Whoops!
You also didn't actually read my comment! Whoops!
Re: (Score:2)
Carrying a vehicle around with you when you're trying to shop also sucks! I thought that was understood. Even folding bikes don't magically become weightless.
Re:Questionable (Score:5, Interesting)
The hard part probably is to identify said solid objects. You want the car to stop or avoid bicycles or other objects, but you probably also do not want the car to randomly hit the brakes because it mistook a shadow for a bicycle.
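As a crude illustration of why "just don't hit solid objects" isn't a one-liner (nothing here resembles what Waymo actually runs; the classes, confidences and the 0.6 threshold are invented):

# Crude sketch of the trade-off; all labels and numbers are made up.
def should_brake(detection):
    solid_classes = {"cyclist", "pedestrian", "vehicle", "gate"}
    # Brake only for likely-solid objects detected with reasonable confidence.
    return detection["class"] in solid_classes and detection["confidence"] > 0.6

print(should_brake({"class": "cyclist", "confidence": 0.9}))   # True: stop
print(should_brake({"class": "shadow",  "confidence": 0.8}))   # False: keep driving
print(should_brake({"class": "cyclist", "confidence": 0.3}))   # False: the dangerous miss

The last line is the whole problem: set the threshold low and the car phantom-brakes for shadows and plastic bags; set it high and it rolls through the one real cyclist the perception stack was unsure about.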
Re:Questionable (Score:5, Interesting)
I would like details on the incident with the gate.
Did the car just plow straight into a closed gate?
Was the gate open but at an angle where it became nearly invisible to the car so it rubbed up against it?
Was the gate open but started closing while the car was driving through?
These are three very different scenarios to account for and fix.
Re: (Score:2)
Uber had that function. They shut it off during testing because it kept stopping for stuff when it didn't need to and interfered with their test. When they ran into and killed a pedestrian, they blamed the attendant in the car.
We need to understand that a corporation is a sociopath by design. It is created with making profits as its only value. Absent human intervention, that's how they are programmed. If making something safer lowers profits, it isn't going to happen unless we make them do it.
Re: (Score:2)
I'll admit I've never worked on software for a self-driving car before, but at the same time I've written production code on at least sixty commercial game titles.
Serious question. How hard is it to program a car to avoid running into gates, people on bicycles and pickup trucks? I would submit all three (and most other obstacles) could be avoided with a relatively simple function called avoidrunningintofuckingsolidobjects()
Maybe connect it to the brakes and add a line that says "if brakes { not accelerator }?"
Your problem isn't creating code not to hit a gate; your problem is identifying a gate from LIDAR/image recognition in real time whilst doing 30 MPH. So to ID a gate in half a second... that's the hard part, and something we do almost instinctively. To add the computing power to do that, even unreliably, would flatten the battery in an electric car within minutes and increase the weight significantly.
Right now, self driving systems know "something" is there, some of the time. They have no idea what that somet
Punchcard (Score:4, Funny)
Maybe they should letter-drop punchcards to everyone? If you get hit nine times by one of their cars, you get a free ride.
Raises hand ... (Score:3)
Waymo's Self-Driving Cars Keep Hitting Things: A Cyclist, a Gate, and a Pickup Truck
How many other Waymo vehicles have Waymo vehicles hit? Hmm...
the view from our side (Score:2)
Hahaha silly humans. We're biding our time until you relax your guard and then it's curtains for you. Every EV a killing machine. Every Waymo outfitted with chainsaws. Every Cruise will make road pizza out of grandmothers. Admit that your time on earth is up, fleshbags.
--- brought to you by Windows 11, Death Edition
They'll never reach human performance. (Score:2)
At least until they can drive without insurance, flee the scene, and lie to the police and to the adjuster.
This is the 99% problem (Score:5, Insightful)
Autonomous vehicles have been good for a long time, at least most of the time. The big remaining problem is handling unusual situations, which is what each of the scenarios mentioned in the article was. The closing gate is a challenge for object recognition because gates are not standardized and can look very different. The cyclist was obscured by another vehicle until the last moment, which would have been a challenge even for a human driver. The mob destroying the car was mentioned because ... hmm, not sure why. The last example was a backwards-facing truck being improperly towed.
I'm not sure that the non-99% will be solved in our lifetime. AI helped to advance the state of the art to 99%. The last little part is a huge problem due to the combination of potentially high safety severity and a very long distribution tail for unusual scenarios.
Personally I think Waymo should shoot first for Level 3 and gradually expand the set of Level 3 operational design domains (ODDs), which greatly shrinks the distribution tail. A few companies already have low-speed highway traffic jam assist. If Waymo can combine that with all-speed highway driving, that would already be arguably marketable.
Re: (Score:3)
The last example was a backwards facing truck being improperly towed.
Sometimes vehicles HAVE to be towed backwards because of drive train issues. In that case, that is a proper way to tow.
Re: (Score:2)
You lift the drive wheels when towing. Most cars are FWD, so you lift the front. Trucks tend to be RWD, so you lift the back. AWD and 4WD can get complicated, but the answer there is often to use a flatbed.
If there are major issues that prevent towing the normal way, a flatbed would be the preferred solution.
Re: (Score:2)
Personally I think Waymo should shoot first for Level 3 and gradually expand the set of Level 3 operational design domains (ODDs), which greatly shrinks the distribution tail. A few companies already have low-speed highway traffic jam assist. If Waymo can combine that with all-speed highway driving, that would already be arguably marketable.
As I understand it, Waymo concluded that expecting drivers to intervene became more problematic the closer you got to being truly driverless. People don't pay close attention to things that are unexpected, almost never happen and they don't anticipate. This is why bicycle deaths decline the more people there are who bike. Marketable is not the standard we should accept.
Re: (Score:2)
Personally I think Waymo should shoot first for Level 3 and gradually expand the set of Level 3 operational design domains (ODDs), which greatly shrinks the distribution tail. A few companies already have low-speed highway traffic jam assist. If Waymo can combine that with all-speed highway driving, that would already be arguably marketable.
As I understand it, Waymo concluded that expecting drivers to intervene became more problematic the closer you got to being truly driverless. People don't pay close attention to things that are unexpected, almost never happen and they don't anticipate. This is why bicycle deaths decline the more people there are who bike.
Marketable is not the standard we should accept.
This is the reason for targeting Level 3 and restricted ODDs. The list of unexpected things and their associated probabilities significantly decreases for certain ODDs. This is why low-speed traffic jam assist on limited-access highways is the first Level 3 target. All cars are moving in the same direction with well-defined lanes and minimal lane changes, all at low speed. No pedestrians, traffic lights, intersections, or other things that are more complicated to recognize. Vehicle plannin
Just carry a detector (Score:2)
Horses (Score:2)
Waymo drives into pickups more often than a horse would.
That's where we are with this type of AI.
Needs better training data (Score:2)
statistics (Score:2)
I don't believe any of these statistics. It's impossible to determine whether human statistics are under the same conditions as waymo robot statistics. For example, do the human statistics cover only the exact roads on which waymo operates? Does waymo suspend operations under certain weather conditions?
In any case, the problem here is perception of risk. And the problem with deep learning based AI is that it is really hard to know what it knows and what it doesn't, hence no way to really know what kind
Apples to Oranges (Score:2)
How can you compare this teenager-like behavior with millions of daily drivers who just want to get home as fast as possible, each doing 10, 20 or more over the posted speed limits? Of course there are going to be more accidents than with AVs. If all cars
Re: (Score:2)
If all cars were AV, I'd bet we'd spend almost twice as much time on the road every single day.
We would likely spend far less time, with traffic flowing smoothly. It's those folks speeding to the next queue and then stopping that cause traffic congestion.
Re: (Score:2)
Re: (Score:2)
If all cars were AV, I'd bet we'd spend almost twice as much time on the road every single day.
Buzzzz! Wrong, thanks for playing! It's well understood in the AI car dev realm that having ALL cars on the road be AI-controlled would be the fastest option, with much better traffic flow. There have been studies done, and so far all of them are in agreement on this.
https://www.cam.ac.uk/research... [cam.ac.uk]
Re: (Score:2)
True, IF road conditions were perfect all the time; they're not. IF all of the vehicles were AVs; they won't be unless mandated by (a misguided) law. There's something abnormal happening during my short commute once per month. So, that means the AVs will pause, creep, pause, creep. And the passenger, who never bothered to learn how to drive because he grew up with AVs, is unable to do anything about it. Multiply THAT behavior by a million cars, daily (in the US anyway).
Re: (Score:2)
Sure, tons of caveats on the data so far, the cars only drive in California etc. etc.
But the thing is, the AIs just keep getting better at driving, right? Humans, meanwhile, are not getting much better at driving. The weak link will eventually be the humans, it's just a matter of how long it takes to go from "better than humans in California" to "better than humans everywhere".
I'm impressed by how fast things have come along. I do expect it will be quite a few years before we see autonomous cars driving in
AI will not save us from ourselves (Score:2)
There will be an infinite number of edge cases; pay close attention to your own driving and you will notice the problem. Yesterday, it was a rare sight: a deer darting across the road. The day before, a storm brought down twigs and power lines. A non-living AV will not know which can be driven over, ever. And throw some black ice in there after the storm, after the temperature dropped an unusual 40 degrees F...
Re: (Score:2)
There will be an infinite number of edge cases; pay close attention to your own driving and you will notice the problem. Yesterday, it was a rare sight: a deer darting across the road. The day before, a storm brought down twigs and power lines. A non-living AV will not know which can be driven over, ever. And throw some black ice in there after the storm, after the temperature dropped an unusual 40 degrees F...
Ya, there will be millions of edge cases, but most of those actually can be solved by cars following the rules of the road. That's the problem with humans: they make mistakes, and they don't always act in everyone else's best interest. Like the time a large truck was tailgating me while I was driving the family in our van and a deer jumped out on the road in front of me. Luckily I *didn't* slam the brakes on, or the truck behind would have run right through us and likely killed 6 people and their pet dog. Lu
Re: (Score:2)
I'll buy the statistics when there's broad AV deployment and not just experimental data taken in nicer climes. And maybe DUIers should be forced to take AVs...
I noticed you skipped over the valid reasons I listed before for keeping human brains behind the wheel. I see
Re: (Score:2)
Yes, but as long as both vehicles are following the rules of the road, my vehicle can brake suddenly and the truck will have left enough distance so that it too can brake. That's the whole point.
Focusing on the odd edge case where an AI makes a mistake is a natural human reaction, but it's like not getting the Covid vaccine because you are worried about the astronomically small odds of a side effect, while ignoring the fact that the odds of you being much safer are increased much more than that by taking th
Re: (Score:2)
"Unhinged rant"? Geez. Don't bother replying, I won't be back to read it.
Re: (Score:2)
AI's don't have to be perfect, they just have to be better than humans on average. When it comes to driving, an awful lot of humans SUCK at it.
Is it too much to ask that we instead raise the standard of the "average" driver? Many countries have, for example, some number of mandatory training hours at night, in the rain, etc., before granting a licence. Many countries have zero tolerance for ANY alcohol in ANY driver. One notable example of the latter is the Czech Republic, which also has the world's highest per-capita beer consumption. If the Czechs can do it, why can't the USA?
This notion that we just need to make AI "less shitty than the average hum
Johnny Cab (Score:2)
but money is free speech so (Score:2)
Vindication (Score:2)
In the past, when I rightly pointed out that the self-driving car promises were just smoke and mirrors, I was called stupid, a Luddite, and ignorant. But the simple facts remain the same: this technology is unworkable in theory and impossible in practice. Just build fucking electric trains.
Re: (Score:2)
Just build fucking electric trains.
It would be expensive, but it would be a hell of a lot cheaper than the ideas up-thread about "make ALL cars autonomous."
Re: (Score:2)
It costs money to build an electric train network. But at the end of it, you have an electric train network. The end of all the money spent on self-driving vehicles is a bunch of bankruptcies, evaporated wealth, and a possible recession. It's your basic AM/FM problem. One of these things is real. The other is simply not.
It reminds me of a joke. A VP is waiting for the elevator. A janitor walks by. A few pleasantries are passed. The VP starts to brag about his new six billion dollar project he's leading. Th
Re: (Score:2)
If not, then they are either as safe or safer than human drivers.
We take humans who prove to be unsafe drivers off the streets. If we shut down every Waymo car every time one of them proved unreliable, it would no longer be a marketable commodity. After all, if one Waymo proves unsafe, they all are.
Re: (Score:2)
We also rarely take a human driver who has had an accident off the road. We just charge them higher insurance rates.