Transportation Technology

People Are Losing Faith In Self-Driving Cars Following Recent Fatal Crashes (mashable.com) 446

oldgraybeard shares a report from Mashable: A new survey (PDF) released Tuesday by the American Automobile Association found that 73 percent of American drivers are scared to ride in an autonomous vehicle. That figure is up 10 percentage points from the end of last year. The millennial demographic has been the most affected, according to the survey of more than 1,000 drivers. From that age group, 64 percent said they're too afraid to ride in an autonomous vehicle, up from 49 percent -- making it the biggest increase of any age group surveyed. "There are news articles about the trust levels in self-driving cars going down," writes oldgraybeard. "As a technical person, I have always thought the road to driverless cars would be longer than most were talking about. What are your thoughts? As an individual with eye problems, I do like the idea. But technology is not as good as some think."

The Mashable article also references a separate study from market research company Morning Consult "showing increased fear about self-driving vehicles following the deadly March crashes in the Bay Area and Arizona." Another survey from car shopping site CarGurus set to be released Wednesday found that car owners aren't quite ready to trade their conventional vehicles for self-driving ones. "Some 84 percent of the 1,873 U.S. car owners surveyed in April said they were unlikely to own a self-driving car in the next five years," reports Mashable. "79 percent of respondents said they were not excited about the new technology."
  • Amazing (Score:5, Insightful)

    by Anonymous Coward on Wednesday May 23, 2018 @06:08AM (#56657852)

    How many crashes happen every day because of humans? Yes, I know it is sad; no one wants bad things to happen. But in the long run this is going to save far more lives than it takes.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Sure, but maybe we should be more careful with deployment than Tesla and Uber have been. See Waymo for example (or I am sure there are others); I don't know of any fatal incident there. Also, studies comparing accident rates of driverless and normal cars would be useful.

      • Re: Amazing (Score:2, Troll)

        by Type44Q ( 1233630 )

        I don't know of any fatal incident there.

        And if self-driving becomes dominated by the most powerful information clearinghouse on the planet, there's a good chance you never will.

    • by sjbe ( 173966 )

      How many crashes happen every day because of humans? Yes, I know it is sad; no one wants bad things to happen. But in the long run this is going to save far more lives than it takes.

      Several problems with that argument. 1) You are assuming people are rational when they aren't. 2) People don't care much about the long run. They especially don't care when they are afraid of something (see nuclear power). 3) Your claim that it will save lives is at this point pure conjecture, albeit based on reasonable logic. We don't actually have any proof that self-driving tech does or will save lives. 4) Certain high-profile companies are pushing the technology out there in some arguably irresponsible ways...

    • The real difference is that with automated driving cars, safety will only get better with new technology and lessons learned applied over a long period of time.

      So the young adult on the road today may have only a few hundred hours of driving experience; by the time they get a lot of experience, they are at an age where their reflexes are slower.

      With a self-driving car, for every new one made, the lessons learned from past cars are copied into the software, along with newer technology to let it understand its environment.

      • by arth1 ( 260657 )

        The real difference is that with automated driving cars, safety will only get better with new technology and lessons learned applied over a long period of time.

        How is that a difference? Human-driven cars most certainly get safer too - just look at the statistics.
        Human drivers have in general gotten better too, in parts of the world through programs like mandatory slick-road driving and obstacle-avoidance courses, or through it becoming easier to lose a license.

    • How many crashes happen every day because of humans? Yes, I know it is sad; no one wants bad things to happen. But in the long run this is going to save far more lives than it takes.

      Stupid question. How about "How often do SDCs need intervention?" Humans may be poor drivers, but at least they can go for 250k miles without an accident. SDCs need active human participation every 5k miles or so.

      The average human driver (including unlicensed, drunk, tired and old) *averages* 250k miles without an accident. Call me when SDCs can go that far without having a human take over.

      • Humans may be poor drivers, but at least they can go for 250k miles without an accident. SDCs need active human participation every 5k miles or so.

        The average human driver (including unlicensed, drunk, tired and old) *averages* 250k miles without an accident.

        So, using your numbers, and assuming that human intervention would require 10 minutes of attention per instance (call it five miles' worth of attention), a human using an SDC would have an accident about once every 250,000,000 miles traveled. Sounds like a pretty good deal to me.
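
A quick sanity check of the arithmetic in the comment above, as a minimal Python sketch. All three input figures (miles per human accident, SDC miles per intervention, and human-driven miles per intervention) are the commenters' assumptions from this thread, not measured data.

```python
# Back-of-the-envelope check of the intervention arithmetic above.
# All figures are assumptions taken from this thread, not measured data.

HUMAN_MILES_PER_ACCIDENT = 250_000   # claimed average across all human drivers
SDC_MILES_PER_INTERVENTION = 5_000   # claimed rate of human takeovers
MILES_DRIVEN_PER_INTERVENTION = 5    # ~10 minutes of human attention per takeover

def total_miles_per_accident() -> float:
    """Total miles a human+SDC team covers per accident.

    The human only drives during interventions, so human-driven miles
    accumulate at 5 per 5,000 total miles, and accidents are assumed to
    occur once per 250,000 human-driven miles.
    """
    human_fraction = MILES_DRIVEN_PER_INTERVENTION / SDC_MILES_PER_INTERVENTION
    return HUMAN_MILES_PER_ACCIDENT / human_fraction

print(f"{total_miles_per_accident():,.0f} miles per accident")
# -> 250,000,000 miles per accident, the figure quoted in the comment
```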

    • by arth1 ( 260657 )

      How many crashes happen every day because of humans? Yes, I know it is sad; no one wants bad things to happen. But in the long run this is going to save far more lives than it takes.

      We humans have this organ on top of our bodies that has evolved to be able to assess risks and rewards, weigh them against each other, and make choices accordingly. We accept small risks all the time. Evolution has had a lot of time to weed out both the excessive risk takers and the risk averse.

      The risk of driving is minimal compared to the rewards. Reducing the risk is a good thing only as long as it does not reduce the rewards to a higher degree.

      And that, I believe, is a problem with autonomous cars...

      • by DarkOx ( 621550 )

        Exactly. There is a very fundamental issue with SDCs: they need a destination. My wife and I go for drives in the country in my 32-year-old Alfa Spider all the time - it's fun. It's something to do around here. You can push the car into a curve now and again and get a little thrill as you are appreciating the sprawling countryside and the mountains in the distance.

        It's going to take a pretty clever AI to be able to tell the car "just take me for a ride exploring the county roads and make it interesting here..."

  • Just more FUD (Score:4, Insightful)

    by fredgiblet ( 1063752 ) on Wednesday May 23, 2018 @06:15AM (#56657872)
    Show me the statistics, not the emotion-laden stories. I'll bet money that self-driving cars are safer now and will be even safer in the future. I'd love to have one, just can't afford it.
    • Comment removed based on user account deletion
      • Re:Just more FUD (Score:4, Insightful)

        by Gonoff ( 88518 ) on Wednesday May 23, 2018 @06:51AM (#56657996)

        The way to reduce automobile accidents is to rid the road of drunk drivers and texting drivers. When you subtract those two causes, humans are pretty good drivers.
        Maybe in 10 or 20 years your dream of self-driving cars will come true. They're just not good enough yet.

        Do you go out much? Careless, aggressive, inattentive and plain bad drivers are really big problems. I have driven in the USA, Europe, Africa and the Middle East. In the West, we have better roads, and that does encourage poor driving. Or perhaps it is the fact that we have ambulances and nice police officers to pick up the pieces (or whole bodies if needed).

        Drunks and other idiots deserve whatever they get. I am more in danger from people who tailgate, overtake on the wrong side, cut in front of people and so on. Get them off the road and we will all be safer.

        • by sphealey ( 2855 )

          Aggressive drivers and generally bad drivers tend to be excessively self-confident bro-types who will never engage a self-driving mode on any car they own anyway, so that group factors out. Additionally, programming an automated vehicle to make the decisions necessary to handle when it encounters an aggressively bad driver requires real AI - which doesn't exist - and takes the designer deep into trolley problem space.

          • The single greatest thing about self driving cars will be that the police can order the "bro-types" to use one after a driving offense.

            The future of motoring laws won't be "banned from driving for six months", it will be "forced to use autopilot for 2 years".

        • by asylumx ( 881307 )

          Careless, aggressive, inattentive and plain bad drivers are really big problems.

          I agree with this 100%. People do really stupid shit in cars. They pull out blindly into traffic, they drive 30mph or more faster than the traffic in the lane next to them, they turn where it is clearly marked they aren't allowed to, they take left turns INTO TRAFFIC just because the other lane is finally clear, etc. etc. etc. I don't even live in a big city and I see this stuff every day -- people don't treat the roads or other drivers with respect.

          • by jythie ( 914043 )
            On the problem of stupidity and lack of respect... there is a lot of discussion about how configurable self-driving cars will be, either through manufacturers offering different packages/settings or through people modding their own. Many of the behaviors that cause traffic problems and accidents are things that freedom-loving people might put right back into their car's autonomous behavior. After all, they are in a hurry!
    • Re:Just more FUD (Score:4, Insightful)

      by ledow ( 319597 ) on Wednesday May 23, 2018 @06:43AM (#56657960) Homepage

      The statistics are that there is an insignificant number of self-driving cars on the road.

      Even if you include things like Teslas, which are NOT SELF-DRIVING.

      Sadly, you won't have the statistics to compare accurately until, say, 5% of people have one of those things. Currently... what? SALES of electric cars are 1% of all new car sales. So there is an insignificant percentage of even those currently on the road.

      All Teslas ever sold, every single model, add up to about 300,000 cars. Worldwide. There are approximately 1bn vehicles in the world. That's 0.03%.

      If you go for "certified self-driving cars in private hands", the figure is so near zero that it's not even recordable. Everything is either a "prototype" from a big corporation or deliberately advertised as NOT a self-driving car.

      So... sorry... self-driving cars do not have any statistically significant data from which to draw any conclusion whatsoever. Even Teslas don't.

      As an IT guy, I fail to see why a computer would be any better than a human at such a human task. If we were talking about isolated, self-driving-only roads, no human drivers, and roads changed to prevent signage and road-marking confusion - sure. We call that a railway, though. It's very different.

      We couldn't even make a burger-flipping robot that works around humans. Robots/computers are good for one thing: the same task, over and over again, needing as little interpretation as possible, with no human interference. Anything else is a mess.

      And guess what a self-driving car on an ordinary road is.

      • by jythie ( 914043 )
        And among the self-driving cars that are on the road today, few have been on it for very long, which means no maintenance or service issues have caught up with them yet. Think about how many problems pretty much any PC one interacts with has as it ages. Go to even the crappiest budget seller and their systems are generally fine for a few months. But cars are something we hold on to for years or decades.
    • by jythie ( 914043 )
      On the other end, self driving cars have become a near religious cause to a lot of people, with unbending faith that the technology either works or will work 'real soon'. But just like fusion power, thorium reactors, and strong AI, it might end up being one of those things that people believe should work and is just around the corner but never really materializes because that last 10% of the problem is so much harder than the 90% people have already seen accomplished.
    • by grumbel ( 592662 )

      A statistic wouldn't be all that useful at this point, as we don't have any real consumer self-driving cars on the road, just experimental vehicles with safety drivers, and those vehicles are only driven in conditions that they are deemed able to handle. We don't even know how much or how little the safety driver had to intervene. Those cars could be terrible, but you still wouldn't notice, since there is a human at the wheel helping out.

      I'd be much more interested in seeing the self-driving software being...

    • Show me the statistics, not the emotion-laden stories. I'll bet money that self-driving cars are safer now and will be even safer in the future. I'd love to have one, just can't afford it.

      You'll lose that money, because we don't actually have self-driving cars now. We have cars that require human intervention every few thousand miles at best, every ten miles at worst.

    • Even the most pessimistic figures we have today would suggest that self-driving cars are safer than the average driver.

      As for why people still distrust self-driving cars, my educated guess is that it's due to the way mass media tends to push the stories that pull in the biggest audience rather than the most relevant or truthful ones. We saw this back in the 80s and 90s, when reporting about violent crime went up significantly while actual crime statistics were showing a downward trend the whole time.
  • Good (Score:5, Insightful)

    by mccalli ( 323026 ) on Wednesday May 23, 2018 @06:16AM (#56657876) Homepage
    There is way too much starry-eyed magical thinking about tech in general at the moment. AI this and machine learning that... you would think people's day-to-day interactions with their phone assistants would quickly get them to understand that things are still fledgling, but apparently not.

    I'm in favour of developing the technology. And very, very much in favour of not overhyping it to destruction.
    • by mentil ( 1748130 )

      I'd rather have Siri behind the wheel of my neighbors' cars than my neighbors. Sure, it'd just stay in Park because Siri can't drive, but then the roadways would be clear for me. Problem solved.

  • People always fear dangers that they can't control more than ones they can. This is why some people fear plane and train crashes more than car crashes, even though cars are statistically a lot more dangerous. A self-driving car will have to have a much better accident rate than human drivers do. It's easy for people to say "that's average but I am much better than average [wikipedia.org]" if they are in control.
  • by chrism238 ( 657741 ) on Wednesday May 23, 2018 @06:36AM (#56657944)
    The link is to a local file, not net-accessible....
  • The old stories referenced are of a Tesla on "Autopilot" (stupid name) and of a pedestrian stepping out into traffic and getting (sadly but unsurprisingly) run down. In both cases the human driver is clearly at fault.

    Get back to me when truly "autonomous" cars are (a) on the road and (b) killing more people than sleepy or drunk humans.

  • I imagine civilians lost a lot of faith in 'aeroplanes' after they dropped a bunch of bombs on them during WW2. After a bunch of test pilots died because parachutes hadn't been invented yet. After a bunch of barnstormers died pushing the limits of airplane controls. After an endless procession of adverse weather, mundane mechanical failures, and human errors.

    The bugs were worked out, pilot training was drastically improved, and it was figured out what was needed for safe flight. And now commercial air travel is one of the safest ways to travel.

  • The right question (Score:4, Interesting)

    by Pascal Sartoretti ( 454385 ) on Wednesday May 23, 2018 @06:58AM (#56658016)
    The right question to ask is: would you prefer to ride in a self-driven car, or with a drunken driver? And with a very tired driver?
    • by unrtst ( 777550 )

      The right question to ask is: would you prefer to ride in a self-driven car, or with a drunken driver? And with a very tired driver?

      That's called a false dichotomy, and is certainly not the right question to ask.

      • ...So how would you feel about being driven around by a blind quadriplegic? Huh?? How about that?? Autonomous cars are looking pretty good now, aren't they!
    • by mjwx ( 966435 ) on Wednesday May 23, 2018 @09:16AM (#56658644)

      The right question to ask is: would you prefer to ride in a self-driven car, or with a drunken driver? And with a very tired driver?

      I mean why stop there.

      The right question to ask is: Would you like to ride in a self-driving car on a summer's day on a controlled road with no traffic, or in a death-race-style commute with a drunken, tired Donald Trump at the wheel whilst he listens to the BBC World Service and pops Prozacs every 24 seconds?

      If you're going to load a question, bloody well load it properly.

  • Has been suspect (Score:5, Informative)

    by RobinH ( 124750 ) on Wednesday May 23, 2018 @07:10AM (#56658050) Homepage

    I do industrial automation for a living, since about 2000. There's a certain class of automation problem where getting to a 90% solution is easy, getting to 95% takes a lot of work, and getting to 97% is extremely hard. That is, 90% of the parts coming down the assembly line are easy to categorize correctly, the next 5% you can do with a lot of effort, and so on. Unfortunately that last 2 or 3% are damn near impossible due to problems with how good our sensors are, or how good our algorithms are, or how good our mechanical sorting solutions are.

    These problems are notorious for causing run-on projects that slurp up money but never end. That's because your initial effort appears to produce amazing results - 90% with almost no effort. How hard can the remaining 10% be? My first encounter with one of these problems was a barcode-reading system at an industrial facility, reading barcoded tags with a camera instead of a barcode reader. The problem was that the barcodes were becoming more worn and faded over time, and management believed that if we used a camera instead of a barcode reader we'd be able to enhance the image, etc., and get a good read, because a human looking at the picture can clearly see the bars and the human-readable text below them. This project went on for months, and then years, always creeping closer to 100% but never making that final leap, even after throwing several different engineers at the problem and bringing in outside machine-vision specialists.

    In most cases these problems come from over-estimating the capability of your sensors. A sensor with a little dirt on it suddenly gives the wrong result, or temperature fluctuations mess up the calibration, or the dreaded, "sensor seems to be giving valid values, but they're just wrong for no reason." Even if your sensor values are reliable, in many cases you'll end up with a measurement that doesn't fall clearly into the known-A or known-B range.

    That's where "AI" is supposed to save us, but my limited experience with AI shows it falls into the same class of engineering problem: you can quickly build an AI that categorizes 90% of your input correctly, and then with effort you can improve it and improve it some more, but you'll never reach that always-correct answer.

    This is where engineering projects fail, because you can always find a manager or an optimistic engineer who can hand-wave away the ambiguity and say, "humans aren't perfect either" and "we can just keep making the AI better and better." That's convenient when you don't put a physical number on it. How good can you make the AI with the available sensors? We know the sensors are in some ways better than human perception, but in other ways they're worse. In what quantitative ways are they worse, and how are you compensating for that?

    If I were going to tackle some problem like this, I'd start with a standardized sensor suite and data format. You can't have everyone developing AI based on proprietary sensor data because it's too opaque. You also need to standardize the system output format (accelerator percent, braking percent, steering value, etc.) Plus you need to standardize the parameters of the vehicle. Once you've got that you need to start collecting and publishing this data in this standard format - hundreds of thousands or millions of test case scenarios available for every researcher to use, and in each case you need to have an expert specify what the correct set of outputs should be (or correct range at least) for each scenario. Then you can develop your AI or algorithms and you can then run these through a test suite so your AI has to pass all of these scenarios before it can be certified. As we have crashes then we add to the list of scenarios, and if you make changes to the AI, it has to pass that new scenario and still pass all the old ones.
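
To make the certification idea above concrete, here is a minimal sketch in Python of what such a scenario regression suite could look like. Every name in it (the scenario format, the control-output channels, the tolerance check, the `drive` callback) is a hypothetical stand-in invented for illustration, not an existing standard or any vendor's API.

```python
# Minimal sketch of the scenario-based certification idea described above.
# All names and formats are hypothetical stand-ins, not a real standard.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ControlOutput:
    accelerator_pct: float  # commanded accelerator, 0..100
    braking_pct: float      # commanded braking, 0..100
    steering: float         # -1.0 (full left) .. +1.0 (full right)

@dataclass
class Scenario:
    name: str
    sensor_frames: list        # standardized sensor data for this case
    expected: ControlOutput    # expert-specified correct response
    tolerance: float = 0.05   # allowed fractional deviation per channel

def passes(actual: ControlOutput, s: Scenario) -> bool:
    """A response passes if every output channel is within tolerance."""
    return (abs(actual.accelerator_pct - s.expected.accelerator_pct) <= s.tolerance * 100
            and abs(actual.braking_pct - s.expected.braking_pct) <= s.tolerance * 100
            and abs(actual.steering - s.expected.steering) <= s.tolerance * 2)

def certify(drive: Callable[[list], ControlOutput],
            scenarios: List[Scenario]) -> bool:
    """A build is certified only if it passes every scenario on file.

    Each real-world crash adds a scenario; a changed AI must pass the
    new scenario and still pass all the old ones (pure regression).
    """
    failures = [s.name for s in scenarios if not passes(drive(s.sensor_frames), s)]
    for name in failures:
        print("FAIL:", name)
    return not failures

# Example: a trivial "brake gently" policy fails a hard-stop scenario.
hard_stop = Scenario("pedestrian_ahead", sensor_frames=[],
                     expected=ControlOutput(0.0, 100.0, 0.0))
print(certify(lambda frames: ControlOutput(0.0, 30.0, 0.0), [hard_stop]))  # False
```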

    I get the sense this is what the companies doing research are trying to do, but how do we validate their product? If their databases are proprietary, and their sensor format and data isn't in a standard format, and we can't run the tests ourselves, then how can we trust their systems? Of course we can't.

    • In QA circles there's a pretty standard distribution that says the first 80% of something will take 20% of your effort. Finishing the last 20% will take 80% of your effort. It's not true for everything, but it's true for quite a lot of things.

      • by RobinH ( 124750 )
        Yes, that's called the Pareto Principle [wikipedia.org]. It's typically phrased as "the top 80% of your downtime is caused by the top 20% of your problems." I'm not sure that applies here though, because it leaves out the cost of fixing those problems - some might be fundamental, or just really, really impractical to solve.
    • Holy crap, I've been talking about the last 20% since these autonomous car articles started.
    • Comment removed based on user account deletion
      • by RobinH ( 124750 )
        You can't ignore the psychological aspect of it. People are willing to use a kitchen knife even if they know they might cut themselves. If I tried to sell you a machine that sliced vegetables for you (faster) and statistically it only cut your fingers the same amount as doing it by hand, I think you'd probably feel uneasy about it. At least you feel in control of the knife. A machine that might just randomly cut you is different, if only inside your head. You can't market a device like that.
    • by jythie ( 914043 )
      I can't mod up, but would. This is a good description of the problem, one that anyone who hasn't worked in such fields needs to be aware of. I keep thinking back to the quote "AI, like fusion power, has been about 10 years away for 30 years now" - that was from the 80s, I think, and it still holds true today. I think there is a lot of enthusiasm for AI finally solving various problems, but it is still mostly stuck in the 'recommending purchases' stage of 'great for things that do not matter, bad for critical decisions'.
    • Thank you! I have been saying this for ages too. I can almost smell another AI Winter coming...
      • 100%. We went through this with AI in the 1970s and 1980s. For example, Expert Systems were going to take over the world, except they never did. And Expert Systems are actually real things that can solve real-life problems. Neural Networks never panned out at the time, but they have recently been rediscovered.
  • 2019 will be the Year of the Self-Driving Car on the Black Top.
  • by argStyopa ( 232550 ) on Wednesday May 23, 2018 @07:43AM (#56658136) Journal

    ...people are losing faith in an overhyped, not-ready-for-prime-time technology, still in its development stages, for a task that takes a colossal synthesis of perception, reflexes, maturity, and training (none of which we have systems capable of duplicating individually yet), and for which the infrastructure (physical, legal, social) hasn't even begun to be developed, much less matured to the point of implementation?

    It's almost like repeatedly INSISTING that "it's almost here" is ACTUALLY an insufficient substitute for real time in development?

    Hm.

  • Two anti-Tesla articles in a row on the front page make you look like curmudgeons.

    People who believe God murders babies on purpose still believe in Jesus.

    People kill themselves and others while driving every day. I have no faith in humans.

    • If they did short TSLA, they would be smart. You are paying an infinite multiple on a stock that pays no dividends. Quite stupid.
  • Uber got into the ride-sharing business, which has sort of morphed into the taxi business. Then along came Google with their plans to make a self-driving car. Uber saw its future disappearing, and so got into the self-driving car game. They initially did it to give themselves a future, but quickly realised that self-driving cars are actually really, really hard. They then (secretly) pivoted to ensure that SDCs kill a few people so that the public trusts Uber's human drivers a bit longer.

  • by ganv ( 881057 ) on Wednesday May 23, 2018 @07:58AM (#56658192)
    Anybody who was developing "faith" in self-driving cars was in trouble from the beginning. They are not a salvation. They are simply a technology that will soon be safer than humans at driving. Along the way they will introduce a whole host of new issues and changes in society, and weighing whether the net effect is good or bad will occupy the pundits for a century. And anyone whose "faith" in this technology is strongly affected by the inevitable spurts of progress and setbacks needs to study a little history. We are starting a process that will extend over decades, during which autonomous systems will take over driving duties from humans.
  • they're losing more than "faith"

  • You have to compare things, otherwise I can make anything seem scary dangerous.

    Sharks are a great example. One movie and people are terrified of them. But they are basically the same as elephants - more likely to be killed by humans than to kill a human.

    E-cars are horribly dangerous - but they are ALREADY safer than human-driven cars.

    I guarantee that if you are the parent of a teenager, or the spouse of someone who drinks alcohol, a self-driving car looks VERY attractive, even today.

  • by lordlod ( 458156 ) on Wednesday May 23, 2018 @08:43AM (#56658428)

    We climb into a little metal box and hurtle towards another metal box at a closing speed of 200 km/h. Then, to make it safe, we paint a white line on the road and promise to both stay on one side of it. To make life exciting we then add wildlife, children playing, wet weather, tired alcoholics who have just broken up with their wives...

    The system is absurd; it is mind-blowing that it works as well as it does, but all the band-aids like crumple zones, seatbelts and AI steering can't avoid the fact that the system we have evolved is inherently dangerous. Nobody would ever deliberately design a system like our roads and cars.

    As an illustration, where I live people working on the side of the road must have a substantial crash barrier to protect them from the oncoming traffic and provide a safe working environment. That same worker can then get on a motorbike and ride home, protected only by a painted line, and nobody thinks anything of it.
