Transportation Technology

People Who Know More About Self-Driving Technology Trust It More

An anonymous reader quotes a report from Ars Technica: Robotaxis have a real public image problem, according to new survey data collected by an industry group. Partners for Automated Vehicle Education surveyed 1,200 Americans earlier this year and found that 48 percent of Americans say they would "never get in a taxi or ride-share vehicle that was being driven autonomously." And slightly more Americans -- 20 percent versus 18 percent -- think autonomous vehicles will never be safe compared to those who say they'd put their names down on a waiting list to get a ride in an autonomous vehicle.

PAVE says its data doesn't reflect skepticism or fear based on the killing of a pedestrian by one of Uber's autonomous vehicles, nor the series of drivers killed while using Tesla's Autopilot. In fact, those events don't even register with much of the population. Fifty-one percent said they knew nothing at all about the death of Elaine Herzberg in Arizona, and a further 37 percent only knew a little about the Uber death. Similar numbers said they knew nothing at all (49 percent) or very little (38 percent) about Tesla Autopilot deaths. But those who reported knowing a lot about the deaths were more likely to tell the survey they thought autonomous vehicles were safe now. According to the survey data, getting a ride in a robotaxi might change some of those minds. Three in five said that they'd have more trust in autonomous vehicles if they had a better understanding of how those vehicles worked, and 58 percent said that firsthand experience -- i.e. going for a ride in a self-driving car -- would make them trust the technology more.
"Of the 1,200 survey respondents, 678 reported owning an [advanced driver assistance system] ADAS-equipped vehicle, and three-quarters of them said they 'will feel safer on the road when I know that most other vehicles have enhanced safety features,' with the same number saying they are eager to see what new safety features will be on their next vehicle," the report adds.

"Interestingly, drivers who own cars with forward collision warning (FCW), blind spot monitoring (BSM), lane departure warning (LDW), and automatic emergency braking (AEB) were also more likely to believe that safe autonomous vehicles would be available within the next 10 years compared to those without those features."
  • by anus0 ( 6878570 ) on Tuesday May 19, 2020 @10:31PM (#60080888)
    I COMPLETELY distrust it. Finely-tuned algorithms. Blind or fake a few sensors, and you have committed murder or manslaughter.

    How does this pass for news?
    • by Gavagai80 ( 1275204 ) on Tuesday May 19, 2020 @10:37PM (#60080908) Homepage

      Obligatory XKCD: https://xkcd.com/1958/ [xkcd.com]

      • by Actually, I do RTFA ( 1058596 ) on Tuesday May 19, 2020 @10:46PM (#60080926)

        What that XKCD misses is a sense of scale. Some people will randomly kill a human being, but they normally only kill one or two at a time. Thieves usually only rob one house at a time. However, electronic criminals can steal 10 million credit cards in a single act. Similarly, a zero-day can allow one asshole to crash all the cars on the road at the same time.

        We know assholes exist, because we see various malicious hacks from time to time.

        • by DontBeAMoran ( 4843879 ) on Tuesday May 19, 2020 @10:55PM (#60080942)

          That's why all those so-called A.I. features should only assist/warn the human drivers, not take over the driving themselves. And in some critical situations, A.I. won't understand context and will do the opposite of what needs to be done [youtu.be].

          • A.I. won't understand context and will do the opposite of what needs to be done

            Yep. Because it's incapable of reason. Nobody home in there at all.

            • Yep. Because it's incapable of reason. Nobody home in there at all.

              It doesn't need to reason, it just needs to follow routes and avoid hitting things.

              • It's called a train.

              • by cusco ( 717999 ) <brian@bixby.gmail@com> on Wednesday May 20, 2020 @08:50AM (#60082152)

                Self-driving vehicles are already safer than a newly-minted 16-year-old driver, and most people have ridden with some of them. I taught my wife, nephew, and two nieces to drive, and I'll guarantee that even Uber's car is a better driver than Gaby was the first couple of years that she had a license. (And if you watch the Uber fatality video you'll realize that even experienced drivers would have had trouble avoiding that really stupid woman.) I remember talking to a woman in Florida who loved living there because she could renew her driver's license by mail even though she had been collecting benefits for being legally blind for two decades. If a self-driving car can't see adequately, it just won't try to drive.

          • by serviscope_minor ( 664417 ) on Wednesday May 20, 2020 @02:12AM (#60081280) Journal

            And in some critical situations, A.I. won't understand context and will do the opposite of what needs to be done.

            So? There are plenty of situations where humans, understanding the context, will willfully choose to ignore it and do the opposite of what needs to be done. Like getting shitfaced and climbing behind the wheel, for example.

            AI isn't human, never will be, and will always make mistakes that humans won't. On the other hand, it isn't human, never will be, and won't ever make the mistakes that humans make all the time.

          • by jbengt ( 874751 )

            And in some critical situations, A.I. won't understand context and will do the opposite of what needs to be done.

            For a concrete example, my daughter's friend was driving on a road with a narrow gravel shoulder and a 45 MPH speed limit. There was a bicycle ahead, and as she was going around it, automatic lane keeping tried to jerk the car back into the lane with the bicycle. Since that encounter, she has disabled that feature.

        • Not to mention that fake lines drawn on a road would compete with the real lines on the road. Will human drivers and autonomous drivers behave the same? Maybe people would see the conflicting lines and say: hey, I don't know what I'm supposed to do here, I'll turn off on another route, or slow to a crawl, etc. Hopefully autonomous cars would too, but they might be really dumb: I'm 8" away from the left line, QED I'm in my lane. The first generation is likely going to be this dumb; they have to trim off all the "irrelevan

        • So you are saying digital hacking laws should align with gun restrictions on automatic (and perhaps now semi-automatic) weapons.

          The Second Amendment was written before the time of such weapons, with which you can shoot dozens of people at once. So your one-shot weapon, which was inaccurate, could kill one person, and then you would be at risk of retribution.

          • by cusco ( 717999 )

            No, the 2nd Amendment was written to allow merchant ships to be better armed than some US Navy ships, to allow frontier communities to have mortars, grenades, rockets, and multi-barrel carriage mounted muskets, and to let port cities have cannon batteries. The "town hall cannon" was not an ornament at the time.

      • Xkcd comics often make a good point, but xkcd always fails in some simple way any grade schooler can see.

        In this case, no, I am not going to blindly and stupidly follow your fake painted lines into a fucking concrete barrier like an FSD Tesla does.

        The cartoon is obligatory. It is obligatory as an example of why NOT to use xkcd to make a point.

        Xkcd is cute, often funny, but he rarely knows wtf he's talking about when it comes to getting into contentious real world debates because he lacks common sense.

        Again,
        • In this case, no, I am not going to blindly and stupidly follow your fake painted lines into a fucking concrete barrier like an FSD Tesla does.

          They don't need you to. They just need the soccer mom with 4 kids in the van who's paying more attention to her cell phone to do so. The idiot who's just learning to drive. People paying way too much respect to their GPS driving off a pier into water. Running red lights. Going too fast for conditions, etc...

          As somebody else mentioned, keeping human drivers is currently costing us 38k people/year. It'd take some serious hacking to match that.

          • keeping human drivers is currently costing us 38k people/year. It'd take some serious hacking to match that.

            Sounds like you are talking about the USA, India or some other 3rd world place. What you need is not SD cars but a meaningful driving test, more like the one in the UK. That could be implemented now.
            I saw a video of a driving test in some state of the USA in which the candidate only had to drive a circuit of a supermarket car park to get a pass.

            • Someone should probably have told you this before you made yourself look like a complete fool: "I saw a video once" is the worst form of argument known to man. Even your blatant bigotry takes a back-seat to the kind of idiocy it takes to make that type of argument.

            • While lower, it's hardly like your death rate is zero. I went and looked it up. The UK has about half the death rate measured by deaths/vehicle-km.

              https://en.wikipedia.org/wiki/... [wikipedia.org]

              To expand upon this, the US has more highway/high-speed driving, where fatalities are more likely. While better driver training (not just testing) would certainly help, so would more aggressive work on safe road systems. I remember when, outside of my work, they went through and made all the ditches much more shallow - death

              • While lower [than the USA], it's hardly like your [UK] death rate is zero. .... The UK has about half the death rate measured by deaths/vehicle-km......the US has more highway/high speed driving, where fatalities are more likely. .... Even if we only figure that a deployed self driving car gets rid of 90% of the driver error accidents ...That would still be a 80% reduction in fatalities. You're only at half. Still lots of lives to save.

                The USA having twice the death rate despite the lighter level of traffic compared with the UK, and despite the "more highway/high speed driving" you mention, just underlines how bad driving standards are in the USA. It is not true that high speed highways must have a higher casualty rate; in fact there is a far lower casualty rate on high speed highways (ie motorways) in the UK than there is in towns, by any criteria. From:
                https://assets.publishing.serv... [service.gov.uk]

                Between 2009 and 2013 motorways carried around 20 per cent of GB traffic, but accounted for just 6 per cent of road deaths. Mile per mile, the risk of death on motorways was around 5 times lower than the equivalent figure for rural roads and 3 times lower than for urban roads.

                As for USA driving tests, from everything I have h

          • by cusco ( 717999 )

            Apple Maps used to be quite bad about that; there was a boat ramp in one of the southern states where something like a dozen people drove into a lake because Siri said "turn left".

        • Xkcd comics often make a good point, but xkcd always fails in some simple way any grade schooler can see.

          In this case, no, I am not going to blindly and stupidly follow your fake painted lines into a fucking concrete barrier like an FSD Tesla does.

          FSD Tesla has LIDAR to detect that barrier. How many tired humans have that when they're driving at night?

          • Really? Please explain that to the guy who ran into one in Palo Alto, California a year ago. Why did his lidar not save his life?

            OK, well, actually explain it to his wife and kids, because he died when his Tesla drove at full speed into a concrete barrier.

            Why? Because the lane markings were mis-painted. Just like the xkcd cartoon. Except the other 100k cars that pass that spot every day, driven by humans, didn't drive into the same barrier. Weird, huh? Must be a total fluke. (Eye roll).

            His FSD killed him.

            Lidar, ind
            • Sure, but (a) that was before, and (b) it was in a car that was never advertised as self-driving. He was supposed to be looking out of the window.

              And (c) there are lots of videos showing Tesla drivers asleep at the wheel and the car driving along safely. How many of those would be dead (and possibly have wiped out other families) if it wasn't for the car?

              https://www.google.com/search?... [google.com]

      • by MrKaos ( 858439 )

        Obligatory XKCD: https://xkcd.com/1958/ [xkcd.com]

        Some people are murderers though, some people just want to cause chaos, and some people don't want to get caught. People don't always do things for malicious purposes; they may be indifferent to the outcome altogether.

        Of course, some people are just assholes.

      • Most people commenting on the deliberately misleading painted lines case are missing the point. The danger is not primarily from jokers altering lines, but from poorly designed markings and signage, particularly on older and secondary roads. Humans are far better at allowing for context than any current or foreseeable AI.

        Simple example: a road near me (in the UK) has both 30 and 40 mph speed limits painted on the road. It is because they reduced the limit 3 years ago and workers with white paint did t
    • Yeah, like how I distrust the brakes on my car. Just cut a cable and you have committed murder or manslaughter.

  • what nonsense (Score:5, Insightful)

    by iggymanz ( 596061 ) on Tuesday May 19, 2020 @10:33PM (#60080902)

    The only sensible view is to mistrust it right now, since no reliable autonomous driving system exists for road vehicles. Surveying people about trusting vaporware and nonexistent products is pointless.

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Wednesday May 20, 2020 @12:56AM (#60081164)
      Comment removed based on user account deletion
    • Elevators used to have operators. People didn't trust the push-button ones at first.

    • I would say people who know about self-driving features as they currently exist trust them.

      I have always trusted cruise control. (I have heard of cases where it got stuck, but I have never seen such an issue or met someone who did; if I had, I would ask why they didn't put the car in neutral and hit the brakes.)

      Then there is traffic-aware cruise control, which will slow you down if there is traffic in front of you.

      Then there is Autopilot which will keep you in your lane and at speed while not hitting the car in fron

  • by Actually, I do RTFA ( 1058596 ) on Tuesday May 19, 2020 @10:36PM (#60080904)

    I prefer a car that doesn't spy on me and report everything. Even more, I prefer a car that doesn't take me by McDonald's because they get a cut if I order anything.

    • by rtb61 ( 674572 )

      I prefer a car that will not choose for me whether to brake and run into an oncoming vehicle making an unwise overtaking maneuver, or to swerve off the road trying to run the gap between running down innocent children; or a car that, once hacked, decides to drive straight off a cliff, into a train, or to accelerate looking for the largest tree it can find.

      I prefer my automated transport in controlled underground tunnels, isolated from the vagaries of the environment and continuously monitored by the control systems em

      • You're touching on a key point that I haven't gotten into for a while: choice. If you have no choice, that's bad. A box on wheels with nothing but some 'interface' that may or may not pay attention to you, and maybe an 'E-Stop' button to slap (that may or may not work) is a nightmare in the making when (not if) something goes wrong.
        Also, consider this: a car is a tool for transportation. Tool use works because the tool is an extension of our physical body so far as our brain is concerned. What happens when
      • We already know which type of driving system is highly dangerous and highly unlikely to significantly improve its performance in the future, and which system is in its infancy and improving by leaps and bounds. At some point it'll be better to have all those dangerous vehicles under the control of AI instead of inebriated dimwits on their cell phones.
      • by AmiMoJo ( 196126 )

        There's no reason why we can't have secure but also high-tech modern cars. Look at aircraft: they have WiFi and entertainment systems based on Android or Windows Embedded, but you don't see them getting hacked and falling out of the sky.

        And to their credit car manufacturers have mostly done a decent job. The reason you don't hear about people being remotely driven off cliffs by script kiddies is that they firewall the connected services off from the power steering. The hacks that have been published all invol

    • by Excelcia ( 906188 ) <slashdot@excelcia.ca> on Tuesday May 19, 2020 @11:10PM (#60080976) Homepage Journal

      I suspect that this isn't going to be an issue. Not because it doesn't give every car manufacturer a woody, because it does - more than Software as a Service makes Microsoft hard. No, because it's not going to be mainstream in our lifetime. Self driving is one of those AI problems where people are right now at the pinnacle of Mount Stupid. They know just enough to think that they can do it, but it's an issue where we know enough to get us 99% of the way there, but the last one percent of the functionality we need is outside our grasp, and we don't even know it yet.

      I suspect it won't be until we see some really devastating failures in the field that it will get scrapped for a generation. Where I predict it will happen is when self-driving cars start interacting with other self-driving cars. Then I am betting we see some really weird feedback loops where the way one AI reacts is different enough from how a person reacts that it causes the other AI to move into weirdness that then goes back and forth until it fails in a spectacular way.

      I do not believe we can have safe self driving with today's AI technology without public infrastructure assistance - lane and intersection markers, etc. And I don't feel like dishing out my tax money to make that happen just so, as the parent comment to mine points out, auto manufacturers can invade our privacy.

      • What you said, exactly.
        We don't understand our own brains and how they really work at an overall system level, yet we're trying to make working copies of that? Nope.
      • Self driving is one of those AI problems where people are right now at the pinnacle of Mount Stupid. They know just enough to think that they can do it, but it's an issue where we know enough to get us 99% of the way there, but the last one percent of the functionality we need is outside our grasp, and we don't even know it yet.

        I suspect it won't be until we see some really devastating failures in the field that it will get scrapped for a generation. Where I predict it will happen is when self-driving cars start interacting with other self-driving cars....

        You know that self driving cars have already driven billions of miles, right?

        • You know that self driving cars have already driven billions of miles, right?

          Only with human intervention every 6000 miles on average. Most humans manage hundreds of thousands of miles without needing someone to take over in a few milliseconds or less.

          Come back and make that claim when SDCs can cover as many miles as humans do when driving autonomously.

          • Only with human intervention every 6000 miles on average. Most humans manage hundreds of thousands of miles without needing someone to take over in a few milliseconds or less.

            Most humans drive 100,000 miles without a single near miss?

            Come back and make that claim when SDCs can cover as many miles as humans do when driving autonomously.

            I will. Will you be here to apologize?

            PS: You know these things keep improving, right? eg. It used to be true that no computer could beat a human chess champion. Not any more.

            • Only with human intervention every 6000 miles on average. Most humans manage hundreds of thousands of miles without needing someone to take over in a few milliseconds or less.

              Most humans drive 100,000 miles without a single near miss?

              "Intervention" is not "a near miss". SDCs can't go too far without needing a human to intervene. Humans can go very far indeed without needing an intervention.

              Come back and make that claim when SDCs can cover as many miles as humans do when driving autonomously.

              I will. Will you be here to apologize?

              So, lemme get this straight, you made a claim that is blatantly incorrect, got called on it, and you want the apology?

              PS: You know these things keep improving, right? eg. It used to be true that no computer could beat a human chess champion. Not any more.

              That quote is from 2012. Got anything newer?

            • by bws111 ( 1216812 )

              A 'near miss' is not the same as a human having to take over. A near miss means the driver detected the problem and corrected it. A driver having to take over means the driver detected (and corrected) a problem that the AI missed.

              How many of those 'billions of miles' were driven without driver intervention at some point, in rain, snow, or ice? How about on crowded city streets? On the wrong side of a road because there is a flagger pointing there? How about on a road that has been brined (which confuse
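              To put rough numbers on the intervention-rate gap argued over above -- taking the "one intervention every 6,000 miles" figure at face value and using 100,000 miles as a conservative stand-in for "hundreds of thousands of miles" -- here is a quick Python sketch (both inputs are the posters' claims, not official statistics):

                  # Rough scale of the disengagement gap claimed in this sub-thread.
                  miles_between_interventions = 6_000   # claimed SDC average
                  human_benchmark_miles = 100_000       # low end of "hundreds of thousands"

                  interventions = human_benchmark_miles / miles_between_interventions
                  print(f"~{interventions:.0f} interventions over {human_benchmark_miles:,} miles")
                  # -> roughly 17 takeovers in the distance a human is said to
                  #    cover without a single one

              By that measure the claimed gap is more than an order of magnitude, which is what the back-and-forth above is really about.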

      • I suspect it won't be until we see some really devastating failures in the field that it will get scrapped for a generation. Where I predict it will happen is when self-driving cars start interacting with other self-driving cars. Then I am betting we see some really weird feedback loops where the way one AI reacts is different enough from how a person reacts that it causes the other AI to move into weirdness that then goes back and forth until it fails in a spectacular way.

        Depends. Do self-driving cars rely on a monolithic AI, a black box with sensor data going in and control data coming out? Or is it a collection of AI-assisted functions, tied together by something that is not a full AI but partly programmatic in nature?

        In other applications of "black box" AIs, you can see the catastrophic feedback loops when the input data gets a little weird for the AI. But that's not what we have seen thus far in self-driving cars. What we see is the system misinterpreting the da

        • Human activity isn't 100% mental either. We have a lot of biologically "hard coded" responses to problems; these are our instincts. Self-driving cars need AI to deal with unique situations, but there is probably a set of hard-coded instinctual behaviors such as staying in the lane, not hitting objects, and maintaining a constant speed.
          The AI for self-driving cars doesn't need to be smarter than humans to be safer and better. We humans use a lot of brain power when we drive, because millions of years of evolution hasn'

      • by AmiMoJo ( 196126 ) on Wednesday May 20, 2020 @03:22AM (#60081378) Homepage Journal

        Waymo already has a working self driving car with hundreds of thousands of self driven miles on it. They are operating without anyone in the driver's seat right now.

        Robotaxis will only get more common. Maybe not serving arbitrary routes but I expect I'll be able to take one to/from the airport this decade.

      • by jodido ( 1052890 )
        This, and self-driving cars are a solution seeking a problem. Cars are bad. We're forced to live with them but that should not be our vision of the future.
    • If 'self driving cars' are allowed to exist, don't be surprised if they 'decide' your route should take you right past McDonalds, or whatever other 'corporate partner' the manufacturer has, then display ads at you to try to convince you to stop there and spend money.

      Only partially kidding.

      If it's a real concern to you then locate and ground the GPS antenna, and any cellular antenna the vehicle has, so it can't know your position and it can't transmit data. Of course you'll disable at least half the 'i
      • A self-driving car shouldn't need GPS; it should only use GPS as an aid.

        If it has a map, and it knows how far it has driven, then it would know there is an intersection coming up, and its sensors can watch for changing road conditions to confirm the intersection is there.
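        A minimal sketch of that idea -- dead reckoning against a stored map, with perception confirming expected features instead of relying on GPS. All names and numbers here are hypothetical illustrations, not any real system's API:

            # Hypothetical dead-reckoning sketch: odometry checked against a map,
            # with sensors asked to confirm expected features (no GPS required).
            intersections_at_miles = [0.8, 2.3, 4.1]   # from the onboard map (made up)

            def expect_intersection(odometer_miles, window=0.1):
                """Return True when the map says an intersection should be near."""
                return any(abs(odometer_miles - m) < window
                           for m in intersections_at_miles)

            odometer = 2.25                  # miles driven since the last known fix
            if expect_intersection(odometer):
                # cue the perception stack to look for the junction and re-anchor
                print("map says intersection ahead; confirm with sensors")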

  • It's relatively cheap and it works 99.999 percent of the time. It's made a squirrelly arc less than five times, but nothing sharp or dramatic. It just kinda leaned over, but I'm always paying attention. Again, over 20k miles so I'll take that percentage.

    Comma is cheap, easy to install, & easy to move to a different car if you need. Can't recommend it enough.

    • It's relatively cheap and it works 99.999 percent of the time.

      Assuming your numbers are correct, and you drive one hour a day, then that means it will fail for 6 seconds every 5 months.
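      For what it's worth, that arithmetic checks out under its stated assumptions -- one hour of driving per day, five months taken as roughly 152 days, and "99.999 percent" read as the fraction of driving time the system behaves:

          # Sanity check of the "6 seconds every 5 months" figure.
          working_fraction = 0.99999               # claimed reliability
          failing_fraction = 1 - working_fraction  # 1e-5 of driving time

          driving_seconds = 152 * 1 * 3600         # ~5 months at 1 h/day
          failure_seconds = driving_seconds * failing_fraction
          print(f"{failure_seconds:.1f} s of misbehavior per 5 months")  # ~5.5 s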

        • The numbers are better than that. A "fail" with comma is a turn that isn't quite sharp enough or is too sharp. Again, comma never jerks the steering, it kinda just leans. Figure at least 1,000 small corrections per mile, so that's 4 or 5 steering adjustments that need correction out of 200,000,000. Not too shabby.

        Comma even shows they have over 15,000,000 miles driven with no accidents.

        It really is remarkable.
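          One nit on the arithmetic above: 1,000 corrections per mile over 20,000 miles gives 20,000,000 corrections, not 200,000,000 (the larger figure would need around 10,000 corrections per mile). Either way the implied failure rate is tiny, as a short Python check shows:

              # Failure rate implied by the poster's own assumptions.
              corrections_per_mile = 1_000   # poster's estimate
              miles_driven = 20_000          # from the grandparent post
              failures = 5                   # "less than five times"

              total = corrections_per_mile * miles_driven   # 20,000,000
              print(f"failure rate: {failures / total:.2e}")      # 2.50e-07
              print(f"success rate: {1 - failures / total:.6%}")  # 99.999975%

          That success rate is actually better than the grandparent's 99.999 percent claim.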

        • It's beyond remarkable, it's unbelievable.
        • So it only swings wide and hits a parked car or cuts too sharp and goes face first into the cross traffic waiting at the light?

          And only once every 6 months?

          I've been driving since I was 16 and haven't done either of those things ever, much less every 6 months.

          Your system of choice sounds super dangerous.
          • So it only swings wide and hits a parked car or cuts too sharp and goes face first into the cross traffic waiting at the light?

            And only once every 6 months?

            I've been driving since I was 16 and haven't done either of those things ever, much less every 6 months.

            Your system of choice sounds super dangerous.

            Sure. That's exactly what it does. It bounces your car down the road from shoulder to shoulder hitting everything.

            While driving down the road, you have never slightly driven over the yellow or white lines then eased the car back?

            Don't be a dumbass. Get back to me when you've driven over 15 million miles with no accident.

      • That is better than us humans. We fail all the time. We are usually just lucky enough to avoid causing danger.
        How many times have you had to avoid another driver who was swerving in their lane because they were distracted?
        While 90% of people think they are above-average drivers, you will probably be saying: well, I don't make such mistakes while driving. Chances are, if you are honest with yourself, you do. That quick stop that you had to do, where you cursed out the guy in front of you, may

  • Cause and effect (Score:4, Interesting)

    by ugen ( 93902 ) on Tuesday May 19, 2020 @10:57PM (#60080948)

    Perhaps it's the opposite - drivers who trust self-driving technology more are those more likely to purchase vehicles that have elements of such technology?

    • Even having said what I said already, I'm not opposed to 'driver assist' systems that act as a backup in case you screw up (like the rear-end collision prevention system I've seen, for instance). But only if they don't take over the entire vehicle from you normally, and especially if they can be disabled entirely by the driver.
    • Re: (Score:2, Informative)

      Comment removed based on user account deletion
  • by burtosis ( 1124179 ) on Tuesday May 19, 2020 @10:58PM (#60080952)
    There is a sliding scale of competence. On one end you have Google (Waymo) and a not-yet-ready but pretty sophisticated driverless car. On the other end you have Uber with a barely functional crap pile, with safety systems purposefully disabled to prevent it from stopping all the time, instead manned by underpaid interns in some kind of extreme ADD test that occasionally runs over pedestrians. I'd ride in the Waymo vehicle if it's in a controlled setting. No way in hell am I even getting near something from Uber; they need to be pulled from public roads.
  • by 93 Escort Wagon ( 326346 ) on Tuesday May 19, 2020 @11:03PM (#60080960)

    People Who Know More About Self-Driving Technology Trust It WayMo'?

  • It would help if the self-driving industry and government came up with protocols for roads designed with self-driving cars in mind.

    As one example, implant a magnetic strip under the road which self-driving cars can use as a guide.

    Then you would not rely on the AI as much.

  • But those who reported knowing a lot about the deaths were more likely to tell the survey they thought autonomous vehicles were safe now.

    Hmm.

    "Three in five said that they'd have more trust in autonomous vehicles if they had a better understanding of how those vehicles worked, and 58 percent said that firsthand experience -- i.e. going for a ride in a self-driving car -- would make them trust the technology more.

    HMM.

    What's going on here with this argument is a bait-and-switch. At no point here do we deal

  • I thought it was the opposite.
    Oh wait, you mean between "people who think a computer mouse is a kind of small rodent" and "people who know the basics of self-driving"?
    But between "people who know the basics of self-driving" and "people who have a deep understanding of self-driving", the latter probably distrust it more than the former.
  • The more you know about human drivers the less you trust them.

    Over the years I've come to the conclusion that things like how people direct their attention, make decisions, and perceive/remember the world are far more important than I was taught to regard them in school. So I started reading up on neuro- and cognitive science and hoo, boy.

    As one neuroscientist quipped, I believe the objective world exists, but now I'm doubting any of us have ever been there.

  • Seems the same is true of 5G. No Karen, it does not cause COVID-19.

  • Under normal driving conditions, self-driving cars are probably much less likely to be involved in an accident. But on the road not everything is perfect. There are so many diverse situations that I think an AI can't cope with. What about:

    - Police directing traffic. Would they be able to follow orders?
    - Debris or big objects on the road
    - Puddles
    - Potholes
    - Fallen trees
    - Accident has happened (not to you), pull over to try to help
    - Drunk/mobile talking driver (risk incoming)
    - Overtaking on two lane roads
    - Rou

  • People who have read more into religion, tend to trust it more.
    People who have read more into communism, tend to trust it more.
    People who have read more into homeopathy, tend to trust it more.

  • Comment removed based on user account deletion
  • by melted ( 227442 ) on Wednesday May 20, 2020 @03:39AM (#60081404) Homepage

    When you know nothing about something, you have no illusions about knowing anything about it. When you know a lot, your confidence that you know enough tends to decrease and you begin to recognize nuance. But there's a dangerous middle ground: when you know a bit about something but have an illusion that you know more than you really do. I make my living doing deep learning and robotics work, and I have no illusion whatsoever about us getting anything even remotely close to L5 autonomy in the next 20 years. If you think otherwise, congratulations from Dunning-Kruger.

    • "I make my living making rockets, and I have no illusion whatsoever about us getting anything even remotely close to a reusable rocket in the next 20 years. If you think otherwise, congratulations from Dunning-Kruger."

        - Random expert, circa 2010

  • What people are missing in all this self-driving cars debate is that it's not about the technology. I am pretty sure the technology will be good enough to work just fine; of course there will always be accidents, but not more, say, than with regular cars. The problem is the legal system.

    Anybody that uses a self-driving car in autonomous mode is legally responsible for anything that happens, including hitting a pedestrian. So if there is a possibility that you could be charged with manslaughter if the self

  • Blue screen of death
  • I'm a large-scale computer systems engineer; I work with very complex systems, and on a daily basis I see so many engineering flaws, compromises, and edge cases in them. I see these systems fail in unexpected ways and often catastrophically. I see incompetence in every direction I look. I see decision makers who don't actually understand just about anything regarding the systems; rather, I'm under constant pressure from them to get customer acceptance so we can record the earnings on the quarterly repo

  • Even if the self-driving cars themselves are totally safe, there will be unintended, potentially unsafe side effects.
    Self driving cars will go slow, really slow, much slower than normal traffic.
    Angry drivers will try all sorts of unsafe moves to get around them or otherwise avoid them.

  • Mostly because I watch how other people drive and I'm always amazed at how bad people are at it. I figure it'd be hard for self-driving cars to be worse. I could come up with an extended list of all the concepts I've seen other drivers mess up, but I'll just point out that in my area more than half the drivers have no idea what a yield sign means. (In my area most people think it either has absolutely no meaning and can be ignored, or that it's a super stop sign and you must come to a full stop and wait an extended
  • The summary is moronic: most people who own technology do not know about registers, cache, network protocols, stacks, loops, fuzzy logic, how AI works, etc. Using a thing does not mean you 'know' it these days.

    These people's trust is misplaced faith, like the guy who let his car crash into a dividing barrier, or the two drivers that died as their cars collided with trucks/tractors. Or the guy that hit a parked fire brigade vehicle.

    Teslas apparently, according to Tesla, can't see stationary vehicles; the radar i

  • "People whose livelihood depends on self-driving technoolgy say they trust it"
