
Researchers Trick Tesla Autopilot Into Steering Into Oncoming Traffic (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Researchers have devised a simple attack that might cause a Tesla to automatically steer into oncoming traffic under certain conditions. The proof-of-concept exploit works not by hacking into the car's onboard computing system, but by using small, inconspicuous stickers that trick the Enhanced Autopilot of a Model S 75 into detecting and then following a change in the current lane. Researchers from Tencent's Keen Security Lab recently reverse-engineered several of Tesla's automated processes to see how they reacted when environmental variables changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The attack worked by carefully affixing three stickers to the road. The stickers were nearly invisible to drivers, but the machine-learning algorithms used by Autopilot detected them as a line indicating the lane was shifting to the left. As a result, Autopilot steered in that direction.

The researchers noted that Autopilot uses a variety of measures to prevent incorrect detections. The measures include the position of road shoulders, lane histories, and the size and distance of various objects. [A section of the researchers' 37-page report] showed how researchers could tamper with a Tesla's autowiper system to activate the wipers when rain wasn't falling. Unlike traditional autowiper systems -- which use optical sensors to detect moisture -- Tesla's system uses a suite of cameras that feed data into an artificial-intelligence network to determine when the wipers should be turned on. The researchers found that -- in much the way it's easy for small changes in an image to throw off artificial intelligence-based image recognition (for instance, changes that cause an AI system to mistake a panda for a gibbon) -- it wasn't hard to trick Tesla's autowiper feature into thinking rain was falling even when it was not. So far, the researchers have only been able to fool autowiper when they feed images directly into the system. Eventually, they said, it may be possible for attackers to display an "adversarial image" on road signs or other cars that does the same thing.
In a statement, Tesla officials said that the vulnerabilities addressed in the report have been fixed via a security update in 2017, "followed by another comprehensive security update in 2018, both of which we released before this group reported this research to us." They added: "The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so and can manually operate the windshield wiper settings at all times."
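The panda-for-gibbon mistake mentioned above is the classic "adversarial example" result: a tiny, targeted perturbation computed from the model's own gradient flips its decision. As a rough illustration only (a toy linear classifier in plain Python, nothing like Tesla's actual network), the Fast Gradient Sign Method boils down to this:

```python
import random

random.seed(0)
DIM = 64

# Toy linear "classifier": class 1 if w.x > 0, else class 0.
w = [random.gauss(0, 1) for _ in range(DIM)]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, label, eps):
    # For a linear model, the gradient of the score w.r.t. the input is
    # just w; push every feature by eps in the loss-increasing direction.
    direction = 1 if label == 0 else -1
    return [xi + eps * direction * sign(wi) for xi, wi in zip(x, w)]

x = [random.gauss(0, 1) for _ in range(DIM)]
label = predict(x)

# Moving every feature by eps changes the score by eps * sum(|w_i|),
# so this eps is just big enough to provably flip this toy decision.
eps = 2 * abs(score(x)) / sum(abs(wi) for wi in w)
x_adv = fgsm(x, label, eps)

print("original:", label, "after perturbation:", predict(x_adv))
```

The per-feature nudge is small relative to the input scale, yet the decision flips, which is the same structural weakness the sticker attack exploits on the lane detector.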
  • Film at 11 (Score:5, Insightful)

    by MrLogic17 ( 233498 ) on Tuesday April 02, 2019 @08:05AM (#58370846) Journal

    So, optical illusions fool a driver. They just found a kind that fools a digital driver. Film at 11.

    Because machines "think" very differently from people, the optical illusions will be very different. No surprise there.

    Next we'll get a headline that if you put a number sticker over speed limit signs, human drivers can be tricked into driving at the wrong speed - even though very clearly the stickers have the wrong UV patterns and react to LIDAR clearly in an altered way.

    • Re:Film at 11 (Score:5, Insightful)

      by Junta ( 36770 ) on Tuesday April 02, 2019 @08:29AM (#58370960)

      The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

      It points to a big gap in machine-learning strategies in general: training generally focuses on positive correlations, with little injection of maliciously designed data. So a well-trained model is dumb and just says 'training says always follow lines' and follows them head-on into traffic.

      This also points to likely problems in road-construction zones, where markings are frequently very messed up.

      This is not 'a machine can be fooled like a human', it's a reminder that the machine is still a *lot* dumber than a human.

      • by ceoyoyo ( 59147 )

        The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

        I've seen lots of drivers do exactly this. That was in Montreal though, so it may be significantly less common elsewhere.

        It points to a big gap in machine learning strategies in general: Training generally happens focused on positive correlations and not a lot of injection of maliciously designed data.

        There's usually lots of negative data in training sets, but you're right,

        • The goal is "better than humans," as tens of thousands die every year, not "perfect".

          So occasionally someone dies and some kid goes to jail, just like bowling balls to the head.

          • The current poor excuse for 'AI' will never be 'better than humans' because it is fundamentally incapable of anything like 'thinking', it's just following complicated 'decision trees'. We have no idea how 'thinking' actually works therefore we cannot build machines that 'think', which is why it fails like this so badly.
            • The current poor excuse for 'AI' will never be 'better than humans' because it is fundamentally incapable of anything like 'thinking', it's just following complicated 'decision trees'. We have no idea how 'thinking' actually works therefore we cannot build machines that 'think', which is why it fails like this so badly.

              I don't think 'you' understand what it 'means' to quote something. Tends to make folks put less stock in your ruminations about AI and the nature of human thought.

            • by ceoyoyo ( 59147 )

              I think humans give themselves way too much credit for their "thinking" ability. The research suggests what we do is nothing like the logical reasoning most of us assume. Mysterious processes tell us the answer and then, if pressed, we justify it to ourselves.

          • by ceoyoyo ( 59147 )

            I agree, and I think people who use stories like this as "oh look, proof self-driving cars will never work" are wrong. But adversarial examples are an issue that should be solved. I don't think it's a terribly difficult solution though. One of the great things about adversarial examples is that you don't even need more training data to get started, just the output of your own adversarial generator.
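            The "output of your own adversarial generator" idea described here is usually called adversarial training: at each step you perturb your own training inputs against the current model and train on those copies too. A minimal sketch with a plain-Python logistic regression on synthetic blobs (an illustration of the technique, not anyone's production pipeline):

```python
import math
import random

random.seed(1)

# Synthetic 2-class data: two Gaussian blobs in 2-D.
data = [([random.gauss(-2, 1), random.gauss(-2, 1)], 0) for _ in range(100)]
data += [([random.gauss(2, 1), random.gauss(2, 1)], 1) for _ in range(100)]

w, b = [0.0, 0.0], 0.0
lr, eps = 0.1, 0.3

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def prob(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

for _ in range(200):
    # Adversarial generator: shift each point in the direction that
    # increases its own loss under the *current* model.
    batch = []
    for x, y in data:
        d = prob(x) - y
        batch.append((x, y))
        batch.append(([xi + eps * sign(d * wi) for xi, wi in zip(x, w)], y))
    # One gradient-descent step over clean + adversarial points.
    gw, gb = [0.0, 0.0], 0.0
    for x, y in batch:
        d = prob(x) - y
        gw[0] += d * x[0]
        gw[1] += d * x[1]
        gb += d
    n = len(batch)
    w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
    b -= lr * gb / n

acc = sum((prob(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print("clean accuracy after adversarial training:", acc)
```

            For a linear model this mostly just widens the margin; for deep networks the same loop, with the gradient taken through the network, is the standard baseline defense, though it is by no means a complete fix.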

      • Re: (Score:2, Insightful)

        The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

        If there's active traffic, people will avoid it. But there are plenty of times that human drivers get confused by road markings, especially with construction, worn paint, poor lighting, rain, and blinding headlights from oncoming traffic.

        • by mysidia ( 191772 )

          If there's active traffic, people will avoid it. But there are plenty of times that human drivers get confused by road markings

          Yes.... But machines are supposed to be BETTER. Before self-driving cars are ready, they must be able to avoid jumping into the same lane as active oncoming traffic while traveling down a road or highway, even if the road markings are confusing or in error.

          • Yes.... But machines are supposed to be BETTER.

            They may be someday. They already are better than some human drivers. There are some people who really should not be allowed to drive and many people drive impaired/distracted with some regularity. Currently your hypothetical average human driver is probably still better than even the best machine driver, but the machines are getting better and human drivers are not. Eventually it seems probable that machine drivers will be safer than most (or all) human drivers. Exactly when that happens is unclear bu

          • It has to be able to actually 'think' in order to do that, and this type of software is completely incapable of 'thinking', it's just following decision trees based on stored data. We have no idea how 'thinking' in a biological brain actually works, and 'learning algorithms' are insufficient.
            • by mysidia ( 191772 )

              It has to be able to actually 'think' in order to do that, and this type of software is completely incapable of 'thinking',

              No... It doesn't have to "think" to do that, nor would I expect a computer to "think" in the same sense as human thinking.
              I would simply expect the vehicle to detect the path of oncoming traffic, and take any necessary correction to not enter that path in spite of confusing road markings.
              It just means a more-nuanced "decision tree" as you call it.

      • by sjbe ( 173966 )

        The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

        I guarantee you I can find examples of humans who would be fooled. There are a LOT of humans who are quite easy to mislead, and all humans can be misled sometimes. The only difference is that the tactics that fool a human will usually be different than those that fool a machine, but make no mistake that both can be fooled. There are plenty of examples [theweek.com] of people very dutifully following the instructions from their GPS into trouble despite it being painfully obvious that the GPS instructions were faulty in some way.

        • There are plenty of examples of people very dutifully following the instructions from their GPS into trouble despite it being painfully obvious that the GPS instructions were faulty in some way.

          Really? That could never happen! [google.com]

          But seriously, I completely agree. It takes very little to confuse, fool, or distract a human. There is nothing surprising about being able to do that, nor should there be something surprising about being able to do the same to a machine with sensory input. The difference is that you can reprogram the machine to not do that next time. Humans are surprisingly resistant to learning not to do dumb shit, and all it takes is a night with no sleep or a bit of trauma in their lives,

        • The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

          I guarantee you I can find examples of humans who would be fooled. There are a LOT of humans who are quite easy to mislead, and all humans can be misled sometimes.

          There are a lot of coyotes who get tricked by misleading road markers, too, and run right into a mountain side.

      • Comment removed based on user account deletion
        • If the Tesla has an issue, ALL Teslas have an issue.

          And ... one software update can fix them ALL. You can't do that with humans.

          • Or make them all go berserk. You can't do that with humans, but you surely can with things like OTA updatable Siemens centrifuges or vehicles.

      • The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

        Complete bollocks. Care to set up a situation like that and see how many drivers follow the dots blindly?

        • Re:Film at 11 (Score:5, Informative)

          by larryjoe ( 135075 ) on Tuesday April 02, 2019 @09:59AM (#58371518)

          The difference being a human that sees lane markers leading into active oncoming traffic will decide there are shenanigans and not follow.

          Complete bollocks. Care to set up a situation like that and see how many drivers follow the dots blindly?

          Unfortunately this situation occurs quite frequently at road-construction sites where new lanes are overlaid on existing lanes. The old and new sets of lane markings can make it difficult at times even for humans to know where the true lane lies. Often in these cases, the human will follow the preceding and surrounding traffic in an attempt to avoid collisions, even if the true lane appears to be elsewhere.

        • by AmiMoJo ( 196126 )

          Humans are very good at spotting things that are not really markings, such as spilt paint or ribbons blown into the road. White tape is fairly common in construction and often falls off vehicles.

          A human can spot a long tyre print made from spilt paint and not follow it. A machine... It can, but it needs to be trained and tested.

          What's most interesting here is that Tesla started out claiming Autopilot was amazing, and setting the driver attention detection system to be extremely lax. You could go for many min

        • by necro81 ( 917438 )

          Complete bollocks. Care to set up a situation like that and see how many drivers follow the dots blindly?

          There's a coyote and roadrunner joke [youtube.com] here I can't quite pin down.

        • Just because humans can be visually fooled sometimes doesn't mean we should turn over our lives to half-assed machines that are actually worse than we are.
      • While it is little consolation to the deceased, I fail to see how placing markers designed to deliberately cause a car wreck is not premeditated murder. Someone could easily be lying in wait in the woods with a rifle and shoot drivers, or waiting on an overpass to drop bricks. The latter, sadly, happens with some regularity. Murdering people is pretty easy.

        I am more interested in cases where it gets confused by routine bad situations. Construction is one, although my experience is that the car is telling

        • While it is little consolation to the deceased, I fail to see how placing markers designed to deliberately cause a car wreck is not pre-meditated murder. Someone could easily be lying in wait in the woods with a rifle and shoot drivers, or waiting on overpass to drop bricks.

          While there are fortunately very few people who are demented enough to shoot at people on the highway, there are many cases of mischievous teenagers who toss rocks from overpasses onto unsuspecting drivers. I have no doubt that these teenagers would try the "fool the self-driving car camera" trick after reading about it.

      • it's a reminder that the machine is still a *lot* dumber than a human

        Depends on the classification of dumb. We've all seen massively paradoxical things being done by drivers who were confused by the lane markings. Hell, rubbernecking leads to more crashes as attention is drawn off the road and onto the accident, meaning the driver may not see the car in front of them slamming on the brakes. So really, depending on how one defines "fooling the driver," one could easily say humans are just as easily fooled by things. The massive difference here is that while evolution of our b

      • > This is not 'a machine can be fooled like a human', it's a reminder that the machine is still a *lot* dumber than a human.

        And it always will be, until it stops being a machine.

        It's the self-preservation instinct, which most modern people forget is there, that keeps you alive in this situation, among hundreds of others.

        Some of it is genetic .. it's "just there" and science still doesn't fully understand how. Some of it is learned by constantly falling down, bumping into things, scratching
      • by lazarus ( 2879 )

        I get your point. But because we live in the age of sensationalist headlines the authors never bothered to tell you that, although it would steer into on-coming traffic, if there WERE actually on-coming traffic it would start blaring at you (loud enough to wake the dead), automatically braking, shaking the wheel, etc. Ask me how I know...

        My personal feeling about autopilot / partial self-driving (owning a car that has it) is that I am not a fan. EVs are awesome, but I think autonomous operation of a vehi

      • Exactly. Humans can be fooled - sure. But if the paint has peeled off the roadway, or is covered with snow, or somebody shot holes in the speed limit sign, or something else "out of the ordinary" our brains immediately detect "situation not normal" and we quickly come up with plan B. Computers though apparently stay on plan A until they hit a wall.

        Situation Normal, Situation Normal, Situation Normal .... Deploy Airbags!

        The story was put forward as a security event. I think in the end though that "machine is

      • If it could actually 'think' then we wouldn't be having this discussion -- but it is 100% incapable of 'thinking', never will be, therefore it's easily manipulated.
      • Comment removed based on user account deletion
    • by necro81 ( 917438 )
      I dunno, I think we'd all be better if we skipped all this talk of autonomous driving, and instead started teaching road runners how to drive for us. Things always seem to work just fine for the road runner [youtube.com].
    • by Megol ( 3135005 )

      Machines don't think, as you probably understand, which is why this is a problem: the machine can be fooled but doesn't detect that what fools it isn't logical. A human generally detects those cases and adapts to the situation. The problem is the reliance on pseudo-AI pattern matching without the actual AI, the part that would make the machine "think".
      Another problem is the sensitivity of current systems, in that small patterns that would just make a human somewhat confused are instead detected as a highly acc

    • Because machines "think" very differently from people, the optical illusions will be very different. No surprise there.

      I'd like to preface this comment with a disclaimer: I am not attacking or insulting you in this comment; you just happen to be an exemplar of a point I've made in the past.
      So-called 'machine learning', 'deep learning algorithms', 'neural networks', and everything else they're erroneously calling 'Artificial Intelligence' these days, is completely and totally incapable of 'thinking', 'cognition', 'consciousness', 'sentience', or any other major feature or phenomenon that we associate with actual intelligence.

  • Misleading headline (Score:5, Informative)

    by honestmonkey ( 819408 ) on Tuesday April 02, 2019 @08:11AM (#58370876) Journal
    They, in fact, did not "steer a Tesla into oncoming traffic", but instead made the software "think" there was a lane line where there was none. The car did go the wrong way (or would have if they'd let it), but there was no traffic. They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.
    • by Shotgun ( 30919 ) on Tuesday April 02, 2019 @08:29AM (#58370952)

      They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.

      And if the AoA sensor was reading wrong, the pilot likely would have taken control and not let the plane crash. Those "likely" sure are dangerous.

      • They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.

        And if the AoA sensor was reading wrong, the pilot likely would have taken control and not let the plane crash. Those "likely" sure are dangerous.

        You do know that this issue was a bit more complex than you seem to indicate. Part of the problem with the MAX was that the pilots DID intervene; they just didn't understand what was happening and let the aircraft trim itself nose down instead of countering with nose-up trim. In short, they *didn't* take control, at least not of the right thing. Both aircraft were 100% flyable; the pilots just had to figure out what was happening and deal with the issue in the time they had. These guys didn't have enough

      • So because technology fails one time we should give up on it?

        Both your example and the one in the article have been addressed at this point. And due to only one or two issues, the technology has been fixed on every model of that vehicle.

        That's the nice thing about technology.

        The second, significant failure in your reasoning is not considering the lives saved with the technology working correctly. While it's not an easy calculation, ignoring the benefit and focusing on the harm can easily lead you to cause m

      • The problem is you can't correct the plane's path by pulling up on the stick. In a Tesla, all you have to do is move the steering wheel.
      • Then there were all those cases where the AoA sensor was right and the pilot let the plane drop out of the sky, yanking back on the stick with the stall alarm blaring at high volume as they died.

        Thanks but no thanks. I'll take predictable, programmable, and above all fixable computers over fallible squashy blobs of barely thinking water sacks behind the wheel any day.

        There's a reason the generally accepted error rate for humans is 10% on demand and for a well designed machine it's several orders of magnit

    • by Kokuyo ( 549451 )

      One does get the impression that people are trying very hard to get us riled up but are running out of ideas to manage it.

    • Re: (Score:2, Flamebait)

      They even said, if there had been cars there, the Tesla likely would have noticed them and not blithely crashed head on.

      Oh, that's nice. So my Tesla will just get duped into crossing over to the wrong side of the road, but will swerve back (in which direction?) when it eventually encounters traffic barreling toward it in what it wrongfully considers to be its lane, while the other cars will be in the process of taking evasive action (in which direction?) due to a car barreling toward them in THEIR lane.

      Yeah, no problem whatsoever.

      • Well.. I'm not sure that swerving into the oncoming lane is the best option when somebody crosses over into your lane..

        My druthers would be to hit the binders and head for the shoulder and hopefully get the horn sounding... That seems like a better option in general. It may not be the right call all the time, but it seems like the best option in a bad situation.

        Get out of the other car's way, staying on your side of the road, get on the brakes and scrub as much energy off and increase the time before the

        • Completely agree that's a great approach when we have time to calmly and rationally think it through from the comfort of our armchairs, but in the trenches I'd expect that to break down in somewhat inverse proportion to the amount of reaction time available, experience of the driver, familiarity with the road, etc.

          And that's for human drivers -- when we're debating whether it's ok to have this sort of behavior out of self-driving cars, we need to consider what can be expected to happen when the oncoming tra

          • You do what you practice in advance, or it's anybody's guess what you will do. I suggest that if you don't have the presence of mind in panic situations, you go out and practice; actually, even if you don't tend to panic, practice. Have somebody randomly declare an emergency and time how long you take to respond correctly. Even mentally walking through these exercises will help prepare you for when it really happens.

            Far too often we run headlong into situations without a plan for when things go wro

    • You mean like it wouldn't blithely crash head-on into a concrete divider?

      Ooops.

  • "The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so and can manually operate the windshield wiper settings at all times."

    While I agree that it shouldn't be a realistic concern, peop

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Real airline pilots have tons of training, understand the limits of the systems, and are literally PAID over six figures to do the terribly boring job of monitoring the system. Tesla owners could have zero training, are certainly not privy to the actual system limitations, and are shown tons of marketing indicating the main benefit of autopilot is the ability to NOT pay attention.

      But I can see how they are "basically the same thing".

  • Because Tesla uses purely optical cameras, these hacks are visible to the human eye once pointed out, or during an accident investigation. Lidar-based navigation, if hacked using similar techniques, would leave nothing visible. You would need special lidar forensic equipment to even know the lidar had been fooled.

    Human drivers too would be affected if someone adds fake lane marking. I remember a prankster was arrested for rearranging the traffic cones in a construction zone to create two colliding lanes. T

  • by bravehamster ( 44836 ) on Tuesday April 02, 2019 @08:20AM (#58370916) Homepage Journal

    I found that it's super easy to make human drivers crash with a simple $5 laser.

    It's amazing how many of our systems only work with the underlying assumption that we're not actively trying to murder each other at any given moment.

    • by necro81 ( 917438 )

      It's amazing how many of our systems only work with the underlying assumption that we're not actively trying to murder each other at any given moment.

      Dammit! This is why we can't have nice things.

    • I'm even more amazed that we all place our safety into the hands of total strangers who shouldn't be trusted.
  • Were these engineers contracted out to Boeing to design their MCAS system for the 737max?

    Seriously, the design pattern of a life-critical system that makes decisions based on a single set or type of sensor is asinine. Boeing should have had the MCAS's AoA indicator cross-checked with velocity, GPS, and engine data. Tesla should have the wiper controls' visual sensor cross-checked with a humidity sensor, and the lane sensor cross-checked with a LIDAR. Isn't this just basic stuff? I don't consider myself a gen

  • Computers will never be people. I don't think we WANT them to become that smart. Imagine the moral questions on that.

  • These and other funky glitches are reasons why I wouldn't really want to fully depend on the Tesla system. Google's car, on the other hand, uses a much larger complement of sensors and a 3D space-mapping LIDAR to avoid these issues, unless you're going to go as far as placing a life-sized styrofoam car or panel onto the road, which would almost fool real-life drivers as well. Google believes in making sure the system fully works instead of taking dangerous compromises.

  • by clawsoon ( 748629 ) on Tuesday April 02, 2019 @08:36AM (#58370990)

    a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so

    You're not in control, but you have to be constantly ready to take control. You don't have insight into its mental processes so you never know what it's about to do, but you have to be constantly ready to react to what it just did.

    And people find driving with Autopilot to be less stressful than driving without it? I guess I'm different from most people.

    • And people find driving with Autopilot to be less stressful than driving without it? I guess I'm different from most people.

      Because in most cases the system works fine, and people get complacent.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      While "autopilot" is engaged, you do have visibility to "what the car sees" on the screen. That tells you what obstacles it sees as well as where it thinks the vehicle lanes are. If they don't seem to make sense to what you see, then it's time to take over.

      Like the "autopilot" in planes, when the cruise control takes over, it reduces cognitive load because the driver doesn't need to pay attention to as many things. That translates into less stress and the ability to pay attention for longer.

      If the driver d

      • While "autopilot" is engaged, you do have visibility to "what the car sees" on the screen. That tells you what obstacles it sees as well as where it thinks the vehicle lanes are. If they don't seem to make sense to what you see, then it's time to take over.

        Like the "autopilot" in planes, when the cruise control takes over, it reduces cognitive load because the driver doesn't need to pay attention to as many things. That translates into less stress and the ability to pay attention for longer.

        If the driver does other things instead, that's really the driver's fault. Though Tesla's marketing isn't really helping on that front, either.

        Bumping this up for visibility, because the AC is spot on.

        Everything I've heard from Tesla owners driving moderate to long distances is that it's far less stressful with autopilot. Much less mental fatigue, because there's a lot less you need to do. It's not nothing, but when your car largely stays in its lane, slows for traffic ahead of you, auto-brakes if there's an obstruction, monitors your blind spots, turns on the wipers when it rains, and figures out how far you can go before recharging and suggests

    • I guess I'm different from most people.

      Probably because you have senseless distrust in electronics. If you're nervous about a computer steering your car, I can't imagine what a wreck you must be on the road with all those other variables you can't control.

  • Work zones with lines all over the place may trigger this??

    • In my state we have a "no cell phone use in work zones" law, and giant signs before all work zones notifying drivers. I'd bet that the same will happen with self-driving cars as they become more popular. And like the no-phones law, most people will obey it, some won't and will get fined, some won't and will get into accidents and fined, and some will get into accidents and injure someone else and get fined and potentially also get jail time.

  • A plastic bag over a stop-sign should work too and it would get the non-Tesla drivers as well.
    Also continuing the middle line into the abyss and hiding the original line that goes around.
    Putting a fake stop-sign in the middle of the highway should be fun too.

    No need to be a 'researcher' for stuff like that.

      Putting a fake stop-sign in the middle of the highway should be fun too.

      Or painting a fake tunnel on a rock.

        Putting a fake stop-sign in the middle of the highway should be fun too.

        Or painting a fake tunnel on a rock.

        The great thing about painted tunnels is that birds can go through them but predators cannot.

        • The great thing about painted tunnels is that birds can go through them but predators cannot

          Provided you use ACME paint, of course.

    • by Nkwe ( 604125 )

      A plastic bag over a stop-sign should work too and it would get the non-Tesla drivers as well.

      Would it? I suspect it could trip up a bad driver, but any decent driver should be able to handle the situation safely.

      When I approach an uncontrolled intersection (one without a traffic light or a stop sign / other signage) I look for cross traffic and am prepared to stop. Part of my evaluation as to whether an intersection is uncontrolled is to look at both the signage intended for me AND the signage intended for the cross traffic. If I don't have a stop sign and the cross traffic doesn't have a stop sig

  • "...not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so and can manually..."

    Oh yeah, I'm sure the majority of people can be trusted to remain ready at all times to take over the system called "Autopilot"... Most would assume they can just fall asleep and the thing will magically drive itself. People are dumb, crash reports at 11.
  • Would stuff like autopilot be considered less controversial?

    I'd guess it would get promoted by the company as something other than quite such an autonomous self-driving platform.

    Volvo (and I'm sure others, I've only been exposed to Volvo's system personally) has what amounts to a nearly self-driving system -- distance sensing cruise, lane centering, you very nearly don't need to "drive" to drive, yet there's not nearly the constant promotion/hostility to their system and other similar ones.

    Even my lowly Sub

  • Computers suck at processing analog inputs. It wouldn't be hard to spoof a car with minimal effort, especially something as primitive as an early autonomous vehicle. Not that other autonomous vehicles will fare any better. Assuming they ever appear on the roads without drivers (not for a long time), people will make sport of griefing them: gum on sensors, traffic cones on their roofs, boxes laid in front of them, graffiti tags, etc. Even without the griefing it won't be surprising if they become so frequently stuck, bloc
