Transportation AI

Why Self-Driving Cars Are Still a Long Way Down the Road

moon_unit2 writes "Technology Review has a piece on the reality behind all the hype surrounding self-driving, or driverless, cars. From the article: 'Vehicle automation is being developed at a blistering pace, and it should make driving safer, more fuel-efficient, and less tiring. But despite such progress and the attention surrounding Google's "self-driving" cars, full autonomy remains a distant destination. A truly autonomous car, one capable of dealing with any real-world situation, would require much smarter artificial intelligence than Google or anyone else has developed. The problem is that until the moment our cars can completely take over, we will need automotive technologies to strike a tricky balance: they will have to extend our abilities without doing too much for the driver.'"
  • by JDG1980 ( 2438906 ) on Tuesday April 16, 2013 @04:23PM (#43466437)

    This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

    • by icebike ( 68054 ) on Tuesday April 16, 2013 @04:38PM (#43466557)

      I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

      That depends entirely on the failure mode.

      "Fail to Death" is actually acceptable to society as a whole as long as the dead person held his fate in his own hands at some point in the process. This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".

      When the driverless car crosses the tracks in front of an oncoming train it will be regarded differently. It won't matter that the driver was a poor driver with a lot of fender benders; most of those aren't fatal.

      In spite of that, I believe Google is far closer than the author gives them credit for. They have covered a lot of miles without accidents.

      Granted, we don't know how many times there were errors in the Google cars, where one part of the system says there is nothing coming so it's safe to change lanes, and the human or another part of the system notices the road is striped for no-passing and prevents it. Google is still playing a great deal of this pretty close to the vest.

      • This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".

        Ah but see here, the driverless cars have full 360-degree vision, and they already stop for trolleys and errant pedestrians crossing the road illegally. They do so without honking and swearing at the git too. So, as you say, if you ignore the copious warnings the driverless car's self-diagnostic gives you that its optics are fucked, and manage to override the fail-safe and wind up under a train, then yes, people will still regard it as "your own damn fault". Not only that, but YOUR insurance is paying to clean up the mess.

        • by VanessaE ( 970834 ) on Tuesday April 16, 2013 @09:00PM (#43468647)

          One problem it won't solve is the mindless moron behind you who nearly slams into your back end because you had to brake suddenly. This happened today, in fact: I was on my way back from the store, doing about 35mph in a zone so marked, when a stray dog started to cross the street in front of me, and I instinctively stomped the brakes to avoid killing an innocent animal. Some little dipshit on a little blue moped was following too close and sounded his horn, then flipped me off two or three times, as if I were the one in the wrong.

          • One problem it won't solve is the mindless moron behind you who nearly slams into your back end because you had to brake suddenly. This happened today, in fact: I was on my way back from the store, doing about 35mph in a zone so marked, when a stray dog started to cross the street in front of me, and I instinctively stomped the brakes to avoid killing an innocent animal.

            This is a problem driverless cars will fix. Unlike you, a computer doesn't react instinctively. It has 360-degree vision, knows where all of the surrounding vehicles are, including distances and velocities, and is capable of accurately deciding whether or not braking is safe. Assuming it can also distinguish dogs from children (very likely, I think, though I don't actually know), it should be capable of deciding that it's safer to simply run the dog over if that's the situation.
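
            As a rough sketch of the kind of decision logic involved (all numbers, thresholds, and sensor values here are invented for illustration; nothing below is Google's actual algorithm):

            # Toy braking decision using time-to-collision (TTC) for the
            # obstacle ahead and the tailgater behind. Purely illustrative.

            def time_to_collision(gap_m, closing_speed_ms):
                """Seconds until impact if nothing changes; infinite if the gap is opening."""
                if closing_speed_ms <= 0:
                    return float("inf")
                return gap_m / closing_speed_ms

            def braking_decision(front_gap_m, front_closing_ms, rear_gap_m, rear_closing_ms):
                ttc_front = time_to_collision(front_gap_m, front_closing_ms)
                ttc_rear = time_to_collision(rear_gap_m, rear_closing_ms)
                if ttc_front < 1.5 and ttc_rear > 3.0:
                    return "brake hard"        # danger ahead, room behind
                if ttc_front < 1.5:
                    return "brake moderately"  # tailgater behind: split the risk
                return "continue"

            # Dog 20 m ahead, closing at 15 m/s; moped 5 m behind, closing at 2 m/s.
            print(braking_decision(20, 15, 5, 2))  # -> "brake moderately"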

        • by DeBaas ( 470886 )

          They do so without honking and swearing at the git too.

          So they're not quite there yet?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Exactly. Just because a self-driving car can't respond to outlier situations as well as a human can doesn't mean the car shouldn't be driving. By definition, those outlier situations aren't the norm. Most accidents are caused by someone picking something up they dropped, looking the wrong way at the time, changing the radio station, etc., or inebriation. These problems would be solved by a self-driving car, and while I don't have any numbers to back it up, something tells me that eliminating these will far outweigh the rare outliers.

      • by icebike ( 68054 )

        Most accidents are caused by someone picking something up they dropped, looking the wrong way at the time, changing the radio station, etc., or inebriation.

        Really? I'd like to see the source of your assertion.

        Granted distracted driving is the darling of the press these days. But that doesn't make it the major contributor to fatalities.

        In fact, fatalities by all causes are on a steady year-by-year decline [census.gov] and have been for 15 years.

        Drunk driving [edgarsnyder.com] still accounts for a great deal: 31% of overall traffic fatalities in 2010. Half of traffic deaths in the 21-to-25 age group involved drinking drivers.

        Distracted driving hovers around 16% of fatal crashes by comparison. [census.gov]

        • by adamstew ( 909658 ) on Tuesday April 16, 2013 @08:33PM (#43468479)

          Having self-driving cars would eliminate the "drunk driving" cause of traffic fatalities, as well as all of the "distracted driving" fatalities. That is, according to your own numbers, 47% of all driving fatalities.

          Also, according to Wikipedia [wikipedia.org], the speed that someone was driving was listed as the cause of 5% of fatal crashes, and "driving too fast for the road conditions" was listed as another 11% of fatal crashes. A self-driving car would probably also eliminate almost all of these crashes... so there is another 16% of all fatal crashes.

          In fact, there was a study done in 1985 [wikipedia.org] that concluded that the human factor was SOLELY responsible for 57% of all fatal car crashes. It also found that the human factor was entirely or partly responsible for 93% of all fatal crashes... these are the situations where there may have been bad road conditions or other outside factors, but the driver didn't react in the best way to avoid an accident. These numbers are almost 30 years old, but they are probably still representative of the current stats.

          A self driving car should eliminate nearly all of the 57% of crashes where the human factor contributed solely. It should also eliminate a healthy portion of the remaining 36% of fatal crashes where the human factor was a contributor as the car can be programmed to respond in the best possible way to a huge number of road conditions.

          Based on these numbers, I believe it is reasonable to say that self-driving cars should eliminate about 75% of all fatal crashes. Technology and machinery are also, when compared to the human factor, extremely reliable... particularly when designed correctly. I have no problem saying that self-driving cars will eliminate 75-90% of all fatal crashes.

          I am also certain that there will be some outlier situations where, if the driver had been in control the entire time, a fatal accident would have been avoided... the technology failed and the car drove over a cliff, or the signal sent to cars about a railroad crossing didn't activate, etc., etc. But those incidents would be offset by a HUGE margin by the number of incidents that the self-driving cars prevented. These same arguments were made against seat belts! There have probably been examples where someone wearing their seat belt drove into a lake and drowned because the seatbelt kept them from getting free of the car. But it is proven that seat belts save FAR more lives than they cost.
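
          A back-of-the-envelope version of that arithmetic (the category shares are the figures cited above; the elimination rates are my own guesses, not data):

          # Rough estimate from the 1985 study's percentages cited above.
          # The elimination rates (0.95 and 0.50) are guesses for illustration.

          solely_human = 0.57   # human factor solely responsible
          partly_human = 0.36   # human factor partly responsible

          prevented = solely_human * 0.95 + partly_human * 0.50
          print(f"Estimated fatal crashes prevented: {prevented:.0%}")  # ~72%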

          • by AK Marc ( 707885 )
            I would say that your numbers are correct for the one person in a self-driver surrounded by human drivers. But I think your numbers are way off for a land of 100% self-drivers. It would eliminate closer to 100% of crashes. Nobody can come up with anything that's a reasonable crash case. Most of them are along the lines of "someone hiding behind a car jumps out in front of you as you pass" or "a meteor lands on your car" kind of scenarios. Even ones like "animals run out in front of you" aren't left as
        • Re: (Score:3, Insightful)

          by tompatman ( 936656 )
          Good point. So, of these, who is the better driver: the doctor who just finished an 18-hour shift, half asleep at the wheel; the idiot teen texting while driving; the guy who had one too many at the bar; or Google's self-driving car? I know which one I'd vote for.
    • by Hentes ( 2461350 )

      And those bad drivers get their licence revoked, just like bad software should be banned. It's not singling out autonomous cars.

    • Re: (Score:2, Insightful)

      by tnk1 ( 899206 )

      Point is, it is *not* better. Yes, it is probably better at normal driving situations, and would certainly be welcome on a nice open highway. I imagine it would excel at preventing fender benders, which would be very nice indeed.

      However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well. And the point of the article is that the automation would still require a driver for that, but the automation would actually cause the

      • Re: (Score:2, Funny)

        by Sponge Bath ( 413667 )

        ...probably better at normal driving situations

        Will auto-pilot have the blood lust to take out the squirrel crowding your lane? Humans are number one!

      • by Jeremi ( 14640 ) on Tuesday April 16, 2013 @05:55PM (#43467299) Homepage

        However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well.

        In principle, at least, an automated system could react better than a human to sudden emergency situations, because a computer can process more input and make decisions faster than a human can, and also (just as importantly) a computer never gets bored, sleepy, or distracted.

        Dunno if Google's system reaches that potential or not, but if not it's just a matter of improving the technology until it can.
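
        For a sense of scale, compare the distance covered during reaction time (1.5 s is a common rule-of-thumb for human perception-reaction; the 0.1 s machine figure is an assumption, not a measurement):

        # Distance traveled before braking even begins, at highway speed.
        # Human figure is a rule-of-thumb; the computer figure is assumed.

        speed_mph = 75
        speed_ms = speed_mph * 0.44704  # mph -> m/s, about 33.5 m/s

        for label, reaction_s in [("human", 1.5), ("computer", 0.1)]:
            print(f"{label}: {speed_ms * reaction_s:.1f} m before braking begins")
        # human: 50.3 m, computer: 3.4 m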

      • by Anonymous Coward on Tuesday April 16, 2013 @06:34PM (#43467643)

        I'm sorry, but your comment is extremely clueless. You are asking about sudden situations at high speed. How would one end up in such a situation? By ignoring the law and safe driving practices. Just by obeying the speed limit, and reducing it further when sensor data gives warnings, the car can avoid such problems in the first place.

        Almost 90% of human drivers confuse their luck in driving at 80+ mph in bad weather (the equivalent of playing Russian roulette with a slightly higher-capacity revolver) with driving ability.

    • This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable.

      Is it a mistake, or your own unsupported bias against human drivers?

      there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job

      That's a claim I keep hearing... but I don't buy it. For all the really bad drivers supposedly out there, in a wide variety of road conditions, lighting, weather, etc... there's

      • by PraiseBob ( 1923958 ) on Tuesday April 16, 2013 @06:07PM (#43467415)
        There are roughly 200 million drivers in the US. They have roughly 11 million accidents per year.

        http://www.census.gov/compendia/statab/cats/transportation/motor_vehicle_accidents_and_fatalities.html [census.gov]

        The catch is, nearly all traffic accidents are preventable by one of the parties involved. Most are at low speeds, and most are due to the driver not paying attention to the situation around them. Next time you are at a busy traffic light, count the cars around you. Chances are one of them will be in an accident that year. Now do that every time you stop at a traffic light....
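
        The arithmetic behind that, using the figures above (with a naive independence assumption, purely to illustrate):

        # Chance at least one of N nearby cars crashes this year,
        # naively treating drivers as independent (they aren't, really).

        drivers = 200e6
        accidents_per_year = 11e6
        p = accidents_per_year / drivers  # ~5.5% per driver per year

        n = 15
        p_any = 1 - (1 - p) ** n
        print(f"{p:.1%} per driver; {p_any:.0%} among {n} cars at the light")
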
      • okay... how about some statistics to back up the claim: Human factors in Driving Fatalities [wikipedia.org]

        The human factor is SOLELY responsible for 57% of ALL driving-related fatalities.
        The human factor is partially responsible for another 36% of all driving-related fatalities.

        This means that the human factor is a factor in 93% of all driving-related fatalities. With a self-driving car, even if it only eliminates HALF of the fatalities where the human factor was only partly responsible, you're still reducing the number of

    • by gorehog ( 534288 ) on Tuesday April 16, 2013 @05:35PM (#43467119)

      I don't think this is true. No auto maker wants to deal with the insurance overhead involved in installing a "marginally flawed driverless system." Can you imagine that meeting? The moment when some insurance executive is told by GM, Ford, Honda, Toyota, Mercedes, BMW, whoever, that their next car will include a full-on autodrive system? The insurance company will raise the manufacturer's insurance through the roof! Imagine the lawsuits after a year of accidents in driverless cars. Everyone would blame their accidents on the automated system. "Not my fault, the car was driving, I wasn't paying attention, can't raise my rates, I'll sue the manufacturer instead." A few go-rounds like that and no one will want the system installed anymore.

      • We'll just have to install them ourselves.

      • They could get around this by having the insurance company work with the manufacturer, so you lease the automated driving software and one of the terms of the lease is to pay for the manufacturer's insurance cost for the use of the automated car. Or you have the insurance contract written to support both automated and non-automated driving. Automated driving should have cheaper insurance if it works better than the average driver, which makes it a win for everybody.

      • by spire3661 ( 1038968 ) on Tuesday April 16, 2013 @07:01PM (#43467849) Journal
        Actually, some day it will get to the point where it will cost more in insurance to self-drive.
      • by lee1026 ( 876806 )

        But for fleet operations, that won't matter - what GM will have to pay is what UPS will save in terms of insurance costs. UPS will know how much they will save in terms of insurance, and will be willing to pay more for a car.

      • by Anubis IV ( 1279820 ) on Tuesday April 16, 2013 @07:43PM (#43468209)

        There are two obvious solutions to that problem:
        1) Subsidize the insurance carried by the car manufacturers in some way. It doesn't have to be done via the government either, since the manufacturers could simply introduce a new, ongoing cost with each purchase that arranges for the purchaser of the vehicle to pay their portion of the insurance cost being borne by the manufacturer. It's simple to do and the cost per person will go down over time as more people purchase the cars and rates go down due to a decreasing number of accidents as a result of human error. Of course, initial costs will be higher, but...

        2) Eventually it will become safer to be driving in an automated car than not, and when that happens, there will start to be incentives from insurance companies to switch over to the self-driving cars. As a result, costs for personal insurance will eventually go down for passengers in driverless cars and rates will go up for people who choose to drive their own cars, since they'll increasingly be the ones responsible for causing accidents. As such, simple economics will eventually force most people to switch to driverless cars.

        The second will happen on its own. Whether the first happens or not remains to be seen, but it'd certainly ease things along.

      • Compare to airlines (Score:3, Interesting)

        by beinsvein ( 2752465 )

        Airlines are liable for around $175,000 for each passenger death, a figure set by IATA. A similar figure could and should be set by law for autonomous vehicles. So you do the math and find that per car, with a reasonably safe driving system, that's no big deal, whether it's covered by your car insurance or the manufacturer's liability.
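
        Doing that math with some assumed figures (the liability cap is the IATA number above; the fatality rate is roughly the 2013-era US figure, and the annual mileage is an assumed typical value):

        liability_per_death = 175_000   # airline-style cap, per the comment above
        fatal_per_mile = 1.1 / 100e6    # ~1.1 deaths per 100M vehicle-miles (US, ~2013)
        miles_per_year = 12_000         # assumed typical annual mileage

        expected = liability_per_death * fatal_per_mile * miles_per_year
        print(f"Expected liability: ${expected:.2f} per car per year")  # ~$23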

    • by ArsonSmith ( 13997 ) on Tuesday April 16, 2013 @05:37PM (#43467129) Journal

      It'll be adopted just as quickly as nuclear power, which is orders of magnitude safer than coal, has taken off despite a few relatively minor and contained accidents.

      • Except the "logic" of crippling fear from badly-estimated risks doesn't apply so much here. While the general public is easily convinced that a single nuclear failure might kill thousands and yield cities uninhabitable, it would be harder for the "anti-automatic-car lobby" (who the heck would that be? "big auto" would love to sell everyone new cars) to make the masses terrified of automatic cars going on mass-murder sprees. Especially since, unlike nuclear plants, cars are something the general public can o

        • I know I don't want just any car that could up and, on its own, slam into people waiting in line at the movies. I'm sure they'll all be networked together as well, so it will be the start of the robot uprising, with thousands of cars running people down at random. I saw the documentary of the last time it happened, called Maximum Overdrive.

    • A driverless car will certainly be overall more attentive than a human driver, but it also needs to be able to handle the unexpected things a human driver handles. The mundane tasks, sure - but how do you handle things like a tire blowout in a curved section of road with sand on it? As long as there are relatively common scenarios that crop up that a human can handle some reasonable percent of the time and the software can't, it's not ready for prime time. How do you failover when road conditions exceed the thresholds of the car?
      • by aXis100 ( 690904 )

        How do you failover when road conditions exceed the thresholds of the car? Uh... slow down or stop maybe???

        Just because you might need to transfer control to the driver doesn't mean you need to do it at 100 miles per hour.

        Computers are far better at understanding their limitations than human drivers are, and when they start getting reduced sensor data or confusing conditions they will be programmed to be conservative, unlike the human drivers who keep barreling on until they are at the edge of disaster.
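
        That degraded-mode logic might look something like this (the states and thresholds are invented for illustration, not taken from any shipping system):

        def fallback_action(sensor_confidence, speed_mph):
            # Conservative policy: the worse the sensing, the less the car attempts.
            if sensor_confidence > 0.9:
                return "normal operation"
            if sensor_confidence > 0.6:
                return f"slow to {min(speed_mph, 45)} mph and alert the driver"
            if sensor_confidence > 0.3:
                return "pull over and request driver takeover"
            return "controlled stop with hazard lights"

        print(fallback_action(0.5, 75))  # -> "pull over and request driver takeover"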

    • This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

      It becomes workable when it achieves ALL of these things:

      • performs as well as a human being in most situations;
      • doesn't perform significantly worse than a normal human being in any but extremely rare situations;
      • is highly reliable; and
      • is cheaper than the alternatives.

      The first obvious application is to replace cab drivers.

    • by mjwx ( 966435 )

      Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

      And herein lies the problem.

      Driverless cars will have to live side by side with human-driven cars for years, if not decades, and as you pointed out, there are a lot of bad drivers out there. A self-driving car will need to be proactive in protecting its passengers from drivers who are pretty damn unpredictable. Right now we can't even get everyone doing the same speed or indicating; a driverless car has no hope of dealing with that at our current level of technology.

      Right now, you can't even g

  • Taxis first (Score:3, Insightful)

    by Guano_Jim ( 157555 ) on Tuesday April 16, 2013 @04:25PM (#43466459)

    I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

    The first robot taxi company is going to make a mint when they integrate a smartphone taxi-summoning app with their robo-chauffeur.

    • No, they aren't (well, at least not in the U.S.), because the main reason there are not more taxis in most major U.S. cities is that city governments strictly limit the number of taxis allowed in the city.
      • by icebike ( 68054 )

        Most governments limit taxi services to control safety.

        It's the taxi companies (and unions) themselves that lobby long and loud for control of quantity.

    • by Anonymous Coward on Tuesday April 16, 2013 @04:43PM (#43466595)

      Maybe auto-drive-only express lanes, where cars will go at high speed, nearly bumper to bumper.

    • by Hentes ( 2461350 )

      Screaming down the highway is much easier, actually.

    • Screaming down the highway at 75mph is *exactly* where I want a self-driving car. I live on the Canadian prairies; the nearest large city is 5 hours of highway driving away, the next nearest 7. I would _love_ to put my car on autopilot for that trip.

      Also, on the highway you generally have long straight sections, sight lines are long, cars are further apart, there are no pedestrians, and often you have divided highways so you don't even need to worry about oncoming traffic.

      • Canadian prairies? In that degenerate case (computationally speaking - no offense to our northern neighbors) I think you already have the technology. Just point the car straight, hit the cruise control, and set an alarm clock for a few minutes before you get to your destination. It's worked for me when I've driven on the American prairies.
    • Re:Taxis first (Score:5, Informative)

      by ebno-10db ( 1459097 ) on Tuesday April 16, 2013 @05:10PM (#43466911)

      I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

      On the contrary, "screaming down the highway at 75mph" (never been on the Autobahn, have you?) is a lot easier to automate than driving around a city block. Similarly the easier part of a plane's autopilot is the part that handles cruising at 500mph at 30,000 feet. The numbers are impressive, but the control is comparatively easy.

      On a highway there are no traffic lights or stop signs, and there are nicely marked lanes and shoulders. Just stay between the lines at a constant speed and hit the brakes if something appears in front. Compare that to trying to figure out if some guy who's not watching is going to step off the curb and into your way, or if the car pulling out of a parking spot is going to wait for you to pass.

    • I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

      This makes sense. I would be willing to drive a self-driving car in a crowded city, where the maximum speed is 25, and the risk of serious injury is minimal if I get in a collision. Driving at 75 down the highway has a lot bigger risk if things go wrong.

      Of course, looking at it the other way, I wouldn't want to be a pedestrian with a bunch of self-driving cars going around me. A small sensor error could lead to catastrophe.

      • maximum speed is 25, and the risk of serious injury is minimal if I get in a collision

        Warn me beforehand - I don't want to be a pedestrian in that city.

  • by Anonymous Coward

    ... because they will be, who is going to be sued?

    If I was a car manufacturer I don't think I'd be mad keen on going down the self-driving route - it's only going to mean more lawsuits.

  • If anyone is wondering why they say Google's cars are not good enough, here is the only section of the article that addresses that point:

    [Google] says its cars have traveled more than 300,000 miles without a single accident while under computer control. Last year it produced a video in which a blind man takes a trip behind the wheel of one of these cars, stopping at a Taco Bell and a dry cleaner.

    Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

    • What a vague statement! This entire article is lacking in any real specifics or citations.

      • Not at all. The article is full of specifics. The article content merely has no relation to the Slashdot headline. Imagine that.
    • What's that, Google's hype is just, well, hype? Say it ain't so.

      Numerous car manufacturers have been working on self-driving cars for many years. In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane). As limited as it is, that's a lot more real world application than you're likely to see out of Google anytime soon. Contrary to the beliefs of some Silicon Valley and Google hy

      • In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane).

        FYI the article describes the author driving a 2013 Ford Fusion with exactly those features. Also mentions a 2010 Lincoln with auto-parking.

        • I RTFA, and I think that's impressive. What struck me about Mercedes is they're actually going to be selling cars with those features in less than 6 months. I don't think the Ford features are going to be sold soon, though the article wasn't clear. The automated parallel parking has been around for a while, and in all fairness should count as a limited form of self-driving car.

          It reinforces my point about Google. While they're hyping completely self-driving demos, various car makers have been doing the ha

          • I don't think the Ford features are going to be sold soon, though the article wasn't clear

            Looks like Ford's selling it now [ford.com].

            • Yeah, the active parking assist, which was on the Lincoln driven by the article's author. A quick search though suggests that the highway stuff on the Ford is also production. Unfortunately I don't have the time right now to be sure. Ok, score one (three?) for the bottom-up self-driving people.
      • by mlts ( 1038732 ) *

        The 2014 Sprinter [1] has this technology. It not only auto-brakes when someone tries a "swoop and squat" (easy insurance money in most states), but can also compensate for wind, alert the driver if there is someone in a blind spot, and automatically follow a lane. IIRC, there is even an adaptive cruise control which doesn't just hold a speed, but also automatically keeps a distance from the car in front, so cruise control can be used safely in more places.

        [1]: Sprinters are weird beasts. Stick a Freightliner logo on the back of a new one, and the local homeowner's association starts sending notices that "company vehicles" are prohibited in driveways unless active work is being done. Pull two torx screws out of the flap between the double doors and attach a Mercedes logo, and the same HOA types now consider the vehicle an asset to the neighborhood due to the make.

        • Stick a Freightliner logo on the back of a new one, and the local homeowner's association starts sending notices that "company vehicles" are prohibited in driveways unless active work is being done. Pull two torx screws out of the flap between the double doors and attach a Mercedes logo, and the same HOA types now consider the vehicle an asset to the neighborhood due to the make.

          Wow, yet another reason to stay away from HOAs.

    • by icebike ( 68054 )

      What road do you know of where a human hasn't driven the route at least once?

      The street view cars have driven the bulk of roads in most major US cities at least once documenting everything along the way.

      Left unsaid by Google is how many human interventions were required (for whatever reason).

      • by sl149q ( 1537343 )

        Can't find the article, but there was a mention that in the (small number of) incidents where a human operator took over control, they reviewed the logs afterwards, and in all cases the computer would have taken either the same or an equally safe action. These were mostly related to pedestrians and jaywalking, if I recall.

    • by Kjella ( 173770 )

      Impressive and touching as this demonstration is, it is also deceptive. Google's cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn't purely to reassure pedestrians and other motorists. No system can yet match a human driver's ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

      Google has cars driving around almost everywhere for their map feature; I'd have no problem with a first edition limited to what they already know. And they're legally obligated to have a driver ready to take over, even if they wanted to go solo. Miiiiiiiiinor detail.

    • So, most trips in cars are repetitive - you drive to work every day. Your Google-O-Matic could 'learn' the route over time, follow your driving habits, confer with other Google-O-matics about their experiences. Maybe the first time you drive a route, you go manual. Not such a big deal.

      Boats have had autopilots for years; most of them are pretty primitive. Planes likewise. The captains of both kinds of vessel are responsible for them at all times. Same as with an autonomous car. The driver decides when

    • by sl149q ( 1537343 )

      The Google cars don't REQUIRE a human. They CAN operate fine without one. They MAY NOT operate without one.

      They have a human behind the wheel so that they comply with the various licensing regimes. As long as there is a human behind the wheel capable of assuming control the current laws in most places are fine with the computer controlling the car.

  • Great AI (Score:2, Funny)

    by Anonymous Coward

    Imagine if you had a car with a great AI, better than what is out there today. You just tell it to drive somewhere and it does. It never gets lost, knows where all addresses are, knows how to park, etc. Basically everything.

    There'd still be people that did things like, "It seemed to be going too fast so I slammed on the brakes and the car spun out of control and into a ditch. If it weren't for your AI, this never would have happened! I want a million dollars." or "I was sitting in the driver's seat dru

  • by maliqua ( 1316471 ) on Tuesday April 16, 2013 @04:36PM (#43466547)
    I like to think of them more as personal variable-path trains. What's really needed to make this work is a road infrastructure designed around it. When the focus moves away from AI that can replace a regular driver and towards a combination of smart roads and smart cars that work together, then we will have what the hype is suggesting.
    • What's really needed to make this work is a road infrastructure designed around it.

      Came here to say that myself.

      Also, the issue of liability is another major barrier; until the government figures out who to blame when something goes awry and one of these things causes damage to life/property, you can bet your bonnet that Uncle Sam will not allow driverless cars on "his" streets.

      • Uncle Sam already allows driverless cars, it's just that they're all test platforms 'insured' by the company doing the developing. It's for liability purposes that somebody is behind the wheel of the google cars currently.

        Still, there are a number of possibilities I can see. Much like how the federal government set down specific rules on how nuclear power liability will be addressed, the same can be done with cars. I see several possibilities (but I'm writing this on the fly, so I'll probably miss stuff):
        1

        • Oh yeah, forgot part of #1: As part of making the manufacturer liable for accidents caused by the AD system, even limited, the manufacturer would build the liability into the price of the system, enabling dirt-cheap insurance if you can afford the auto-drive.

  • Google has apparently been using this technology for their StreetView cars, and it apparently meets all the straw-man requirements of the article.

    So...do they have more bogus requirements that need to be met?

    Driverless cars don't need to be able to handle any possible situation. Most drivers can't handle those situations either - witness the large number of accidents that happen every day. The driverless cars just have to be better, and have superior liability coverage, than human drivers.

    • by Hentes ( 2461350 )

      I haven't seen Google participate in any independent test. They are making big claims, but I wouldn't just accept the word of a company praising its own product when it has failed to provide proof for years.

  • If the car's AI cannot handle the situation, control of the car could be transferred to a central location where a human could take over. Another option would be to get the car to a safe spot and have a human come out and take over.

    Also, the cars don't need to go anywhere at any time under any conditions to be useful; they just need to be able to follow pre-determined courses safely. In the event of an accident, detour, heavy traffic, or even bad weather, the automatic driving cars could be sent home or told t

  • Really, no one read the headline and thought: "They're a long way down the road because they're driving away without us!!!"
    • by neminem ( 561346 )

      I didn't, but I'd give you +1 funny for the visual (if I had any mod points, and if I hadn't already posted a comment :p)

  • by Animats ( 122034 ) on Tuesday April 16, 2013 @05:00PM (#43466803) Homepage

    The article is about semi-automated cars, not fully automated ones. Semi-automated cars are iffy. We already have this problem with aircraft, where the control systems and the pilot share control. Problems with being in the wrong mode, or incorrectly dealing with some error, come up regularly in accident reports. Pilots of larger aircraft go through elaborate training to instill the correct procedures for such situations. Drivers don't.

    A big problem is that the combination of automatic cruise control (the type that can measure the distance to the car ahead) plus lane departure control is enough to create the illusion of automatic driving. Most of the time, that's good enough. But not all the time. Drivers will tend to let the automation drive, even though it's not really enough to handle emergency situations. This will lead to trouble.

    On the other hand, the semi-auto systems handle some common accident situations better than humans. In particular, sudden braking of the car ahead is reacted to faster than humans can do it.

    The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras. The semi-auto systems don't have that much information coming in. The Velodyne rotating multibeam LIDAR still costs far too much for commercial deployment. (That's the wrong approach anyway. The Advanced Scientific Concepts flash lidar is the way to go for production. It's way too expensive because it's hand-built and requires custom sensor ICs. Those problems can be fixed.)

    • The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras.

      They have the sensory information needed for situational awareness, which can be a long way from having situational awareness.

    • The article is about semi-automated cars, not fully automated ones. Semi-automated cars are iffy. We already have this problem with aircraft, where the control systems and the pilot share control. Problems with being in the wrong mode, or incorrectly dealing with some error, come up regularly in accident reports.

      I can't help but think we're going about this the wrong way. Automating transportation in 3D is really hard. Automating it in 2D is a lot easier. Automating it in 1D is dirt simple.

      Perhaps wh

  • Answer (Score:5, Funny)

    by WilyCoder ( 736280 ) on Tuesday April 16, 2013 @05:01PM (#43466819)

    Because Skynet, that's why.

  • by Spy Handler ( 822350 ) on Tuesday April 16, 2013 @05:27PM (#43467043) Homepage Journal

    ready to take over in case of an emergency, what is the point of the whole thing?

    And assuming the human will be tweeting on his Facebook Amazon phone with his hands nowhere near the steering wheel and feet propped up on the dashboard, how is he going to take over control of the car in a split second when an emergency occurs? He can't. So that means he will have to be alert and in a ready driving posture and paying attention to the road like he's really driving. But then what is the point? Might as well have him drive it himself and save money by not buying the Google Car stuff in the first place.

    Either make a car that can go 100% without a human driver, or go back to web advertising and forget about the whole thing.

    • I agree, and this is a very different situation from an automated aircraft. Things generally happen quite slowly in aircraft, except for the few minutes around takeoff and landing, during which the pilots can be alert. In cruise flight, if something goes wrong with the automation there are usually many seconds for the pilot to become aware of the problem and take over. In the famous AF 447 flight it was several minutes between the beginning of the problem and the last time at which the aircraft cou

    • by lgw ( 121541 )

      There are plenty of situations where a human driver could reasonably take over, with many seconds of warning: when the car starts having trouble with its sensors (or the lane markers disappear), when the map and reality diverge, and so on. The "sudden surprises" need to be 100% computer-handled on auto-pilot, but that's a whole different problem than the system's failsafes detecting that normal operation has degraded for whatever reason, including overdue maintenance.

  • There are other fundamental problems with driverless cars: the illusion of control, the exchange of information, and unmodeled conditions.

    The illusion of control is why some people are scared to death of flying but feel confident getting behind the wheel: even though the stats say people are safer in an aircraft than on the road, people hate to give up control, even if that control is just an illusion (you can do little if a drunk driver slams into your car).

    The exchange of information is crucial to

  • To be an effective driver, an AI would need to understand my hand signals. Until then it is not safe for it to be on the road when I am on my bike.

  • by BetterSense ( 1398915 ) on Tuesday April 16, 2013 @08:29PM (#43468449)
    Self-driving cars won't work for a completely different reason than all this...they will never work because they can't bluff.

    As soon as people figure out that a computer is driving a car, they will pull out in front of it, knowing they won't be hit. They will change lanes into it, knowing that it will slow down and get out of the way. And they will loathe it, because it will never flatter their feelings.

    Self-driving cars will be bullied off the road, because there is a lot more to driving with people than being able to control the car. There are a lot of social/herd/mental/aggression dynamics that are instinctive for people but not for computers.
  • by EmperorOfCanada ( 1332175 ) on Tuesday April 16, 2013 @09:18PM (#43468739)
    My prediction is that people will be so resistant to just letting go of the steering wheel that the major car companies will give up on that route and pursue super-assisted driving instead: basically cruise control on steroids. Already companies like Mercedes have cruise control that will maintain a safe distance from the car in front, matching its slower speed or even emergency braking if needed. Other cars will do what they can to keep you from changing lanes into another car and side-swiping it. So I suspect that all the robot-driver technology will end up holding your hand more and more. Technically you will be the driver, but the robot will be ready to prevent stupidity and to react when you don't. After a while it will finally reach a point where you can just take your hands off the wheel (the car will probably bleat plaintively) and the car will maintain speed and the lane. But nobody will call it robotic driving.

    But then the breakthrough will be that some company that has crossed some critical line of self-driving capability will say that full liability insurance is included with the price of the car. Potentially they will even cover all insurance short of trees falling on the car and whatnot, because they will be sure the car can't cause an accident, and that with all the cameras and sensors some other fool can't blame you or your car if they caused the accident.

    At this point my money would be on cars finally being marketed and sold as robotic self-driving cars. Shortly after this, the tidal wave will wash away all the non-robotic cars as a dangerous menace. The key here is that by this point most cars will be largely capable of being autonomous, or very close to it, with only antiques being the holdouts.

    But, and it's a big but, some robotic car will drive off a cliff or into a train or whatever, and that single incident or small collection of incidents (and their YouTube videos) will get everyone saying, "Those things are death traps; I'll never let the car drive." This will temporarily postpone the inevitable, but going from 35,000 annual US road deaths to 35 will be too much reality for foolish people to fight for long.
