Transportation AI

GM Exec Says Elon Musk's Self-Driving Car Claims Are 'Full of Crap' (smh.com.au) 382

An anonymous reader quotes the Sydney Morning Herald: Billionaire entrepreneur Elon Musk's claims about the self-driving capabilities of his upcoming Tesla vehicles are "full of crap", General Motors' self-driving Tsar says... "To think you can see everything you need for a level five autonomous car [full self-driving] with cameras and radar, I don't know how you do that"... GM's own solution involves several radar and Lidar sensors, as well as cameras and multiple redundancy systems. Each system costs hundreds of thousands of dollars, and GM are some way away from getting the cost low enough to be commercially viable. "The level of technology and knowing what it takes to do the mission, to say you can be a full level five with just cameras and radars is not physically possible," Mr Miller said.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Translation (Score:4, Informative)

    by nospam007 ( 722110 ) * on Saturday October 07, 2017 @10:34AM (#55327269)

    GM can't do it.

    • by v1 ( 525388 )

      What, a car exec badmouths the competition? Say it ain't so!

      • Re: Translation (Score:4, Insightful)

        by saloomy ( 2817221 ) on Saturday October 07, 2017 @11:44AM (#55327621)

        A person gets all the information about where the car is and where it has to go from two sensors (basically two cameras with stereoscopic vision) in his head, positioned sub-optimally inside the vehicle, with one point of view at any one time, plus sensors for speed. If you include the person's ass, throw in a cheap accelerometer too.

        There is no reason to think cameras and an accelerometer can't figure it out with software to the same degree. But the car's cameras have better vantage points, near-perfect operation once the software comes around, and will understand the rules that govern the roadways better than we could, as well as the dynamics and limitations of the vehicle it is operating. Cars can absolutely get autonomous with less than GM claims.

        • Re: Translation (Score:4, Insightful)

          by Anonymous Coward on Saturday October 07, 2017 @11:53AM (#55327667)

          The human brain has a huge amount of computational power compared with the processors in a car. And typically you're not allowed to drive a car until you're a teenager. So, that brain has 15 or more years of training in identifying all those objects you see while driving. Chances are good that the brain has been sitting in a car many times over those years and gotten good at identifying them at speed.

          It's definitely possible that we'll eventually get sensors that can do that, but it's naive to suggest that we're anywhere near that point. The sensor arrays can easily miss a bicycle or motorcycle if it's positioned in the wrong part of the road. With sensors that actually cover the entire lane ahead far enough to cover the stopping distance, it wouldn't be much of an issue, but most vehicles have far too few beams for that to happen. They have gaping holes right ahead of the vehicle that wouldn't exist for a driver. Drivers mostly can't see the couple feet ahead and behind, which are only an issue when going slowly. At speed, you wouldn't be able to stop quickly enough to care about that.
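          The stopping-distance point above can be made concrete with the usual kinematics sketch; the reaction time and friction coefficient below are textbook illustrative values, not figures for any particular vehicle or sensor suite:

          ```python
          def stopping_distance_m(speed_kmh, reaction_time_s=1.5, mu=0.7, g=9.81):
              """Total stopping distance: reaction distance plus braking distance.

              Assumed values: 1.5 s reaction time and a 0.7 friction coefficient
              (dry asphalt) are illustrative, not measured for any real system.
              """
              v = speed_kmh / 3.6                  # convert km/h to m/s
              reaction = v * reaction_time_s       # distance covered before braking starts
              braking = v * v / (2 * mu * g)       # v^2 / (2*mu*g) from uniform deceleration
              return reaction + braking

          # At 100 km/h the sensors would need to reliably cover roughly this far ahead:
          print(round(stopping_distance_m(100)))   # ~98 m under these assumptions
          ```

          Any beam-coverage hole inside that envelope is a place where the vehicle physically cannot react in time.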

          • Re: Translation (Score:4, Insightful)

            by K. S. Kyosuke ( 729550 ) on Saturday October 07, 2017 @12:41PM (#55327845)

            The human brain has a huge amount of computational power compared with the processors in a car.

            Indeed, and it needs that computational power to do many things that we can get rid of by reformulating the problem. Instead of having a robot's computer do the calculations necessary for bipedal dynamics, you can just give the robot a few wheels. Problem simplified! Assuming that the car would need to approach human cognitive capacity is not justified.

        • Re: Translation (Score:5, Insightful)

          by Anonymous Coward on Saturday October 07, 2017 @12:34PM (#55327817)

          If you could somehow drop a human driver into the driver's seat in the middle of a journey with no prior knowledge, you would find them mostly non-functional. Even if they avoid panic, they would take seconds to minutes to bootstrap their knowledge to the point where they could effectively take over. The closest practical example I can think of to this scenario is how pilots are trained to deal with visual and inertial disorientation, and how they have to essentially troubleshoot their way back to an understanding of their situation. This doesn't happen in milliseconds, and in fact can take more time than is available as we've seen from well publicized air disasters. One can argue that the high rate of accidents from drunk and distracted drivers is due to similar disruption of situational awareness and an inability to recover with just two eyes and an ass in the seat.

          A human driver relies on their mind much more than their sensors. Their mind builds a very elaborate world model based on sensory data over time and their understanding of where they are, what they have been doing, and what they expect to be the rules of the environment. The situational awareness that we "see" as a 3D world lit up around us is mostly a construct of our minds showing us our memory and our expectations. This includes not only some physical simulation and prediction (e.g. instincts about momentum and continuity of trajectories of objects in the scene) but psychological and social simulation (e.g. assuming intentions of other drivers and pedestrians and reading "body language" cues). Today's infatuation with neural nets and "deep learning" does nothing to tell us how to construct a synthetic mind with these sorts of abilities which are necessary to compensate for our paltry sensors.

          This is the fundamental disconnect. A traditional engineer will think about how much data he needs from a sensor suite to reliably assess the scene with live data combined with the very primitive state model he knows exists in his automation system. A true believer in near-term AI will wave away the vast gulf between current technology and the human mind which we all take for granted every day, assuming that somehow the system can perform as well as us (or better!). More concerning, we have no reason to assume that a futuristic automated system made complex enough to emulate these functions of the human mind won't also be subject to analogous failure modes like confusion, delusion, hallucination, and even antipathy.

        • Re: Translation (Score:5, Insightful)

          by AmiMoJo ( 196126 ) on Saturday October 07, 2017 @01:27PM (#55328017) Homepage Journal

          The issue is that computer vision doesn't work the same way as human vision. Humans are good at recognizing when things don't make sense, or spotting objects that are partially obscured and recognizing what they are. Humans know that when they can't see most of a thing because of the blinding sunlight reflecting off it, it's a car. The human eye has really good dynamic range too, and a built-in self-cleaning system.

    • Re:Translation (Score:5, Insightful)

      by hey! ( 33014 ) on Saturday October 07, 2017 @10:50AM (#55327343) Homepage Journal

      I'm guessing Tesla really can't do it well enough, cheap enough either ... yet.

      But one of the advantages of having a Bond villain as chairman and CEO is that he's a little less bound by quarterly profit targets and the need to dole out healthy shareholder dividends like clockwork.

      For the first fifteen years after Microsoft went public it never paid a penny in dividends. Investors didn't expect dividends; they expected all the profits to be plowed back into world domination.

      • But one of the advantages of having a Bond villain...

        The various bits of Musk's brain clearly inter-operate well enough that I'd rule out him being a sociopath/psychopath. Hollywood's bullshit portrayal of "evil geniuses" notwithstanding, if the part of the brain that we empathize with isn't functioning correctly, other parts will be compromised as well: witness not just the sociopathic behavior of Fortune 500 CEOs but also the incredibly stupid and unimaginative choices they tend to make.

    • While I agree with your assessment, I'm still curious why Elon is so insistent on only using cameras and radar (and ultrasound?)... I mean, lidar is getting pretty compact and inexpensive these days. It would almost certainly make the resulting self-driving capability even better and more reliable. Why not use it?

      • Re:Translation (Score:5, Insightful)

        by Rei ( 128717 ) on Saturday October 07, 2017 @11:02AM (#55327413) Homepage

        According to Google (a big LIDAR proponent), it's still $7.5k per unit. It still messes up your aerodynamics and looks dorky. It still can't see in adverse weather conditions, meaning you have to have developed an optical/radar-based world-modeling system anyway. And you have to have image processing regardless, to read signs, road lines, identify objects, see brake lights, and so forth.

        There's real hope for further improvements in LIDAR and its variants in the future, however. We'll see where it goes.

        • I was thinking more along the lines of the $10 LiDAR-on-a-chip, but I guess they aren't quite ready for mass production yet. But whenever they do become available, it would seem like a no-brainer to include them in the sensor suite.

          • Forgot to include a link to the LiDAR chip: https://www.spar3d.com/news/lidar/mits-10-lidar-chip-will-change-3d-scanning-know/ [spar3d.com]

            • It seems to me that these are for use in a room with optimal conditions. If current lidar arrays can't see through fog, I can't see how these would ever have the power to.
              • Apparently I chose the wrong example from a quick google search. But there has been a lot of talk [economist.com] in the press [thedrive.com] about how the new LiDAR chips will revolutionize self-driving cars. It's definitely on the way, it's just not here yet.

                As for fog, humans can't see through it either. I think the point is to have a broad spectrum of inputs -- LiDAR, radar, ultrasound, cameras -- to get the best possible "picture" in the given weather conditions.

                • by Rei ( 128717 )

                  Yes, humans can see through fog; that's the reason that fog lights exist.

                  LIDAR is much more sensitive to obstruction by weather than human vision is.

                  • I would bet that they can design a LIDAR to function at a wavelength that delivers some kind of "fog-vision". Even if it's not as good as human vision, it would still be useful. And when you couple that with on-board cameras, and all the other sensors, you'd ultimately get a better situational awareness than most humans could achieve. Fog wreaks havoc with cameras too, that doesn't mean you shouldn't use them.

        • Re:Translation (Score:5, Interesting)

          by somenickname ( 1270442 ) on Saturday October 07, 2017 @03:59PM (#55328503)

          According to Google (a big LIDAR proponent), it's still $7.5k per unit. It still messes up your aerodynamics and looks dorky. It still can't see in adverse weather conditions, meaning you have to have developed an optical/radar-based world-modeling system anyway. And you have to have image processing regardless, to read signs, road lines, identify objects, see brake lights, and so forth.

          There's real hope for further improvements in LIDAR and its variants in the future, however. We'll see where it goes.

          Good lidar systems see much, much better than camera-based systems in adverse weather. I work on FMCW lidar systems, and I recall driving to work one day when the fog was so bad that I couldn't even find the road to my office. Once I got to work, I turned on the lidar system I was working on and it imaged a building 100 meters away without issue. Road lines are trivial to identify in a lidar system since they have much different reflectivity than the road surface. Objects are also easier to identify because you aren't trying to pull three-dimensional information out of two-dimensional images. On an FMCW lidar system, you also get doppler information for free. You don't have to decide whether an object is moving towards or away from you by comparing subsequent images; every single point in the point cloud includes a meters-per-second doppler value.
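          The "doppler for free" point reduces to the standard relation between doppler shift and radial velocity; a minimal sketch, with an assumed 1550 nm operating wavelength and a made-up shift value (neither is from any real unit):

          ```python
          # Each point in an FMCW point cloud carries a doppler shift, so radial
          # velocity falls out directly without frame-to-frame object tracking.

          def radial_velocity_mps(doppler_shift_hz, wavelength_m=1550e-9):
              """v = f_d * lambda / 2 (factor of 2: the wave travels out and back)."""
              return doppler_shift_hz * wavelength_m / 2

          # A 1550 nm system seeing a ~12.9 MHz doppler shift implies ~10 m/s closure:
          print(radial_velocity_mps(12.9e6))
          ```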

          I have to assume your familiarity is with those awful spinning Velodyne systems. They are utter garbage. No self driving car company that I've interacted with is even vaguely entertaining the idea of using them in a production car. They don't even really like using them in their mule cars but, until very recently, they were the only real option available.

          The real problem with lidar is that people aren't good at consuming lidar data yet. Once they start to get some experience with it, I have zero doubt that lidar will be the primary sensor on the car. It's the only way you can really build a model of your surroundings with high accuracy, high refresh rate and high tolerance to ambient conditions. So, I actually agree with the GM guy here: Tesla is full of shit. They aren't going to make a level 5 autonomous car with cameras and radar.

    • Hey GM, are you saying only you can do it, and only if you can steal another $10 billion from the American taxpayer?
    • Neither can Tesla, right now. Tesla is promising full autonomous driving, but it isn't there yet. It also has lost features from the original AP1 hardware (reliable automatic windshield wipers) that it has not yet replicated with AP2, because it's relying on different hardware and software to enable it.

      Tesla is really betting on "software will fix everything", but there's really no saying when it'll happen.

      • by gweihir ( 88907 )

        Tesla is really betting on "software will fix everything", but there's really no saying when it'll happen.

        Well, let's hope that really, really stupid expectation will not kill them. They have some things they do well (batteries, solar roofs) that would be a loss if they went bankrupt.

    • Re:Translation (Score:5, Insightful)

      by gweihir ( 88907 ) on Saturday October 07, 2017 @11:30AM (#55327549)

      No, that is not what he is saying. What he is saying is that GM cannot do it with these limitations and there is very good reason to believe that others cannot do it either. As Musk is full of it in a number of topics, it would not surprise me one bit if he were on this too.

    • Re:Translation (Score:4, Insightful)

      by west ( 39918 ) on Saturday October 07, 2017 @11:37AM (#55327583)

      > GM can't do it.

      Agreed. But if Tesla can't do it either, he's afraid that Tesla will "poison the pool" by raising expectations about what can be done, and at what price point, to impossible levels.

      There's been more than one industry that simply doesn't exist at all because people have been trained to believe that anything less than the impossible is either no good or unfairly expensive.

      As the head of GM's team, he's petrified that Tesla will fail and, in doing so, sow the whole field of autonomous vehicles with salt.

      On the other hand, expecting Tesla to put the good of the field before its own welfare is pretty much dreaming, and taking shots at Tesla is simply counter-productive. His job is to just grin and bear it.

    • Re:Translation (Score:5, Insightful)

      by Solandri ( 704621 ) on Saturday October 07, 2017 @11:54AM (#55327673)
      It's not a simple matter of can or can't do. The problem is there's no standard threshold of success which needs to be met for a system to be considered a "marketable" autonomous car. If your car can handle 95% of situations, is it suitable for use on the road and for sale to the public? 99%? 99.999%? Or maybe the proper metric isn't situations, maybe it should be average time in operation before it encounters a situation which stumps it. Should that standard be 1000 hours (6 weeks)? 10,000 hours (a bit over a year)? A million hours (over 100 years)?

      Without some sort of standard, you can put a brick on the accelerator and a bungee cord on the steering wheel, and call it an autonomous car. Because it is, for about 20 seconds before it drifts into the next lane. It sounds like GM is working to a much more stringent internal standard for autonomy than Tesla, and the GM exec is frustrated that the press is constantly comparing them as if they were equals. Whether or not the car can drive autonomously isn't as important nor relevant as how often it fails to drive autonomously.

      All you people who love government regulation should be all over this, instead of giving Tesla a free pass just because they're Tesla. It's why we have nutrition labels, Energy Star labels, NHTSA crash safety tests, EPA mileage ratings, standardized health plans under the ACA, etc. So buyers can easily compare products on a like-for-like basis.
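      The gap between "percent of situations handled" and "hours between failures" can be sketched with a toy geometric model; the situation rate and the success probabilities below are illustrative assumptions, not measurements of any real system:

      ```python
      # If a car faces some number of distinct "situations" per hour and handles
      # each one independently with probability p, the expected time to the first
      # unhandled situation follows a geometric model.

      def mean_hours_to_failure(p_success, situations_per_hour=100):
          """Expected hours until the first unhandled situation."""
          return 1.0 / ((1.0 - p_success) * situations_per_hour)

      for p in (0.95, 0.99, 0.99999):
          print(f"{p} -> {mean_hours_to_failure(p):,.1f} hours")
      ```

      Under these assumptions, even "five nines" per situation only yields about 1,000 hours between failures, i.e. the 6-week figure above, which is why the choice of metric matters so much.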
      • by AmiMoJo ( 196126 )

        If you go to Tesla's web site right now they are offering it as full autonomous driving, the take-your-kids-to-school-for-you kind of thing. That's a very high bar, and it's likely that even once the technology exists it will be a while before regulators figure it out and insurance companies can cope with it. And they are already selling it as a €4,000 extra, to be enabled by software update.

        And today their latest Autopilot isn't even as good as the old Autopilot V1 system. It really does seem quite premature.

    • GM can't do it.

      ...so they did the next best thing which is to launch whiny ad hominem attacks on Tesla and Musk.

    • by tomhath ( 637240 )
      Nobody has done it.
  • by Austerity Empowers ( 669817 ) on Saturday October 07, 2017 @10:38AM (#55327283)

    I definitely trust GM to make an unbiased analysis of competitor technological capacity.

  • by Archon ( 13753 ) on Saturday October 07, 2017 @10:42AM (#55327303)

    I don't possess radar, LiDAR, or a gazillion redundancy systems. Stereo cameras (eyes) on a pivoting head and two directional microphones (ears). My software is way better than GM's, though, and I'm expecting Tesla's is too.

    • Actually, you do. Two lungs, two kidneys, two arms and hands, along with many other redundant, self-healing systems in the human body. As to whether GM or Tesla has better self-driving software, time will tell. I think neither will hit level 5 anytime soon.

    • Everyone thinks they are an above average driver.

    • I don't possess radar, LiDAR, or a gazillion redundancy systems. Stereo cameras (eyes) on a pivoting head and two directional microphones (ears). My software is way better than GM's, though, and I'm expecting Tesla's is too.

      The question really should be "Could a human function at a level 5 if you took away the windshield and just gave them a display with a camera and a radar like the Tesla?"
      My guess is that a human without a windshield and only the data provided by Tesla's sensors would perform substantially worse than a human in a standard car.
      On a side note, if they had the same performance and you could replace the windshield with metal, that car would be much safer to drive.

    • We won't see a fully self-contained humanoid robot able to pilot a vehicle the way humans do (stereo cameras, dual 6-axis gyro/accelerometer, and tactile feedback) for another 50+ years at least. Maybe even 80+. In reality, if I had all those sensor systems (radar, lidar, cameras, etc.) and simply supplied the "brain" AI aspect, I could do far, far better than today's and tomorrow's AI.
  • Then why can't computers? We have an advantage in that we can fill in missing pieces in stereoscopic vision to complete our perception, and it is incredibly difficult for algorithms to do that. Radar makes up for that, but it is entirely possible that Tesla is closer to that than GM is.
    • by iggymanz ( 596061 ) on Saturday October 07, 2017 @10:46AM (#55327323)

      You're funny; the list of things computers can't do with any amount of sensors, that humans can, is quite long.

      • But the rate at which computers are catching up is staggering. Going from consistently beating even the most skilled people at Tic-Tac-Toe, to Chess, to Go took about 50 years.

        (Of course, perfect play at Tic-Tac-Toe only means drawing if your opponent also plays perfectly.)
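        The progression started with a game small enough to solve outright; a minimal minimax sketch shows that exhaustively searching Tic-Tac-Toe's tiny game tree confirms perfect play from both sides is a draw:

        ```python
        # Board is a 9-char string, 'X'/'O'/' '. The full game tree is small
        # enough (~half a million nodes) to search without any pruning.

        def winner(b):
            lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
            for i, j, k in lines:
                if b[i] != ' ' and b[i] == b[j] == b[k]:
                    return b[i]
            return None

        def minimax(b, player):
            """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
            w = winner(b)
            if w == 'X': return 1
            if w == 'O': return -1
            if ' ' not in b: return 0
            scores = []
            for i, c in enumerate(b):
                if c == ' ':
                    nb = b[:i] + player + b[i+1:]
                    scores.append(minimax(nb, 'O' if player == 'X' else 'X'))
            return max(scores) if player == 'X' else min(scores)

        print(minimax(' ' * 9, 'X'))  # 0: perfect play from the empty board is a draw
        ```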

      • by Zuriel ( 1760072 )
        They can't now, sure. But it's absurd to say computers will never be able to drive a car using 360 degree vision and radar. That's plenty of input to do the job, if computers get smart enough. Maybe they won't, but then people once thought 640K RAM would be enough for anybody.
        • by Dog-Cow ( 21281 )

          If you think anyone ever thought that 640K RAM was enough for anyone, you are too stupid to be part of this conversation.

          • James Fawcette published that quote of Bill Gates in the April 29, 1985 issue of InfoWorld, though Gates later denied ever having said it. A person is not stupid for believing that reference, however.

      • You forgot to add hearing, 6-axis gyro/accelerometer, and tactile feedback. It's quite hard to drive without any one of those.
    • Then why can't computers? We have an advantage in that we can fill in missing pieces in stereoscopic vision to complete our perception, and it is incredibly difficult for algorithms to do that. Radar makes up for that, but it is entirely possible that Tesla is closer to that than GM is.

      Computers can do stereoscopic vision just fine. The problem isn't perception; it's understanding.

  • Then so can a computer. Just need the right computer and software behind the cameras.

    • by gweihir ( 88907 )

      And where did you get that absolutely baseless claim?

      • And where did you get that absolutely baseless claim?

        Philosophy, from the sound of it, but think hard about it. Eyes are poor; the brain puts an incredible amount of effort into simply seeing what isn't immediately in front of us (interpolating from subtle movements of the eye to build a more complete view of the surroundings).

        The sensor capability of humans was exceeded by technology many many years ago. The only thing computers lack is understanding of what to do with the information. i.e. the software behind the cameras.

        Though given the complexity of

    • by AmiMoJo ( 196126 )

      Yes in theory, but are the computers Tesla is installing today powerful enough? The lidar is there to give the computer extra information that it can process more easily than just vision alone.

    • by Kjella ( 173770 ) on Saturday October 07, 2017 @01:01PM (#55327915) Homepage

      Then so can a computer. Just need the right computer and software behind the cameras.

      If Musk had that computer and that software he'd be busy selling the personal assistants from "I, Robot".

  • by Rei ( 128717 ) on Saturday October 07, 2017 @10:51AM (#55327351) Homepage

    But to clarify the difference:

    LIDAR: Formerly about $75k, now about $7.5k per unit, and it requires a bubble dome on top of the vehicle. GM and Waymo use it; Tesla doesn't. In addition to looking weird and adding drag, the price is a killer if you want to include it on every vehicle. Beyond this, LIDAR doesn't work in fog, heavy rain, snow, and other conditions that humans can drive in - meaning that you'd have to either prevent trips during these conditions, require humans to drive during them, find workarounds (not easy), or rely instead on other sensors. And you still need to understand the world around you visually - LIDAR will tell you that "something" is there, but it can't read signs, see road markings, see brake lights, tell if that thing in the road is a person or a paper bag, etc.

    Tesla, for these reasons, ruled out LIDAR. They simply use the "other sensors" - 1x radar, many cameras, many ultrasonic sensors - all the time. This way, all of their sensors can be put in all of their vehicles, and do double duty for both self-driving and standard safety features (autobraking, etc.), depending on what options the buyer has paid for. This however comes at a penalty: when LIDAR works, it works really well. Photogrammetry with cameras is prone to stitching errors, and radar, while able to see some things that humans can't, sees the world in very strange ways (for example, a piece of plywood is transparent, but an aluminum can glows like it's on fire). It's a much more challenging task if you leave LIDAR out of the loop. But it gives you a more saleable product.

    In the end, I expect a convergence to take hold. An interesting new technology for example is time-of-flight cameras - they function as normal cameras, but also can read the length of time it takes for a laser pulse to return on every pixel they record. So no dome, just your normal camera coverage and a few cheap, fixed lasers - in mass production, it might not cost much more than cameras alone. In such a case, I'd expect the LIDAR groups to simply replace their conventional LIDAR datastream with the time-of-flight datastreams, while I'd expect the non-LIDAR groups to replace their photogrammetry-and-radar built 3d models with time-of-flight 3d models. But both sides will still need image processing, so it's important to work on maturing that technology today.
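    The per-pixel ranging idea behind time-of-flight cameras reduces to a speed-of-light round trip; a minimal sketch, with illustrative timing values (no particular sensor is implied):

    ```python
    # Each time-of-flight pixel records how long a laser pulse took to return;
    # depth then follows directly from the speed of light.

    C = 299_792_458.0  # speed of light, m/s

    def tof_depth_m(round_trip_s):
        """Depth = c * t / 2 (the pulse travels to the target and back)."""
        return C * round_trip_s / 2

    # A pixel reporting a ~66.7 ns round trip corresponds to a target ~10 m away:
    print(tof_depth_m(66.7e-9))
    ```

    Doing that per pixel is what turns a normal camera image into a dense 3D model without any spinning dome.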

    That said, let me reiterate that I'm a pessimist regardless of what tech you use. There's just so much nuance in driving in hazardous conditions - understanding when, where and how much you have to slow down, what's safe to drive on and what's not, what things to the side of the road are hazardous and what aren't, when you should break rules (such as driving in the middle of the road when conditions are dangerous but oncoming traffic is rare), what are the consequences of a mistake in one location vs. another, etc, etc. On my gravel road, there's a canyon to one side with no guardrail, and varying amounts of ice and potholes in different places. You better well know how your traction is going to fare as you move across the potholes (vibrating the car and making it lose traction) or icing if you don't want to end up in an unrecoverable slide into a ravine.

    Just to pick a random example among countless things that you have to take into account: how long do you think before any self-driving systems will have "sheep recognizing algorithms"? Because where I live, there's sheep. Group of sheep on one side of the road: probably safe. Group of sheep on both sides of the road: not as safe, but probably safe. Lamb on one side, ewe on the other? Very dangerous - the lamb will invariably run to its mother as you approach. Where's the ewe-lamb-running algorithm?

    • by Rei ( 128717 ) on Saturday October 07, 2017 @10:59AM (#55327395) Homepage

      Also, to answer the question of how humans do it with "two cameras": logic. We don't have "stitching errors" in how we build up a model of the world around us from visual data, because our brain constantly processes everything around us through the prism of "does that make sense?" But whether something "makes sense" or not is an AI-hard problem.

      Building up 3d models with photogrammetry is an inherently error-prone process because a computer doesn't know if something makes sense. The approach is "Oh hey, these patterns from the left and right camera matched up - there's an object there at X distance based on how far the patterns had to be shifted to align". But what if the patterns happen to be different things that just happened to match up in patterning? Or what if, due to lighting / texturing / obstruction / material issues, the same thing didn't look exactly the same from different angles? Uncovering these problems is, as mentioned, AI-hard. I doubt anyone is even trying at this point; there's enough to work on just to get things to follow road lines correctly and not go chasing old tire tracks or poorly erased construction lines.
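      The "distance from how far the patterns had to be shifted" step is the standard disparity-to-depth relation; a minimal sketch with assumed (not real) camera parameters, which also shows how a one-pixel mismatch silently corrupts the depth estimate:

      ```python
      # Stereo photogrammetry: once a left/right patch pair is matched, the
      # pixel shift (disparity) gives depth via Z = f * B / d. The focal
      # length and baseline here are illustrative, not from any real rig.

      def depth_from_disparity_m(disparity_px, focal_px=1000.0, baseline_m=0.3):
          """Z = f * B / d; a wrong match (wrong d) yields a wrong depth."""
          if disparity_px <= 0:
              raise ValueError("zero/negative disparity: no valid match")
          return focal_px * baseline_m / disparity_px

      print(depth_from_disparity_m(30))   # 10.0 m for a 30-pixel shift
      print(depth_from_disparity_m(29))   # a one-pixel mismatch moves it ~0.34 m
      ```

      Nothing in the formula can tell you the match itself was spurious, which is exactly the "does that make sense?" gap described above.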

      Ranging systems like LIDAR help let you just simply ignore the issue by telling you flat out, "I sent out a beam in this precise direction and got a reflection in precisely this length of time." But LIDAR has a number of problems, as described above.

      • by Anonymous Coward on Saturday October 07, 2017 @11:05AM (#55327423)

        Also, we crash more often than is permissible to a self-driving vehicle.

      • by doug141 ( 863552 )

        Also, as to answer the question of how humans do it with "two cameras": logic. We don't have "stitching errors" in how we build up a model of the world around us from visual data because our brain constantly processes everything around us through the prism of "does that make sense?" But whether something "makes sense" or not is an AI-hard problem.

        You'll find a million human visual logic errors on google images under "optical illusions." Was the dress white and gold or blue and black? Then there's the hot road mirage (https://www.youtube.com/watch?v=_M0FcpQWh5E). My broader point is that people don't see perfectly either, but if machines always drive sober, without texting, and not sleepy, they could eventually do better than the very low bar we are setting.

      • ...is what happens when this tech is on every car? It's all very well to test and develop these things in isolation in California, Nevada or Arizona during bright sunlight.

        What about forty or fifty vehicles at a busy intersection, all firing ultrasound, LIDAR and/or microwave in every direction, at night, in the rain? The scope for false positives and false negatives is immense.

        Or perhaps the makers will modulate all the output with a unique identifier, perhaps the VIN. So then what happens to your privacy?
    • I've driven in a vehicle that has been out in a snowstorm. I wonder whether there will ever be a sensor array that will suffice for automatic driving without having to spend three hours cleaning snow off. Generally you put up with 10-15 minutes of cleaning, but you're not getting every square inch of every window. Airplanes where I am get hosed off with antifreeze before every flight; that's not very realistic for automated-car owners.
    • by Ichijo ( 607641 )

      Where's the ewe-lamb-running algorithm?

      That's easy. Assume all of those ewes and lambs will try to throw themselves at your vehicle, calculate how far they can get before you can come to a complete stop, and slow down accordingly. This algorithm also works for children playing by the side of the road.
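      The parent's rule can be sketched as a worst-case speed cap: assume the hazard bolts toward your path, and limit speed so you can stop before it can close the gap. The animal speed, friction coefficient, and reaction time below are illustrative guesses, not data:

      ```python
      # Cap speed so that stopping distance plus the distance a worst-case
      # hazard can close while we stop stays under the hazard's distance.

      def stopping_distance_m(v_mps, reaction_s=1.0, mu=0.7, g=9.81):
          return v_mps * reaction_s + v_mps**2 / (2 * mu * g)

      def stopping_time_s(v_mps, reaction_s=1.0, mu=0.7, g=9.81):
          return reaction_s + v_mps / (mu * g)

      def safe_speed_mps(hazard_dist_m, hazard_speed_mps):
          """Largest speed (searched in 0.1 m/s steps) that still lets us
          stop before the worst-case closing hazard reaches our path."""
          v = 0.0
          while True:
              nxt = v + 0.1
              closes = hazard_speed_mps * stopping_time_s(nxt)
              if stopping_distance_m(nxt) + closes >= hazard_dist_m:
                  return v
              v = nxt

      # Lamb 40 m ahead that may bolt across at ~5 m/s:
      print(round(safe_speed_mps(40.0, 5.0) * 3.6))  # km/h
      ```

      The same cap naturally falls out higher for a stationary hazard, which is why the rule also covers "children playing by the side of the road" without a child-specific algorithm.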

    • by AmiMoJo ( 196126 )

      Tesla must be quite confident they can make it work, as they are already selling cars with full self driving as an option. The web site says it will come as a software update one day, exactly when depending on regulatory acceptance.

    • You make very well-thought points about the technical stuff, but I disagree with your conclusions. I don't think your pessimism is logical based on the points you've made.

      Group of sheep on both sides of the road: not as safe, but probably safe. Lamb on one side, ewe on the other? Very dangerous - the lamb will invariably run to its mother as you approach. Where's the ewe-lamb-running algorithm?

      Sure, but to be fair if I had to travel to your area and rent a car, not knowing anything about the behavior of sheep, as a human driver I would be pretty terrible compared to you in that situation. But I doubt you'd say I don't deserve to have a driver's license, because that's a pretty unusual circumstance. I also doubt that even in your

  • It's funny that "I don't know" somehow gets translated into "it's impossible!" I recall a few fools claiming they had invented a new "unbreakable" encryption scheme because they didn't know how to break it.

  • We need low-cost electric cars for the common man, not expensive sort-of-self-driving electric cars for the rich.

    • What a stupid thing to say. What we need is a car company that can actually deliver what it promises while remaining economically profitable, without needing billion-dollar handouts every 5-10 years like GM. As the price comes down, as it has with each new Tesla model released, the general public will be able to afford them
      • by nasch ( 598556 )

        It doesn't sound like you're disagreeing with him. Why do you think it was a stupid thing to say?

  • You don't say (Score:5, Insightful)

    by RightwingNutjob ( 1302813 ) on Saturday October 07, 2017 @11:29AM (#55327543)
    Next you're gonna be telling me how we're not all going to have villas on Mars by the end of the next decade, at the very latest, all brought to you by SpaceX brand rocket ships.

    Elon gets my respect for building two successful and innovative businesses that have lasted and have solid fundamentals for the future. He gets no credit for his bullshit factory. Good bullshit has to be believable. Self-driving cars and 800 mph trains in a tube by next year don't pass the giggle test.
    • by Megol ( 3135005 )

      If each self-driving car has a dedicated road, there's no need for advanced AI or sensors.

      That's why he started the Boring Company!

    • Self-driving cars and 800 mph trains in a tube by next year don't pass the giggle test.

      Neither did 3D graphics on computers.
      Neither did colour television.
      Or for that matter any television.
      Or electricity.

      This is kind of why the term "breakthrough" exists. Elon Musk is an egg machine: everyone laughs at him constantly for everything he says, and then they end up with egg on their faces. If I were secretly giggling at something he says, I sure as hell wouldn't have the guts to post about it, especially given how he went from nothing to having cars driving hands-free down highways in pretty much no time at all.

  • Didn't a GM exec in the 1970s say that Japanese carmakers would fail in America because Americans didn't want small, fuel-efficient cars?

  • Silly GM, didn't you realize you became absolutely irrelevant like a decade ago?
  • Lidar was doomed from the start. If a car is going to be autonomous, it must function when drivers aren't paying attention to conditions. Otherwise, what's the point? Other systems will have to be good enough to work in fog. And if you have systems that can work even in poor conditions, then lidar is uselessly redundant.

  • GM != leadership (Score:4, Insightful)

    by LesserWeevil ( 4776371 ) on Saturday October 07, 2017 @12:59PM (#55327913)
    Elon's had a long history of proving naysayers like this wrong. My money's (literally) on him to pull this off. The folks truly terrified of self-driving cars are the National Auto Dealers Association (NADA) who stand to lose the most when you can order a car online and have it deliver itself.
  • During 2015 there were an average of 88 fatal car crashes a DAY in which an average of 96 people a DAY were killed. Can AI do any better? More than likely, but time will tell.

    The BIG problem is that cars do not communicate with each other and so they cannot coordinate their movements. Being able to do that would reduce the causes of accidents and fatalities to just mechanical failures. Sensors and communication networks buried in roadways would help significantly. IF cars were networked, for exa
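The coordination idea above can be sketched as cars broadcasting their state while each receiver projects straight-line paths a few seconds ahead to spot conflicts. The message fields, horizon, and time step below are invented for illustration; real V2V protocols (e.g. the DSRC Basic Safety Message) carry far more.

```python
# Toy sketch of networked cars: each broadcasts position and velocity, and
# each receiver computes the closest approach to every neighbour over a
# short horizon, flagging pairs whose paths are about to cross.
from dataclasses import dataclass

@dataclass
class V2VMessage:
    car_id: str
    x: float   # position, metres
    y: float
    vx: float  # velocity, metres per second
    vy: float

def min_separation(a: V2VMessage, b: V2VMessage,
                   horizon_s: float = 5.0, step_s: float = 0.1) -> float:
    """Closest approach of two constant-velocity cars within the horizon."""
    best = float("inf")
    for i in range(round(horizon_s / step_s) + 1):
        t = i * step_s
        dx = (a.x + a.vx * t) - (b.x + b.vx * t)
        dy = (a.y + a.vy * t) - (b.y + b.vy * t)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
    return best

# Two cars approaching the same intersection at right angles would know,
# seconds in advance, that their paths are going to cross:
me = V2VMessage("A", x=-50.0, y=0.0, vx=10.0, vy=0.0)
them = V2VMessage("B", x=0.0, y=-50.0, vx=0.0, vy=10.0)
print(min_separation(me, them))  # -> 0.0: both reach the origin at t = 5 s
```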
