Transportation AI Technology

Musk Says Tesla Is 'Very Close' To Developing Fully Autonomous Vehicles (bloomberg.com)

Tesla's Elon Musk said the carmaker is on the verge of developing technology to render its vehicles fully capable of driving themselves, repeating a claim he's made for years but been unable to achieve. From a report: The chief executive officer has long offered exuberant takes on the capabilities of Tesla cars, even going so far as to start charging customers thousands of dollars for a "Full Self Driving" feature in 2016. Years later, Tesla still requires users of its Autopilot system to be fully attentive and ready to take over the task of driving at any time. Tesla's mixed messages have drawn controversy and regulatory scrutiny. In 2018, the company blamed a driver who died after crashing a Model X while using Autopilot for not paying attention to the road. Documents made public last year showed the National Highway Traffic Safety Administration had issued multiple subpoenas for information about crashes involving Tesla vehicles, suggesting the agency may have been preparing a formal investigation of Autopilot.
  • and will he set up an auto payout for any deaths?

    • by nagora ( 177841 ) on Thursday July 09, 2020 @11:12AM (#60279240)

      and will he set up an auto payout for any deaths?

      It's called "insurance".

      • Tesla carries insurance? Why should my insurance company pay out when Tesla's level 5 autonomous vehicle does something stupid? I'm just a passenger. No controls. Not my fault.

        • Because if you die but 3 other people didn't, then the insurance company made a profit.

        • Exactly. I'm not having my premiums go up because of something the car does. I will pay for damage protection if I choose to leave it in a place where it gets destroyed, but no more driver's insurance. Only property insurance, which is a lot cheaper.
        • That's actually a good question. Who will carry the liability for a self-driving car? There are a lot of unanswered regulatory questions concerning self-driving cars, actually. Such as: will you still need a driver's license to operate one? Will self-driving cars require the ability to manually drive them? What happens when the car determines there is service that needs to be performed? Will it drive itself off to a service center without the owner's consent? Will it refuse to go anywhere except to an approved service center?

        • by mysidia ( 191772 )

          Misconception... It's not just drivers. In a collision, the OWNER of the at-fault vehicle is liable... in this case the OWNER made the decision to purchase the vehicle, put it on the road, and use the features, and they have responsibility for the consequences of that decision. Tesla is responsible only for the product's performance as represented to buyers (if you ignore the safety manuals and use a manufacturer's product in a manner that goes against their instructions, then you, not the manufacturer, are on the hook).

          • by Altus ( 1034 )

            But as a property owner I can make a judgement call about someone borrowing my car: are they responsible, are they attentive, are they a trained and licensed driver? It's a lot harder to make those calls for an AI you had no hand in creating or training.

            While there is certainly a legal framework for making any accidents caused by an AI the responsibility of the owner (and certainly you would still need insurance for any time the AI is not controlling the car), I think if a company is not willing to stand behind its own AI, that says a lot about how much they trust it.

        • by nagora ( 177841 ) on Thursday July 09, 2020 @02:53PM (#60280232)

          Tesla carries insurance? Why should my insurance company pay out when Tesla's level 5 autonomous vehicle does something stupid? I'm just a passenger. No controls. Not my fault.

          Sure. I don't know why people think this is a difficult area. If Ford sells a car with faulty brakes, they're liable and they're insured. What magic ingredient makes people think Tesla would/should be different?

      • And the only workable insurance solution is for riders to carry their own insurance. Tesla should not be responsible for the outcomes, just collecting the profit.
        • That's like playing Russian Roulette.
          • by EllisDees ( 268037 ) on Thursday July 09, 2020 @11:58AM (#60279458)

            Is it more or less Russian Roulette than driving the car yourself, though?

            • Comment removed based on user account deletion
              • by Jeremi ( 14640 )

                The difference is that when you're the one driving, your finger is the one on the trigger. When the car is driving itself, some (possibly buggy) code's finger is on the trigger.

                When you're driving, some (possibly buggy) code's finger is on the trigger as well; granted it's wetware instead of software, but it's demonstrably much less than 100% reliable.

                • There's hardware, wetware, and software involved in the operation of every modern car. They have ABS, throttle by wire, ESP, blah blah blah. Any of that stuff could kill you if it went wrong enough.

            • I know I can drive, I don't know about the car.
              • I know I can drive, I don't know about the car.

                Indeed.

                I'm holding onto my gas-burning, manual-shifting, not-connected-to-the-internet cars for as long as I possibly can.

                I don't want them connected to a company, once I buy it, they don't need to know fuck all about where it is, where I drive it or how I drive it, and neither does the government.

                They do not need to update anything on it, unless "I" give full consent EACH time.

                And I guess... I'm one of a dying breed that has almost always bought fun cars, 2

        • and it's Death Race 2000 for people on the street/sidewalk

      • He will do it just like the police do when they kill civilians...

        Call it collateral damage, and God praise America!

        It's the American way of life (and death).

    • by backslashdot ( 95548 ) on Thursday July 09, 2020 @11:26AM (#60279310)

      Human drivers, mostly through negligence and distraction, murdered 500,000 people in the world (36,000 in the US) just last year. You aren't interested in saving a large percentage of those lives? Solutions like "better driver license testing" are nonsensical; you can't test for one-time negligence.

      • However, as of right now self-driving is only a theoretical improvement over humans. Teslas are currently the safest cars on the road, with a combination of active safety measures and a passive, bottom-heavy design. The question is whether it will get to a point where you can safely remove the steering wheel, or whether there should always be a driver ready to take control. Even requiring a driver to be at the ready, I think it would be much safer than the driver driving all the time.

        I know when

        • Anything below level 5 requires a driver present, awake, aware, and ready to take over with essentially no warning.

          You want level 5. You can't rest your eyes at 4 or less. Not safely.

          I know how you feel about the long boring drives but we're not there yet.

          • Anything below level 5 requires a driver present, awake, aware, and ready to take over with essentially no warning.

            You mean anything below level 3.

            Level 3 requires that a driver be present and available to take over if the car requests it, but the driver does not have to be paying attention; for example, they can be watching a movie or reading a book. This means the system has to be good enough to give the driver warning, and even enough time to figure out how to handle the situation; the driver doesn't have to maintain readiness.

            Level 4 requires that a driver be in the vehicle, but not necessarily awake or even in the driver's seat. The driver still has to be around because level 4 is fully automated only in certain areas or circumstances. For example a "freeways only" level 4 mode would make sense, where the car can handle everything from entrance ramp to exit ramp, without any human oversight or even attention.

            Level 5, of course, is so completely automated that the controls could simply be removed from the vehicle.

            https://en.wikipedia.org/wiki/Self-driving_car#Classification
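
            If it helps to see the taxonomy side by side, here's a quick paraphrase in code (my own summary of the levels described above, not the official SAE J3016 wording):

              # Paraphrase of the SAE driving-automation levels (illustrative, not official text)
              SAE_LEVELS = {
                  0: "No automation: the human does everything",
                  1: "Driver assistance: steering OR speed assist; human fully attentive",
                  2: "Partial automation: steering AND speed; human fully attentive",
                  3: "Conditional: car drives; human must take over when the car requests it",
                  4: "High: no human attention needed, but only in limited domains (e.g. freeways)",
                  5: "Full: drives anywhere a human could; the controls could be removed",
              }

              for level, description in sorted(SAE_LEVELS.items()):
                  print(f"Level {level}: {description}")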

            • by lgw ( 121541 )

              To emphasize: Volvo, whose safety promises I trust, is working on Level 4. Their goal is a car that can pull over safely if the computer enters conditions it can't handle (e.g. snow) and the driver doesn't take over. To me that's optimal: if you're actually paying attention, there's no disruption in driving when the auto-drive gets out of its depth, but if you're not then it's still safe.

              • Pull over safely in Death Valley, run till the power dies, and put someone in 110-degree heat to die?

                • As the driver of a level-4 self-driving vehicle, operating the vehicle is still your responsibility. If you sleep too deeply for the "driver intervention required" alert to wake you up, then you shouldn't sleep while your car is driving through Death Valley, or any other area where sleeping on the side of the road is potentially lethal.

                • And what happens if you have a regular gas-burner with no automation at all, and have a problem in death valley? The exact same thing. At least with the electric car, you've got a good shot at the AC still working even if something has happened to disable drivability - with a traditional ICE-powered vehicle you're probably fucked on that score too because the engine needs to run to turn the AC compressor pulley.

                  The good news is that mobile phones still exist in a world where autonomous vehicles of any level exist.

            • Mmmm, yes. However, would you really put your life in the "hands" of a level 4 system and take a nap? By definition there are situations it will fuck up and can't handle, and the -hope- is that it will get you into a safe spot and idle until you're ready to take over. But the fact that it's already in a situation it doesn't understand makes it just as likely it won't be able to find a safe spot, either. Then what happens while you're asleep? Level 4 is the most dangerous of all 5 levels due to the sense of false safety it creates.

              • Who said anything about pulling over? Obviously you'd hope that it would try to pull over if it's sure it's safe to do so, but stopping in the middle of the road is still a huge improvement over continuing when confused or out of its league, in almost all circumstances.

                Narrow winding roads with poor visibility are one of the very few exceptions, but speed limits generally factor in visibility - as a driver you should always be prepared to stop for a child sitting in the middle of the road (admittedly, drivers rarely are).

      • Human drivers, mostly through negligence and distraction, murdered 500,000 people in the world

        If someone is killed by a vehicle through negligence and distraction that is not murder. It's called manslaughter. That is not to say people can't be murdered by vehicle, only that your usage isn't correct.

        You aren't interested in saving a large percentage of those lives?

        We have 7.5 billion people on the planet. Do we need to protect everyone from possible mishap?

        you can't test for one-time negligence.
        • "We have 7.5 billion people on the planet. Do we need to protect everyone from possible mishap?"

          I didn't say we should protect them from "every possible mishap"... trying to expand the scope of the argument? If we can prevent some deaths, we should. Is that too much inconvenience for you?

          "Like that one time an autonomous vehicle's software decides not to work, or the sensors don't work, or something in how both interact with each other doesn't work?"

          It's about which one happens more often. If software errors kill fewer people than human error does, that's the better trade.

      • you can't test for one-time negligence.

        Let's treat it like drugs, then. You have to list all possible side-effects, and the chance of a fatal side effect happening.

  • by Luthair ( 847766 ) on Thursday July 09, 2020 @11:10AM (#60279232)
    He's been claiming (and selling) it for years now; one need only look at how laughably poorly the summon feature drives to question it.
    • by pr0t0 ( 216378 ) on Thursday July 09, 2020 @11:24AM (#60279300)

      ...laughably poorly

      Okay, sure, it's far from perfect and definitely funny sometimes, but don't think of it as not working as well as a human driver. Instead ask how it works at all. I mean, the car freaking drives itself! And sure, Musk definitely over-sells and under-delivers tech as much as he does with timelines, but his desire and determination to succeed is advancing this field as much as Google's efforts are. I believe his vision and efforts are laudable, even if sometimes clumsy and self-serving. These are the baby steps that will take us to fully autonomous vehicles.

    • The summon feature is much harder than 95% of full self driving if you think about it.

    • Because clearly each and every Tesla on the road is using the latest nightly builds of software that the development team has cooking, right? And in no way could there be big improvements under test that haven't been released which improve performance, right?

      They could release a new version tomorrow that is dramatically better than what you are talking about. As it turns out, the CEO has better inside information about the development of this software than some random slashtard does.

      • Re: (Score:3, Informative)

        by jellomizer ( 103300 )

        The CEO also has a vested interest in getting people to take the upsell to self-driving. I think it's $7k extra on your car. So he would want to be sure people get that feature for the future, rather than skip it because it currently isn't worth it to them.

    • He never claimed it was full self driving. For the most part it has been advertised the way it is.
      Phase 1. Cruise control that will speed up and slow down with the car in front of you.
      Phase 2. We add the ability for it to self steer and keep your car in its lane.
      Phase 3. On Multi-lane roads to be able to pass other cars
      Phase 4. Be able to drive on and off the ramp for a highway all by itself
      Phase 5. Able to stop at stop signs and traffic lights (This part is in Live Beta)
      Phase 6. Able to turn at intersections

    • Re: (Score:3, Insightful)

      by SamDaMan ( 6535474 )
      I think we can all agree that it doesn't even work right now in broad daylight and perfect conditions. Tesla themselves say you have to be fully attentive with hands on the wheel, so it's not even "autopilot" in the aircraft sense, where the pilot can at least go pee. But OK, say he's actually "very close" as he suggests. "Very close" probably means he can achieve hands-off-the-wheel on a highway in perfect conditions. Frankly, I'd be shocked if he can get the summon feature to work in parking lots all the time.
  • Progressing (Score:5, Insightful)

    by ichthus ( 72442 ) on Thursday July 09, 2020 @11:11AM (#60279236) Homepage
    Yep. It used to be worse. Then, it got better. Then, a little later, even better. The developer of the technology believes that it's close to being even better still -- even to the point of closing in on the target level of functionality.

    I have a Model 3, only with autopilot (not FSD), and the object detection and identification keeps getting better and better. Yesterday, I noticed that an orange cone lying on its side was rendered as a cone, actually lying on its side. Impressive.
    • Re:Progressing (Score:5, Insightful)

      by AlanObject ( 3603453 ) on Thursday July 09, 2020 @11:34AM (#60279368)

      I have a Model 3, only with autopilot (not FSD), and the object detection and identification keeps getting better and better. Yesterday, I noticed that an orange cone lying on its side was rendered as a cone, actually lying on its side. Impressive.

      My new Model Y is the same configuration, where you can do FSD "preview" so you can see on the screen what it identifies.

      The things that it is good at: un-obscured cars and trucks, traffic lights, stop signs, road paint, roadside "cones" and "trashcans." It can't seem to distinguish a trash can from something like a roadside equipment box.

      The things that seem good but not perfect: pedestrians, trash in the road, bicyclists. It does have a curiously high-fidelity ability to see the difference between a bicyclist and a motorcycle.

      Things that never show: Lamp posts, signs other than stop signs, fences and barriers/walls, curbs and overpasses. Nearby birds seem to be invisible.

      So if anyone is going to use this for FSD, I can only conclude that the display absolutely does not show everything the dynamic 3D model includes. If what was on the display were all it knew, it would be totally unreliable.

      That would mean Tesla is holding a lot of cards close to its chest, so anyone making remarks about its efficacy without inside knowledge doesn't know what they are talking about.

      At the end of the day, the decision to deploy is a financial decision, not an engineering one. The CFO doesn't really care how exciting a new toy is; he or she is going to look at what the effect on the balance sheet will be. I am pretty sure that CFO sign-off isn't going to be capricious on this.

      • It should go without saying on a tech site that the "visualization" on the display is not the entire dataset the autopilot hardware is working with. It's far more important to have spare compute cycles for processing the camera images to see if you're about to run into stuff than to render every single thing you're going past on the side of the road. I'm sure you know that, but some shrub around here will reply to what you said with "SEE!! It only sees a few things, you're going to run straight into something!"

    • Re:Progressing (Score:5, Interesting)

      by lazarus ( 2879 ) on Thursday July 09, 2020 @11:42AM (#60279402) Journal

      So, I have a Model 3 as well. Every few months I try out enhanced autopilot or whatever and TBH it tries to kill me every time. I live in a location where the roads are very winding, have no shoulders, and are poorly marked. There is no way that FSD is anywhere near ready for life outside of cities, towns and interstates. In fact I'd say we are still at least 5-10 years away and I have no confidence we'll get there without AI technology that hasn't been developed yet.

      It's going to be dealing with that last 20% that is going to cause all the headaches. That said, I appreciate Musk's enthusiasm and optimism (it's necessary to get anything extraordinary done).

    • Yep. It used to be worse. Then, it got better. Then, a little later, even better.

      The problem is Autopilot is only about 0.09% reliable for a trip without intervention, even within the domain it's supposedly able to handle (Interstates).

      If it's going to achieve human-level safety (and therefore be fully autonomous even within that domain, let alone ALL domains, aka L5), it needs to be something like 99.99% reliable. So the "problem" the engineers are facing is improving a system from 0.09% reliable to 99.99% reliable, which means cutting the failure rate by a factor of roughly 10,000x.

      If your engineering problem was that you needed a processor 10,000x better than what you have, that's easy to roadmap. Reliability doesn't improve that way.
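
      Back-of-the-envelope on that factor, using the numbers above (the 0.09% figure is the parent's estimate, not measured data):

        # Failure-rate arithmetic behind the ~10,000x claim
        current_success = 0.0009   # ~0.09% of trips completed without intervention
        target_success = 0.9999    # ~99.99% target for human-level reliability

        current_failure = 1 - current_success   # ~0.9991
        target_failure = 1 - target_success     # 0.0001

        print(f"Failure rate must drop ~{current_failure / target_failure:,.0f}x")   # ~9,991x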

  • by ZipprHead ( 106133 ) on Thursday July 09, 2020 @11:16AM (#60279260) Homepage

    Like an overly optimistic engineer... take what he says, double it, and then double it again.

    But it doesn't need to be perfect. This is what a lot of people and the media seem to miss. Yes, the system will break; yes, someone will die in an autonomous vehicle accident. It only needs to happen less frequently than with meat bag drivers. Last time I checked the public statistics, it's actually already doing that.
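
    For scale, here's roughly the human baseline the system has to beat, sketched from the ~36,000 US deaths cited upthread and an assumed ~3.2 trillion annual US vehicle-miles:

      # Rough human-driver fatality rate (the mileage figure is an assumption)
      us_deaths_per_year = 36_000
      us_vehicle_miles_per_year = 3.2e12

      rate = us_deaths_per_year / (us_vehicle_miles_per_year / 1e8)
      print(f"~{rate:.1f} deaths per 100 million vehicle-miles")   # ~1.1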

    But anyhow, yeah, we're still years away from the highway safety bureau allowing it.

    • ZipprHead wrote:

      ...Yes, the system will break; yes, someone will die in an autonomous vehicle accident. It only needs to happen less frequently than with meat bag drivers.

      From everything I've seen about Teslas, the bar is set much higher than for other vehicles. When three caught fire, it was a massive failure of the technology; when somebody crashed while watching a Harry Potter movie on a laptop, it was Autopilot's fault. If you look at the rate at which these incidents happen in regular, gas-powered, "meat bag"-driven vehicles and compare it to the rate at which Teslas have problems, you'll see that Teslas are a lot safer and more reliable.

      Tesla and Musk

      • getting human buyers (not to mention regulators who are political entities) out there to accept any kind of failure that is greater than zero is going to be "difficult".

        Clearly not, otherwise there wouldn't be videos on youtube of people reading with autopilot on.

    • Everyone believes they are a great driver even if they are absolutely horrible. If people keep hearing on the news that "Autopilot kills yet another," public opinion is going to sour, and regulators and lawmakers are going to jump in to rein it in, even if statistically it is much safer than your average meat-bag.
    • by RobinH ( 124750 ) on Thursday July 09, 2020 @12:45PM (#60279698) Homepage
      Having done a lot of automation, particularly with computer vision, there are some cases where you solve a list of problems and when you're done you're at 99.9%, because it's basically a case of "it works or it doesn't." Unfortunately, self-driving is in the realm of the other class of problem: you're at 92%, which is good enough for demos and makes investors feel like you've made a lot of progress; then you're at 94%, so you're making progress; two years later you're at 95%, and three years later you're at 95.4%. I've seen it happen. There are some classes of problems where solutions only approach good enough and never really get there. A bunch of cameras understanding the world is definitely in that class. The real solution (it seems clear to me) is that you need to re-engineer roads to be "autonomous friendly."
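
      A toy model of that asymptote (the numbers are purely illustrative, not from any real project): suppose each additional year of effort only fixes a fraction of the remaining failure cases.

        # Diminishing returns: accuracy creeps toward 100% but never arrives
        accuracy = 0.92
        for year in range(1, 6):
            accuracy += (1 - accuracy) * 0.3   # each year fixes 30% of what's left
            print(f"Year {year}: {accuracy:.2%}")
        # Year 1: 94.40% ... Year 5: 98.66% -- progress, but the gap never closes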
    • by olddoc ( 152678 )
      It is already doing this (being safer in terms of deaths per mile) in good conditions on highways. It is not doing this on side roads, in rain or snow.
  • by Way Smarter Than You ( 6157664 ) on Thursday July 09, 2020 @11:19AM (#60279272)

    At last check, over a year ago, we were promised FSD by the end of 2019.

    We will have real FSD with no human control required right after we get cold fusion powering our flying cars. Then at least autopilot might make sense.

  • by Trailer Trash ( 60756 ) on Thursday July 09, 2020 @11:20AM (#60279280) Homepage

    They're close now, and probably will be close for the next 10-15 years.

  • How many Teslas are actually driven around using even the existing assist features? None as far as I can tell. I get cut off and tailgated by Teslas just as much as by Dodge pickups.

    • Considering that Tesla has literally hundreds of millions of miles' worth of Autopilot driving data, yeah, it gets used. Your personal anecdote about shitty drivers doesn't discount that they've sold hundreds of thousands of vehicles. Also, all it takes is moving the steering wheel a bit and Autopilot disengages, so if they want to be shitty drivers, there is nothing preventing a shitty driver from driving shitty.

  • Given the track record of companies and their computer security policies, I wonder how long until this is hacked to do something like drive like a crazy person through a park, or just run off a bridge? Remember Defcon 21 when the researchers hacked two vehicles through a cellular connection and the car's entertainment system? https://www.cnet.com/news/car-... [cnet.com]
  • by chispito ( 1870390 ) on Thursday July 09, 2020 @11:33AM (#60279362)
    I'm assuming like most ambitious projects, it's all the details around the last 5% that make it nearly impossible to ship.
  • Unless Elon Musk and his stable of engineers somehow made a massive breakthrough in Artificial Intelligence, creating the world's first true General AI, conscious, fully cognizant, capable of actual thought and in all significant ways at least comparable to a human brain, then no, he isn't any closer than anyone else.

    Stuff and nonsense. More marketing hype. Empty promises. Don't fall for it, lads.
  • by fluffernutter ( 1411889 ) on Thursday July 09, 2020 @11:35AM (#60279372)
    Musk is simply not seeing around A BILLION EDGE CASES.
  • What he said was that *basic* functionality for Level 5 should be complete by year end, with lots of remaining work to make it reliable in the real world. If it is ever as good *overall* as a human driver in the myriad situations of the real world, it will likely be many, many years in the future, and probably after Starlink provides high-speed, low-latency connectivity to allow distributed fleet training, IMHO.
  • Can't change its spots.

  • In other news (Score:2, Informative)

    In other news, Elon Musk endorses Kanye West's Presidential bid [marketwatch.com] after a night of fat blunts and margaritas with Kim K.

  • Since there are no more cars on the road
  • We will have our fully autonomous cars when we get our flying cars.
  • What a lot of observations miss is that, once there's a critical threshold of vehicles with autonomous capabilities on a road, they should wirelessly mesh and no longer rely solely on local environmental input. This would dramatically improve the safety of this technology, enabling a "full route" image from every vehicle's perspective to be shared with everyone in the immediate network. Any changes ahead are sent to upcoming cars long before their own sensors detect them locally.

    This is also where driving becomes more train-like.

    • by Jeremi ( 14640 )

      What a lot of observations miss is that, once there's a critical threshold of vehicles with autonomous capabilities on a road, they should wirelessly mesh and no longer rely solely on local environmental input.

      ... assuming they can afford to trust the data provided to them by the other cars, of course. It only takes a few malfunctioning (or hacked) data-transmitters telling everyone else that driving off the side of a bridge is a good shortcut, to ruin the whole party.
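
      Any real mesh would presumably have to authenticate peer reports and sanity-check them against local sensors before acting on them. A minimal sketch of that idea (hypothetical message format and shared key, not any actual V2V standard):

        import hashlib
        import hmac
        import json

        SHARED_KEY = b"fleet-provisioned-secret"   # stand-in for a real PKI/certificate scheme

        def accept_peer_report(raw: bytes, signature: str):
            """Return the parsed report only if it authenticates AND looks plausible."""
            expected = hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, signature):
                return None   # forged or corrupted message: ignore it
            report = json.loads(raw)
            if not -90 <= report.get("lat", 999) <= 90:
                return None   # implausible claim: don't drive off that bridge
            return report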

  • Well, his timelines are crap and his social skills are what you'd expect of an old-time Slashdotter (brought up in a cellar with minimal human contact).
    BUT he always seems to pull the rabbit out of the hat, eventually... Elon Time. Personally I've given up on seeing autonomous driving in my lifetime, but I could yet be surprised... I can never forget the "impossible" first successful barge landing... or land-ing ;)

  • So far, Tesla's Autopilot is Level 2+. If they are claiming to be "close" to Level 4/5, they must have a quantitative metric for how close they really are. That metric is a really tricky challenge in itself, and if Tesla has a reliable one, that alone is a huge advance. In fact, such an advance would go a long way toward solving the historically tricky problem of validating complex software systems, which have many inputs along with sequential state over long periods of time.
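
    One plausible shape for such a metric (a sketch of the general idea, not Tesla's actual methodology): miles per disengagement, with a crude statistical lower bound so one lucky stretch of road doesn't look like progress.

      import math

      def miles_per_disengagement(miles: float, events: int):
          """Point estimate plus a rough 95% lower bound, treating
          disengagements as a Poisson process (a simplifying assumption)."""
          rate = events / miles
          upper_rate = (events + 1.96 * math.sqrt(events)) / miles
          return 1 / rate, 1 / upper_rate

      est, low = miles_per_disengagement(1_000_000, 250)   # hypothetical fleet numbers
      print(f"~{est:,.0f} mi/disengagement (lower bound ~{low:,.0f})")   # ~4,000 / ~3,559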

"Nuclear war can ruin your whole compile." -- Karl Lehenbauer

Working...