Transportation AI

iPhone Hacker Geohot Builds Self-Driving Car AI (bloomberg.com) 98

An anonymous reader writes: George Hotz, known for unlocking early iPhones and the PlayStation 3, has developed an autonomous driving system in his garage. "Hotz's approach isn't simply a low-cost knockoff of existing autonomous vehicle technology. He says he's come up with discoveries—most of which he refuses to disclose in detail—that improve how the AI software interprets data coming in from the cameras." The article has a video with Hotz demonstrating some basic autonomous driving similar to what Tesla rolled out earlier this year. He's clearly brimming with confidence about what the system can accomplish with more training.
This discussion has been archived. No new comments can be posted.


  • by Joe_Dragon ( 2206452 ) on Wednesday December 16, 2015 @01:24PM (#51131381)

    Does he have insurance coverage for the idea he's selling? It seems very risky, as he could be on the hook for big damages if stuff goes wrong.

    Is he willing to have his code go through something like an FAA code audit?

    How much redundancy is in that system?

    Do his friends really want to take on the risk?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      " “I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”"

      We rebels don't need insurance! If we kill your family on the highway, well LOL dude. #hacktheplanet

      • by OzPeter ( 195038 ) on Wednesday December 16, 2015 @01:57PM (#51131665)

        " “I live by morals, I don’t live by laws,” Hotz declared in the story. “Laws are something made by assholes.”"

        We rebels don't need insurance! If we kill your family on the highway, well LOL dude. #hacktheplanet

        Not only that, but you can't tell us we're wrong.

        “For the first time in my life, I’m like, ‘I know everything there is to know’”

      • If killing people on the highway shakes out the bugs and delivers quality AI for cars a year or two sooner than a measured approach would, you will have saved several million lives on net at the cost of a handful.

        This is similar to the argument that the FDA is a net killer -- the precautionary principle delays the introduction of treatments rather than letting some drugs get to market a little early, and a little dangerously.

        • by Lunix Nutcase ( 1092239 ) on Wednesday December 16, 2015 @02:45PM (#51132053)

          If killing people on the highway shakes out the bugs and delivers quality AI for cars a year or two sooner than a measured approach would, you will have saved several million lives on net at the cost of a handful.

          So you're going to make sure to offer up yourself and your family to be the first people to be killed, right?

        • by Desler ( 1608317 )

          If killing people on the highway shakes out the bugs and delivers quality AI for cars a year or two sooner than a measured approach would, you will have saved several million lives on net at the cost of a handful.

          Then why aren't you already volunteering to be one of the other drivers around him while he tests his system? You're all gung-ho about deaths from testing being fine, yet you don't seem to be mentioning that you'll sacrifice yourself first.

          This is similar to the argument that the FDA is a net killer -- the precautionary principle delays the introduction of treatments rather than letting some drugs get to market a little early, and a little dangerously.

          It's not really similar at all. Drug trials done for FDA approval use people who volunteer and give consent to be part of the trial. They aren't just random people who get injected with the drug without their consent.

        • by KGIII ( 973947 )

          I suppose that's some *sort* of logic. Would you have applied it to AIDS victims or to Muslims if you had been given the chance? How about white people? They've probably killed more people in the past few hundred years. If we kill all the white people then that'll surely pan out as a life-saver eventually - with enough time.

          We can save infinite lives if we just kill all the humans at once. We should do that. Think of the children!

          So, yes, it was *some sort* of logic.

        • If killing people on the highway shakes out the bugs and delivers quality AI for cars a year or two sooner than a measured approach would, you will have saved several million lives on net at the cost of a handful.

          The alternative is that his hack kills some people, which leads to regulations that lock out future testing, which in turn holds back the technology by decades, thus dooming even more people.
          Who should be the person or people who make that decision?

    • An FAA code audit is a joke, so I'm not sure what you mean by that. How much redundancy is in any system? I know how much redundancy the competitors have built in: not much!
    • by Okian Warrior ( 537106 ) on Wednesday December 16, 2015 @02:20PM (#51131869) Homepage Journal

      Does he have insurance coverage for the idea he's selling? It seems very risky, as he could be on the hook for big damages if stuff goes wrong.

      Is he willing to have his code go through something like an FAA code audit?

      How much redundancy is in that system?

      Do his friends really want to take on the risk?

      I think you're confusing engineering with science.

      The difficult part of AI, or science in general, is getting something that works. Once you have a working demo, anyone can add the reliability, the redundancy, and do a code audit.

      And indeed, visionary investors might examine the idea and think "I'll take on the responsibility for liability and development, because I believe that the value of your ideas will be worth more than the expense of dealing with those issues. Sell me your idea."

      But it all starts with getting something to work.

    • It's Geohot. He's a known uncontrolled dumbass with little forethought and lots of hyperfocus.
  • by jeffb (2.718) ( 1189693 ) on Wednesday December 16, 2015 @01:34PM (#51131485)

    We'll see. I do kind of hope that his youthful arrogance doesn't get him killed. It seems unlikely that one kid will be able to outdo the big-budget teams of researchers working on this problem -- but I don't think it's impossible.

  • Good luck with Michigan's unplowed roads and potholes! I doubt an autonomous car will ever be able to handle it here without killing people.

    • Those only become a problem at high speeds. Limit the car to 30 MPH, and they vanish.

      If you can sit back and read, rock out, or watch a video, the time delay - which also saves you fuel (and reduces air pollution) - becomes meaningless. So your 30 minute commute takes 45 minutes. Not a significant issue.

      • So your 30 minute commute takes 45 minutes. Not a significant issue.

        (45 minutes - 30 minutes) x 2 ways x 5 days/week x 50 weeks/year = 7500 minutes / year = 5.2 days / year.

        Most people have better things to do with 5.2 entire days per year than to live up to your low expectations.
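
        A quick sanity check on that arithmetic in Python (the 30- and 45-minute figures are just the numbers above):

        # Annual time cost of a slower, speed-capped commute.
        old_commute_min = 30   # one-way commute today, in minutes
        new_commute_min = 45   # one-way commute at the capped speed
        trips_per_day = 2
        days_per_week = 5
        weeks_per_year = 50

        extra_min = (new_commute_min - old_commute_min) * trips_per_day * days_per_week * weeks_per_year
        print(extra_min)            # 7500 minutes per year
        print(extra_min / 60 / 24)  # ~5.2 days per year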

        • No, they don't. You will probably save time vs. newly-non-existent traffic jams, except for the occasional ones caused by that asshole who refuses to let robots drive.

          And a few years after that, you will want to outlaw humans driving so we can do away with stop lights and stop signs altogether.

        • by Anonymous Coward

          Most people have better things to do with 5.2 entire days per year than to live up to your low expectations.

          Most people who say they have better things to do are actually going to end up playing with their phones.

        • by SirSlud ( 67381 )

          .. says the guy posting on /., which you could be doing in a self-driving car.

      • by narcc ( 412956 )

        Where are you from, southern California?

        Around here, I've driven in conditions where 30mph was akin to ludicrous speed.

      • If all your hobbies can be completed in the car, then you are right. But what if you'd rather spend your time doing something more active, like playing a game of basketball? You now have 30 minutes less in your day in which you can enjoy your chosen activity. Maybe it's alright with you to sit in a car for 1.5 hours every day, but I'd rather not be sitting in a car for any longer than necessary.

    • Humans can't handle it without killing people either.

      It's odd that people take the things that humans are the absolute shittiest at, as things that self-driving cars will never do. Self-driving will start with the easy, repetitive stuff (as it did ages ago with cruise control and is steadily increasing), then the next inroads will be things humans suck at.

      Self-driving cars are much more likely to have problems with situations humans find easy but unusual, like a human doing traffic control at intersections.

      • Well, assuming multiple cars are on the road and they pool learning/algorithms, eventually they'd log enough hours to figure out even a snowstorm, Godzilla attack, or human traffic controller.

  • by sjbe ( 173966 ) on Wednesday December 16, 2015 @01:37PM (#51131497)

    The article has a video with Hotz demonstrating some basic autonomous driving similar to what Tesla rolled out earlier this year.

    Basic autonomous driving is (relatively) easy to do as long as you don't care much about it being actually useful in the real world. I suspect many good programmers and engineers could accomplish something functional (and dangerous) pretty quickly. It's not much more than an RC car with some sensors. Think Roomba on steroids. The problem is all the corner cases needed to actually make the system safe in real world driving. That is highly non-trivial.

    • Exactly, but I think the key here is he's saying he's made "discoveries" to help the AI be much more robust. I don't believe him and you shouldn't either, but that's the takeaway. Autonomous vehicles existed, technically, back as far as WW2 (self-guiding V rockets; can't recall if it was the V-1 or V-2, but they were self-guiding). That in itself is meaningless; if I wanted a car that can drive itself, I could probably hodgepodge it together quickly. It would result in my death and probably a few others because
      • by KGIII ( 973947 )

        It was the V-2, and it was, for some limited definition, self-guided. The V-1 was just good until the power went away, and then it went silent and fell where physics told it to. The controller in the V-2 was a little more complicated, but not *vastly* so, IIRC. It still couldn't go much of anywhere other than where you pointed it, but it was able to stabilize itself a bit better. It basically followed a predetermined path using accelerometers and gyroscopes and wasn't all that controllable, but it was autonomous, to some extent

        • No limited definition. It was fully self-guided, as was the V-1. Technically it is not difficult, especially if one were to make it with modern-day off-the-shelf components.
    • by monkeyxpress ( 4016725 ) on Wednesday December 16, 2015 @02:53PM (#51132123)

      The problem is all the corner cases needed to actually make the system safe in real world driving. That is highly non-trivial.

      Indeed, and that is what seems quite interesting about his approach. Basically he is saying that he is developing a system that can generate all those rules and corner cases itself, without a human having to quantify each scenario and code the rules into the machine. He states in the video that the car has gotten to where it is now (basic highway driving) by teaching itself. If it has, and his approach is extendable, then this is quite an interesting solution to the problem, precisely because it may deal with the non-trivialities you describe in a, well, comparatively trivial way.

      Unfortunately, based on the article and video, there isn't really any way to determine whether he will be able to extend his system to give better performance, or even whether his system is just one of those 'learning systems' that is actually so highly tuned to the problem domain that it is essentially just an obfuscated rule based program. I guess we will have to wait for it to either get better, or for him to release some more information.

      The main thing that makes me suspicious is why he has gone to the media about this now. If he has gotten this far with a design that actually does use learning, then why not spend a bit longer and get it to the point where he can demo it in less predictable environments? That would get us all interested. As it stands, his current system only works in very predictable situations, so without more information it is impossible to know whether this is a scam or not.
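
      For what it's worth, the usual shape of such a self-teaching system is behavioral cloning: record camera frames alongside what the human driver did, fit a model to those pairs, and let the fitted model produce steering instead of hand-written rules. A minimal sketch under those assumptions (numpy only; the frame size, random data, and function names are illustrative, not anything disclosed in the article):

      import numpy as np

      # Behavioral cloning in miniature: learn steering from (frame, human_steering)
      # pairs instead of hand-coding if/then rules. The frames here are random stand-ins.
      n_samples, frame_pixels = 1000, 32 * 32
      frames = np.random.rand(n_samples, frame_pixels)       # flattened grayscale frames
      human_steering = np.random.uniform(-1, 1, n_samples)   # recorded human steering angles

      # Fit a linear model: steering ~= frames @ w (least squares; a real system would use a deep net)
      w, *_ = np.linalg.lstsq(frames, human_steering, rcond=None)

      def predict_steering(frame):
          # Predict a steering angle in [-1, 1] from a flattened camera frame.
          return float(np.clip(frame @ w, -1.0, 1.0))

      print(predict_steering(np.random.rand(frame_pixels)))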

      • An interesting issue (and it applies not just to this guy but to the big guys too) is the recent research showing that pretty much all neural nets will incorrectly classify inputs that are almost identical to inputs they classify correctly (including the deep nets that are showing the best results in image classification, etc.). The researchers tweaked a very small amount of the input in the direction that maximized error, causing misclassification, and found that they were able to do this consistently across nets of varying architectures.
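
        For the curious, the perturbation trick in that research boils down to nudging every input value a tiny step in the direction that increases the classifier's loss (the "fast gradient sign" idea). A toy version on a linear classifier (numpy; the weights and "image" are random stand-ins, and the real attacks target trained deep nets):

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        rng = np.random.default_rng(0)
        w = rng.normal(size=32 * 32)   # weights of a toy linear "image" classifier
        x = rng.random(32 * 32)        # a toy input, treated as having true label 1
        y = 1.0

        p_clean = sigmoid(w @ x)

        # Move each pixel a tiny step in the direction that increases the logistic loss:
        # d(loss)/dx = (p - y) * w, then take eps * sign(gradient).
        eps = 0.02
        grad_x = (p_clean - y) * w
        x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

        print("score before:", sigmoid(w @ x))      # probability of class 1
        print("score after: ", sigmoid(w @ x_adv))  # pushed toward class 0 by a tiny change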
  • So the gist of it is that, after capturing data and training the AI, the car functions using only regular old cameras. The prohibitive cost in current self-driving cars is the expensive sensors, so his system is much more economical. He envisions a $1,000 kit to turn cars into self-driving cars (obviously only certain cars, with the ability to control steering digitally, etc., would be supported).
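
    For a sense of how cheap the camera-only perception side can be, here is a minimal lane-marking sketch using off-the-shelf OpenCV: classical edge detection plus a Hough line transform (the file name is a placeholder, and this is generic computer vision, not Hotz's system):

    import cv2
    import numpy as np

    frame = cv2.imread("dashcam_frame.jpg")   # placeholder path to a dashcam still
    if frame is None:
        raise SystemExit("put a real image at dashcam_frame.jpg first")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # edge map of the road scene

    # Straight-ish line segments in the edge map are lane-marking candidates.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imwrite("lanes.jpg", frame)           # frame with candidate lane markings drawn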

  • ...for without you, I wouldn't have root on my phone - Verizon would have taken it from me. I'd buy your car any day.

    Make it rain!

  • Key Phrase (Score:4, Insightful)

    by PvtVoid ( 1252388 ) on Wednesday December 16, 2015 @01:49PM (#51131607)

    most of which he refuses to disclose in detail

    Snake, meet oil.

  • by Spy Handler ( 822350 ) on Wednesday December 16, 2015 @02:09PM (#51131771) Homepage Journal

    Freeway autonomous driving is doable. But on regular streets it's hard, maybe impossible given current roads and parking lots.

    But a freeway-only fully autonomous vehicle is still very valuable. Long-haul trucks and RVs spend most of their driving time on freeways. If a trucker can sleep 8 hours while the truck drives itself on the freeway and then take over only when the truck exits the freeway, the trucking company can save huge amounts of money. You can basically double the productivity of a driver/truck combo, since you can operate it continuously instead of having to shut down for the night. Also it's a plus from a safety standpoint; tired sleep-deprived truck drivers cause a lot of accidents. It's worth doing.

    • You don't need to shut down trucks overnight - they don't shut down planes while pilots sleep; they have pilots on shifts. And, before that, the Pony Express! Large trucking companies could have drivers on shifts.
    • by PRMan ( 959735 )
      It doesn't matter. Hours-of-service (HOS) rules prohibit him from driving longer even if he was asleep the whole time. The law will have to change first.
  • "Amazing: Frankly, I think you should just work at Tesla,” Musk wrote to Hotz in an e-mail. “I’m happy to work out a multimillion-dollar bonus with a longer time horizon that pays out as soon as we discontinue Mobileye.”

    “I appreciate the offer,” Hotz replied, “but like I’ve said, I’m not looking for a job. I’ll ping you when I crush Mobileye.”

    Musk simply answered, “OK.”

  • by Anonymous Coward

    The video makes it sound like geohot's big advance is preferring a learning algorithm over a rules-based approach. Rules-based approaches were popular in AI in the '80s, when researchers thought they could emulate reasoning with a system of logical rules. That approach has not been in vogue for 20+ years, so I'm wondering what is revolutionary about his approach to AI?

    • by PRMan ( 959735 )
      Everyone keeps forgetting that it's not better because people in general (and programmers in particular) tend to be control freaks.
  • “‘If’ statements kill.” They’re unreliable and imprecise in a real world full of vagaries and nuance. It’s better to teach the computer to be like a human, who constantly processes all kinds of visual clues and uses experience,

    I think he meant to say that "They're reliable and precise in a real world full of vagaries and nuance." The statement is insightful - if statements are perfectly accurate and do exactly what they were coded to do, no more no less. Normally, this is what makes computers really powerful and reliable. But when dealing with humans, it is what limits them. It's an analog world out here.

    • That's like suspecting a compiler bug. No, he said what he said. An if/then can only capture a (small) finite input set which the programmer can foresee at the time of coding. When you face a new or unknown input, you fall through the series of if/then/else and into the final else, where you don't know what to do. That's what he means by 'unreliable and imprecise'. Sure, computers are great at mapping known inputs to required outputs; they fail when given unforeseen inputs. AI which does some kind of curve-fitting, kno
      • by MobyDisk ( 75490 )

        I'm unsure what you mean by compiler bug. I think we are all saying the same thing, but the words "reliable" and "precise" are not being used correctly.

        Let us be clear on terms:
        reliable: consistently good in quality or performance; able to be trusted.
        precise: marked by exactness and accuracy of expression or detail.
        flexible: ready and able to change so as to adapt to different circumstances.

        If statements are reliable and precise. AI is flexible. This is a trade-off.

        • I can only let you think this over more and see why, in the context he is talking about, if/then is imprecise. I do agree 'unreliable and imprecise' may not be the most fitting words here... maybe he could've said "They are inadequate in a real world..." or "They lack completeness in describing..." or "They can't foresee and handle every situation" or "They are too cumbersome/tedious to..." etc. [Basically there are too many situations that you can't enumerate in a series of if/then... the developer will always mis
    • I think he meant:

      if ( is_safe() )
          proceed_new_heading()
      else
          cancel_new_heading()

      vs.

      while ( true )
          evaluate_heading_inputs_and_update_heading_and_hope_for_the_best()

      In the first snippet, the programmer believes that safety can be captured with an if/else clause, which is not the case in the real world.
      The real world is not if/else. The real world is fuzzy logic. The real world is more akin to the Monte Carlo method:
      a collection of approximations.
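
      Put concretely, the second style replaces a hand-enumerated boolean with an estimate learned from data, and the controller acts on confidence rather than on a fixed condition list. A toy contrast (Python/numpy; the feature names and weights are invented for illustration):

      import numpy as np

      # Hand-coded rule: the programmer enumerates the conditions in advance.
      def is_safe_rule(gap_m, closing_speed_mps):
          return gap_m > 30 and closing_speed_mps < 2

      # Learned estimate: a fitted model returns a probability of "safe" and the
      # controller thresholds on confidence instead of on a hard-coded condition list.
      w = np.array([0.15, -0.8])   # pretend these weights came from training data
      b = -2.0

      def p_safe_learned(gap_m, closing_speed_mps):
          z = w @ np.array([gap_m, closing_speed_mps]) + b
          return 1.0 / (1.0 + np.exp(-z))

      for gap, closing in [(50, 1.0), (25, 3.0), (35, 2.5)]:
          print(gap, closing, is_safe_rule(gap, closing), round(p_safe_learned(gap, closing), 2))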

  • https://www.youtube.com/watch?... [youtube.com]

    Except for perhaps the learnability, which is more of an AI than an automated driving advance? Though, in any case it would certainly be impressive to duplicate this independently.

    It sounds like the gas and brake controllers are fairly commonly built in, but I was not aware of the steering controls, and he didn't mention adding any motors.
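
    For reference, cars with electric power steering and factory driver-assist features typically already accept steering, gas, and brake requests over the CAN bus, which is presumably what a retrofit kit would talk to. A heavily hedged sketch with python-can (the arbitration ID, payload layout, and scaling below are pure placeholders; real messages are model-specific, and sending guesses to a real car is dangerous):

    import struct

    import can  # pip install python-can

    # Open a Linux SocketCAN interface; "can0" is the conventional interface name.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    def send_steering_request(torque_nm):
        # Pack a made-up "steering torque request": a 2-byte signed value plus padding.
        # The scaling (0.01 Nm per bit) and the 0x123 ID are placeholders, not a real car's format.
        raw = int(torque_nm * 100)
        payload = struct.pack(">h6x", raw)
        msg = can.Message(arbitration_id=0x123, data=payload, is_extended_id=False)
        bus.send(msg)

    send_steering_request(0.5)  # request a small nudge of steering torque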
  • OK, the kid is bright, but he's also arrogant, reckless, and probably a bit insane. But setting aside his personality, I don't understand the negativity on this forum towards another geek who hacks things together to make them work. There are a lot of hard problems in what he is working on, and if he can come up with a new way to do computer vision, I would be really happy too. The current state-of-the-art in this field is the convolutional neural network (CNN), which, basically, is just a kind of brute-force pattern matching.
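
    For readers who haven't met one, that pattern matching is a stack of learned convolution filters with a small decision head on top. A minimal PyTorch sketch of the shape of such a network (the sizes are arbitrary, and this untrained toy is nothing like a production vision stack):

    import torch
    import torch.nn as nn

    # Tiny CNN: learned filters slide over the image, pooling shrinks the feature maps,
    # and a fully connected head maps them to a single output (e.g. a steering value).
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel 64x64 frame -> 16 feature maps
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 64x64 -> 32x32
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 32x32 -> 16x16
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),
    )

    frame = torch.rand(1, 3, 64, 64)                  # a fake camera frame
    print(model(frame))                               # one untrained output value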
    • by m2pc ( 546641 )

      As others have mentioned, the problem isn't whether he may be onto something; it's the fact that he's using public roadways as his "lab" to test this stuff out. He could have the best AI in the industry, but fail to account for some of the "corner cases" where his AI is untrained, and then a family is killed. This doesn't compare to companies like Google and Tesla, which have access to private tracks with controlled environments and safety nets in place should something go wrong.

      I'm all for the "
