When Autonomous Cars Teach Themselves To Drive Better Than Humans (ieee.org) 86
schwit1 shares a report from IEEE Spectrum, written by Evan Ackerman: A few weeks ago, the CTO of Cruise tweeted an example of one of their AVs demonstrating a safety behavior where it moves over to make room for a cyclist. What's interesting about this behavior, though, is that the AV does this for cyclists approaching rapidly from behind the vehicle, something a human is far less likely to notice, much less react to. A neat trick -- but what does it mean, and what's next? In the video [here], as the cyclist approaches from the rear right side at a pretty good clip, you can see the autonomous vehicle pull to the left a little bit, increasing the amount of space that the cyclist can use to pass on the right.
One important question that we're not really going to tackle here is whether this is even a good idea in the first place, since (as a cyclist) I'd personally prefer that cars be predictable rather than sometimes doing weirdly nice things that I might not be prepared for. But that's one of the things that makes cyclists tricky: we're unpredictable. And for AVs, dealing with unpredictable things is notoriously problematic. Cruise's approach to this, explains Rashed Haq, VP of Robotics at Cruise, is to try to give their autonomous system some idea of how unpredictable cyclists can be, and then plan its actions accordingly. Cruise has collected millions of miles of real-world data from its sensorized vehicles that include cyclists doing all sorts of things. And their system has built up a model of how certain it can be that when it sees a cyclist, it can accurately predict what that cyclist is going to do next.
Essentially, based on its understanding of the unpredictability of cyclists, the Cruise AV determined that the probability of a safe interaction is improved when it gives cyclists more space, so that's what it tries to do whenever possible. This behavior illustrates some of the critical differences between autonomous and human-driven vehicles. Humans drive around with relatively limited situational awareness and deal with things like uncertainty primarily on a subconscious level. AVs, on the other hand, are constantly predicting the future in very explicit ways. Humans tend to have the edge when something unusual happens, because we're able to instantly apply a lifetime's worth of common-sense knowledge about the world to our decision-making process. Meanwhile, AVs are always considering the safest next course of action across the entire space that they're able to predict.
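The "probability of a safe interaction" idea from the summary can be sketched in a few lines. This is a hypothetical toy, not Cruise's actual planner: it assumes (purely for illustration) that a cyclist's lateral drift over the next couple of seconds is roughly Gaussian, with faster cyclists drifting more, and it widens the passing margin until the estimated conflict probability falls below a threshold.

```python
import random

def predicted_margin(base_margin_m, speed_mps, n_samples=1000):
    """Monte Carlo sketch: sample possible lateral deviations of a cyclist
    and pick a passing margin that keeps the estimated conflict probability
    low. All numbers and the drift model are made up for illustration."""
    # Made-up model: lateral drift is ~normal, and faster cyclists drift more.
    sigma = 0.3 + 0.05 * speed_mps
    drifts = [random.gauss(0.0, sigma) for _ in range(n_samples)]
    # Estimated probability the cyclist drifts further than our margin.
    p_conflict = sum(d > base_margin_m for d in drifts) / n_samples
    # Nudge left in 10 cm steps until the estimate is under 1%.
    margin = base_margin_m
    while p_conflict > 0.01:
        margin += 0.1
        p_conflict = sum(d > margin for d in drifts) / n_samples
    return margin
```

Under this toy model, a cyclist approaching "at a pretty good clip" gets a wider margin than a slow one, which is the qualitative behavior the article describes.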
Re: (Score:1)
One important question that we're not really going to tackle here is whether this is even a good idea in the first place, since (as a cyclist) I'd personally prefer that cars be predictable
I'd prefer that cyclists follow the law. Cease blowing stop signs. Don't weave in and out of traffic. Don't change lanes without signaling. DON'T switch between sidewalk and street at your whim. Cyclists as a body are far more unpredictable than motor vehicles.
If I had my druthers, they'd be restricted to sidewalks since their proportional mass more closely resembles pedestrians than motor vehicles (they are PEDal powered, after all), and collisions between bicycle and pedestrian would be far
Re: Learning AI (Score:2)
Cyclist ... (Score:2, Troll)
Re: Cyclist ... (Score:2)
You mean in the bike lane?
Re: Cyclist ... (Score:4, Informative)
Re: (Score:3)
I sometimes find myself ignoring "random acts of kindness" by car drivers who have the right of way yet decide to wait for me to let me pass. This leads to a stupid Mexican-standoff wait-a-thon, but there is a fundamental lack of trust on my side; I'd rather stick to right-of-way.
Re: Cyclist ... (Score:3)
Re: (Score:3)
and when the system can't id one & Elaine Herz (Score:3)
and when the system can't id one & Elaine Herzberg dies?
Having a stronk? (Score:3)
uhh.... should I call you a Bondulance? [kym-cdn.com]
Re: and when the system can't id one & Elaine (Score:2)
It wasn't that dark. The original videos were altered to make it look darker than it actually was.
Re: (Score:1)
Dusk is actually the worst time for human vision.
Re: (Score:3)
Irrelevant because the road there was exceedingly well lit, something that cheap rubbish video camera didn't capture. I would hope that the Uber car wasn't using the same rubbish camera as one of its driving sensors.
Re: (Score:1)
One of my co-workers used to drive that street daily; he said she had to have been stupid as a rock to cross there any time of day, much less at night.
Re: (Score:3)
The fact that the system even spotted her, but could not classify her predicted path due to the erratic nature of her behavior demonstrates its capabilities.
My understanding of that was that the car was in fact able to track and predict her but was literally incapable of braking to avoid hitting her.
Re:and when the system can't id one & Elaine H (Score:5, Informative)
In fairness, that Uber system knew full well that she was there but Uber had turned off the safety systems which would have stopped the car in ample time and the driver was watching video on her gadget. The fault is likely both with Uber for not getting adequate confirmation that the driver knew the car wouldn't stop for anything and the driver for not paying attention to this and subsequently to the road.
Bad cyclists, are unpredictable. (Score:1)
But that's one of the things that makes cyclists tricky: we're unpredictable."
Unpredictable? Ever ask yourself why? Yes, I'm serious.
Every other cyclist has to abide by all the same laws that you do when it comes to operating a bicycle on a public road, especially one that you share with deadly automobiles.
The problem I have with this statement is the fact that while cyclists are unpredictable, a GOOD cyclist should not be. Based on rules and visual signals, they should be anything but unpredictable. Again, dedicated lanes and rules, make them more predictable. Kind of lik
Re: Bad cyclists, are unpredictable. (Score:5, Insightful)
Re: (Score:3)
I don't mind when cyclists switch between road and footpath (which is legal where I live) as that's something you can predict and make allowance for. If it happens then you're ready at least.
Re: (Score:3)
Filtering path-side of vehicles is perfectly legal in the UK and safe if done right (when traffic is crawling along). Footpath cycling isn't legal here unless there are signs saying otherwise.
Re: (Score:3)
Actually, cyclists ARE more unpredictable than cars, though not as much so as skateboarders. Motorcycles are a lot more predictable than bicycles, though when they have a problem it's a lot worse. Two-wheeled vehicles (current designs) are inherently unstable, and small variations in control can cause large changes in action. A small rock in the road can send a bicycle into a rapid change of direction that needs to be rapidly corrected. Etc. I once had my rear hub strip its gears while I was riding, and
I'm a cyclist and passing on the right is stupid (Score:5, Insightful)
Re: (Score:2)
It's a trade-off. Approaching the median to pass on the left is its own danger, especially at intersections and stop signs.
Re: (Score:3)
Wholly carp, you're going to go out into the middle of the street and assume that the driver sees you coming and that every other driver notices you, then cross back in front of them assuming that they're not going to speed up unexpectedly. How the frack are you still alive?
The first thing that I was taught when I was old enough to ride in the street is "Assume you are invisible".
Re: (Score:2)
Funny how "Hey, traffic may be slow but it's still moving... I'll just not pass at all and wait my turn" never seems to have even popped into your brain when you replied to OP.
Re: (Score:2)
If the traffic is moving under 10 mph then I'll pass on either side; your post is based upon an assumption that may not be true.
Re: (Score:2)
Unless it's a red light, passing on the right is a terrible idea. I've seen cyclist do that and I think WTF dude!
If you're being overtaken by a cyclist then the odds of you being a danger are far lower than when a cyclist isn't able to catch up with you. A school zone speed limit is faster than a typical road cyclist can sustain. Mind you if you frequently take turns without indicating that may be more of a bad idea for you rather than the people around you wondering what it is you're doing.
Also ... America... I don't think they have any functioning concept of the idea that passing left or right makes a difference. It
Re: (Score:2)
Also ... America... I don't think they have any functioning concept of the idea that passing left or right makes a difference. It's not like Europe where that is seen as an offence worse than speeding.
In California we have no law prohibiting passing on the right, but we do have a law prohibiting clogging the passing lane. Unfortunately, it is a very permissive law, and it is also almost never enforced. You have to be holding up five or more people on a highway or freeway before you are in violation. Literally the only time I've seen it enforced was against a backhoe on the CA29. They are permitted to drive there if they have signals, but they still have to permit passing like anyone else.
Unfortunately in
Re: (Score:2)
I'm from the Netherlands and based on certain metrics this apparently is cyclist Valhalla. If you don't have to share the road with people that got their drivers license for free with a carton of milk, have traffic laws that are reasonable and an infrastructure that's built for all kinds of traffic instead of just for cars, passing on the right generally is the safer option for cyclists.
Depends on the training data. (Score:3)
Not a lot of details on how it's trained or the depth of the AI, so it could simply be mimicking drivers. This could be dangerous because, with only information on how to react to the bicyclist, it could move across the dividing line (I have seen cars do this) to give the bicyclist space. Usually this isn't problematic, but only because there usually isn't another car coming from the opposing direction.
Another possibility is that it is doing more than just mimicking and may "think" that it should keep a distance from bicyclists at all costs... including running off the road.
It's good to know they are developing stuff but let's not get ahead of ourselves on what this progress means without getting important facts.
Re: (Score:2)
The problem with the low grade AI we have now is that it can be trained to do things, but it doesn't understand context and we don't understand how it "thinks", so it can be unpredictable.
I'm reminded of that time the army tried to train an AI to spot tanks. They ended up training it to spot sunny days because all the photos with tanks were taken in more favourable weather conditions. The risk is we end up with AI drivers like that, they seem to work but we don't realize they are actually basing their decis
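The tank anecdote above is easy to reproduce as a toy: a one-feature "classifier" that thresholds on image brightness scores perfectly when brightness happens to be confounded with the label in the training data, and only at chance when it isn't. Everything here (the feature, the threshold, the numbers) is made up purely to illustrate the failure mode.

```python
# Toy illustration of the "tank detector" failure mode: when a nuisance
# feature (brightness) is perfectly correlated with the label in the
# training data, a learner can latch onto it instead of the real signal.
import random

random.seed(42)

def make_dataset(n, confounded):
    data = []
    for _ in range(n):
        has_tank = random.random() < 0.5
        if confounded:
            # Tanks were only photographed on sunny days.
            brightness = 0.8 if has_tank else 0.2
        else:
            brightness = random.random()  # brightness carries no signal
        data.append((brightness, has_tank))
    return data

def accuracy_of_brightness_rule(data):
    # A trivial classifier: predict "tank" whenever brightness > 0.5.
    correct = sum((b > 0.5) == y for b, y in data)
    return correct / len(data)

print(accuracy_of_brightness_rule(make_dataset(1000, confounded=True)))   # perfect on biased data
print(accuracy_of_brightness_rule(make_dataset(1000, confounded=False)))  # chance on unbiased data
```

The "perfect" score on the confounded set is exactly the trap: the model looks like it works until it meets data where the shortcut no longer holds.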
I'm with the cyclist: (Score:5, Insightful)
Maintain your course and speed, don't swerve to accommodate lane splitters, don't swerve for cyclists.
Re: (Score:3)
Fully agree with this. (Not a cyclist, too many crazy drivers...).
Humans are mostly good at predicting steady state progression. If the next logical step is linear for everything then you can predict within that range. For a car moving straight ahead, it has only 4 options (left/right/slower/faster). The worst thing is to slowdown or move to the right (closing distance). If I see a cyclist from a distance, I might give a bit more room if they have time to integrate my new position into their spatial m
Re: (Score:2)
Summary says nothing about swerving. There's a difference between swerving and noticing a cyclist is right behind the curbside back end of your car and then checking centre-side and moving over so that you're up against the lane dividing lines.
If I'm passing a vehicle on the inside, I'll only do so when it's safe to do which means traffic is crawling along and the vehicle either isn't a large truck or it's a truck but it hasn't got time to get anywhere before I can safely get past it.
Re: (Score:2)
There's a difference between swerving and making space for someone lane splitting when you're obviously in insanely slow moving traffic.
Re: (Score:2)
Re: (Score:3)
You don't have an idea? I do. Cars making way for lane splitting at times when it's legal to lane split are very obvious. When it's legal traffic is moving very slowly, usually in a traffic jam. Also splitting implies 2 lanes, so straight away the idea of someone doing a u-turn is out of the window. Mind you if you're relying on people not to do actions without using the little yellow blinking lights then you have completely lost and you could die at any moment lane splitting or not.
As for moving over, what
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I agree - BUT... if you're far enough from the kerb for the cyclist to get past easily, then don't give them any more room. However, if you're too close to the kerb, and there's enough road width that you can move out and still leave plenty of space in the middle, then do move out. But, if you can't move out, then stay put, even though that obstructs the cyclist. The car behind you needs to make similar decisions, but if it sees you haven't moved out, and it can't either, but there's enough space for the cy
Ahead of Toyota (Score:3)
Glad to see Cruise is progressing, unlike Toyota. Toyota unfortunately hired the wrong self-driving team. I read multiple interviews where they claim that we can't have self-driving cars because a car supposedly can't predict if someone is suicidal and going to jump into the middle of the street (like humans can do any better). They gave some weird esoteric examples that I am sure most humans would fail too. The car just needs to predict trajectories and make decisions based on that. It doesn't need to predict when someone is going to break the law and do a suicidal move. If that's the case it can never drive near another car on the highway because there is a chance the driver may go crazy. Why don't they quit their jobs if they can't figure out how to do it? They are happy to keep drawing a paycheck.
Reference: https://spectrum.ieee.org/tran... [ieee.org]
Re: (Score:3)
Funny thing is the self-driving car could easily react quicker to the suicidal person and could result in less harm being done.
This article gives me a bit more confidence in the future of self driving systems. I've heard of both BMW and Tesla behaving in a manner that is borderline criminal when it comes to passing cyclists, but these days you can't tell if the self driving system was bad or if the driver was simply lying and saying the self driving system was responsible for actions that were their own. Fo
Re: (Score:2)
Re: (Score:2)
No visible lane markings is one that will no doubt throw a spanner in the works for self-driving systems, humans will be better at reading clues as to where the road is. As for sliding about on roads, I expect they'll be as bad as we are, I've been in a car that hit black ice, we got lucky because there were no parked cars in our sliding trajectory when that bit of road usually did have parked cars.
Liability is a no-brainer, manufacturers must insure for when the self-driving system is active, there is no o
Re: (Score:2)
a car supposedly can't predict if someone is suicidal and going to jump in the middle of the street (like humans can do any better). They gave some weird esoteric examples that I am sure most humans would fail too. The car just needs to predict trajectories and make decisions based on that.
I've driven in a city at least once in my life.
We do this all the time; there are simple cues like eye contact between drivers and pedestrians, or if we're not looking at each other, what the other person is looking at. We are extremely adept at reading other people's intentions. It is probably the thing we're best at, and the last thing machines will be good at. Are they impatiently stepping off the curb to walk behind our car as we pass, or are they totally oblivious to surroundings because conversation w
How about an AI to assist cyclists? (Score:1)
You know, to apply the brakes when they're about to blow through a stop sign, or when they're passing a bunch of cars on the right when the cars are stopped or while cars and bike are approaching an intersection?
Everyone seems to be focussing on cars - it seems the market for other vehicles is wide open.
Explicit vs tacit knowledge (Score:5, Interesting)
There's something called "Polanyi's paradox" [wikipedia.org]:
Summarised in the slogan "We can know more than we can tell", Polanyi's paradox is mainly to explain the cognitive phenomenon that there exist many tasks which we, human beings, understand intuitively how to perform but cannot verbalize the rules or procedures behind it.[2]
This "self-ignorance" is common to many human activities, from driving a car in traffic to face recognition.[3] As Polanyi argues, humans are relying on their tacit knowledge, which is difficult to adequately express by verbal means, when engaging these tasks.[2] Polanyi's paradox has been widely considered a major obstacle in the fields of AI and automation, since the absence of consciously accessible knowledge creates tremendous difficulty in programming.
In machine learning, you can either tell the machine what it needs to know, or you can let it learn, which some would say is reinventing the wheel - wasted processor cycles.
Driving seemed initially so simple - keep the car on the road, between the lines, avoid hitting anything. But as we've discovered, there is a huge amount of variation in that seemingly simple premise. So maybe tacit learning is the way to go. There's a bit of a discussion about it in the machine learning community, as noted in a recent issue of the CACM [acm.org].
Re: (Score:2)
yet we expect AI models to be explainable when even we can't introspect our own processes
Re: (Score:2)
yet we expect AI models to be explainable when even we can't introspect our own processes ...
Our own processes are a shared experience - I may not understand how you and I recognize faces, but I can understand why you didn't recognize my face if you show me the picture in question and I see that it's just badly lit. I may not understand WHY it makes us fail, exactly, or where the threshold is or what the features are that decide it - but I can follow along because we share the same process (with small variations).
But the AI and I don't have the shared process. I do not understand why it classified
Re: (Score:2)
Re: (Score:2)
But when it comes to safety engineering, we require the processes to be explicit and verifiable - except for machine-learning self-driving (and 737 MAX attitude controls)
Low bar (Score:2)
Humans don't really drive that well, so if "AI" can do better, that's great, but not particularly surprising!
Re: (Score:2)
What are you talking about? Until the AI can drive after 3 hours' sleep then turn around to yell at kids in the back while swerving over 3 lanes, it's clearly not as good as a human. Also if the AI can't get shitfaced and wind up with a court-ordered ignition interlock, it's just substandard (this is a mandatory feature in NM).
Also we need to teach the AI to buy trucknuts, tail gate and roll coal.
What the hell? (Score:1)
Re: (Score:2)
So being safer near cyclists is reckless behaviour. How did you manage to work that one out? I await your baseless assumptions.
Six. It gets six tries. (Score:4, Interesting)
An intelligence (artificial or otherwise) mostly learns from its mistakes. In a specific novel situation, such as a fast bicycle approaching from behind on the right, a smart car in training is likely to make a mistake. If it should become annoyed and swerve toward the bicyclist, thereby killing her (possibly a red-headed 4th grader late for school), then the car has begun to learn something about this particular situation. There are other slight variations of the situation that could also cause a mistake. Generally speaking, it is reasonable for the car to kill up to 6 cyclists in the process of fully understanding how to deal with very similar situations.
You may think that 6 is excessive, but remember that these cars will be able to share what they've learned with the entire fleet. We can't be sure of the total number of risky situations that will present to these cars-in-training, perhaps 430. So it's fairly safe to predict that only 430 × 6, or 2,580 people will be lost in the effort to create truly smart cars. Over time, many more lives may actually be saved!
Re: (Score:3)
An intelligence (artificial or otherwise) mostly learns from its mistakes
That is the ideal situation but these AI cars aren't learning that way right now.
Generally speaking, it is reasonable for the car to kill up to 6 cyclists in the process of fully understanding how to deal with very similar situations.
For deep learning, it will take a lot more than six to get enough data for them to deal reasonably with it.
and sixty years hardtime for someone (Score:2)
and sixty years hardtime for someone
Re: (Score:3)
Okay
> If it should become annoyed and swerve toward the bicyclist, thereby killing her (possibly a red headed 4th grader late for school), then the car has begun to learn something about this particular situation.
No, that's not all of it. What you're talking about is learning by reward/penalty. But there is also learning by predicting the future, such as if the cyclist was behaving in an expected way or not. There is
Re: (Score:3)
Humans don't need to drive off a cliff to learn to avoid this behavior in the future, either.
Some do.
In fact, lots of humans aren't actually very good at learning from mistakes. The reason machines will soon overtake us is pride - a lot of people would rather go to great lengths to justify what they did than to just admit "I fucked up".
Equivalent Tesla headline (Score:2)
Tesla's self-driving fiasco is spooking cyclists by unpredictably making jerking movements into the left lane, possibly into oncoming traffic, if the car detected a cyclist coming from the right. While head-on crashes and fiery explosions have not yet happened because of this new misguided feature introduced by Elon Musk (possibly in response to a Twitter joke), they could, in theory, happen any time.
This is coming at a bad time for Tesla, at only ${TIME} after the news of ${OTHER_TESLA_RELATED_THING}.
Concer
Cyclists, listen and learn! (Score:2, Troll)
Work harder, get a car.
https://www.youtube.com/watch?... [youtube.com]
the example is a problem, not solution (Score:2)
This is actually a great example of a problem with "accommodative" AI programming, a sort of second-order challenge we haven't even begun to address.
That is, "We CAN, but should we?"
I see a bicyclist charging up on my right, I will NOT "pull slightly left" to make it easier for them to continue to follow a course that is dangerous, illegal in places, and frankly stupid.
Considering the widespread opportunism and frankly maliciousness of these hairless chimps, we will see people deliberately "messing with" ai
Re: (Score:3)
Re: (Score:1)
If the cyclist is COMING UP BEHIND ME, fuck him.
Cyclists want to have the rights of CARS in TRAFFIC LANES.
They can wait in the TRAFFIC LANE behind me...you know, like a car or truck. And then when it's clear, and he has the speed to overtake, he can pass me on the left.
The #1 worst behavior is cyclists believing they are entitled to choose that they have the legal entitlements of "vehicles" or "pedestrians" changing as they feel, moment to moment.
Fuck 'em. It's not my job to make the road safer for them i
This! (Score:2)
I think as we see self-driving vehicles become more commonplace/the norm, people will learn to adapt to their behaviors. So if you feel like trying to get ahead of traffic, you might barrel down the middle of the road, swerving towards these vehicles so they try to move out of your way and clear a path for you to get through.
It needs to be established that all the self-driving vehicles are following rigid sets of rules for driving that a person can't "trick them" into deviating from to cause traffic jams an
Introducing the Toyota Nagilum (Score:2)
Predictability == Safety (Score:1)
(as a cyclist) I'd personally prefer that cars be predictable rather than sometimes doing weirdly nice things that I might not be prepared for.
THIS! I frequently have cars stop right in the middle of a 4-lane with traffic b/c I'm waiting on my bike to cross the street. This is so insanely dangerous. And they're looking at me, with 3 other lanes of cars buzzing by, like "well?" Please, @carppl don't move into oncoming traffic because you think I may want to pass you on the right. This is not a safe maneuver.
As a regular human (Score:2)
Humans generally do not have the edge (Score:3)
"Humans tend to have the edge when something unusual happens, because we're able to instantly apply a lifetime's worth of common-sense knowledge about the world to our decision-making process"
I posit that this is a statement not only not supported by data, but completely made up by the reporter.
In fact, in real life it seems that humans have basically no edge unless they're trained to handle unusual situations.
Moving left (Score:1)
Driverless Cars Safety Record Unproven (Score:2)