Vans Drive Themselves Across the World
bossanovalithium writes "Four driverless electric vans successfully ended a 13,000-kilometer test drive from Italy to China which mirrored the journey carried out by Marco Polo in the Middle Ages. The four vans, packed with navigation gear and other computer software, drove themselves across eastern Europe, Russia, Kazakhstan and the Gobi Desert without getting lost. They had been equipped with four solar-powered laser scanners and seven video cameras that work together to detect and avoid obstacles."
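The summary says the laser scanners and cameras "work together" to detect and avoid obstacles. As a toy sketch only (not the actual software on these vans; sensor names and the 5-metre threshold are assumptions for illustration), the fusion logic might look something like this:

```python
# Toy sketch of multi-sensor obstacle detection: treat the path as blocked
# if any laser scanner sees something too close, or any camera flags an
# object in the lane. All names and thresholds here are illustrative.

SAFE_DISTANCE_M = 5.0  # assumed minimum clearance before the van reacts

def path_blocked(scanner_ranges_m, camera_detections):
    """scanner_ranges_m: distances (metres) reported by the laser scanners.
    camera_detections: dicts with an 'in_lane' flag from the video cameras."""
    if any(r < SAFE_DISTANCE_M for r in scanner_ranges_m):
        return True
    return any(d.get("in_lane", False) for d in camera_detections)

# One scanner sees something 3.2 m ahead -> blocked.
print(path_blocked([12.0, 3.2, 40.0], []))               # True
# Everything far away, camera object not in our lane -> clear.
print(path_blocked([12.0, 18.0], [{"in_lane": False}]))  # False
```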
Re:Not more "safety features" please (Score:4, Interesting)
Who do you know that drives more like an idiot because their car has safety features? I drove like an idiot even when my car didn't have ABS, and these days, even though every car I drive has ABS, I drive like less of an idiot.
Traction control is no use for driving like an idiot. I switch it off when I want to have some fun.
Re:Autonomous vehicles (Score:5, Interesting)
Stop wasting your time and build a personalised rail network instead, where I can get into a "pod" or something, enter my destination, and have it take me there on good, solid metal rails with a bit of signalling.
Indeed. A packet-switched transport system. Broadcast your destination via Bluetooth, "routers" can receive that and direct you the best way. The pods would be unpowered, but pushed/blown along - possibly compressed air?
If you had a system of tubes under the ground, and some sort of decent bearings, you could make it work. You could also have large "trunk"/"backbone" roads, which smaller roads joined. Basically, model it on the Internet. But without the packet loss, or routing loops. Or collisions.
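The "model it on the Internet" idea above is basically shortest-path routing over a road graph: junctions act as routers, road segments as weighted links. A minimal sketch (the graph, node names, and travel times are all made up for illustration) using Dijkstra's algorithm:

```python
# Sketch of Internet-style pod routing: each "router" (junction) directs a
# pod along the shortest path to its broadcast destination. The road graph
# and its travel times below are purely illustrative.
import heapq

def shortest_route(graph, start, dest):
    """Dijkstra's algorithm. graph maps node -> {neighbour: travel_time}."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # Walk back from the destination to reconstruct the route.
    path, node = [dest], dest
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# "Trunk" roads are the fast backbone links; "local" roads are slower.
roads = {
    "home":    {"trunk_a": 2, "local_b": 5},
    "trunk_a": {"home": 2, "trunk_c": 3},
    "local_b": {"home": 5, "trunk_c": 4},
    "trunk_c": {"trunk_a": 3, "local_b": 4, "office": 1},
    "office":  {"trunk_c": 1},
}
print(shortest_route(roads, "home", "office"))
# ['home', 'trunk_a', 'trunk_c', 'office']
```

Unlike the Internet, of course, a pod can't be dropped and retransmitted, so the "no packet loss, no collisions" caveat is the hard part, not the routing.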
Re:Autonomous vehicles (Score:3, Interesting)
So the second one person has an accident in an autonomous vehicle, you're looking at major liability and lawsuits directed towards the car manufacturer - whether or not it was their fault and whether or not a human driver could have prevented the accident in *any* car. That manufacturer now has to take responsibility for that car versus every idiot on the road, every pedestrian that runs out and everything that can confuse one of its sensors.
I've thought about this problem for a while, and here is my guess as to how it will proceed. When cars started being made with cruise control, the responsibility in an accident still belonged to the driver. There are cars being built today that automatically apply the brakes when they sense an oncoming collision, but in the event of a malfunction or accident, the human driver is still ultimately held responsible.
I don't believe anyone is going to drop a fully autonomous car into the market; instead it will simply be more and more iterations of the computer taking control. The human driver will always have a manual override, though, and will be responsible for accidents, simply because that is the status quo. My guess is that by the time we do get autonomous cars, people probably won't be paying attention to the road, since their cars will be driving themselves fine anyway, but they will have signed a disclaimer accepting responsibility. I do think there will be uproars when accidents occur, as we have seen with the Toyota problem, but the law regarding responsibility won't change until long after we have become comfortable with autonomous vehicles.
Re:Not more "safety features" please (Score:3, Interesting)
Also, your assertion that the AI problem would not require a groundbreaking solution is founded on what knowledge? I think you vastly underestimate the problem. Example scenario: a vehicle is travelling in winter around a tight, blind turn on a rural mountain road. Suddenly, another vehicle appears heading toward the first in the middle of the road. Does the AI in the first vehicle know it's winter and that black ice may interfere with braking? Does the AI know that turning out of the other vehicle's path toward the mountainside may flip the vehicle? Does the AI know that turning away from the mountain to avoid the other vehicle could send it plummeting to its doom?
Let's back this off a bit: instead of a mountain, make it a hilly region. Same scenario: turning toward the hill carries the same risk of flipping, but turning away would probably be rough but survivable. The AI turns away, but the hill is too steep and icy to brake effectively. Does it know how to steer under such conditions? Does it know where to steer? Say there's a body of water down there; does it recognize that as a hazard to avoid? What if the water is frozen? Does that appear as a solid surface to the AI? What about at night? On and on and on.
Human intuition and integration are so powerful that we don't think about most of these things consciously. We have the capacity to act with many key factors understood naturally and relationally. AI will get there, that's inevitable, but it will be decades more before it happens, and when it does it will be "groundbreaking".
Re:More Importantly (Score:3, Interesting)
I imagine one of these would be less effective at explosives delivery than a remote controlled vehicle would be.
Autonomous vs. remotely operated is different how?