Toyota Is Uneasy About the Handoff Between Automated Systems and Drivers (caranddriver.com) 135
schwit1 shares a report from Car and Driver: Toyota has not yet decided whether it will bring to market a car that is capable of automated driving in some situations yet still requires a human driver behind the wheel who can take control if needed -- but the automaker, characteristically, is more cautious than many about moving forward with the technology. Citing safety concerns regarding the handoff between self-driving technology and the human driver, Kiyotaka Ise, Toyota's chief safety technology officer, said the biggest issue with these kinds of systems is that "there is a limbo for several seconds between machine and human" in incidents when a car prompts a human to retake control if it cannot handle operations. These kinds of systems, defined as Level 3 autonomy by SAE, have divided automakers and tech companies in their approaches to developing cars for the self-driving future. As opposed to Level 2 systems, like Tesla Motors' Autopilot, in which a human driver is expected to keep his or her eyes and attention on the road while the system conducts most aspects of the driving, Level 3 is characterized by the system's claiming responsibility for the driving task while it is enabled. Although Toyota assures us that its researchers are hard at work figuring out the challenges of Level 3 autonomy, it seems the company could eventually join others in moving directly from its current Level 2 system to a Level 4 system. Given that the self-driving race has been on for a while, this could put Toyota at a competitive disadvantage, but it's clear the engineers at the company care more about getting things right than they do about being first.
Toyota is... (Score:5, Insightful)
The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."
So, basically what you're saying is, Toyota is the anti-Tesla.
Re:Toyota is... (Score:4, Insightful)
I'm not convinced that Tesla will get to Level 5 with their current hardware. They are already selling Level 5 to customers as a future firmware update (3,000 extra last time I looked), saying they will upgrade hardware if necessary for people who already paid.
Their system is only cameras and ultrasonic sensors, no lidar. They are using neural nets for image processing.
Re:Toyota is... (Score:5, Funny)
Tesla is the industry leader in making promises.
Re: (Score:2)
No lidar, but they have a forward facing radar.
8 cameras, 1 radar, and 8 ultrasound sensors. Way more than we have.
Re: (Score:3)
The whole sensor package seems poorly thought out. For example, the lack of a nose camera means that they can't implement a 360 degree overhead view like Nissan and several other manufacturers have now.
And then there is the whole "AP2.5" debacle, where they did a major computing power upgrade over the original AP2.0 hardware after realizing it wasn't up to the task, and then promised free upgrades to people who had already paid for full self driving a year ago.
The thing about cameras is that they are not th
Re: (Score:3)
12 ultrasonic sensors on Model 3.
That said, what we have that a car doesn't is a brain that automatically fixes photogrammetric stitching errors based on the logic of "that doesn't make sense" - whether something "makes sense" or not being an AI-hard problem. So one tries to compensate by giving vehicles better sensors than human beings have.
Lidar provides a superb data stream when conditions are right, but it's too problematic. You're not going to put awkward, draggy, ugly domes on top of everyone's
Re: (Score:2)
"I'm not convinced that Tesla will get to level 5 with their current hardware."
I think your analysis is perfectly valid. One thing though. Human drivers basically have two (at most) pretty good optical sensors with no useful range capability (human stereo vision only works out to about 6-7 meters). The sensors can scan right to left through about 180 degrees and up/down through 30-45 degrees. They are augmented by at most three very limited mirrors and maybe on modern cars by a flaky, small screen rear
Re: (Score:1)
To be fair, Tesla constantly presents their mediocre driving aid as if it were a level 5 autonomy system. I can imagine that someone less interested in automotive technology may think that it is.
Re: (Score:1)
To be actually fair, Tesla continually does precisely the opposite, and there is essentially zero confusion among actual Tesla owners about what it's actually capable of. Heck, if you take your hands off the wheel for too long too many times in a row, the vehicle will revoke your autopilot privileges until the next time you charge up.
EAP is in a state at this point where its limitations are obvious enough that it's hard to become too complacent. The problem arises when autopilot-style systems get good enou
Re: (Score:2)
The problem arises when humans get good enough that you don't see its flaws very often, and so drivers become complacent - and then when something actually does go wrong, they're not ready to deal with it.
FTFY.
There were 1.25 million road traffic deaths globally in 2013; that's 400 9/11s.
Re: (Score:2)
Ahem [insideevs.com].
I can only guess that those other 2% have never actually used the thing or known anyone who has.
Re: (Score:2)
So, auto pilot won't actually automatically pilot your car for you? Then it probably shouldn't be called auto pilot. But Musk just wants to be cool and be able to say that he has the first auto pilot* car.
*Warning. An 'auto pilot' car won't actually be able to reliably automatically pilot your car. Please leave your hands on the wheel and look forward at all times... just like when you drive the car.
Re:Toyota is... (Score:5, Insightful)
The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."
So, basically what you're saying is, Toyota is the anti-Tesla.
Basically you're saying Toyota is being Toyota (conservative to the extreme, but good at what they do).
Toyota is not the only one concerned with this. As a road user, I'm concerned about what will happen when Dopey Doris' automated car struggles with faded lines on a single-lane road (quite common on my 18-mile commute to work). Right now, Dopey Doris can only spend half her attention on her phone. I hate to think what will happen when she puts her full attention into it and, because she's so engrossed in FaceCrush or watching the latest episode of Keeping Up with the Kardashians, completely tunes out the alarm throwing control back to the driver, and the car veers into my lane uncontrolled.
Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption. Right now, they're nowhere near it. Google's success has been due to two factors: 1. it was all done in sunny California (I'd like to see the same car in Berkshire); and 2. the car has been in the hands of a professional driver the whole time. The current track record for autonomous cars alone is nil; the record is for car and driver working together. Of course, we know that in the real world, if you gave the Google autonomous car to Dopey Doris commuting from Finchampstead every day, she's going to assume it will do everything for her. So we need to make sure it can operate without human intervention, because human intervention can't be counted upon from the average steering-wheel attendant with a phone shoved up their nose (we get enough collisions from these types as it is without giving them false reassurance).
Re: (Score:3)
Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.
Since that's essentially an impossible standard, what you're saying is that we should never use autonomous land vehicles, even though human drivers fall massively short of that same safety threshold.
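To put a rough number on how far past human performance that standard sits, here is a back-of-the-envelope sketch. The ~1.2 fatalities per 100 million vehicle miles is a commonly cited US figure; treating a fatality as a "failure" is a crude simplification, so the result is illustrative only:

```python
# Back-of-the-envelope comparison of the demanded reliability
# with human drivers. Figures are illustrative.
human_failure_per_mile = 1.2 / 100_000_000   # ~1.2e-8, US fatality rate

# "99.999999999999999%" reliable = seventeen nines, i.e. a failure
# rate of 1e-17 per mile (written directly to avoid float rounding).
demanded_failure_per_mile = 1e-17

ratio = human_failure_per_mile / demanded_failure_per_mile
print(f"human failure rate:    {human_failure_per_mile:.1e} per mile")
print(f"demanded failure rate: {demanded_failure_per_mile:.1e} per mile")
print(f"the demanded standard is ~{ratio:.1e} times stricter than humans achieve")
```

By this crude metric, the demanded threshold is roughly a billion times stricter than what human drivers actually deliver, which is the parent's point about it being an impossible bar.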
Re: (Score:2)
I live in Los Angeles and don't own a car, but I have to rent one to drive ~100-250 miles per day a few times per month. Last week I was twice stuck driving at odd hours and struggling to stay awake and focused on the freeway. While I "powered through" it, having just automatic lane keeping and car following operational would have made the trip much less stressful, and likely improved safety by reducing event risk from ~5% per 100 miles to ~0.01%, on par with a non-fatigued driver.
So then you get into the que
Re: (Score:2)
I'd say at the least the onus is on them to guarantee that it will never create an accident in a situation that I, as an individual, would be able to deal with. And that should go for anyone buying an automated vehicle.
What about all of the situations in which you would have an accident, but the automated system would avoid it? Suppose that there are a thousand of those for every one where you'd succeed and the system would fail. That would fail your requirement as stated. Do you really think that makes sense?
Re: (Score:3)
Why? If it's safer than 90% of drivers, then that 90% would be better off in an autonomous vehicle. The remaining 10% would be better off driving themselves, but even they'd benefit from the rest of the traffic being autonomous - fewer idiots to deal with.
Re: (Score:2)
The manufacturers all seem on board with that scenario - assuming the car is actually in autonomous mode and is at fault for the accident. Pretty straightforward product failure scenario in that case.
Re:Toyota is... (Score:5, Insightful)
This is also a big concern of mine. Cars should either be 100% autonomous, or 0% autonomous. I'm all for adaptive cruise control, but as soon as you introduce technology that allows people to take their attention away from the driving and have it still follow the road for a significant period of time, that's where you run into problems.
If you haven't had to actually touch the steering wheel for a month, how much attention would you really be paying? What happens when the car screws up and you need to take over? Are you going to be too engrossed in your other activities to take over? Also, what is the point of paying for all this technology if you don't actually get to stop paying attention? If you're going to have to keep your hands on the wheel, you might as well actually be driving, because otherwise it isn't really worth the expense.
Re: (Score:2)
I mostly agree. I could see there being room for a car that safely pulls over, or otherwise gives the driver plenty of time to "shift gears" and develop situational awareness before they have to take over in unusual circumstances - but it should *never* require the driver to take over on short notice, which demands that they be constantly maintaining situational awareness against the very low chance that they need to take over. Human attention just doesn't work that way.
Highway driving is probably a good
Re: (Score:2)
Level 3 is okay. That's where the car can drive itself under certain limited conditions (e.g. on a highway, but not on urban roads) and gives the driver plenty of warning when they need to take over. By "plenty" I mean 60+ seconds, and if you don't take over nothing terrible happens.
Re: (Score:2)
But nothing is 100%. Not even 99.99...%. :(
Re: (Score:2)
By 100%, I mean no steering wheel or the system is good enough that you can sit in the back seat if you so desire. This concept of "good enough but you might have to unexpectedly take over once in a while" isn't really that great of an idea because people simply won't be paying attention if they aren't required to pay attention all the time.
Re: (Score:1)
I actually think Toyota is wrong here.
I suspect that advanced Level 2 (what we currently have) is the most dangerous. An aggressive alarm with a seconds-long handover seems far safer than a completely inattentive driver touching the wheel.
I think the danger zone starts as soon as you combine adaptive cruise control and lane assist, and that the only safe option is to leave the wheel completely in control of the human until level 3 (which I think is safer than current level 2).
Re: (Score:2)
IIHS has indicated that lane departure WARNINGS increase safety. That is far different from 'the car is driving until something goes wrong'.
Re: (Score:1)
Maybe I used the wrong word. Lane departure warnings are only a plus if they don't produce false positives an annoying amount of the time and, since they don't turn the wheel, you can't come to rely on them to make micro-adjustments.
Re: (Score:2)
Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.
No, they really do not need to be that reliable. Not even close to that reliable.
People are not that reliable. People like the Dopey Doris you describe can't keep themselves alive because the phone is so much more interesting than driving is. I absolutely want Doris to have a self-driving car, because even current technology is likely better at driving than she is.
Even a shitty self-driving car will likely be better than the bottom 25% of human drivers. Unless that makes the middle 50% far worse, it will be
Re: (Score:2)
Ok as long as people don't have to pay for it with personal injuries or financial loss.
They will have to pay. But it will be less than they're paying now.
Re: (Score:2)
Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.
Noting that nothing else is that reliable. Not people, not Verizon, not even condoms - but use the last one with the first two anyway.
Re: (Score:2)
The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."
So, basically what you're saying is, Toyota is the anti-Tesla.
No, Toyota is being responsible instead of going for a quick short-term money grab due to hype.
Re: (Score:2)
Which is exactly what GP said.
Can you read? The GP said:
So, basically what you're saying is, Toyota is the anti-Tesla.
What I said is not logically equivalent to that in any universe.
Re:Toyota is... (Score:5, Interesting)
The Duke of Wellington claimed he won the Battle of Waterloo against Napoleon on "the playing fields of Eton".
The big battle for autonomous driving will be won or lost in the tort courts of the US. Who is responsible for the accident? The driver? Or the manufacturer?
Your local ambulance chaser lawyer would prefer to sue the manufacturer . . . simply because the manufacturer has more money!
The first big cases will unsettle the industry, but a sort of fudge agreement will be reached between lawyer groups, the manufacturers and the insurance companies. Unfortunately, the average driver will end up paying for this.
The lawyers don't want to kill the autonomous car industry . . . they want to "milk" it for their "piece of the action".
Re: (Score:2)
You raise an important point, but as soon as autonomous land vehicles exceed the miles driven per accident that humans are capable of, insurance companies will line up in favor of paying for fewer wrecks.
Not to put too fine a point on it, but the average driver already pays for this under the mandatory auto insurance laws.
Re: (Score:2)
Last I heard the auto manufacturers were pretty unanimous that they would be responsible for any accidents that occur while the car is in self-driving mode, so there's not really a whole lot of conflict to be resolved. It's a simple product-safety liability situation - if you're using a product in full accordance with the manufacturer's instructions, and it injures or kills you anyway, then it's a pretty open-and-shut case of liability for a faulty product.
The only "loophole" I've seen so far is for "semi
Re: (Score:2)
If it attempts to hand off control in a situation where it cannot safely do so, it's still a product safety issue.
Re: (Score:2)
The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."
So, basically what you're saying is, Toyota is the anti-Tesla.
Perhaps what's being said is that Toyota has had issues in the past with software and is trying to be careful about that.
See: Toyota's killer firmware: Bad design and its consequences [edn.com]
Barr's ultimate conclusions were that:
Re: (Score:2)
I remember way back when that there was a site that plugged itself as a "Toyota Simulator" (seems to be down now). When you went to the site, it was nothing but a looping video of a car driving way too fast down a road and a person screaming. ;)
Re: (Score:2)
I remember way back when that there was a site that plugged itself as a "Toyota Simulator" (seems to be down now). When you went to the site, it was nothing but a looping video of a car driving way too fast down a road and a person screaming. ;)
Compared to the Peugeot simulator, which is a car travelling down a stretch of road way too slow with the driver complaining about their prostate, hooligans, being half blind and the Yoofs of today.
I really don't understand the interest here (Score:3)
Slip ups on the road can become fatal in seconds because of the speeds and forces involved. You know people are going to rely on these systems precisely when they should be off the road period or be paying attention. And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?
This seems like a solution for people who hate the idea of mass transit and transporting goods by trains. Self-driving cars and trucks and hyperloops! FFS, just hire Disney's engineers and build a fucking monorail in most cities and connect them to the suburbs. That would be more than sufficient to raise the quality of life on transit.
Re: (Score:2)
This seems like a solution to people who hate the idea of mass transit and transporting goods by trains. Self-driving cars and trucks and hyperloops!
I really hope this is not the primary driver behind this technology. If it is, the idea is to eliminate the cost of CDL drivers and make the most dangerous vehicles on the road automatically driven. Why couldn't they just have a series of underground tunnels specifically for transporting commercial goods, where all the automatic transport vehicles could be driven? That way, if an "error" occurs, it doesn't injure anyone. It's probably because it costs too much to do that, I'm guessing... (facepal
underground tunnels cost way more than truckers (Score:2)
underground tunnels cost way more than truckers
Re: (Score:2)
underground tunnels cost way more than truckers
I think you would have to qualify this a little better. Sure, the up front cost is substantial. However, being that a logistics company would not have to pay for drivers at CDL hourly rates anymore, eventually it would pay for itself and then pay dividends.
Re: (Score:2)
And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?
Actually, yes. I trust a computer system that can handle thousands of computations every second much more than I trust a startled, panicky human in that scenario. I don't know if you've ever hit a deer with your car, but they don't exactly give you much warning, regardless of how much you're paying attention to the road.
Source: Lives in Pennsylvania
Re: (Score:2)
The scenario is obstacle on the road, large enough to cause serious damage. It won't even bother to classify it as a deer, bear, wolf, whatever.
Re: (Score:2)
That's actually part of the problem with LIDAR - it's superb at seeing obstacles (in good weather conditions, at least), but it provides you with no information about what you're seeing. Everything is an obstacle.
Radar on the other hand wouldn't even see the trash bag. But a bit of aluminum foil will give a huge blazing return.
Camera vision systems might recognize a trash bag as a trash bag, but only if they were trained to and are proven good at their task.
You really need a combination of sensors, and so
Re: (Score:2)
Probably - as should you. Hitting that bag has a fair chance of getting it snagged on your car and obstructing your view. Probably don't even need to stop - just slow down a bit so it makes it across the road before you reach it, and so that you will have time to stop if you mis-identified it at first glance.
Re: (Score:2)
The problem is you're a software engineer - we tend to think like engineers, i.e. software is a bunch of deterministic sequences created by an engineer. Modern AI doesn't really work that way, which is why AI is a whole separate research branch. It probably has more in common with an actual neural circuit than with traditional software, and it will generally *not* handle a situation identically every time - because situations are never identical to begin with.
Now, perhaps they only use AI to categorize the env
Re: (Score:2)
Am I going to trust that a computerized system is capable of detecting an object in front of the car and applying the brakes faster than a human being?
Hell yes I am. Human reaction time is around a second (often considerably more if the driver's distracted or tired). By the time your brain has gone "oh shit, a deer" and decided to slam on the brakes, the computer is alread
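To put numbers on that, here is a minimal sketch of the distance covered before braking even begins. The 1 s human reaction time and the 0.1 s machine latency are assumptions for illustration, not measured figures:

```python
# Distance covered during reaction time, before braking starts.
speed_mph = 70
speed_mps = speed_mph * 0.44704          # mph -> m/s (~31.3 m/s)

human_reaction_s = 1.0                   # typical alert driver (assumed)
computer_reaction_s = 0.1                # assumed sensing + compute latency

human_dist_m = speed_mps * human_reaction_s
computer_dist_m = speed_mps * computer_reaction_s
print(f"at {speed_mph} mph: human covers {human_dist_m:.0f} m, "
      f"computer {computer_dist_m:.0f} m before braking starts")
```

Under those assumptions the human travels roughly 31 m blind versus about 3 m for the machine, which is the whole argument for automated emergency braking.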
Re: (Score:2)
Replying to myself because I accidentally copied the wrong link which is to a clip of the said talk and not the whole of it. Here's the original TED talk [youtube.com]. The part about the lady & ducks is slightly after 11:10.
I'm sick of hearing this BS (Score:3)
"Humans are on average really bad drivers"
No, we're not. We're actually bloody good at it considering we never evolved to drive something weighing 1.5 tons at anything up to 10 times our maximum running speed alongside other vehicles doing similar speeds.
People cite accident statistics as if they're significant. When you consider the TRILLIONS of miles driven every year by the worlds drivers and the number of potential accidents that DIDN'T happen because drivers reacted properly, the actual number of accid
Re: (Score:2)
And the pilots have trained for that exact scenario, and have drilled on it, and have studied the common causes of that. Yeah, the comparisons to a car are not really useful here.
Re: (Score:2)
I think you rather missed the point - the computers in aircraft have an easier time than would computers in a completely automated car, yet they still have to hand back control to the pilots from time to time.
Re: (Score:2)
And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?
I'm going to trust the car to detect the deer a lot more than I trust myself to detect a deer. I only have two eyes and I can only focus them on one point at a time. They also only see in the visible spectrum. My car could conceivably have 360 degree, multi-spectrum "vision".
Re: (Score:2)
And what happens when a deer decides to bolt out from the woods in front of your vehicle?
Most likely, the computer system detects it more quickly than a person would, reacts faster than humanly possible, and brakes and/or steers in an optimized way to avoid both collision and loss of control. That kind of scenario is one where automated systems easily beat humans. The concerning ones are when visibility is poor, or lane markings are bad or confusing, such as in inclement weather or construction zones.
just hire Disney's engineers and building a fucking monorail in most cities and connect them to the suburbs
A train line is great if all the places you want to go are in a nice line. And we could certain
Re: (Score:2)
Depends on how you look at the efficiency of the solution: in terms of time all these Elon musings are more efficient; in terms of resource utilization, mass/rapid transit is more efficient. The caveat is you need walkable communities on either end for mass transit to work, and Elon isn't too close to a metro station.
Bravo Toyota! (Score:1)
Someone is actually taking time to think this through. Don't get me wrong. I'm ready for our self driving car overlords. I just want to make sure they are ready for the job first.
Caution is important (Score:4, Insightful)
Re: (Score:3)
The risk of a dropped dish or torn shirt is much more tolerable than a car crash at highway speeds.
But the rewards are also so much lower.
The potential rewards of autonomous driving are HUGE. I won't pay more than a couple hundred bucks to have my dryer fold laundry. But I would pay a lot to never have society face drunk drivers again. To give old people the freedom to get out and about. To have cars that can precision-park themselves, thereby taking up way less parking area (no door-opening space needed). And so on.
Re: (Score:2)
A dishwasher could be automated, but you'd be talking about having it integrated into a cabinet system and having grippers on slide tracks trying to grab non-metallic, non-magnetic plates and glasses with just the right amount of force not to break them and organize them throughout the cabinet. Any pote
Delusional (Score:2)
Re: (Score:2)
I hope your loved ones don't get harmed by this newfound laziness hiding behind flashy, imperfect technology.
This is why I work from home. :) It's also better for the environment...
Re: (Score:2)
>This is why I work from home.
As someone who has had an (abysmally bad) driver determine that their home was a valid roadway... I question the value of your choice.
Re: (Score:2)
>This is why I work from home.
As someone who has had an (abysmally bad) driver determine that their home was a valid roadway... I question the value of your choice.
I'm not sure I follow your logic there; it seems to be nonsense. First of all, if you want to objectively claim that my choice of working from home is less valuable than commuting, then by all means lay out the claim and the supporting logic and evidence for it. If your claim is subjective, meaning that you would not find my choice valuable, then we just agree to disagree, because what is good for you is not necessarily good for me and vice versa, but we should respect each other.
Re: (Score:2)
>I'm not sure I follow your logic there, it seems to be nonsense
I'm not sure I follow the motivation behind your post, but I'm guessing you have no sense of humour and a stick lodged up your backside.
Loong hand-over times (Score:5, Insightful)
Toyota is not the only one deliberately skipping L3 and going directly to L4. Volvo intends to do the same, as do some of the German vendors.
The reason is that studies show hand-overs do not take only "a few seconds," as the article says; there is a tail of up to 40 seconds before the "driver-to-be" comprehends the situation.
Since 40 seconds is an eternity in traffic, it poses essentially the same challenges as L4 systems. So why bother with L3?
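To see why 40 seconds is an eternity, consider how far a car travels in that time. The 110 km/h highway speed is an assumption for illustration:

```python
# Distance travelled during a 40-second hand-over at highway speed.
speed_kmh = 110                          # assumed highway speed
speed_mps = speed_kmh / 3.6              # km/h -> m/s (~30.6 m/s)
handover_s = 40                          # tail of hand-over times from studies

distance_m = speed_mps * handover_s
print(f"~{distance_m:.0f} m travelled before the driver fully re-engages")
```

That is over a kilometer of road covered before the worst-case driver is fully back in the loop, which is why an L3 system effectively has to handle everything an L4 system would anyway.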
Re: (Score:1)
Bingo. Even highly-trained airline pilots have proven unable to deal with a handover from the autopilot to the pilots without flying a perfectly good airliner into the sea. So expecting the average driver to do it when they don't have minutes to react is crazy.
What makes it doubly bad is that the computer will only hand over when it runs into a situation it can't handle, which means the human will only be expected to take over when the car is in a complex and dangerous situation to begin with.
Re: (Score:2)
Toyota is not the only one deliberately skipping L3 and going directly to L4. Volvo intends to do the same, as do some of the German vendors.
Google (now, Waymo) also decided years ago that L3 is a bad idea.
Re: (Score:2)
Even with the semi-autonomous systems in my car (adaptive cruise control and lane guidance), hand-off is a big problem. Either the car decides to slam on the brakes when a car starts to pull off the road (slows down and switches lanes), or it accelerates with reckless abandon because it can't sense the stopped cars 300+ feet ahead. (And it won't slow down when it does detect the cars ahead because the speed delta is now greater than 30 MPH)
Or it starts tugging at the wheel to guide my car back into the cent
It IS unsafe (Score:1)
Expecting the human to take over in a panic situation IS unsafe. The human should only be taking over while parked.
There is exactly one class of people who have the training qualifying them to take over a driving car: Driving instructors. And they are usually limited to stepping on the brake, something the autonomous car could easily do on its own in a panic situation.
However, that's not saying that we have to go straight from level zero to level five. We just have to do it in a different way.
Rather than le
Re: (Score:2)
Expecting the human to take over in a panic situation IS unsafe. The human should only be taking over while parked.
I agree completely. Doing a handoff of a moving car is just asking for trouble.
Rather than letting the car drive on the straight road, and expecting the human to take over in case the car overlooks a pedestrian, we should be letting the human drive on the straight road, and let the car take over when the human overlooks a pedestrian.
There is a 3rd solution which I prefer. Let the computer drive on straight roads in good weather on limited access highways first. i.e. the boring stuff. If the weather starts to turn bad or you are approaching a city, have the computer pull over to the side of the road so that you can switch driver. This is already common practice. Growing up on vacations, my mom would help drive on the long stretches and then pull over an
Good, rather not see Level 3 at all... (Score:2)
Whenever you have to hand control of the vehicle back to the human, there is going to be a delay. This is absolutely unavoidable and potentially very dangerous.
The driver, who was presumably inattentive during the fully-automated drive, will have to assess the surroundings and respond. This makes the existence of a SAE Level 3 car inherently unsafe: there is little empirical support for the idea that we can have a safe sometimes-automated system that fails over into manual control.
Human attention change, perception tim
Brakes? (Score:1)
Considering Toyota couldn't write software for brakes I'm not surprised they're uneasy about writing autonomous driving software.
Experience is critical to safe driving (Score:2)
So, after a while with cars driving themselves, what exactly will the turnover to the now-rusty human do but ensure a crash? Also, there is no instantaneous human situational awareness. What... wait... oh, THAT is happening, so I must... crash. Driving is a never-ending story. You must pay attention until you turn the key to off. Also, the system is either perfect or it is not. Everything degrades and eventually needs repair. And how dangerous is a degrading self-driving-car system? And, since they cannot legally
This will end in disaster (Score:2)
Re: (Score:2)
Until we have REAL AI, self-aware, capable of actual thought and real interaction with humans
We don't even know what "self-aware" is, to say nothing of "actual thought". How exactly is it defined? A sea cucumber is aware enough of itself to try to preserve its existence, and yet they have no brain. Is that "self-aware"? They're not bumping around in the dark, speeding off in random directions, randomly eating, or mating with whatever they touch. They can process input from their sensory organs to find food, and are able to communicate for the sake of reproduction. Is that thought?
How is a sea cucum
Re: (Score:2)
We don't even know what "self-aware" is, to say nothing of "actual thought". How exactly is it defined?
...and THAT is why we can't create REAL AI; we have NO IDEA how things actually work, and you are taking for granted how complex a task driving is, which is why you need a mind that can actually THINK.