Toyota Raises Concerns About California Self-Driving Oversight, Calls It 'Preposterous' (reuters.com) 230
A Toyota official on Tuesday raised concerns about California's plans to require compliance with a planned U.S. autonomous vehicle safety checklist, calling it "preposterous." Reuters reports: Hilary Cain, director of technology and innovation policy at Toyota Motor North America, criticized California's proposal to require automakers to submit the U.S. National Highway Traffic Safety Administration's (NHTSA) 15-point safety checklist before testing vehicles. "If we don't do what's being asked of us voluntarily by NHTSA, we cannot test an automated system in the state of California. That is preposterous and that means testing that is happening today could be halted and that means testing that is about to be started could be delayed," she said at a Capitol Hill forum. On September 30, California unveiled revised rules under which carmakers will have to certify that they complied with the 15-point NHTSA assessment, instead of requiring self-driving cars to be tested by a third party, as in the original proposal.
Why is it preposterous? (Score:5, Insightful)
"If we don't do what's being asked of us voluntarily by NHTSA, we cannot test an automated system in the state of California. That is preposterous and that means testing that is happening today could be halted and that means testing that is about to be started could be delayed"
Well sorry to shit on your parade, lady, but maybe it's not such a bad idea to slow all of this down and get it right. NHTSA isn't the devil. If you want to get angry at someone, go after IIHS. NHTSA is trying to actually keep the rest of us, who may someday interact with your automated system, safe from it.
Re: (Score:2, Flamebait)
Meanwhile, about 32,000 people die annually in vehicle accidents in America, about 88 per day. How many of those could be prevented if we didn't have bureaucrats trying to slow progress because it isn't perfectly safe?
Re: (Score:2)
Perfectly safe, as in the "Zaphod Plays It Safe" sense?
Re: (Score:2)
bureaucrats trying to slow progress because it isn't perfectly safe
Do we know that the test would force them to be "perfectly safe"?
I genuinely want to know, I've no idea what those 15 points are, or whether or not they're reasonable. The summary just makes it sound like Toyota is upset at the test being there at all, rather than the contents of the test; I could check TFA, but that isn't the Slashdot way. If Toyota are just objecting to the test on principle, I'm with ACs post; oversight isn't an inherently bad thing. On the other hand, if it is the contents of the test i
Re: (Score:2)
I've no idea what those 15 points are...
Here you go: the 15-point checklist [nhtsa.gov]. I googled it for you. ;)
Re: (Score:2)
Thanks for the link. But that doesn't look like a checklist so much as a PowerPoint slide. Not that it's evil or stupid, but how does one check off items like "Human Machine Interface: Approaches for communicating information to the driver, occupant and other road users"?
I should think that there must be more detail somewhere.
Re: (Score:2)
it doesn't have to be perfectly safe and you damn well know it.
it only needs to be as safe or safer than we expect people to be.
the statistic you cite is meaningless in terms of whether or not we demand some minimum level of reliability from self-driving cars.
any idiot should be able to see that we do not improve those statistics by just letting any other idiot put out a self driving car without regard to its reliability and safety.
Re: Why is it preposterous? (Score:2)
Re: (Score:2)
While there may be many drivers who have achieved a certain level of safety, people have certain weaknesses that computers don't. They don't have perfect attention. Their reaction time is significant. Most people can't look in even two directions at once and their multitasking capability is pitiful.
So at some point in the future we will see that computers achieve a higher safety level than any sample of human drivers, while remaining imperfect. At this point, it will probably become necessary to ban manual
Re: Why is it preposterous? (Score:2)
Re: (Score:2)
I am wondering. People are good at inferring data from context. A ball bouncing into the street is liable to be followed by a child. A wobbly tire might be about to blow out and cause another car to veer suddenly. That sound might indicate a train coming.
Are these inferences not trainable? For certain image classification tasks, computers are already better than
Re: Why is it preposterous? (Score:2)
Re: (Score:2)
So, California conditions other than the mountains. Not a problem for me, and an obvious good place to start.
Regarding cost, they're prototypes. If the system adds $30,000 to the cost of the vehicle, it would be cost-effective for a lot of people here. I doubt it has to add that much.
Re: Why is it preposterous? (Score:2)
Re: (Score:2)
Except there are numerous cases of even simple automated braking systems failing and slamming on the brakes for no reason. So when someone smashes into the car that erroneously hit the brakes full force, who is at fault? Who is at fault when the inevitable hack comes in and causes the automated car(s) to do dangerous stuff? We are simply not ready for this. I submit voice recognition as the example. I remember in the '80s that it was just around the corner. Well it is 2016 (36 years later) and it is just now
Re: (Score:2)
Does vehicle maneuvering include a similar amount of ambiguity to the problem of recognizing natural language? It might not.
Re: (Score:3)
Nothing in the 15 point checklist requires perfect safety. In fact, most of the items are just "it should include something that tries to do X" where X is "obey local traffic laws", "refuse to go into automatic mode if sensors are damaged", "save data if there's a crash" and "switch safely from autopilot to manual control."
The actual document can be found here [transportation.gov] and a simple summary that leaves out a lot can be found here [nytimes.com].
Re: (Score:2)
Now I know what self driving car not to buy (Score:2)
Spoilers: Toyota
Re:Now I know what self driving car not to buy (Score:4, Funny)
their attempt at self accelerating cars was the first warning ;)
Re:Now I know what self driving car not to buy (Score:5, Informative)
After the unintended acceleration fiasco (for which some engineers and management really should have been put to death instead of settling out of court), no one at all should be driving a Toyota, self-driving or otherwise.
Source:
http://www.safetyresearch.net/Library/Bookout_v_Toyota_Barr_REDACTED.pdf
tl;dr:
Here is a list of ways Toyota fucked up:
-Not following appropriate coding style (ie: 'spaghetti'/unmaintainable code, acknowledged by Toyota engineers in internal emails)
-Not following appropriate coding standards (ie: MISRA-C)
-No memory error detection and correction (EDAC), which they told NASA they had ("Toyota redacted or suggested redactions that were made in the NASA report; almost everywhere the word EDAC appears, it's redacted. So someone at Toyota knew that NASA thought that, enough to redact that false information from the public.")
-Not mirroring all critical variables (which they initially claimed they did); in particular the critical kernel data structures had no protection, nor did the global throttle variables (a generic sketch of what mirroring looks like appears after this list)
-Task X responsible for a retarded amount of work: pedal angle reading, cruise control, throttle position, writing diagnostic trouble codes, failsafes
-Buffer overflows (at least one confirmed)
-Invalid pointers (pointers not checked for validity before being used)
-Existence of race conditions
-Using nested/recursive locks
-Unsafe type casting
-Insufficient parameter checking
-Stack overflows
-Excessive code complexity: 67 functions have cyclomatic complexity (MCC) over 50 (aka 'untestable'; 30 is a typical max), and 12 functions have MCC over 100 (aka 'unmaintainable')
-The function that calculates throttle position is MCC 146 and is 1,300 lines of code (executed by Task X)
-Uses recursive functions, which must not be used in critical applications according to MISRA-C
-Incorrect worst case stack size analysis - Toyota claims worst case usage was 41%, expert found worst case stack usage was 94% *NOT INCLUDING RECURSIVE FUNCTIONS!!!*
-Critical, unprotected kernel structures located directly after stack. IE: if stack overflows, critical kernel data is guaranteed to be lost.
-No runtime stack monitoring to ensure it doesn't overflow
-RTOS (named RX OSEK 850, after the OSEK API/Standards used by many automotive RTOSes) was not actually certified as compliant with the OSEK standard, but used by Toyota anyways
-MISRA-C rule violations (over 100 rules in total). NASA looked at 35 rules and found over 7,000 violations. Expert looked at all rules and found over 80,000 violations.
-Toyota claims their internal coding standards overlap ~50% with MISRA-C, but in reality only 11 rules overlap, and 5 of those rules were violated. In total at least a third of their own internal standards were violated.
-Toyota cannot produce any records of bugs or bug fixing from testing, no bug tracking system was used
-Inadequate/rare/no peer code review
-Over 11,000 global variables
-Totally incorrect ("abysmal") watchdog usage: serviced from a hardware timer, so it keeps getting kicked even if other parts of the CPU are failing; doesn't check that critical tasks are running; throws away error codes sent to it by the OS from other tasks; allows the CPU to overload for 1.5 seconds before reset (a football field @ 60 mph)
-Toyota didn't look at or review the monitor CPU code, though they claimed that there could be no software cause for UA
-Monitor CPU had all the requirements (electrical signals coming in and going out, adequate memory, CPU) to monitor the brake pedal and throttle and to do something useful if there was a malfunction, but it just wasn't implemented due to laziness or incompetence
-Many single points of failure
-Their failure mode analysis missed obvious things because they didn't follow any formal safety processes like MISRA
-Mix of Toyota code and Denso code
-"It cost them less to water down the watchdog then to upgrade the CPU to a fast enough CPU"
-If a fault occurs when there is pressure on the brake pedal, then applying further press
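For anyone wondering what the "mirroring" item above actually involves, here is a generic sketch in C (not Toyota or Denso code; the names and types are invented for illustration). The idea is to store each critical variable together with its bitwise complement and check the pair on every read, so a corrupted value is detected instead of being silently fed to the throttle logic.

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic critical-variable mirroring sketch; purely illustrative. */
typedef struct {
    uint16_t value;
    uint16_t mirror;   /* always kept as the bitwise complement of value */
} crit_u16_t;

static void crit_write(crit_u16_t *v, uint16_t new_value)
{
    v->value  = new_value;
    v->mirror = (uint16_t)~new_value;
}

/* Copies the value out only if the pair is still consistent; a mismatch
 * means RAM corruption (bit flip, stray write, stack overrun). */
static bool crit_read(const crit_u16_t *v, uint16_t *out)
{
    if ((uint16_t)(~v->value) != v->mirror) {
        return false;          /* caller should enter a fail-safe state */
    }
    *out = v->value;
    return true;
}
```

On a detected mismatch the reading task would be expected to force a fail-safe (e.g. limp home or idle throttle) rather than keep acting on the corrupted value.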
Re: (Score:2)
-Invalid pointers (pointers not checked for validity before being used)
How do you check a pointer for validity? You can only check for NULLness, right?
-No runtime stack monitoring to ensure it doesn't overflow
How do you check this? Most uCs don't have a valgrind (although the newer Atmels have MPUs) and I'm not sure how hard it is to add stack canaries to the compiler.
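Two common embedded answers, sketched generically below (this is illustrative C, not anything from the Toyota code, and the symbol names are invented). For pointers you cannot prove correctness, but you can reject NULL and anything outside the RAM region the pointer is allowed to address. For stacks, one portable technique is to pre-fill each task stack with a known pattern at startup and periodically measure how much of the pattern has survived (a "high-water mark"), triggering a fail-safe before the stack actually overruns adjacent data.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* 1) Pointer range check: assumed linker-provided bounds of valid RAM. */
extern uint8_t ram_start[];
extern uint8_t ram_end[];

static bool ptr_in_ram(const void *p, size_t len)
{
    const uint8_t *b = (const uint8_t *)p;
    return p != NULL && b >= ram_start && b + len <= ram_end;
}

/* 2) Stack high-water-mark check: fill at boot, then see how much
 *    of the fill pattern has never been overwritten. */
#define STACK_WORDS   256u
#define STACK_PATTERN 0xDEADBEEFu

static uint32_t task_stack[STACK_WORDS];   /* hypothetical task stack */

static void stack_fill(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++) {
        task_stack[i] = STACK_PATTERN;
    }
}

/* Words never touched; stacks here are assumed to grow downward, so the
 * unused words sit at the low indexes. Near zero means imminent overflow. */
static size_t stack_headroom(void)
{
    size_t untouched = 0;
    while (untouched < STACK_WORDS && task_stack[untouched] == STACK_PATTERN) {
        untouched++;
    }
    return untouched;
}
```

Per the write-up above, the complaint was not that such checks are exotic, but that nothing like them (and no meaningful watchdog supervision) was actually in place.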
Re: (Score:2)
Over 11,000 global variables
Whoa, wait... was the entire entertainment system controlled by a single process or what? If that's just globals, how can ECU software possibly be this large?
Fantastic write-up, BTW. People like you are why I still visit Slashdot.
Re: (Score:2)
Spoilers: Toyota
And it seems like a states' rights issue to me. If Toyota doesn't want to sell their cars in Callyforniay, they are completely free not to.
Problem solved.
Re: Now I know what self driving car not to buy (Score:2)
Silly goose, states' rights only matter on some issues, and safety regulation isn't one of them; that's something the feds should stop states from doing. States' rights apply to things like human rights, not things that could cost businesses money.
Laws! (Score:3)
Re: (Score:3)
This isn't a law. It's sour grapes on California's part for losing Toyota.
Are you against the rights of the individual state to pass laws that are not provided to the federal government by the constitution?
States' rights, baby; it isn't just for making mandatory carry and flying of the Confederate flag the law.
And Toyota is completely free to ignore it and not sell their vehicles in California. The marketplace. If you don't like onerous regulations, refuse to sell Toyotas to California citizens. And then there's no regulation at all to hinder you.
Re: (Score:2)
Don't test in California? (Score:2)
SO, they are worried about making their cars safe? (Score:2)
Tesla passed it. Why can't Toyota, Volvo, Mercedes, etc.?
What the 15 points actually are - (Score:2)
https://www.transportation.gov... [transportation.gov]
The Safety Assessment would cover the following areas:
Data Recording and Sharing
Privacy
System Safety
Vehicle Cybersecurity
Human Machine Interface
Crashworthiness
Consumer Education and Training
Registration and Certification
Post-Crash Behavior
Federal, State and Local Laws
Ethical Considerations
Operational Design Domain
Object and Event Detection and Response
Fall Back (Minimal Risk Condition)
Validation Methods
Preposterous? (Score:2)
Are you angry because it's probably aimed at your company?
http://www.forbes.com/sites/ji... [forbes.com]
Automakers With The Lowest (And Highest) Recall Rates ...Toyota/Lexus/Scion led the pack for the second year in a row with nearly 5.3 million cars and trucks recalled, followed by the Chrysler Group at around 4.7 million and Honda/Acura with nearly 2.8 million models recalled. While these would seem to be staggering numbers, as NHTSA points out they're not weighed against sales, and as such aren't n
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
I'm sure you'd agree that autopilot system testing by the FAA is too onerous also. Flying is the safest form of travel, but not everyone uses it because it is expensive. Part of the reason it is expensive is all the regulations aircraft have to comply with. By eliminating testing, the price will come down, more people will take planes and helicopters everywhere, and even more lives will be saved.
Interesting that you use flying as an example. Here is a list of the safest ways to travel:
Trains - 0.2 deaths per billion miles
Buses - 0.5 deaths per billion miles
Airplanes - 0.5 deaths per billion miles
Cars - 4 deaths per billion miles
Space Shuttle - 7 deaths per billion miles (18 people total)
Ferries - 20 deaths per billion miles
Bicycles - 35 deaths per billion miles
Walking - 41 deaths per billion miles
Motorcycles - 125 deaths per billion miles
source - http://961theeagle.com/what-is... [961theeagle.com]
So we got
Re: As it should be (Score:2)
Yeah, my office isn't a boat in the middle of a lake so I can't take the ferry there.
And driving should be safer regardless of where it ranks in the list of transportation modes.
Re: (Score:2)
Yeah, my office isn't a boat in the middle of a lake so I can't take the ferry there.
And driving should be safer regardless of where it ranks in the list of transportation modes.
And I don't ride buses either. What's your point? I just went through the safest modes of transportation by miles traveled.
Re: (Score:2)
Yeah, my office isn't a boat in the middle of a lake so I can't take the ferry there.
And driving should be safer regardless of where it ranks in the list of transportation modes.
And you never walk either.
Re:As it should be (Score:4, Insightful)
"Per billion mile" is a stupid way to measure safety in practical terms [ijhssnet.com]. We don't measure our lives in miles or kilometers. We measure them using time.
Let's look at those transportation methods in fatalities per billion hours traveled:
Bus - 11
Rail - 30
Air - 30
Water - 50
Van - 60
Car - 130
Foot - 220
Bicycle - 550
Motorcycle - 4,840
Space Shuttle - 438,019
Now, let's consider how many hours we spend each day in each of these activities. I'd guess I'm in the car an average of perhaps 1 1/2 hours per day. Since nothing else comes close (assuming treadmills don't count as "walking"), I'm at FAR more risk of dying in a car crash than from any other transportation method, by a very large margin.
Lies, damn lies, and statistics. According to your statistics, the space shuttle is only slightly more dangerous than driving in a car and less dangerous than a ferry, which is obvious nonsense.
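For what it's worth, the two sets of numbers are not as contradictory as they look for ground transport: deaths per billion hours is just deaths per billion miles multiplied by average speed in mph. A quick sanity check (the average speeds below are my own rough guesses, not figures from either source) roughly reproduces the per-hour rows for cars, bikes, walking and motorcycles; the air and shuttle rows clearly come from different assumptions or datasets.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative conversion only; the assumed average speeds are guesses. */
int main(void)
{
    struct mode { const char *name; double per_billion_miles; double assumed_mph; };
    const struct mode modes[] = {
        { "Car",          4.0, 32.5 },
        { "Bicycle",     35.0, 15.0 },
        { "Walking",     41.0,  5.0 },
        { "Motorcycle", 125.0, 40.0 },
    };
    for (size_t i = 0; i < sizeof modes / sizeof modes[0]; i++) {
        /* per-hour rate = per-mile rate x miles covered per hour */
        printf("%-10s ~%.0f deaths per billion hours\n",
               modes[i].name, modes[i].per_billion_miles * modes[i].assumed_mph);
    }
    return 0;
}
```

That gives roughly 130, 525, 205 and 5,000, against 130, 550, 220 and 4,840 in the table above, so the per-mile and per-hour figures largely describe the same underlying risk; they just weight it differently.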
Re: (Score:2)
According to your statistics, the space shuttle is only slightly more dangerous than driving in a car and less dangerous than a ferry, which is obvious nonsense.
Whoosh!
Re: (Score:2)
Re: (Score:2)
Per mile is kind of disingenuous. What happens when you do it per trip or per hour of travel? Also, by your own numbers, plane travel is nearly 10 times safer than automobile travel. So there is plenty of room for improvement.
Or by number of total trips, or survivors per accident? Or passenger miles, or by television coverage, or day of the week? Did anyone notice that the original post I replied to was by mile, so I replied by mile?
While I tend to believe that train travel is pretty darn safe, space shuttle flight is skewed by the fact that once in orbit you are travelling somewhere around 17,500 miles per hour, so you rack up miles at a prodigious pace.
It wouldn't stop me from riding in one, but big ka
Re: (Score:3)
Those bicyclist, motorcyclist, and pedestrian deaths you're citing as being so much higher than car deaths are mostly caused by cars.
Re: (Score:2)
Those bicyclist, motorcyclist, and pedestrian deaths you're citing as being so much higher than car deaths are mostly caused by cars.
True. But seriously - note that my response was pointing out that ranking safety by miles travelled is a strange metric to use. Y'all disagreeing with me are inadvertently proving my point.
Re: (Score:2)
Yeah, those deaths should decline as a result of self driving cars too.
Re: (Score:2)
air travel is expensive because they deregulated the industry, which, while followed by a short-term price crash, also led to massive consolidation and reduced competition, and sure as s--- the prices went back up and stayed there, only now the service is worse and available in fewer locations.
and did you actually suggest eliminating testing as a way to lower prices and save lives?
you're an idiot.
you know nothing of this topic.
also, let's consider that the autopilot of an airplane is in no way comparable to that of a car
Re: (Score:3, Insightful)
Re: As it should be (Score:4, Insightful)
Robots may or may not be better, but to say humans are the worst drivers imaginable is hyperbole. I suppose you let your dog drive because it is safer.
The population of the US is 318 million (I assume that 30,000 figure is for the US), so that is about 0.009% of people dying each year; sure, it could be better. 13,322 people die from falls; given that walking is so much slower, are we even worse at walking?
To me it is not apparent that fewer people will die if robots drive. You need actual evidence and testing, not wild statements about how bad people are at driving; you need to use actual facts.
If I died every time my computer had a blue screen I would be dead a long time ago.
Re: (Score:2)
Re: (Score:2)
Few will pay double for a car for automation.
Tesla charges an extra $3k for the Autopilot option, which includes all the sensors and actuators needed for automation. That is nowhere near "double".
Re: (Score:2)
Few will pay double for a car for automation.
Tesla charges an extra $3k for the Autopilot option, which includes all the sensors and actuators needed for automation. That is nowhere near "double".
They also claim that it isn't self-driving.
Re: (Score:2)
They also claim that it isn't self-driving.
It is only partially self driving. But that is because of the software, not the hardware. Once the software is ready, Tesla says that the cars already in the hands of customers will be fully self driving.
So $3,000 is enough to make an HDC (human-driven car) into an SDC (self-driving car). Once production ramps up, it is likely that the cost will fall dramatically. You will save far more in insurance premiums than the additional cost of the self-driving capability.
Re: (Score:2)
Once the software is ready,
The software was 98% there 18 years ago. That last 2% hasn't been achieved in 18 years, yet you think it will be achieved in the next five?
Re: As it should be (Score:2)
Re: (Score:2)
The software was 98% there 18 years ago. That last 2% hasn't been achieved in 18 years, yet you think it will be achieved in the next five?
Please cite a source for your otherwise made up numbers.
Sure thing, from the history of self-driving cars [wikipedia.org]:
In 1994, the twin robot vehicles VaMP and Vita-2 of Daimler-Benz and Ernst Dickmanns of UniBwM drove more than 620 miles (1,000 km) on a Paris three-lane highway in standard heavy traffic at speeds up to 81 miles per hour (130 km/h), albeit semi-autonomously with human interventions. They demonstrated autonomous driving in free lanes, convoy driving, and lane changes with autonomous passing of other cars.
...
In 1995, Carnegie Mellon University's Navlab project completed a 3,100 miles (5,000 km) cross-country journey, of which 98.2% was autonomously controlled, dubbed "No Hands Across America".[37]
...
Also in 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz undertook a 990 miles (1,590 km) journey from Munich in Bavaria, Germany to Copenhagen, Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 109 miles per hour (175 km/h) on the German Autobahn, with a mean time between human interventions of 5.6 miles (9.0 km), or 95% autonomous driving. It drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long distance reliability, it drove up to 98 miles (158 km) without human intervention.
SDCs have gotten more hype in the last five years than many Apple products. That does not in any way mean there have been advances beyond the 158 km stretch of self-driving that occurred in 1995.
Re: (Score:2)
Re: (Score:2)
You'll still be able to get a $5k car. You will just have to wait an extra 10 years after the original purchaser has paid the $3k for the autonomous upgrade to get one with self driving features. Think about it - even when they become available, they will initially only be a small part of the cars sold in, say, 2020. Even if they are the majority of cars sold by 2030, there will still be lots of used non autonomous cars available at that point and beyond. Maybe by 2050 it will be hard to get anything else,
Re: (Score:2)
no, the reality is that people are very bad at judging risk at all.
Re: (Score:2)
> I suppose you let your dog drive because it is safer
With better hearing, better vision and faster reflexes, it very probably would be. The only problem is, he can't reach the pedals and his paws keep slipping off the steering wheel.
Re: (Score:2)
that is about 0.009% of people dying each year; sure, it could be better.
You are not thinking this through. Sure, SDCs are safer, so we can reduce fatalities. But another option is that we could keep fatalities at their current acceptable (to you) level and just have the SDCs drive faster. The average American spends about 300 hours per year driving. By doubling the speed, we could cut that to 150 hours. The savings would be 150 hours * 330 million people / 365 / 24 = 5.6 million years. If the average lifetime is 80 years, then this is about 70,000 lifetimes saved annually.
Re: (Score:2)
The average American spends about 300 hours per year driving. By doubling the speed, we could cut that to 150 hours. The savings would be 150 hours * 330 million people / 365 / 24 = 5.6 million years.
Err... you are misleading with that number. You shouldn't add up times that occur concurrently into one big number (similar to the Pokemon Go post about extending lives). The 300 hours per year is still 300 hours to each individual person; you can't pool every person's time as if it were one shared quantity. Saying 300 hours saved per year per person would be good enough, and it explains itself.
Re: (Score:2)
US 2014 road fatality statistics:
29,989 fatal motor vehicle crashes resulting in 32,675 fatalities.
Fatalities by Vehicle Type: Car (38%), Pickup/SUV (25%), Large Truck (2%), Motorcyclist (13%), Pedestrian (15%), Bicyclist (2%)
Fatalities by Crash Type: Single-vehicle (57%), Multiple-vehicle (43%)
Drivers Killed: 15,479 (47% of total fatalities)
Drivers Killed with BAC >= 0.08: 4,913 (15% of total fatalities)
Road Fatalities by Environment: Urban (47%), Rural (51%)
Re: (Score:3)
The evidence and testing is being done, and accident rates are lower for autonomous cars already.
No, it isn't. Stop comparing "self-driving cars that are corrected by a human" with "human-driven cars".
Re: (Score:2)
People are the only drivers imaginable.
Re: (Score:2)
robots cannot possibly EVER be worse.
said by a person who clearly knows nothing about either technology or fallacies.
Re: As it should be (Score:2)
It's a damn good card. Reducing the fatalities ASAP is the main point and should be our goal. I mean are you saying you are cool with 30,000 people dying on US roadways? And I am sure when we include all the world's highways the number will go up to the 100s of thousands. We know that the Tesla is safer than any other automobile. It's been fucking proven already no thanks to people like you who tried hard to block it. How much testing do you need? Cause while you are doing testing, which will never satisf
Re: (Score:3, Insightful)
It's a damn good card. Reducing the fatalities ASAP is the main point and should be our goal.
Why? We trade safety for freedom, accepting a higher risk of fatality, in pretty much all aspects of life. It's called living.
The lifetime risk of dying in a car accident is around 0.17%, which I think are very acceptable odds. Certainly much better odds than for the risk of dying from a fall, which is around 0.5%. We could reduce that quite substantially if we lived in padded rooms and moved around with walkers, wearing helmets.
But I prefer the freedom that accepting risks give.
If I die, I will have lived.
Re: (Score:2)
Re: (Score:2)
Nothing kills young healthy people like cars do. Anyway, if you want to risk your life on the road there will always be race tracks and recreational vehicle areas where you won't hurt anybody. We could even keep some back country roads open to you, perhaps... and of course anywhere on private property.
Re: (Score:2)
Nothing kills young healthy people like cars do.
Motorbikes?
Motorbikes kill more young people than the military does (but we don't treat bikers like heroes, do we?)
Re: (Score:2)
Re: (Score:2)
Living in a padded room is terribly inconvenient. We can make cars automated for very little cost, it's certainly worth it .. there is no inconvenience .. in fact there is convenience ... we can watch TV or videochat with family during the commute. How is that not a benefit? You are saying we should partake in a risky activity even if the less risky alternative is more convenient and fun? What fool goes for that??
Plus, I dunno about you .. but I am also not fond of car-accident-related injuries. You realize
Re: (Score:2)
You are saying we should partake in a risky activity even if the less risky alternative is more convenient and fun?
I guess "let's go for a ride!" is something never heard in your household. The joy of not just using a car as transportation from A to B, but as a means of exploration. Seeing new things. Not knowing where the road takes you.
If you don't think that is fun, and are willing to trade it for making a very low risk even lower, I feel sorry for you.
Re: (Score:2)
You seem to think we won't be able to tell an autonomous vehicle commands like "take us to a random destination on highway 101"?
And again, I am sure there will be trails and areas you can self drive a vehicle at your own risk.
Re: (Score:2)
You seem to think we won't be able to tell an autonomous vehicle commands like "take us to a random destination on highway 101"?
Again, you miss the point entirely. The point is to make decisions on the fly. It's not the destination, it's the ride.
And again, I am sure there will be trails and areas you can self drive a vehicle at your own risk.
And again you miss the point. If you remove the freedom to go anywhere except a playpen, you remove what's compelling: Freedom.
As for risk, I repeat: The lifetime risk of dying in a car accident is around 0.17%
Not per year. Lifetime. It's going to be much lower for you, given the years you have already lived.
I am sure there are car free zones you can drive your autonomous vehicles ar
Re: (Score:3)
If I die, I will have lived.
Yep. Your daily commute is really "living". I admire you for being so alive.
Re: As it should be (Score:2)
Re: As it should be (Score:2)
LOL it must be maddening to live with such insane paranoia. You a Trump voter? Anti-vaxxer? Anyway, collision avoidance systems take priority over navigation, and navigation is not tied to the date. Seriously, if you have never taken a software engineering class or even written a program in your life, you really shouldn't be hallucinating about how software bugs work.
Re: (Score:2)
so answer the question: should these cars be let on the road before or after those bugs are worked out?
Re: (Score:2)
so answer the question: should these cars be let on the road before or after those bugs are worked out?
If we wait until we're sure that all the bugs are worked out, the cars will literally never be allowed on the road, despite being enormously better than the alternative. Should we refuse to let people drive until we're sure that they'll never make an error?
Re: As it should be (Score:2)
Re: (Score:2)
...Because when a fallible human makes a mistake driving a car, an accident can occur right there and then, while when a fallible human makes a mistake programming the AI for the car, it's followed by months, or years, or decades of testing and oversight during which someone can say "hey, there's a mistake here, let's fix that" before any real-world accidents are possible.
Plus, when a fallible human makes a mistake that gets someone killed, the best case scenario (from a future safety point of view) is that
Re: As it should be (Score:2)
Re: As it should be (Score:2)
Nationalists really shouldn't be calling anyone anti-human. Not when your core values are that it's OK to torture foreigners without being absolutely certain of their guilt and also that it's ok to kill all the relatives of terrorists (again, without any regard as to whether they are guilty or involved). That's the flag you Trump voting nationalists are carrying. Nationalism hates the concept of all human beings having value. Nationalists believe only certain human beings have any value or rights. That's a
Re: (Score:2)
Re:As it should be (Score:4, Funny)
Bonus, no need to paint flames on the sides!
Re: (Score:2)
This time - when someone gets killed - the code review should start the same day.
Code reviews do not help with machine-learned algorithms. The algorithm is opaque even to the developer. They should not be using machine-learning and/or neural nets to steer cars. The resulting systems cannot be reviewed for accuracy and reproducibility. They can only be tested.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I think shirts with the standard red octagonal STOP symbol may become very popular with pedestrians -- although, maybe that would just cause the cars to begin to ignore STOP signs that looked like shirts - perhaps because they were slightly bent. That may not end well.
Re: (Score:2)
Re: (Score:2)
At some point we will be forced to measure these AI systems by their pedigree (e.g., the "schooling" they have received) and the actual testing they have passed. This is not unlike how humans are (inaccurately) measured. That isn't today, but it might be coming sooner than you think...
What do you think you're doing when you solve a Google captcha? Google is running hundreds of thousands of videos through AIs, analyzing every frame.
Remember "Select all the road signs" a while ago? Now they seem to be on to billboards. You train it on a set of data and run it back on the hundreds of thousands of hours of data they have from the Google street view project.
Re: (Score:2)
Re: (Score:2)
Contrary to most of the morons posting here (I'm not talking about you), the 15 points make eminent sense to me. Presumably, here is what the twits think:
1) Documented process for data collection and sharing. Hell no, let's have a lackadaisical process and not share anything.
2) Maintenance of privacy. Nope, leak everything all over the place.
3) Systematic design for safety. Why? Duct tape it together.
4) Cybersecurity. Why should it be any more secure than crappy mobile, desktop, and server security are now?
Re: (Score:2)
Re: (Score:2)
"Before testing them". Not before selling them. Before *testing* them.
They want to run their tests on real roads with real people; they want to put at risk innocent bystanders who are not in the least bit remotely interested in their product.
Before putting your untested crap on the road to drive amongst uninterested third parties, they *should* be validating the shit out of the system.
Real roads and real people are not there for your personal use as guinea pigs.
Re: (Score:3)
You just posted that to a business running on a web server in California. Yeah, with a GDP of $2.5 trillion we sure are running short of businesses. Let's scrap all safety rules in a desperate attempt to increase their profits.
Re: (Score:2)
Re: (Score:2)
I'm surprised they don't test in the third world, where fewer people formally care if people die from mistakes.
Because the traffic conditions are far from perfect, and the state of the art in SDCs struggles to handle anything but near-perfect conditions.
Re: (Score:2)
"Kim Jong won't let me shop for jeans, OMG, the Horror!"