Elon Musk Rolled Out Autopilot Despite Engineers' Safety Concerns, Says Report (theverge.com)
An anonymous reader quotes a report from The Verge: When Elon Musk announced last fall that all of Tesla's cars would be capable of "full autonomy," engineers who were working on the suite of self-driving features, known as Autopilot, did not believe the system was ready to safely control a car, according to the Wall Street Journal. The WSJ report sheds more light on the tension that exists between the Autopilot team and Musk. CNN previously reported in July that Musk "brushed aside certain concerns as negligible compared to Autopilot's overall lifesaving potential," and that employees who worked on Autopilot "struggled" to make the same reconciliation.
A major cause of this conflict has apparently been the way Musk chose to market Autopilot. The decision to refer to Autopilot as a "full self-driving" solution -- language that makes multiple appearances on the company's website, especially during the process of ordering a car -- was the spark for multiple departures, including Sterling Anderson, who was in charge of the Autopilot team during last year's announcement. Anderson left the company two months later, and was hit with a lawsuit from Tesla that alleged breach of contract, employee poaching, and theft of data related to Autopilot, though the suit was eventually settled. A year before that, shortly before the original rollout, a lead engineer had warned the company that Autopilot wasn't ready to be released. Evan Nakano, the senior system design and architecture engineer at the time, wrote that development of Autopilot was based on "reckless decision making that has potentially put customer lives at risk," according to documents obtained by the WSJ.
Full autonomy would be unsafe (Score:2)
The current hardware doesn't have side-facing cameras (or lidar or radar) on the front sides of the car. There are cameras in the side front fenders (in the logo) but they are looking backwards. They need to have side cameras close to the front of the car and as high up as possible, because before proceeding from a stop sign you need to see what's coming at you from the left or right side. The camera mounted in the middle pillar between the windows doesn't have an adequate view.
Re: (Score:2)
There are side-facing cameras in the B pillars. That is only a few inches further back than where a human driver's eyes are, and approximately the same height.
Re: (Score:2)
That's not good enough. We want it to be better than what a human driver would be able to work with. 40% of fatal accidents are at stop signs -- mostly side impact. That is, tens of thousands of people are killed in side impact collisions every year. Reference: https://safety.fhwa.dot.gov/in... [dot.gov]
Anything to eliminate that would be good. If a collision is imminent, the early warning may help the car decide to speed up or brake such that the passenger compartment is safe. Having a camera in the B pillar may help.
Autopilot! (Score:2, Troll)
You know, a country with money's a little like a mule with a spinning wheel. No one knows how he got it and danged if it knows how to use it.
Heh-heh, mule.
The name's Musk, Elon Musk. And I come before you good people tonight with an idea. Probably the greatest... Aw, it's not for you. It's more of a China idea.
Now, wait just a minute. We're twice as smart as the people of China. Just tell us your idea and we'll give you subsidies for it.
All right. I'll tell you what I'll do. I'll show you my idea. I give yo
Someone always has to make the tough call (Score:5, Insightful)
I've been in engineering organizations releasing new products that had life saving or threatening potential. It is always an agonizing, scary hard call as to when you've passed the threshold of risk.
There is a bell curve with a peak. You rarely hit the peak. If you make the call too late, you cost the lives of those you might have saved - too soon, you cost lives of those who might have saved themselves.
Even if you hit the peak perfectly, you'll always be able to truthfully argue that some people are being saved who would have died and some are dying who would have lived. The peak is a point of balance between the two - not a perfect elimination.
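To put the parent's bell curve in concrete terms, here's a toy model (every number and cost curve below is invented purely for illustration, not taken from any real product data):

# Toy model of the release-timing tradeoff described above.
# Releasing later means fewer deaths from remaining flaws, but more
# deaths among people the product would have protected. All invented.
def total_deaths_if_released_at(month):
    flaw_deaths = 500 / (1 + month)   # flaws shrink as you keep polishing
    unserved_deaths = 10 * month      # people left unprotected while you wait
    return flaw_deaths + unserved_deaths

best_month = min(range(1, 60), key=total_deaths_if_released_at)
print(best_month, total_deaths_if_released_at(best_month))
# The "peak" of the bell curve is this minimum: release earlier or
# later than this point and total deaths rise either way.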
I can remember many times hating my bosses when they released a product that I didn't feel was ready. As an engineer, I have to be over-focused on the problems and stand no chance of seeing when I am perfectly perched on that probability peak. They had to pry the projects from my hands to get them out the door. I actually begged in tears once. But, in retrospect, I can't think of any case where my bosses weren't right in releasing the product that I was concerned about releasing.
What we need to force progress is for attorneys to get smart and start figuring out how to file more effective suits for lack of progress toward autonomy. How many are dying today because we don't have it? We need to focus hard on that.
Re: (Score:3)
You are obviously not an engineer in the way I'm familiar with in the UK, i.e. chartered and at a minimum a member of your professional institute: mech, civil, electrical, whatever; they've all got their own professional body. Except software "engineers", of course.
The fact that you agonize
Re:Someone always has to make the tough call (Score:5, Insightful)
Having the license in this country is often career-ending, much like having a PhD. It can make it very difficult to get a job. I've been in corporations that had thousands of engineers and never met anyone I knew to have it. I think licensed engineers tend to be concentrated in certain structural and mechanical areas, and in most civil and architectural engineering. The electrical, aeronautical, and computer engineering professions have much less of this.
Regardless, there is no vehicle on the road today that does not make some safety compromise. Not one single vehicle uses the best-known safety mechanism for every single aspect of the car. No one could buy it if they did, and it wouldn't meet other necessary criteria if every compromise was made in the safety direction. Our government often has to force the matter by making regulations like the ones coming down the pipe soon to require all vehicles to have automatic braking technology. This is tech that has been available for a while, yet many engineers must be signing off on vehicles that are killing people; otherwise the government wouldn't have to step in.
Everything engineered makes these compromises. For example, every building might be designed to handle a 500-year quake, but what happens if a 5,000-year quake comes along?
Airbags are an interesting example. Even the best airbag systems kill some people who would not have died without airbags. But they save many more that would have. So, you accept the compromise. Many years ago, seatbelts did the same and still do. Yet, we have them, and are even required by law in most places to wear them.
With the autonomous vehicle question, it is ready to deploy when it will save more people than it will kill, measured against human drivers (all of them, not just the competent ones). To wait any longer would be to kill the people it might have saved. Of course, determining when that point arrives is nearly impossible. Either the hard call gets made or the vehicles never will be, because engineers will never be able to say of any product that it is not flawed in some situation - often one in which the consumer is misusing it.
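As a back-of-the-envelope version of that break-even test (all figures below are invented placeholders, not measured rates):

# Deploy when the autonomous fatality rate per mile drops below the
# human baseline; the difference times miles driven is net lives saved.
# Every number below is a placeholder for illustration only.
human_rate = 1.2       # fatalities per 100M vehicle miles (assumed baseline)
av_rate = 0.9          # fatalities per 100M vehicle miles (assumed AV rate)
annual_miles = 3.2e12  # assumed annual vehicle miles traveled

net_lives_saved = (human_rate - av_rate) * annual_miles / 1e8
print(f"net lives saved per year if deployed now: {net_lives_saved:,.0f}")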
Realistically, we do wait longer than the point of net balance because the public does not understand statistically-based decisions very well. When it is your family member who died because the tech failed, you want to blame the tech without looking at the whole picture. And we often don't even know when a family member died because the tech that could have saved them was held back, still being over-engineered.
Often, these hard decisions are the reason for regulation - not to protect the public, but to give companies cover in deploying something that a big-picture organization like the government has determined will be a net benefit to the public while being a detriment to some individuals. The engineers then have the excuse of having met the regulation. That framing seems to sit better with our minds.
Absent specific regulations and tests to target (which is the ideal situation in a free society), the business leaders are usually the ones who make the tough calls.
Re: (Score:2)
Our government often has to force the matter by making regulations like the ones coming down the pipe soon to require all vehicles to have automatic braking technology.
Too bad that's a garbage example, since the automakers voluntarily chose to implement AEB by that time, without being forced. The example you want is seatbelts. And actually, many government safety mandates are crap. The rears of vehicles are creeping upwards to meet rear-impact crash test requirements; that's a natural process. But the fronts of vehicles are being mandated to specific dimensions in the name of passenger safety. Instead of having test requirements to meet, the government is forcing specific
Re: (Score:2)
A well thought out and reasoned response to an inflammatory post.
Airbags are an interesting example. Even the best airbag systems kill some people who would not have died without airbags. But they save many more that would have. So, you accept the compromise.
Perfect example!
Realistically, we do wait longer than the point of net balance because the public does not understand statistically-based decisions very well. When it is your family member who died because the tech failed, you want to blame the tech without looking at the whole picture. And we often don't even know when a family member died because the tech that could have saved them was held back, still being over-engineered.
Often, these hard decisions are the reason for regulation - not to protect the public, but to give companies cover in deploying something that a big-picture organization like the government has determined will be a net benefit to the public while being a detriment to some individuals. The engineers then have the excuse of having met the regulation. That framing seems to sit better with our minds.
Absent specific regulations and tests to target (which is the ideal situation in a free society), the business leaders are usually the ones who make the tough calls.
And insightful!
Just wanted you to know, your efforts were appreciated. :-)
Whenever someone starts getting sanctimonious about safety (Score:2)
Whenever someone starts getting sanctimonious about safety I ask them if they fit the best possible tyres to their car for the next journey. If not then they are prepared to sacrifice safety for cost or convenience.
Re: (Score:2)
In practice what happens is the engineer only certifies for very limited use cases in controlled environments, and the management/sales people push it further. Then a few years later in court the engineer produces their documentation to show that they didn't support it being used that way and tried to warn people of the impending disaster.
Licensed engineers may be needed for autodrive software (Score:2)
Licensed engineers may be needed for autodrive software, or something like it.
The FAA does code audits on autopilot software.
Re: (Score:2)
Licensed engineers may be needed for autodrive software, or something like it. The FAA does code audits on autopilot software.
It's not exactly a code audit, it's more like the FAA certifies code to a certain level of robustness. For commercial airline software to get certified, they generally have to prove that every line of code has been covered by tests, and every branch has been taken and not taken. There are even higher levels of certification (usually for the OS), where the code must be symbolically expressed, and mathematically proved to be correct. Not an inexpensive undertaking.
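For a toy illustration of what "every branch taken and not taken" means in practice (ordinary Python rather than real avionics code; the function and tests are invented for this sketch):

# Hypothetical example: to claim branch coverage of this function,
# the test suite must drive each 'if' both ways.
def clamp_airspeed(v, lo, hi):
    if v < lo:   # branch A: must be tested taken and not taken
        return lo
    if v > hi:   # branch B: must be tested taken and not taken
        return hi
    return v

# Minimum test set for full branch coverage:
assert clamp_airspeed(50, 100, 300) == 100   # A taken
assert clamp_airspeed(400, 100, 300) == 300  # A not taken, B taken
assert clamp_airspeed(200, 100, 300) == 200  # A not taken, B not taken

The higher certification levels add condition-level criteria on top of this, which is why the cost balloons so quickly.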
Rush to market (Score:2)
NONE of them are really ready and won't be for quite some time -- if ever.
Meanwhile people really don't want them anyway. [cnbc.com]
Re: (Score:2)
"However, just over 70 percent would ride in a car that was partially autonomous. Gartner defined partially autonomous vehicles as those that could drive autonomously, but allow a driver to retake control of the car if needed."
That describes the Tesla Autopilot.
Re: (Score:2)
Contradictory (Score:2)
Electric cars cool, self-driving cars bad.
If Musk really wants to "save the planet", drop the self-driving crap already. It makes the car more expensive, so fewer people can afford one, meaning they keep their old polluting car or even buy a brand-new polluting car.
Re: (Score:2)
Electric cars cool, self-driving cars bad.
If Musk really wants to "save the planet", drop the self-driving crap already. It makes the car more expensive, so fewer people can afford one, meaning they keep their old polluting car or even buy a brand-new polluting car.
I disagree, self-driving cars have an even better ability to "save the planet" than electric cars. But I would say "Electric cars cool, self-driving cars cooler, but currently unrealistic." That said, unrealistic is not in Elon's lexicon.
Re: (Score:2)
I disagree, self-driving cars have an even better ability to "save the planet" than electric cars.
They really don't, because they're still cars, and cars still suck. PRT would be dramatically superior and it would even permit the existing automakers to continue to exist (just like AVs will) but big changes are scary.
Re: (Score:3)
Engineer behavior outside of competence is indistinguishable from MBA behavior.
But the engineers are supposed to know better. The MBAs are _trained_ that 'competence is irrelevant': they can manage anything.
Re: (Score:3)
Re: (Score:3)
This is MBA behavior, not engineer behavior.
That's funny, because when I read this the first thing I thought was that he sounded like one of the new-school engineers. You know, the ones with the motto "move fast and break things".
Re: (Score:3)
No competent engineer, new or old, has ever uttered those words or advocated what they represent. Such a motto only works when you're involved in shit that doesn't really matter. In other words, it's perfect for Facebook, or a small Google team working on a new project that will be abandoned as soon as it's acquired a few loyal users, or a Silicon Valley startup no one's heard of writing an app that no one cares about.
Re: (Score:2)
No competent engineer, new or old, has ever uttered those words or advocated what they represent.
I completely agree. However, it's a sentiment that I hear quite commonly these days, usually from people in software engineering positions who are in their 20s.
Re: (Score:2)
They Write the Right Stuff [fastcompany.com]
I'm not a software engineer, just the old fashioned brick and mortar kind (the ones who build physical systems for the real world that people depend on for life). But this was required reading when I started my job.
Brilliant read, thanks. I loved reading the part about how errors that slip past are meticulously analyzed. When a bug is found and fixed, the entire process is looked at to discover *why* the bug slipped past in the first place.
This is something I instinctively do as a developer, but have never really heard formalized. When I see a mistake that was made, I like to take a step back and ask "was there something I could have done to prevent that mistake from occurring in the first place? What part of the
Re: (Score:2)
'Engineer' is not just a job title.
Re: (Score:2)
Absolutely right, but I was talking specifically about people holding software engineering positions.
"Move fast and break things" started as the engineering motto at Facebook (thanks, Zuck). Unfortunately, there's a whole bunch of people who still think that sounds great.
Re: (Score:2)
It's just like plane autopilot.
Elon, you are off the hook.
Re: (Score:3)
Re: (Score:2)
Better yet don't call it autopilot. Even if you try to explain it, there are plenty of fools who won't get it. Call it "drive assist" and people might be a little less foolish. Some assholes will still misuse it, but you can't stop someone hellbent on stupidity.
Agreed, this was a marketing fiasco of Tesla's making. The technology is good, but they marketed it as something it wasn't, and at least one guy died.
Re: (Score:2)
Agreed, this was a marketing fiasco of Tesla's making. The technology is good, but they marketed it as something it wasn't, and at least one guy died.
Actually, they marketed it as what it was, and that was confusing for people, and at least one guy died expressly ignoring the warnings of the manufacturer, of which he literally could not possibly be unaware since he had to attend a safety lecture before he was allowed to use the feature. He ignored the warnings of the people who produced the system, and that is what killed him.
Re: (Score:2)
Except those words are false. It is NOT just like plane autopilot. Your Tesla on autopilot going down Highway 101 is in a vastly different environment than an airplane flying on autopilot over the middle of the Pacific Ocean at 30,000 feet.
Plane autopilot is safe because it is well understood by all parties that it is not to be engaged in an environment containing tractor-trailers and pedestrians in your path. Telling people Tesla autopilot is just like plane autopilot is dangerous snake oilmanship.
Re: (Score:2)
Re: (Score:3)
"Is it safe?" is the wrong question.
The real question is: "are the fatalities, injuries and accidents that occur per passenger mile greater or less when compared to a human driver?"
"Safe" is a subjective and unmeasurable term, unless you define it in unreasonable terms (for example if your definition of safe is zero accidents).
Re: (Score:2)
Re: (Score:2)
Aeronautical autopilots are not safe, precisely because they have significant nuances and problems which have resulted in numerous crashes through misuse and over-reliance. You do not turn on an aircraft's autopilot and then sit back and relax for the rest of the flight.
Which is pretty apt considering what we are discussing and what is being claimed on all sides...
Re: (Score:3)
And autopilot for planes is far, far, far more advanced, capable, robust, and reliable
That's mostly because it is also much simpler. The comparison with it is correct; it just omits the part where you additionally have to recognize roads, vehicles, traffic signs, pedestrians... all the things airplanes don't have to worry about. Airplane-level capability just isn't enough.
Re: (Score:2)
And autopilot for planes is far, far, far more advanced
This comment demonstrates a glaring ignorance of how plane and automobile autopilots work.
Re:Not surprised (Score:4, Informative)
And autopilot for planes is far, far, far more advanced, capable, robust, and reliable than the shit Elon is selling.
That's total bullshit. Autopilot for airplanes has been around for many decades now. It just maintains a heading and altitude. It's roughly analogous to cruise control on cars in technological terms, and maybe automatic lane-keeping in actual functional terms (since cars have to follow a road, planes don't; of course, technologically, lane-keeping is far, far, far more advanced than the autopilot in a typical Cessna).
Yes, there are very advanced autopilots in today's newest passenger planes like the 787, but the term is not exclusive to those, and there's countless decades-old Cessnas and Pipers out there with autopilots that are quite primitive.
No, autopilot in planes does not autonomously pilot the plane. It doesn't take off or land, it doesn't fly around bad weather, it doesn't check METARs and PIREPs, it doesn't watch for other traffic, it doesn't handle radio calls to ATC when you cross into class B airspace, it just keeps you flying straight and level.
The only thing in your post that's correct is the bit about pilots being trained to use their equipment. That isn't true for Teslas, but it also isn't true for any other car either. How many drivers on the road today got explicit training to use the cruise control in their car? Cruise control has been around a few decades now too. Or how about the more advanced features we have now, like adaptive cruise control, lane-keeping, and the infotainment system? Every car is different, with different controls and different quirks. Airplane pilots aren't even allowed to fly a plane (solo) unless they've been specifically trained for that model and received a rating for it. Perhaps we should do that for cars....
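For what it's worth, "it just keeps you flying straight and level" really is, at its core, a simple feedback loop. A toy sketch (not any real avionics; the gain and limits are made up):

# Toy proportional controller of the kind a primitive autopilot loops over:
# read the error between commanded and actual altitude, deflect the elevator.
def altitude_hold_step(target_ft, actual_ft, gain=0.02, max_deflect=1.0):
    error = target_ft - actual_ft
    # Command is clamped to the control surface's travel limits.
    command = max(-max_deflect, min(max_deflect, gain * error))
    return command  # positive = pitch up

print(altitude_hold_step(10000, 9980))  # modest pitch-up command (0.4)

Nothing in that loop knows about terrain, traffic, or weather, which is exactly the point of the comparison.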
Re: (Score:2)
Autopilots in planes have been able to land and exit the runway for quite a while. I'm pretty sure they should be able to take off as well if the plane is well aligned at the start of the runway, but they can't taxi (drive the plane on the ground to the start of the runway).
Re:Not surprised (Score:5, Informative)
It's a lot simpler than that:
This article is bullshit.
Sorry to be so blunt, but it's journalistic malpractice. The author is confusing Enhanced Autopilot (EAP) with Full Self-Driving (FSD). To be clear:
* Some safety features related to autopilot, such as automatic braking and the like, are available to everyone for free.
* EAP is an optional add-on available today ($5k on the Model 3 if purchased at the time of buying the vehicle, $6k as an over-the-air upgrade) which provides lane-following (requires hands on the wheel and driver attention) and driverless summon features (very low speed, "back out of / into a tight space / drive down the parking lot" stuff without a driver in the car). More to the point, two entirely different versions have existed over the years. AP1 was used on earlier vehicles, based on software and hardware from Mobileye. Tesla and Mobileye split in a contract dispute. Mobileye claims that Tesla wasn't using their hardware right. Tesla says that Mobileye found out Tesla was working on an in-house Autopilot system and demanded that they stop as a precondition for continued use of their hardware. Mobileye says they knew about Tesla's internal work but didn't feel threatened by it. Regardless, Tesla was forced to switch to their internal version, AP2, which was a step backward. AP2 is just now catching up to the features of AP1.
* FSD is Tesla's current goal, where the vehicle can drive itself without you having to keep your hands on the wheel or pay constant attention. You cannot use FSD, even if you buy it. It costs $3k on the Model 3 ($4k as an over-the-air upgrade later). The article is talking about FSD being rolled out before engineers think it's ready. To reiterate: you can only buy FSD right now; you can't use it until it's ready. Tesla apparently tried to clarify this for the author.
The author apparently nonetheless still failed to understand what that means. You Cannot Use FSD. Period. If engineers are complaining about FSD being rolled out too soon, they're complaining about Tesla selling something that drivers won't be able to use for a long time. And you know what, I fully agree with the engineers in that regard - I think it's wrong of Tesla to sell something when there's a big question as to whether they'll be able to get it working reliably enough or pass the serious regulatory barriers in its way.
But if engineers are complaining about FSD, then it's not complaints about EAP. The two are very distinct things. EAP isn't perfect, don't get me wrong - and the AP1/AP2 switch was a big setback (they still don't use all of the cameras on the vehicle). But it also pesters drivers who show signs of not paying attention to the road (e.g. not holding the wheel), enough to compensate for its imperfections (the level of pestering was significantly increased after AP1's fatal accident, in which the driver was apparently watching movies during most of his trip).
Re: (Score:2)
The author is confusing Enhanced Autopilot (EAP) with Full Self-Driving (FSD).
Every single person I talk to, who only casually observes Tesla-related news, assumes that EAP == FSD. Every friggin single one. Usually in knee-jerk comments, when I mention something about having to drive my car somewhere. (I have a Model S w/ AP v1... Love it in stop-and-go highway traffic and long highway trips, but I'm quite well aware of its capabilities and limitations.)
Re: (Score:2)
I'm a bit worried by their FSD technology. The current auto-steering/speed control needs to use map data to work, i.e. its maps tell it where sharp bends are so it can slow down, that sort of thing. For full self driving it needs to be able to work without accurate maps, e.g. there might be some roadworks or a new road layout that isn't on the map yet.
At best the car would be forced to stop, possibly stranding the passengers if they were unable to legally drive it. At worst it might not slow down in time.
Ma
Re:Not surprised (Score:5, Informative)
They implied that Tesla is currently having people drive something that its engineers deem unsafe. This is simply not the case at all. If the engineers were complaining about selling FSD, they're not complaining about anything that consumers are actually driving.
Everyone who buys FSD does so on their assessment of how likely they think it is that Tesla will actually deliver. There is zero confusion among anyone who buys it about the fact that they can't use it right away; the option always includes the "you can't use this until it's finished and legally approved" disclaimer next to it. It all comes down to how optimistic or pessimistic you are about the technology. I'm a pessimist, and will not be buying it. Some of Tesla's engineers working on it are apparently also pessimists. I'm not surprised. It's a crazy-hard task, and very different from human-supervised autosteer / EAP.
Re:Not surprised (Score:4, Informative)
Airline pilots are intelligent and highly trained individuals. That is why they are not found on every street corner, and are worth more than a dime a dozen.
Actually, this isn't true. Pilots start out their careers as instructors ("those who can, do, those who can't, teach"), and make peanuts. After that, they might get a job as a copilot for a regional jet company. Last I heard, the starting salary for one of these guys is $18k. Yep, barely above minimum wage. It takes many years for them to work up to any kind of decent salary approaching 6 figures. Then, when they hit 60 years old, they're forced to retire.
Being a pilot is for people who are independently wealthy (e.g. trust fund, or has a spouse willing to support them), or for people who love it so much they're willing to sacrifice everything just to have that job.
Re: (Score:2)
through no fault of his own
Are you referring to that idiotic youtuber? Through what leap of logic does one conclude that he wasn't at fault?
Re: (Score:3)
That's hardly the only time. There was an incident in China where Autopilot drove at full speed into a road sweeper that it apparently couldn't see.
Tesla seem to have admitted it doesn't work as originally advertised, by repeatedly increasing the amount of effort it makes to keep the driver alert.
Re: (Score:2)
Re: (Score:2)
No, he was instructed not to do what he did. And he did it anyway. Seems pretty clear to me.
Yes, two things are very clear;
1) The driver was irresponsible and legally at fault
2) Tesla autopilot was not good enough to stop the car from plowing into the truck on its own.
Re: (Score:2)
I think the "on it's own" it really the thing.
If a driver can detect 95% of hazards, and the car can detect 70% of hazards, then there is improved security, because they both would have to miss a hazard. But switching completely from the driver in charge to the car in charge decreases security.
Kinda reminds me of some people I encountered in IT back in the 90's. "Hey, I don't need to backup anymore, I have RAID now"
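Spelling out the parent's arithmetic (the 95%/70% detection rates are the parent's hypotheticals, and treating the two misses as independent is an extra assumption):

# If driver and car each independently watch for hazards, a hazard is
# missed only when BOTH miss it. Rates are the parent's hypotheticals.
driver_detect = 0.95
car_detect = 0.70

both_watching_miss = (1 - driver_detect) * (1 - car_detect)  # 0.015
car_alone_miss = 1 - car_detect                              # 0.30

print(f"driver+car miss rate: {both_watching_miss:.1%}")  # 1.5%
print(f"car alone miss rate:  {car_alone_miss:.0%}")      # 30%

So the combined system beats the driver alone (5% missed), but handing over entirely to the car is six times worse than keeping the driver engaged, under these assumptions.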
Agree. That last part, the human-factor element, is the hard part. As people feel safer they naturally take greater risks. We all do it, some more than others. It's those 'some' that do it more that we have to worry about.
Words are important (Score:3)
I don't have the exact words Musk has used, but I distinctly remember that he said that all Teslas will come equipped with the HARDWARE necessary for fully autonomous self-driving (computer power, sensors), but that the actual functionality would depend on a future software upgrade.
Now you and I, as software-related techies most of us, know that that will have to be one MASSIVELY COMPLEX and not really invented yet by any stretch of the imagination software upgrade, but technically, what he said is not fals
Re: (Score:2)
I don't have the exact words Musk has used, but I distinctly remember that he said that all Teslas will come equipped with the HARDWARE necessary for fully autonomous self-driving (computer power, sensors), but that the actual functionality would depend on a future software upgrade.
That is the new line. Elon was much bolder in the past, and the whole thing has always been intended to give the impression of full automation even when it wasn't there. You can see the confusion all over older threads discussing the Tesla.
Re: (Score:2)
I'm not expecting this upgrade any time soon. When they moved from the V1 to V2 sensor package the current auto-steering system got a lot worse, and the car's ability to read its surroundings was severely degraded too.
For example, on the V1 package the car could tell the difference between motorcycles, cars and trucks. The V2 sees motorcycles as cars and often mistakes trucks for cars as well. It also seems to detect them much later, and not see nearly as far ahead as V1. The auto-steering seems much more p
Re: (Score:2)
There have been more people than those mentioned here who left Tesla. Chris Lattner is one.
Re: (Score:3)
Chris Lattner wasn't there long enough to get started. We don't know why he backed out.
Personally I could never see why a compiler guy was being hired as head of one of the most complicated AI projects anyway. Different field.
Re: (Score:2)
Re: (Score:2)
Really, who isn't?
Re: (Score:2)
Re: (Score:2)
I can come up with a few examples that I STRONGLY suspect don't meet that standard... Mostly they drive cars around here, but some of them are in public office...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It won't matter because AGI will mark the extinction of humans.
I'm pretty bearish on AGI. To get it in our lifetimes, it'll require continued exponential increases in processing power, i.e., it requires Moore's law to continue. However, many signs currently indicate that Moore's law is dead. It's entirely possible that our civilization may be hitting a technological plateau and that real AGI is not a few or dozens of years away, but hundreds or thousands.
Re: (Score:2)
In other words we don't know.
Re: (Score:2)
If the complexity of the tax code keeps increasing then yes, AGI will mark the extinction of Americans, at least, due to how long it will take to actually compute that number.
The tax code is large and opaque to most, but it has a bit of organizational beauty to it. Most people don't understand it because they never studied it. But if you can write software, you can understand tax law.
Re:Deep neural nets will never give us full autonomy (Score:5, Insightful)
Neural nets are very specifically NOT rule-based. They are trained.
GOFAI ("good old-fashioned AI") was pretty much a phrase invented to label the stuff that IS NOT the neural net approach.
Autonomous vehicles do not need AGI. It's very much a single domain system. You don't need your autonomous car to be able to diagnose diseases for example.
Re: (Score:2)
What if there are two people on different parts of the road in front of the car, and swerving to avoid one will mean hitting the other? The car needs to be able to diagnose which of the two people has a terminal disease, in order to select to hit that one.
Re: (Score:2)
What if there are two people on different parts of the road in front of the car, and swerving to avoid one will mean hitting the other? The car needs to be able to diagnose which of the two people has a terminal disease, in order to select to hit that one.
What if the two people are healthy, but the car can quickly identify which of the two has a higher net worth, so it can hit the poorer one. No moral problems there at all.
I know it's currently an absurd result, but eventually the computer will have to decide, or decide to ignore their net worth and use other factors, or just flip a coin and hit one randomly. Every decision comes with its own set of moral issues.
Re: Deep neural nets will never give us full autonomy (Score:2)
Might be useful, actually. You pull over and the hooker gets in. Car scans for disease and automatically hits the eject button before you pay.
Re: (Score:2)
Re: (Score:2)
More specifically, they are indeed rule based, but the rules are learned, not engineered.
Re: (Score:2)
Human's aren't that great in situation's for which they don't have experience either...
Like pluralizing thing's?
Re: (Score:2)
We know how to generalize.
Hmmm... Ain't that the truth...
Re: (Score:2)
Artificial deep neural nets know how to generalize too. What's your point?
Re: (Score:2)
If they are well trained on a dataset that covers all the relevant situations, they can interpolate, but they cannot really extrapolate all that well. No known machine learning technique knows how to deal with a completely new and unexpected situation.
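A quick way to see the interpolate-vs-extrapolate gap (a minimal sketch with scikit-learn; the target function and ranges are arbitrary choices for illustration, and exact outputs will vary):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small net on y = x^2, but only on x in [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = (X ** 2).ravel()
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)

# Interpolation (inside the training range) is decent:
print(net.predict([[0.5]])[0])   # close to 0.25
# Extrapolation (outside the range) is not; the true value is 9.0:
print(net.predict([[3.0]])[0])   # typically nowhere near 9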
Re:There's always one or two voices.... (Score:5, Insightful)
anyone who's tried to execute a change or deliver an outcome will always find one or two dissenting voices in any organisation of scale.
Absolutely true. And it's equally true that it's foolish to not take those dissenting opinions very seriously (even if, after careful consideration, they don't change your plans).
In any organization, there is a strong "rah-rah" tendency, and people tend to suppress their own doubts. Nobody wants to be a wet blanket or potentially risk their career by not seeming to be a "team player". So the voices of those who point out problems need to be listened to much more carefully than the voices of those who say "everything's great".
Re: (Score:2)
Aside from that, I think Musk is guilty of misrepresenting safety numbers. He repeatedly stated cars on autopilot were safer than cars without, even stated a number, but used apples-and-oranges data to make that comparison, with no normalization of the non-Tesla data to make it comparable. I t
Re: (Score:2)
... this feels like lawyers trying to put the screws on Tesla for not shipping a product that prevents people from ending themselves through idiocy.
Why should Tesla be treated differently than any other company that must prevent idiots from killing or injuring themselves with their products? In Tesla's case, one could argue Musk was encouraging them to take risks based on his early descriptions of its capabilities.
Re: (Score:2)
In Tesla's case, one could argue Musk was encouraging them to take risks based on his early descriptions of its capabilities.
Every owner that wants to enable the autopilot functionality is required to attend an orientation in which the limitations of the system are explained in some detail, so there is literally no valid argument in that direction.
Re: (Score:2)
Ah yes, but marketing is reality to a lawyer looking for a payday. You cannot SELL something with color glossy printed materials if what you are actually getting isn't as described in said color glossy materials.
You can bet that the lawyers will have their way with Tesla should the "autopilot" thing turn out to have faults (or even limitations) that might kill you and these are not plainly disclosed in the advertisements. You can also bet that Tesla will now require signatures (in blood) that indemnify t
Re:Mr Musk (Score:4, Insightful)
Re: (Score:2)
Self-driving with zero wireless inputs, thank you very much, and add in a system-verify feature at startup to ensure no hacking. So multiple systems, rather than an all-in-one hackathon special that (take your pick of three-letter agencies) can be hacked to drive you straight off a cliff, or into a train, or onto the other side of the road towards a semi that is also accelerating because it has been hacked to kill you. So self-driving, not remote; who the fuck wants to get into a remote-controlled vehicle? The Tesla
Re: (Score:2)
Self-driving with zero wireless inputs, thank you very much, and add in a system-verify feature at startup to ensure no hacking.
Yep. I'm not getting into an AV without a glass break tool. At least that way if someone tries to use it to kidnap you, you might be able to escape. I'm not particularly worth kidnapping, but lots of people who aren't really worth kidnapping get kidnapped anyway because it's cheap and so many people have nothing to lose. AVs will make it both cheap and easy.
Re: (Score:2)
Evidence?
Re: (Score:2)
Re: (Score:2)
OK, so you have no evidence for motorcycles being missed 10 times as often as cars or pedestrians.
The first two were stories about collisions. So what? Human drivers often miss motorcycles and hit them. You've presented no evidence that Autopilot does it more often.
Re: (Score:2)
The Tesla forum discussion (3rd link) makes it clear that the Tesla software is not properly aware of an overtaking motorcyclist and is actually confused into thinking that the *car* ahead has accelerated. This causes the Tesla to accelerate in turn, when there is a very real slow car ahead. This is a known issue which, at the time of the forum thread, had not been fixed (July 2015). Since then, Tesla Motors has parted ways with Mobileye and reports are that the newer software is not as good.
Re: (Score:2)
Motorcycle riders don't, given they miss motorcycles at a rate 10 times higher than cars or pedestrians...
Evidence?
No, hyperbole!
Evidence would require research:
Number of Tesla cars on the road. Number of non-Tesla cars on the road. Number of motorcycle accidents involving a Tesla. Number of motorcycle accidents involving a car that wasn't a Tesla.
And some basic understanding of statistics:
Given the proportionally low number of Tesla vehicles compared to total vehicles you'd probably want to limit data to particular locales (i.e. continental USA), and, further, correct for 'error magnification'
Based on my observations t
Re: Mr Musk (Score:3)
Re: (Score:2)
Re: (Score:2)
Can you please focus on making your electric cars and the batteries that run them better and more affordable instead of getting sidetracked with these bullshit features that nobody wants?
Actually, I think everybody wants the stuff the pied piper is attempting to sell (I'd love to have all that stuff he keeps dreaming up); he just doesn't have it at a price anybody can really afford. I cannot afford a Tesla myself, even the stripped-down model.
Of course, Musk's issue is that a Tesla is way too expensive BECAUSE of all this whiz-bang cool stuff he keeps stuffing in them, and THAT is why Tesla will continue to struggle as a company until this kind of madness stops.
Take an example from history.
Re: (Score:2)
Of course, Musk's issue is that a Tesla is way too expensive BECAUSE of all this whiz-bang cool stuff he keeps stuffing in them, and THAT is why Tesla will continue to struggle as a company until this kind of madness stops.
That is a valid point, but the other side is that the cars are going to be very expensive regardless, simply because of the batteries, and that whiz-bang stuff is helpful in getting the target market to part with their $$$$ by further distinguishing the product.
Re: (Score:2)
Perhaps, but Musk is playing a losing hand on this, then... With oil and natural gas at bargain-basement prices and no real upside in sight, who can afford an EV anyway? The ROI on the investment just isn't there now.
For me (with a 15 min commute one way) an EV would be great, but there is zero chance I'm going to get one anytime soon. I simply cannot afford the extra expense of the purchase when, even if I had free electricity to charge it with, the cost savings on fuel wouldn't make up the differen
Re: (Score:2)
Re: (Score:2)
Nobody wants? I can't wait!
I suppose you imagine no one wanted cruise control either.
Re: (Score:2)
No.
Cruise control amounts to a throttle with a memory: you dial in a speed and it holds it until you intervene. Occasionally useful, often overused, occasionally dangerous, but it very rarely leads to deaths when it goes wrong.
Self-driving cars involve you entrusting your life to software that:
1. Should be able to determine the best route from your location to your destination, with any intermediate waypoints, and maintain a speed in compliance with local restrictions. For travel on common roads, software is getting pr
Re: (Score:2)
None of that reflects the post you were replying to, which was about the fact that people want autopilot, just as they wanted cruise control in previous decades.
And the Gran Turismo jibe - I've been driving for 30 years. I never play driving games. So duh!
Re: (Score:2)
Now you've got me curious - what exactly do you think "autopilot" means, when applied to cars and not aircraft?
Re: (Score:3)
You assume Musk's motives are about selling cars... I'm not so sure that's true.
I actually think that Musk's driving force is more about PR than running any of his business ventures the most productive way possible. I suspect that he craves the attention that comes from having that flashy idea, and the money that comes from the starry eyed investors who flock to his door to "invest" in them. I don't think he's a snake oil salesman, only that he's not opposed to throwing plausible ideas up on the wall and
Re: (Score:2)
IF the color glossy sales literature leads you to believe that the thing you are buying is capable of safely driving itself, and the company doesn't go out of its way to dispel such misconceptions, one could infer that the company either advertised falsely and/or produced a product that wasn't safe.
Change products for a second... Let's say you market a drug to treat some sickness. You clearly say it CURES the illness in your advertisements, and in most cases that's true. But in a small percentage of cases
Re: (Score:2)
That the fountain of BS that is Elon Musk allegedly put personal profit ahead of people's lives?
Maybe a wanker, and maybe risking people's lives, but I don't think he's doing it for profit. He's doing it to advance the state of the art. Still not a valid excuse, but better than profit.