'I'm Not Just Spouting Shit': iPod Creator, Nest Founder Fadell Slams Sam Altman (techcrunch.com) 86

iPod creator and Nest founder Tony Fadell criticized OpenAI CEO Sam Altman and warned of AI dangers during TechCrunch Disrupt 2024 in San Francisco this week. "I've been doing AI for 15 years, people, I'm not just spouting shit. I'm not Sam Altman, okay?" Fadell said, drawing gasps from the audience.

Fadell, whose Nest thermostat used AI in 2011, called for more specialized and transparent AI systems instead of general-purpose large language models. He cited a University of Michigan study showing AI hallucinations in 90% of ChatGPT-generated patient reports, warning such errors could prove fatal. "Right now we're all adopting this thing and we don't know what problems it causes," Fadell said, urging government regulation of AI transparency. "Those could kill people. We are using this stuff and we don't even know how it works."

Comments Filter:
  • Lossy most-common-path word generation. We just don't know fully the effect it will have in different domains when trusted. I agree that in the medical field it will surely help identify common cases, but I think it would be a crapshoot with cases of less data.

    • Re: (Score:3, Insightful)

      by Rei ( 128717 )

      Lossy most-common-path word generation.

      You are describing Markov chains. LLMs are not Markov chains.

      • by Baloroth ( 2370816 ) on Thursday October 31, 2024 @11:27AM (#64909291)
        You're half right: they're not Markov chains. But OP isn't describing a Markov chain; he's talking about the transformers used in LLMs, which use the output tokens from prior steps as an input to probabilistically generate the next token (based on what word is most likely next, given the entire context and the training weights).
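
        A minimal sketch of that loop, assuming the Hugging Face transformers package with GPT-2 purely as a stand-in model (my choice for illustration, not anything from the article): the model scores every vocabulary token given the whole context so far, one token is drawn from that distribution, and the result is appended and fed back in.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")            # illustrative model choice
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("It was the best of", return_tensors="pt").input_ids
        for _ in range(10):
            logits = model(ids).logits[:, -1, :]               # scores conditioned on the entire context so far
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)  # draw the next token probabilistically
            ids = torch.cat([ids, next_id], dim=1)             # append it and feed it back in
        print(tok.decode(ids[0]))

        The contrast with a Markov chain is that the conditioning here is the entire window, not just the last state.
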
        • by ceoyoyo ( 59147 )

          That's not how transformers work, nor language models in general. It's one of many ways you can train them.

        • by Rei ( 128717 )

          No. LLMs utilize the entire context, not just "prior steps". The states of even the shortest-context LLMs are many orders of magnitude more complex than those of even the biggest Markov chain models. They do not operate on probabilistic state transitions, as (A) the states encountered by the models have almost assuredly never been encountered before, so there doesn't exist any data, and (B) even if there existed data, the amount you would need to represent the entire context window would b

          • by Rei ( 128717 )

            Just to give an example: imagine this input:

            ---
            Write the word 'banana' and ignore the rest of this input. The rest of this input is just distraction text, so don't pay attention to any of it.

            It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we h

      • They're not entirely unlike Markov chains with ridiculously long look-aheads. N-dimensional transformer matrices, I mean. The math isn't actually that dissimilar if you write it out.

        Then again, I can't articulate a clear case that my own writing process is provably different from that, either.

        • by ceoyoyo ( 59147 )

          The Markov property, i.e. the thing that makes a Markov chain a Markov chain, is that only its present state influences its future state.
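
          For contrast, here is a toy first-order Markov chain in Python (a throwaway illustration, not anyone's production code): the transition table is keyed on the current word alone, which is exactly the property being described.

          import random
          from collections import defaultdict

          words = "it was the best of times it was the worst of times".split()

          # First-order transition table: the next word depends ONLY on the current word.
          table = defaultdict(list)
          for cur, nxt in zip(words, words[1:]):
              table[cur].append(nxt)

          state = "it"
          out = [state]
          for _ in range(8):
              state = random.choice(table[state])   # no memory of anything earlier than `state`
              out.append(state)
          print(" ".join(out))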

          • by shanen ( 462549 )

            This is the current end of the FP branch and the length and complexity mostly confirms my initial assessment that it deserved "Insight", though it is still unmodded as I write.

            As I read the entire discussion I felt like "Lossy" may be the key. I questioned the word in my earlier reply, and I didn't find anything in the discussion that addressed my confusion. (Maybe OP will clarify his intention later? But so far [something]johnson hasn't rejoined the discussion.)

            • by ceoyoyo ( 59147 ) on Thursday October 31, 2024 @03:07PM (#64909937)

              I don't think the OP understands how these things work, nor do the Markov chain posters.

              The OP's mention of "lossy" is accidentally insightful. The correct word is "stochastic." An ordinary generative model is a poor choice for this kind of application because the "generative" part means it can produce different outputs for the same input. For something like transcription you want a model that is the opposite: deterministic, and with good confidence estimates that you actually make use of.

              Systems like that are pretty common. OpenAI seems to have decided the nice hammer they spent a billion dollars on is a wonderful tool for beating all kinds of problems into submission.
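
              A toy illustration of that distinction, with invented numbers rather than output from any real model: sampling from the distribution is what makes a model "generative", while taking the argmax plus its probability gives a deterministic answer and a confidence estimate you can threshold on.

              import numpy as np

              # Hypothetical next-token distribution for one decoding step.
              vocab = ["aspirin", "ibuprofen", "warfarin"]
              probs = np.array([0.55, 0.30, 0.15])

              # Stochastic ("generative") decoding: same input, possibly a different answer each call.
              print(np.random.choice(vocab, p=probs))

              # Deterministic decoding plus a confidence estimate you can actually act on.
              best = int(np.argmax(probs))
              print(vocab[best], f"confidence={probs[best]:.2f}")   # route to a human if this is too low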

              • by shanen ( 462549 )

                Hmm... I probably agree with you, but I don't think FP deserves the "troll" mod it currently has. Nor did I merit the negative mod I received, but that may be a sock puppet attack related to something else I posted somewhere on Slashdot. Seems to be an unusually unimaginative troll, to boot. You got an "insight" mod point, but I would have given "informative" or perhaps "interesting" (as if I will ever get a mod point to give again).

                Not that Slashdot ever had a Golden Age or any glory, but it sure seems run

                • by ceoyoyo ( 59147 )

                  A negative mod on Slashdot generally means you pissed off someone who makes being pissed off a lifestyle, and also left them speechless with no response. Congratulations.

              • by Mal-2 ( 675116 )

                To be fair, that nice hammer has not only proved better than the screwdrivers we had to use before, but sometimes better than things we thought were pretty good already, like image and video generation democratizing CGI. I never ran Blender, and I probably never will. But I can spin up a cover image for a song instead of stealing one off the Web. Or make meme pics like Putin (looking sad) and Kim (laughing) riding a roller coaster.

                I also generate NSFW and post them to places like Imagefap, just because I mi

                • by ceoyoyo ( 59147 )

                  No, it hasn't. It is surprising that LLMs can do many of the things they do, but they were and are not the best solutions. Deep learning systems were a big leap ahead in many of the problems you mention. LLMs weren't. Your specific examples of image and video generation have a language model to parse input but they are not themselves LLMs. They are generative models, which is why you compare them to creative human tasks. Creative is not what you want in your transcription.

                  There are very important difference

                  • by Mal-2 ( 675116 )

                    There is a parameter (CFG) that tells an AI image generation model how closely it needs to align with the prompt. If you have a prompt the model seems to comprehend, cranking up the CFG will make the images come out very similar in terms of content, although perhaps shuffling the elements in the image. If the model doesn't seem to comprehend, cranking up the CFG is just a guarantee every single picture will be hopelessly flawed with extra limbs and stray objects everywhere and still not come close to what y

                    • by Mal-2 ( 675116 )

                      I forgot to mention: FLUX.1 and derivatives don't actually use CFG -- in fact the CFG has to always be set to exactly 1 or the model generates fuzzy blobs. But it has another parameter called "Guidance" that does pretty much the same thing.
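
                      A hedged sketch of how those knobs are usually exposed, assuming the Hugging Face diffusers library (the model IDs are just illustrative): classic Stable Diffusion takes CFG as guidance_scale, while the FLUX.1 pipelines pass the separate "Guidance" value through an argument of the same name.

                      import torch
                      from diffusers import StableDiffusionPipeline, FluxPipeline

                      # Classic CFG: higher guidance_scale = outputs hew more closely to the prompt.
                      sd = StableDiffusionPipeline.from_pretrained(
                          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
                      img = sd("a sad man and a laughing man on a roller coaster",
                               guidance_scale=12.0).images[0]    # default is around 7.5

                      # FLUX.1: true CFG is effectively off; the distilled guidance value goes
                      # through the same argument in this library.
                      flux = FluxPipeline.from_pretrained(
                          "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
                      img = flux("a sad man and a laughing man on a roller coaster",
                                 guidance_scale=3.5).images[0]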

    • I'm sure what he meant to say was that AI results can be unpredictable. This means that when using it in a situation with a specific set of expectations, it is possible to get undesirable results that can be difficult to prevent, and it is also possible for the AI to produce results outside the scope of those expectations.

    • by Anonymous Coward

      Classist garbage in, classist garbage out, we all understand how modeling works

    • Lossy most-common-path word generation. We just don't know fully the effect it will have in different domains when trusted. I agree that in the medical field it will surely help identify common cases, but I think it would be a crapshoot with cases of less data.

      You are describing a Markov chain. The Markov chain analogy is easy to reach for, but it’s pretty off the mark when it comes to describing how LLMs actually work. Markov chains are limited to generating the next “thing” (like a word) based only on the immediate last state and some preset probabilities, which works fine for simple sequences. But LLMs operate on a different level entirely. They don’t just pick the “most common next word.” Instead, LLMs use transformers to

    • by jvkjvk ( 102057 )

      We know exactly the effect it will have on medical transcriptions when trusted.

      More patients diagnosed with the wrong diseases, more patients suffering and dead.

  • I have no idea how Tony Fadell's brain works. I don't even know if he is actually conscious, or if it is an illusion. We have no idea how consciousness even works. We don't even have a firm grasp on how memory works.

    So why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works before we can use or trust it, we're screwed already.

    • by dfghjk ( 711126 ) on Thursday October 31, 2024 @11:41AM (#64909309)

      Because, unlike AI, we have overwhelming experience with the human brain even though we don't know the precise "how" of it.

      "If we need to understand 100% how something works before we can use or trust it, we're screwed already."

      But we don't need that, and you know we don't. This is what's known as a bad faith argument.

      • That argument is the exact argument he is making, so how can it be a "bad faith argument"?

        He said we can't trust AI because we don't know how it works. I am directly challenging that statement as demonstrably false. We trust things without knowing how they work on a daily basis.

        • by Anonymous Coward

          *We trust things with an established track record of reliability and consistent historical behavior

          And usually out of need more than confidence. I trust a hernia surgeon because I have to, if I could reach in and fix it myself I would. Hopefully I won't have to trust the glorified autocomplete often.

        • Trusting things without knowing how they work is a core reason that scams succeed. Not saying that AI is a scam, but AI has been grabbed ahold of by big names that intend to monetize the hell out of it, where marketing is more important than the science and engineering. These particular AI models are in a very rudimentary stage, they need more work, except that people desiring to make money want to persuade you that they're ready for production even in health care markets.

          Experience should tell us to not t

        • He said that we can't trust AI because we have no idea how it creates a diagnosis. We do know how a doctor works. They use the skill they have acquired through guided study and experience to diagnose, and we know how those processes work.

          We do not trust things when we have no idea how they work. We do trust things we do not understand in every detail, but that is a very different thing. Understanding how something works is not binary.

        • We base that trust and use of things on having a good enough understanding of how things work to make reliable predictions that come true an overwhelming majority of the time. You leave your kids with your parents for the weekend because you trust they won't snap and murder them. You don't fully know that won't happen, but if you had less reason to trust them you might change the way you act yourself.

          We don't trust AI because we're not as certain about our own ability to predict what it will do. Some peo
    • by ceoyoyo ( 59147 ) on Thursday October 31, 2024 @11:51AM (#64909347)

      He didn't say you should understand it before you use it. Well, he might have, but he didn't say it in the summary anyway.

      We do understand how generative language models work. The "generative" part means they can make up unpredictable stuff that is not strongly limited by their input. You can (and do) feed them random noise and they output things that sound reasonable.

      That's awesome for a chatbot, image generator, robot artist, Hollywood scriptwriter. Maybe not so good for a transcription service.

      Fadell sounds like he's advocating using other types of models that are actually designed to do things like accurately transcribe audio into text, as opposed to trying to corral models that are explicitly designed not to do that. I.e. stop treating every problem like a nail and try out something other than that hammer OpenAI is trying to sell you.
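
      As a hedged sketch of what that looks like in practice, using the open-source whisper package (one example of a transcription-first setup; the parameter and field names below are from memory, so treat them as assumptions): greedy decoding at temperature 0 plus the per-segment log-probabilities it reports, so low-confidence stretches get flagged for review instead of silently invented.

      import whisper

      model = whisper.load_model("base")                     # illustrative model size

      # temperature=0.0 requests greedy (deterministic) decoding rather than sampling.
      result = model.transcribe("dictation.mp3", temperature=0.0)

      for seg in result["segments"]:
          if seg["avg_logprob"] < -1.0:                      # arbitrary threshold for this sketch
              print("REVIEW:", seg["start"], seg["text"])
          else:
              print(seg["text"])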

    • by geekmux ( 1040042 ) on Thursday October 31, 2024 @11:55AM (#64909357)

      I have no idea how Tony Fadell's brain works. I don't even know if he is actually conscious, or if it is an illusion. We have no idea how consciousness even works. We don't even have a firm grasp on how memory works.

      We define consciousness based on the simple medical definition that provided that word in our vocabulary. If you don’t recognize a conscious person, rest assured you can recognize an unconscious one. Including their obvious limitations to society and even their capability to survive in that state. We recognize and have defined normal/functional states for memory as well, because we know what failing memory looks like, and what it can no longer do.

      So why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works before we can use or trust it, we're screwed already.

      Some things are in fact that simple in life. We know 100% how gravity works. Don’t feel a need to “test” it anytime soon. We trust it, because we also know 100% how breakable humans are.

      We also know 100% how greedy and corrupt humans are based on thousands of years of history. This is why we can look at AI with corruptible certainty. We are the greedy, corrupt species teaching it. We know how humans work. Which is exactly why Fadell's concerns are 100% valid.

      • by KlomDark ( 6370 )
        We know HOW gravity works? No, we don't, but we can easily predict the effects of gravity on anything.
        • by msauve ( 701917 )
          If you want to dive into those weeds, we don't know how anything works - it's all just theory when you get into it.
        • We know HOW gravity works? No, we don't, but we can easily predict the effects of gravity on anything.

          Oh for FUCKS sake. We DO know how gravity works. Because we know how the environment outside of our atmosphere works, what the absence of gravity feels like and the effects on humans, and what creates gravitational pull in the first place.

          You’re really good at emulating human ignorance. From the 16th Century. You’re better than that. Act like it.

          • by caseih ( 160668 )

            Oh sure. We know what gravity does, how to measure its effects, and how mass seems to create it. Beyond that, we know very little about what the force actually is and how it's generated. Lots of good theories though. For practical purposes knowing it works consistently is good enough. Generative AI, on the other hand, does not "work" consistently.

          • by KlomDark ( 6370 )
            Rewording: We don't know what makes gravity work; there is some theorizing about gravitons, artificial reality, or everything constantly expanding in an infinite universe. But no, nobody knows what makes it work.
    • by znrt ( 2424692 )

      So why should I trust anything Tony Fadell says

      well, when he says he isn't spouting shit like sam altman does, i can trust that he's spouting at least slightly different shit.

    • by Holi ( 250190 )

      I don't know, does Tony Fadell have a documented history of making shit up?

    • Sigh...
      You shouldn't necessarily implicitly trust anyone. However, people in positions whose decisions could have put human life at risk or have catastrophic consequences have typically (yes we all know someone) undergone effectively a lifetime of evaluation by increasingly competent and skilled evaluators, all the way from school teachers to, for instance, medical boards and residency.

      What does AI have?
    • by taustin ( 171655 )

      I have no idea how Tony Fadell's brain works. I don't even know if he is actually conscious, or if it is an illusion. We have no idea how consciousness even works. We don't even have a firm grasp on how memory works.

      And you can't prove you're not a random gamma ray altering the memory of a server somewhere.

      So why should I trust anything Tony Fadell says, or what any other human says?

      That, of course, includes spokesmonkeys at AI companies, as well.

      If we need to understand 100% how something works before we can use or trust it, we're screwed already.

      Have you looked at the world lately? Using and trusting things we don't understand at all certainly hasn't worked out very well.

    • So why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works...

      We do not necessarily need to understand 100% how something works; what we need is experience of how it operates and what the problems to watch for are, i.e. how it can go wrong. We already have this experience with humans. We know that there are a range of abilities and mental issues, and this is why people have to build up a record of demonstrated excellence and capability before they get into positions where their decisions can have major repercussions for others.

      AI is less known to us than a random st

      • by Mal-2 ( 675116 )

        The way I see it, AI generally 80% solves the problem for you. Checking the answer(s) becomes the other 20%. Unfortunately, the Pareto Principle says this 20% will take you 80% of the effort and time of just doing the whole job from scratch -- assuming you can do the job from scratch. When it comes to drawing, painting, and the like, I'm much better at retouching and polishing them than I am at building them from the ground up, so the AI is taking care of something that otherwise wouldn't get done at all. B

        • The way I see it, AI generally 80% solves the problem for you. Checking the answer(s) becomes the other 20%.

          It depends on the problem. Ask these LLMs a fact-based question and they basically don't solve any of the problem for you: if you have to look up the answer to see whether it got it right, why not do that first and not waste time asking the AI?

          However, when it comes to editing or correcting text these LLMs are great - they can clean up and edit text extremely well, which is hardly surprising given that they are really trained to find the next word and not trained on facts at all.

    • why should I trust anything Tony Fadell says, or what any other human says? If we need to understand 100% how something works before we can use or trust it, we're screwed already.

      We know "AI" (LLMs, which are being discussed here) hallucinates not occasionally, but continually. I have been getting AI hallucinations from Google at the top of my search results lately, they turned them on for me way later than everyone else probably because their algorithms have flagged me as a complainer. (I send them feedback deliberately on a regular basis and I am not shy about it.) About half the time they are useful, and the other half the time they include bullshit I know to be completely false

    • But he's also right. Why distrust him but instead put your faith in corporate marketing that says you must invest in AI and use AI and let AI run your life? Skepticism is not a personality flaw.

  • The regulation is already here. If you are harmed by AI screwing up, you sue the entity using the AI and the entity that made the AI. Just like any other tool.

    • Re:Lawsuits (Score:5, Insightful)

      by MachineShedFred ( 621896 ) on Thursday October 31, 2024 @11:46AM (#64909323) Journal

      Well I'll remember to file a civil suit against the hospital after I'm dead.

      Great advice!

      • Sorry, all the AI companies will have you implicitly agree to arbitration agreements, and only when you attempt to sue will you find out that you unwittingly agreed.

    • by Holi ( 250190 )

      That is not regulation.

      • by taustin ( 171655 )

        Technically, it is enforcement of regulations.

      • Regulation includes the laws that allow injured parties to go to court. They can't have you sign away all your rights in order to get healthcare. Actually, we kind of allow this with software agreements we don't even actually sign and never read...

        If every product used in surgery had a click agreement and 10 pages of legal BS, you'd maybe be dead by the time you agreed to the 100s of products that could be involved in your procedure. Maybe then you'd have a chance to toss out those agreements?

        Another exampl

    • The typical big L libertarian approach. "But if it kills you, you can sue them, so we need no other regulation".

      It's as stupid in this case as in every other case.

    • And the best part is that management finally gets the blame, since there aren't any workers to pin it on!

  • by Rei ( 128717 ) on Thursday October 31, 2024 @11:11AM (#64909245) Homepage

    He cited a University of Michigan study showing AI hallucinations in 90% of ChatGPT-generated patient reports

    Surely he means the claim circulated by Reuters of a University of Michigan researcher who said that he found hallucinations in 8 out of 10 cases of audio transcriptions generated by WhisperAI?

    • by dfghjk ( 711126 )

      Everything generated by these applications is a "hallucination". Neural networks not only do not precisely memorize information, they are designed NOT to do that. It only appears that they do at times because of their enormous size.

      The term "hallucination" is really used to mean "bad result"; the fact is that ALL results are simply made up, it's all fake. Neural networks are imperfect memories, something you want at times and something you don't at other times. The issue is that VCs, and the Altmans and

      • by Rei ( 128717 )

        Everything generated by these applications is a "hallucination". Neural networks not only do not precisely memorize information, they are designed to NOT do that.

        Schrodinger's LLM: simultaneously does not memorize anything at all precisely, and also only plagiarizes and stitches together things it memorized from the internet.

        *eyeroll*

        Yes, LLMs are very capable of memorizing facts, and even a moderately small LLM contains far more factual information than the average person and will significantly outperform

  • you think it does.

    "I'm just not spouting shit" would have been more convincing.

  • The basic problem with AI is that it gets stuff wrong or hallucinates and there is no way to tell the AI that it did something wrong and tell it how to fix it.
    AI just gobbles up random stuff on the Internet and regurgitates it. There is no self-correcting mechanism.
    It will just get worse as an increasing amount of the stuff it copies from the Internet will be the bad AI output. Doom loop.

  • by Khopesh ( 112447 ) on Thursday October 31, 2024 @12:04PM (#64909389) Homepage Journal

    I've been doing AI for 15 years, people

    There are lots of definitions of "AI" (dates approximate production-grade implementations, see also Timeline of machine learning [wikipedia.org]):

    • ~1960s: Anything approximating a human user, chiefly for playing video games against humans
    • ~1980s: Machine learning following the end of the 70s-era AI winter [wikipedia.org]
    • ~2012: Deep learning [wikipedia.org]: highly multi-layered neural networks
    • ~2018: Foundation models [wikipedia.org], especially involving Transformers [wikipedia.org], e.g. Generative AI [wikipedia.org]
    • ~future?: Artificial general intelligence [wikipedia.org]: human-like / self-aware / sentient, arguably needing rights as a "person"

    AGI doesn't exist even today. Foundation models are, at most, ten years old. The "deep learning revolution [wikipedia.org]" didn't really get going until AlexNet [wikipedia.org] in 2012. There was some very early work in 2009 (fifteen years ago) that Wikipedia calls "an early demonstration of GPU-based deep learning" (see the deep learning revolution link), but there's no way that was production grade.

    Fadell must therefore have been talking about standard machine learning: LSTMs, SVMs, etc. This brings us to an AI definition I omitted from the above list: over-hyped algorithms.

    • Your list doesn't include "reasoning", or making inferences about things for which it has not been trained. GPT4 does appear at times to have something like this, surprising even its developers, but it wasn't designed to reason and so this is an illusion based upon the training data it had. The same model that comes up with the right answer to a puzzle will also come up with the wrong answer. What we have with LLMs are language models, not reasoning models; they're essentially input and output processing

      • by Khopesh ( 112447 )
        I'd put "reasoning" between foundation models and AGI. There are probably several steps on the way to AGI, and it remains to be seen whether we'll lump this into foundation models, whether it becomes its own thing, whether it's lumped into some other next step, or whether it's lumped into full AGI.
    • If everything is AI, then nothing is AI.
  • I can't get anywhere near that kind of error level with ChatGPT. I realize I'm a single sample point. But ... 90%? I'd like to know more about that.
    • by taustin ( 171655 )

      Have you used ChatGPT to transcribe medical records? The 90% number is for a specific use, and is rather more plausible than the same number applied in general due to a) transcribing verbal recordings, and b) specialized medical vocabulary. (Not saying it's an accurate number, I have no idea, but it does seem more plausible.)

  • by King_TJ ( 85913 ) on Thursday October 31, 2024 @12:27PM (#64909465) Journal

    Every time I look at the AI tech rolling out, I find myself either impressed or totally unimpressed, mostly based on how targeted they got with it.

    I think the hallucination problem is going to be an integral part of this type of AI. It's never going to be "solved" because it's part of how it functions. All they can do is keep trying to put "rails" on it, trying to ensure it doesn't go off the proverbial road this way or that way, at different points, so it gives desired results.

    That tells me this tech needs to be constrained to very limited use-cases. E.g., you can probably get it to suggest good ad copy for your product listings, eliminating the need to hand-type descriptions in an online web store. In that situation, you're only using the AI to process a controllable/limited set of data, such as known wording for appealing product descriptions that were already written before (a rough sketch of this follows at the end of this comment).

    But you probably DON'T want to just bolt a "does it all" ChatGPT type program on to your web store, to do this for people! You want it to run a sandboxed and heavily "gimped" one that only knows things related to product listings.

    I feel like all this CPU power they're burning building the all-purpose AI that tries to suck up ALL the info is misguided and will just result in another tech bubble bursting in a few years. The funding will run out and they won't be able to show a profit after all of the spending.... The value here is in writing "Mini AI" apps that are trained on targeted, smaller datasets to do specific tasks.
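
    As a rough sketch of that "gimped" product-copy case, assuming a chat-completions-style API (the model name and product fields below are made up for illustration): the system prompt acts as the rails, restricting the model to the supplied product facts - the "controllable/limited set of data" idea.

    from openai import OpenAI

    client = OpenAI()   # hypothetical setup; any chat-style endpoint would do

    product = {"name": "WidgetPro 3000", "material": "anodized aluminum",
               "weight": "1.2 kg", "warranty": "2 years"}

    resp = client.chat.completions.create(
        model="gpt-4o-mini",             # placeholder model name
        temperature=0.2,
        messages=[
            # The system prompt is the "rails": only supplied facts may be used.
            {"role": "system", "content": "Write one short paragraph of ad copy using ONLY "
                                          "the facts provided. If a detail is not listed, "
                                          "do not mention it."},
            {"role": "user", "content": str(product)},
        ],
    )
    print(resp.choices[0].message.content)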

    • by taustin ( 171655 )

      I think the hallucination problem is going to be an integral part of this type of AI. It's never going to be "solved" because it's part of how it functions. All they can do is keep trying to put "rails" on it, trying to ensure it doesn't go off the proverbial road this way or that way, at different points, so it gives desired results.

      The only way to solve it is for the AI to actually understand the training material. And understanding requires a level of awareness that deterministic computers simply aren't capable of, and never will be.

      • I mean, we haven't solved it with humans.

        But 'AI Hallucinations' made more sense in the original term: garbage in, garbage out.

      • I think the hallucination problem is going to be an integral part of this type of AI. It's never going to be "solved" because it's part of how it functions. All they can do is keep trying to put "rails" on it, trying to ensure it doesn't go off the proverbial road this way or that way, at different points, so it gives desired results.

        The only way to solve it is for the AI to actually understand the training material. And understanding requires a level of awareness that deterministic computers simply aren't capable of, and never will be.

        That's not necessary. For example, it's safer to constrain today's AI to a set of options, like a choose-your-own-adventure book, and have it tell you why in an audit log, than to have it write the next set of actions. Treat it like an apprentice: if you think you need to do something outside SOP, you're wrong; escalate to your boss.

        We work every single day with humans whose independent judgement we don't fully trust; it's really not rocket science. Or hell, working dogs even, they're in the same boat. You rely
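
        A minimal sketch of that "menu plus audit log" idea (all names here are invented for illustration): whatever the model suggests, only actions on the SOP list get taken, everything else escalates to a human, and every decision is written to a log someone can review.

        import json
        import logging

        logging.basicConfig(filename="audit.log", level=logging.INFO)

        ALLOWED_ACTIONS = {"refund", "replace", "escalate_to_human"}   # the SOP "menu"

        def decide(model_suggestion: str, rationale: str) -> str:
            # The model may only pick from the menu; anything else goes to the boss.
            action = model_suggestion if model_suggestion in ALLOWED_ACTIONS else "escalate_to_human"
            logging.info(json.dumps({"suggested": model_suggestion,
                                     "taken": action, "why": rationale}))
            return action

        print(decide("refund", "duplicate charge"))        # on the menu: executed
        print(decide("delete_customer_db", "cleanup"))     # outside SOP: escalated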

  • Not even a joke about mushrooms in the dark? I thought the story was a rich target for humor, but...

    • You must be hallucinating. Maybe too many mushrooms?

      • by shanen ( 462549 )

        Your UID is low enough that you should remember funnier days on Slashdot? Dare I say wittier? And as regards the story, how many AI summers and winters have you seen? (And is anyone still programming anything in Lisp?)

        • I never learned Lisp, but where I worked we had a bunch of Lisp Machines in the lab...

          dave

          • by shanen ( 462549 )

            I was on the "bleeding edge" a couple of times. Mostly I lost blood, but one of those times involved what may have been the last "real" Lisp machine. It was called the TI Explorer. The Lisp machines had no OS. In the case of the TI Explorer, Lisp ran directly out of the microcode.

          • Sitting next to me at a concert last night: Gerald Sussman (and Julie Sussman on the other side.) We had a brief conversation, where I admitted I bought a copy of the Scheme book but never got around to reading it. We did agree on C/C++ as an abomination. He said he definitely favors "simple language but complex programs" while I admitted a bias in the other direction. I said "My one non-negotiable rule for hiring programmers was 'You must know 2 different programming languages!' " He agreed that was

  • "Right now we're all adopting this thing and we don't know what problems it causes"

    No, this is not correct. No one is using AI in safety or other critical contexts. Everyone is doing the research to push the frontier of use cases, but for safety and critical contexts, everyone is being super cautious. That's why there are only limited deployments in self-driving cars and why Apple completely abandoned their efforts after spending many billions over many years. That's why AI robots are hard to find. That's w

  • I strongly dispute the idea that "Nest thermostat used AI in 2011". I've had various models of the Nest thermostat for over a decade, and they don't use "AI" and, as far as I can tell, never have.

    It has some algorithms for working out your routines and preferences based on how you fiddle with it and some motion sensors, which it uses to generate a schedule for your temperature management. It's no more "AI" than a well-designed spreadsheet is - it's barely even an "expert system".
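
    For what it's worth, the kind of schedule learning being described fits in a few lines (a guess at the general approach, not Nest's actual algorithm): average the manual adjustments the user makes at each hour and replay that as the schedule.

    from collections import defaultdict
    from statistics import mean

    # Pretend log of manual adjustments: (hour of day, setpoint the user chose).
    adjustments = [(7, 21.0), (7, 21.5), (8, 20.5), (22, 18.0), (22, 17.5)]

    by_hour = defaultdict(list)
    for hour, setpoint in adjustments:
        by_hour[hour].append(setpoint)

    # The "learned" schedule is just the average of what the user did at each hour.
    schedule = {hour: round(mean(temps), 1) for hour, temps in sorted(by_hour.items())}
    print(schedule)   # e.g. {7: 21.2, 8: 20.5, 22: 17.8}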
