
Dell Walks Back AI-First Messaging After Learning Consumers Don't Care (pcgamer.com)

Dell's CES 2026 product briefing, PC Gamer writes, stood out from the relentless AI-focused presentations that have dominated tech events for years, as the company explicitly chose to downplay its AI messaging when announcing a refreshed XPS laptop lineup, new ultraslim and entry-level Alienware laptops, Area-51 desktop refreshes and several monitors.

"One thing you'll notice is the message we delivered around our products was not AI-first," Dell head of product Kevin Terwilliger said during the presentation. "A bit of a shift from a year ago where we were all about the AI PC." The shift stems from Dell's observation that consumers simply aren't making purchasing decisions based on AI capabilities. "We're very focused on delivering upon the AI capabilities of a device -- in fact everything that we're announcing has an NPU in it -- but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger said. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."

  • no AI please (Score:3, Insightful)

    by roman_mir ( 125474 ) on Wednesday January 07, 2026 @12:05PM (#65907831) Homepage Journal

    I was thinking about changing my phone, and the first thing I wanted to know was how to avoid AI: how to disable AI, how to make sure there is no AI on my equipment. Thank you.

    • Not possible. Even the slowest processor can run AI algorithms, just a lot slower. No way to avoid it.

    • Browsers and apps can run simple inference or access remote services. This shit is going to be everywhere.

    • Re: (Score:3, Insightful)

      by shmlco ( 594907 )

      Define AI. Modern smartphones use ML tools and models everywhere, from text prediction to touch detection to voice recognition to photo processing. They're not just limited to chat bots.

    • by Sloppy ( 14984 )

      If you're trying to avoid it, then that's just a software problem, not a hardware one. It's only a hardware problem if you're trying to get more of it.

      • No, it's a hardware problem when the software companies force artificial obsolescence on us all and we have no choice but to ignore security updates or buy new hardware, which inevitably comes with bullshit AI.
        • NPUs can do things we have already been doing on the CPU for a long time, just a bit more efficiently. Noise cancellation, object recognition, basic subject analysis, text prediction, grammar checking and many other things which are genuinely useful. Yes, it can probably accelerate some GEGL-like operations too, as you would want for aesthetics in photographs. What it cannot do is handle all the generative nonsense people hate working with. If it could, NVIDIA would be cooked.
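
          To make that concrete, here's a minimal sketch of the kind of local object recognition an NPU would accelerate, assuming PyTorch/torchvision and a placeholder image file ("photo.jpg" is made up); a plain CPU runs it fine, just slower:

          import torch
          from PIL import Image
          from torchvision import models
          from torchvision.models import MobileNet_V3_Small_Weights

          weights = MobileNet_V3_Small_Weights.DEFAULT
          model = models.mobilenet_v3_small(weights=weights)  # small pretrained classifier
          model.eval()                            # inference only, no training
          preprocess = weights.transforms()       # resize/normalize exactly as the model expects

          img = Image.open("photo.jpg")           # placeholder; the image never leaves the machine
          batch = preprocess(img).unsqueeze(0)    # add a batch dimension

          with torch.no_grad():
              logits = model(batch)
          print(weights.meta["categories"][logits.argmax().item()])
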
  • by Unpopular Opinions ( 6836218 ) on Wednesday January 07, 2026 @12:25PM (#65907867)

    A tech company with some early displays of a return to common sense.

  • by Somervillain ( 4719341 ) on Wednesday January 07, 2026 @12:25PM (#65907869)
    Remember 3D TVs? Great idea in theory, but they just didn't work out. On-device AI is kinda similar. Any AI stuff you want a device to do makes more sense to do in the cloud, either because local processors can't handle it or because it benefits from frequent model updates. In general, I am a huge skeptic of LLMs, especially as they're marketed, but for most LLM tasks...nope...of all the use cases I know of, none gets a huge benefit from running on device. This is a solution looking for a problem.

    You threw a buzzword on a laptop and hoped it would make us choose you. When I see "AI" on a laptop, I just ignore it and honestly get nervous. I don't know how AI on a laptop will benefit me. You've failed to communicate any possible benefit.

    Like most people, I want a laptop to work in a straightforward manner with as few gimmicks as possible. A laptop in which I can replace components, like the RAM/disk/battery? You DEFINITELY have my attention...a laptop with AI in the title?...eh, what else you got?
    • Furthermore, why would I want to pay for hardware that can do it locally if it is available on the cloud? It's funny that so much software has gone the way of "if you aren't online you can't use it" but AI is going the way of "you need this so bad you need it to run locally".
    • Privacy? I would rather wait longer than send all my personal questions to a profiling company.
      • by Somervillain ( 4719341 ) on Wednesday January 07, 2026 @05:14PM (#65908805)

        Privacy? I would rather wait longer than send all my personal questions to a profiling company.

        But why do I need AI at all? Apple Intelligence has been VERY underwhelming. Most of what ChatGPT and others can do is very underwhelming or highly specialized. I don't know of a general purpose need....and if you have a specialized need, you probably need more than a device can offer.

        What I hear you saying is..."An LLM is going to shove its foot up my ass...which one has the smallest shoe size?"

        What I am saying is..."wait...why do I want someone's foot up my ass?"

    • by pereric ( 528017 )

      Honest question: could local text translation be a good use case for local ML models? It would be nice not having a remote translation service getting to know what text you read or write.
      That said, such things can probably be done well enough without specific NPUs.
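
      For instance, here's a minimal sketch of fully local translation, assuming the Hugging Face transformers package and a one-time download of one of the Helsinki-NLP Opus-MT models (Swedish-to-English picked arbitrarily); after the download, nothing you translate leaves the machine:

      from transformers import MarianMTModel, MarianTokenizer

      model_name = "Helsinki-NLP/opus-mt-sv-en"   # Swedish -> English
      tokenizer = MarianTokenizer.from_pretrained(model_name)
      model = MarianMTModel.from_pretrained(model_name)

      text = "Hej världen, detta översätts helt lokalt."
      inputs = tokenizer(text, return_tensors="pt")
      outputs = model.generate(**inputs)          # plain CPU works; an NPU would just be faster
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))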

      (Otherwise, I agree it sounds like the buzzword of the year, with its usefulness rather overestimated.)

    • The ads I've seen for AI tools seem to be about work avoidance, i.e. having it summarize a document you were supposed to have read before the meeting. How does that create benefit for me as an employer?

        The ads I've seen for AI tools seem to be about work avoidance, i.e. having it summarize a document you were supposed to have read before the meeting. How does that create benefit for me as an employer?

        Apple's current implementation is counter-productive and honestly dogshit. I have yet to see those things working out nicely. If it's easy to read?...why not read it yourself? OK, it's long and complex?...well....either it matters, like for work, and I need to understand it thoroughly....or it doesn't matter...in which case, why bother?

    • by Tyr07 ( 8900565 ) on Wednesday January 07, 2026 @03:50PM (#65908539)

      This is the exact issue. They have yet to prove how AI is useful to the average person, as it's not reliable, can get things wrong, has a lot of usage limits, and can't actually do anything complex on its own.

      The only thing it has been useful for is summarizing a lot of content (at large scale), i.e. collecting your personal data for marketing-based content and LLMs. None of this is making my life easier.

      AI didn't check the roads for me, knowing my morning routes, and alert me to an accident. It's not tracking my vehicle maintenance and reading my OBD sensor to tell me what might be going on with my car and make sure the mechanic isn't lying and screwing me. It's not monitoring my electricity usage, using the sensors in my house to save me money, giving me quantifiable data on wasted electricity, and offering to shut off the lights when nobody's in the room.

      Have an Alexa with room sensors? You can make it auto shut off, but the AI data portion? No, that's yummy marketing data for Amazon to see how you use things, and which products it might be able to market to you.

      All of this, in order to get more labor out of you. That's literally it.
      Of course no one wants it.

      "Hey look! Do you want this great thing we invented to get more labor out of you? You don't? I don't understand,..."

    • by AmiMoJo ( 196126 )

      I'll give you an example.

      Google's on device speech recognition isn really good, in part because it uses an LLM to help differentiate similar sounding words, and deal with you hesitating and backtracking. It feels like you can talk to it like a person, not just giving it a carefully worded command like a Star Trek computer.

      It's useful for both transcription (including adding subtitles to video and phone calls) and for stuff like entering addresses into Google Maps.
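
      If you want to play with the same idea, Google's on-device recognizer isn't something you can call directly, but a rough local equivalent is a few lines with the open-source Whisper model. A minimal sketch, assuming the openai-whisper package plus ffmpeg on the PATH (the filename is a placeholder):

      import whisper

      model = whisper.load_model("base")        # small enough for a laptop CPU
      result = model.transcribe("meeting.wav")  # placeholder file; the audio stays on the machine
      print(result["text"])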

      • I'll give you an example.

        Google's on device speech recognition isn really good, in part because it uses an LLM to help differentiate similar sounding words, and deal with you hesitating and backtracking. It feels like you can talk to it like a person, not just giving it a carefully worded command like a Star Trek computer.

        It's useful for both transcription (including adding subtitles to video and phone calls) and for stuff like entering addresses into Google Maps.

        I'll assume you meant "is" and not "isn't"...but this is a laptop, wouldn't you prefer that on a phone? Also, how preferable is it to have it on-device vs remote? I'd say that 99% of use cases in which I'd want to interact with a phone's voice functionality I have a decent signal. Don't remote services typically have more features and frequent updates?

        I see your point, generally, though...and if they really master the LLMs enough that a device model can be complete, I'd suppose that would be cool.

  • by Kobun ( 668169 ) on Wednesday January 07, 2026 @12:26PM (#65907871)
    The closest "AI" comes to being useful is through a web browser using someone else's servers. An "AI" PC does not provide any value whatsoever.
    • by crow ( 16139 )

      That's mostly true. I've run some LLMs locally: they're slow and not very powerful, but there are times when having it local is nice for privacy.

      I do expect this to change over time, with more operations being practical to run locally. I would really like to have voice input be a local task, for example.
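
      The barrier to trying it is low, for what it's worth. A minimal sketch, assuming the llama-cpp-python package and a GGUF model file you've already downloaded (the path is a placeholder):

      from llama_cpp import Llama

      # Everything below runs offline; the prompt and the answer never leave the machine.
      llm = Llama(model_path="./models/some-7b-model.gguf", n_ctx=2048)

      out = llm("Q: Why does local inference help privacy?\nA:",
                max_tokens=128, stop=["Q:"])
      print(out["choices"][0]["text"])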

      • by Tyr07 ( 8900565 )

        The other problem is they don't have a way to monetize the LLM you're running locally. It can't answer a million requests at once like their system does, but it doesn't need to. The training dataset is the money maker, the thing they used people's copyrighted and private data to build.

        They literally do not want you to be able to 'run it on your own'. Maybe use your hardware to help offload work from their servers, but actually just running it locally and not paying them? Hell no, they don't want that.

      • AI =/= LLM, silly. There are a multitude of genuinely useful AI models which are not LLMs, which have actual utility to end users, and which would be better run locally instead of having to push data up to the cloud.
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday January 07, 2026 @12:26PM (#65907873) Journal
    Obviously they aren't going to say so; but their 'AI' marketing was not so much 'confusing' as 'fucking stupid and actively irrelevant'. You bother me with a video of a parent and child watching video on a laptop on the couch and tell me that Dell means longer battery life with AI? And the one with a generic small business owner sitting in front of a Dell, which acts intelligently so you can run your business. So, what do you mean by that exactly? Auto-dimming backlight based on ambient light? Nothing in particular? Seriously? Sure, if you take a suitably expansive view there's probably a bit of DSP in there somewhere, pretending that the built in speakers suck less than they do, which you could call 'AI'; but that's really reaching.

    Nobody is going to be honest enough to say so; but they weren't 'confused' so much as you just dumped non-sequiturs in front of them and pretended you were delivering some sort of profound and delightful insight, which you were not. I realize "Dell: a new laptop's battery will probably be less fucked than your old one's is" is not an exciting slogan; but that's basically what you actually had to offer, so obviously pretending that the NPU was magic left everyone puzzled.
  • by rsilvergun ( 571051 ) on Wednesday January 07, 2026 @12:48PM (#65907931)
    We hate it intensely. The only thing it's good for is helping Donald Trump undress teenagers on Grok and taking all the jobs, electricity, water, RAM, and storage.

    It is a psychotic and destructive technology that rapaciously steals everything it can get its grubby little paws on and gives it to a group of five or six billionaires planning on being trillionaires at our expense.

    I think we've actually found a technology worse than the nuclear bomb. At least with nukes we got a brief period of peace caused by mutually assured destruction until we put lunatics in charge. We're not even going to get anything like that out of AI.
    • Finally, something we can agree on. +1
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      AI finally breaks the link between survival and labor. If machines can handle production, logistics, maintenance, and boring administrative work, humans no longer have to sell most of their waking lives just to exist. Scarcity shrinks, essentials become guaranteed, and work turns into something you choose to do because it matters or because you enjoy it. That is basically the Star Trek setup: post-scarcity economics where automation does the grind and people focus on science, art, exploration, and self-improvement.
    • by OhPlz ( 168413 )

      Their push was for PCs with NPUs built in. That would host local AI workloads, not the cloud based stuff that gets handled by the "evil" datacenters. Problem is, those workloads don't really exist on the Windows platform, therefore the NPU is underutilized. It just needs a killer app, but there isn't one right now. People are gravitating to the cloud based stuff.
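
      The plumbing for those workloads does exist, to be fair; it's the apps that are missing. As a sketch of what "using the NPU" looks like on Windows, ONNX Runtime can target it through a vendor execution provider and silently fall back to the CPU when it's absent (the model path is a placeholder, and QNN, Qualcomm's provider, is just one example):

      import onnxruntime as ort

      session = ort.InferenceSession(
          "model.onnx",                          # placeholder model file
          providers=["QNNExecutionProvider",     # Qualcomm NPU, if present
                     "CPUExecutionProvider"],    # fallback otherwise
      )
      print(session.get_providers())             # shows which provider actually loaded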

    • You should seek help for your advanced case of TDS before it becomes terminal.

  • by wakeboarder ( 2695839 ) on Wednesday January 07, 2026 @12:56PM (#65907957)

    If I am getting a device that is AI-enabled, what do I get? Does it do more for me? Not right now, as far as I can tell. I don't have any apps that require AI running on my device. With AI, the industry is putting the cart before the horse. If you buy the next-gen graphics card or double the memory, you know what the result is going to be. What if your device comes with an NPU? What capability does that bring to the device for a consumer? I'm a techie and I know what an NPU does, but I have no reason to want one in my device other than to have one.

    Now all that would change if I could add extra features and talk to my phone and have it run on the edge without having to be connected to a network. I would want an NPU if there was some cool application that required it. Right now I can't think of one.

    • by gweihir ( 88907 )

      The NPUs are a speculation on the future. Basically a guess. But for regular people they can be used to produce FOMO, so the vendors put them in. A few percent more or fewer units sold makes a big difference.

        • One difference with the AI bubble is that previous industrial bubbles were built on technologies we already had. AI is the first where we're building out the tech on bets; they're betting that the applications will come.

  • Dell just needs to make working laptops that are just screen, touch pad, keyboard, and sufficient RAM and storage. As for Windows, people just want a taskbar, start menu, web browser, and their usual apps and games. There's a reason why, over 16 years later, a significant number of people still use Windows 7 despite the security risks. All the OEMs and Microsoft need to stop making gimmicks and make Windows 12 an operating system that shuts up and lets you get on with your work in peace, or lose their trillions.
    • And quit making new laptops with that stupid fucking AI button that I can't easily remap to a more useful function, like the ctrl key it replaced.

  • Something extra stupid like Alexa on my PC or phone? No thanks. I use AI almost daily, but I can't think of a scenario where I'd want a local version that's nowhere near as capable as the cloud based tools.
  • First fix your batteries and charging systems. Every Dell laptop I've ever had to throw out has been due to batteries that mysteriously stopped charging. And were no longer available for that model as OEM parts. Meaning I had to go with a flammable Chinese aftermarket option or relegate the device to permanent desktop duty.

  • by Sloppy ( 14984 ) on Wednesday January 07, 2026 @02:03PM (#65908179) Homepage Journal

    I remember when floating point was the luxuriously optional silicon. I try to be welcoming to new things even if I don't know how/if I'll use them, because I think they don't really inflate the cost of the processors much. (Am I right? I don't actually know.)

    Long-term, I think there's widespread consensus that integrated floating point was a good idea. Even less controversial, integrated MMUs are a critically necessary part of our modern world. (It's hard to imagine that separate chips like the 68851 used to exist.) The vector stuff? Some code uses it. The cryptographic instructions? Oh hell yes! Maybe I'll get reamed for this, but I think the processor industry has a pretty good track record of making silicon that we eventually truly do light up.

    This time, it's a little harder. The applications for LLMs seem so niche. Part of me thinks they're doing this several years too soon. But that said:

    0. Neural networks have more applications than LLMs. However worthless you think LLMs are: if your computer is good at LLMs, what else might it be good at?

    1. I strongly disagree with everyone who says the hypothetical applications for this should run "in the cloud" instead of on the user's own hardware. All my experience tells me that's definitely wrong. IF this "AI" stuff really isn't a bubble (I think it probably is), then getting coprocessors widely deployed for it out there, is a very good thing. "AI" is no different from non-"AI" logic, in that whatever you're doing, from the user's point of view it should be as local as possible, and with as few external dependencies as possible. You don't need to teach me that lesson again for the 100th time, dammit. Maybe a lot of laypeople will get stuck with OpenAI's (or whoever's) services, but we will want to run it on our machines.

    2. Maybe the reason there are so few existing applications that use neural nets is that the cheap hardware to make it practical isn't out there yet! Get the silicon out there and then developers will find uses for it. Back when I was stealing my employer's electricity (and coffee) at night, I ray-traced on a network of 80386s and 80486s, and the 486's floating point performance made me a lot more excited to work on my ray-tracer. Had it run slower, I would have moved on to the next amateur time-waster sooner.

    But I can see why consumers wouldn't care a bit, right now. By the time you have real use for this hardware, I think you'll have already retired the new machine that you're buying today. But wasn't that sort of the case with vector and crypto instructions too? Different people will check it out at a different pace. It's always been like that.

    • by ceoyoyo ( 59147 )

      "AI hardware" isn't "LLMs". It's a souped up vector unit. You can run LLMs with it, but I doubt very much that's what Dell or Microsoft want you to do. Especially Microsoft. That would spoil the business model.

      They want your processor to have hardware support for image and video processing, voice recognition, text to speech and a bunch of other things that nobody has thought of yet.

      Neural networks have more applications than LLMs.

      LLMs are neural networks.

    • 1. I strongly disagree with everyone who says the hypothetical applications for this should run "in the cloud" instead of on the user's own hardware. All my experience tells me that's definitely wrong. IF this "AI" stuff really isn't a bubble (I think it probably is), then getting coprocessors widely deployed for it out there, is a very good thing. "AI" is no different from non-"AI" logic, in that whatever you're doing, from the user's point of view it should be as local as possible, and with as few external dependencies as possible. You don't need to teach me that lesson again for the 100th time, dammit. Maybe a lot of laypeople will get stuck with OpenAI's (or whoever's) services, but we will want to run it on our machines.

      On the one hand, I do run some interesting AI stuff locally. I run a local instance of Immich that leverages the GPU for facial matching, duplicate detection, and other things of that nature. I've been having a bit of an issue deploying Speakr, but that's also an interesting up-and-comer. A few other LLM-leveraging self-hosted software titles are on the rise, and I do hope to see more of them make some progress in leveraging both GPUs and NPUs as time progresses.

      The problem, however, is the industry's catch-22.

  • Nobody wants stupid fucking AI garbage on their machines
  • Prime Video only had one season of this excellent series.
  • Seriously, if I see 10 laptops, and one says 'No AI bloatware', I'm going to be interested. Give us OLED monitors, better batteries, and chips that are optimized for our actual tasks, not buzzwords.

    • +2

      I turn off all the AI bloatware and spyware (Microsoft Recall, anyone?). I would prefer it simply didn't come with it in the first place. If I want to use AI, I will install the stack I want.
  • The real story here is that Dell listens to its customers.
