Dell Walks Back AI-First Messaging After Learning Consumers Don't Care (pcgamer.com) 50
Dell's CES 2026 product briefing, PC Gamer writes, stood out from the relentless AI-focused presentations that have dominated tech events for years, as the company explicitly chose to downplay its AI messaging when announcing a refreshed XPS laptop lineup, new ultraslim and entry-level Alienware laptops, Area-51 desktop refreshes and several monitors.
"One thing you'll notice is the message we delivered around our products was not AI-first," Dell head of product Kevin Terwilliger said during the presentation. "A bit of a shift from a year ago where we were all about the AI PC." The shift stems from Dell's observation that consumers simply aren't making purchasing decisions based on AI capabilities. "We're very focused on delivering upon the AI capabilities of a device -- in fact everything that we're announcing has an NPU in it -- but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger said. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."
"One thing you'll notice is the message we delivered around our products was not AI-first," Dell head of product Kevin Terwilliger said during the presentation. "A bit of a shift from a year ago where we were all about the AI PC." The shift stems from Dell's observation that consumers simply aren't making purchasing decisions based on AI capabilities. "We're very focused on delivering upon the AI capabilities of a device -- in fact everything that we're announcing has an NPU in it -- but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger said. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."
no AI please (Score:3, Insightful)
I was thinking about changing my phone, and the first thing I wanted to know was how to avoid AI, how to disable AI, how to make sure there is no AI on my equipment. Thank you.
Re: (Score:2)
Not possible. Even the slowest processor can run AI algorithms, just a lot slower. No way to avoid it.
Re: (Score:3)
Browsers and apps can run simple inference or access remote services. This shit is going to be everywhere.
Re: (Score:3, Insightful)
Define AI. Modern smartphones use ML tools and models everywhere, from text prediction to touch detection to voice recognition to photo processing. They're not just limited to chat bots.
Re: (Score:3)
Spam filtering, translation, battery life indicators.
Re: (Score:3)
If you're trying to avoid it, then that's just a software problem, not a hardware one. It's only a hardware problem if you're trying to get more of it.
Re: no AI please (Score:2)
GenAI not included (Score:2)
Finally (Score:3)
A tech company with some early displays of a return to common sense.
AI PCs are the 3D TVs of today (Score:4, Insightful)
You threw a buzzword on a laptop and hoped it would make us choose you. When I see "AI" on a laptop, I just ignore it and honestly get nervous. I don't know how AI on a laptop will benefit me. You've failed to communicate any possible benefit.
Like most, I want a laptop to work in a straightforward manner with as few gimmicks as possible. Laptop in which I can replace components, like the RAM/Disk/Battery? You DEFINITELY have my attention...laptop with AI in the title?...eh, what else you got?
Re: (Score:2)
Re: AI PCs are the 3D TVs of today (Score:2)
Re: (Score:2)
Re: (Score:2)
smallest shoe when foot is shoved up your ass?? (Score:4, Insightful)
Privacy? I would rather wait longer than send all my personal questions to a profiling company.
But why do I need AI at all? Apple Intelligence has been VERY underwhelming. Most of what ChatGPT and others can do is very underwhelming or highly specialized. I don't know of a general purpose need....and if you have a specialized need, you probably need more than a device can offer.
What I hear you saying is..."An LLM is going to shove its foot up my ass...which one has the smallest shoe size?"
What I am saying is..."wait...why do I want someone's foot up my ass?"
Re: (Score:2)
Honest question: Could local text translation be a good use case for local ML models? Nice not having a remote translation service getting to know what text you read or write.
That said, such things can probably be done well enough without a specific NPU.
(otherwise, agreed it sounds like the buzzword of the year, with its usefulness rather overestimated)
Re: (Score:2)
iOS is using on device machine learning models for text translation and dictation today.
Re: (Score:2)
Firefox too it seems, even if I haven't tried it.
Re: AI PCs are the 3D TVs of today (Score:2)
The ads I've seen for AI tools seem to be around work avoidance, i.e. having it summarize a document you were supposed to have read before the meeting. How does that create benefit for me as an employer?
Apple Intelligence is pretty poor at it (Score:3)
The ads I've seen for AI tools seem to be around work avoidance, i.e. having it summarize a document you were supposed to have read before the meeting. How does that create benefit for me as an employer?
Apple's current implementation is counter-productive and honestly dogshit. I have yet to see those things work out nicely. If it's easy to read, why not read it yourself? OK, it's long and complex? Well, either it matters, like for work, and I need to understand it thoroughly...or it doesn't matter, in which case why bother?
Re:AI PCs are the 3D TVs of today (Score:5, Insightful)
This is the exact issue. They have yet to prove how AI is useful to the average person, as it's not reliable, can get things wrong, has a lot of usage limits, and can't actually do anything complex on its own.
The only thing it has been useful for is summarizing a lot of content (at large scale), i.e. collecting your personal data for marketing-based content and LLMs. None of this is making my life easier.
AI didn't check the roads for me, knowing my morning routes, and alert me to an accident. It's not tracking my vehicle maintenance and reading my OBD sensor to tell me what might be going on with my car, and making sure the mechanic isn't lying and screwing me. It's not monitoring my electricity usage, using the sensors in my house to notice I'm not in the room, giving me quantifiable data on wasted electricity, and offering to shut off lights when people aren't in the room.
Have an Alexa with room sensors? You can make it auto shut off, but the AI data portion? No, that's yummy marketing data for Amazon to see how you use things, and which products it might be able to market for to you.
All of this, in order to get more labor out of you. That's literally it.
Of course no one wants it.
"Hey look! Do you want this great thing we invented to get more labor out of you? You don't? I don't understand,..."
Re: (Score:2)
I'll give you an example.
Google's on device speech recognition isn really good, in part because it uses an LLM to help differentiate similar sounding words, and deal with you hesitating and backtracking. It feels like you can talk to it like a person, not just giving it a carefully worded command like a Star Trek computer.
It's useful for both transcription (including adding subtitles to video and phone calls) and for stuff like entering addresses into Google Maps.
Sounds like a phone feature + LLM vs on-device? (Score:2)
I'll give you an example.
Google's on device speech recognition isn really good, in part because it uses an LLM to help differentiate similar sounding words, and deal with you hesitating and backtracking. It feels like you can talk to it like a person, not just giving it a carefully worded command like a Star Trek computer.
It's useful for both transcription (including adding subtitles to video and phone calls) and for stuff like entering addresses into Google Maps.
I'll assume you meant "is" and not "isn't"...but this is a laptop, wouldn't you prefer that on a phone? Also, how preferable is it to have it on-device vs remote? I'd say that 99% of use cases in which I'd want to interact with a phone's voice functionality I have a decent signal. Don't remote services typically have more features and frequent updates?
I see your point, generally, though...and if they really master the LLMs enough that a device model can be complete, I'd suppose that would be cool. Ho
No shit Sherlock (Score:3)
Re: (Score:2)
That's mostly true. I've run some LLMs locally: they're slow and not very powerful, but there are times when having it local is nice for privacy.
I do expect this to change over time, with more operations being practical to run locally. I would really like to have voice input be a local task, for example.
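For what it's worth, "running locally" isn't anything exotic. Here's a toy sketch of the single decoding step every local LLM runner repeats on-device (the vocabulary and logit values are invented for illustration, not taken from any real model); the privacy argument is simply that with real weights, none of this math ever leaves your machine:

```python
import numpy as np

# Toy sketch of one decoding step, as a local model runner would do it:
# turn raw scores (logits) into probabilities, then pick the top token.
# The vocabulary and logits are made up for illustration.
vocab = ["yes", "no", "maybe"]
logits = np.array([2.0, 0.5, 1.0])

# Softmax: exponentiate and normalize so the scores sum to 1.
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy decoding: take the most probable token. With real weights,
# the whole loop stays on your own hardware.
token = vocab[int(np.argmax(probs))]
print(token)  # "yes"
```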
Re: (Score:3)
The other problem is they don't have a way to monetize the LLM you're running locally. It can't answer a million requests at once like their system does, but it doesn't need to. The training dataset is the money maker, the thing they used people's copyrighted and private data to build.
They literally do not want you to be able to 'run it on your own'. Maybe use your hardware to help offload work on their servers, but actually just running it locally and not paying them? Hell no they don't want that. When you
Re: No shit Sherlock (Score:3)
What they don't admit... (Score:3)
Nobody is going to be honest enough to do so; but they weren't 'confused' so much as you just dumped non-sequiturs in front of them and pretended that you were delivering some sort of profound and delightful insight, which you were not. I realize "Dell: a new laptop's battery will probably be less fucked than your old one's" is not an exciting slogan; but that's basically what you actually had to offer, so obviously pretending that the NPU was magic left everyone puzzled.
It's not that we don't care (Score:5, Interesting)
It is a psychotic and destructive technology that rapaciously steals everything it can get its grubby little paws on and gives it to a group of five or six billionaires planning on being trillionaires at our expense.
I think we've actually found a technology worse than the nuclear bomb. At least with the nukes we got a brief period of peace caused by mutually assured destruction, until we put lunatics in charge. We're not even going to get anything like that out of AI.
Re: (Score:2)
Re: (Score:2, Interesting)
Re: (Score:2)
Their push was for PCs with NPUs built in. That would host local AI workloads, not the cloud based stuff that gets handled by the "evil" datacenters. Problem is, those workloads don't really exist on the Windows platform, therefore the NPU is underutilized. It just needs a killer app, but there isn't one right now. People are gravitating to the cloud based stuff.
Re: (Score:1)
You should seek help for your advanced case of TDS before it becomes terminal.
What does AI get you? (Score:3)
If I am getting a device that is AI enabled, what do I get? Does it do more for me? Not right now, as far as I can tell. I don't have any apps that require AI running on my device. With AI, the industry is putting the cart before the horse. If you buy the next-gen graphics card or double the memory, you know what the result is going to be. What if your device comes with an NPU? What capability does that bring to the device for a consumer? I'm a techie and I know what an NPU does, but I have no reason to want one in my device other than to have one.
Now all that would change if I could add extra features and talk to my phone and have it run on the edge without having to be connected to a network. I would want an NPU if there was some cool application that required it. Right now I can't think of one.
Re: (Score:2)
The NPUs are a speculation on the future. Basically a guess. But for regular people it can be used to produce FOMO, so the vendors put them in. A few percent more or less sold make a big difference.
Re: (Score:2)
One difference with the AI bubble is that previous industrial bubbles were built on technologies we already had. AI is the first one where we are building out the tech on bets: they are betting that the applications will come.
The new Windows 8 (Score:2)
Re: (Score:1)
And quit making new laptops with that stupid fucking AI button that I can't easily remap to a more useful function, like the ctrl key it replaced.
I'm certainly not sold on any benefit (Score:2)
Dell ... (Score:2)
First fix your batteries and charging systems. Every Dell laptop I've ever had to throw out has been due to batteries that mysteriously stopped charging, and were no longer available for that model as OEM parts. Meaning I had to go with a flammable Chinese aftermarket option or relegate the device to permanent desktop duty.
The integrated coprocessor du jour (Score:3)
I remember when floating point was the luxuriously optional silicon. I try to be welcoming to new things even if I don't know how/if I'll use them, because I think they don't really inflate the cost of the processors much. (Am I right? I don't actually know.)
Long-term, I think there's widespread consensus that integrated floating point was a good idea. Even less controversial, integrated MMUs are a critically necessary part of our modern world. (It's hard to imagine that separate chips like the 68851 used to exist.) The vector stuff? Some code uses it. The cryptographic instructions? Oh hell yes! Maybe I'll get reamed for this, but I think the processor industry has a pretty good track record of making silicon that we eventually truly do light up.
This time, it's a little harder. The applications for LLMs seem so niche. Part of me thinks they're doing this several years too soon. But that said:
0. Neural networks have more applications than LLMs. However worthless you think LLMs are: if your computer is good at LLMs, what else might it be good at?
1. I strongly disagree with everyone who says the hypothetical applications for this should run "in the cloud" instead of on the user's own hardware. All my experience tells me that's definitely wrong. IF this "AI" stuff really isn't a bubble (I think it probably is), then getting coprocessors widely deployed for it out there, is a very good thing. "AI" is no different from non-"AI" logic, in that whatever you're doing, from the user's point of view it should be as local as possible, and with as few external dependencies as possible. You don't need to teach me that lesson again for the 100th time, dammit. Maybe a lot of laypeople will get stuck with OpenAI's (or whoever's) services, but we will want to run it on our machines.
2. Maybe the reason there are so few existing applications that use neural nets is that the cheap hardware to make it practical isn't out there yet! Get the silicon out there and then developers will find uses for it. Back when I was stealing my employer's electricity (and coffee) at night, I ray-traced on a network of 80386s and 80486s, and the 486's floating point performance made me a lot more excited to work on my ray-tracer. Had it run slower, I would have moved on to the next amateur time-waster sooner.
But I can see why consumers wouldn't care a bit, right now. By the time you have real use for this hardware, I think you'll have already retired the new machine that you're buying today. But wasn't that sort of the case with vector and crypto instructions too? Different people will check it out at a different pace. It's always been like that.
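The FPU analogy can be made concrete with a trivial sketch (plain NumPy here, nothing NPU-specific). The same dot product done two ways gives identical answers; dedicated silicon, whether SSE/AVX then or NPUs now, only pays off once software actually routes work through the fast path:

```python
import numpy as np

# The same dot product, computed two ways. Vector units accelerate
# exactly this kind of loop, but only when software calls into them.
a = np.arange(1000, dtype=np.float64)
b = np.arange(1000, dtype=np.float64)

# Scalar path: one multiply-add at a time, in the interpreter.
scalar = 0.0
for x, y in zip(a, b):
    scalar += x * y

# Vectorized path: hand the whole loop to optimized code in one call.
vectorized = float(a @ b)

assert scalar == vectorized  # same math, different silicon utilization
```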
Re: (Score:2)
"AI hardware" isn't "LLMs". It's a souped up vector unit. You can run LLMs with it, but I doubt very much that's what Dell or Microsoft want you to do. Especially Microsoft. That would spoil the business model.
They want your processor to have hardware support for image and video processing, voice recognition, text to speech and a bunch of other things that nobody has thought of yet.
LLMs are neural networks.
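And the parent's "souped up vector unit" framing fits, because all of those workloads bottom out in the same primitive. A minimal sketch (sizes and weights invented for illustration) of the dense layer that NPUs exist to accelerate:

```python
import numpy as np

# Whether it's an LLM, speech recognition, or image processing, the hot
# loop is the same: a matrix-vector product plus a cheap nonlinearity.
def dense_layer(x, W, b):
    return np.maximum(W @ x + b, 0.0)  # matmul, bias, ReLU

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # made-up weights, 3 inputs -> 4 outputs
b = np.zeros(4)
x = np.ones(3)

y = dense_layer(x, W, b)
assert y.shape == (4,)
assert (y >= 0.0).all()  # ReLU never outputs negatives
```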
Re: (Score:2)
1. I strongly disagree with everyone who says the hypothetical applications for this should run "in the cloud" instead of on the user's own hardware. All my experience tells me that's definitely wrong. IF this "AI" stuff really isn't a bubble (I think it probably is), then getting coprocessors widely deployed for it out there, is a very good thing. "AI" is no different from non-"AI" logic, in that whatever you're doing, from the user's point of view it should be as local as possible, and with as few external dependencies as possible. You don't need to teach me that lesson again for the 100th time, dammit. Maybe a lot of laypeople will get stuck with OpenAI's (or whoever's) services, but we will want to run it on our machines.
On the one hand, I do run some interesting AI stuff locally. I run a local instance of Immich that leverages the GPU for facial matching, duplicate detection, and other things of that nature. I've been having a bit of an issue deploying Speakr, but that's also an interesting up-and-comer. A few other LLM-leveraging self-hosted software titles are on the rise, and I do hope to see more of them make some progress in leveraging both GPUs and NPUs as time progresses.
The problem, however, is the industry's catch
No shit (Score:2)
If AI isn't like "The Peripheral", I'll pass (Score:1)
How about a 'NO-AI' marketing strategy? (Score:2)
Seriously, if I see 10 laptops, and one says 'No AI bloatware', I'm going to be interested. Give us OLED monitors, better batteries, and chips that are optimized for our actual tasks, not buzzwords.
Re: (Score:2)
I turn off all the AI bloatware and spyware (Microsoft Recall, anyone?). I would prefer it simply didn't come with it in the first place. If I want to use AI, I will install the stack I want.
The real story (Score:2)