AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says (idc.com)

IDC, in a press release: A new forecast from IDC shows shipments of artificial intelligence (AI) PCs -- personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally -- growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. [...] Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC's performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now introduced AI-specific silicon to their SoCs called neural processing units (NPUs) that run these tasks more efficiently.

To date, IDC has identified three types of NPU-enabled AI PCs:
1. Hardware-enabled AI PCs include an NPU that offers less than 40 tera operations per second (TOPS) performance and typically enables specific AI features within apps to run locally. Qualcomm, Apple, AMD, and Intel are all shipping chips in this category today.

2. Next-generation AI PCs include an NPU with 40 to 60 TOPS performance and an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps. Qualcomm, AMD, and Intel have all announced future chips for this category, with delivery expected to begin in 2024. Microsoft is expected to roll out major updates (and updated system specifications) to Windows 11 to take advantage of these high-TOPS NPUs.

3. Advanced AI PCs are PCs that offer more than 60 TOPS of NPU performance. While no silicon vendors have announced such products, IDC expects them to appear in the coming years. This IDC forecast does not include advanced AI PCs, but they will be incorporated into future updates.
Michael Dell, commenting on X: This is correct and might be underestimating it. AI PCs are coming fast and Dell is ready.
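
For a rough sense of scale on the TOPS tiers above: vendors typically count one 8-bit multiply-accumulate (MAC) as two operations, so a chip's TOPS figure is essentially MAC-array width times clock, doubled. A back-of-envelope sketch in Python (the array width and clock are made-up illustrative numbers, not any vendor's specs):

    # Rough TOPS math. Vendors usually count an int8 multiply-accumulate
    # (MAC) as two operations: one multiply plus one add.
    macs_per_cycle = 4096      # hypothetical width of an NPU's MAC array
    clock_hz = 1.4e9           # hypothetical NPU clock
    tops = macs_per_cycle * 2 * clock_hz / 1e12
    print(f"{tops:.1f} TOPS")  # ~11.5 TOPS; a 40-TOPS part needs a wider
                               # array, a higher clock, or both
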
  • Seems far-fetched

    by Anonymous Coward

    2027 is three years away, which seems far-fetched, but then again this also just sounds like marketing some CPU features a bit differently by throwing "AI" in there.

    In practical terms, I'd be curious about what's actually changed or is changing in CPU design and how much of this is BS.

    • Re:Seems far-fetched (Score:5, Informative)

      by HBI ( 10338492 ) on Thursday February 08, 2024 @12:07PM (#64225040)

      Shorthand: it's all BS; the marketing doesn't have to make sense. They sit in a room and all lie to each other with much applause. It's some morale-building thing for lying to the rest of the world.

      Telling the truth to such people is both unwelcome and detrimental to your career.

      Nothing in CPU design has changed as a result of any of the marketing of the past 15 years or so. The multimedia stuff did result in some instruction set changes, but that more or less ended a while ago. Virtualization support was added back around the same time. If they add an AI instruction set that somehow makes sense in some way, then I'll notice, but I see no sign of that.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        If they add an AI instruction set that somehow makes sense in some way, then I'll notice, but I see no sign of that.

        Skim the article blurb: they are talking specifically about an NPU, not adding CPU instructions for AI.
        Product sheet from Intel [intel.com]. Excerpt:

        Intel® AI Boost / Intel® NPU - Integrated AI engine for low-power AI acceleration and CPU/GPU off-load.
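
        For the curious, here is roughly what that "CPU/GPU off-load" looks like from software, using Intel's OpenVINO runtime. This is a minimal sketch, not vendor documentation: "model.xml" is a placeholder, and the "NPU" device name assumes a recent OpenVINO release on NPU-equipped hardware.

            # Sketch: running an OpenVINO IR model on Intel's NPU.
            # "model.xml" is a placeholder; this raises if no NPU
            # device is actually present.
            import numpy as np
            import openvino as ov

            core = ov.Core()
            print(core.available_devices)         # e.g. ['CPU', 'GPU', 'NPU']
            model = core.read_model("model.xml")  # placeholder model
            compiled = core.compile_model(model, "NPU")
            result = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))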

        • by Junta ( 36770 ) on Thursday February 08, 2024 @01:16PM (#64225262)

          The problem is that an NPU, like the FPU and GPU before it, will almost certainly become just another part of the CPU.

          Maybe not merely some new instructions, but likely almost equivalent in terms of whether something is "AI"-ready or not.

          I would even dare say the entire concept of 'NPU' is trying to steal some of nVidia's thunder. nVidia does have math units in their GPU geared towards machine learning, but didn't call it a whole new category of 'PU'.

          • That's how computer architecture always advances. There was a time when computers didn't have FPUs. There was a time when they didn't have vector units. There was a time when they didn't have GPUs. New types of processing units get added to do useful things. For a little while it's a selling point: you want a computer with the new kind of processing unit, not one without it. Then it becomes a standard part of all computers and you take it for granted. You don't ask, "Does this computer have a GPU?" anymore.

          • Not almost: every new CPU from the big three consumer chip manufacturers includes an NPU. This is not something customers are clamoring for. It's something the spyware companies want so they can better profile you or sell you BS that you don't need (and that probably won't really work).
        • Seems plausible that phones would want this, as they need to save power and AI is much more useful when you don't have a keyboard. And given Microsoft's hard-on for attaching an AI to the operating system, maybe it makes sense for PCs too.

      • Nothing in CPU design has changed as a result of any of the marketing of the past 15 years or so. The multimedia stuff did result in some instruction set changes, but that more or less ended a while ago. Virtualization support was added back around the same time. If they add an AI instruction set that somehow makes sense in some way, then I'll notice, but I see no sign of that.

        Intel's AMX instructions in their 4th-gen Xeons were designed for AI, enabling huge amounts of data to be loaded into a register array for matrix multiplication. This shit easily saturates all available memory bandwidth. Before that there were the VNNI instructions, which AMD also supports and which were explicitly designed for AI.

        • Nothing in CPU design has changed as a result of any of the marketing of the past 15 years or so. The multimedia stuff did result in some instruction set changes, but that more or less ended a while ago. Virtualization support was added back around the same time. If they add an AI instruction set that somehow makes sense in some way, then I'll notice, but I see no sign of that.

          Intel's AMX instructions in their 4th-gen Xeons were designed for AI, enabling huge amounts of data to be loaded into a register array for matrix multiplication. This shit easily saturates all available memory bandwidth. Before that there were the VNNI instructions, which AMD also supports and which were explicitly designed for AI.

          The way I see this, it's just SIMD units getting wider. Some of us have been working with large vectors/matrices for decades, and wonder what's so "neural" about the latest matrix multiplication unit. It's nice to see the wide SIMD trend continue, as long as the hardware doesn't get too application-specific, so it remains useful after the current AI craze wears off.
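
          What those instructions compute is easy to state in NumPy terms. A sketch of the VNNI-style dot product (unsigned 8-bit activations times signed 8-bit weights, accumulated in 32 bits so the products don't overflow); AMX applies the same idea tile-wise:

              # NumPy model of a VNNI-style operation: u8 x s8 products
              # accumulated into a 32-bit integer. This is the core of
              # quantized (int8) neural-network inference.
              import numpy as np

              rng = np.random.default_rng(0)
              acts = rng.integers(0, 256, size=64, dtype=np.uint8)   # activations
              wts = rng.integers(-128, 128, size=64, dtype=np.int8)  # weights
              acc = int(np.dot(acts.astype(np.int32), wts.astype(np.int32)))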

    • Data point (Score:4, Interesting)

      by fyngyrz ( 762201 ) on Thursday February 08, 2024 @12:11PM (#64225054) Homepage Journal

      sounds like marketing some CPU features a bit differently by throwing "AI" in there.

      FWIW, all of Apple's M1, M2, and M3 based hardware already include dedicated APUs as well as GPUs and CPUs. I have no idea how these devices rate in terms of the performance measures cited in TFS, but the neural-like units are present.

      I will say this, too: my M1/Ultra's performance with local LLMs and generative image ML is pretty snappy. LLMs respond immediately upon query entry and produce about a paragraph every couple of seconds, and images are generated in about ten seconds. What's funny is I don't think either application is even using the APUs, just the CPUs and GPUs. I'm running GPT4All and DiffusionBee (which is a Stable Diffusion derivative).
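
      For anyone who wants to reproduce that kind of local setup, GPT4All also ships a Python binding. A minimal sketch (the model filename is just an example and is downloaded on first use; this runs on the CPU/GPU, nothing here touches a neural unit):

          # Minimal local-LLM sketch using the gpt4all Python binding.
          # The model file (an example name) is fetched on first use.
          from gpt4all import GPT4All

          model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model
          with model.chat_session():
              print(model.generate("Explain what an NPU is in one sentence.",
                                   max_tokens=64))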

      • by drnb ( 2434720 )

        FWIW, all of Apple's M1, M2, and M3 based hardware already include dedicated APUs as well as GPUs and CPUs. I have no idea how these devices rate in terms of the performance measures cited in TFS, but the neural-like units are present.

        The less capable Apple Silicon CPUs in iPhone, iPad and Apple Watch also have "AI" support, at a minimum ML support. The Apple Watch can run small ML models that process voice.

      • I would have to agree, even my lowly M1 is wickedly fast on most things compared to any previous machine I have used, even the "high-performance" versions. I have been very impressed.

      • FWIW, all of Apple's M1, M2, and M3 based hardware already include dedicated APUs as well as GPUs and CPUs. I have no idea how these devices rate in terms of the performance measures cited in TFS, but the neural-like units are present.

        Right. It wouldn't surprise me if in three years, all common CPU packages (Apple, Intel, AMD, ARM) include an NPU core or six. That's the architecture trend these days: CPU designers have so many transistors available they can add cores for any special-purpose task you want for basically no cost. Budget CPUs for $400 laptops or Arduinos might skip them, but that's about it.

        I was at HP when we were designing Itanium, long before Intel came into the picture. We absolutely thought the way to use transistors was

      • by Bert64 ( 520050 )

        DiffusionBee does have support for the NPU.
        The NPU is the same across different models in a particular family, aside from the Ultra, I believe (which is basically a doubled Max), so it makes a big difference on a base M1 and a much smaller difference on an M1 Max relative to using the GPU.

    • 2027 is three years away, which seems far-fetched, but then again this also just sounds like marketing some CPU features a bit differently by throwing "AI" in there. In practical terms, I'd be curious about what's actually changed or is changing in CPU design and how much of this is BS.

      The CPUs will have some built-in support for running ML models. We already have inexpensive microcontrollers that offer ML support. Or maybe it'll be more ambitious and it'll be similar to the "Neural Engine" in Apple Silicon ARM based CPUs. Yes, "AI" is somewhat abused in marketing but even small scale ML support is real. An Apple Watch can run small ML models that process voice in real time.
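
      On the Apple side, opting into the Neural Engine happens at model-conversion time. A sketch with coremltools (the tiny torch model is a stand-in for a real network; Core ML decides at runtime whether the Neural Engine actually executes it):

          # Sketch: converting a model with coremltools and allowing
          # Core ML to schedule it on the Neural Engine where possible.
          import torch
          import coremltools as ct

          net = torch.nn.Linear(8, 4).eval()       # stand-in network
          traced = torch.jit.trace(net, torch.zeros(1, 8))
          mlmodel = ct.convert(
              traced,
              inputs=[ct.TensorType(shape=(1, 8))],
              compute_units=ct.ComputeUnit.ALL,    # CPU + GPU + Neural Engine
          )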

    • by ceoyoyo ( 59147 )

      The AI units are chiplets/cores/coprocessors/units that are optimized for running neural networks: mostly fast, energy-efficient matrix multiplies. Apple started adding them to their processors, and Intel, AMD, and Qualcomm are all doing so as well. It's not that unlikely they'll be in 60% of new PCs in three years. It's like floating point units were in the '90s: expensive coprocessors, then high-end options, then ubiquitous. Intel is currently at the high-end option stage; Apple has been putting them in everything.

  • Last PC I purchased had a CPU sticker, a low-blue-light sticker, some manufacturer sticker, and a Windows sticker. What I was missing was AI.
  • The next-generation CPUs and GPUs that AMD/Nvidia/Intel put out this year all have some new "AI acceleration" checkbox feature on them, and Dell will upgrade their PCs to use those new processors, just like they do every year. Mind you, most of your software will not utilize these features for the next 18 months or so, but at least they can say it's there to give you another excuse to upgrade.

    • Sounds like some sort of add-on card that does basically what a GPU does: a parallel processing unit. Like how ASICs sped up bitcoin mining, a purpose-built AI processor seems like it would outperform something created to display graphics. There's a need for a lot of memory and a lot of linear algebra (maybe more, no expert), but how ideal are Nvidia's offerings? Is there extra? They are making a huge profit, so there has to be room for competition. Video cards are selling for quite a bit more than they
    • The next-generation CPUs and GPUs that AMD/Nvidia/Intel put out this year all have some new "AI acceleration" checkbox feature on them, and Dell will upgrade their PCs to use those new processors.

      And then MS will require these features to be present for their next version of Windows to install/run on systems, despite no fundamental reason this must be so, other than to justify the new, also-mandatory Copilot key. And just like that, millions of PCs are once again in "need" of being upgraded.

  • AI PCs To Account for Nearly 60% of All PC Shipments by 2027

    This is how you get Skynet.

    John Connor: By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shut down.

  • by xack ( 5304745 ) on Thursday February 08, 2024 @12:04PM (#64225032)
    NPUs will meet the same fate once the novelty wears off. AMD eventually dropped their 3DNow instructions too.
    • by drnb ( 2434720 )

      NPUs will meet the same fate once the novelty wears off. AMD eventually dropped their 3DNow instructions too.

      The market, not the manufacturers, made those decisions. Developers largely ignored 3DNow and stuck to MMX/SSE/AVX. Users largely ignored PhysX cards and just used their older GPUs for physics if they wanted to. Much like some users are ignoring crypto mining cards and still using GPUs.

    • I remember PhysX; I remember they were bought by NVIDIA and integrated into GPUs. Incidentally, this is precisely what is being talked about here: AI co-processors, either on SoCs or as dedicated units in CPUs.

      Also, 3DNow was dropped because it was a largely unused AMD-only feature. It was implemented virtually nowhere. Quite different from anything being discussed here.

    • NPUs will meet the same fate, as the novelty wears off. AMD eventually dropped their 3DNow instructions too.

      The idea of 3DNow instructions hasn't gone anywhere; it's just that Intel pushed their own version, called SSE, which was eventually adopted by AMD as well. From what I understand, NPUs, tensor cores, etc. are just continuing the trend of wider SIMD units and shouldn't be too application-specific. It's just marketing that likes to name them after the most popular application, just like AMD's floating-point SIMD unit was named for 3D graphics.

  • Laughable (Score:4, Insightful)

    by irreverentdiscourse ( 1922968 ) on Thursday February 08, 2024 @12:04PM (#64225034)

    PC's with "AI" slapped on the box are certainly coming. I'm not so sure about the rest.

    • Reminds me of the old "Multimedia PC" crap from the '90s. Yeah, it's got a CD-ROM!
      • by drnb ( 2434720 )

        Reminds me of the old "Multimedia PC" crap from the 90's. Yeah it's got a CD-ROM!

        I thought they also required a CPU with the MMX instruction set, i.e. a Pentium or better?

        • Nope. I remember the thousands of ads in magazines back in the early '90s that would send you a "kit" to bring your 386/486 up to full MPC compliance. In actuality, what you got was a sound card (usually an 8-bit Sound Blaster or equivalent) and a 1x or 2x CD-ROM drive. Your PC also needed at least 640x480 256-color VGA and a 30MB hard drive to be "fully" MPC compliant, but by that time most computers had that covered.

          • by drnb ( 2434720 )
            Glad to hear my BYO 486DX2-66 made the cut: Sound Blaster, ATI Mach64, 2x CD-ROM, 3Com network card. Installed a Linux dual boot, no problem, given these common parts. Had some multimedia on WinNT and Linux.
    • I'm not sure why you think it is laughable. AMD and Intel have already announced dedicated processing units for AI workloads to be included in their next CPUs. Apple already has one, as do several ARM SoC providers.

  • What does "SoC capabilities" even mean? I suspect they're trying to count every device that listens for clapping to turn off the lights as a PC, in an effort to push up the value of one stock or another. If it's an SoC, it's not a PC.
    • It may depend on your definition of SoC. If you go back to the expansion "System on a Chip", it basically means the north bridge and south bridge are merged into the same chip as the CPU.

      Consider the Intel N100. It forms the basis of many a miniPC, but it has no [discrete] north bridge or PCH.

  • by williamyf ( 227051 ) on Thursday February 08, 2024 @12:08PM (#64225042)

    Currently, in Win10 & 11, DirectML ( https://learn.microsoft.com/en... [microsoft.com] ) only requires a GPU capable of DX12... but since even a lowly GTX6xx qualifies, it means that the HW level is only DX11.

    Most likely, Win12 will require DirectML with feature level 12_2, OpenCL 2.1 + a ton of extras + AVX2 (all 3 of them at the same time)... And that is IF MICROSOFT IS CONSERVATIVE! (to prevent a debacle like we had with the HW requirements of Win11).

    Because, if Microsoft decides to be aggressive, they may as well go for DP4a support in the GPU (making all machines with Intel iGPUs below the 14th gen ineligible), or, even more aggressively, require an NPU.

    So, yes, by Microsoft's definition, 60% (or more) of PC shipments in 2027 will count as AI PCs.
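
    For reference, the usual way Windows apps reach DirectML today is through ONNX Runtime. A minimal sketch (this assumes the onnxruntime-directml package; "model.onnx" is a placeholder, and it falls back to the CPU provider if DirectML is unavailable):

        # Sketch: DirectML-backed inference via ONNX Runtime on Windows.
        import onnxruntime as ort

        sess = ort.InferenceSession(
            "model.onnx",  # placeholder model
            providers=["DmlExecutionProvider", "CPUExecutionProvider"],
        )
        print(sess.get_providers())  # shows which provider was actually used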

    • by drnb ( 2434720 )

      Most likely, Win12 will require DirectML with feature level 12_2, OpenCL 2.1 + a ton of extras + AVX2 (all 3 of them at the same time)... And that is IF MICROSOFT IS CONSERVATIVE! (to prevent a debacle like we had with the HW requirements of Win11).

      Coincidentally, I was looking at AVX2 recently; Intel CPUs have been offering it for the last 11 years. Even with PCs lasting so long now (7 or 8 years is common), that seems a conservative and reasonable requirement.

    • Because, if Microsoft decides to be aggressive, they may as well go for DP4a support in the GPU (making all machines with Intel iGPUs below the 14th gen ineligible), ...

      According to this [intel.com], 11th-generation chips with Iris graphics support DP4a.

  • The AI PCs will fit nicely beside the 3D TVs that were predicted to take over the market in the mid-2010s.

    • by drnb ( 2434720 )

      The AI PCs will fit nicely beside the 3D TVs that were predicted to take over the market in the mid-2010s.

      The AI PC CPUs will have ML support, as inexpensive microcontrollers already do.

      • And?

        3D TVs had dedicated hardware to support the 3D functions. There wasn't a huge amount of 3D media and only a vanishingly small percentage of buyers were interested in it, so it was a big waste.

        The same vanishingly small percentage of buyers are asking for their PC to have AI features. The vast majority don't care about AI and there's no breakout application that will drive sales.

        This is exactly what 3D TV was: a gimmick that they hope will boost sales.

        • by drnb ( 2434720 )

          The AI PC CPUs will have ML support, as inexpensive microcontrollers already do.

          And? 3D TVs had dedicated hardware to support the 3D functions ...

          It's not 3D. Consider the Apple Watch. It has some ML support and is able to run small ML models that allow for some local voice processing. Some applications for microcontrollers with ML support also mention on-device voice processing.

  • by JeffSh ( 71237 ) <jeffslashdot.m0m0@org> on Thursday February 08, 2024 @12:35PM (#64225134)

    The only thing I can think of that distributed generative AI will accomplish is enabling local devices to rewrite or superimpose content over other content in real time and dynamically. Basically, real-time ad content. Today that has to be done on the content distributor's side, or with pre-prepared ad content of a specific form factor. With distributed generative AI, ad content distributors will be able to "take advantage of" every single pixel of "empty" real estate to sell ads.

    What a horrific future.

  • by wakeboarder ( 2695839 ) on Thursday February 08, 2024 @12:45PM (#64225170)
    If you need to train models, you use the cloud. If you need an LLM, you use the cloud. I don't see a lot of edge applications for the PC right now.
    • by drnb ( 2434720 )

      If you need to train models, you use the cloud. If you need an LLM, you use the cloud. I don't see a lot of edge applications for the PC right now.

      Look at the Apple Watch. Its CPU has ML support and it can run small ML models that process voice, so it's able to do some work offline, processing the voice data on the watch itself.

      • Yeah, but an Apple Watch isn't attached to the internet; the ML can basically wake up and do simple commands. I'm talking about a PC. I don't know of any apps that require a neural processor; most machines have GPUs. But still, not many apps (and no major apps) use it. In addition, you can use a CPU or GPU for most of this type of processing.
        • by drnb ( 2434720 )

          Yeah, but an Apple Watch isn't attached to the internet; the ML can basically wake up and do simple commands. I'm talking about a PC. I don't know of any apps that require a neural processor; most machines have GPUs. But still, not many apps (and no major apps) use it. In addition, you can use a CPU or GPU for most of this type of processing.

          Part of the reason for doing the processing locally rather than over the internet is speed and privacy.

          The iPhone camera uses the Apple Neural Engine for image processing and cleanup. Macs clean up video conferencing too: a limited "camera follows you" feature and background effects use the Neural Engine, and there's audio processing as well.

          Maybe this will turn out to be GPU rather than ML support, so we'd really be setting a baseline GPU level. That would be a good thing too.

    • If you need to train models, you use the cloud. If you need an LLM, you use the cloud. I don't see a lot of edge applications for the PC right now.

      If you need to TRAIN a model for something (say, to recognize your handwriting or voice on a tablet), you could use the cloud for higher raw power, or the local resources for higher privacy. If you go for the local resources, you can use the CPU (highest power consumption), the GPU (middle-of-the-road power consumption) or, if available, the dedicated machine learning accelerators (lowest power consumption).

      If you need to EXECUTE models, then, to the issue of privacy, you have to add latency and power consumption.
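
      That power/latency trade-off usually shows up in code as a simple device-selection ladder. A PyTorch sketch (note that the "mps" backend targets Apple's GPU; the Neural Engine itself is generally reachable only through Core ML, not directly from PyTorch):

          # Sketch: typical device-selection ladder for local ML work.
          import torch

          def pick_device() -> torch.device:
              if torch.cuda.is_available():
                  return torch.device("cuda")  # NVIDIA GPU
              if torch.backends.mps.is_available():
                  return torch.device("mps")   # Apple Silicon GPU
              return torch.device("cpu")       # works everywhere, costs the most power

          print(pick_device())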

  • If you need an NPU, on most PCs you can add it. Even on some laptops, via Thunderbolt, you could add an NPU. Why would you need a new computer for an NPU? Seems like a waste.
    • Most people have no ability to add an NPU, even if their computer makes it possible to do so.

      Most businesses don't want to spend scarce IT resources upgrading PCs, they'd rather just buy new pre-configured boxes.

    • If you need an NPU, on most PCs you can add it. Even on some laptops, via Thunderbolt, you could add an NPU. Why would you need a new computer for an NPU? Seems like a waste.

      If you add an NPU to a laptop via Thunderbolt, most likely the NPU (I prefer to call 'em dedicated machine learning accelerators, or DMLAs) will not be available on the go.

      If your laptop has an M.2 slot, you could add a DMLA there, like we used to do in the past, replacing miniPCI cards with CrystalHD decoders to allow old laptops to be used as media centers past their prime.

      But many laptops lack said slots, and many people lack the skill to do it even if the laptop does have the slot.

      And that leaves the

  • by Junta ( 36770 ) on Thursday February 08, 2024 @01:21PM (#64225280)

    Roughly, this is measuring the expected trajectory of what is essentially an Intel marketing push. Intel is pushing that AI should be equivalent to 'NPU' to try to diminish the perception that GPU == AI and therefore nvidia == AI.

    NPU will likely be a 'feature' of the Intel CPUs that come out at the time, similar to how CPUs provide FPU and GPU.

    There's some more nuance, and more vendors than Intel are likely to hop onto the 'it doesn't have to be nvidia' marketing bandwagon.

    Notably, by their measures a desktop PC with an RTX4090 would not count as an "AI PC" because it doesn't have anything marketed as an 'NPU', despite the tensor cores that are, in practice, the gold standard of "AI" calculations.

  • This reminds me of how in the 1980s, things like FPUs and MMUs were separate chips. Do you want an 80387 with your 80386? Do you want a 68851 with your 68020? But then the newer CPUs just came with that stuff.

    Even if 90% of the machines sold over the next few years never use it (think of how many 80386 chips were running MS-DOS as a "fast 8086" and never went into protected mode), it's nice that on the software side you'll eventually be able to expect it. In 1988 you couldn't assume floating point was fast.

  • Hot dog! Better spellcheck is coming in 2027.
  • It's 3D TV round 2. Gentlemen, start your hype.

  • by MicroSlut ( 2478760 ) on Thursday February 08, 2024 @03:03PM (#64225572)
    I only use AI for creating posts on Slashdot. With my new 60+ terafloppy NPU, in 2027 I will be able to not only generate awesome comments with correct spelling, but frist phrost!
    • It's "frosty piss", Bing. I mean, Gemini.

    • by mjwx ( 966435 )

      I only use AI for creating posts on Slashdot. With my new 60+ terafloppy NPU, in 2027 I will be able to not only generate awesome comments with correct spelling, but frist phrost!

      It's gained enough sapience to work in marketing!

  • an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps

    I think you misspelled "invasive" there...

  • The amazing performance characteristics of specialized processors are nice and all, but they are going to sit idle if there isn't the bandwidth available to feed them. It seems either these future PCs sport significantly faster RAM than present-day DDR5 or go nuts with multi-channel memory, which does not seem particularly realistic. While there are a number of AI applications where you don't need bandwidth beyond local memory/caches, the ones that are all the rage these days very much do.
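
    To put numbers on that: a dense LLM touches roughly its entire weight set for every generated token, so token rate is capped by memory bandwidth no matter how many TOPS the NPU advertises. A back-of-envelope sketch (both figures are assumptions for illustration):

        # Memory-bound ceiling on LLM token rate: bandwidth divided by
        # bytes touched per token (~= model size for dense models).
        model_bytes = 7e9 * 1                # 7B parameters at ~1 byte (int8)
        ddr5_bandwidth = 90e9                # ~90 GB/s, assumed dual-channel DDR5
        print(ddr5_bandwidth / model_bytes)  # ~12.9 tokens/s upper bound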

  • Won't the world have ended before then?

  • Do we want generative AI to be local? I think it would make sense for each machine to have a local AI processor only if it's used a good fraction of the time. Otherwise, the cost of the hardware and keeping the model up-to-date is going to be larger than the cost of having a centralized service generate your content for you.

    Don't get me wrong. I wouldn't bet against generative AI being big. I'm just skeptical that everyone is going to want to have their own dedicated generative AI hardware sitting idle.

  • Hi, I have a pretty standard mid-range PC (Ryzen something, RTX 3070, 16GB). While not having a deep understanding of GenAI, I like to play with LLMs (GPT4All) and text-to-image (Stable Diffusion) locally on my PC (I hate the cloud concept). While a lot of the discussion here is about adding NPU cores to the CPU die, would that not go in the direction of Intel's integrated graphics? I.e., more or less does the job but sucks compared to AMD and Nvidia dedicated GPU offerings? Would it not make more sense to release consumer le
