Intel Technology

Intel Gives Details on Future AI Chips as It Shifts Strategy (reuters.com)

Intel on Monday provided a handful of new details on a chip for artificial intelligence (AI) computing that it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and Advanced Micro Devices. From a report: At a supercomputing conference in Germany on Monday, Intel said its forthcoming "Falcon Shores" chip will have 288 gigabytes of memory and support 8-bit floating point computation. Those technical specifications are important as the artificial intelligence models behind services like ChatGPT have exploded in size, and businesses are looking for more powerful chips to run them.

The details are also among the first to trickle out as Intel carries out a strategy shift to catch up to Nvidia, which leads the market in chips for AI, and AMD, which is expected to challenge Nvidia's position with a chip called the MI300. Intel, by contrast, has essentially no market share after its would-be Nvidia competitor, a chip called Ponte Vecchio, suffered years of delays. Intel on Monday said it has nearly completed shipments for Argonne National Lab's Aurora supercomputer based on Ponte Vecchio, which Intel claims has better performance than Nvidia's latest AI chip, the H100. But Intel's Falcon Shores follow-on chip won't come to market until 2025, by which point Nvidia will likely have another chip of its own out.

  • Well this seems like a "Bud Light" moment for Intel.
    • True. True.

  • This will turn out better than Larrabee or Itanium, we promise. This time we will deliver all the performance.

  • I've never been a fan of Intel. Of course, both CPU manufacturers are corporate and evil these days. I loved the days when we had several CPU manufacturers, in the Pentium era and before...
    • So do I, irrespective of that:

      8-bit floating point computation

      8-bit floating point? This can't be serious. I can't remember having ever used anything below 24-bit (or was it 32-bit?)

      • Re:I use AMD. (Score:4, Interesting)

        by UMichEE ( 9815976 ) on Monday May 22, 2023 @02:25PM (#63543227)
        It's serious. I don't know a lot about it, but apparently AI/ML workloads don't need high precision. It seems like the industry is converging on 8-bit as the standard.
        • I'm not convinced.

          8 bits is fine precision-wise; it's the range that's the killer. You have to be much, much more careful training networks to make sure you don't escape the range of 8-bit types. It's really easy to do. 16-bit floats are much less of a pain in the star to use for inference.
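
          To put rough numbers on that, here is a minimal Python sketch (generic IEEE-style assumptions, not any particular vendor's FP8 implementation) comparing the largest representable normal values:

            def max_normal(e, m):
                # Rough largest normal value of a binary float with `e` exponent bits
                # and `m` mantissa bits, assuming plain IEEE-style conventions (the
                # all-ones exponent field is reserved for Inf/NaN).
                bias = 2 ** (e - 1) - 1
                max_exp = (2 ** e - 2) - bias
                return (2 - 2 ** -m) * 2 ** max_exp

            print(max_normal(4, 3))    # 240.0   (the OCP E4M3 FP8 variant stretches this to 448)
            print(max_normal(5, 2))    # 57344.0 (FP8 E5M2)
            print(max_normal(5, 10))   # 65504.0 (FP16)

          E4M3 tops out in the hundreds, while E5M2 only recovers FP16-like range by giving up yet more mantissa, which is exactly the trade-off behind the range complaint above.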

        • A floating point number is M × 2^C.

          Assume you have a sign bit; that leaves 7 bits for M and C (which is also signed) together. Make them 3 bits and 3+sign bits, and the largest number you could represent would be about 7 × 2^7, i.e. 7 × 128 = 896. What use is that?
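
          A quick sketch that enumerates that hypothetical layout (taken straight from the comment above; it is not any real FP8 encoding) shows how coarse the coverage is:

            # Toy format: value = M * 2**C, with a 3-bit unsigned M and a
            # "3 + sign"-bit C treated here as a two's-complement exponent in -8..7.
            values = sorted({m * 2.0 ** c for m in range(8) for c in range(-8, 8)})
            print(len(values), values[-1])   # 68 896.0 -- a few dozen distinct magnitudes, topping out at 896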

          • It gets worse than FP8. There are formats that are all exponent. Some workloads make use of FP2, basically just positive, negative or 0. As to why it's useful, you may not always need the full range of FP32. You might find that all your important numbers lie in a convenient range somewhere, so you encode them all as FP8 and then apply a global offset to put them where they are meant to be. You can achieve huge speedups this way because in AI it's all about crunching numbers and who cares about accuracy when
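
            A rough numpy sketch of that scale-and-offset idea (illustrative only: the sizes are made up and uint8 codes stand in for a real FP8 type, which numpy does not have):

              import numpy as np

              # Values that live in a narrow band far outside the FP8-friendly range.
              x = np.random.uniform(900.0, 1100.0, size=1000).astype(np.float32)

              # Per-tensor affine "quantization": store one global offset and scale,
              # keep only a small 8-bit code per element.
              offset = x.min()
              scale = (x.max() - x.min()) / 255.0
              codes = np.round((x - offset) / scale).astype(np.uint8)

              # Reconstruct on the fly when the full-range numbers are needed.
              x_hat = codes.astype(np.float32) * scale + offset
              print(np.max(np.abs(x - x_hat)))   # worst-case error is about scale/2, well under 1.0
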
      • This is how 'AI chips' are faster: they give up a lot of precision and they mostly focus on multiply/accumulate operations; you don't need a full CPU or GPU, just some very specialized circuits.

        • That's how some of them are faster: optimized for a reduced instruction set and register size. Others co-locate memory with the cells (neuromorphic NAND and memristors, for example). I'd imagine there are some that do both, or there will be.
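
          Concretely, the operation those circuits repeat billions of times is just a multiply-accumulate; a toy Python dot product (purely illustrative) shows the whole "instruction set" that matters:

            def dot(weights, activations):
                acc = 0.0
                for w, a in zip(weights, activations):
                    acc += w * a          # one multiply-accumulate (MAC) per weight
                return acc

            print(dot([0.5, -1.0, 2.0], [1.0, 3.0, 0.25]))   # 0.5 - 3.0 + 0.5 = -2.0
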
      • by ceoyoyo ( 59147 )

        16- or 8-bit floating point is a highly desirable capability. It effectively multiplies your memory capacity, makes things faster, and deep learning models don't care much about the precision.

        Nvidia purposely disables less than 32-bit precision on their consumer cards so you have to buy their super expensive special purpose ones if you want it.

  • I was sure they were going to announce a CPU consisting of 1,000 Intel 4004 cores and they were going to call it Larrabee 2.

    • I interviewed for a Larrabee engineering position up in Oregon.
      Took the corporate shuttle jet. They had good peanuts.

      Unimaginative interview guy wanted me to write him a sort in pseudocode on his whiteboard.
      Fuck that. I gave him: list.sort

      We both agreed on 'no' and Larrabee was dead within a year.
      My fault.

  • I'm not a chip expert, but doesn't the AI field change too fast for a big chip design to keep up? Neural nets were the thing, and now "fact chains" (GPT) are the thing. Can both use the same kind of chip design well (with only minor tweaks)?

    Is there something in common between 3D graphics, neural nets, and GPT chains such that a somewhat generic chip can be ready for the AI Thing Of The Month down the road? How can one be sure of the processing needs of future AI breakthroughs?

    I expect Intel can be good

    • by godrik ( 1287354 )

      All neural nets are still fundamentally doing GEMMs (general matrix multiplies) at the bottom. So, so far, we are still pretty good.
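
      For instance, a whole fully connected layer collapses into a single GEMM plus a cheap elementwise step (a minimal numpy sketch with made-up layer sizes, nothing specific to any vendor's hardware):

        import numpy as np

        batch, d_in, d_out = 32, 768, 3072
        x = np.random.randn(batch, d_in).astype(np.float32)   # activations
        w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights
        b = np.zeros(d_out, dtype=np.float32)

        y = np.maximum(x @ w + b, 0.0)   # one GEMM, then bias add and ReLU
        print(y.shape)                   # (32, 3072)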

      • Sparse GEMM. Even NVIDIA is way behind Google there, but AI developers mostly don't give a shit about efficiency and Google doesn't sell TPUs. As the market matures a bit though, the AI developers who just go "let's have 99% of the activation values be just above 0 ... not my problem" are going to improve or be out of a job. Sparse is the future, and not structured sparsity either: massive sparsity.

        Some of the post-processing, like batch normalization and softmax, isn't entirely trivial, so you do need decent general purpose
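
        For a feel of what massive, unstructured sparsity looks like in code, a small sketch assuming scipy is available (real accelerator kernels are far more involved):

          import numpy as np
          from scipy import sparse

          # 99% of the weights are exactly zero, scattered with no structure at all.
          w = sparse.random(4096, 4096, density=0.01, format="csr", dtype=np.float32)
          x = np.random.randn(4096, 64).astype(np.float32)

          y = w @ x                 # sparse GEMM: only ~1% of the multiply-adds actually happen
          print(w.nnz, y.shape)     # roughly 167,772 nonzeros, (4096, 64)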

        • by Tablizer ( 95088 )

          Generally there is:

          1. Small GEMMs
          2. Large "full" GEMMs
          3. Large sparse GEMMs
          4. *

          Can a chip be reasonably optimized for all 3 of these? I would expect that as different AI fads/trends come and go, different GEMM "shape" optimizations will be favored.

          Perhaps Intel can make one that does all 3 reasonably well and is relatively cheap, to reduce the chance customers will have to change hardware again. That could be their selling point, and they could show variety-of-GEMM benchmarks (along the lines of the rough sketch below) to demonstrate it.

          * I suppose there can be oddly shaped GEMMs as well.
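
          One crude way to run such a variety-of-GEMM benchmark on whatever hardware is at hand (numpy on a CPU here, so treat the absolute numbers as meaningless; the shapes and the GFLOP/s accounting are the point):

            import time
            import numpy as np

            def gemm_gflops(m, n, k, reps=10):
                a = np.random.randn(m, k).astype(np.float32)
                b = np.random.randn(k, n).astype(np.float32)
                start = time.perf_counter()
                for _ in range(reps):
                    a @ b
                secs = (time.perf_counter() - start) / reps
                return 2.0 * m * n * k / secs / 1e9   # each multiply-add counted as 2 FLOPs

            print("small GEMM      :", gemm_gflops(64, 64, 64))
            print("large dense GEMM:", gemm_gflops(2048, 2048, 2048))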

    • by ceoyoyo ( 59147 )

      GPT is a neural net. Anything impressive you've heard about is almost certainly a neural net.

      And neural nets are a lot of multiply-adds, which is why GPUs are so good at them.

  • by presearch ( 214913 ) on Monday May 22, 2023 @03:49PM (#63543407)

    After working there, I am surprised that they can get any products out the door.
    There's a small percentage of really bright people. But there's an army of clueless
    boomer middle managers just wanting to cash out their shares like back in the 90's,
    people from India that believe that they will take over by hiring only from the same caste,
    and guys from China that walk four abreast down hallways, smirking like
    they (will soon) own the place. When you're a Jet....

    Worst of all, the sales weasels, relentlessly milking expense accounts,
    always too late chasing whatever tech is making the front page of The Journal.

    Those smart people? They aren't allowed to make decisions.
