
Intel Takes on AMD and Nvidia With Mad 'Max' Chips For HPC (theregister.com)

Intel's latest plan to ward off rivals in high-performance computing workloads involves a CPU with large stacks of high-bandwidth memory and new kinds of accelerators, plus its long-awaited datacenter GPU that will go head-to-head against Nvidia's most powerful chips. From a report: After multiple delays, the x86 giant on Wednesday formally introduced the new Xeon CPU family formerly known as Sapphire Rapids HBM and its new datacenter GPU better known as Ponte Vecchio. Now you will know them as the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series, respectively; the new names were among a bevy of details Intel shared today, including performance comparisons. These chips, set to arrive in early 2023 alongside the vanilla 4th-generation Xeon Scalable CPUs, have been a source of curiosity within the HPC community for years because they will power the US Department of Energy's long-delayed Aurora supercomputer, which is expected to become the country's second exascale supercomputer and, consequently, one of the world's fastest.

In a briefing with journalists, Jeff McVeigh, the head of Intel's Super Compute Group, said the Max name represents the company's desire to maximize the bandwidth, compute, and other capabilities for a wide range of HPC applications, whose primary users include governments, research labs, and corporations. McVeigh did admit that Intel has fumbled in how long it took to commercialize these chips, but he tried to spin the blunders into a higher purpose. "We're always going to be pushing the envelope. Sometimes that causes us to maybe not achieve it, but we're doing that in service of helping our developers, helping the ecosystem to help solve [the world's] biggest challenges," he said. [...] The Xeon Max Series will pack up to 56 performance cores, which are based on the same Golden Cove microarchitecture as Intel's 12th-Gen Core CPUs, which debuted last year. Like the vanilla Sapphire Rapids chips coming next year, these chips will support DDR5, PCIe 5.0, and Compute Express Link (CXL) 1.1, which will enable memory to be attached directly to the CPU over the PCIe 5.0 interface.
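
A practical note on the HBM: in the flat memory mode Intel has described, the on-package HBM should appear to the operating system as its own NUMA nodes alongside DDR5, so ordinary NUMA tooling applies. Below is a minimal sketch, assuming a hypothetical HBM node ID of 2 (check numactl --hardware on real hardware), of steering an allocation into HBM with libnuma.

/* Minimal sketch: place a buffer on a specific NUMA node with libnuma,
 * on the assumption that a Xeon Max system in flat memory mode exposes
 * its HBM stacks as separate NUMA nodes. The node ID (2) is hypothetical.
 * Build: gcc hbm_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available on this system\n");
        return EXIT_FAILURE;
    }

    const int hbm_node = 2;        /* hypothetical HBM node ID */
    const size_t len = 1UL << 30;  /* 1 GiB buffer */

    /* numa_alloc_onnode binds the allocation's pages to one node. */
    double *buf = numa_alloc_onnode(len, hbm_node);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", hbm_node);
        return EXIT_FAILURE;
    }

    memset(buf, 0, len);           /* touch pages so they are placed */
    printf("1 GiB resident on NUMA node %d\n", hbm_node);

    numa_free(buf, len);
    return EXIT_SUCCESS;
}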

This discussion has been archived. No new comments can be posted.


  • Supply drivers for Windows 10 and a lot of people will buy. Otherwise it's another dud. Meh, don't care for Win11.
  • I wonder how much of the transistor budget is consumed by pointless "efficiency cores."

    In any case, you can only push an IMC so far, and AMD's "fat L3" approach is going to offer far more real-world performance for the money... they'll have 16 cores per chiplet and 192 or 256 MB of L3 (on inexpensive desktop processors!) soon enough.

    Intel: deeply-pipelined junk since Willamette.

    • by Chas ( 5144 )

      The next gen EPYC stuff is insane. 96 cores per chip assembly?
      Higher power budget?
      PCI-E 5 interconnects?

      And with DENSITY being the big thing right now, if given a choice between lots of super-high-speed cores or even MORE slightly-lower-speed cores, such compute houses are going to go with MORE CORES.

      If Intel's going to be releasing something it better be SOON.
      And it better be UTTERLY FUCKING SPECTACULAR.
      Otherwise AMD's already eaten their lunch.

      • Too late. Genoa just launched today, and Intel has a known-inferior product launching next year. Genoa-X will put any MAX chips out to pasture. Intel has no chance this generation. They're about to lose even more market share in enterprise.

        Their only competitive product in client just got released with a whopping 3 SKUs (6 if you count the KF parts, which are just K parts with no functioning iGPU):

        13600K
        13700K
        13900K

        That's it. Everything else in client is a rehash of Alder Lake - that is, no updated L2$ etc.

        • by Chas ( 5144 )

          I don't want to say "too late," because Intel could always have something up their sleeve.

          But yeah, they look to be well and truly fucked at this point.

          • by brxndxn ( 461473 )

            Good.. nothing gets engineers kicked into high gear like telling them everything they're trying to do is impossible.. And this is coming from someone rooting for (and investing in) AMD for the past 30 years. For the first time, I might invest in Intel too... because they're so fucked, they might actually let the engineers run the show for once in a long time.

            What am I rooting for? Well, I want to make money... but what I'd rather have over money is a major advancement for mankind. We all really should be ap

            • For the first time, I might invest in Intel too... because they're so fucked, they might actually let the engineers run the show for once in a long time.

              Sure, just look at how Boeing let the engineers- er, wait, they got a big stack of money handed to them for fucking up, right? And Intel, though they have enough cash on hand to build their own fucking fab, got a big handout to build one, right? Intel is nowhere near fucked yet; they can continue to fuck up like this for the foreseeable future if Congress is just going to keep handing them cash.

            It is not the engineering that is impossible. It is the business and marketing aspect that is impossible. Intel is grievously fucked. New designs take years to be brought to fruition, and the money men who were responsible for continuing node and design improvements ruined the business. Brian Krzanich and Venkata Renduchintala screwed the pooch.

          • In the server room, they do not have anything they can release to compete until 2024, and that's assuming Granite Rapids does not see delays.

            They're completely exposed.

          • AMD has 16% of the server CPU market.
            That's not installed base, that's new sales.
            16 out of every 100 server CPUs sold is an EPYC.

            I imagine most companies would love to be as fucked as the person getting 84 out of every 100 sales.

            For a long time, I've been trying to figure out what the hell has people on this site so in love with EPYC parts.
            They've never been nearly as compelling as AMD's desktop parts.
            All I can figure is that you guys don't represent those who use big servers, in general.

            8 cores per CCD
      • They still suffer from the classical EPYC problem, though.
        There's no top-level cache.
        You get 96 cores that will do embarrassingly parallel workloads like a fucking boss, but trying to do something like MySQL with more than a single CCD? Stumble, and fall.

        There are definitely two very different use cases for EPYCs and Xeons, being chiplet and monolithic designs respectively.
        With Xeons, you get a package-level L3 cache, which means IPC between *any* core grouping is fucking fantastic.
        With EPYCs, cross-CCD IPC is abysmal (a sketch of the usual mitigation follows below).
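
        A minimal sketch of the mitigation implied above, under assumptions: pin a chatty multithreaded workload to the cores of a single CCD so core-to-core traffic stays inside one chiplet's L3. The 0-7 core numbering is hypothetical; core-to-CCD mapping varies by SKU, so verify with lscpu -e or hwloc's lstopo first.

        /* Pin a pool of worker threads to one CCD's cores, assuming
         * cores 0-7 share a CCD (check the real topology first).
         * Build: gcc -pthread pin_ccd.c */
        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>

        #define CCD_CORES 8   /* cores per CCD on current EPYC parts */

        static void *worker(void *arg) {
            long id = (long)arg;
            printf("worker %ld running on CPU %d\n", id, sched_getcpu());
            return NULL;
        }

        int main(void) {
            /* Build a CPU set covering only the first CCD's cores. */
            cpu_set_t ccd0;
            CPU_ZERO(&ccd0);
            for (int c = 0; c < CCD_CORES; c++)
                CPU_SET(c, &ccd0);

            /* Every thread created with this attr inherits the mask. */
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setaffinity_np(&attr, sizeof(ccd0), &ccd0);

            pthread_t threads[CCD_CORES];
            for (long i = 0; i < CCD_CORES; i++)
                pthread_create(&threads[i], &attr, worker, (void *)i);
            for (int i = 0; i < CCD_CORES; i++)
                pthread_join(threads[i], NULL);

            pthread_attr_destroy(&attr);
            return 0;
        }
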
      • by AmiMoJo ( 196126 )

        Intel manages to claim back the top spot in benchmarks from AMD now and then, but all their recent processors have been insanely power hungry. Unless you have extreme cooling they will thermal throttle, and your energy bill will be a nasty shock.

        For anyone with a decent number of servers, that's a huge problem. Not only is the cost to run them much higher, but so is the amount of heat that has to be removed from the room. AMD is leagues ahead in performance per watt (rough arithmetic is sketched below).

        Intel has been trying to make things better
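
        To put rough numbers on the bill-shock point, here is a sketch with assumed figures: the wattages, fleet size, electricity price, and PUE are illustrative guesses, not measurements, and cooling cost is folded in through the PUE multiplier.

        /* Back-of-the-envelope annual energy cost comparison.
         * All inputs are assumptions for illustration only. */
        #include <stdio.h>

        int main(void) {
            const double watts_a = 350.0;  /* assumed per-socket draw, vendor A */
            const double watts_b = 280.0;  /* assumed per-socket draw, vendor B */
            const int    sockets = 100;    /* a modest fleet */
            const double pue     = 1.5;    /* every IT watt costs PUE watts at the meter */
            const double usd_kwh = 0.12;   /* assumed electricity price */
            const double hours   = 24.0 * 365.0;

            double cost_a = watts_a / 1000.0 * sockets * pue * hours * usd_kwh;
            double cost_b = watts_b / 1000.0 * sockets * pue * hours * usd_kwh;

            printf("vendor A: $%.0f/yr  vendor B: $%.0f/yr  delta: $%.0f/yr\n",
                   cost_a, cost_b, cost_a - cost_b);
            return 0;
        }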

        • The discussion here is about top-tier server processors, a regime where, I am happy to report, literally no one cares about power consumption.

          As the article notes, the customers who purchase these devices want more power consumption per socket, because our game is physical density.
          We can supply practically infinite power and cooling, but space? Space is tricky.

          Xeons do not have efficiency cores, and EPYCs are power hungry as fuck too, just how we want them.
    • I wonder how much of the transistor budget is consumed through pointless "efficiency cores."

      The efficiency cores allow for more work per unit of energy. So if your CPU speed is being pulled down by thermals, then you probably should have opted for more efficiency cores in place of heavyweight cores.

      Overall, workflows will generally fit into one of two categories: either a workflow is limited by the speed of a single thread, or there are multiple threads that can operate independently. In both scenarios, a combination of big and small cores will get the job done faster (a toy model follows below). This is why AMD is opting f
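
      A toy model of that big-versus-small tradeoff under a fixed package power budget; the per-core performance and power numbers are invented for illustration only, not taken from any vendor.

      /* Compare aggregate throughput of an all-big-core layout vs. a
       * big+small mix under one power budget. All figures are assumed. */
      #include <stdio.h>

      int main(void) {
          const double budget_w = 125.0;              /* assumed package budget */
          const double p_watts = 12.0, p_perf = 1.00; /* per big (P) core */
          const double e_watts = 4.0,  e_perf = 0.55; /* per small (E) core */

          /* Layout 1: spend the whole budget on big cores. */
          int p_only = (int)(budget_w / p_watts);
          double tp1 = p_only * p_perf;

          /* Layout 2: 8 big cores for latency-sensitive threads, the
           * remaining budget on small cores for the parallel part. */
          int p_mix = 8;
          int e_mix = (int)((budget_w - p_mix * p_watts) / e_watts);
          double tp2 = p_mix * p_perf + e_mix * e_perf;

          printf("all-big: %d cores, throughput %.2f\n", p_only, tp1);
          printf("big+small: %dP + %dE, throughput %.2f\n", p_mix, e_mix, tp2);
          return 0;
      }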

      • Efficiency cores are great for desktop users, but if someone has a use for them on a server in an enterprise, that means they're operating inefficiently. These chips are supposed to be explicitly for HPC, where any nodes that are idle should be shut down, or at least put into deep sleep, and definitely not left idling.

  • Are these "data center GPUs" processing graphics? It sure doesn't sound like they are, which means they really shouldn't be called Graphics Processing Units, right?
