Technology

Amazon's New Chip Moves AWS Into High-Performance Computing (bloomberg.com) 29

Amazon's cloud-computing unit is rolling out new chips designed to power the highest-end of computing, supporting tasks such as weather forecasting and gene sequencing. From a report: Amazon Web Services, the largest provider of over-the-internet computing, on Monday said it would let customers rent computing power that relies on a new version of its Graviton chips. Peter DeSantis, a senior vice president who oversees most of AWS's engineering teams, said in an interview that the product is a springboard for making what the industry calls high-performance computing more readily available.

The newest chip is the latest piece of Amazon's effort to build more of the hardware that fills the massive data centers that power AWS. Amazon says making its own chips will give customers more cost-effective computing power than they could get by renting time on processors built by the likes of Intel Corp., Nvidia Corp. or Advanced Micro Devices. The move has put AWS in direct competition with those companies, which are also among its biggest suppliers. DeSantis said the chipmakers remain "great partners," and that AWS plans to continue to offer high-performance computing services based on chips made by other companies.

On Tuesday, AWS Chief Executive Officer Adam Selipsky announced a new version of the Inferentia chip, which is designed to draw inferences from vast amounts of data. Inferentia2 is built to handle bigger sets of data than its predecessor, enabling things like software-generated images or detecting and interpreting human speech, Amazon said. [...] The latest version of AWS's line of Graviton processors, the Graviton3E, will have twice the ability of current versions in one type of calculations needed by high-performance computers, DeSantis said. When combined with other AWS technology, the new offering will be 20% better than the previous one. Amazon didn't say when services based on the new chip would be available.


Comments Filter:
  • by thatseattleguy ( 897282 ) on Tuesday November 29, 2022 @02:08PM (#63089114) Homepage

    Since TFA is paywalled, and doesn't discuss the details that really matter anyway, here's a quick overview of the architecture:

    AWS Graviton (Alpine AL73400) is a 16-core ARMv8 SoC designed by Amazon (Annapurna Labs) for Amazon's own infrastructure. The chip was first unveiled by Peter DeSantis during Amazon's AWS re:Invent 2018 and has been in deployment for user access since early 2019. These processors are offered as part of Amazon's EC2 A1 instances. The Graviton features 16 Cortex-A72 cores organized as four quad-core clusters, all operating at 2.3 GHz.

    From: https://en.wikichip.org/wiki/a... [wikichip.org]
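    A quick way to sanity-check those numbers from inside an instance is to ask the OS what it sees. A minimal Python sketch (the expected values in the docstring are just what the WikiChip entry above implies for an A1 instance, not something verified here):

```python
import os
import platform

def describe_host():
    """Return (architecture, logical core count) for the current host.

    On an EC2 A1 instance backed by the original Graviton, the WikiChip
    figures above suggest this would report ('aarch64', 16) at the
    largest size; smaller A1 sizes expose fewer cores.
    """
    return platform.machine(), os.cpu_count()

arch, cores = describe_host()
print(f"{arch}, {cores} cores")
```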

    • AWS Graviton (Alpine AL73400) is a 16-core ARMv8 SoC designed by Amazon (Annapurna Labs) for Amazon's own infrastructure.

      Thanks; I was wondering if it was ARM-based.

      Looks like the days of Intel dominance are over.

      • by alvinrod ( 889928 ) on Tuesday November 29, 2022 @02:41PM (#63089170)
        Intel dominance has been over for a few years already. AMD ate their lunch in servers with Zen and will likely continue to do so for at least another few years. ARM does have a good chance at dethroning x86 in general, though. Apple has been able to get their ARM SoCs to offer x86 desktop-class performance, and Qualcomm's Nuvia acquisition looks like it's going to deliver similarly stellar performance.
        • Re: (Score:2, Informative)

          by CAIMLAS ( 41445 )

          I give x86 in the datacenter maybe 10 years for non-Microsoft workloads at this point, and 5 before people really start evaluating alternatives - it largely depends on how heavily datacenters push energy conservation and associated costs.

          We're now at a point where ARM provides greater general capability than 64-bit x86 in terms of both raw performance and energy efficiency.

          • There's only one at-large vendor of enterprise ARM hardware right now and that's Ampere. Unless you count NV's Grace chip, but the way that's packaged it's aimed at select workloads.

            • by r1348 ( 2567295 )

              Well, AWS makes its own ARM chips through its subsidiary Annapurna Labs, and Google might bring homegrown ARM cores to its GCP services too, since it already makes Tensor ARM chips for its phones. Not sure if Azure has a similar capability at this time.
              Ampere is targeting the retail server market, but neither AWS nor GCP use open-market servers; they engineer them internally and build them through a network of ODMs.

              • You can't buy any of that hardware for your own datacentre. If you want access then you're playing by someone else's rules in "the cloud". Not all orgs can do business that way.

                AWS and many other hyperscalers use ODMs to build out their Intel and AMD servers. That's nothing new.

          • by Burdell ( 228580 )

            There needs to be a broader market for server vendors. Dell and HP have a huge chunk of the market in part because of integrated management via iDRAC/iLO (Redfish is getting there at being a cross-vendor standard for these services). Last I looked, Dell is still mostly Intel, so don't look for them to offer significant ARM servers any time soon.

            • Even if there were a broader market for off-the-shelf enterprise hardware, there's no guarantee that Amazon would resell their custom Neoverse implementations through those OEMs.

        • Comment removed based on user account deletion
      • by timeOday ( 582209 ) on Tuesday November 29, 2022 @03:23PM (#63089228)
        Having different processor types in the same cloud really reduces the friction of trying out an alternative, too - you don't have to purchase this funky new machine you might not be able to fully utilize. Just pay some hours to try it out.
      • by hhr ( 909621 )

        We were hoping so as well. It's amazing how much of our pipeline has hidden dependencies on Intel. Virtualization on M2-based MacBooks is very limited until new drivers come out. Our CI/CD pipeline only recently added support for ARM. Every step we take to try and get to ARM is met with months of delays.

      • Define dominance.

        My daily computers contain 150,000 to 1,000,000 cores in 9 countries. We have mostly AMD+NVidia, NVidia DGX SuperPods, a few Intel nodes, some ARM nodes, and I just configured a simulated RISC-V cluster.

        ARM and RISC-V suck for HPC. You can get cores cheaper, and they can even perform well on synthetic tests, but except for Apple ARM with Apple's tweaked and tuned LLVM, the toolchains are rancid.

        RISC-V can never be well suited for HPC because it doesn't define things like how cache coherence
      It's an A72, a bit of an aged design in comparison with Apple M1/M2. Basically it's an overclocked, slightly larger version of the Raspberry Pi (same chip, except the Pi has 4 cores @ 1.5GHz).

        It will be cheap, low power, and reasonably fast for low-end work (simple things like load-balancing HTTP requests and simple databases), but it certainly isn't going to be making its way into a list of supercomputers.

  • by DrMrLordX ( 559371 ) on Tuesday November 29, 2022 @02:45PM (#63089172)

    https://techcrunch.com/2022/11... [techcrunch.com]

    It's Graviton 3E.

    • So, it looks like they finally fixed the "Graviton is too slow for my business case" issue, along with the "You don't have a network-optimized Graviton instance" issue.

      I'll still need to fix the "my code doesn't run on ARM yet" issues on my end before I can start deploying instances, though.
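      If anyone else is triaging the same problem: one low-effort first step is to gate the x86-only paths behind an architecture check so the rest of the code can deploy on Graviton now. A minimal Python sketch - the module names here are hypothetical stand-ins, not real packages:

```python
import platform

# Hypothetical stand-in for a native extension that still ships
# x86-only (e.g. SSE-specific) binaries.
X86_ONLY_MODULES = {"fast_sse_codec"}
ALL_MODULES = {"fast_sse_codec", "portable_core"}

def supported_modules(arch=None):
    """Return which of our native modules can load on this architecture.

    x86_64 hosts get everything; aarch64 (Graviton) hosts skip the
    modules that have no ARM build yet.
    """
    arch = arch or platform.machine()
    if arch in ("x86_64", "AMD64"):
        return set(ALL_MODULES)
    return ALL_MODULES - X86_ONLY_MODULES

print(sorted(supported_modules("aarch64")))
```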

  • ... before ze Quantum Cloud gets here. (lotsa "QC"-s in this sentence, he-he)
  • I hope this helps gaming servers like the ones GW2 uses. WvW could use extra hamsters.
    • Mostly indirection chasing in C++ spaghetti code, not physics simulation. Don't see how better vector instructions will help.

      • by Jimekai ( 938123 )
        Are you saying Inferentia is C++ spaghetti code? How else would one interface with it? I have a cloak of many colors that I wrapped around an inference engine linked to my Ingrid. I would like to offload the matrix work and keep the cloak. Apologies for the long reply. Imagine that everyone has thoughts to contribute. For you at the moment, let's say you researched equal rights and women's pay. You create a repertory grid with the relevant elements and constructs and put it into Ingrid. Assuming your data
      • by JASegler ( 2913 )

        It will take time for gaming to switch over but there are steps being made to utilize vector instructions.

        Unity DOTS is one example. It is a complete reorganization of how you represent things in memory, but it does give significant performance boosts when used properly.
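        The underlying data-oriented idea - keep like fields in contiguous arrays so updates vectorize - isn't Unity-specific. A rough NumPy sketch of the same reorganization (NumPy standing in for Burst-compiled jobs; the entity fields are made up for illustration):

```python
import numpy as np

N, DT = 1000, 0.1

# Array-of-structs style: one object per entity, fields scattered
# across the heap - hard for anything to vectorize.
entities = [{"x": float(i), "v": 0.5} for i in range(N)]

def step_aos(entities, dt):
    for e in entities:
        e["x"] += e["v"] * dt
    return entities

# Struct-of-arrays style: each field is one contiguous array, so the
# whole position update is a single vectorized operation.
xs = np.arange(N, dtype=np.float64)
vs = np.full(N, 0.5)

def step_soa(xs, vs, dt):
    return xs + vs * dt

new_xs = step_soa(xs, vs, DT)
```

Both produce identical results; the SoA form just hands the whole loop to vectorized machinery at once.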
