AMD Gains Ground in Data Center, Laptop CPU Markets (tomshardware.com)

AMD increased its market share in the data center and laptop CPU segments during Q2 2024, according to a new report from Mercury Research. The company captured 24.1% of the data center CPU market, up 0.5 percentage points from the previous quarter and 5.6 points year-over-year. In laptops, AMD's share rose to 20.3%, a one-point increase quarter-over-quarter and 3.8 points year-over-year. The company's revenue share in laptops reached 17.7%, indicating lower average selling prices compared to Intel.

Intel maintained its overall lead, controlling 78.9% of the client PC market. In desktops, Intel gained one point of share and now holds 77% of the market. AMD's data center revenue share hit 33.7%, suggesting higher average selling prices for its EPYC processors compared to Intel's Xeon chips. AMD earned $2.8 billion from its 24.1% unit share, while Intel made $3.0 billion from its 75.9% unit share.
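
The spread between unit share and revenue share implies a relative average selling price (ASP), which a quick back-of-the-envelope check makes concrete. A minimal sketch in Python, assuming a two-player market (which is effectively what Mercury's x86 figures measure):

# Relative ASPs implied by the unit-share and revenue-share figures
# above, assuming the market is just AMD + Intel.
def relative_asp(unit_share: float, revenue_share: float) -> float:
    """Revenue captured per unit sold, relative to the market average."""
    return revenue_share / unit_share

# Data center: AMD has 24.1% of units but 33.7% of revenue.
dc = relative_asp(0.241, 0.337) / relative_asp(0.759, 0.663)
print(f"data center ASP, AMD vs Intel: {dc:.2f}x")  # ~1.60x

# Laptops: AMD has 20.3% of units but only 17.7% of revenue.
lt = relative_asp(0.203, 0.177) / relative_asp(0.797, 0.823)
print(f"laptop ASP, AMD vs Intel: {lt:.2f}x")       # ~0.84x

A ratio above 1 is consistent with EPYC selling at higher average prices than Xeon; a ratio below 1 matches AMD's cheaper laptop mix.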
  • Intel hasn't had a truly competitive datacentre product in a very long time. They live off niche applications where their on-package accelerators can make a difference and on discounts to select customers (which these days is pretty much anyone that asks for a discount). It's getting to the point that Intel TCO is so bad that nobody wants to touch their hardware. While Emerald Rapids was a step in the right direction, it was far too late to market to matter (and was little more than an update to Sapphire

    • Re:Not surprising (Score:5, Interesting)

      by Rockoon ( 1252108 ) on Monday August 12, 2024 @02:13PM (#64699478)
      Their ("Intel" the Brand) underlying structural issue cannot be solved.

      The solution to Intels problems is to spin off their fabs, To stop being a vertically integrated company. For their fabs to become rent-a-fabs. For the cpu designers the freedom to use the best fabs no matter who is running them.

      But with all that, the brand Intel becomes the next Motorola, just an asset thats traded around.
      • But with all that, the brand Intel becomes the next Motorola

        You mean AMD, I'm sure... right?

        Because Intel has a major fabrication problem, that can't be denied... but its CPU architecture is still *fucking excellent*, comparatively speaking.
        Motorola, they are not.

        • "but its CPU architecture is still *fucking excellent*, comparatively speaking."

          Compared to what? Certainly not AMD.

          • Oh, shut the fuck up, dude.
            You always say this stupid shit, and then refuse to back it up with anything but an "uhhh, because I know these things"

            Currently, they're neck and neck in performance per clock.
            In non-top-of-the-line chips, efficiency is about comparable, with some Intel parts taking the win and some AMD parts taking the win; ultimately it comes down to both of them having highly efficient parts that they punish to hit the performance marks they've selected for sale.
            • "In non-top-of-the-line chips, efficiency is about comparable,"

              In their so-called top of the line chips Intel is only barely matching AMD by using vastly more power and running at a much higher temperature, reducing CPU lifespan. We do not measure performance with budget CPUs like you are doing here as an Intel fanboy. Now run along and play in traffic.

              • by kriston ( 7886 )

                Now run along and play in traffic.

                Lame response. Try harder. He's still right about you.

                • Lame response. Try harder. He's still right about you.

                  Not trying hard to try hard here, friend. I think we'll have to disagree.

              • In their so-called top of the line chips Intel is only barely matching AMD by using vastly more power and running at a much higher temperature, reducing CPU lifespan.

                They're both doing this.
                Both the 14900K and the 7950X only lose about 20% of their performance for 50% of their total power usage. That means both chips are being cranked way past what is sane for the sake of hitting a marketing mark, as I said in the first reply.
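
                As a rough illustration (a sketch using this thread's claimed figures, not measured data), the perf-per-watt arithmetic in Python:

                # Perf/W arithmetic behind "lose ~20% performance at ~50% power".
                # Both numbers are the poster's estimates, not benchmarks.
                stock_perf, stock_power = 1.00, 1.00      # normalized stock settings
                limited_perf, limited_power = 0.80, 0.50  # ~20% slower at half power

                gain = (limited_perf / limited_power) / (stock_perf / stock_power) - 1
                print(f"perf/W gain from halving the power limit: {gain:.0%}")  # ~60%

                If the claimed curve holds, both chips ship roughly 60% below their best perf/W point, which is the sense in which they're "cranked way past sane."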

                We do not measure performance with budget CPUs like you are doing here as an Intel fanboy. Now

                There is no "we" on your side of the fence.
                I make purchasing decisions, and unsurprisingly, the market seems to agree with me.
                While AMD continues to make gains, they're still a minority player in the datacenter, and not by a small amount.

                As I im

                • That means both chips are being cranked way past what is sane for the sake of hitting a marketing mark, as I said in the first reply.

                  Eh?

                  Or you know people want fast chips and are prepared to pay the power to get it. I got a 7950x3d and I don't want an arbitrary mark in efficiency, I want it fast.

                  • Or you know people want fast chips and are prepared to pay the power to get it.

                    That's not mutually exclusive.

                    I got a 7950x3d and I don't want an arbitrary mark in efficiency, I want it fast.

                    The argument can be made for some_intel_here as well.

                    • That's not mutually exclusive.

                      You said they're being cranked way past sane: I want it fast. That's not insane.

                      The argument can be made for some_intel_here as well.

                      This isn't about AMD vs Intel so much as being cranked way past what's sane. I've got a 4090 in this machine so the power use is often through the roof. Cranking the CPU way up and it running quite hot is definitely worth it.

                    • You said they're being cranked way past sane: I want it fast. That's not insane.

                      Fair.
                      I'll caveat the claim with, "way past sane for the purposes of evaluating the efficiency of the microarchitecture"

                      This isn't about AMD vs Intel so much as being cranked way past what's sane. I've got a 4090 in this machine so the power use is often through the roof. Cranking the CPU way up and it running quite hot is definitely worth it.

                      I don't disagree with you in the slightest. Trading efficiency for horsepower is a perfectly valid desire.

                      Keep in mind that the argument you jumped in on had context. Particularly:

                      In their so-called top of the line chips Intel is only barely matching AMD by using vastly more power and running at a much higher temperature, reducing CPU lifespan.

                    • OK fair enough.

                • By "marketing mark" do you by chance actually just mean "market" ??

                  The power curves being similar on these things is no coincidence, ya dumbshit. Of course they try to make the fastest chips and of course things land about where the market will bear the added cost of the lower yields inherent in the binning of the best parts.

                  And the 65W desktop market is very price sensitive, while the laptop market has bigger drivers like GPU and display quality.
                  • By "marketing mark" do you by chance actually just mean "market" ??

                    lol.
                    There's a reason both Intel and AMD have marketed new chips as a rebrand of the old, just clocked a bit higher and taking significantly more power.
                    They're setting the market, not you. It's cute that you think you are, though.

                    The power curves being similar on these things is no coincidence, ya dumbshit.

                    That was literally my point, ya dumbshit.

                    Of course they try to make the fastest chips and of course things land about where the market will bear the added cost of the lower yields inherent in the binning of the best parts.

                    Again, my point. If you had actually read the conversation, that was my reasoning for excluding the top-of-the-line chips from the comparison.

                    And the 65W desktop market is very price sensitive, while the laptop market has bigger drivers like GPU and display quality.

                    Objection: speculation.
                    Further, how in the fuck is that relevant at all? We weren't even talking abou

            • We're talking datacentre here, and Intel is in the gutter. Emerald Rapids is slower than Genoa and uses more power. Turin and Turin-dense are nearing release. Intel is screwed.

              • Sure, unless you're running a highly coherent workload where Sapphire Rapids smokes Genoa about 2:1 in terms of performance per clock due to the cross-CCD latency cost.
                See: SQL benchmarks.

                There's a reason we're still using Intel.
    • I wouldn't say Emerald Rapids was just a step in the right direction; it's the first time in quite a while that Intel leaps ahead of AMD in data centre performance. Sapphire Rapids was basically as fast as Milan and came out close to the time AMD released the much faster Genoa. But a year later, Emerald Rapids is actually faster than AMD's last year's release. That is, as long as you have a modern image with the latest drivers - I tried Google's high clock rate c4 Emerald Rapids with an old CentOS webserver image

      • Where do you get that crap?!? In no way did Emerald Rapids outperform Genoa except in some biased Intel slide.

        • by Ecuador ( 740021 )

          We have Genoa and Emerald Rapids servers in our cloud (GCP), they run side by side so I can compare performance.

        • by Ecuador ( 740021 )

          Yeah, those are rather meaningless. Of course a 96-core EPYC is faster than a 64-core Emerald Rapids, by a lot. Duh. But in the cloud we pay per vCPU, not per processor! So the only three things I care about are single-core performance (because there are tasks, like web server requests, which cannot be further parallelised), multi-thread performance *per vCPU*, and price *per vCPU*. I have been using Google's c4 since they were in private preview early this year. At first they were underperforming in multi-thread and it tur
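
          Those three criteria fold into one comparison. A minimal Python sketch of it; c4 is the Emerald Rapids family named above and c3d is assumed here to be the Genoa counterpart, but every score and price is a placeholder, not a GCP benchmark:

          # Per-vCPU comparison described above; all numbers are placeholders.
          from dataclasses import dataclass

          @dataclass
          class InstanceType:
              name: str
              single_core: float     # single-threaded benchmark score
              mt_per_vcpu: float     # multi-threaded score / vCPU count
              price_per_vcpu: float  # $/hour per vCPU

              def value(self) -> float:
                  # Throughput per vCPU per dollar; weight single_core
                  # separately if latency-bound requests dominate.
                  return self.mt_per_vcpu / self.price_per_vcpu

          candidates = [
              InstanceType("c4 (Emerald Rapids)", 1.00, 1.00, 0.040),
              InstanceType("c3d (Genoa, assumed)", 0.95, 1.05, 0.038),
          ]
          for c in sorted(candidates, key=InstanceType.value, reverse=True):
              print(f"{c.name}: {c.value():.1f} MT score per vCPU-dollar")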

          • Of course a 96-core EPYC is faster than a 64-core Emerald Rapids, by a lot. Duh.

            Funny enough, not in coherent workloads.
            Install Postgres, and watch your 96-core EPYC struggle to keep up with your 64-core Xeon.
            This is because the cross-CCD coherency cost is ~130ns, roughly double that of a Xeon and then some.
            Work that stays within a CCD, or work that doesn't care much about coherency at all, does well on the EPYC, though.
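
            A toy model of why that latency matters: when a hot lock or cache line must hop between cores on every operation, coherency latency caps serialized handoffs at roughly 1/latency. The 130ns figure is from this thread; the ~65ns Xeon figure is an assumed ballpark for illustration:

            # Serialized cache-line handoffs cap out at ~1/latency.
            # 130ns cross-CCD is claimed above; 65ns for a monolithic
            # Xeon is an assumption, not a measurement.
            NS = 1e-9
            for name, latency_ns in [("EPYC cross-CCD", 130),
                                     ("Xeon cross-core", 65)]:
                print(f"{name}: ~{1 / (latency_ns * NS) / 1e6:.1f}M handoffs/sec")
            # ~7.7M vs ~15.4M -- about the 2:1 gap cited for SQL above.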

            • by Ecuador ( 740021 )

              This is because the cross-CCD coherency cost is ~130ns, roughly double that of a Xeon and then some. Work that stays within a CCD, or work that doesn't care much about coherency at all, does well on the EPYC, though

              Well, yes, because EPYC's L3 is then much faster if your workload fits in that CCD - it's a classic design choice that offers a benefit along with a drawback. Genoa doubled the window each core gets, so that improves things, but Postgres is indeed the one major benchmark often cited as an example of the drawback the EPYC L3 design brings. I am not sure if it affects other databases too; we use MySQL and it is not affected the same way AFAICT (well... it is one of the less "sophisticated" RDBMSs), but in general

              • MariaDB 10 is definitely affected similarly (for us).
                We see about 2:1 performance there on a *Rapids vs. $your_epyc_here for any number of threads greater than a CCD.
                • I will note that we're using bare hardware, not cloud (we have our own datacenters).
                  And if you need 96 cores of firepower that isn't doing a pretty limited set of work (RDBMS, large-scale software traffic forwarding and analysis [DPDK]), then the EPYCs are a superior offering.
                  If you are doing one of those workloads, EPYC really kind of sucks. In a world where the Xeon and EPYC had a similar price per core, there would simply be no contest. The Xeon is the better part. Unfortunately, Intel isn't stupid
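
                  A practical corollary of the "work within a CCD" point above: on Linux you can pin a coherency-heavy process to one CCD's cores. A minimal sketch; the core IDs are hypothetical, so check your topology (e.g. lscpu -e) first:

                  # Confine this process to one CCD so coherency traffic
                  # stays on-die. Cores 0-7 sharing a CCD/L3 is an
                  # assumption; verify against your actual topology.
                  import os

                  CCD0_CORES = set(range(0, 8))
                  os.sched_setaffinity(0, CCD0_CORES)  # 0 = current process
                  print("running on CPUs:", sorted(os.sched_getaffinity(0)))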
    • Intel hasn't had a truly competitive datacentre product in a very long time.

      This is a very interesting claim. Many would believe that Intel hasn't had a product that could compete in a technological sense, and I wouldn't necessarily disagree. However, the financial history overwhelmingly shows that Intel has dominated the last decade in terms of dollar sales. AMD has never been close. And given the choice between dominating technology or sales, I assume that any company in its right mind would always choose sales.

      Simply thinking that Intel's overwhelming dominance even during a mi
