AMD Overtakes Intel in Datacenter Sales For First Time (tomshardware.com) 48
AMD has surpassed Intel in datacenter processor sales for the first time in history, marking a dramatic shift in the server chip market. AMD's datacenter revenue hit $3.549 billion in Q3, edging out Intel's $3.3 billion, according to SemiAnalysis.
The milestone ends Intel's decades-long dominance in server processors, a market where it held over 90% share until recent years. AMD's EPYC processors now power many high-end servers, commanding premium prices even though they sell for less than comparable Intel chips.
Re: (Score:2)
Because they're the best available. Nobody can beat Turin and Turin dense.
Re: Most of these "datacenter sales" (Score:3)
That's not at all why. Such applications aren't at all performance hungry -- if they were, they wouldn't use shit performance languages like Java, python, js, etc. They certainly wouldn't if they gave one shit about energy efficiency either. In most cases it's because something along the software stack only comes in x86 form. Whether that's hypervisors, databases, sdlan controllers, etc. In other cases it's because they rely on some proprietary application that is x86 only, even if it's written in supposedly portable shit like java, because shit like java only runs reliably on whatever you developed it on. A fact java fanboys are keenly aware of but will never admit.
Re: Most of these "datacenter sales" (Score:5, Informative)
CPU performance per watt... why did we come up with this metric? Was it because the buying side of the industry was crying out for it, or because Intel could no longer squeeze major raw performance gains out of their chips each new generation?
I was a data center guy for many years. At no point was the amount or cost of power ever a major issue or part of my cost structure due to CPU power use. Rarely did the CPUs hit anything like 100%, and when they did it wasn't for long, and power was sold to me by the circuit. Whether I ran the circuit hot 24/7 or left it cold and unused for months at a time, I paid the same amount for that circuit, not for the power I actually used.
Power is certainly a data center cost, but not enough of a factor by itself to justify buying CPUs based on their power efficiency or power envelope. I did buy a few racks of the power-efficient Intels in my early days, before I had the experience to know any better. After that I bought on performance per $, not performance per watt.
No matter what company I worked for, big or small, and no matter the application, CPU power use was never a serious issue. For certain server types/use cases I bought the absolute fastest CPU I could get; the rest got normal mid-range, nothing-special CPUs because they cost a lot less for only being 20% slower.
Why did we use languages like Java and Python? Because the hardware costs were irrelevant next to the cost of a developer's time and time to market. Who gives a shit if I had to buy 3x as many servers to run our shitty Java apps if we could launch months earlier?
Re: (Score:2)
For datacenters the limiter is power *out*, i.e. cooling.
The room is typically rated on watts per square foot that can be extracted. Higher performance per watt means you can fill more of the vertical space of a rack with compute.
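To put rough numbers on that, here's a quick sketch; every figure below is hypothetical, chosen only to illustrate the arithmetic, none of it comes from the thread:

```rust
// Back-of-the-envelope: how a room's cooling rating (W per square foot)
// caps how much compute you can stack in a rack.
// Every number here is a made-up example, not data from the discussion.
fn main() {
    let cooling_w_per_sqft = 150.0; // what the room can extract
    let sqft_per_rack = 30.0;       // rack footprint plus its share of aisle space
    let rack_budget_w = cooling_w_per_sqft * sqft_per_rack; // 4,500 W per rack

    let hungry_server_w = 350.0;    // less efficient 1U box
    let efficient_server_w = 250.0; // more efficient 1U box, same performance

    println!(
        "{:.1} kW budget: {} servers at {} W vs {} servers at {} W per rack",
        rack_budget_w / 1000.0,
        (rack_budget_w / hungry_server_w) as u32,
        hungry_server_w,
        (rack_budget_w / efficient_server_w) as u32,
        efficient_server_w,
    );
}
```

Same room, same rack: the better performance-per-watt boxes let you populate 18 slots instead of 12 before you hit the cooling limit.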
Re: (Score:2)
Sure, agreed, but if you choose your data center carefully you will get enough cooling per rack that this isn't a big issue. kW per rack was a major factor in my data center location decisions. I saw some nice places over the years with quality staff, but they were old buildings with pretty low kW/rack, so I had to keep looking.
My typical 42U rack for something generic like web servers was 40x 1U pizza boxes, with switches, consoles, or other utility crap in the other 2U depending on need. Database racks were the same idea.
Re: Most of these "datacenter sales" (Score:2)
It depends on the kind of stuff you run. Low-latency for example is by definition high-density, and getting more than 50kW per cabinet is usually a problem.
Re: (Score:2)
My app stacks were pretty normal. Nothing anyone here would find shocking. Typical 2 or 3 tier applications, some kind of db, memory caching layer in some cases, almost always a fuck ton of storage.
In that very common situation, 50 kW racks weren't an issue. I rarely had to reconfigure my rack layouts to account for excessive power or heat.
All that being said, ymmv, sometimes by a lot and there are definitely cases in the HPC world where you'll need to squeeze out every drop of cpu power while trying to co
Re: (Score:3)
"Java: Write once, run nowhere."
There's an argument to be made that webservers need x86 specifically because of the bloat. Nobody likes nice, neat, tidy work. You gotta jam-pack every web page with ten thousand libraries or frameworks or Google-derived fonts or Facebook/Meta trackers or whatever other nonsense the marketing team decides they need. Build a decent, lightweight site and you can run it on a Raspberry Pi, but you can't track every click, and you haven't made it buzzword-compliant.
Re: (Score:2)
Nobody likes nice, neat, tidy work.
Engineers do, but businesses make a trade-off when they hire people with limited experience due to cost limitations. From a business point of view, it's the engineer's job to deliver a functional product with a certain level of quality. How they measure quality is by measuring user experience and customer sentiment. Neat, tidy work indirectly impacts that but, again, that's not something that businesses can affect unless they choose to pay for more experienced developers.
And it's not that inexperienced deve
If you think Java performance is shit... (Score:2)
That's not at all why. Such applications aren't at all performance hungry -- if they were, they wouldn't use shit performance languages like Java, python, js, etc. They certainly wouldn't if they gave one shit about energy efficiency either. In most cases it's because something along the software stack only comes in x86 form. Whether that's hypervisors, databases, sdlan controllers, etc. In other cases it's because they rely on some proprietary application that is x86 only, even if it's written in supposedly portable shit like java, because shit like java only runs reliably on whatever you developed it on. A fact java fanboys are keenly aware of but will never admit.
...you don't know what you're talking about. Java's runtime server performance is competitive with every technology out there. If you think its performance is shit, you haven't tried the alternatives... or you're writing some really expert-level assembly code. Most benchmarks indicate that it is on par with native code for long-running processes, and the memory utilization isn't very bad and is typically superior to the others I've used, like Go, C#, Node.js, Python, Ruby, and even legacy ones like ColdFusion.
Re: If you think Java performance is shit... (Score:2)
...you don't know what you're talking about. Java's runtime server performance is competitive with every technology out there.
https://aws.amazon.com/blogs/o... [amazon.com]
What the study did is implement 10 benchmark problems in 27 different programming languages and measure execution time, energy consumption, and peak memory use. C and Rust significantly outperformed other languages in energy efficiency. In fact, they were roughly 50% more efficient than Java and 98% more efficient than Python.
It's not a surprise that C and Rust are more efficient than other languages. What is shocking is the magnitude of the difference. Broad adoption of C and Rust could reduce energy consumption of compute by 50%, even with a conservative estimate.
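For a rough sense of where those percentages come from, here's a minimal sketch; the normalized energy figures are my approximations of the values commonly quoted from the study the post references, so treat them as assumptions rather than numbers taken from the article:

```rust
// Rough arithmetic behind the "50% vs Java, 98% vs Python" claims, using
// approximate normalized energy figures (C = 1.0 baseline). The exact values
// are assumed here, not quoted from the AWS post.
fn main() {
    let c = 1.00_f64;
    let rust = 1.03_f64;    // Rust lands within a few percent of C
    let java = 1.98_f64;    // Java uses roughly twice the energy of C
    let python = 75.88_f64; // Python is well over an order of magnitude worse

    // Percentage less energy used when moving from `from` to `to`.
    let saving = |from: f64, to: f64| (1.0 - to / from) * 100.0;

    println!("C vs Java:    ~{:.0}% less energy", saving(java, c));
    println!("C vs Python:  ~{:.0}% less energy", saving(python, c));
    println!("Rust vs Java: ~{:.0}% less energy", saving(java, rust));
}
```

With those inputs the C-vs-Java saving comes out around 50% and C-vs-Python around 99%, which is the ballpark the quoted summary is describing.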
Though I'm one of those weirdos who writes basically everything in Rust. And the efficiency isn't even why; rather, the semantics, syntax, and pointer rules make it super easy to write code that just works exactly the way I intended, without something insanely stupid like type erasure biting you at runtime.
That is to say, regardless of what platform I target, it's likely to run to completion exactly as expected. Really. Doesn't matter if it's Windows, Debian, some old Mac OS
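As a small aside on the type-erasure jab, here's a hedged sketch of what Rust does instead (my illustration, not the commenter's code): generics are monomorphized per concrete type and checked at compile time, so a mismatch is a build error rather than a runtime surprise.

```rust
// Toy illustration: Rust generics keep their concrete types, so there is
// nothing to "erase" and nothing to blow up at runtime.
use std::fmt::Display;

fn describe<T: Display>(items: &[T]) -> String {
    items
        .iter()
        .map(|i| i.to_string())
        .collect::<Vec<_>>()
        .join(", ")
}

fn main() {
    let ints = [1, 2, 3];
    let words = ["epyc", "xeon"];

    println!("{}", describe(&ints));  // compiled against i32
    println!("{}", describe(&words)); // compiled against &str

    // describe(&[1, "mixed"]); // rejected at compile time: the slice
    //                          // elements must share one concrete type T
}
```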
Re:Most of these "datacenter sales" (Score:5, Insightful)
Why do you need these bloated X86 chips for this?
Oh, my sweet, summer child.
Sometime you should actually look at a block diagram of a modern CPU, so you have some vague clue what you are talking about. The x86 decoder is nearly the smallest thing on the whole chip. There are functional units which are bigger than it is.
amd64 processors are, bang for buck, watt for watt, and by every other measure, the most powerful processors you can buy. AMD's mobile processors outbenchmark Apple's at about the same power consumption, even with off-processor memory, and Apple's are considered to be the best ARM CPUs around.
This isn't a natural law or anything, it is possible for others to beat AMD, but nobody has done so since they took the lead from Intel and it doesn't look like anyone will do it soon.
Re: Most of these "datacenter sales" (Score:2)
You can get a single EPYC chip with 192 cores now in a 500 W envelope. That's about 2.6 W/core, with 12 DDR5 channels and 128 PCIe lanes to feed it. Full fat cores, too. VM hosting heaven.
Re: (Score:2)
Your big database or mostly-cached web server, say, is still going to need a whole bunch of RAM and some high speed networking, possibly storage, so you are basically buying a big fat memory controller and lots of P
BeanCounteritis (Score:1)
Bean-counters who talked companies into under-funding R&D and quality control, resting on their laurels instead, have ruined Intel, Boeing, VW, HP, Burger King, GE, KFC, and countless others.
Damn You!
Re:BeanCounteritis (Score:5, Insightful)
Companies really ought to stop judging executive performance by the quarter. Honestly, at that level, I think 2 years should be the minimum reporting period.
Shareholders look at their returns quarterly, so CEO performance is judged the same way. You're not changing Wall Street's mind on how to look at these things. The only option is to never go public.
Re: (Score:2)
Shareholders look at their returns quarterly, so CEO performance is judged the same way. You're not changing Wall Street's mind on how to look at these things.
Some rando on /. certainly won't change any minds. But Wall Street didn't spring fully formed from the Big Bang and minds can be changed. If we as society agree that the ultra-short horizon of many companies is bad, we can lobby and make laws to change things. It won't be fast and it won't be trivial, but it's absolutely possible.
Problem is: "We as society" has stopped existing. We're now neatly sorted into manageable bubbles.
Re: (Score:1)
> The only option is to never go public.
Which is not necessarily a bad idea. Some institutional investors are looking for stable long-term investments, so private investing can still bring in funds.
There are quite a few here who are doing well: https://www.forbes.com/lists/l... [forbes.com]
Re: (Score:2)
Hit-and-run investors don't care about the long term: they know they are milking the cow to death, so they sell the milk and the cow quickly. ROI models are generally myopic. Most Japanese and German companies told ROI theory to go F itself and focused on quality instead (with exceptions), and that's who owns the precision-instrument markets.
Now do Nvidia (Score:3)
I get why. Why bother investing in a competitor when you can just buy stock in the dominant player and watch it go up. It's not like competition is much of a thing anymore
But Nvidia has business practices that would make Microsoft blush. I'd love to see some actual antitrust law enforcement. I wish we would stop calling the people who go after white-collar crime "bureaucrats". You notice we don't do that for other crime; we call those people cops or police.
Re:Now do Nvidia (Score:4, Insightful)
Well they're all mega corporations (Score:2)
The problem with Intel is they're famous for doing things like threatening to pull OEM pricing if they don't get favorable terms in other markets. Every now and then they get caught, but the hand slap is so minimal it's not even a small fraction of the profits they made from the illegal activity.
It's like how everyone's always telli
Re: (Score:2)
Remember that business customers get a much more limited warranty than consumers do. Intel's business customers are having to absorb the cost of replacing all those Intel CPUs that are dying now.
Data centre customers have to replace a large number of servers early, and companies like Dell who sell to consumers have to deal with massive numbers of warranty claims.
Because Intel likes to change sockets every couple of years, their current generation CPUs can't be installed in those affected systems either.
Re: (Score:2)
Re: (Score:2)
This is just a YouTuber, and what he was told was that Intel pays manufacturers, in backroom deals, not to make AMD motherboards in black or white, only in other "ugly" colors. Take it with a grain of salt as to how true it might be.
It's complete, total, and obvious bullshit. My last two AMD boards and the current one were/are all black. One Gigabyte and two ASRock. And virtually all of the other boards I looked at all three times were black, too.
Re: (Score:2)
Re: (Score:2)
Sure, I'll believe you drinkypoo. Your history on this forum is nothing but stellar. Bahahahaha.
When I have made mistakes I have admitted them, unlike the cowardly cucks who continually attack me from anonymous accounts, and their fellows who mod me down when they have no rebuttal to my statements. I have consistently had more fans than freaks, and my karma has almost never done anything other than sit right up next to the cap.
Go ahead, tell us all about how bad that is.
Re: Now do Nvidia (Score:2, Insightful)
It boggles my mind that there isn't more investment in AMD and even Intel's graphics divisions to compete with Nvidia.
I get why. Why bother investing in a competitor when you can just buy stock in the dominant player and watch it go up. It's not like competition is much of a thing anymore
No, you don't get why. As you said, it boggles your mind. It does that because it's beyond your capacity to comprehend just how fucking hard it is to develop this. In your simplistic little world a company can just hire anybody off the street and train them up to be an expert IC engineer but they won't because greed.
Completely fucking wrong. Most people straight up don't even have the aptitude for it. And here's an easy way I can prove this to you: Go look up how to create a simple 4-bit adding calculator o
We're talking about a multitrillion dollar market (Score:2)
But again why would you bother spending money investing in a competitor when you can just buy the market leader? I
Re: (Score:2)
There is so much money there that it doesn't make sense not to invest.
You're making a TON of assumptions here without realizing it. Before even starting, you're talking about building a new architecture from scratch, because unlike with CPUs, there are no reference architectures you can just license and build off of. Maybe, maybe you could buy one off somebody who is exiting the market and build on it, like, say, Intel. But historically nobody really does well with the shit that Intel sells off, probably because it fundamentally was never good to begin with. Apple
Re: (Score:2)
It boggles my mind that there isn't more investment in AMD and even Intel's graphics divisions to compete with Nvidia.
AMD is investing more in their graphics division as we speak. Intel's graphics division is barely worth investing in; the only place they have any kind of win is video encoding, and we all already do that on the same GPUs we use for gaming or other purposes.
But Nvidia has business practices that would make Microsoft blush.
wat
Re: (Score:3)
> and Intel's graphics divisions to compete with Nvidia.
You DO realize that Intel has a LONG history [wikipedia.org] of trying NUMEROUS times [computer.org] and utterly failing in the market, right?
The more famous ones include:
* Intel i740
* Intel i860 / i960
* Larrabee
* ARC
Even ARC has 0% market share [extremetech.com] now. The only reason Intel has 7% on the Steam Hardware Survey [steampowered.com] is integrated GPUs; discrete cards such as ARC show 0.024% [steampowered.com].
This thread [reddit.com] on reddit has a summary of Gen 1 through Gen 12.
Re: (Score:2)
Whoops, that should be 0.24% for ARC.
Re: Now do Nvidia (Score:1)
Re: (Score:2)
The i860 predates [computer.org] Intel's graphics division.
The i860 was used as a graphics accelerator [wikipedia.org].
Re: Now do Nvidia (Score:1)
Various reasons (Score:2)
The biggest will be Intel's aversion to QA. Their reputation has been savaged, and the very poor benchmark results for the newest processors will not convince anyone Intel has what it takes.
I have no idea whether they could feed the designs for historic processors, from the Pentium up, along with the bugs found in those designs, into an AI to see if there's any pattern to the defects, some weakness in how they explore problems.
But, frankly, at this point, they need to beef up QA fast and they need to spot poo