AMD Unveils the 12-Core Ryzen 9 3900X, at Half the Price of Intel's Competing Core i9-9920X CPU (techcrunch.com) 261
AMD CEO Lisa Su today unveiled news about the company's chips and graphics processors that will increase pressure on competitors Intel and Nvidia, both in terms of pricing and performance. From a report: All new third-generation Ryzen CPUs, the first 7-nanometer desktop chips, will go on sale on July 7. The showstopper of Su's keynote was the announcement of AMD's 12-core, 24-thread Ryzen 9 3900X chip, the flagship of its third-generation Ryzen family. It will retail starting at $499, half the price of Intel's competing Core i9-9920X CPU, which is priced at $1,189 and up. The 3900X has a 4.6 GHz boost speed and 70 MB of total cache, and uses 105 watts of thermal design power (versus the i9-9920X's 165 watts), making it more efficient. AMD says that in a Blender demo against the Intel i9-9920X, the 3900X finished about 18 percent more quickly. Starting prices for other chips in the family are $199 for the 6-core, 12-thread Ryzen 5 3600; $329 for the 8-core, 16-thread Ryzen 7 3700X (with 4.4 GHz boost, 36 MB of total cache, and a 65-watt TDP); and $399 for the 8-core, 16-thread Ryzen 7 3800X (4.5 GHz, 32 MB cache, 105 W).
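A quick back-of-the-envelope way to see the pricing gap is dollars per core and per thread. A minimal sketch using only the figures quoted above (model names and numbers come from the announcement; the arithmetic is illustrative, not a benchmark):

```rust
// Dollars per core and per thread for the launch prices quoted above.
fn main() {
    // (model, launch price in USD, cores, threads) -- figures from the summary
    let lineup = [
        ("Ryzen 5 3600", 199.0, 6, 12),
        ("Ryzen 7 3700X", 329.0, 8, 16),
        ("Ryzen 7 3800X", 399.0, 8, 16),
        ("Ryzen 9 3900X", 499.0, 12, 24),
        ("Core i9-9920X", 1189.0, 12, 24),
    ];
    for (model, price, cores, threads) in lineup {
        println!(
            "{model}: ${:.0}/core, ${:.0}/thread",
            price / cores as f64,
            price / threads as f64
        );
    }
}
```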
$329 (Score:5, Informative)
It looks to me like the $329 chip is the best buy if you need a high-performance chip. Low power, high core count, and high clock speed.
Re:$329 (Score:5, Interesting)
^^^ This.
The Ryzen 7 3700X at $329 will make a killing once launched. 8 cores/16 threads with 4 MB of L2 and 32 MB of L3 (36 MB total) at a 3.6 GHz base (4.4 GHz turbo) frequency... and a TDP of only 65 W. That's just bananas.
I need to dig up stats, but this might be the most power-efficient x86 CPU ever created.
Re: (Score:3)
Re: (Score:2)
Right, I was going to do my next build around it, but everybody will be doing that so I guess it means I need to go to the 105 watt part with 12 cores :-)
Re: (Score:2)
Poor you!
Re:$329 (Score:5, Interesting)
Re: (Score:2)
It's a good bump over the 2600 that also used to be $199: +200 MHz base, +300 MHz boost, double the L3 cache, IPC improvements, PCIe 4... but I'm mostly curious to find out if their dual-chiplet design lets you have a no-compromise 12-core desktop. Threadripper was impressive, but having multiple CCXes and a NUMA architecture has its drawbacks. If you can pay $300 more to have a threading monster on top of everything else, it's pretty good value, or rather a relatively small premium for extreme performance. It's a ful
Re: (Score:3, Interesting)
Oh, Threadripper 3.0 (Castle Peak) is absolutely coming and using the same setup. The question was if running the I/O hub in a switching mode rather than a dedicated mode like you can do with only one chiplet comes with drawbacks for single thread performance and minimum latency. If it's no issue for 12/16 cores on Ryzen it'll be the same for TR3 and Epyc Rome. But Ryzen will be released first to answer that question plus it's probably the only one cheap enough that you'd try to make it do everything. If yo
Re: (Score:2)
I think the value of OCing AMD chips has long passed for anyone not engaging in some very over-the-top watercooling or other more extreme efforts. Few people have overclocked their chips past the speeds that single-core boost gets out of the box, and those achievements are great for getting decent Cinebench scores and not much else.
What may be interesting is what you could do with the 12-core chip. At that point I'm wondering if using Ryzen Master to kill 4 cores would let you boost the remainder to max speed out o
Re: (Score:2)
Re: $329 (Score:3, Informative)
No, according to these benchmarks, AMD has a performance *advantage* over Intel:
https://www.anandtech.com/show/14407/amd-ryzen-3000-announced-five-cpus-12-cores-for-499-up-to-46-ghz-pcie-40-coming-77
Re: (Score:2)
Re: (Score:2)
Except their promo materials for Zen and Zen+ were completely in line with performance. Indeed, prior to the full information release at CES that year, their promo materials for Zen understated its performance.
Re: (Score:2)
Re: (Score:2)
The TDP is much higher, which guarantees higher sustained clocks. Kinda like how you have an R7 2700X at 105 W and an R7 2700 at 65 W.
At least they didn't return to the first-gen days, when you had a lot more confusion: the 1700, the 1700X, and the 1800X all had the same core count.
You paid $100 extra to go from the 1700X to the 1800X, and it was just 200 MHz at the same 95 W TDP. Jumping from 65 W to 105 W is quite an increase in sustained performance, by comparison.
Re: (Score:2)
Or pick up the previous generation now for 2/3 the cost (but somewhat more power draw).
It's hard to go wrong either way but I upgraded my primary desktop to G2 as soon as G3 was announced (a 2700 with a Gigabyte board that can handle 4x16GB of RAM) and it's damn hard to complain. Figuring out fan readings on Debian has been the only headscratcher.
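On the Debian fan-reading headscratcher: the kernel exposes motherboard sensors through the same hwmon sysfs tree that lm-sensors reads. A minimal sketch that walks /sys/class/hwmon and prints fan RPMs (which sensors actually appear depends on the board's Super I/O driver being loaded, so output varies by machine):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Walk the hwmon sysfs tree; each hwmonN directory is one sensor chip.
    for entry in fs::read_dir("/sys/class/hwmon")? {
        let dir = entry?.path();
        // The "name" file identifies the driver/chip (e.g. "k10temp").
        let chip = fs::read_to_string(dir.join("name")).unwrap_or_default();
        let chip = chip.trim();
        for file in fs::read_dir(&dir)? {
            let path = file?.path();
            let fname = path.file_name().unwrap().to_string_lossy().into_owned();
            // fan*_input files report fan speed in RPM per hwmon convention.
            if fname.starts_with("fan") && fname.ends_with("_input") {
                if let Ok(rpm) = fs::read_to_string(&path) {
                    println!("{chip} {fname}: {} RPM", rpm.trim());
                }
            }
        }
    }
    Ok(())
}
```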
That 7nm sure is pretty though.
Re: (Score:2, Funny)
You need to go back to your cognitive therapy sessions.
Intel sweetens the pot... (Score:5, Funny)
"For the extra 50% money you're getting a free backdoor and cloud access to your data, even when your machine is off! WHO COULD REFUSE? And if they do, we'll tighten screws on them with vendor contracts until they capitulate."
hopeful for GPU price wars (Score:2)
Re: (Score:2)
So you hope that AMD lowers prices on their GPUs so that it forces Nvidia to do the same, and then you will reward Nvidia by buying their hardware? Why would AMD (note, it's no longer ATI) do this? Why drop their prices and _still_ not be rewarded for it??
People like you are the reason that Nvidia cards cost $800+. You will pay whatever price Nvidia (and Intel) demands, and then wonder why the prices are so high.
Re: (Score:2)
I had an ATI Mach64 that did nothing it claimed it could do. Even the game that came with it lost textures occasionally. Couple that with the drivers and, well, it was a bad experience
Re: (Score:2)
Lucky for you and your vow, nobody makes ATI GPUs any more. However I can recommend Radeon GPUs, particularly Radeon VII, which is at the moment the best value in a high end GPU in the known universe. Or if that breaks your bank then RX 580 is currently the top selling discrete GPU, for good reason.
Of Course I Will Use It to Full Potential (Score:2)
I do design, raytracing animations, video editing, transcoding, some science-oriented stuff using Paraview and ImageJ, and even a bit of gaming, so I am looking forward to finally upgrading my old Sandy Bridge 2600K desktop.
In Paraview and ImageJ, I keep running out of memory at 24 GB on my 4800MQ laptop, though I am glad I bought good memory, because it doesn't have problems until every bit of it is gone. 64 GB of memory is still so expensive, it doesn't make sense to scrimp on a new processor because the me
Re: (Score:2)
No. That is the width of the smallest structures. Of course there are larger ones as well. Maybe look at the definitions before you claim nonsense? Also, fanboi much?
Re: (Score:2)
You can hunt around for some process dimension that happens to match the node name but if you find one, it is pure coincidence. Fin width for the 7nm node is 3.5nm [mpedram.com].
Tantalizingly equal to half of 7nm, no? But in reality, today they just pull node names out of their collective butts. Most of the industry tries to use more or less similar node names, except Intel, for no discernible reason.
Re: (Score:2, Troll)
intel makes more powerful chips, and when Intel goes to the real 7nm they'll blow this stuff away
Re: lolz calling them 7 nm (Score:2)
Re:lolz calling them 7 nm (Score:5, Insightful)
intel makes more powerful chips,
Not when they include fixes for MELTDOWN.
and when Intel goes to the real 7nm
That's if, not when.
Re: (Score:3)
again, AMD isn't making 7 nm chips, instead marketers are calling something 7 nm that isn't
So I'd say the "if" applies more to AMD; Intel will do it.
Sure, power at a budget price is a good thing, and AMD is doing that. They blow away Intel on price.
Re: (Score:3)
TSMC's 7nm process node is roughly equivalent to Intel's 10nm process node. The problem is that TSMC is shipping tons of 7nm parts today, and Intel still hasn't launched mainstream 10nm processors. They're saying they expect them on store shelves by the holidays, but probably won't be shipping them in significant volume until next year.
Re:lolz calling them 7 nm (Score:5, Interesting)
And both are right in the terminology used. There is no fixed definition. The TSMC definition of 7nm describes a specific process, and that process is as "7nm (TSMC)" as it gets. Also, that comparison does stipulate that Intel's 10nm actually works for mass production. It does not, and may well require larger structures to ever work for that. Although, I heard they have given up on it. Skipping a step is a very high-risk operation, so it is possible that Intel's 7nm will never really work either.
Re: (Score:2)
Thing is that TSMC 7nm+ doesn't offer better performance and may even drop perf back a bit. Whereas 5nm gives another performance jump and additional density/power drops.
Re: (Score:3)
I predict that Intel will have to close shop and go fabless
If anything, it is the CPU design that gets spun off. Intel's big problem is their vertical structure. They have fabs they want to make things with, but everything they make is Intel-branded. That means a marketing team. That means no renting out downtime. A fab that sits idle 40% of the time isn't making them money 40% of the time. Meanwhile the newest fabs are running basically 100% of the time but also blowing off 40% to bad yields. Intel needs to dump the vertical integration so as to get those older fabs ru
Re: (Score:3)
Well, the only thing that allowed Intel to dominate the CPU market was a superior manufacturing process not available to the competition. Their CPU design was basically always worse than AMD's. If you look at people who stream games (basically impossible from one system on Intel, no problem at all on AMD), or at Spectre and Meltdown, or at how long they lagged in getting the memory controller into the CPU, this becomes pretty obvious. There is also a reason that this is the AMD64 architecture, not the IA-64
Re: (Score:3)
Why are you worrying about a label? Neither company is accurately describing their features, so if you're going to bad-mouth one, don't neglect the other. But what's important is how it works, how large a complete system built with the chips is, how much power it dissipates, etc. Marketing speak is a tissue of lies from all parties. And you can't look and tell the difference between 10 nm and 7 nm, even with a microscope, because the chips are sealed. So it's irrelevant. (Actually, it would be irrelev
Re: (Score:3)
again, AMD isn't making 7 nm chips, instead marketers are calling something 7 nm that isn't
You could say that AMD's 7nm is roughly equivalent to Intel's 10nm; however, you'd have to acknowledge that AMD's suppliers are churning those chips out whereas Intel has had many problems with yields at 10nm. Going to 7nm for Intel isn't likely to solve those yield problems.
Re: (Score:3)
again, AMD isn't making 7 nm chips, instead marketers are calling something 7 nm that isn't
That's not exactly correct. All the fabs pick some feature size and measure that in nm. Whenever they get an nm bump, it's that feature that shrinks. Intel picks a different feature than TSMC, but theirs is no more or less correct than TSMC's. To top it off, comparing feature size is about as useful as comparing MHz for different architectures.
Re: (Score:2)
To top it off, comparing feature size is about as useful as comparing MHz for different architectures.
No, because with MHz you can just look up the operations per cycle for the features you're using. So if you only have MHz you can't compare yet, but you're almost ready to once you add some static information about the processor class.
With feature size, though, you get one dimension, but in a context where the raw dimensions don't clearly predict any of the performance characteristics. They're all different shapes, and everybody measures a different dimension and only advertises that one number, but having all th
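To make the MHz side of that comparison concrete: frequency only becomes comparable once you multiply in per-architecture instructions per cycle, which is exactly the "static information" mentioned above. A toy sketch with invented IPC values, purely for illustration:

```rust
// Effective throughput ~= clock (GHz) x instructions per cycle (IPC).
// The IPC figures below are assumptions for illustration, not measurements.
fn throughput(ghz: f64, ipc: f64) -> f64 {
    ghz * ipc * 1e9 // rough instructions per second
}

fn main() {
    let a = throughput(4.6, 1.3); // hypothetical chip A: higher clock, lower IPC
    let b = throughput(4.0, 1.6); // hypothetical chip B: lower clock, higher IPC
    println!("A: {:.2e} inst/s, B: {:.2e} inst/s", a, b);
    // B wins despite the lower clock: frequency alone doesn't decide it.
}
```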
Re: (Score:2)
Not when they include fixes for MELTDOWN.
If I could un-Meltdown my Ryzen for improved performance I would. I couldn't care less about completely irrelevant security improvements and my home has windows made of glass not steel for exactly the same reason.
What are your windows made of? Why do you not take your home security seriously?
Re: (Score:3)
Re: (Score:2)
why should anybody give a flying fuck what nm its on?
It's called e-peen. This is one of the rare cases when smaller is actually better.
Re: (Score:3)
I have an even better argument...why should anybody give a flying fuck what nm its on?
Intel had two technological advantages over AMD. One was their unsafe speculative execution, which led to MELTDOWN, and the other was their process technology. Well, they no longer have superior process technology, and since they don't have superior design either, what have they got? Answer: nothing but a name, and a dominant position in the market that they have abused in the past.
Re:lolz calling them 7 nm (Score:5, Informative)
Re: (Score:2)
By that time (if it ever happens given Intel's multiyear schedule slips) the industry will already be on 5nm and working on 3nm.
The Swiss Cheese of processors (Score:2)
Re: (Score:2)
What I want to know is how to disable the security in AMD chips to get that speed boost.
What I also want to know is why you live in a house without windows. I mean, you do live in a house without windows, right? Clearly you think security is a binary on/off rather than a risk-assessment process, and therefore it's unfathomable to think that you would be concerned about speculative execution attacks while still having actual windows in your house.
Re: (Score:2)
What I want to know is how you're managing to type that with your front door open
Well that should be an obvious answer. It's because I risk assess my risks.
Unfortunately, the whole class of chiefly Intel-specific exploits that are defined by Spectre, Meltdown, ZombieLoad, and so on are genuinely something to be concerned about; as is any silent data exfiltration of private keys, bitcoin wallets, passwords and other private data, you name it, that have practical exploits shown to be working.
Except they haven't been shown to do anything outside a lab in carefully controlled conditions where practically limitless information is known about the computer. Yes, the consequence is high, but the likelihood of your home computer being affected is so close to 0 that maybe you should be more concerned about someone breaking into your house mid-bdsm-romp.
We do get it, your PC is a toy, you don't put it to serious use
Quite the opposite. It contains my financial data, and encrypted work documents
Re: (Score:2)
If you're comparing Spectre to your car's brakes then you either must be on the watch list of every three-letter agency in the world, or you seriously suck at risk assessment.
Re: (Score:2)
Intel makes _weaker_ designs, but until a while ago, they had a superior process that compensated and even put them a bit ahead. They do not anymore.
Re: (Score:2)
Intel makes _weaker_ designs, but until a while ago, they had a superior process that compensated and even put them a bit ahead. They do not anymore.
I find that hilarious, mostly because I remember the shrill cries about the sky falling when AMD started selling their fabs.
Now they're not behind on fabrication anymore... because they're using somebody else's fab. LOL
Originally, AMD had high quality fab and lower quality design, but they seem to have had a roadmap all along.
Re: (Score:3)
Of course they will. Intel will make a more powerful chip "tomorrow" and then the "day after tomorrow" AMD will make one more powerful. That is the way it should be.
But that is not the right statement, "Intel will make a more powerful chip tomorrow." That is a given. The question you should be asking is, "Will Intel make a more powerful chip tomorrow that is cost-effective?" That is very much up in the air.
Re: (Score:2)
intel makes more powerful chips, and when Intel goes to the real 7nm they'll blow this stuff away,
And when will this be? One of the reasons Intel has been in trouble is that they can't make enough chips at 10nm much less 7nm.
Re: (Score:2)
intel makes more powerful chips
And by that, you mean "chips with more watts"?
Re: (Score:2)
you're confused by marketing names for nodes.
what AMD calls "7nm".. isn't
Re: (Score:3, Informative)
Marketing names vs. final result. (Score:5, Insightful)
you're confused by marketing names for nodes.
what AMD calls "7nm".. isn't
The way AMD and Intel decide to name their processes isn't relevant and doesn't change the end result: the products currently on the shelf and the performance of their respective processes.
You could replace the marketing names with made-up names and the main point people are making in this thread would not change.
AMD is currently making chips with the {globibolga} process by TSMC, which is currently the best available on the market.
Intel is still using an older, slightly tweaked {deadbeef} process. They plan to introduce their own competing process {b00b15} at the earliest by the end of the year for some subset of parts (laptops), with the bulk (workstations/servers) to be expected next year at the earliest. Then on their roadmap they have an upcoming process {c0ffee} that will finally be able to beat TSMC's current {globibolga} process, but it will only be out in 3 years.
By the time Intel's {c0ffee} process is out and on the shelves, TSMC could have their {b0rkb0rk} process ready for AMD to use, or could even have tweaked it a bit further into a {blaAarg} process.
AMD has, and will probably keep, a 1-2 process-iteration lead over Intel (no matter what crazy numbers the marketing departments decide to call them. Intel's next process could use "pi" in its number, and TSMC could resort to imaginary numbers for theirs).
It just happens that currently TSMC is naming their processes "7", "5", and "5+".
Intel is naming theirs "14++", "10", and "7".
What Intel calls "10" (not yet on the shelf, mostly not before 2020) is roughly equivalent to what TSMC calls "7" (and you can go and order parts from AMD on it right now).
The reality is that currently Intel is still lagging one increment behind TSMC.
Re: (Score:3)
they plan to introduce their own competing process {b00b15}
I think you meant "{b00b135}" there but otherwise spot-on.
Intel's next process could use "pi" in its number, and TSMC could resort to imaginary numbers for theirs).
Actually I think it's more likely that Intel will make up an entirely new system of math and use those numbers. Maybe that symbol [wikipedia.org] Prince used instead of his name could be the next Intel process technology.
Intel once tried to trademark the numbers "386" and "486" and that didn
Re: (Score:2)
AMD made the difficult decision to skip one process; that left them stuck on their old process even longer, but now they are reaping the rewards. Not only have they caught up, they are ahead.
AMD also made the difficult decision to let someone else handle the process technology and what do you know, it seems to have been the right one.
Re: (Score:2)
Re: (Score:2)
There is no chance Intel will ship their 7nm in 2021 when they have only just managed to get their 10nm to work.
Re: (Score:2)
I don't know. I've got a suspicion that we're running into a hard limit with current gate designs, and that switching to a smaller size will require something really new. Perhaps using carbon conductors with embedded gates or something. (Conductive carbon monolayer wires seem to have some interesting properties, but current chip designs would need to be totally redone.)
P.S.: I'm not really defending that approach. But I suspect that SOMETHING radically different will be needed. Maybe something that wo
Re: (Score:2)
So do I. Those things are either fancy phones or telefactors depending on just what specs they have. Robots make their own decisions and act on their own. They can accept directions, but the directions are abstract and relate to what they should do, not directly how they should do it.
Re: (Score:2)
AMD's chips still whip the shit out of INTEL even at lower MHZ.
What if we're trying to optimise processing speed, cost and temperature and don't have a need for faecal frothing?
Re: (Score:2)
Then you probably already moved your loads to horizontally-scaled algorithms and giant arrays of RISC processors.
Re: (Score:2)
No, I just want a cheap quiet computer that'll handle video and photograph processing without delays.
You know, like normal people.
Re: (Score:2)
Well, golly Grandpa, they had that already since you were a kid!
Are you sure you're on the right channel? This story is about current stuff. You can just buy the old stuff.
Re: (Score:2)
Oh please. I thrashed my quad core i7 for three hours at 100% on Friday morning just processing photographs.
I hate to think how long that'd have taken on my 486, even with its DX co-processor.
Just because you don't use a CPU at home doesn't mean the rest of us are still in the 80s.
Re: (Score:2)
Wow, that must be a really yuuuuuuge photograph. Your epeen must be gigantic.
Re: (Score:2)
No, it was just over 1,500 perfectly normal-sized ones.
Out of curiosity, are you posting while drunk or stupid? You certainly seem incapable of proper conversation, let alone comprehending someone with genuine needs for home processing capabilities.
Re: (Score:2)
No, no, your argument doesn't support that at all.
You're saying you had a giant batch of photos to process all at once, for home use. It isn't something that finishes fast enough for a reasonable person to click on it and wait; you go and do something else. 3 hours, 4 hours, 5 hours, it makes little difference to your example case.
There is not a genuine need for anything other than a regular home computer there. Even an old one. Even a really old one. And just because you see little bars at 100% doesn't act
Re: (Score:2)
Some photo processes are quite slow. I recently ran a blind deconvolution deblur on a 1000x700 image; it took about 20 minutes on an i7-870. On even the fastest new processors it would have taken over 5 minutes. Tweaking parameters for better results is impractical because it takes too long.
CPU technology has a long way to go before it's satisfactory for difficult photography applications.
Re: (Score:2)
And yet, that in no way makes it seem reasonable to be doing that processing on a large batch; or that batches need a vertically scaled workstation.
And your example has similar realities; a slightly faster computer wouldn't change that it is a long operation you have to wait for.
All the solutions that can do it fast involve horizontal scaling, not vertical. And without Moore's Law, there is no reason to think that is ever going to change! Certainly the trend line for CPU speeds suggests it will never change
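For what it's worth, a big batch of independent photos is about as horizontally scalable as workloads get, even on a single box: each image can go to its own core. A minimal standard-library sketch of that fan-out (process() is a placeholder for real image work, and the photo list is just stand-in data echoing the ~1,500-photo batch above):

```rust
use std::thread;

// Stand-in for a per-photo transform; real work would be decode/filter/encode.
fn process(photo: &u32) -> u32 {
    photo.wrapping_mul(2654435761) // placeholder "work"
}

fn main() {
    let photos: Vec<u32> = (0..1500).collect(); // stand-in for the photo batch
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    let chunk = (photos.len() + workers - 1) / workers;

    let mut results = Vec::new();
    // Scoped threads may borrow the batch; each worker takes one chunk.
    thread::scope(|s| {
        let handles: Vec<_> = photos
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(process).collect::<Vec<u32>>()))
            .collect();
        for h in handles {
            results.extend(h.join().unwrap());
        }
    });
    println!("processed {} photos on {} threads", results.len(), workers);
}
```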
But what about the software side? (Score:3, Interesting)
This is a two-sided situation. On one side we're seeing the hardware manufacturers put out faster offerings. But on the software side we're seeing slowness and bloat, which negates the hardware improvements.
Look at software like Firefox. It really doesn't do all that much more compared to what it did in 2009. Yet modern versions running on modern hardware feel so slow and laggy to me. It's like Firefox somehow gets slower at the same rate as, or faster than, the rate at which hardware gets faster, in my opinion. I think it ma
Re:But what about the software side? (Score:5, Insightful)
Re:But what about the software side? (Score:5, Interesting)
Look at software like Firefox. It really doesn't do all that much more compared to what it did in 2009. Yet modern versions running on modern hardware feel so slow and laggy to me. It's like Firefox somehow gets slower at the same rate as, or faster than, the rate at which hardware gets faster, in my opinion. I think it may have gotten worse now that parts of Firefox that were written in C++ have been converted to Rust.
Between Firefox's vastly improved JS engine, and the new Rust code significantly improving the levels of parallelism and concurrency in Mozilla's web rendering code, I think you couldn't possibly be further away from the truth.
Annoying, but cost effective (programmers cost) (Score:4, Insightful)
It's annoying that the software we use all the time doesn't run 10X faster on hardware that's 100X faster, isn't it? As a programmer, I often write my software to be much, much faster than other programmers do. Sometimes I'm doing it wrong - I should make it slower.
The thing is, 15 years ago if you looked at the cost of programmer time, it made sense to spend enough programmer time to make a web page load in a few seconds. Once the speed was acceptable, a programmers time was better spent either adding features or working on another project.
Fast forward to today. If you look at the cost of programmer time, it makes sense to spend enough programmer time to make a web page load in a few seconds. Once the speed is acceptable, a programmers time is better spent either adding features or working on another project.
Sometimes I spend my time optimizing something to run 10X or 100X as fast as it needs to. That might be a bit of a waste of time. Perhaps I should have been doing something else instead, like fixing bugs or making it more secure.
If you look at the things that were NOT acceptably fast 15 years ago, like ray tracing or searching large databases, those things are a lot faster today. Because they needed to be faster.
Re: (Score:2)
You are arguing that once load time is "good enough", there is no benefit to further improvements in load time.
But that is not quite true. There is a difference between "barely tolerable" and "very good". It might technically be possible to release the software either way, but there is definitely a difference in software quality as a result.
In the past, it was a struggle even to make software speed "barely tolerable", and "very good" either required a huge amount of extra work, or else was simply impossible
Not saying that. It's about priorities. Ex Rust (Score:4, Insightful)
> You are arguing that once load time is "good enough", there is no benefit to further improvements
Not at all. I'm saying asking if there is any further benefit is the wrong question.
The right question is "what is the most important thing to work on this week?" Should we focus on improving security? UI experience? Rendering speed? Normally, there is a backlog of problems to be fixed. There are new standards which need to be supported, new platforms to support. If speed is acceptable, fixing problems is more important than making it even faster.
Mozilla is writing a prototype browser engine in Rust. Rust is, in theory, somewhat safer than C++ because it performs some safety checks at runtime. For example, if the programmer tries to use the third item in a list (array), Rust first checks to see if there *is* a third item. Obviously that check takes some time, checking for it and then accessing it takes longer than just accessing it.
The biggest problem with that is that most software spends most of its time running a fairly small portion of code over and over. If a browser runs its event loop 1,000 times per second, Rust will perform the same check 1,000 times per second, redundantly.
If you're processing a 1,000 pixel X 1,000 pixel image, doing something with each pixel, Rust may do the same safety check one million times while processing that image. That definitely slows things down.
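A tiny Rust sketch of the two access styles being described. One caveat to the per-pixel math above: the optimizer can often hoist or elide the bounds check, especially with iterators, so the cost isn't always paid per element:

```rust
// Indexed access: each pixels[i] is bounds-checked at runtime in principle.
fn sum_indexed(pixels: &[u8]) -> u64 {
    let mut total = 0u64;
    for i in 0..pixels.len() {
        total += pixels[i] as u64; // implicit check: i < pixels.len()
    }
    total
}

// Iterator access: every element is proven in-bounds by construction,
// so the compiler can typically drop the per-element check entirely.
fn sum_iter(pixels: &[u8]) -> u64 {
    pixels.iter().map(|&p| p as u64).sum()
}

fn main() {
    let image = vec![128u8; 1_000 * 1_000]; // the 1,000 x 1,000 image from above
    assert_eq!(sum_indexed(&image), sum_iter(&image));
}
```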
Mozilla is making a conscious decision to make the browser X% slower in order to make it Y% safer. Faster is good, but perhaps safer is better. It's not that they think faster has no value. They think safer might have MORE value, for a web browser.*
Right now I'm reviewing a design for a secrets storage engine for a $21 billion company. All of the passwords / access tokens that different systems use to talk to each other will be stored in this system. If this system gets completely breached, it could cause BILLIONS of dollars of damage. While looking for problems, I'm trying to make sure it's as safe as possible. I don't care that it is five times slower than it could be - I'm choosing security over speed.
> So on average, software *should* be faster now.
And it is. Software that used to be unbearably slow is now reasonably fast. The speed was the most important problem to fix, so someone fixed it.
It's *also* true that we have a lot more "coders" now, people writing software who don't have any significant training in software engineering. That means we end up with more software, with a lower level of craftsmanship on average. That's also a separate issue. The *best* programmers will focus their energy on whatever improvement is most important, and speed may not be what is most important.
* Personally I think the safety of Rust is vastly over-hyped, but that's a different topic.
Re: (Score:2)
Re: But what about the software side? (Score:2)
It's not really the browsers but the ginormous web pages/sites that are prevalent. The idea of a small footprint is long gone, and for few reasons, let alone good ones.
Re: (Score:2)
You have never produced any kind of content, otherwise you wouldn't say such an idiotic thing.
Re:Very impressive. (Score:5, Interesting)
Well, they ran out of rivers, no surprise they dammed them all with speculative execution. Hopefully for them that holds up!
I'm moving processing loads off of general-purpose devices onto embedded RISC processors like ARM all the time. The future of general-purpose devices might be mostly as engineering and programming tools.
Re: (Score:2)
No. But I do realize that those things exist on a chart.
But if you're actually ever writing firmware for embedded systems, you'll realize that the ARM processors that are actually available at reasonable prices on the open market, for example ones made by Texas Instruments, will all be RISC.
Microinstructions are the sound of internet fapping, they're not a real concern. RISC is still RISC and CISC is still CISC, regardless of what is under the hood, because from the perspective of the code that you can run o
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
With good unchilled cooling, you can expect to run all cores, all at once, at 4.7 GHz. That's how it went for the prior Zens: top model's rated boost speed +100 MHz, on all cores at once. Didn't matter which model CPU.
Power draw is way higher than the spec TDP, but as long as the cooling/mobo can take it, then so can the CPU.
Re: (Score:2)
The rule of thumb used to be to go for the non-X model if you wanted overclocking and overclock it to the level of the X model. Or go for the X model if you wanted a fire-and-forget CPU.
Seeing as there are no non-X models except for the smallest 3600, this rule is obviously no longer universally applicable.
I expect that yo
Re: (Score:2)
Overclocking in the traditional sense died with the Zen architecture. Turbo boosting individual cores up to a thermal limit achieves more per-thread performance than overclocking the entire chip up to a lower baseline, and even watercooled it's rare to see an AMD chip overclock to its turbo boost frequency.
If all you're doing is running Cinebench all day then overclocking may still be right for you, but chances are for a real workload it will have negative impacts.
Re: (Score:2)
Ehhh, that depends. If they didn't want to go beyond a 105 watt TDP on any of their CPUs, it might very well be the case that there is significant OC headroom in these CPUs.
Re: (Score:2)
Hyperthreading is not always faster, especially in benchmarks. Hyperthreading uses 'leftover' capacity of the chip, and to be useful it needs very specific circumstances. I'm sure for these comparisons AMD uses the worst possible scenario for the Intel chip and the best possible scenario for the AMD one, if not code specifically compiled to take advantage of certain processor pipelines.
Real-world overall performance is typically slightly lower for AMD unless you can optimize.
Re: (Score:2)
It'll be interesting to see third-party benchmarks and real-world performance. Intel needs to be more than slightly quicker given the increased price.
Re: (Score:2)
since HT is used almost exclusively for execution of speculative instructions.
What? No, you get to share execution units when you're waiting for memory operations. Why would speculation be needed for that?
Re:Why only 12 cores? (Score:5, Insightful)
Announcements are not complete. The current take is that the 8-core CCXes mostly go to the server CPUs and, realistically, desktops need no more than 8 cores at this time anyway, so 12 is already overkill. But we will likely see the 16-core version pretty soon as well. One key difference between AMD and Intel is that AMD announces when they are confident they can deliver in volume, while Intel will announce long before they can actually deliver any relevant volume.
Re: (Score:2)
Thermals, my dear Watson.
The chip uses less than half the power and generates less than half the heat of the 16C Ryzen 2 you numpty.
Re: (Score:3)
My take is that server customers will get most of the 8 core CCXes for a while (not very long) and that AMD will announce 16 core desktop CPUs when they have enough supply.
Re: (Score:2)
I expect they will save that one for a Threadripper refresh, which is much better suited to it with 4 memory channels.
Re: (Score:2)
And there will be lots and lots of clueless people who just want to feel good about buying Intel and who do not actually understand that performance is not raw single-core speed if you do anything interactively.
Re: (Score:2)
When you do anything interactively, single-core speed indeed tends to be a bit more important than having a ton of threads to throw at a task, since interactive tasks can't easily be parallelized. And when they are parallelized, low and consistent inter-core latency becomes very important as well.
I like to use the analogy of assembling a car vs. driving a car.
You can use one worker to assemble a
Re: (Score:2)
They will switch horses in a heartbeat as soon as they perceive that's what the cool kids are doing.
Re: (Score:3)
So the glowing go faster flames on the side of my computer case are not cool?