CERN Engineer Details AMD Zen Processor Confirming 32 Core Implementation, SMT (hothardware.com) 135
MojoKid writes: AMD is long overdue for a major architecture update, though one is coming later this year. Featuring the codename "Zen," AMD's already provided a few details, such as that it will be built using a 14nm FinFET process. In time, AMD will reveal all there is to know about Zen, but we now have a few additional details to share thanks to a computer engineer at CERN. CERN engineer Liviu Valsan recently gave a presentation on technology and market trends for the data center. At around 2 minutes into the discussion, he brought up AMD's Zen architecture with a slide that contained some previously undisclosed details. One of the more interesting revelations was that upcoming x86 processors based on Zen will feature up to 32 physical cores. To achieve a 32-core design, Valsan says AMD will use two 16-core CPUs on a single die with a next-generation interconnect. It has also been previously reported that Zen will offer up to a 40 percent improvement in IPC compared to its current processors as well as symmetric multithreading or SMT akin to Intel HyperThreading. In a 32-core implementation this would result in 64 logical threads of processing.
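The thread math in the summary is just physical cores times SMT ways. A minimal sketch of sanity-checking it on a live system (assuming Linux/POSIX; note sysconf reports logical CPUs, i.e. hardware threads, not physical cores):

/* Minimal sketch: with 2-way SMT, logical thread count is just
 * physical cores x threads per core. On Linux/POSIX you can check
 * the logical count at runtime with sysconf(). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const int physical_cores   = 32; /* claimed top-end Zen part */
    const int threads_per_core = 2;  /* 2-way SMT */

    printf("expected logical CPUs: %d\n", physical_cores * threads_per_core);
    printf("logical CPUs online:   %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
    return 0;
}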
Catch up to Intel? (Score:5, Interesting)
Re: (Score:2, Interesting)
At this point, it's all about small gains until multithreading goes beyond 2-4 cores for mainstream apps. As it stands Intel has an edge on AMD as far as efficiency goes....and likely will in the processors to come in the next 2 years. Outside of cores and these last few nanometers there isn't really anywhere else to go except minor optimizations. After the 8 nm level I wouldn't be surprised if they move to 3D processing or something similar. So maybe by 2020 things will jump to a new fresh level and another
Re: Catch up to Intel? (Score:2)
In the server space the majority of applications are running on virtual CPUs; I don't follow your point.
The performance of a hypervisor on the CPU is important
They don't need to be up there (Score:5, Interesting)
Just 90% there at 50% cheaper. I'm 100% happy with my A10-5800K for way less than what it would have cost me to go with Intel. Late 2016, the year of AMD.
Re:They don't need to be up there (Score:5, Insightful)
This. The A10-7800 in my rig is as power-efficient (at idle) as a similarly spec'd Intel i5 box would be, has superior on-die graphics (which, admittedly, I barely use), and came in about $300 less for mITX mainboard, proc & memory. I could have paid $1000 or more extra for a high-end Intel i7 workstation, which would have given me maybe 30% higher performance (at best), which for the most part I'd never notice. AMD wins as far as I'm concerned, and they should make some inroads in the server space with Zen.
Re:They don't need to be up there (Score:4, Insightful)
As a gamer who always buys graphics cards, I don't want my CPU to cost more and have die space wasted on a GPU that I will never use. If Intel, AMD and Nvidia got their act together, my GFX card could be turned off whilst I'm not playing games and that onboard GPU would have a use, but sadly they haven't, and gigawatts of electricity are wasted. Even getting AMD chips to use power saving has been a total pain to get working in the past - and the amount of power saved could buy a new processor over a few years.
Re:They don't need to be up there (Score:5, Informative)
Re: (Score:1)
Re: (Score:2)
Re: (Score:3)
Different GPUs would have different levels of efficiency for various tasks. This would depend on cache sizes, floating-point precision, number of parallelized logic units, queueing, cross-bar switching, and all sorts of other parallel-processing tweaks. Data-flow design isn't any different from getting as many customers through a Disney theme park as fast as possible.
Whichever GPU is faster is going to do most of the work.
Re: (Score:2)
Re: (Score:2)
One review found that it could give substantial performance increases for some games, but it depends on driver support as well as where the performance bottleneck is at.
This remains the AMD problem. The hardware has awesome price-performance, I am still using one of their CPUs and I started with a K6. But the graphics driver problem continues. nVidia's not perfect either, but at least it usually works.
Re: (Score:2)
I don't usually have problems with AMD CPUs but I do with the ATi GPUs and, sometimes, the on-board stuff. So, I usually go with nVidia for my GPU. Like you, I was first exposed to AMD with the K6 line. In my case, it was the AMD K6-2 350 but I kept it OCed as it was still stable at just a bit under 500 MHz. (I forget the exact number.) If I could keep it cold, I could actually wind it up a bit further.
For amusement, I had it wrapped in plastic and sitting in a freezer for a while - not long-term or anythin
Comment removed (Score:5, Interesting)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
AMD doesn't seem to have a technology called "ZeroTech". At least they've avoided mentioning it at all on their website...
Re: (Score:2)
Re: (Score:1)
As a gamer who always buys graphics cards, I don't want my CPU to cost more and have die space wasted on a GPU that I will never use.
An Athlon X4 860K is essentially the A10-7850K without integrated graphics.
You'll save quite a bit of money that way, which you can put toward a better graphics card instead.
That processor is pretty shitty for modern titles and won't stretch all that well, though. The Intel alternative would be the Pentium G3258, but that's not the best either; a step up would be an i3 or the FX-6300, then a low-end i5 or the FX-8300 series, and from there Intel has you covered with higher-end i5 chips and the i7 when you want even more performance.
I've sad
Re: (Score:2)
If AMD comes up with a chip that is 50% to 100% faster than my i7-3770K then I'll strongly consider going back to AMD, but the current tech has totally stagnated (AMD and Intel) over the last decade.
I wouldn't touch win10 with a barge pole, win7 updates turned off now.
Re: They don't need to be up there (Score:2, Troll)
I hope you don't do any banking on your system. I use GWX Control Panel so I can still get my security updates without the nagware.
FYI Windows 10 is a little buggy but won't be bad once 10.1 Redstone comes out this summer to fix more of the bugs. MS has no plans for Windows 11 and wants it macOS-like, with .1 updates each year. You're going to use it eventually whether you like it or not. Might as well go with Redstone while it is still free.
I like last.fm and crackle while I work and it's where the future i
Re: (Score:3)
If you browse the web or download software then you are also at risk of getting a Trojan which could steal your banking details, regardless of MS updates. If you allow some combination of javascript/webgl/pdfs/java/flash/video etc. from dozens of sites to run in your browser for every website you visit, then you've increased your chance of getting hit by a drive-by vuln tenfold.
Most MS security updates relate to parts of the OS which I don't use; some are privilege escalation vulns, which, let's face it, if b
Re: (Score:2)
People still haven't been able to conclusively prove those big brother issues exist beyond "look! It's connecting to MS servers, so it must be sending all my work and my soul to Redmond!". I'm all for privacy, but jumping at shadows and spreading rumours is the antithesis of privacy advocacy.
Re: (Score:3)
The only time my PC should be connecting to MS servers is when I'm doing an update, all of the rest should have an off switch.
Re: (Score:3)
On the contrary, I did try Windows 10 and it finally convinced me to just use Debian [debian.org]* on everything. Admittedly, I only have five computers, not a datacenter full, but after giving Windows 10 what I consider a very fair shot, I then took Windows off every machine using a Debian install disc/USB stick. My only (admittedly distasteful) concession to windows is that I have Vista in a VM for running Turbotax, and b
Re: (Score:1)
Their top-of-the-line FX processor vs yours:
http://cpuboss.com/cpus/Intel-... [cpuboss.com]
Not too much of a difference, so maybe +40% IPC will actually reach the ~50%. I was going to say it won't, but yeah, maybe.
Re: (Score:2)
My processor is better by the looks of the page you linked, especially when it comes to power usage. And it's 3-4 years old now, and newer Intel chips aren't much better either.
Re: (Score:2)
Yeah, it's a 220-watt chip if I remember correctly.
But it's about the fattest they have, so that's where 40% higher IPC would put them.
Intel has managed less than 10% improvement per generation for the last 4-5 generations.
Re: (Score:1)
Considering one can get an 18-core 2.2 GHz Broadwell-EP chip for $999: ... that makes one wonder what the 10-core and 8-core Broadwell-E chips will actually end up costing?
http://wccftech.com/ebay-xeon-... [wccftech.com]
An 8-core Zen chip may reach what you're asking for versus your 4-core chip, but what about against the 6-, 8- and 10-core Broadwell-EP ones?
Or maybe AMD will let a 16-core Zen out for normal desktop users too; if it fits in the same cheap motherboards, I guess that could be somewhat disruptive.
Re: (Score:2)
Such technology already exists but is generally used in laptops. You have a low performance integrated gpu, and a higher performance discrete one, and the discrete one remains powered off unless you're doing something which requires it.
Re: (Score:1)
The A10-7800 is a great APU (Score:2)
We're going to standardize our family computers around that CPU. We don't need more GPU power than that APU provides, and the power (and money) savings are significant.
Re: (Score:1)
We're going to standardize our family computers around that CPU. We don't need more GPU power than that APU provides, and the power (and money) savings are significant.
Why not wait for a Zen equivalent?
(Early processors for the same socket seem to be of the old design.)
Re: (Score:1)
high-end Intel i7 workstation, which would have given me maybe 30% higher performance (at best)
At that time, I don't know. Now? Totally not true.
The Athlon X4 860K would cost half as much but get rid of the integrated graphics, better for the gamer who would rather spend the money on the graphics card.
Anyway, the processor is SLOOOOW; a dual-core i3-6100 will perform about the same as the FX-6300, a quad-core will be better, and a quad-core i7 with hyper-threading better still.
There's not a 30% difference:
vs somewhat older i7 4790K: http://cpuboss.com/cpus/Intel-... [cpuboss.com]
Two generations older still i7 2700K vs A1
Re: (Score:1)
You're correct that the i3-4330 is similarly priced, and it does outperform (but hardly trounces) the A10-7800 on the CPU side -- but it is still $50 more expensive, and Intel motherboards run $20-$30 more -- so there's still a price premium, and idle power draw is very close. At the time I built my current setup, Intel's prices on Haswell were quite a bit higher than now -- I guess Skylake brought down the Haswell prices quite a bit...
Wow Intel fanboys got their panties (Score:1)
bunched up from moving around the chair too much?
Re: (Score:3)
This has always been AMD's angle in the business. Well, sorta. Fifteen years ago or about, their angle was to be as good for cheaper in the middle range of CPUs. Intel still had the upper hand for top chips. But Intel grew worried about AMD's breakthroughs, and used underhanded methods to keep AMD from nibbling more of its market. Intel threatened to stop supplying CPU dealers if they also carried AMD stuff. Not complying would have meant, for distributors, cutting themselves off from the very solvable popul
Re: (Score:1)
The problem AMD has right now isn't affected by whether people are OK with what they have or not.
Regardless of what people may feel, AMD is losing some serious cash each quarter; that's their main problem.
As for in what categories they can deliver without losing even more money.. whatever stops them from losing money would be an advantage for them.
Re:They don't need to be up there (Score:5, Insightful)
This has always been AMD's angle in the business. Well, sorta. Fifteen years ago or about, their angle was to be as good for cheaper in the middle range of CPUs. Intel still had the upper hand for top chips.
Were you living in a cave between 2000 and 2006? AMD was generally preferred by enthusiasts and gamers during Intel's infamous Pentium 4/Itanium/NetBurst/Rambus period for at least the first few years of the new millennium. It wasn't really until 2006, with Core (Conroe) replacing Pentium, that Intel finally took back the lead they had in the 90s. They released the excellent Pentium M CPU in 2003, but that was a mobile chip, and only one or two highly specialized and expensive motherboards supported it. Intel finally realized that the whole Pentium 4 development branch was itself an inefficient long-pipeline cache miss, but by the time they did, AMD was already the market leader in the enthusiast bleeding-edge market segment.
At best Intel could have claimed to be tied with AMD in certain benchmarks, but most gamers and enthusiasts were going for AMD CPUs. During this period AMD CPUs often sold for equal or higher prices as well, although Rambus RDRAM was ridiculously expensive and raised the cost of the Intel platform a great deal.
Take a look at this Extremetech article [extremetech.com]. Maybe that will jog your memory a bit.
Also note Intel's naive optimism with respect to Moore's Law:
Justin Rattner, Intel Fellow and director of Microprocessor Research at Intel Labs, predicts 10GHz by middle of the decade (which I read as end of 2005 to early 2006)
Re: (Score:2)
You do realise that Justin Rattner is not Intel, right? I know it makes for an amusing point in an argument, but it's intellectually dishonest to claim he speaks for, or represents, all of Intel.
Re: (Score:3)
He was the director of microprocessor research at Intel Labs. Maybe not everyone in the company agreed with his views, but it seems significant to me that he believed they would reach 10GHz by 2006.
Re: (Score:2)
Also read this Anandtech article [anandtech.com] from 2005.
Re: (Score:1)
This... I'm not even all that worried about price but I find I get more than adequate value from AMD. I do buy nVidia GPUs normally but I almost always get an AMD CPU. I really don't even notice a difference.
I linked to my current laptop the other day. That has an Intel. It's on purpose. I really liked the laptop. It's not really a typical laptop, it's a mobile work station from Titan. (The X4K model. All done up sexy, of course.) I picked an Intel because that was in the laptop that I wanted. If I'd been a
Basically What Intel Did.... (Score:1)
Re: (Score:2)
Intel did it again with the first quad-cores too; AMD delayed the release of their quad-cores in order to do a true quad-core design.
Why is this x86 and not 64bit? (Score:2)
Or is "x86" assumed to be 64 bit now?
Can anybody explain the terminology here?
Re: (Score:1)
Yes x86 is assumed to mean the 64bit version (x86_64) nowadays, which is a superset of the 32bit version and the 16bit versions of x86, all thanks to mode switching.
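As a concrete illustration of that superset, here's a hedged sketch of how software asks an x86 chip whether it implements the 64-bit extension (assumes a GCC/Clang toolchain on x86; the long-mode flag is bit 29 of EDX in CPUID leaf 0x80000001):

/* Sketch: checking whether an x86 CPU supports 64-bit long mode.
 * Uses GCC/Clang's <cpuid.h>; assumes an x86 host. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000001, EDX bit 29 is the AMD64/Intel 64 "LM" flag. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (edx & (1u << 29)))
        printf("CPU supports x86_64 long mode\n");
    else
        printf("32-bit-only x86 CPU\n");
    return 0;
}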
Re: (Score:3)
Yes x86 is assumed to mean the 64bit version (x86_64) nowadays, which is a superset of the 32bit version and the 16bit versions of x86, all thanks to mode switching.
Don't forget the 8-bit processors. Some of us old timers cut our teeth on those 8-bit processors. Now get off my lawn!
Re: (Score:2)
Yes x86 is assumed to mean the 64bit version (x86_64) nowadays, which is a superset of the 32bit version and the 16bit versions of x86, all thanks to mode switching.
Don't forget the 8-bit processors.
Which weren't x86 processors.
Re: (Score:2)
Which weren't x86 processors.
That depends on what you consider to be an 8-bit processor. Based on other comments, the devil is in the details regarding the 8086/8088 processors. I pointed out to another poster that the 80186 had an internal multiplexed 20-bit bus and was available with an 8-bit or 16-bit external data bus. Unless someone changed the definition for a processor in the last 40 years, the data bus determines the bit-width of a processor.
Re: (Score:3)
Which weren't x86 processors.
That depends on what you consider to be an 8-bit processor. Based on other comments, the devil is in the details regarding the 8086/8088 processors. I pointed out to another poster that the 80186 had an internal multiplexed 20-bit bus and was available with an 8-bit or 16-bit external data bus. Unless someone changed the definition for a processor in the last 40 years, the data bus determines the bit-width of a processor.
If so, then the 8088 was an 8-bit processor that implemented a 16-bit instruction set, just as the IBM System/360 Model 30 was an 8-bit processor that implemented a 32-bit instruction set and the Motorola 68000 was a 16/32-bit processor that implemented a 32-bit instruction set. From the programmer's point of view, the 8088 had 16-bit registers, 16-bit arithmetic instructions, and 16-bit "flat" addresses, just as the 8086 did.
What defines the bit width of an instruction set isn't connected to data bus width
Re: (Score:2)
What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.
That's news to me. When I was doing electronics as a teenager in the 1980s, an 8-bit processor had eight data lines, a 16-bit processor had 16 data lines, and a 32-bit processor had 32 data lines. I recently saw a 64-bit microcontroller that implemented one-half of the data bus (32 bits) as four 8-bit serial ports (four pins). I'm not sure if that's a four-bit or two-bit design.
Re: (Score:2)
What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.
That's news to me. When I was doing electronics as a teenager in the 1980s, an 8-bit processor had eight data lines, a 16-bit processor had 16 data lines, and a 32-bit processor had 32 data lines. I recently saw a 64-bit microcontroller that implemented one-half of the data bus (32 bits) as four 8-bit serial ports (four pins). I'm not sure if that's a four-bit or two-bit design.
Again, there's the width of the processor's external bus, the width of the processor's internal signal paths, and the width of the registers and instructions of the instruction set the processor implements. Nothing ties the first two of those to the third of those, as evidenced by various models of the System/360 series (the I/O interface [bitsavers.org] had 8 "bus in" lines, 8 "bus out" lines, and various control lines; the processors had internal signal paths ranging from 8 to 32 bits for integer and address operations;
Re: (Score:2)
the Motorola 68000 series (the 68000 and 68010 had a 24-bit address bus and a 16-bit data bus, and a 16-bit ALU for data operations; the instruction set had 32-bit registers and arithmetic instructions and 24-bit physical addresses, extended to 31-bit physical addresses with the 68012 and 32-bit physical addresses with the 68020 and subsequent processors, which had 32-bit internal data paths),
And the 68008 had an 8-bit data bus, but was internally like a 68010, with 32-bit registers and arithmetic instructions and a 16-bit ALU for data operations.
On the other side, the 32-bit original Pentium had a 64-bit external data bus.
So you have the external bus width, the internal data path width, and the instruction set width(s) (registers, arithmetic instructions, addresses, etc.), which can vary somewhat independently; it might be appropriate to use the external bus width as an indicator of the bit widt
Re: (Score:2)
Re: (Score:2)
Do you realize that under your definition, you probably posted that with a 256-bit computer?
Probably a 128-bit computer. It's an Intel Celeron dual-core processor. That could probably explain why my inexpensive Dell laptop is so snappy.
Re: (Score:2)
Sometimes the internal CPU data bus can be 128 bits, 256 bits, or 512 bits, but the external data bus on the board is 64 bits. There isn't anything to stop the two being different sizes except the bus protocols for sending and receiving data. This applies to the address bus as well. Some 8-bit systems got around the memory limitation of 64K by having a hardware page register that could select a particular bank of memory visible through a virtual "window". PCs from the 1990s used segmented memory where everyt
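For reference, the segment arithmetic being described is simple: real-mode x86 shifts a 16-bit segment left by 4 bits and adds a 16-bit offset, giving a 20-bit physical address. A minimal sketch:

/* Real-mode x86 segmented addressing: 16-bit segment << 4 plus a
 * 16-bit offset yields a 20-bit physical address, so 16-bit
 * registers can reach 1MB of address space. */
#include <stdio.h>
#include <stdint.h>

static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* The classic VGA text buffer at B800:0000 */
    printf("B800:0000 -> physical 0x%05X\n",
           (unsigned)real_mode_addr(0xB800, 0x0000));
    return 0;
}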
Re: (Score:2)
What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.
That's news to me. When I doing electronics as a teenager in the 1980's, an 8-bit processor had eight data lines,
A microcontroller has a microprocessor in it, yet may only expose a handful of data lines, and not even have enough to make a proper bus as wide as what it can process internally. The interface is not the most relevant feature. The most relevant feature is the size of the data type which can be processed. The second most relevant feature is the instruction size. But frankly, nothing is more relevant than the size of the general purpose registers, which defines that first part.
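By that yardstick, here's a small sketch of how a program usually infers the width it was built for; pointer size tracks general-purpose register width on mainstream x86 data models, though that's a convention rather than a guarantee:

/* Sketch: using pointer width as a (common, not guaranteed) proxy
 * for the native general-purpose register width. The compiler's
 * target macros give the same answer at build time. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) || defined(_M_X64)
    const char *mode = "64-bit x86";
#elif defined(__i386__) || defined(_M_IX86)
    const char *mode = "32-bit x86";
#else
    const char *mode = "not x86";
#endif
    printf("compiled for: %s (%zu-bit pointers)\n",
           mode, sizeof(void *) * 8);
    return 0;
}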
Re: (Score:2)
The interface is not the most relevant feature.
Although I took intro electronics in college, I never pursued it as a career and eventually ended up in IT. I've gotten back into electronics as a hobby now that I have the time and money. (As a kid, I had the time but not the money.) I'm going through various designs to press a button to increment a counter from 0 to 9 on an LED display. My focus is on the "data" lines between different chips.
Re: (Score:2)
The devil is really in the details with the x86 series. Intel has claimed whatever suited them over the years. As such, the 8088 was a 16-bit processor, even though it had an 8-bit data bus. The 386 was a 32-bit processor, even though the 386SX chip had a 16-bit data bus. The Pentium processor was a 64-bit processor, because it had a 64-bit data bus; however, the first-generation Pentium processors only had a 32-bit ALU.
Modern processors almost defy description by data bus width. There are so many DRAM a
Re: (Score:2)
Generally those processors which operated with a narrower external bus did so because memory and other peripherals were already more widely available (and cheaper) for the narrower bus...
Processors in those days also generally operated internally at the same clock rate as the bus, so memory was much less of a bottleneck than it is today. Some processors such as the Motorola 68040 were advertised based on their bus clock rather than the internal clock.
Some highend machines had a much wider memory bus, for instance
Re: (Score:2)
The register size determines the "bit-width", not the bus.
Or a 6502 would be a 16-bit processor because it has a 16-bit address bus.
Re: (Score:2)
Or a 6502 would be a 16-bit processor because it has a 16-bit address bus.
I was referring to the data bus on the processor. The 6502 was an 8-bit processor with eight lines for the data bus and 16 lines for the address bus (64K RAM). The 65816 was an 8/16-bit processor with eight lines for the data bus and 16 lines for the address bus that is multiplexed for a 24-bit memory space (16MB). The data bus is the parallel lines that run out to the memory chips.
You always need to check the schematics when designing electronics around a particular processor. The 68000 processors had a 32-
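The address-space figures quoted above are just powers of two of the address-line count; a trivial check:

/* Address-bus arithmetic: addressable bytes = 2^(address lines).
 * 16 lines -> 64KB (6502); 20 lines -> 1MB (8086);
 * the 65816's multiplexed 24-bit space -> 16MB. */
#include <stdio.h>

int main(void)
{
    printf("16 address lines: %lu bytes (64KB)\n", 1UL << 16);
    printf("20 address lines: %lu bytes (1MB)\n",  1UL << 20);
    printf("24 address lines: %lu bytes (16MB)\n", 1UL << 24);
    return 0;
}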
Re: (Score:2)
Based on other comments, the devil is in the details regarding the 8086/8088 processors.
Under their definition, my several-year-old 4-core A10-6800K is a 128-bit processor and the latest processors are all 256-bit. This should be the end of the debate on that matter because it's a stupid definition imagined by what are ostensibly outsiders to the subject.
The number of address lines is also not suitable, as under that definition there still
Re: (Score:2)
This idea is only accurate if you are a raging novice on the subject.
When I was studying electronics in the 1980s, the most common processors available to the home hobbyist had a fixed data bus: 8-bit processors had eight lines, 16-bit processors had 16 lines, and 32-bit processors had 32 lines. When I got into college and took intro electronics, I no longer wanted to do electronics as a career and eventually found my way into software testing and IT. Now that I have time and money, I'm getting back into electronics as a hobby. With all the datasheets available on the Inter
Re: (Score:2)
I would have used proper dental tools myself. To each their own of course.
Re: (Score:2)
They were both considered 16-bit processors despite the bus being only 8 bits. The ALUs operated on 16-bit values. On the other hand, the 8080 was 8-bit because the ALU was typically limited to 8-bit operations.
Re: (Score:3)
8088 was 8 bits. 8086 was 16 bits so I assume x86 should mean at least 16 bits.
8088 was 16 bits with an 8-bit external bus, but otherwise code compatible.
Re: (Score:2)
8088 was 8 bits. 8086 was 16 bits so I assume x86 should mean at least 16 bits.
If you really want to nitpick... The 80186, based on the 8086, had a multiplexed 20-bit internal address bus and, depending on the model, an 8-bit or 16-bit external data bus. The 80186 was never released for the PC market, but was typically used for embedded applications and IBM token ring network cards. So I assume x86 should mean at least eight bits (for the data bus).
https://en.wikipedia.org/wiki/Intel_80186 [wikipedia.org]
Re: (Score:2)
There was never an 8-bit x86 CPU.
Please read the other comments on this thread. You might surprise yourself.
Re: (Score:2)
Please realize that the external bus does not determine the number of bits a CPU is classified as, you clueless little shit.
It does if you're designing electronics around the processor. As I explained elsewhere, you need to know how many lines are coming out. The internal structure of the processor only matters when it comes to programming.
I've owned at least one of every generation of x86 CPU.
As a PC owner or an electronic hobbyist?
No x86 CPU was ever 8-bit.
According to Wikipedia: "The Intel 8088 ("eighty-eighty-eight", also called iAPX 88) microprocessor is a variant of the Intel 8086. Introduced on July 1, 1979, the 8088 had an 8-bit external data bus instead of the 16-bit bus of the 8086. The 16-bit regis
More and more cores? (Score:1)
Re:More and more cores? (Score:5, Interesting)
They'll do research and try to raise clock speeds, but the heat generated, and hence the cooling required, is proportional to the square of the clock speed. The faster you try to change the state of something (electric charge), the more heat is generated. They might be able to switch to optical computing; then the heat problem goes away. Maybe they'll get more efficient CPUs with fewer transistors and more parallelization.
But it's far simpler to just add more cores as transistor sizes shrink by half every year or two. That's guaranteed.
Re: (Score:3)
Power is linear with clock speed and quadratic with respect to voltage: P = \alpha C V^2 f, where \alpha is the activity factor and C the switched capacitance.
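A little sketch of what that relation implies (illustrative numbers only, not measurements; the voltage bump needed for a given clock varies by chip):

/* Dynamic-power relation P = a*C*V^2*f. For relative comparisons a
 * and C cancel: doubling f doubles power, but the voltage increase
 * a higher clock usually needs is where the pain comes from. */
#include <stdio.h>

static double rel_power(double v, double f) { return v * v * f; }

int main(void)
{
    double base = rel_power(1.0, 1.0);
    /* Hypothetical numbers: a +30% clock often needs ~+10% voltage. */
    printf("+30%% f, same V:  %.2fx power\n", rel_power(1.0, 1.3) / base);
    printf("+30%% f, +10%% V: %.2fx power\n", rel_power(1.1, 1.3) / base);
    return 0;
}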
Re: (Score:2)
Here's what I was thinking of...
https://en.wikipedia.org/wiki/... [wikipedia.org]
Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example);
Re:More and more cores? (Score:5, Interesting)
That's guaranteed.
Surely you weren't saying that sizes shrinking by half every two years is guaranteed. Intel is already saying they won't be able to reach the process-shrink goal in 2 years this time around. Around 5nm the shrink will turn into a research project just as challenging as the clock frequency issue. You can't pack carbon atoms closer than ~0.2nm, never mind features. A small protein molecule is 3nm in diameter. A significant drought of Moore's law is coming.
They could simply go back to the larger die areas we had only 10 years ago. It just means performance won't be "free" as time goes on. If you want a better chip you need a bigger chip and it'll cost more because you get less out of a wafer. There's plenty of fucking room on ATX boards and micro ATX boards and even mini ITX boards. And if you want to stick with tiny footprints like the Intel NUCs or the Google/Amazon/Intel "stick it in your HDMI port" shits, you can stack vertically or incorporate your RAM into the die.
I have a suspicion AMD will produce a part with HBM 2 incorporated into the APU die, resulting in a product that is literally a system on a chip, and finally realizes the shit they've been harping on about with regards to HSA. The GPU and the CPU have buckets of memory and all live together holding hands, sharing resources, talking to each other openly, helping each other build a deck or patch some drywall or whatever else the program asks them to do. Maybe we'll see something at E3 2017.
Re:More and more cores? (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
The heat and power consumption start to scale up exponentially from where they are.
Of course, they could well start to shrink the pipelines as we probably don't need those huge things anymore.
Re: (Score:2)
We'll just have you write the massively multithreaded software then. Hopefully you can convert every algorithm into an embarrassingly parallel one. CPU tech has stalled for multi-step algorithms that require the results of one calculation before continuing to the next. Unfortunately a lot of algorithms are inherently serial in nature.
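The formal version of that limit is Amdahl's law; a quick sketch:

/* Amdahl's law: with parallel fraction p and n cores,
 * speedup = 1 / ((1 - p) + p / n). Any serial residue caps the
 * benefit of piling on cores. */
#include <stdio.h>

static double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main(void)
{
    /* Even 95%-parallel code tops out well short of 32x on 32 cores. */
    printf("p=0.95, 4 cores:  %.2fx\n", amdahl(0.95, 4));
    printf("p=0.95, 32 cores: %.2fx\n", amdahl(0.95, 32));
    printf("p=0.50, 32 cores: %.2fx\n", amdahl(0.50, 32));
    return 0;
}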
Re: More and more cores? (Score:2)
At the _top_, forget algorithms. Hopefully your _application_ is big/complex enough to break into many processes, and then some of those process' algorithms are easily threadable. (And the fact that you keep wanting more speed might sometimes be a hint that you apps _are_ getting bigger and more complex.)
So... (Score:1)
We won't be able to buy these CPUs?
Re: (Score:2)
Depends. They may only go through a reseller channel, meaning that you'd have to do the PITA quote/invoice/purchase order thing instead of clicking "add to cart". But eventually, someone like Newegg will become an authorized reseller, making the parts as easy to get as any other.
8 ram channels? but how many pci-e and htx? (Score:2)
8 ram channels? but how many pci-e lanes and how many htx links?
Can make for a good VM host, but it will need good network / storage I/O links.
Re: (Score:2)
A VM host really only needs x12 PCIe 3 for a dual-socket system: x4 for a 10Gb dual-channel NIC and x8 for a 16Gb dual-channel HBA; up it to x24 links if you need 40GbE. 8 channels is nice as it allows you to do 1TB of full-speed RAM in a dual-socket system using relatively inexpensive 32GB DIMMs, which gives you 8GB per thread, which is more than enough for most workloads (you might even choose to go 512GB of RAM if your workload is more CPU- than RAM-limited and save a good chunk of change).
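For the RAM math, a quick sketch (the DIMMs-per-channel count is my assumption; the comment only states the 1TB total and 8GB per thread):

/* 2 sockets x 8 channels x 2 DIMMs per channel x 32GB DIMMs = 1TB,
 * spread over 128 threads = 8GB per thread. (DIMMs per channel is
 * an assumption; the comment only gives the totals.) */
#include <stdio.h>

int main(void)
{
    int sockets = 2, channels = 8, dimms_per_channel = 2, dimm_gb = 32;
    int threads = 2 * 32 * 2; /* 2 sockets x 32 cores x 2-way SMT */

    int total_gb = sockets * channels * dimms_per_channel * dimm_gb;
    printf("total RAM: %dGB (%dGB/thread over %d threads)\n",
           total_gb, total_gb / threads, threads);
    return 0;
}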
Re: (Score:2)
but what about the CPU to CPU link?
CPU to chipset?
But with 64 lanes an Apple Mac Pro could have 1 CPU with 5 Thunderbolt 3 channels, 2 video cards, and 2 x4 PCIe SSDs, plus PCIe 3.0 x4 for the chipset link, or 6 Thunderbolt 3 channels if the chipset link does not need PCIe lanes.
Re: (Score:2)
Almost 8Gbps of I/O per thread; that's a bit of an odd configuration for x86 and honestly a waste of fairly expensive resources between pins and board real estate.
Re: (Score:2)
64 lanes, 64 threads, a PCIe 3.0 lane = 7.87Gbps. Or is that 64 lanes for a four-way SMP board? If so that makes a bit more sense.
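Where that 7.87Gbps figure comes from: PCIe 3.0 signals at 8GT/s with 128b/130b encoding, so usable bandwidth per lane is 8 x 128/130. A trivial check:

/* PCIe 3.0 per-lane bandwidth: 8GT/s with 128b/130b encoding. */
#include <stdio.h>

int main(void)
{
    double gbps = 8.0 * 128.0 / 130.0; /* per lane, per direction */
    printf("PCIe 3.0 lane: %.2f Gbps\n", gbps);
    printf("64 lanes over 64 threads: %.2f Gbps per thread\n",
           gbps * 64.0 / 64.0);
    return 0;
}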
I Hope I Have To Change My Handle (Score:1)
Re: (Score:2)
8 years after this chip is released and the current chip isn't even twice as fast!! Silicon CPU progress has almost ground to a halt :-(
http://www.cpu-world.com/Compa... [cpu-world.com]
Re: (Score:3)
180% of the performance at half the TDP, how horrible...
Oh, and the newer one has an integrated GPU
Re: (Score:2)
180% of the performance used to take 1 to 2 years, not ten. The latest increments in performance are now only 5-10%. Also there were CPUs with much lower TDPs prior to the 920. Silicon has reached the end of the line; I look forward to 10GHz to 100GHz chips with some other technology.
Re: (Score:3)
In March 2000, Intel released the 1GHz Pentium III. In March of 1992, Intel released the 66MHz 486DX2. That's a huge difference. You could probably swap my 3770K for either the 920 or the 6700K and I probably wouldn't even notice. Even the power usage wouldn't be all that different, unless I left it on 24/7 running Boinc or something like that.
SMT = Simultaneous MultiThreading, not Symmetrical (Score:5)
SMP = Symmetric Multi Processing. "Symmetric" refers to the fact that all of the CPUs are considered "equal" by the OS and each has full access to DRAM, IO devices, etc.
SMT = Simultaneous MultiThreading. "Simultaneous" refers to the fact that a single CPU core can process multiple execution threads at the same time.
Someone from AMD's marketing department needs to take CPU architecture 201.
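For the curious, on a Linux box you can see the "simultaneous" part directly: each physical core exposes its logical-CPU siblings in sysfs. A minimal sketch (this is the standard Linux topology interface; on a non-SMT chip each list holds a single entry):

/* Sketch: list the logical CPUs sharing cpu0's physical core.
 * On a 2-way SMT chip this prints two IDs, e.g. "0,32". */
#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fopen(
        "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("cpu0 shares its core with logical CPUs: %s", buf);
    if (f)
        fclose(f);
    return 0;
}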
Re: (Score:1)
Server? (Score:3)
32-cores ought to be enough for anybody! (Score:1)
Powaaaa (Score:2)
I hope this really works out; I need more processing power for my rendering jobs. With Intel suggesting new CPUs won't be faster, just more energy efficient, I have no one else to look at but AMD.
Re: (Score:1)
"let's just have more of them"
Maybe they are presenting a false pension savings plan.