AMD Technology News

CERN Engineer Details AMD Zen Processor Confirming 32 Core Implementation, SMT (hothardware.com) 135

MojoKid writes: AMD is long overdue for a major architecture update, though one is coming later this year. Featuring the codename "Zen," AMD has already provided a few details, such as that it will be built using a 14nm FinFET process. In time, AMD will reveal all there is to know about Zen, but we now have a few additional details to share thanks to a computer engineer at CERN. CERN engineer Liviu Valsan recently gave a presentation on technology and market trends for the data center. Around two minutes into the discussion, he brought up AMD's Zen architecture with a slide that contained some previously undisclosed details. One of the more interesting revelations was that upcoming x86 processors based on Zen will feature up to 32 physical cores. To achieve a 32-core design, Valsan says AMD will use two 16-core CPUs on a single die with a next-generation interconnect. It has also been previously reported that Zen will offer up to a 40 percent improvement in IPC compared to AMD's current processors, as well as symmetric multithreading, or SMT, akin to Intel's Hyper-Threading. In a 32-core implementation this would result in 64 logical threads of processing.
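The core/thread arithmetic in the summary can be sketched in a couple of lines (Python; the SMT factor of 2 is the summary's claim, not a measured value):

```python
physical_cores = 32        # per the leaked Zen slide
smt_threads_per_core = 2   # SMT akin to Intel Hyper-Threading

logical_threads = physical_cores * smt_threads_per_core
print(logical_threads)     # 64, matching the summary
```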
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Catch up to Intel? (Score:5, Interesting)

    by SultanCemil ( 722533 ) on Saturday February 13, 2016 @05:05PM (#51502383)
    So is this actually going to catch up to Intel? It'd be great to have meaningful competition in the CPU space again....
    • Re: (Score:2, Interesting)

      by Anonymous Coward

At this point, it's all about small gains until multithreading goes beyond 2-4 cores for mainstream apps. As it stands, Intel has an edge on AMD as far as efficiency goes... and likely will in the processors to come in the next 2 years. Outside of cores and these last few nanometers there isn't really anywhere else to go except minor optimizations. After the 8 nm level I wouldn't be surprised if they move to 3D processing or something similar. So maybe by 2020 things will jump to a new fresh level and another

    • by future assassin ( 639396 ) on Saturday February 13, 2016 @05:23PM (#51502465)

Just 90% there at 50% cheaper. I'm 100% happy with my A10 5800K for way cheaper than what it would have cost me to go with Intel. Late 2016, the year of AMD.

      • by eriks ( 31863 ) on Saturday February 13, 2016 @05:39PM (#51502575)

This. The A10-7800 in my rig is as power-efficient (at idle) as a similarly spec'd Intel i5 box would be, has superior on-die graphics (which, admittedly, I barely use) and came in about $300 less for an mITX mainboard, proc & memory. I could have paid $1000 or more extra for a high-end Intel i7 workstation, which would have given me maybe 30% higher performance (at best), most of which I'd never notice. AMD wins as far as I'm concerned, and they should make some inroads in the server space with Zen.

        • by MrL0G1C ( 867445 ) on Saturday February 13, 2016 @06:48PM (#51502917) Journal

          has superior on-die graphics

As a gamer who always buys graphics cards, I don't want my CPU to cost more and have die space wasted on a GPU that I will never use. If Intel, AMD and Nvidia got their act together, my GFX card could be turned off whilst I'm not playing games and that onboard GPU would have a use, but sadly they haven't, and gigawatts of electricity are wasted. Even getting AMD chips to use power saving has been a total pain to get working in the past, and the amount of power saved could buy a new processor over a few years.

          • by alvinrod ( 889928 ) on Saturday February 13, 2016 @07:18PM (#51503109)
            AMD has a technology they call dual graphics [amd.com] so their APUs could work in conjunction with a discrete GPU, similar to how you can Crossfire two discrete GPUs together already. It's probably more geared towards notebooks where the APU can get by driving the display and the GPU can sit idle. One review [hardwaresecrets.com] found that it could give substantial performance increases for some games, but it depends on driver support as well as where the performance bottleneck is at.
            • by Anonymous Coward
              DX12 allows for AMD+Nvidia pairing of cards. Some games are actually showing that you can get better performance with one of each than both the same. DX12 turns all GPUs into dumb compute engines.
Unless there's some monkey business going on in the drivers (hey, it's been known to happen [gamedev.net]), I'd be very sceptical that pairing two different manufacturers' cards gives better performance than pairing two from the same manufacturer, all else being equal.
                • by mikael ( 484 )

Different GPUs would have different levels of efficiency for various tasks. This would depend on cache sizes, floating-point precision, number of parallelized logic units, queueing, cross-bar switching, and all sorts of other parallel processing tweaks. Data flow design isn't any different from getting as many customers through a Disney theme park as fast as possible.

                  Whichever GPU is faster is going to do most of the work.

Yes, but these are all qualities so ephemeral to all but the most diligent consumer that I would doubt any benchmark is reliable except under very specific loads. And then you have to factor in the optimisations for these specific loads that manufacturers include in their drivers, which vary with driver version. Judging performance reliably (at the moment at least) is very hard indeed.
            • One review found that it could give substantial performance increases for some games, but it depends on driver support as well as where the performance bottleneck is at.

              This remains the AMD problem. The hardware has awesome price-performance, I am still using one of their CPUs and I started with a K6. But the graphics driver problem continues. nVidia's not perfect either, but at least it usually works.

              • by KGIII ( 973947 )

                I don't usually have problems with AMD CPUs but I do with the ATi GPUs and, sometimes, the on-board stuff. So, I usually go with nVidia for my GPU. Like you, I was first exposed to AMD with the K6 line. In my case, it was the AMD K6-2 350 but I kept it OCed as it was still stable at just a bit under 500 MHz. (I forget the exact number.) If I could keep it cold, I could actually wind it up a bit further.

                For amusement, I had it wrapped in plastic and sitting in a freezer for a while - not long-term or anythin

          • Comment removed (Score:5, Interesting)

            by account_deleted ( 4530225 ) on Saturday February 13, 2016 @07:28PM (#51503179)
            Comment removed based on user account deletion
            • I was curious about your post because I had not heard of Zerotech. Googling "AMD Zerotech" gave me very few results outside of some sort of drone, a malware site, and a similar post from you on Soylent News. Can you provide a link to describe what you're talking about?
            • by dave420 ( 699308 )

              AMD doesn't seem to have a technology called "ZeroTech". At least they've avoided mentioning it at all on their website...

          • by aliquis ( 678370 )

            As a gamer who always buys graphics cards, I don't want my CPU to cost more and have die space wasted with GPU that I will never use.

            An Athlon X4 860K will ~be the A10 7850K without integrated graphics.

You'll save quite a bit of money that way, which you can put towards a better graphics card instead.

That processor is pretty shitty for modern titles and won't stretch all that well though; the Intel alternative would be the Pentium G3258, but that's not the best either. A step up would be the i3 and FX-6300, then the low-end i5 and FX-8300 series, and then Intel has you covered with higher-end i5 chips and the i7 when you want even more performance.

            I've sad

            • by MrL0G1C ( 867445 )

If AMD comes up with a chip that is 50% to 100% faster than my i7-3770K then I'll strongly consider going back to AMD, but current tech (AMD and Intel) has totally stagnated over the last decade.

I wouldn't touch Win10 with a barge pole; Win7 updates are turned off now.

I hope you don't do any banking on your system. I use GWX Control Panel so I can still get my security updates without the nagware.

FYI, Windows 10 is a little buggy but won't be bad once 10.1 Redstone comes out this summer to fix more of the bugs. MS has no plans for Windows 11 and wants it macOS-like, with .1 updates each year. You're going to use it eventually whether you like it or not. Might as well go with Redstone while it is still free.

                I like last.fm and crackle while I work and it's where the future i

                • by MrL0G1C ( 867445 )

If you browse the web or download software then you are also at risk of getting a Trojan which could steal your banking details, regardless of MS updates. If you allow some combination of JavaScript/WebGL/PDFs/Java/Flash/video etc. from dozens of sites to run in your browser for every website you visit, then you've increased your chance of getting hit by a drive-by vuln tenfold.

Most MS security updates relate to parts of the OS which I don't use; some are privilege escalation vulns, which let's face it, if b

                  • by dave420 ( 699308 )

                    Those big brother issues people still haven't been able to conclusively prove exist beyond "look! It's connecting to MS servers, so it must be sending all my work and my soul to Redmond!". I'm all for privacy, but jumping at shadows and spreading rumours is the antithesis of privacy advocacy.

                    • by MrL0G1C ( 867445 )

                      The only time my PC should be connecting to MS servers is when I'm doing an update, all of the rest should have an off switch.

                • FYI Windows 10... won't be bad... You're going to use it eventually whether you like it or not.

On the contrary, I did try Windows 10 and it finally convinced me to just use Debian [debian.org]* on everything. Admittedly, I only have five computers, not a datacenter full, but after giving Windows 10 what I consider a very fair shot, I then took Windows off every machine using a Debian install disc/USB stick. My only (admittedly distasteful) concession to Windows is that I have Vista in a VM for running TurboTax, and b

              • by aliquis ( 678370 )

                Their top of the line FX-processor vs yours:
                http://cpuboss.com/cpus/Intel-... [cpuboss.com]

Not too much of a difference, so maybe +40% IPC will actually reach the ~50%; I was going to say it won't, but yeah, maybe.

                • by MrL0G1C ( 867445 )

My processor is better by the looks of the page you linked, especially when it comes to power usage. And it's 3-4 years old now, and newer Intel chips aren't much better either.

                  • by aliquis ( 678370 )

                    Yeah, it's a 220 watt chip if I remember correctly.

                    But it's ~the fattest they have, so that's where 40% higher IPC would put them.

Intel has managed less than 10% improvement per generation for the last 4-5 generations.

              • by aliquis ( 678370 )

Considering one can get an 18 core 2.2 GHz Broadwell-EP chip for $999:
http://wccftech.com/ebay-xeon-... [wccftech.com] ... that makes one wonder what the 10 core and 8 core Broadwell-E chips will actually end up costing?

An 8-core Zen chip may reach what you're asking for vs your 4-core chip, but what about against the 6, 8 and 10 core Broadwell-EP ones?

Or maybe AMD will let a 16-core Zen out for normal desktop users too; if it fits the same cheap motherboard, I guess that could be somewhat disruptive.

          • by Bert64 ( 520050 )

            Such technology already exists but is generally used in laptops. You have a low performance integrated gpu, and a higher performance discrete one, and the discrete one remains powered off unless you're doing something which requires it.

What? I remember having a midrange Asus mobo that offered this feature back in 2006; it had an integrated GT8200, and if you used another nVidia card in PCIe you could choose to use only that one when gaming. Otherwise the integrated one ran almost 90% of the time, pretty decently btw. APUs have had this capability since the beginning; wasn't that the point of buying ATI?
        • We're going to standardize our family computers around that CPU. We don't need more GPU power than that APU provides, and the power (and money) savings are significant.

          • by aliquis ( 678370 )

            We're going to standardize our family computers around that CPU. We don't need more GPU power than that APU provides, and the power (and money) savings are significant.

            Why not wait for a Zen equivalent?

            (Early processors for the same socket seem to be of the old design.)

        • by aliquis ( 678370 )

          high-end intel i7 workstation, which would have given me maybe 30% higher performance (at best)

At that time, I don't know; now? Totally not true.

          The Athlon X4 860K would cost half as much but get rid of the integrated graphics, better for the gamer who would rather spend the money on the graphics card.

Anyway, the processor is SLOOOOW; a dual-core i3 6100 will perform about the same as the FX-6300, a quad-core will be better, and a quad-core i7 with hyper-threading better still.

          There's not 30% difference:
          vs somewhat older i7 4790K: http://cpuboss.com/cpus/Intel-... [cpuboss.com]
          Two generations older still i7 2700K vs A1

      • bunched up from moving around the chair too much?

This has always been AMD's angle in the business. Well, sorta. Fifteen years ago or so, their angle was to be as good for cheaper in the middle range of CPUs. Intel still had the upper hand for top chips. But Intel grew worried about AMD's breakthroughs, and used underhanded methods to keep AMD from nibbling more of its market. Intel threatened to stop supplying CPU dealers if they also carried AMD stuff. Not complying would have meant, for distributors, cutting themselves off from the very solvable popul

        • by aliquis ( 678370 )

The problem AMD has right now isn't about whether people are OK with what they have or not.

Regardless of what people may feel, AMD is losing serious cash each quarter; that's their main problem.

As for in what categories they can deliver without losing even more money... whatever stops them from losing money would be an advantage for them.

        • by 0111 1110 ( 518466 ) on Sunday February 14, 2016 @12:30AM (#51504545)

This has always been AMD's angle in the business. Well, sorta. Fifteen years ago or so, their angle was to be as good for cheaper in the middle range of CPUs. Intel still had the upper hand for top chips.

Were you living in a cave between 2000 and 2006? AMD was generally preferred by enthusiasts and gamers during Intel's infamous Pentium 4/Itanium/NetBurst/Rambus period for at least the first few years of the new millennium. It wasn't really until 2006, with Core (Conroe) replacing Pentium, that Intel finally took back the lead they had in the 90s. They released the excellent Pentium M CPU in 2003, but that was a mobile chip; only one or two highly specialized and expensive motherboards supported it. Intel finally realized that the whole Pentium 4 development branch was itself an inefficient long pipeline cache miss, but by the time they did, AMD was already the market leader in the enthusiast bleeding-edge market segment.

At best, Intel could have claimed to be tied with AMD in certain benchmarks, but most gamers and enthusiasts were going for AMD CPUs. During this period AMD CPUs often sold for equal or higher prices as well, although Rambus RDRAM was ridiculously expensive and raised the cost of the Intel platform a great deal.

          Take a look at this Extremetech article [extremetech.com]. Maybe that will jog your memory a bit.

          Also note Intel's naive optimism with respect to Moore's Law:

          Justin Rattner, Intel Fellow and director of Microprocessor Research at Intel Labs, predicts 10GHz by middle of the decade (which I read as end of 2005 to early 2006)

          • by dave420 ( 699308 )

You do realise that Justin Rattner is not Intel, right? I know it makes for an amusing point in an argument, but it's intellectually dishonest to claim he speaks for, or represents, all of Intel.

He was the director of microprocessor research at Intel Labs. Maybe not everyone in the company agreed with his views, but it seems significant to me that he believed they would reach 10GHz by 2006.

        • Also read this Anandtech article [anandtech.com] from 2005.

      • by KGIII ( 973947 )

        This... I'm not even all that worried about price but I find I get more than adequate value from AMD. I do buy nVidia GPUs normally but I almost always get an AMD CPU. I really don't even notice a difference.

        I linked to my current laptop the other day. That has an Intel. It's on purpose. I really liked the laptop. It's not really a typical laptop, it's a mobile work station from Titan. (The X4K model. All done up sexy, of course.) I picked an Intel because that was in the laptop that I wanted. If I'd been a

'AMD will use two 16-core CPUs on a single die with a next-generation interconnect.' Which is more or less what Intel did when they first came out with their dual-core CPUs, to try and catch up with AMD, who at the time had dual-core CPUs on a single die.
    • by Bert64 ( 520050 )

Intel did it again with the first quad-cores too; AMD delayed the release of their quad-cores in order to do a true quad-core design.

  • Or is "x86" assumed to be 64 bit now?

    Can anybody explain the terminology here?

    • by Anonymous Coward

Yes, x86 is assumed to mean the 64-bit version (x86_64) nowadays, which is a superset of the 32-bit and 16-bit versions of x86, all thanks to mode switching.

Yes, x86 is assumed to mean the 64-bit version (x86_64) nowadays, which is a superset of the 32-bit and 16-bit versions of x86, all thanks to mode switching.

        Don't forget the 8-bit processors. Some of us old timers cut our teeth on those 8-bit processors. Now get off my lawn!

Yes, x86 is assumed to mean the 64-bit version (x86_64) nowadays, which is a superset of the 32-bit and 16-bit versions of x86, all thanks to mode switching.

          Don't forget the 8-bit processors.

          Which weren't x86 processors.

          • Which weren't x86 processors.

That depends on what you consider to be an 8-bit processor. Based on other comments, the devil is in the details regarding the 8086/8088 processors. I pointed out to another poster that the 80186 had an internal multiplexed 20-bit bus and was available with an 8-bit or 16-bit external data bus. Unless someone changed the definition of a processor in the last 40 years, the data bus determines the bit-width of a processor.

            • Which weren't x86 processors.

That depends on what you consider to be an 8-bit processor. Based on other comments, the devil is in the details regarding the 8086/8088 processors. I pointed out to another poster that the 80186 had an internal multiplexed 20-bit bus and was available with an 8-bit or 16-bit external data bus. Unless someone changed the definition of a processor in the last 40 years, the data bus determines the bit-width of a processor.

              If so, then it's an 8-bit processor that implemented a 16-bit instruction set, then, just as the IBM System/360 Model 30 was an 8-bit processor that implemented a 32-bit instruction set and the Motorola 68000 was a 16/32-bit processor that implemented a 32-bit instruction set. From the programmer's point of view, the 8088 had 16-bit registers, 16-bit arithmetic instructions, and 16-bit "flat" addresses, just as the 8086 did.

              What defines the bit width of an instruction set isn't connected to data bus width

              • What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.

That's news to me. When I was doing electronics as a teenager in the 1980s, an 8-bit processor had eight data lines, a 16-bit processor had 16 data lines, and a 32-bit processor had 32 data lines. I recently saw a 64-bit microcontroller that implemented one-half of the data bus (32 bits) as four 8-bit serial ports (four pins). I'm not sure if that's a four-bit or two-bit design.

                • What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.

That's news to me. When I was doing electronics as a teenager in the 1980s, an 8-bit processor had eight data lines, a 16-bit processor had 16 data lines, and a 32-bit processor had 32 data lines. I recently saw a 64-bit microcontroller that implemented one-half of the data bus (32 bits) as four 8-bit serial ports (four pins). I'm not sure if that's a four-bit or two-bit design.

                  Again, there's the width of the processor's external bus, the width of the processor's internal signal paths, and the width of the registers and instructions of the instruction set the processor implements. Nothing ties the first two of those to the third of those, as evidenced by various models of the System/360 series (the I/O interface [bitsavers.org] had 8 "bus in" lines, 8 "bus out" lines, and various control lines; the processors had internal signal paths ranging from 8 to 32 bits for integer and address operations;

                  • the Motorola 68000 series (the 68000 and 68010 had a 24-bit address bus and a 16-bit data bus, and a 16-bit ALU for data operations; the instruction set had 32-bit registers and arithmetic instructions and 24-bit physical addresses, extended to 31-bit physical addresses with the 68012 and 32-bit physical addresses with the 68020 and subsequent processors, which had 32-bit internal data paths),

                    And the 68008 had an 8-bit data bus, but was internally like a 68010, with 32-bit registers and arithmetic instructions and a 16-bit ALU for data operations.

                    On the other side, the 32-bit original Pentium had a 64-bit external data bus.

So you have the external bus width, the internal data path width, and the instruction set width(s) (registers, arithmetic instructions, addresses, etc.), which can vary somewhat independently; it might be appropriate to use the external bus width as an indicator of the bit widt

                • Do you realize that under your definition, you probably posted that with a 256-bit computer?
                  • Do you realize that under your definition, you probably posted that with a 256-bit computer?

                    Probably a 128-bit computer. It's an Intel Celeron dual-core processor. That could probably explain why my inexpensive Dell laptop is so snappy.

                • by mikael ( 484 )

Sometimes the internal CPU data bus can be 128 bits, 256 bits, or 512 bits, but the external data bus on the board is 64 bits. There isn't anything to stop the two being different sizes except the bus protocols for sending and receiving data. This applies to the address bus as well. Some 8-bit systems got around the memory limitation of 64K by having a hardware page register that could select a particular bank of memory visible through a virtual "window". PCs from the 1990s used segmented memory where everyt

                • What defines the bit width of an instruction set isn't connected to data bus width, as different implementations of the same instruction can have different data bus widths.

That's news to me. When I was doing electronics as a teenager in the 1980s, an 8-bit processor had eight data lines,

                  A microcontroller has a microprocessor in it, yet may only expose a handful of data lines, and not even have enough to make a proper bus as wide as what it can process internally. The interface is not the most relevant feature. The most relevant feature is the size of the data type which can be processed. The second most relevant feature is the instruction size. But frankly, nothing is more relevant than the size of the general purpose registers, which defines that first part.

                  • The interface is not the most relevant feature.

                    Although I took intro electronics in college, I never pursued it as a career and eventually ended up in IT. I've gotten back into electronics as a hobby now that I have the time and money. (As a kid, I had the time but not the money.) I'm going through various designs to press a button to increment a counter from 0 to 9 on a LED display. My focus is on the "data" lines between different chips.

                    1. One line inverted from the switch with a debouncer to the clock input of the decade counter.
                    2. Nine lines (1-9) invert
The devil is really in the details with the x86 series. Intel has claimed whatever suited them over the years. As such, the 8088 was a 16-bit processor, even though it had an 8-bit data bus. The 386 was a 32-bit processor, even though the 386SX chip had a 16-bit data bus. The Pentium was a 64-bit processor, because it had a 64-bit data bus; however, the first-generation Pentium processors only had a 32-bit ALU.

              Modern processors almost defy description by data bus width. There are so many DRAM a

              • by Bert64 ( 520050 )

Generally, those processors which used a narrower external bus did so because memory and other peripherals were already more widely available (and cheaper) for the narrower bus...
Processors in those days also generally operated internally at the same clock rate as the bus, so memory was much less of a bottleneck than it is today. Some processors, such as the Motorola 68040, were advertised based on their bus clock rather than the internal clock.

                Some highend machines had a much wider memory bus, for instance

The register size determines the "bit-width", not the bus.
Or a 6502 would be a 16-bit processor because it has a 16-bit address bus.

Or a 6502 would be a 16-bit processor because it has a 16-bit address bus.

I was referring to the data bus on the processor. The 6502 was an 8-bit processor with eight lines for the data bus and 16 lines for the address bus (64K RAM). The 65816 was an 8/16-bit processor with eight lines for the data bus and 16 lines for the address bus, multiplexed for a 24-bit memory space (16MB). The data bus is the set of parallel lines that run out to the memory chips.

You always need to check the schematics when designing electronics around a particular processor. The 68000 processors had a 32-

            • Based on other comments, the devil is in the details regarding the 8086/8088 processors.

...only to people who think that the bus width defines the arity of the processor. This idea is only accurate if you are a raging novice on the subject.

Under their definition, my several-year-old 4-core A10-6800K is a 128-bit processor and the latest processors are all 256-bit. This should be the end of the debate on that matter, because it's a stupid definition imagined by what are ostensibly outsiders to the subject.

              The number of address lines is also not suitable, as under that definition there still

              • This idea is only accurate if you are a raging novice on the subject.

When I was studying electronics in the 1980s, the most common processors available to the home hobbyist had a fixed data bus: 8-bit processors had eight lines, 16-bit processors had 16 lines, and 32-bit processors had 32 lines. When I got into college and took intro electronics, I no longer wanted to do electronics as a career and eventually found my way into software testing and IT. Now that I have time and money, I'm getting back into electronics as a hobby. With all the datasheets available on the Inter

        • by JanneM ( 7445 )

          Some of us old timers cut our teeth on those 8-bit processors.

          I would have used proper dental tools myself. To each their own of course.
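For what it's worth, the mode question at the top of this sub-thread is easy to check on your own box (a quick sketch in Python; the exact machine string varies by OS, e.g. 'x86_64' on Linux, 'AMD64' on Windows):

```python
import platform
import struct

# Machine type as the OS reports it
print(platform.machine())

# Pointer width in bits: 64 under x86_64 (long mode), 32 under 32-bit x86
print(struct.calcsize("P") * 8)
```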

  • So both Intel and AMD have basically given up on increasing clock speeds and all we're going to get in the future are more and more cores?
    • by mikael ( 484 ) on Saturday February 13, 2016 @05:42PM (#51502597)

They'll do research and try to raise clock speeds, but the amount of heat generated, and hence the cooling required, is proportional to the square of the clock speed. The faster you try to change the state of something (electric charge), the more heat is generated. They might be able to switch to optical computing, and then the heat problem goes away. Maybe they'll get more efficient CPUs with fewer transistors and more parallelization.

      But, it's far simpler to just add more cores as transistor sizes shrink by a half every year or two. That's guaranteed.

      • by Theovon ( 109752 )

Power is linear with clock speed and quadratic with respect to voltage: P = αV²f

        • by mikael ( 484 )

          Here's what I was thinking of...

          https://en.wikipedia.org/wiki/... [wikipedia.org]

          Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example);

    • by wooferhound ( 546132 ) <tim@woo f e r h o u n d.com> on Saturday February 13, 2016 @05:53PM (#51502637) Homepage
      640 cores should be enough for anybody
    • by Z80a ( 971949 )

The heat and power consumption start to scale up exponentially from where they are.
Of course, they could well start to shrink the pipelines, as we probably don't need those huge things anymore.

  • The second thing to consider is that it's highly unlikely AMD would release a 32-core processor into the consumer market.

We won't be able to buy these CPUs?
    • by ebh ( 116526 )

      Depends. They may only go through a reseller channel, meaning that you'd have to do the PITA quote/invoice/purchase order thing instead of clicking "add to cart". But eventually, someone like Newegg will become an authorized reseller, making the parts as easy to get as any other.

8 RAM channels? But how many PCIe lanes and how many HTX links?

Could make for a good VM host, but it will need good network/storage IO links.

    • by afidel ( 530433 )

A VM host really only needs x12 of PCIe 3 for a dual-socket system: x4 for a dual-channel 10Gb NIC and x8 for a dual-channel 16Gb HBA; up it to x24 links if you need 40GbE. 8 channels is nice, as it allows you to do 1TB of full-speed RAM in a dual-socket system using relatively inexpensive 32GB DIMMs, which gives you 8GB per thread, more than enough for most workloads (you might even choose to go with 512GB of RAM if your workload is more CPU- than RAM-limited and save a good chunk of change).
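afidel's memory arithmetic checks out; here is a sketch of the numbers (Python, assuming 2 DIMMs per channel, which the post implies but doesn't state):

```python
sockets = 2
channels_per_socket = 8    # the 8 RAM channels under discussion
dimms_per_channel = 2      # assumption; needed to reach 1 TB
dimm_gb = 32               # "relatively inexpensive 32GB DIMMs"

total_gb = sockets * channels_per_socket * dimms_per_channel * dimm_gb
print(total_gb)            # 1024 GB, i.e. 1 TB

# 32 cores x 2 SMT threads per socket gives 128 threads in a dual-socket box
threads = sockets * 32 * 2
print(total_gb // threads) # 8 GB per thread, as stated
```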

  • PLEASE be a good chip. My i7-920 is starting to sweat.
    • by MrL0G1C ( 867445 )

8 years after this chip was released, and the current chip isn't even twice as fast!! Silicon CPU progress has almost ground to a halt :-(

      http://www.cpu-world.com/Compa... [cpu-world.com]

      • by afidel ( 530433 )

        180% of the performance at half the TDP, how horrible...

        Oh, and the newer one has an integrated GPU

        • by MrL0G1C ( 867445 )

180% of the performance used to take 1 to 2 years, not ten. The latest increments in performance are now only 5-10%. Also, there were CPUs with much lower TDPs prior to the 920. Silicon has reached the end of the line; I look forward to 10GHz to 100GHz chips with some other technology.

        • In March 2000, Intel released the 1GHz Pentium III. In March of 1992, Intel released the 66MHz 486DX2. That's a huge difference. You could probably swap my 3770K for either the 920 or the 6700K and I probably wouldn't even notice. Even the power usage wouldn't be all that different, unless I left it on 24/7 running Boinc or something like that.

  • SMP = Symmetric Multi Processing. "Symmetric" refers to the fact that all of the CPUs are considered "equal" by the OS and each has full access to DRAM, IO devices, etc.

    SMT = Simultaneous MultiThreading. "Simultaneous" refers to the fact that a single CPU core can process multiple execution threads at the same time.

    Someone from AMD's marketing department needs to take CPU architecture 201.

  • by WorBlux ( 1751716 ) on Sunday February 14, 2016 @01:52AM (#51504713)
    Zen Opteron sounds pretty cool, especially if paired with coreboot and a mini-ITX form factor.
  • Why would anyone possibly need more?
I hope this really works out; I need more processing power for my rendering jobs. With Intel suggesting new CPUs won't be faster, just more energy-efficient, I have no one else to look to but AMD.
