Intel Says Chips To Become Slower But More Energy Efficient (thestack.com) 337
An anonymous reader writes: William Holt, Executive Vice President and General Manager of Intel's Technology and Manufacturing Group, has said at a conference that chips will become slower after the industry re-tools for new technologies such as spintronics and tunneling transistors. "The best pure technology improvements we can make will bring improvements in power consumption but will reduce speed." If true, it's not just the end of Moore's Law, but a rolling back of the progress it made over the last fifty years.
Power efficiency is good in some places, not all (Score:5, Interesting)
Hopefully if this does happen they will keep making the existing products, at least until they *do* manage performance improvements that catch up with or exceed the older parts. Where I work we have lots of customers that *need* more processing power, and efficiency be damned.
Re:Power efficiency is good in some places, not all (Score:4, Interesting)
On the other hand, there's certainly a market for more efficiency, especially in mobile devices, so I can see lines of chips designed for that heading in the direction described.
Re: (Score:3)
Only for embarrassingly parallel workloads.
Re:Power efficiency is good in some places, not all (Score:5, Insightful)
No, a lot of applications don't scale well across multiple cores / CPUs.
Re:Power efficiency is good in some places, not all (Score:4, Insightful)
No, a lot of applications don't scale well across multiple cores / CPUs.
In 2016 they don't. But as chips evolve the applications will as well.
That's what they said in 2006, when CPU clock speeds essentially hit the wall.
Mainstream CPUs started going multi-core back then. Some things parallelize quite well, and the tools are making it easier for them to do so today, but there's still a lot of sequential crunching to do for a lot of jobs. We're not likely to see a 1000 core 200MHz chip out-performing a 2 core 2GHz chip for "average desktop applications" anytime soon.
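To put rough numbers on that claim, here's a back-of-the-envelope Amdahl's Law sketch (Python; the 90% parallel fraction is an assumption, and a generous one for desktop software):

def effective_speed(ghz, cores, p):
    # Throughput relative to a 1 GHz single core, per Amdahl's Law:
    # speedup = 1 / ((1 - p) + p / n), scaled by the clock rate.
    return ghz / ((1 - p) + p / cores)

p = 0.9                               # assume 90% of the work parallelizes
print(effective_speed(0.2, 1000, p))  # 1000 cores at 200 MHz -> ~2.0
print(effective_speed(2.0, 2, p))     # 2 cores at 2 GHz -> ~3.6

Even with 90% of the work parallelizable, the two fast cores win; at more realistic parallel fractions the gap only widens.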
speculative execution etc. With 1024 cores ... (Score:3)
> consider, as a thought experiment, any task where the outcome of the first "step" determines the parameters for the next.
> There is no way to complete this overall task in parallel
In fact it's sometimes trivial. Consider this code, in which 'the outcome of the first step determines the parameters for the next':
HasPMI = IsMoreThan80()
PaymentAmount = CalculatePayment(Balance, HasPMI)
If you have 1024 cores, you can easily run CalculatePayment() in parallel with the line before it. You run it for both possible values of HasPMI, then keep whichever result the first step turns out to require.
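Here's a minimal sketch of the idea (Python; the functions are hypothetical stand-ins for the pseudocode above, not anyone's real API):

from concurrent.futures import ThreadPoolExecutor

def is_more_than_80():                     # stand-in for IsMoreThan80()
    return True

def calculate_payment(balance, has_pmi):   # stand-in for CalculatePayment()
    return balance * (0.006 if has_pmi else 0.005)

balance = 200_000
with ThreadPoolExecutor() as pool:
    # Speculatively start the second step for BOTH possible outcomes...
    with_pmi = pool.submit(calculate_payment, balance, True)
    without_pmi = pool.submit(calculate_payment, balance, False)
    has_pmi = is_more_than_80()            # ...while the first step runs.
    # Keep whichever result the first step calls for; the other is wasted work.
    payment = (with_pmi if has_pmi else without_pmi).result()

Two of your 1024 cores burn time on a result you'll throw away, but the critical path shortens - which, as a reply below notes, is exactly what hardware speculative execution does.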
Re:speculative execution etc. With 1024 cores ... (Score:5, Insightful)
1024 cores will make it possible to get ten steps in, assuming each step is a binary choice (2^10 = 1024). The software I work with is way more complex than that. Not to mention, cache coherence is going to be a big problem, and multiplying the power draw and heat production by a thousand may be inconvenient.
There are ways to make problems more parallelizable, but they aren't going to work on all problems. Some problems are just really, really difficult to split up efficiently.
Re:speculative execution etc. With 1024 cores ... (Score:4, Informative)
It's actually called speculative execution, and we already do it. For many tasks it's a significant performance increase.
Re: (Score:3)
Where I work we have lots of customers that *need* more processing power, and efficiency be damned.
I assume most customers who need extreme processing power have learned over the past 10 years that faster individual processors are not coming. Algorithm design plus parallel processors is going to be the source of perhaps all performance increases in the foreseeable future. Until we move away from silicon that is.
Are there even supercomputers out there which have faster processors than the fastest Xeon processors out there? I may be wrong, but I believe there really hasn't been any non-parallel based performance increases for a long time.
Re: (Score:3)
Where I work we have lots of customers that *need* more processing power, and efficiency be damned.
I assume most customers who need extreme processing power have learned over the past 10 years that faster individual processors are not coming. Algorithm design plus parallel processors is going to be the source of perhaps all performance increases in the foreseeable future. Until we move away from silicon that is.
Are there even supercomputers out there which have faster processors than the fastest Xeon processors out there? I may be wrong, but I believe there really hasn't been any non-parallel based performance increases for a long time.
Yes there has, but more through architectural changes - new instructions, new modes, bigger and better caches, improved offload models, task-specific hardware (like crypto, packet moving, etc.). This has been enabled by the increasing number of transistors on die and is driven by the mobile and server markets, which have evolving and quite different needs. Xeons today do more work per second than Xeons in the past, and that scaling is greater than the scaling of individual CPU core performance.
Re:Power efficiency is good in some places, not all (Score:4, Insightful)
Of course, the fundamental problem this presents is that it does *not* automatically result in improved performance.
Architectural changes require that performance code be tuned or re-tuned, which means every at-scale application has to be somewhere between rejiggered and given a huge dedicated rewrite effort (The DOE's upcoming 300 petaflop GPU machine will have exactly ten applications that can run at full scale, each of which will have an entire dedicated team rewriting it to do so). And, of course, Amdahl's Law puts an ironclad limit on the effect that more parallel hardware can have on performance, and some problems simply cannot be parallelized no matter how much we wish otherwise.
Contrast with the effect of improving the serial performance of hardware: All else being equal, double the CPU and memory clock rates and absolutely every program will run twice as fast, full stop. That was the desktop miracle from 1990 to 2003 or so - the same exact code screamed twice as fast every year.
But as processors trend towards slower and wider, everything becomes an exercise in parallel programming. OpenMP parallel, MPI parallel, SSE SIMD instructions, GPU SIMD parallel... It's harder to do at all, and harder yet to do *right*, and historically the average programmer has enough trouble working with a runtime that's sequentially consistent.
Rant aside though, I think you're right - until we move to diamond substrates & heatsinks, we've hit the thermal brick wall (actually we hit it circa 2003) and there will not be any further increases in serial processing speed. Plus, AFAICT, there's a similar brick wall with access rates to DRAM and the fact that it requires a microwave-frequency bus with literally hundreds of pins extending for entire centimeters... so forget that too.
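For a sense of why those centimeters matter, a rough physics check (the trace velocity and bus clock here are illustrative assumptions):

c = 3e8             # speed of light, m/s
v = 0.5 * c         # signal velocity in a PCB trace, roughly half of c
f = 1.6e9           # a 1.6 GHz DDR bus clock, for illustration
print(100 * v / f)  # ~9.4 cm traveled per clock cycle

When traces run a meaningful fraction of that distance, every one of those hundreds of pins becomes a transmission line that has to be length-matched and terminated, which is part of why DRAM access rates are so hard to push.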
Re: (Score:2)
If Intel sticks to what they've done in the last few product generations, they'll still have higher-wattage higher-performance chips at the upper end for servers and workstations. But the ULV parts have been staying at basically the same performance now for a few years, with drastically reduced energy use. I think the current parts are under 4 watts for the same performance you used to have to spend 18 watts to get.
Re: (Score:2)
Here's the thing though. Even if chips remain equally powerful or 10% slower... if they could fit a 40 core Xeon into a 10watt atom power profile that would be a MASSIVE performance increase in mobiles. I'm relatively satisfied with CPU performance these days with a dual Xeon. If it meant I could get a current workstation in a mobile form, great! However I'm assuming that GPUs do keep improving and we finally see openings for specialized chips for physics and raytracing--the last two areas that would re
Optical is the Future (Score:5, Informative)
Optical [eetimes.com]
Like commercial airplanes (Score:4, Interesting)
A flight from London to New York takes as long today as it did about 50 years ago. But the current planes achieve that more efficiently, with slightly larger windows, and some more pressure and humidity in the cabin. How depressing to think that the computing world might be about to enter a similarly dismal stage as well.
Re: (Score:2)
Concorde was too expensive to succeed in the market - even people who could afford it and were flying routes that Concorde also flew just didn't care enough about the time to spend the money. If you could get them there at hypersonic speeds, 60 minutes NY-London for Concorde prices, then you might have gotten more interest, but the difference between 6 or 7 hours and 3 or 4 just wasn't enough to tempt enough people - with pre-boarding prep and ground transit, either option basically consumed a full day.
Re: (Score:3)
....or it can unavoidably probe every fucking network port on an 8 port system, Solaris style. PXE boot? PXE boot? PXE? PXE? omg staaaaahp
Everything I see in a server room takes like five damned minutes to even start loading the OS.
Re: (Score:3)
concord is a grape. Concorde is a supersonic aircraft that carries passengers.
Not any more it doesn't.
Slower chips but multiple cores (Score:4, Interesting)
Lots of ways to get "speed."
Umm no (Score:2)
Re: (Score:2)
Bottom line: unless programs take advantage of multiple cores - and they don't - you want faster frequencies, not more cores.
Or more instructions per clock... or perhaps it's time to start getting serious about clockless designs.
But, in reality, I think nearly all of the software that really needs lots of power has been parallelized for a few years now. For a couple of decades supercomputers have been all about massive numbers of cores on fast interconnects. The largest computations are done on tens to hundreds of thousands of multi-core computers -- the "computer" is an enormous data center. On the desktop, CPUs are more than fast enough for most of what people do.
Re: (Score:2)
It really depends what you're trying to do. Amdahl's law will always apply. A single fast core will always be more versatile than multiple slower cores because the fast single core can perform equally well in sequential and parallel algorithms. Of course, we will always have to compromise and make do with what we can physically make.
If each step in your algorithm depends on every single preceding step then you will probably want the fastest single core you can get your hands on within a reasonable budget in
Defining "Progress"... (Score:2)
"...If true, it's not just the end of Moore's Law, but a rolling back of the progress it made over the last fifty years."
While Moore's Law was fun to watch and experience while it lasted, this is a bit of a slap in the face when defining progress.
It's kind of like trying to define the transition from the gas-guzzling muscle car era to the fuel-efficient compact car era as rolling back progress.
Regardless of which finite resource we're talking about, there are plenty of good reasons for humans to consume less of it.
Re: (Score:2)
You can find lots of car enthusiasts who won't hesitate to tell you it was exactly that.
Re: (Score:2)
Yeah, I know a lot of people who would gloss over profiling/optimization, with the mindset that it is a waste of their time because the CPU would cover up for their laziness.
With AMD out of the way Intel can F*** us. (Score:3)
With AMD out of the way Intel can F*** us.
First they cut the PCIe lanes down on a $300-$350+ chip, forcing you to pay up to $350-$400, but then you need to jump to $500-$600 to get the same as last gen plus a small clock speed boost. This is on the server / high-end workstation side.
On the desktop side they are still on DMI (now at PCIe 3.0) plus 16 PCIe 3.0 lanes. Why no QPI link to the chipset, like AMD's HTX?
The problem is lackadaisical battery manufacturers (Score:2)
Efficiency is good, no doubt. But the electricity to run your computer, tablet, or phone, vs. the rest of your house, is comparatively very little. It's almost trivial even... except for those mobile devices that are dependent on a battery. And the sloth and complacency of the battery manufacturers vs. the tech industry is what's holding us back. If they were investing into the R&D to keep up with Intel and Moore's law... doubling their capacity every 18 months as well... performance compromises like these wouldn't be needed.
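For what it's worth, "doubling every 18 months" compounds brutally fast; a quick check (the 20-year window is arbitrary):

years = 20
print(2 ** (years * 12 / 18))  # ~10,000x capacity in 20 years

No battery chemistry has come anywhere near that; historically, energy density has improved on the order of single-digit percent per year, which is roughly the point the reply below makes.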
Re: (Score:2)
And the sloth and complacency of the battery manufacturers vs. the tech industry is what's holding us back. If they were investing into the R&D to keep up with Intel and Moore's law...
And how many trillions would this cost? There are actually massive investments into battery technology. We've come a long way in the last 20 years. But consider: they're figuring out that we had batteries way back in BC times. The Greeks had them, sort of; they think they were used for electroplating stuff.
But they started entering common use in the 19th century. We've put a huge amount of development work into them. But batteries, it turns out, run into physical laws much quicker than the 'completely
Re:The problem will be lackadaisical programmers (Score:2)
The problem is that programmers have gotten lazy (excuse me: "man-power efficient") off of the free speed we've been adding over all of these years. Layers upon layers of abstraction from machine code have made it possible to code in languages which are far removed from the actual code that runs on machines. There may now come a time when efficiency of programming matters to everyone, not just the embedded folks.
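A toy illustration of the kind of cost those layers hide (Python standard library; absolute timings vary by machine, so treat the numbers as indicative only):

import timeit

data = list(range(1_000_000))

def hand_loop():
    total = 0
    for x in data:     # interpreted bytecode, one object at a time
        total += x
    return total

print(timeit.timeit(hand_loop, number=10))          # hand-written loop
print(timeit.timeit(lambda: sum(data), number=10))  # same work in the C builtin

Same result, several times the cost; when the hardware stops handing out free speed every year, differences like that stop being ignorable.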
Not good for per-core licensing (Score:2)
Server 2016 is moving to per-core licensing, which favors fewer, overclocked cores.
A little hardware speed reduction is fine ... (Score:2)
What I don't understand... (Score:2)
What I don't understand is why are last generation parts not dropping in price? For the longest time, whenever new stuff came out, the prices of older stuff dropped. But that doesn't seem to happen anymore.
What's up with that?
Re: (Score:2)
This is great news for any disruptive new products, if all the brand can now sell is slower chips that offer energy savings.
Lead story doesn't understand Moore's Law (Score:4, Insightful)
Did I just hear Apple giggle in the background? (Score:2, Interesting)
The Cortex-A series of chips appears to be catching Intel's CISC parts in some of the raw compute numbers on a per-core basis. Will this possibly rekindle the RISC vs CISC battles of the '90s?
Not rolling back (Score:2)
Umm, that's not rolling back. It's a tradeoff.
Spin: Misrepresentation of what is really happening (Score:2)
Think of the children... (Score:2)
What about people that care more about performance (per thread) than power consumption? Will we be stuck on old technology?
That sounds like a performance improvement to me... (Score:2)
They are not fast enough by far (Score:2)
More and slower can do much (Score:5, Insightful)
You can have strong AI in ~20W, because that's what our brain uses. Each neuron is really, really slow, 100 Hz and below, but when you have absurdly many of them it works. The problem is understanding the programming model, because it's nothing like our one list of instructions.
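Putting very rough numbers on that (the counts are common textbook estimates, not from the parent post):

neurons = 8.6e10      # ~86 billion neurons
synapses_per = 1e4    # ~10,000 synapses per neuron
rate_hz = 10          # mean firing rate, well under the ~100 Hz ceiling
watts = 20

events = neurons * synapses_per * rate_hz  # ~8.6e15 synaptic events/s
print(events / watts)                      # ~4e14 events per joule

Enormous width at tiny clock rates is exactly the trade-off being described; the hard part, as the post says, is the programming model.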
Re: (Score:3, Interesting)
On the other hand, designs with less energy loss will open up the potential of higher speeds, once the techniques get refined.
One of the (many) limiting issues with trying to force current CPUs faster is that the waste energy grows quickly as you increase switching frequency. Energy density becomes a significant problem, and manufacturers are not content with the idea of making all consumer devices use liquid-cooling and/or refrigeration techniques to prevent CPU melt. Take a couple of years learning a more efficient switching technology first, and the headroom for higher speeds comes back.
Re: (Score:3)
Well, I suppose you can call it air since 80% of air is nitrogen, but using liquid nitrogen and calling it "air cooling" is a little bit misleading, don't you think? ;)
Re: (Score:2)
In addition, lower power per slower transistor does not imply slower per unit area or less work per joule.
I have been involved in the development of parts that, while smaller and lower power, are in fact faster and lower power per unit area than their larger, faster counterparts.
This implies greater performance goes hand in hand with greater parallelism.
Re: (Score:3)
And if they're having a significant reduction in power consumption, then adding more cores gets all the easier.
It's always seemed to me that the best approach to processing is to offer a variety of cores and let the scheduler handle what to put where. You can have one or two extremely fast cores, half a dozen moderate-speed cores, and dozens or more low-speed cores - why insist that all cores be the same in "general purpose" computing?
Re: (Score:2)
And if they're having a significant reduction in power consumption, then adding more cores gets all the easier.
It's always seemed to me that the best approach to processing is to offer a variety of cores and let the scheduler handle what to put where. You can have one or two extremely fast cores, half a dozen moderate-speed cores, and dozens or more low-speed cores - why insist that all cores be the same in "general purpose" computing?
If the universe follows the usual scaling rules, I would expect the optimum size distribution of CPUs on a chip for general purpose workloads to be logarithmic.
Re: (Score:2)
Correction, there's been no competition for about a decade now (Barcelona flop)
Re: (Score:2)
Ludicrous Speed....GO!!!
Re:Better transistors? (Score:5, Interesting)
> Going beyond 5Ghz limit has been a problem for the last decade or so.
Last decade? Uhm, try the last ~40 years. A close friend of mine worked with the military running GaAs CPUs at ~4.7 GHz in the late '70s. He also worked on GaAs devices operating up to ~100 GHz. Hey, when you have a nearly unlimited tech budget you can do all sorts of things that the commercial sector won't have access to until decades later.
Anyways, the problem with Silicon is that it needs to be < 110 degrees C. In contradistinction, GaAs only needs to be < 175 degrees C.
Hardware designers have known about alternatives for years -- Silicon is just plentiful, dirt cheap, and "good enough." No one wants to pay $100,000 for a 10 GHz GaAs CPU, when you could buy 2,000x Silicon chips instead for the same amount of money.
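Spelling out the economics in that last sentence (the 3 GHz clock for the commodity silicon parts is my assumption):

gaas_cost, gaas_ghz = 100_000, 10    # one 10 GHz GaAs CPU
si_count, si_ghz = 2_000, 3.0        # 2,000 commodity silicon chips
print(gaas_cost / si_count)          # $50 per silicon chip at the same spend
print(si_count * si_ghz / gaas_ghz)  # ~600x the aggregate cycles per second

"Good enough" at three orders of magnitude lower unit cost is hard to beat.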
Re:Better transistors? (Score:5, Interesting)
So the plan to make transistors tolerate higher clock speeds by using better materials is not going to happen?
Yet another restating of Moore's Law? The thing gets revised to whatever the latest growth area is.
The original 1965 article was about "component counts"; then it was revised in a later talk to be "circuit density", revised in 1975 to be "semiconductor complexity", revised in the later '70s to be "circuit and device cleverness", and restated yet again when serial performance flatlined in favor of highly parallel chips.
Assuming this re-tooling goes through, it will likely be restated again in terms of whatever other factor on the chips continues to grow.
Re: (Score:3)
People may have restated it in many silly ways, but what they actually mean is "Computers become twice as good every 18 months or so." Whether it's multiple cores, or faster clock speeds, or better RAM throughput, that's still what it amounts to: twice as good computers.
I think that's pretty much failed, then, for general purpose computers. At one time, I actually used to upgrade about every 18 months, and would see a really nice boost in performance. That's not so much the case anymore, it takes more like 3-4 years.
Re:Better transistors? (Score:4, Interesting)
I'd argue that it's also the case that most computers for the past decade have been ridiculously overpowered for what most average consumers are asking of them. That's partly why the market is moving to mobile. For many common tasks, a tiny mobile computer is still more than enough to do the job just fine. And in the case of Windows, the required minimum specs for the OS haven't jumped nearly as substantially since Windows Vista, as MS focused quite a bit on performance optimization rather than letting things keep bloating up. If you had a reasonably powerful computer that could run Windows Vista when it first came out, you could almost certainly still run Windows 10 on it.
Vista recommended specs:
1-gigahertz (GHz) 32-bit (x86) processor or 1-GHz 64-bit (x64) processor
1 GB of system memory
40-GB hard disk that has 15 GB of free hard disk space
Windows Aero-capable graphics card w/ 128 MB of graphics memory (minimum)
Windows 10 minimum specs:
Processor: 1 gigahertz (GHz) or faster processor or SoC
RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
Hard disk space: 16 GB for 32-bit OS; 20 GB for 64-bit OS
Graphics card: DirectX 9 or later with WDDM 1.0 driver
Note that I'm comparing recommended to minimum specs, but it's still fairly impressive given the time between these two OS releases. In general, I just think there's less market pressure to keep creating faster and faster CPUs.
Re: (Score:2)
It's strictly about the number of transistors on a chip.
This.
Just because clock speeds won't go up much more with silicon technology, it doesn't mean that going from a 2D plane to 3D assemblies (with the associated heat problems, but this "low power" stuff helps with that) won't happen.
It will happen. It's "merely" an engineering and geometry problem rather than a physics problem requiring new science.
--
BMO
Moore's law is actually (Score:2)
About the optimal number of transistors in a SoC vs using many discrete components.
https://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
see in particular the "bathtub graphs"
Re: (Score:3)
Wrong, completely 100% wrong and currently moderated to +5 Insightful.
Moore's Law has always been about performance. Originally there was a direct correlation between the number of transistors and speed, but that's changed and along with it so has the definition of "Moore's Law".
Moore's Law has always been about cost per transistor. While shrinking feature size means you get to fit more components per wafer, density alone is not the only factor. Economies of scale, wafer size increases, and the accumulation of dead labor help to keep Moore's law on track.
The basic idea is a feedback loop between cost per transistor and the affordability of features enabled by having more transistors. They cost less, so everyone can afford to have more. This trend continues forever, or until toasters end up with Internet connections.
Re: (Score:2)
I read that as "slow is the new fast" ... Introducing the new Puntium, now 16% slower.
You laugh now.
Re: (Score:2)
I read that as "slow is the new fast" ... Introducing the new Puntium, now 16% slower.
You laugh now.
Get this new computer which at the push of a button clocks down to 25 MHz for your slow computing needs!
Re: (Score:2)
I have no idea if you're old enough to remember the "Turbo" buttons on late '80s/early '90s machines. With "Turbo" off, they'd run at 8MHz for compatibility mode. With it on, they'd run anywhere from 25MHz to a blazing 66MHz!!!!
ObOldGuy: I once had a machine fail an install of SCO Unix (this was before they were evil) because it was ... [wait for it] ... too fast. There was a spin-delay loop in the Adaptec 1542 driver that failed on a fast machine. I was LMAO when they told me my box was too fast.
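The classic failure mode there, sketched out (constants are hypothetical; the real driver logic was in C/assembly, but the shape is the same):

LOOPS_PER_US = 50  # calibrated for the fastest CPU the author imagined

def delay_us(us):
    # Busy-wait roughly `us` microseconds -- on the CPU this was tuned for.
    for _ in range(us * LOOPS_PER_US):
        pass

# On a CPU twice as fast, delay_us(10) burns only ~5 microseconds, the
# controller isn't ready when the driver resumes, and the install falls over.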
Re:Intel's trolling us (Score:5, Insightful)
Intel's so far ahead of AMD, they have to roll back the clocks in order to stay competitive.
AMD isn't Intel's competition. Intel needs AMD to prevent anti-trust litigation. Intel's competition is ARM and all the OEMs who use ARM-based chips. Especially if Microsoft ports full Windows 10 to the ARM. The big draw of ARM is performance/price per watt, which is exactly what Intel is shooting for.
Re:Intel's trolling us (Score:5, Interesting)
Except that Intel has been a licensor of ARM for a very long time, so even if there was some magical shift to ARM in non-mobile ultra-low-voltage devices, Intel would still be able to apply what they know about advancing the state of the art.
Don't worry about Intel, they'll be just fine.
Re: (Score:3)
I assume you mean licensee.
Re: (Score:2)
I was watching some videos on parallel processing. One quote that I remember was that "cores are the transistors of today". Four decades ago, a CPU like a 6502 would have 3510 transistors and was the cheapest on the market, pulling down prices on all the other competitors. A high-end GPU board like an Nvidia Titan will have 2880+ cores. Going by transistor sizes alone, an entire GPU core will fit inside the space of a single 6502 logic gate. It's going to be easier to add more cores as chip sizes get smaller.
Re: (Score:2)
Imagine an ARM based server farm.
Imagine a Beowulf cluster of ARM based server farms!
Re: (Score:2)
Especially if Microsoft ports full Windows 10 to the ARM
They've been there, done that. The MS ecosystem is built in particular upon x86-compiled applications. Sure, they may have ways to do portable stuff, but the stranglehold of Windows is built around legacy applications.
Re:Intel's trolling us (Score:4, Interesting)
I don't see how in the world *Windows* is going to break into the mobile market. They have been trying for over a decade, repeatedly without success. Particularly now, the Android/iOS landscape seems pretty well cemented. The only hope I could see is Intel getting some hardware makers onboard and that being a platform for MS to push their Continuum concept (yes, it can work with ARM, but back to square one: the vendors of a bunch of my enterprise applications are not about to spend money to dust off the build trees and build for ARM for the fun of it).
MS's mobile strategy is going to have to settle for trying to make money off of iOS and/or Android users/developers. They can (and do) provide hosting, applications, and services. They miss the revenue opportunity of a curated application distribution platform, but I think this is the best they can hope for.
Re: (Score:2)
The issue is that a lot of the applications people need won't bother to update, and many current applications forgo the managed runtime upon which MS's cross-architecture strategy is based.
Sure, the ecosystem could move, but there are now adequate x86_64 implementations in the space at fairly low cost.
MS's safer bet is to encourage an x86-centric market. Sure, keep the ARM port viable and encourage cross-architecture work as a matter of course for as many developers as they can, to hedge their bets, but backwards compatibility is still their strongest card.
Re:Intel's trolling us (Score:5, Interesting)
Exactly. Apple kept a secret x86 / x64 version of Mac OS X in the closet for 5 years as a hedge against IBM screwing them over on PowerPC. Turns out to be one of the best decisions that they ever made.
Re: (Score:3)
I would have thought these days for 'Continuum' it's just a checkbox in their IDE to target a different processor architecture, with compiler warnings as to why this C code is non-portable.
[X] i686
[X] amd64
[X] arm v7
[X] arm v8
(I guess I should give Visual Studio a download instead of making uninformed comments!)
Re: (Score:2)
They could've easily provided tools to let you port WIN32 code to ARM. They didn't want to. Instead they wanted to move to an app store model (just like Apple, duh) based around Metro stuff. Didn't work, and maybe they're kicking themselves now. Or maybe not - there may have been compelling reasons not to support WIN32 code on ARM - but in any case, that's why RT failed.
Re: (Score:3)
At this point I think they are looking for business models that are more annuity-like, with recurring revenue. A transactional purchase of an OS is becoming increasingly less interesting because there are fewer upgrade cycles. So for them, the strategy was "app store or bust!".
Re: (Score:2)
> The big draw of ARM is performance/price per watt which is exactly what Intel is shooting for.
Indeed. Here is an example of interesting hardware:
Parallella: The Most Energy Efficient Supercomputer on the Planet [youtube.com]
Re: (Score:3)
No, because if they did that they would hold a monopoly on desktop / laptop CPUs. Then they would be regulated as a monopoly, and could no longer get away with their abusive business practices.
Intel's biggest competitor: Intel (Score:2)
It's not AMD. Ever since multi-core started, all Intel had to do was toss in more cores after optimizing a single core for a given process. Since none of the commonly used applications are even adequately parallel (most may at best make good use of 2 cores), Intel is unable to DISPLACE the recent Core CPUs its customers already own. On the software side of things, Microsoft can force people to Windows 10, but Intel can't force people to, say, go from i3 to i5.
This speed drop is fine if it increases battery life.
Re: (Score:2)
Not quite true. ARM is by far the biggest threat for Intel, which is why they want to go slower and be more energy efficient.
I think their messaging is bad: they should advertise their improvement in energy efficiency over ARM, but at least pretend they're as fast as before (or even faster in some areas).
Re: (Score:2)
Did your mother ever have you tested?
Re: (Score:2)
because like omg mobile! (and server racks).
Re:Shoddy excuse... (Score:4, Informative)
Except they have, in terms of work done per clock (even ignoring multicore). A 1.2 GHz Haswell can achieve the same sort of results as a 3.0 GHz AMD core from 5 years ago on a balanced set of CPU-constrained work. It actually comes out ahead in a number of specific workloads. Note I'm comparing to a core significantly older, with less cache, for the sake of demonstrating only the senselessness of being fixated on clock, not saying this is a fair Intel v. AMD comparison.
On the other hand, a 1.2 GHz AMD K7 back in the day could beat a 3.0 GHz Pentium 4 of the same era. There's a lot more to processor performance than clock speed.
Re: (Score:2)
More efficient means less heat which means smaller and quieter devices, so not necessarily meaningless at home.
Re: (Score:2)
Just wanted to say "Thanks!" for the informative video!