
Intel Says Chips To Become Slower But More Energy Efficient (thestack.com)

An anonymous reader writes: William Holt, Executive Vice President and General Manager of Intel's Technology and Manufacturing Group, has said at a conference that chips will become slower after the industry re-tools for new technologies such as spintronics and tunneling transistors. "The best pure technology improvements we can make will bring improvements in power consumption but will reduce speed." If true, it's not just the end of Moore's Law, but a rolling back of the progress it made over the last fifty years.
  • by WilliamGeorge ( 816305 ) on Friday February 05, 2016 @02:09PM (#51448431)

    Hopefully, if this does happen, they will keep making the existing products, at least until they *do* manage performance improvements that catch up with or exceed the older stuff. Where I work we have lots of customers that *need* more processing power, and efficiency be damned.

    • by Fire_Wraith ( 1460385 ) on Friday February 05, 2016 @02:12PM (#51448471)
      I can't imagine that there will simply be zero demand for fast, or faster, chips, regardless of the power efficiency. Some applications just demand it. If Intel won't do it, then someone else will, whether that's AMD or some new competitor in China or wherever.

      On the other hand, there's certainly a market for more efficiency, especially in mobile devices, so I can certainly see lines of chips designed for that heading in the way described.
    • by ranton ( 36917 )

      Where I work we have lots of customers that *need* more processing power, and efficiency be damned.

      I assume most customers who need extreme processing power have learned over the past 10 years that faster individual processors are not coming. Algorithm design plus parallel processors is going to be the source of perhaps all performance increases in the foreseeable future. Until we move away from silicon that is.

      Are there even supercomputers out there which have faster processors than the fastest Xeon processors out there? I may be wrong, but I believe there really hasn't been any non-parallel based performance increases for a long time.

      • Where I work we have lots of customers that *need* more processing power, and efficiency be damned.

        I assume most customers who need extreme processing power have learned over the past 10 years that faster individual processors are not coming. Algorithm design plus parallel processors is going to be the source of perhaps all performance increases in the foreseeable future. Until we move away from silicon that is.

        Are there even supercomputers out there which have faster processors than the fastest Xeon processors out there? I may be wrong, but I believe there really hasn't been any non-parallel based performance increases for a long time.

        Yes there have been, but more through architectural changes - new instructions, new modes, bigger and better caches, improved offload models, task-specific hardware (like crypto, packet moving, etc.). This has been enabled by the increasing number of transistors on die and is driven by the mobile and server markets, which have evolving and quite different needs. Xeons today do more work per second than Xeons in the past, and the scaling is greater than the scaling of individual CPU core performance.

        • by Anonymous Coward on Friday February 05, 2016 @06:02PM (#51450225)

          Of course, the fundamental problem this presents is that it does *not* automatically result in improved performance.

          Architectural changes require that performance code be tuned or re-tuned, which means every at-scale application has to be somewhere between rejiggered and given a huge dedicated rewrite effort (the DOE's upcoming 300-petaflop GPU machine will have exactly ten applications that can run at full scale, each of which will have an entire dedicated team rewriting it to do so). And, of course, Amdahl's Law puts an ironclad limit on the effect that more parallel hardware can have on performance, and some problems simply cannot be parallelized no matter how much we wish otherwise (see the sketch after this comment).

          Contrast with the effect of improving the serial performance of hardware: All else being equal, double the CPU and memory clock rates and absolutely every program will run twice as fast, full stop. That was the desktop miracle from 1990 to 2003 or so - the same exact code screamed twice as fast every year.

          But as processors trend towards slower and wider, everything becomes an exercise in parallel programming. OpenMP parallel, MPI parallel, SSE SIMD instructions, GPU SIMD parallel... It's harder to do at all, and harder yet to do *right*, and historically the average programmer has enough trouble working with a runtime that's sequentially consistent.

          Rant aside though, I agree you're right - until we move to diamond substrates & heatsinks, we've hit the thermal brick wall (actually we hit it circa 2003) and there will not be any further increases in serial processing speed. Plus, AFAICT, there's a similar brick wall with access rates to DRAM and the fact that it requires a microwave-frequency bus with literally hundreds of pins extending for entire centimeters... so forget that too.
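          A rough illustration of the Amdahl's Law limit mentioned above, as a hypothetical C sketch (the function name and the sample numbers are made up for the example, not taken from the thread):

          ```c
          #include <stdio.h>

          /* Amdahl's Law: with a fraction p of the work parallelizable and n
           * workers, speedup = 1 / ((1 - p) + p / n). */
          static double amdahl_speedup(double p, double n)
          {
              return 1.0 / ((1.0 - p) + p / n);
          }

          int main(void)
          {
              const double fractions[] = { 0.50, 0.90, 0.95, 0.99 };
              const int    cores[]     = { 4, 16, 64, 1024 };

              for (int i = 0; i < 4; i++)
                  for (int j = 0; j < 4; j++)
                      printf("p=%.2f  n=%4d  ->  %6.2fx\n",
                             fractions[i], cores[j],
                             amdahl_speedup(fractions[i], (double)cores[j]));
              return 0;
          }
          ```

          Even with 95% of the work parallelizable, 1024 cores give only about a 20x speedup; the serial 5% dominates, which is why wider-but-slower hardware cannot rescue every workload.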

    • If Intel sticks to what they've done in the last few product generations, they'll still have higher-wattage higher-performance chips at the upper end for servers and workstations. But the ULV parts have been staying at basically the same performance now for a few years, with drastically reduced energy use. I think the current parts are under 4 watts for the same performance you used to have to spend 18 watts to get.

    • Here's the thing though. Even if chips remain equally powerful or 10% slower... if they could fit a 40-core Xeon into a 10-watt Atom power profile, that would be a MASSIVE performance increase in mobiles. I'm relatively satisfied with CPU performance these days with a dual Xeon. If it meant I could get a current workstation in a mobile form, great! However I'm assuming that GPUs do keep improving and we finally see openings for specialized chips for physics and raytracing--the last two areas that would re

  • by transami ( 202700 ) on Friday February 05, 2016 @02:20PM (#51448529) Homepage

    Optical [eetimes.com]

  • by Anonymous Coward on Friday February 05, 2016 @02:20PM (#51448533)

    A flight from London to New York takes as long today as it did about 50 years ago. But the current planes achieve that more efficiently, with slightly larger windows, and some more pressure and humidity in the cabin. How depressing to think that the computing world might be about to enter a similarly dismal stage as well.

    • by bugs2squash ( 1132591 ) on Friday February 05, 2016 @02:25PM (#51448575)
      To be fair, for a while in the middle of those last 50 years you could do it in a couple of hours.
    • Concorde was too expensive to succeed in the market - even people who could afford it and were flying routes that Concorde also flew just didn't care enough about the time to spend the money. If you could get them there at trans-sonic speeds, 60 minutes NY-London for Concorde prices, then you might have gotten more interest, but the difference between 6 or 7 hours and 3 or 4 just wasn't enough to tempt enough people - with pre-boarding prep and ground transit, either option basically consumed a full day.

      To

    • But a transatlantic flight was something like 10 times more expensive back then (adjusting for inflation). Air travel has been improving consistently over time.
  • by BoRegardless ( 721219 ) on Friday February 05, 2016 @02:30PM (#51448647)

    Lots of ways to get "speed."

    • Slower speed is slower speed; it doesn't matter how many cores you have. It's still the frequency rating that counts. Talk to AMD. Bottom line: unless programs take advantage of multiple cores - and they don't - you want faster frequencies, not more cores.
      • Bottom line: unless programs take advantage of multiple cores - and they don't - you want faster frequencies, not more cores.

        Or more instructions per clock... or perhaps it's time to start getting serious about clockless designs.

        But, in reality, I think nearly all of the software that really needs lots of power has been parallelized for a few years now. For a couple of decades supercomputers have been all about massive numbers of cores on fast interconnects. The largest computations are done on tens to hundreds of thousands of multi-core computers -- the "computer" is an enormous data center. On the desktop, CPUs are more than
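        The parent's "more instructions per clock" point can be written as the textbook iron-law identity (a standard formula, not something quoted from the thread):

        \[
          \text{execution time} \;=\; \frac{\text{instructions}}{\text{program}} \times \text{CPI} \times \text{clock period},
        \]

        so cutting CPI (i.e., raising IPC) buys exactly as much as shortening the clock period, without touching the frequency.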

        • It really depends what you're trying to do. Amdahl's law will always apply. A single fast core will always be more versatile than multiple slower cores because the fast single core can perform equally well in sequential and parallel algorithms. Of course, we will always have to compromise and make do with what we can physically make.

          If each step in your algorithm depends on every single preceding step then you will probably want the fastest single core you can get your hands on within a reasonable budget in
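          A minimal sketch of the fully sequential case described above (hypothetical C, made up for illustration): every iteration consumes the previous result, so extra cores cannot help and only a faster core shortens the wall-clock time.

          ```c
          #include <stdio.h>

          int main(void)
          {
              /* Loop-carried dependency: iteration i needs the result of
               * iteration i-1, so the chain cannot be split across cores. */
              double x = 0.5;
              for (long i = 0; i < 100000000L; i++)
                  x = 3.9 * x * (1.0 - x);   /* logistic-map step */
              printf("%f\n", x);
              return 0;
          }
          ```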

    • Give me double clock-speed over a second core any time.
  • "...If true, it's not just the end of Moore's Law, but a rolling back of the progress it made over the last fifty years."

    While Moore's Law was fun to watch and experience while it lasted, calling this a rollback is a pretty narrow way of defining progress.

    It's kind of like trying to define the transition from the gas-guzzling muscle car era to the fuel-efficient compact car era as rolling back progress.

    Whatever the finite resource in question, there are plenty of good reasons for humans to be consuming less of it.

    • It's kind of like trying to define the transition from the gas-guzzling muscle car era to the fuel-efficient compact car era as rolling back progress.

      You can find lots of car enthusiasts who won't hesitate to tell you it was exactly that.

  • by Joe_Dragon ( 2206452 ) on Friday February 05, 2016 @02:37PM (#51448709)

    With AMD out of the way Intel can F*** us.

    First they cut the PCIe lanes down on a $300-$350+ chip, forcing you to pay up to $350-$400, and then you need to jump to $500-$600 to get the same as last gen plus a small clock-speed boost. This is on the server / high-end workstation side.

    On the desktop side they are still on DMI (now at PCIe 3.0) plus 16 PCIe 3.0 lanes. Why is there no QPI link to the chipset, like AMD's HTX?

  • Efficiency is good, no doubt. But the electricity to run your computer, tablet, or phone, vs. the rest of your house, is comparatively very little. It's almost trivial, even... except for those mobile devices that are dependent on a battery. And the sloth and complacency of the battery manufacturers vs. the tech industry is what's holding us back. If they were investing in the R&D to keep up with Intel and Moore's law... doubling their capacity every 18 months as well... performance compromises li

    • And the sloth and complacency of the battery manufacturers vs. the tech industry is what's holding us back. If they were investing in the R&D to keep up with Intel and Moore's law...

      And how many trillions would this cost? There are actually massive investments in battery technology. We've come a long way in the last 20 years. But consider: they're figuring out that we had batteries way back in BC times. The Greeks had them, sort of; they think they were used for electroplating stuff.

      But they started entering common use in the 19th century. We've put a huge amount of development work into them. But batteries, it turns out, run into physical laws much quicker than the 'completely

    • Educate yourself on battery technology, then post. Long-lasting batteries have been the holy grail for just about every application. Research takes time.
    • The problem is that programmers have gotten lazy (excuse me: "man-power efficient") off of the free speed we've been adding over all of these years. Layers upon layers of abstraction from machine code have made it possible to code in languages which are far removed from the actual code that runs on machines. There may now come a time when efficiency of programming matters to everyone, not just the embedded folks.

  • Server 2016 is going to per-core licensing, which means fewer cores, overclocked.

  • the software side has been storing up efficiency improvements for a long time. Just get rid of the extras, like bloatware, and hastily programmed apps, and nobody will notice.
  • What I don't understand is why last-generation parts are not dropping in price. For the longest time, whenever new stuff came out, the prices of older stuff dropped. But that doesn't seem to happen anymore.

    What's up with that?

    • by AHuxley ( 892839 )
      Generational investors don't like money left on the table. The trapped consumers are used to paying a set amount for any CPU "chip" with the branding, so why drop prices?
      This is great news for any disruptive new products, if all the brand can now sell is slower chips that offer energy savings.
  • by wevets ( 939468 ) on Friday February 05, 2016 @03:54PM (#51449325)
    Contrary to popular belief, Moore's Law doesn't say that processors will double in speed every 18~24 months. It says that the number of transistors that can economically be put on a single chip will double every 18~24 months. Up until recently, that has translated into a doubling of speed for two reasons: 1) more transistors can be used to optimize the processing of instructions through a variety of techniques, and 2) the distances signals have to travel are lessened as the transistors shrink. More transistors contribute not only to power consumption but also to more heat, which is another problem with high-performance processors. This was partially dealt with by putting multiple cores on a die running at less than max clock rates, thereby distributing the heat and making it easier to deal with. It still may be economical to put more and more transistors on a die, but maybe we don't want to. More transistors consume more power. What's your priority, raw speed or power consumption? Maybe you can't optimize for both at the same time.
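    Stated as a formula (a paraphrase of the point above, with symbols introduced here for illustration):

    \[
      N(t) \;\approx\; N_0 \cdot 2^{\,t/T}, \qquad T \approx 18\text{--}24\ \text{months},
    \]

    where N is the number of transistors per chip, not the clock rate.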
  • The Cortex-A series of chips appears to be catching Intel's CISC parts in some of the raw compute numbers on a per-core basis. Will this possibly rekindle the RISC vs. CISC battles of the 90s?

  • Umm, that's not rolling back. It's a tradeoff.

  • As far as I can tell, what they are saying is that during the transition to new technologies there will be a period where performance does not improve and even falls back a little... which is to be expected. As the new technology matures, the march will resume and performance will improve again. In other words: if power consumption is important to you, you will make the leap to the new technology first. If performance is important, you will stay with existing tech
  • What about people that care more about performance (per thread) than power consumption? Will we be stuck on old technology?

  • If I can cram more cores in a tighter space with less heat and power consumption then I'll call that a performance boost. Bring on the 24 core i5s :)
  • My dream is to some day have my computer waiting on me. Unlike today where I am constantly waiting on my computer... even with the fastest CPU, video card, SSDs in RAID, 16 gigs of RAM, a RAM disk for the swap file... I still find myself waiting.
  • by Kjella ( 173770 ) on Friday February 05, 2016 @07:08PM (#51450595) Homepage

    You can have strong AI in ~20W, because that's what our brain uses. Each neuron is really, really slow, like 100 Hz and below, but when you have absurdly many of them, it works. The problem is understanding the programming model, because it's nothing like our single ordered list of instructions.
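    A back-of-envelope version of that argument (a hypothetical C sketch; the neuron count is a commonly cited outside estimate, not a figure from the comment above):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Rough numbers only; the neuron count is an outside estimate. */
        const double neurons     = 8.6e10;  /* ~86 billion neurons        */
        const double rate_hz     = 100.0;   /* upper bound from the post  */
        const double power_watts = 20.0;    /* the ~20 W brain figure     */

        double events_per_sec   = neurons * rate_hz;           /* ~8.6e12 */
        double events_per_joule = events_per_sec / power_watts;

        printf("~%.1e firing events/s, ~%.1e events per joule\n",
               events_per_sec, events_per_joule);
        return 0;
    }
    ```

    Lots of very slow units working in parallel still add up to an enormous event rate per watt, which is why the hard part is the programming model rather than the power budget.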
