Upgrades Technology

Can Our Computers Continue To Get Smaller and More Powerful? 151

aarondubrow (1866212) writes: In a [note, paywalled] review article in this week's issue of the journal Nature (described in a National Science Foundation press release), Igor Markov of the University of Michigan/Google reviews limiting factors in the development of computing systems to help determine what is achievable, in principle and in practice, using today's and emerging technologies. "Understanding these important limits," says Markov, "will help us to bet on the right new techniques and technologies." Ars Technica does a great job of expanding on the various limitations that Markov describes, and the ways in which engineering can push back against them.
This discussion has been archived. No new comments can be posted.

Can Our Computers Continue To Get Smaller and More Powerful?

  • Obvious (Score:5, Insightful)

    by Russ1642 ( 1087959 ) on Thursday August 14, 2014 @04:48PM (#47673385)

    Yes. Next question please.

    • by Anonymous Coward

      C-C-C-Combo Breaker! In your face Betteridge!

    • More powerful, perhaps. Smaller? Maybe not. We're already at the point where we can have watch-sized displays and full keyboards on our phones. The limiting factor is going to be 1) displays that are small but still readable and 2) input devices that aren't too tiny for human-sized fingers. As far as smart phones go (which, in essence, are tiny computers), I don't see them becoming much smaller due to these factors. However, I'm sure something completely innovative will come along that will make us lo

    • Re: (Score:3, Insightful)

      by bobbied ( 2522392 )

      Actually, the answer is no and that is obvious. Eventually we are going to run into limits driven by the size of atoms (and are in fact already there).

      Once you get a logic gate under a few atoms wide, there is no more room to make things smaller. No more room to make them work on less power. We will have reached the physical limits, at least in the realm of our current lithographic doping processes. We are just about there.

      This is not to say there won't be continued advances. They are going to get more a

What's obvious is that we can continue to get smaller and more powerful than what we have already. Do you doubt that in a year's time, let alone five, computers will be smaller, more powerful, and consume less energy? And then there are mobile devices, which have a LONG way to go, especially in regards to batteries. Thinking that we've already reached the limits of speed and size is laughable. It really is up there with the "shut down the patent office because everything has been invented" attitude.

        • Re:Obvious (Score:5, Insightful)

          by bobbied ( 2522392 ) on Thursday August 14, 2014 @06:00PM (#47673709)

If you read my comment.... I'm saying that we are very close to hitting the physical limits. In the past, the limits were set by the manufacturing process, but now we are becoming limited by the material, the size of the silicon atoms themselves.

There is basically only one way to reduce the current/power consumption of a device: make it smaller. A smaller logic gate takes less energy to switch states. We are rapidly approaching the size limits of the actual logic gates and are now doing gates measured in hundreds of atoms wide. You are not going to get that much smaller than a few hundred atoms wide. Which means the primary means of reducing power consumption is reaching its physical limits. Producing gates that small also requires some seriously exacting lithography and doping processes, and we are just coming up the yield curve on some of these, so there is improvement still to come, but we are *almost* there now.

          There are still possible power reducing technologies which remain to be fully developed, but they are theoretically not going to get us all that much more, or we'd have already been pushing them harder. So basic silicon technology is going to hit the physical limits of the material pretty soon.
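
For a rough sense of the numbers involved, here is a back-of-envelope sketch in Python comparing the switching energy of a CMOS gate against the Landauer limit (the thermodynamic floor for erasing one bit). The capacitance and supply voltage are illustrative assumptions, not figures from the thread:

```python
import math

# Energy to switch a CMOS gate is roughly 1/2 * C * V^2; shrinking the gate
# lowers C (and allows lower V), which is why smaller has meant lower power.
# The capacitance and voltage below are illustrative assumptions only.
k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                                # room temperature, K
landauer_limit = k_B * T * math.log(2)   # ~2.9e-21 J to erase one bit

C_gate = 0.1e-15                         # ~0.1 fF switched capacitance (assumed)
V_dd = 0.8                               # supply voltage, volts (assumed)
E_switch = 0.5 * C_gate * V_dd ** 2

print(f"Landauer limit:       {landauer_limit:.2e} J per bit")
print(f"Assumed CMOS switch:  {E_switch:.2e} J per transition")
print(f"Headroom above floor: {E_switch / landauer_limit:,.0f}x")
```

Even with generous assumptions the conventional switch sits orders of magnitude above the thermodynamic floor, but every further step down requires shrinking C or V, which is exactly what the atomic-scale limits cut off.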

          • I think the greatest speed limitation now is our "computing dimensions" -- we are still using binary logic in the computer. For instance, if we moved to optical computing -- sure the structures would get larger, and there are density issues, but if you can create a binary logic gate for each color, your "dimension" of computing is limited only by the frequencies you can discern. You add massive parallelism.

Now if we can move away from binary logic at the same time, more computing work can get done per CPU cycle.
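
As a toy illustration of the "one logic channel per color" idea, here is a sketch that just counts how many independent wavelength channels fit in an optical band. The band width and channel spacing are assumptions in the spirit of the ITU C-band DWDM grid, not anything from the post:

```python
# Count independent "colors" (wavelength channels) in an optical band.
# Band width and spacing are rough assumptions (ITU C-band style grid).
band_width_hz = 4.4e12        # ~4.4 THz of usable C-band (assumed)
channel_spacing_hz = 50e9     # 50 GHz per channel (assumed)

channels = int(band_width_hz // channel_spacing_hz)
print(f"Independent wavelength channels: {channels}")
# Each channel could, in principle, carry its own logic stream --
# the "massive parallelism" the comment above is gesturing at.
```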

      • by tlhIngan ( 30335 )

        The question is, will they have to?

I mean, back when the original iPhone was released and people were releasing ever-tinier cellphones, it made sense. But given that cellphones are getting bigger and bigger, the pressure to make smaller and smaller SoCs is decreasing.

        I mean, 3.5" was ginormous before. Now we have people buying phones with 6" screens and large, the amount of size reduction needed is practically nil.

        • by sjames ( 1099 )

          I wonder if it will be like portable music. We started with big heavy tube radios. Then they started shrinking until you could put one on the kitchen table. Next, the tinny sounding AM transistor radio. They got a bit bigger after that, but were in stereo and featured 8-track, cassette, and CD with respectable speakers. Then we saw monster 'boom boxes' with wheels and handles and Christmas lights in the speaker grilles (I think it might have had a black and white TV in there somewhere too). I'm pretty sure

      • Re:Obvious (Score:4, Insightful)

        by Beck_Neard ( 3612467 ) on Thursday August 14, 2014 @06:03PM (#47673721)

        We're eventually going to hit limits, but there's no reason to think that that limit is a logic gate a few atoms wide. There's isentropic computing, spintronics, neuromorphic computing, and further down the road, stuff like quantum computing.

      • by AmiMoJo ( 196126 ) *

        We can move a lot of processing off to servers now that we have a fast, cheap and ubiquitous network. That will allow our devices to be smaller and use the resources of a larger server somewhere else.

        • by Yunzil ( 181064 )

          now that we have a fast, cheap and ubiquitous network.

          We do?

          • by AmiMoJo ( 196126 ) *

            Well, some of us do, others are catching up. The UK is currently about 14 years behind the curve, for example.

        • We can move a lot of processing off to servers now that we have a fast, cheap and ubiquitous network. That will allow our devices to be smaller and use the resources of a larger server somewhere else.

          You have a point, sort of. We are already doing this. However, apart from the display and CPU resources (in that order) the third largest power consumer in a cell phone is running the radios. When you start transferring data at high rates, it takes a lot of power. Given the normal distances between the phone and the cell tower, we are just about at the physical limits on this too. It just takes X amount of RF to get your signal over the link and there is not much you can do w/o violating the laws of phy
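
A hedged sketch of the "it takes X amount of RF to close the link" point, using the textbook free-space path loss formula; the distance, carrier frequency, and receiver sensitivity are made-up illustrative values, and real cellular links add fading, antenna gains, and coding on top:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative numbers only: 2 km to the tower on a 1.9 GHz carrier.
loss_db = free_space_path_loss_db(2_000, 1.9e9)
rx_sensitivity_dbm = -100                        # assumed receiver sensitivity
required_tx_dbm = rx_sensitivity_dbm + loss_db   # ignores antenna gains/margins

print(f"Path loss:         {loss_db:.1f} dB")
print(f"Required TX power: {required_tx_dbm:.1f} dBm (before gains and margins)")
```

The point being that the transmit power is dictated by distance and physics, not by how clever the handset's silicon is.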

I believe that we can get things smaller. I'll agree that we're approaching the limits as regards what is basically a 2-dimensional layout that we're currently using for chips, but that leaves the 3rd dimension. Of course there are a lot of technical issues to overcome, but I believe that they will be overcome.

I don't think going 3D is going to fix the power density problem. You still have to get the heat generated out of the die and keep the device within its operational temperature range. Stacking things in 3D only makes this job harder, along with the question of how you interconnect stuff on multiple layers.

          Could we develop technologies to make 3D happen? Sure, we actually are already doing this, albeit in very specific cases. But there are multiple technical issues with trying to dope areas in 3D. You can do it,
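
The power-density objection is just arithmetic; a minimal sketch with assumed numbers shows why stacking layers makes the cooling problem worse even if each die is unchanged:

```python
# Stacking dies multiplies the watts per unit of footprint area, but the
# heat still has to leave through roughly the same package surface.
# Both figures below are assumptions chosen purely for illustration.
die_power_w = 50.0        # power dissipated per layer (assumed)
footprint_cm2 = 1.0       # die footprint (assumed)

for layers in (1, 2, 4, 8):
    density = layers * die_power_w / footprint_cm2
    print(f"{layers} layer(s): {density:6.1f} W/cm^2 through the same footprint")
```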

I really hope computers stop getting more powerful, because the trend in the last few years has been for software bloat to use up the added capacity, and now computers are getting more powerful but less useful.

      • by Guppy ( 12314 )

        Actually, the answer is no and that is obvious. Eventually we are going to run into limits driven by the size of atoms (and are in fact already there).

        No problem with atomic size limits, let me just whip out my handy quark notcher!

  • by mythosaz ( 572040 ) on Thursday August 14, 2014 @04:52PM (#47673411)

    Even if the electronics fail to get much smaller, there's plenty of room to be had in batteries, screens, and the physical casings of our handheld devices.

    • Even if the electronics fail to get much smaller, there's plenty of room to be had in batteries, screens, and the physical casings of our handheld devices.

      At first glance, I read this as "Even if our electrons fail to get much smaller," and, for a second, I thought, "Whoa. Are people working on that?" Guess I gotta get my eyeglass prescription checked.

  • by Anonymous Coward

    We're running up against physical limitations but "3d" possibilities will take our 2d processes and literally add computing volume in a new dimension.

    So of course it's going to continue, the only question is one of rate divided by cost/benefit.

  • by raymorris ( 2726007 ) on Thursday August 14, 2014 @04:59PM (#47673463) Journal

Betteridge's law says no.
    Moore's law says yes.

In the battle of the eponymous laws, which law rules supreme? Find out in this week's epic TFA.

three decades in the industry and I've never seen performance measured or stated in MHz. At various times MIPS (and referencing a specific architecture, e.g. VAX MIPS or Mainframe MIPS) or MFLOPS might have been used, but never clock speed alone. Then as now, other benchmarks were also used.

    • three decades in the industry and I've never seen performance measured or stated in MHz.

      Did someone do that in any of the linked articles?

Yes, it was the first sentence of John Timmer's Ars article that set me off: "When I first started reading Ars Technica, performance of a processor was measured in megahertz"

    • by vux984 ( 928602 ) on Thursday August 14, 2014 @05:40PM (#47673555)

      three decades in the industry and I've never seen performance measured or stated in MHz

Erm... from the 80286 through the Pentium 3, CPU clock speed was pretty much THE proxy stat for "PC performance".

      • by Misagon ( 1135 )

        I can't tell if you are being sarcastic or not...

        What you say is true only if you bought all your processors from Intel.

Once AMD came along, it was not entirely true if you compared to them. It was not true if you compared to Macs that used the 680x0 and later PowerPC.

        • by vux984 ( 928602 )

          What you say is true only if you bought all your processors from Intel.

          You say that like this wasn't common as dirt for most of a decade or so.

          Once AMD came along

          Yeah, that was mostly later. Pentium 4 vs Athlon XP etc. My suggested time frame ended with the Pentium III for a reason.

It was not true if you compared to Macs that used the 680x0 and later PowerPC.

          Also true, but comparatively few did that. Choosing a Mac vs a PC rarely had anything to do with performance. It was entirely about OS+applications; then o

Actually, back in the 386/486 days... YES you did compare AMD and Intel by MHz... in FACT that was one of AMD's big sellers... Intel's fastest 386 ran at 33MHz, AMD's? 40MHz..
486 - Intel had 33MHz (66 and 100MHz for the DX2/DX4),
AMD had 40MHz (80 and 120MHz respectively)

they were famous for exploiting the MHz = speed myth... that was the first fall of AMD from grace. Following that, with the K5 and K6 processors, they wouldn't get back into the mainstream until the Athlon, which also competed on the MHz scale...

Marketing and sales to ignorant consumers don't count. The "MHz Myth" has time and again been a subject in many a PC magazine.

More meaningful benchmarks existed long before that era (e.g. Whetstone from the early 70s), and many (e.g. Dhrystone in the mid 80s) were used all through the rise of the microprocessor (8080, 6502, etc.)

        • by vux984 ( 928602 ) on Thursday August 14, 2014 @06:55PM (#47674059)

          Marketing and sales to ignorant consumers don't count.

Originally it was useful enough. Marketing and sales perpetuated it long after it wasn't anymore.

          The "MHz Myth" has been time and again a subject in many a PC magazines

Only once the truth had become myth. The MHz "myth" only existed because it was sufficiently useful and accurate to compare Intel CPUs by MHz within a generation, and even within limits from generation to generation, for some 8 generations.

          It wasn't really until Pentium 4 that MHz lost its usefulness. The Pentium 4 clocked at 1.4GHz was only about as fast as a P3 1000 or something; and AMD's Athlon XP series came out and for the first time in a decade MHz was next to useless. Prior to that, however, it was a very useful proxy for performance.

More meaningful benchmarks existed long before that era (e.g. Whetstone from the early 70s), and many (e.g. Dhrystone in the mid 80s) were used all through the rise of the microprocessor (8080, 6502, etc.)

Sure they did. But for about a decade or so, if you wanted a PC, CPU + MHz was nearly all you really needed to know.

But there were ALWAYS alternatives to Intel processors, even for personal computers (e.g. Motorola), from day one of the personal computer movement, and so the Megahertz Myth was always meaningless. My home computer in 1991 had a Motorola chip (NeXTStation); in 1996 it had a Sparc chip.

and if anyone is interested, in 1976 I had a SWTP 6800

            • by vux984 ( 928602 )

But there were ALWAYS alternatives to Intel processors, even for personal computers (e.g. Motorola), from day one of the personal computer movement, and so the Megahertz Myth was always meaningless.

              Only if you cared about comparison with non-intel PCs. People buying Macs weren't worried about performance comparisons with PCs, they were only concerned about performance compared to OTHER macs. The (much larger) DOS/Windows PC crowd only cared about performance relative to other intels.

              My home computer in 1991 ha

              • but what of the 80486 doing about 80% of the MIPS of the clock frequency, while 386 only 33% and the Pentium I did 150% (e.g. 75MHz == 125 million x86 MIPS) ?

Some would argue a Mac running Mac OS X on a Motorola chip is a next-gen NeXT, and a LOT of those sold.

                Sun was selling 50,000 sparc workstations per quarter in 1992.
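
Taking the per-generation factors quoted a couple of comments up at face value, the arithmetic under dispute is just MIPS ≈ (instructions per clock) × MHz, which is why clock alone only worked as a proxy within a generation. A quick sketch:

```python
# MIPS ~= instructions-per-clock factor * MHz, using the rough factors the
# commenter above quotes (their estimates, not measured benchmarks).
chips = [
    ("386",     0.33,  33),
    ("486 DX4", 0.80, 100),
    ("Pentium", 1.50,  75),
]

for name, ipc, mhz in chips:
    print(f"{name:>8} @ {mhz:3d} MHz -> ~{ipc * mhz:5.1f} MIPS")
# Under these factors a 75 MHz Pentium out-runs a 100 MHz 486 despite the
# lower clock -- the heart of the "MHz myth" argument.
```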

                • by vux984 ( 928602 )

                  but what of the 80486 doing about 80% of the MIPS of the clock frequency, while 386 only 33% and the Pentium I did 150% (e.g. 75MHz == 125 million x86 MIPS) ?

                  What about it? That just serves to further amplify the improvement from CPU generation to CPU generation.

Some would argue a Mac running Mac OS X on a Motorola chip is a next-gen NeXT, and a LOT of those sold.

Perhaps, but they weren't selling them to people who were basing their purchasing decisions on their performance relative to DOS/Windows PCs.

                  There wa

200K is one percent of 20M, and that 20M wasn't from a single vendor, as 1992 was the year everyone and their uncle jumped into the PC market as prices plummeted

Did you know Apple was considered part of the PC market in 1992, and had a whopping 19 percent share? That wasn't an Intel platform.

                    • by vux984 ( 928602 )

Did you know Apple was considered part of the PC market in 1992, and had a whopping 19 percent share?

It's entirely beside the point. Virtually nobody was comparing Apples to Intels to Sparcs based on benchmarks to make a buying decision.

                      The decision to buy Apple or Intel or Sparc was made based on OTHER factors (software availability, features, etc), and THEN a buying decision within the chosen platform was made based on price/performance etc.

                      If the platform chosen was intel, then MHz was the primary performa

Nothing you have said reinforces your mistaken notion that MHz ever measured performance. I've already shown that is not true even between Intel processors. You only believe an urban legend, a myth, a falsehood was true. Those of us who did measure the performance of machines over the past four decades used benchmarks.

                    • by vux984 ( 928602 )

                      You only believe an urban legend, a myth, a falsehood was true.

Give me a break. Everybody buying computers at the time used MHz as a proxy for performance.

Those of us who did measure the performance of machines over the past four decades used benchmarks.

                      I'm sure you did. I remember the benchmarking tools too. I know anyone professionally measuring performance used them.

                      But the majority of the buying public, and a great deal of corporate/business/enterprise/educational buyers too made all their decisio

Actually, I would say that MHz lost its usefulness in the x86 world long before the P4 came out. More like the (original) Pentium era, when Cyrix and AMD started selling chips with the "PR" rating. Of course, the PR thing was even more meaningless, as a 150MHz Cyrix chip might perform like a Pentium 200 when it came to integer performance (hence "PR200+"), but was more like a Pentium 90 when it came to FPU performance.

The PC world definitely was, as the parent said. I recall those desktops with a 'turbo' button that, when pressed, would double the speed (shown on a 7-segment LED) from 16 to 32MHz....
    • If you'll allow a few more years, my first TRS-80 had a 1.77MHz Z80, my second had about a 3.2MHz Z80A, and my third had a blisteringly fast 4MHz Z80A.

  • by Anonymous Coward

    Get the original article here: Fuck paywalls [libgen.org]

  • by uCallHimDrJ0NES ( 2546640 ) on Thursday August 14, 2014 @05:42PM (#47673563)

    Next you'll be telling me they'll let us run unsigned code on processors capable of doing so. You need to get onboard, citizens. All fast processing is to occur in monitored silos. Slow processing can be delegated to the personal level, but only with crippled processors that cannot run code that hasn't yet been registered with the authorities and digitally signed. You kids ask the wrong questions. Ungood.

Considering the raw power of today's typical smart phone and its form factor, I'd say we're rapidly approaching the limits on the size of devices, especially when you consider the rooms that computers far less powerful used to occupy in the days of yore.

    There are physical limits to how small electronics can be made, even if new lithography technologies are developed. We'd need to come up with something energy based instead of physical in order to get smaller than those barriers.

    Plus there's the fact

    • I was really amused when my wife took a picture of a Cray-1 supercomputer with her original iPhone. I did some performance comparisons, and the Cray would only be faster for massively parallel floating-point operations. On the other hand, I didn't check out the iPhone's graphics hardware, so that might well have had the Cray beat.

  • Feynman's talk on this seems required reading: There's plenty of room at the bottom [zyvex.com]. None of the linked articles even mention Feynman's name.
    • None of the linked articles even mention Feynman's name.

      Why should they? Not many current astrophysics papers mention Galileo, either. Nor do most papers in modern computing reference the work of John von Neumann.

In science, an original idea or suggestion by someone, no matter how famous, is built upon by others, whose work is built upon by others, until someone actually turns an incomplete idea into a field of study. And by this time the literature has evolved to view the problem slightly differently, per

      • by dissy ( 172727 )

        But come on, do you really think a 55 year old paper is going to be at the top of impact rankings when computed against current research in a field moving this fast? And, even if so, isn't it more likely this work has been superseded by others? IT'S BEEN 55 GOD DAMN YEARS, FOR CHRISSAKE!!! I think your hero worship is showing. At least find a more modern reference.

To be fair, this is a perfectly acceptable reference in the given context, and the age only helps the argument rather than hindering it, as you suggest.

Even at 55 years old, the Feynman paper is based on known technology and physics at the time. This provides a high-end boundary to the answer that is only potentially (in this case definitely) inaccurate on exactly how much lower the size can actually get.

        Our tech has changed, but physics not quite as much.
        What we know today about building at the atomic scale is only

  • by Hamsterdan ( 815291 ) on Thursday August 14, 2014 @07:05PM (#47674121)

As we're nearing the size limit for IC manufacturing technology, what about reducing bloat and coding in a more efficient manner?

    Let's look at the specs of earlier machines

Palm Pilot: 33MHz 68000 with 8MB of storage, yet it was fast and efficient.
C=64: 1MHz 6510 with 64K RAM (38K usable), also fast and efficient; you could run a combat flight simulator on it (Skyfox).
Heck, even a 16MB 66MHz 486 was considered almost insane in early 1994 (and it only had a 340 *MB* HDD), and everything was fine. (I bought that in high school for AutoCAD.)

    Go back to the same efficient and small code, and our devices will seem about 10 times faster and will last longer.

C=64: 1MHz 6510 with 64K RAM (38K usable), also fast and efficient

      It wasn't fast by any stretch (I had the European PAL spec, which was even slower). If you wanted to use "high resolution" mode (320x200 pixels) then it took minutes to draw even simple curves. If you programmed it using the built-in BASIC, anything non-trivial took minutes or more. The only way you could write anything like a useful program was to use assembler, coding directly to the bare metal. Some of the games resulting were impressive
  • There was a time when 1GHz/1GB was overkill, and while CPU/IO speed improves, usability doesn't seem to be getting all that much better. Considering we've had multiple orders of magnitude improvement in raw hardware performance, shouldn't other factors -- usability, reliability, security -- get more focus?

    Sure, those could benefit from more raw hardware capability, but the increased 'power' doesn't seem to be targeted at improving anything other than raw application speed -- and sometimes, not even that.

    • by fyngyrz ( 762201 )

      There was a time when 1GHz/1GB was overkill

      Not for desktop computers, there wasn't. Perhaps for your watch. Then again, probably not.

      There's no such thing as "overkill" in computing power and resources. There is only "I can't get (or afford) anything faster than this right now."

    • If I had a computer that was a million times faster than my current computer I could still use something even faster. Even at a billion times faster I could still use more power. We are at the stage where we can use computer simulations to help bring drugs to market. The computational power needed is HUGE but it is also helping bring drugs (including CURES) to market that would have never been possible otherwise. There are even potential cancer cures that will NOT make it to market ANY other way.

      The average

      • I run Rosetta@Home [bakerlab.org] on my own computers -- I can't believe I forgot about that. Great point.

      • The scientists and engineers that design the US nuclear weapons have computational problems that are measured in CPU months. A senior scientist was talking to a consultant, and explained the importance of these simulations.

        "Just think about it.", he said. "If we get those computations wrong, millions of people could accidentally live."

        -credit to the unknown US nuclear scientist who told this joke to Scott Meyers, who in turn relayed it at a conference.

In my case though these calculations will save millions of lives and improve the quality of life for many millions more. Even the most powerful supercomputers in the world would take years to solve many of these problems and we keep finding more to solve. We approximate solutions because that is still better than what we had before and it is the best we can do for now.

          With more computing power we can save more lives.

Three years ago in the UK I bought my daughter a Dell laptop: i5 processor, 6GB RAM, 500GB hard drive, £350. Recently it died, so I looked around for a replacement. Listed in the bargain forums here (hotukdeals.com) only a couple of weeks ago was a laptop with an i5, 6GB RAM, 1TB hard drive, £380. So in three years the price has barely changed for a remarkably similar spec. Moore's law seems dead? I agree with the original poster!
  • The gating issue is now screen size and finger size. Nice big high def screens need big batteries to keep them lit. I don't think those items are going to get much smaller.

  • The one word answer is "Yes". Betteridge's law of headlines is finally broken.

  • Computers will get faster, they always do.

But let's be honest, the influx of Java/Ruby/Python and "easy" amature programming is making our computers slower than they were 5 years ago.
    - Slower language before we even start.
    - Single thread
    - No optimizations. Dreadful performance
    - Relying on language safety measures, instead of "good logic". Buggy as hell.
- Relying on 50+ libraries, just to use 1 function in each.

    If only they would learn C++. Our processors probably wouldn't need to be upgraded for another 5 y
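
For what it's worth, the "slower language" complaint is easy to demonstrate with a micro-benchmark (and just as easy to over-interpret); here is a minimal sketch comparing a hand-written Python loop with the C-implemented built-in doing the same work:

```python
import timeit

N = 1_000_000
data = list(range(N))

def python_loop():
    total = 0
    for x in data:              # interpreted bytecode on every iteration
        total += x
    return total

def builtin_sum():
    return sum(data)            # the loop runs in C inside the interpreter

for fn in (python_loop, builtin_sum):
    t = timeit.timeit(fn, number=20)
    print(f"{fn.__name__:12s}: {t:.3f} s for 20 runs")
```

None of which settles the C++ argument, but it does show where the interpreter overhead being complained about actually lives.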

Are you the same guy that labels porn "amature"? It's "amateur". "Amature" doesn't even exist, except if you interpret the "a-" prefix as "not", in which case the word would mean "not mature".
Are you the same guy that labels porn "amature"? It's "amateur". "Amature" doesn't even exist, except if you interpret the "a-" prefix as "not", in which case the word would mean "not mature".

No, I'm just the guy who trusted Google Spell Checker a little too much.

Not sure what porn's got to do with incorrect spelling, but why not.
