Can Our Computers Continue To Get Smaller and More Powerful?
aarondubrow (1866212) writes In a review article (paywalled) in this week's issue of the journal Nature (described in a National Science Foundation press release), Igor Markov of the University of Michigan/Google reviews the limiting factors in the development of computing systems to help determine what is achievable, in principle and in practice, using today's and emerging technologies. "Understanding these important limits," says Markov, "will help us to bet on the right new techniques and technologies." Ars Technica does a great job of expanding on the various limitations that Markov describes, and on the ways in which engineering can push back against them.
Obvious (Score:5, Insightful)
Yes. Next question please.
Re: (Score:1)
C-C-C-Combo Breaker! In your face Betteridge!
Re: (Score:2)
More powerful, perhaps. Smaller? Maybe not. We're already at the point where we can have watch-sized displays and full keyboards on our phones. The limiting factor is going to be 1) displays that are small but still readable and 2) input devices that aren't too tiny for human-sized fingers. As far as smart phones go (which, in essence, are tiny computers), I don't see them becoming much smaller due to these factors. However, I'm sure something completely innovative will come along that will make us lo
Re: (Score:3)
Re: (Score:2)
Zoolander and his phone begs to differ.
Re: (Score:3, Insightful)
Actually, the answer is no and that is obvious. Eventually we are going to run into limits driven by the size of atoms (and are in fact already there).
Once you get a logic gate under a few atoms wide, there is no more room to make things smaller. No more room to make them work on less power. We will have reached the physical limits, at least in the realm of our current lithographic doping processes. We are just about there.
This is not to say there won't be continued advances. They are going to get more a
Re: (Score:2)
What's obvious is that we can continue to get smaller and more powerful than what we have already. Do you doubt that in a year's time, let alone five, computers will be smaller, more powerful, and consume less energy? And then there are mobile devices, which have a LONG way to go, especially in regards to batteries. Thinking that we've already reached the limits of speed and size is laughable. It really is up there with the "shut down the patent office because everything has been invented" attitude.
Re:Obvious (Score:5, Insightful)
If you read my comment.... I'm saying that we are very close to hitting the physical limits. In the past, the limits were set by the manufacturing process, but now we are becoming limited by the material itself: the size of silicon atoms.
There is basically only one way to reduce the current/power consumption of a device: make it smaller. A smaller logic gate takes less energy to switch states. We are rapidly approaching the size limits of the actual logic gates and are now making gates measured in hundreds of atoms wide. You are not going to get that much smaller than a few hundred atoms wide, which means the primary means of reducing power consumption is reaching its physical limits. Producing gates that small also requires some seriously exacting lithography and doping processes, and we are just coming up the yield curve on some of these, so there is improvement still to come, but we are *almost* there now.
There are still possible power reducing technologies which remain to be fully developed, but they are theoretically not going to get us all that much more, or we'd have already been pushing them harder. So basic silicon technology is going to hit the physical limits of the material pretty soon.
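The power-vs-size relationship described above can be sketched with the standard CMOS dynamic power formula P = a·C·V²·f; the capacitance, voltage, and frequency numbers below are illustrative assumptions, not figures for any real process node:

```python
# Why smaller gates switch on less energy: dynamic energy per transition
# is E = C * V^2, and dynamic power is P = a * C * V^2 * f.
# All component values here are made-up illustrations.

def dynamic_power(activity, cap_farads, volts, freq_hz):
    """Classic CMOS dynamic power estimate: P = a * C * V^2 * f."""
    return activity * cap_farads * volts**2 * freq_hz

# Hypothetical "large" vs "small" gate: shrinking C and V together
big = dynamic_power(0.1, 1e-15, 1.2, 3e9)      # 1 fF gate at 1.2 V
small = dynamic_power(0.1, 0.5e-15, 0.9, 3e9)  # 0.5 fF gate at 0.9 V
print(big, small, big / small)
```

Halving the capacitance and trimming the voltage from 1.2 V to 0.9 V cuts switching power by more than 3x in this toy example, which is why shrinking has been the main power lever.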
Re: (Score:2)
I think the greatest speed limitation now is our "computing dimensions" -- we are still using binary logic in the computer. For instance, if we moved to optical computing -- sure the structures would get larger, and there are density issues, but if you can create a binary logic gate for each color, your "dimension" of computing is limited only by the frequencies you can discern. You add massive parallelism.
Now if we can move away from binary logic at the same time, more computing work can get done per CPU cycle.
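The "more work per symbol" idea above can be made concrete: a gate that can distinguish n levels (colors, voltages) carries log2(n) bits per symbol instead of 1. A minimal sketch:

```python
import math

# Information per symbol if a gate distinguishes n levels (e.g. n colors
# in a hypothetical optical gate) instead of 2 voltage levels.

def bits_per_symbol(levels):
    return math.log2(levels)

for n in (2, 4, 16, 256):
    print(n, "levels ->", bits_per_symbol(n), "bits per symbol")
```

So a hypothetical 256-level gate would carry 8 bits per symbol, though distinguishing that many levels reliably is exactly the hard engineering problem.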
Re: (Score:2)
The question is, will they have to?
I mean, back when the original iPhone was released and people were releasing ever-tinier cellphones, it made sense. But given that cellphones are getting bigger and bigger, the pressure to make smaller and smaller SoCs is decreasing.
I mean, 3.5" was ginormous before. Now that people are buying phones with 6" screens and larger, the amount of size reduction needed is practically nil.
Re: (Score:2)
I wonder if it will be like portable music. We started with big heavy tube radios. Then they started shrinking until you could put one on the kitchen table. Next, the tinny sounding AM transistor radio. They got a bit bigger after that, but were in stereo and featured 8-track, cassette, and CD with respectable speakers. Then we saw monster 'boom boxes' with wheels and handles and Christmas lights in the speaker grilles (I think it might have had a black and white TV in there somewhere too). I'm pretty sure
Re:Obvious (Score:4, Insightful)
We're eventually going to hit limits, but there's no reason to think that that limit is a logic gate a few atoms wide. There's isentropic computing, spintronics, neuromorphic computing, and further down the road, stuff like quantum computing.
Re: (Score:2)
The theoretical amount of energy to flip a bit is, I suspect, far less than we're currently using. Nobody's claiming that there are no limits, just that we haven't hit them yet. (Also, if we can have fully reversible computing, the minimum energy cost to flip a bit goes away.)
Re: (Score:2)
We can move a lot of processing off to servers now that we have a fast, cheap and ubiquitous network. That will allow our devices to be smaller and use the resources of a larger server somewhere else.
Re: (Score:2)
now that we have a fast, cheap and ubiquitous network.
We do?
Re: (Score:2)
Well, some of us do, others are catching up. The UK is currently about 14 years behind the curve, for example.
Re: (Score:2)
We can move a lot of processing off to servers now that we have a fast, cheap and ubiquitous network. That will allow our devices to be smaller and use the resources of a larger server somewhere else.
You have a point, sort of. We are already doing this. However, apart from the display and CPU resources (in that order) the third largest power consumer in a cell phone is running the radios. When you start transferring data at high rates, it takes a lot of power. Given the normal distances between the phone and the cell tower, we are just about at the physical limits on this too. It just takes X amount of RF to get your signal over the link and there is not much you can do w/o violating the laws of phy
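The "it just takes X amount of RF" point follows from the free-space path loss formula, FSPL(dB) = 20·log10(4πdf/c); a small sketch, with a hypothetical 1.9 GHz link as the assumed example:

```python
import math

# Free-space path loss: the minimum attenuation physics imposes between
# transmitter and receiver. The 1.9 GHz frequency and distances here are
# illustrative assumptions for a phone-to-tower link.

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

print(fspl_db(1_000, 1.9e9))   # 1 km: roughly 98 dB of loss
print(fspl_db(10_000, 1.9e9))  # 10 km: every 10x in distance adds 20 dB
```

That 20 dB per decade of distance is a law of physics, not an engineering choice, which is why transmit power can't shrink much for a given tower spacing.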
Re: (Score:2)
Once phones have more memory, we can start hash-addressing files and greatly improve caching.
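The hash-addressing idea above is essentially content-addressable storage: cache by a digest of the bytes, so identical content is stored once regardless of filename. A minimal sketch (the helper names are made up for illustration):

```python
import hashlib

# Toy content-addressed cache: keys are SHA-256 digests of file contents,
# so two files with identical bytes share one cached copy.

cache = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    cache.setdefault(key, data)  # identical content dedupes itself
    return key

def get(key: str) -> bytes:
    return cache[key]

k1 = put(b"same bytes")
k2 = put(b"same bytes")
print(k1 == k2, len(cache))  # same key, one stored copy
```

This is the same trick that lets a browser or CDN skip refetching content it already holds, at the cost of computing the hash.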
Re: (Score:3)
I believe that we can get things smaller. I'll agree that we're approaching the limits as regards the basically 2-dimensional layout that we're currently using for chips, but that leaves the 3rd dimension. Of course there are a lot of technical issues to overcome, but I believe that they will be overcome.
Re: (Score:2)
I don't think going 3D is going to fix the power density problem. You still have to get the heat generated out of the die and keep the device within its operational temperature range. Stacking things in 3D only makes this job harder, along with the question of how you interconnect stuff on multiple layers.
Could we develop technologies to make 3D happen? Sure, we actually are already doing this, albeit in very specific cases. But there are multiple technical issues with trying to dope areas in 3D. You can do it,
Too much power dysfunctional (Score:2)
I really hope computers stop getting more powerful, because the trend in the last few years has been for software bloat to use up the added capacity, and now computers are getting more powerful but less useful.
Re: (Score:2)
You've missed the point. Software bloat has started accelerating *faster* than hardware is improving, resulting in a net loss for the user.
And the cause is frequently the egos of bad programmers.
Re: (Score:2)
Actually, the answer is no and that is obvious. Eventually we are going to run into limits driven by the size of atoms (and are in fact already there).
No problem with atomic size limits, let me just whip out my handy quark notcher!
yes. Especially per passenger. (Score:5, Interesting)
> Did our jets get faster and lighter and cheaper?
Yes. Especially lighter and cheaper PER PASSENGER, which is the goal for passenger jets.
> it still takes the same amount of energy to fly across the Atlantic.
Nope, fuel efficiency and energy efficiency have improved significantly.
Re: (Score:3)
You're being very dishonest when you leave this part out:
Not by the same degree as computing.
By those standards, airplanes have basically stood still for the last 50 years. Sure, they get a bit lighter, a bit better engines, a bit better aerodynamics, but they're not radically different nor faster. Already the very first commercial transatlantic flight, Berlin-New York, was done in 25 hours, orders of magnitude faster than a boat and still on the same order - 8.5 hours - today. Same with cars, they've come a long way since the Model T Ford but it could do 40-4
Re: (Score:2)
The sad fact though is that flying from London to New York still takes the same time as it did 40 years ago.
Nope. (almost) 40 years ago we had... Concorde!
That miracle of modern engineering took 3h 30min instead of the subsonic 7-8 hours it takes now. And why was Concorde retired? It just didn't make any money.
At the end of the day, people prefer slow, cheap and unbearable over fast, stylish and extortionately expensive.
Re: (Score:2)
That miracle of modern engineering took 3h 30min instead of the subsonic 7-8 hours it takes now. And why was Concorde retired? It just didn't make any money.
It did, for BA at least. It's not clear why it shut down. Mostly it seems that Airbus refused to continue maintenance and also refused to sell the maintenance operation and plans on to anyone else.
So basically, blame the French.
Re: (Score:2)
Yup. However, the Concorde was capable of subsonic flight. In crossing the Atlantic, for example, it would be easy to only go supersonic over the ocean. In crossing the US, not so much.
Re:Obvious (Score:5, Insightful)
Did our jets get faster and lighter and cheaper?
The fastest air breathing aircraft was the SR-71, which went into production in 1962, based on technology from the 1950s. So for at least half a century, jets did not get faster. Aircraft improved enormously between 1903 and 1960. Then the rate of improvements fell off a cliff. That is why Sci-Fi from that era often extrapolated the improvements into flying cars, and fast space travel, but far fewer predicted things like the Internet or Wikipedia.
What's after atoms?
Silicon lithography will hit its limits after a few more iterations. But nano-assembly techniques may allow silicon transistors to be even smaller. After that we may be able to move to carbon nanotube transistors, based on spintronics to lower the heat dissipation. There is still plenty of room at the bottom.
Re: (Score:2)
There is always the option of distributed computing, both tightly coupled (cores) and loosely coupled (different CPUs.)
I wouldn't be surprised to see RAM chips with a part of the die dedicated to CPU/FPU/GPU functions. Add more RAM, add more CPUs.
Eventually the concept of a "central" processing unit may give way to passive backplanes and various speed buses, perhaps with a relatively lightweight chip directing everything.
Another example is the x86 architecture. Intel has been amazing in keeping it going, bu
Re: (Score:2)
I wouldn't be surprised to see RAM chips with a part of the die dedicated to CPU/FPU/GPU functions.
The same package is already commonplace, but the same die is problematic because RAM processes are significantly different from CPU processes.
Eventually the concept of a "central" processing unit may give way to passive backplanes and various speed buses, perhaps with a relatively lightweight chip directing everything.
This is a very bad idea. Moving bits uses orders of magnitude more energy than computation, so you need to concentrate the computing behind multiple caches and move the data as little as possible. So the model will continue to be based around islands of high-performance computing connected by slow, expensive busses, but the "CPU" will contain many smaller parallel pr
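The claim that moving bits costs far more energy than computing on them can be illustrated with rough per-operation figures; the picojoule values below are order-of-magnitude assumptions of the kind quoted in computer-architecture talks, not measurements of any particular chip:

```python
# Rough energy-per-operation comparison: compute vs data movement.
# The picojoule figures are illustrative order-of-magnitude assumptions.

ENERGY_PJ = {
    "64-bit FP op":        20,      # arithmetic, local to the ALU
    "read 64b from cache": 50,      # nearby SRAM
    "move 64b across die": 500,     # long on-chip wires
    "read 64b from DRAM":  10_000,  # off-chip memory access
}

flop = ENERGY_PJ["64-bit FP op"]
for op, pj in ENERGY_PJ.items():
    print(f"{op}: {pj} pJ ({pj / flop:.0f}x a FP op)")
```

Under these assumptions a DRAM access costs hundreds of times a floating-point operation, which is exactly why the design pressure is toward caches and keeping data next to the compute.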
Re:Obvious (Score:5, Informative)
That's only true if you're only judging it by outright speed, height, etc. Things have continued to improve in terms of efficiency, thrust-to-weight ratio, noise, cleanliness of fuel burn and above all, reliability.
The original RB211 turbofan (the first big fanjet of the type that all modern airliners use) had a total lifetime of 1,000 hours. Nowadays it's >33,000 hours. That's an incredible achievement. In 1970, as a young kid with a keen interest in aviation, I would watch Boeing 707s fly in and out of my local airport, all trailing plumes of black smoke, all whining loudly (and deafeningly, on take-off), and I understood where all the noise protesters that frequently appeared on the news were coming from. Nowadays you don't have that, because noise is just not the problem it was, there's no black smoke, and jets slip in and out of airports really very quietly, when you consider how much power they are producing (which in turn helps them climb away more quickly).
As far as computing is concerned, you're right - there's still plenty of room at the bottom. But the current fabrication technology is reaching its limits. Perhaps jet engine manufacturers in the late 60s couldn't see how they would overcome fundamental limits in materials technology to produce the jets we have today, but they did.
Re:Obvious (Score:5, Insightful)
Silicon lithography will hit its limits after a few more iterations. But nano-assembly techniques may allow silicon transistors to be even smaller. After that we may be able to move to carbon nanotube transistors, based on spintronics to lower the heat dissipation. There is still plenty of room at the bottom.
The point of the article, and the article it references, is that it's easy to say stuff like that, but also mostly irrelevant to practical computing, because in the history of modern computing it's never been absolute physical limits that caused major changes to how computing is implemented. Just because there's room at the bottom doesn't mean it's room we can use. We *may* be able to use nano-assemblers for silicon and *may* be able to use carbon nanotube transistors, but unless that gets translated into someone working on actual practical implementations of those technologies, they will mean as much to the average consumer as the SR-71 that's being discussed in this thread means to the average commercial air traveler. In other words, exactly zero.
When I was in college people were already talking about the exotic technologies we would have to migrate to in order to achieve better performance, and that was the late eighties. In the twenty-plus years since then, we're still basically using silicon CMOS. Granted, the fabrication technologies and gate technologies have radically improved, but the fundamental manufacturing technology is still the same. It's been the same because there are hundreds of billions of dollars of cumulative technological infrastructure and innovation behind silicon lithography. For these other "room at the bottom" technologies to be meaningful, and not just SR-71s, they need to be able to reach the same point as silicon lithography, with its multi-decade head start and approaching-trillion-dollar learning curve. It's not enough to just work in theory, or even in practice as a one-off. If it can't work at the scale and scope of silicon lithography, it's just an SR-71: a cool museum piece of advanced technology almost no one will ever see, touch, use, or directly benefit from.
It isn't trivially obvious there exists a technology commercializable in the next few decades that can replace silicon lithography. Anyone who thinks that's obvious doesn't understand the practical realities of scaling these technologies.
Re: (Score:2)
Did our jets get faster and lighter and cheaper?
The fastest air breathing aircraft was the SR-71, which went into production in 1962, based on technology from the 1950s. So for at least half a century, jets did not get faster. Aircraft improved enormously between 1903 and 1960. Then the rate of improvements fell off a cliff. That is why Sci-Fi from that era often extrapolated the improvements into flying cars, and fast space travel, but far fewer predicted things like the Internet or Wikipedia.
That's because you're basing all aircraft improvement on speed.
This is flat out wrong.
The reason aircraft have not gotten faster than the SR-71 is partially because you hit a serious wall at those speeds. The air literally becomes harder to push through. Physics is the enemy here; this is why it's expensive to produce a car that goes over 400 KPH, and that car is not very reliable. Friction and air resistance need to be overcome, and heat dissipation has to be balanced with weight (the Veyron has 11 radiators).
Re: (Score:2)
I think it's fair to say that we've reached a point where we're flying "fast enough" for most practical purposes. Flying to the other side of the world only takes about 18 hours or so, which is pretty amazing, and the vast majority of flights are much shorter hops. Once cost, safety, reliability, and noise all reach a point where they can't be easily improved, aerospace engineers will probably start pushing harder against the speed barrier again. It's not that there's no impetus, it's just that there are
Re: (Score:2)
Oddly, warship speed declined also. In WWII, there were plenty of ships that could hit forty knots or get very close, and any serious warship (except battleships) could break thirty. Nowadays, warships can frequently get only into the mid or high twenties.
Modern warships are better at sustained speeds in bad weather and sea conditions, but don't have the same top speed.
Re: (Score:2)
It takes zero energy to flip a bit. What does take energy is erasing bits, and as it turns out, that does not seem to be fundamental to the idea of computation. The limits of computation have nothing to do with energy per se. Rather, they are about entropy.
http://en.wikipedia.org/wiki/V... [wikipedia.org]
http://en.wikipedia.org/wiki/R... [wikipedia.org]
Re: (Score:2)
Zero? No, that is incorrect---both in theory and in the normal conversational context.
Did you read your own links?
Per Landauer's principle, it takes a small amount of energy. That same article states that modern computers consume millions of times the theoretical minimum. So, technically, the energy requirement is non-zero, and practically it can be quite high.
The limits of computation have a great deal to do with energy, as any given computation must occur on some physical medium, and that medium con
Re: (Score:2)
No, it requires zero energy.
Landauer's principle is about erasing bits (or, more generally, changing the information contained in a bit). In other words, irreversible operations. It does not apply to logically reversible operations (the simplest of which is flipping bits, but you can represent a surprising amount of computation in reversible terms).
Re: (Score:2)
flipping:
1 -> 0
0 -> 1
erasing:
1 -> 0
0 -> 0
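The parent's tables can be checked mechanically: flipping is a bijection (distinct inputs stay distinct, so it is logically reversible), erasing collapses two inputs into one, and Landauer's principle only charges for that irreversible case. A small sketch, using the standard k·T·ln(2) bound at an assumed room temperature of 300 K:

```python
import math

# The parent's truth tables as mappings: NOT is invertible, erase is not.
flip = {0: 1, 1: 0}   # bijection: reversible, no Landauer cost
erase = {0: 0, 1: 0}  # both inputs collapse to 0: irreversible

print("flip reversible:", len(set(flip.values())) == 2)
print("erase reversible:", len(set(erase.values())) == 2)

# Landauer bound for erasing one bit at room temperature: E = k*T*ln(2)
k_B = 1.380649e-23  # Boltzmann constant, J/K
E = k_B * 300 * math.log(2)
print(E)  # ~3e-21 J, far below what real gates dissipate per switch
```

The asymmetry between the two mappings is the whole argument: only operations that destroy a distinction between states carry a mandatory thermodynamic cost.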
Re: (Score:2)
Also, your entire reply is pretty much gibberish.
Re: (Score:2)
'change', in this context, is different from flipping a bit. It refers to erasing a bit, as mentioned, in fact, in just the preceding paragraph:
'It holds that "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information bearing degrees of freedom of the information processing apparatus or its environment". (Bennett 2003)'
Read my other reply about the difference between
Re: (Score:2)
From a practical perspective for a personal computing device there will always be a lower limit on what's useful. Think of the Star Trek: TNG communicator or the Dick Tracy watch, anything made small still has to have a good user interface. In the Star Trek example the UI is entirely voice activated, so we'll either have to rethink our UI, or attempt to cram a more recognizable UI into a smal
Re: (Score:2)
I agree, and I don't have any visual impairment to speak of. That's almost more a storage-density matter though, rather than a processing power issue.
I want a good nonvisual UI because of driving. I think that this push for touchscreens in cars is foolhardy at best, outright hazardous at worst. We need to get away from interfaces for sec
Re: (Score:2)
I would rather that people so visually impaired that reading the phone or GPS unit is difficult not be driving at all.
Battery, Screen, Body (Score:4, Insightful)
Even if the electronics fail to get much smaller, there's plenty of room to be had in batteries, screens, and the physical casings of our handheld devices.
Re: (Score:2)
Even if the electronics fail to get much smaller, there's plenty of room to be had in batteries, screens, and the physical casings of our handheld devices.
At first glance, I read this as "Even if our electrons fail to get much smaller," and, for a second, I thought, "Whoa. Are people working on that?" Guess I gotta get my eyeglass prescription checked.
They're pretty small now. Efficiency will improve (Score:2, Insightful)
We're running up against physical limitations but "3d" possibilities will take our 2d processes and literally add computing volume in a new dimension.
So of course it's going to continue, the only question is one of rate divided by cost/benefit.
Betteridge vs Moore in the battle of the laws (Score:5, Funny)
Betteridge's law says no.
Moore's law says yes.
In the battle of the eponymous laws, which law rules supreme? Find out in this week's epic TFA.
Re: (Score:2)
Darwin's Law?
Re: (Score:2)
Cole's Law.
[...thinly sliced cabbage...]
Re: (Score:2)
What if I don't want to slice my cabbage thin? What are you, some sort of cabbage Nazi? Hitler probably liked thin sliced cabbage too!
Godwin's Law
Re: (Score:2)
Finagle's law.
Re:Betteridge vs Moore in the battle of the laws (Score:5, Funny)
In the battle of the eponymous laws, which law rules supreme?
Murphy's Law.
Re: (Score:2)
Sod's law. The correct answer is the least desirable.
performance never measured in MHz (Score:2)
Three decades in the industry and I've never seen performance measured or stated in MHz. At various times MIPS (referencing a specific architecture, e.g. VAX MIPS or Mainframe MIPS) or MFLOPS might have been used, but never clock speed alone. Then as now, other benchmarks were also used.
Re: (Score:2)
three decades in the industry and I've never seen performance measured or stated in MHz.
Did someone do that in any of the linked articles?
Re: (Score:2)
yes, it was the first sentence of John Timmer's Ars article that set me off: "When I first started reading Ars Technica, performance of a processor was measured in megahertz"
Re:performance never measured in MHz (Score:5, Insightful)
three decades in the industry and I've never seen performance measured or stated in MHz
Erm... from the 80286 through the Pentium 3 CPU clockspeed was pretty much THE proxy stat for "PC performance".
Re: (Score:3)
I can't tell if you are being sarcastic or not...
What you say is true only if you bought all your processors from Intel.
Once AMD came along, it was not entirely true if you compared to them. It was not true if you compared to Mac that used 680x0 and later PowerPC.
Re: (Score:2)
What you say is true only if you bought all your processors from Intel.
You say that like this wasn't common as dirt for most of a decade or so.
Once AMD came along
Yeah, that was mostly later. Pentium 4 vs Athlon XP etc. My suggested time frame ended with the Pentium III for a reason.
It was not true if you compared to Mac that used 680x0 and later PowerPC.
Also true, but comparatively few did that. Choosing a Mac vs a PC rarely had anything to do with performance. It was entirely about OS+applications; then o
Re: (Score:2)
You must be too young for the Pentum 2 vs. K6-2 debates.
You must be too young to remember that in the late 90s / early 00s, no one other than techies even knew there was competition between Intel and AMD. They just bought their Intel Inside Dells and Gateways.
Re: (Score:3)
Re: (Score:2)
Actually, back in the 386/486 days... YES, you did compare AMD and Intel by MHz... in FACT that was one of AMD's big sellers... Intel's fastest 386 ran at 33MHz; AMD's? 40MHz.
486: Intel had 33MHz (66 and 100MHz for the DX2/DX4),
AMD had 40MHz (80 and 120MHz respectively).
They were famous for exploiting the MHz = speed myth... that was the first fall of AMD from grace. Following that, with the K5 and K6 processors, they wouldn't get back into the mainstream until the Athlon, which also competed on the MHz scale...
Re: (Score:2)
Marketing and sales to ignorant consumers don't count. The "MHz Myth" has been time and again a subject in many a PC magazine.
More meaningful benchmarks have existed since long before that era (e.g. Whetstone from the early 70s) and many were used (e.g. Dhrystone in the mid 80s) all through the rise of the microprocessor (8080, 6502, etc.)
Re:performance never measured in MHz (Score:4, Insightful)
Marketing and sales to ignorant consumers don't count.
Originally it was useful enough. Marketing and sales perpetrated it long after it wasn't anymore.
The "MHz Myth" has been time and again a subject in many a PC magazines
Only once the truth had become myth. The MHz "myth" only existed because it was sufficiently useful and accurate to compare Intel CPUs by MHz within a generation, and even within limits from generation to generation, for some 8 generations.
It wasn't really until Pentium 4 that MHz lost its usefulness. The Pentium 4 clocked at 1.4GHz was only about as fast as a P3 1000 or something; and AMD's Athlon XP series came out and for the first time in a decade MHz was next to useless. Prior to that, however, it was a very useful proxy for performance.
More meaningful benchmarks have existed long before that era (e.g. Whetstone from early 70s) and many were (e.g. Dhrystone in mid 80s) used all through the rise of the microprocessor (8080, 6502, etc.)
Sure they did. But for about a decade or so, if you wanted a PC, CPU + MHz was nearly all you really needed to know.
Re: (Score:2)
But there were ALWAYS alternatives to Intel processors even for personal computers (e.g. Motorola) from day one of the personal computer movement, and so the Megahertz Myth was always meaningless. My home computer in 1991 had a Motorola chip (NeXTStation); in 1996 it had a Sparc chip.
Re: (Score:2)
and if anyone interested, 1976 I had a SWTP 6800
Re: (Score:2)
But there was ALWAYS alternatives to intel processors even for personal computer (e.g. motorola) from day one of the personal computer movement, and so the Megahertz Myth was always meaningless.
Only if you cared about comparison with non-intel PCs. People buying Macs weren't worried about performance comparisons with PCs, they were only concerned about performance compared to OTHER macs. The (much larger) DOS/Windows PC crowd only cared about performance relative to other intels.
My home computer in 1991 ha
Re: (Score:2)
but what of the 80486 doing about 80% of the MIPS of the clock frequency, while 386 only 33% and the Pentium I did 150% (e.g. 75MHz == 125 million x86 MIPS) ?
Some would argue Mac with MacOSX with Motorola chip is a next-gen NeXT, and a LOT of those sold.
Sun was selling 50,000 sparc workstations per quarter in 1992.
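The parent's clocks-versus-MIPS reasoning can be written out directly: effective x86 MIPS is roughly clock (MHz) times instructions per clock. The IPC figures below are the parent's rough estimates, not measured benchmarks:

```python
# Effective performance ~= clock * IPC, using the parent's rough IPC
# estimates for each generation (assumptions, not benchmark results).

ipc = {"386": 0.33, "486": 0.8, "Pentium": 1.5}

def approx_mips(chip, clock_mhz):
    return clock_mhz * ipc[chip]

print(approx_mips("386", 33))      # ~11 MIPS
print(approx_mips("486", 66))      # ~53 MIPS
print(approx_mips("Pentium", 75))  # ~112 MIPS
```

This is why MHz worked as a proxy within one generation (IPC held constant) but broke down across generations and vendors, where IPC differed several-fold.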
Re: (Score:2)
but what of the 80486 doing about 80% of the MIPS of the clock frequency, while 386 only 33% and the Pentium I did 150% (e.g. 75MHz == 125 million x86 MIPS) ?
What about it? That just serves to further amplify the improvement from CPU generation to CPU generation.
Some would argue Mac with MacOSX with Motorola chip is a next-gen NeXT, and a LOT of those sold.
Perhaps, but they weren't selling them to people who were basing the purchasing decisions based on their performance relative to DOS/Windows PCs.
There wa
Re: (Score:2)
200K is one percent of 20M, and that 20M was not from a single vendor, as 1992 was the year everyone and their uncle jumped into the PC market as prices plummeted.
Did you know Apple was considered part of the PC market in 1992, and had a whopping 19 percent share? That wasn't an Intel platform.
Re: (Score:2)
Did you know Apple was considered part of the PC market in 1992, and had whopping 19 percent share?
It's entirely beside the point. Virtually nobody was comparing Apples to Intels to Sparcs based on benchmarks to make a buying decision.
The decision to buy Apple or Intel or Sparc was made based on OTHER factors (software availability, features, etc), and THEN a buying decision within the chosen platform was made based on price/performance etc.
If the platform chosen was intel, then MHz was the primary performa
Re: (Score:2)
Nothing you have said reinforces your mistaken notion that MHz ever measured performance. I've already shown that is not true even between Intel processors. You only believe an urban legend, a myth, a falsehood was true. Those of us who did measure the performance of machines over the past four decades used benchmarks.
Re: (Score:2)
You only believe an urban legend, a myth, a falsehood was true.
Give me a break. Everybody who lived at the time buying computers used MHz as a proxy for performance.
Those of us who did measure performance of machine over the past four decades used benchmarks.
I'm sure you did. I remember the benchmarking tools too. I know anyone professionally measuring performance used them.
But the majority of the buying public, and a great deal of corporate/business/enterprise/educational buyers too made all their decisio
Re: (Score:2)
Actually, I would say that MHz lost its usefulness in the x86 world long before the P4 came out. More like the (original) Pentium era, when Cyrix and AMD started selling chips with the "PR" rating. Of course, the PR thing was even more meaningless, as a 150MHz Cyrix chip might perform like a Pentium 200 when it came to integer performance (hence "PR200+"), but was more like a Pentium 90 when it came to FPU performance.
Re: (Score:2)
Re: (Score:2)
If you'll allow a few more years, my first TRS-80 had a 1.77MHz Z80, my second had about a 3.2MHz Z80A, and my third had a blisteringly fast 4MHz Z80A.
Down with paywalls (Score:1)
Get the original article here: Fuck paywalls [libgen.org]
Our own computers? In the FUTURE? (Score:5, Insightful)
Next you'll be telling me they'll let us run unsigned code on processors capable of doing so. You need to get onboard, citizens. All fast processing is to occur in monitored silos. Slow processing can be delegated to the personal level, but only with crippled processors that cannot run code that hasn't yet been registered with the authorities and digitally signed. You kids ask the wrong questions. Ungood.
Considering (Score:2)
Considering the raw power of today's typical smart phone and its form factor, I'd say we're rapidly approaching the limits on the size of devices, especially when you consider the rooms that far less powerful computers used to occupy in the days of yore.
There are physical limits to how small electronics can be made, even if new lithography technologies are developed. We'd need to come up with something energy based instead of physical in order to get smaller than those barriers.
Plus there's the fact
Re: (Score:2)
I was really amused when my wife took a picture of a Cray-1 supercomputer with her original iPhone. I did some performance comparisons, and the Cray would only be faster for massively parallel floating-point operations. On the other hand, I didn't check out the iPhone's graphics hardware, so that might well have had the Cray beat.
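For scale, a back-of-envelope sketch of that comparison (the Cray-1 figure is its commonly cited peak rate; the iPhone figure is an assumed rough estimate for illustration, not a measurement):

```python
# Rough comparison of floating-point throughput (MFLOPS).
# Cray-1: 80 MHz clock, up to 2 FP results per cycle -> ~160 MFLOPS peak.
# Original iPhone (ARM1176 @ 412 MHz): assume ~40 MFLOPS sustained
# scalar FP -- a guess for illustration only.
cray1_peak_mflops = 80 * 2
iphone_scalar_mflops = 40  # assumed figure

ratio = cray1_peak_mflops / iphone_scalar_mflops
print(f"Cray-1 peak is roughly {ratio:.0f}x the assumed iPhone scalar rate")
```

Which fits the comment above: the Cray only clearly wins on its specialty, pipelined vector floating point.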
Obligatory: "There's Plenty of Room at the Bottom" (Score:2)
Re: (Score:2)
None of the linked articles even mention Feynman's name.
Why should they? Not many current astrophysics papers mention Galileo, either. Nor do most papers in modern computing reference the work of John von Neumann.
In science, an original idea or suggestion by someone, no matter how famous, is built upon by others, whose work is built upon by others, until someone actually turns an incomplete idea into a field of study. And by this time the literature has evolved to view the problem slightly differently, per
Re: (Score:2)
But come on, do you really think a 55 year old paper is going to be at the top of impact rankings when computed against current research in a field moving this fast? And, even if so, isn't it more likely this work has been superseded by others? IT'S BEEN 55 GOD DAMN YEARS, FOR CHRISSAKE!!! I think your hero worship is showing. At least find a more modern reference.
To be fair, this is a perfectly acceptable reference in the given context, and the age only helps the argument not hinders it as you suggest.
Even at 55 years old, the Feynman paper is based on the technology and physics known at the time. This provides an upper bound on the answer that is only potentially (in this case, definitely) inaccurate on exactly how much smaller things can actually get.
Our tech has changed, but physics not quite as much.
What we know today about building at the atomic scale is only
Remove the Bloat (Score:3)
As we're nearing the size limit for IC manufacturing technology, what about reducing bloat and coding in a more efficient manner?
Let's look at the specs of earlier machines:
Palm Pilot: 33 MHz 68000-family CPU with 8 MB of storage, yet it was fast and efficient.
C64: 1 MHz 6510 with 64 KB of RAM (38 KB usable), also fast and efficient; you could run a combat flight simulator on it (Skyfox).
Heck, even a 16 MB, 66 MHz 486 was considered almost insane in early 1994 (and it only had a 340 *MB* HDD), and everything was fine. (I bought that in high school for AutoCAD.)
Go back to the same efficient and small code, and our devices will seem about 10 times faster and will last longer.
Re: (Score:2)
It wasn't fast by any stretch (I had the European PAL spec, which was even slower). If you wanted to use "high resolution" mode (320x200 pixels), then it took minutes to draw even simple curves. If you programmed it using the built-in BASIC, anything non-trivial took minutes or more. The only way you could write anything like a useful program was to use assembler, coding directly to the bare metal. Some of the resulting games were impressive.
Re: (Score:2)
Mayhem in Monsterland looks pretty good I think.
Does it matter? (Score:2)
There was a time when 1GHz/1GB was overkill, and while CPU/IO speed improves, usability doesn't seem to be getting all that much better. Considering we've had multiple orders of magnitude improvement in raw hardware performance, shouldn't other factors -- usability, reliability, security -- get more focus?
Sure, those could benefit from more raw hardware capability, but the increased 'power' doesn't seem to be targeted at improving anything other than raw application speed -- and sometimes, not even that.
Re: (Score:2)
Not for desktop computers, there wasn't. Perhaps for your watch. Then again, probably not.
There's no such thing as "overkill" in computing power and resources. There is only "I can't get (or afford) anything faster than this right now."
Re: (Score:2)
If I had a computer that was a million times faster than my current computer I could still use something even faster. Even at a billion times faster I could still use more power. We are at the stage where we can use computer simulations to help bring drugs to market. The computational power needed is HUGE but it is also helping bring drugs (including CURES) to market that would have never been possible otherwise. There are even potential cancer cures that will NOT make it to market ANY other way.
The average
Re: (Score:2)
I run Rosetta@Home [bakerlab.org] on my own computers -- I can't believe I forgot about that. Great point.
Re: (Score:2)
The scientists and engineers that design the US nuclear weapons have computational problems that are measured in CPU months. A senior scientist was talking to a consultant, and explained the importance of these simulations.
"Just think about it," he said. "If we get those computations wrong, millions of people could accidentally live."
-credit to the unknown US nuclear scientist who told this joke to Scott Meyers, who in turn relayed it at a conference.
Re: (Score:2)
In my case, though, these calculations will save millions of lives and improve the quality of life for many millions more. Even the most powerful supercomputers in the world would take years to solve many of these problems, and we keep finding more to solve. We approximate solutions because that is still better than what we had before, and it is the best we can do for now.
With more computing power we can save more lives.
Moore's law (Score:2)
The Gating Issue (Score:2)
The gating issue is now screen size and finger size. Nice big high def screens need big batteries to keep them lit. I don't think those items are going to get much smaller.
Betteridge's law of headlines - finally broken (Score:2)
The one word answer is "Yes". Betteridge's law of headlines is finally broken.
Computers Yes. But there's no point (Score:2)
Computers will get faster, they always do.
But let's be honest, the influx of Java/Ruby/Python and "easy" amature programming is making our computers slower than they were 5 years ago.
- Slower languages before we even start.
- Single-threaded.
- No optimizations. Dreadful performance.
- Relying on language safety measures instead of good logic. Buggy as hell.
- Relying on 50+ libraries just to use one function from each.
If only they would learn C++. Our processors probably wouldn't need to be upgraded for another 5 years.
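The interpreter-overhead part of the complaint is easy to see for yourself. A minimal sketch (timings are machine-dependent; the point is the relative gap between a hand-written interpreted loop and the same work done in C-implemented library code):

```python
import timeit

N = 100_000  # arbitrary problem size for the demo

def loop_sum():
    # Summation as an interpreted Python loop: every iteration pays
    # bytecode-dispatch and object-boxing overhead.
    total = 0
    for i in range(N):
        total += i
    return total

def builtin_sum():
    # The same summation via the builtin sum(), whose loop runs in C.
    return sum(range(N))

t_loop = timeit.timeit(loop_sum, number=50)
t_builtin = timeit.timeit(builtin_sum, number=50)
print(f"interpreted loop: {t_loop:.3f}s, builtin sum: {t_builtin:.3f}s")
```

Both compute the same result; the difference is pure language overhead, which is the constant factor (not an asymptotic one) that faster hardware keeps absorbing.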
Re: (Score:2)
Re: (Score:2)
Are you the same guy that labels porn "amature". It's "amateur". "Amature" doesn't even exist, except if you interpret the "a-" prefix as "not", which then the word would mean "not mature".
No, I'm just the guy who trusted Google Spell Checker a little too much.
Not sure what porn's got to do with incorrect spelling, but why not.