Forget Moore's Law? 406
Roland Piquepaille writes "On a day where CNET News.com releases a story named "Moore's Law to roll on for another decade," it's refreshing to look at another view. Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous. "An extraordinary announcement was made a couple of months ago, one that may mark a turning point in the high-tech story. It was a statement by Eric Schmidt, CEO of Google. His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." Check this column for other statements by Marc Andreessen or Gordon Moore himself. If you have time, read the long Red Herring article for other interesting thoughts."
clustering (Score:5, Interesting)
I guess this is better to use interconnected devices in an interconnected world.
Where I work, we recently traded our Sun E10k for several E450s, between which we load-balance requests.
It surprisingly works very well.
I guess Google's approach is then an efficient one.
Sincere question (Score:1, Interesting)
Re:clustering (Score:5, Interesting)
Google's approach is good for Google. If Google wanted to make good use of significantly faster CPUs, they would also need significantly more RAM in their machines (a CPU faster by a factor of 10 can't yield a speed-up factor of ten if the network can't deliver the data fast enough).
For Google it's fine: if a request can be done in, say, half a second on a slower machine, that is a lot cheaper than a machine 10x as fast doing each request in a twentieth of a second.
On the other hand, if you have a job that can only be done sequentially (or can't be parallelized all to well), then having 100s of computers won't help you very much...
The more expensive servers will definitely be more expensive when you buy them. On the other hand, the more expensive, faster machines might save you a lot of money in terms of lower rent for the offices (lower space requirements) or, perhaps even more important, savings on energy...
The company where I'm working switched all their work PCs to TFTs relatively early, when TFTs were still expensive. The company said this step was based on the expected cost savings in power bills, and also on savings in air conditioning for rooms with lots of CRTs...
Xeon beats Itanium on value (Score:4, Interesting)
The Itanium chip will eventually succeed, but not until the price drops and the performance steps up another gear.
Moore's Law still valid (Score:2, Interesting)
Just because Google (and, I assume, many other companies) is looking to use smaller, cheaper processors does not mean that Moore's law will not continue to hold.
Moore's Law is a statement about the number of transistors per square inch, not per CPU. Google's statement is more about the (flawed) concept of "One CPU to rule them all" than any indictment of Moore's Law or those that follow it.
Re:Misapprehensions (Score:4, Interesting)
The point is that it involved numerical simulations of a plasma - specifically, the plasma created when a solid is hit by a high intensity, short pulse laser. I was doing the work on an Alpha-based machine at uni, but having recently installed Linux on my home PC, I thought, "why not see if I can get it running on that, it might save me having to go in every day".
Well, I tried, but always got garbage results littered with NaNs. I didn't spend too much time on it, but my assumption at the time was that the numbers involved were simply too big for my poor little 32-bit CPU and OS. It looked like things quickly overflowed, causing the NaN results. (The code was all in Fortran, incidentally.)
I am now a programmer at a web agency, but I've not forgotten my Physics "roots", nor lost my interest in the subject. I'm currently toying with doing some simulation work on my current home PC, and would like to know that I'm not going to run into the same sorts of problems. Of course, I can scale things to keep the numbers within sensible bounds, but it would be easier (and offer less scope for silly mistakes) if I didn't have to.
Not only that, of course, but the scope of simulating physical situations can often be memory limited. Okay, so I can't currently afford 4 gig of RAM, but if I could, I could easily throw together a simulation that would need more than that. In the future, that limit might actually become a problem.
Yes, I know I'm not a "typical" user - but the point is that it's not only video editing that would benefit from a move to 64 bit machines.
Moore's Law (Score:2, Interesting)
"Moore's Law" has been bastardized beyond belief. Take an opportunity to read Moore's Paper [wpi.edu] (1965), which is basically Gordon Moore's prediction on the future direction of the IC industry.
Future of Supercomputing (Score:5, Interesting)
Check out who's on top of the TOP 500 [top500.org] supercomputers. US? Nope. Cluster? Nope. The top computer in the world is the Earth Simulator [jamstec.go.jp] in Japan. It's not a cluster of lower end processors. It was built from the ground up with one idea -- speed. Unsurprisingly it uses traditional vector processing techniques developed by Cray to achieve this power. And how does it compare with the next in line? It blows them away. Absolutely blows them away.
I recently read a very interesting article about this (I can't remember where; I tried googling) which basically stated that the US has lost its edge in supercomputing. The reason was twofold: (1) less government and private funding for supercomputing projects and (2) a reliance on clustering. There is communication overhead in clustering that dwarfs similar problems in traditional supercomputers. Clusters can scale, but the max speed is limited.
Before you start thinking that it doesn't matter and that the beowulf in your bedroom can compare to any Cray, recognize that there are still problems within science that would take ages to complete. These are very different problems from those facing Google, but they are nonetheless real and important.
What is an example that can't run in parallel? (Score:4, Interesting)
I tend to be a believer that massively parallel machines are the (eventual) future. E.g., just as we once bragged about how many K, and then eventually megabytes, of memory we had, or how big our hard disk was, or how many megahertz, I think that in the future shoppers will compare: "Oh, the machine at Worst Buy has 128K processors, while the machine at Circus Shitty has only 64K processors!"
More Expensive is no longer better (Score:2, Interesting)
Re:Upgrading Good... (Score:3, Interesting)
Google surely will upgrade to a more modern processor when they start to replace older hardware. However, they will be driven purely by economics; there's no need to get the top-of-the-line processor. E.g., power/speed is an issue for them.
The ordinary desktop user
For gamers, current processing power is more than enough. The limit is the GPU, not the CPU, or the memory bandwidth between them, or, in the case of online gaming, the internet connection.
Killer apps are only a question of ideas, not of processing power. And after having the idea, it takes time to market, money, and workers (coders) to make it a killer app.
The future will move toward more network-centric computing, with every piece of information available everywhere, easily. If possible, with dynamically adapting interfaces between information stores (aka UDDI, WSDL, and SOAP). A step in that direction is the ever-stronger interconnection. We are close to the point where a packet makes only 3 hops between arbitrary points on Earth. With ping times around a tenth of a second approaching 10 ms in the next few years, things like grid computing and other forms of pervasive computing will be the money makers.
I expect that chip-producing companies will have a much smaller market in about 15 years than now, but until then they will grow. The next closing window is the interconnection industry, routers and such, once all points are connected with a ping below 10 ms.
So what's open? Not making faster and faster chips.
Different versions of parallel computing: there are a lot of thinkable parallel machines that are not truly implementable, but they could be "simulated" closely enough to indeed get a speedup for certain algorithms. (Instead of the infinite number of processors some of those machines require, a few million could be enough.)
And of course tiny chips, like radio (RFID?) activated chips with very low processing power. Implantable chips, likely with some more hardware to stimulate brain areas in Parkinson's patients, to reconnect spines, etc.
Chips suitable for analysis of chemical substances, monitoring water and air, detecting poisons, etc.
Chips to track stuff, like lent books or paper files in big companies.
I'm pretty sure that the race is coming to an end, just like supersonic air flight. At a certain point you gain nothing from more speed, but there is still plenty of room for "smart" solutions/improvements.
angel'o'sphere
Re:Google's decision is economic (Score:3, Interesting)
Eventually, the dwindling number of remaining problems won't have enough funding behind them to support the frantic pace of innovation needed to support Moore's law. I think that CPU development will hit this economic barrier well before it becomes technically impossible to improve performance.
how does google's comment violate moore's law? (Score:2, Interesting)
if google uses a large number of low speed chips, they will still benefit from smaller sized chips with lower power consumption and lower price.
Comment removed (Score:3, Interesting)
But what if Moores law is too slow? (Score:4, Interesting)
Everyone seems to be acting like Moore's law is too fast, that over the next century our technology could never grow as fast as it predicts. However, consider for a moment that perhaps it's too slow: that technology can and will grow faster than its predictions, like it or not. Yes, silicon has limits, but physics-wise there is no law I know of inherent in the universe that says mathematical calculations can never be performed faster than xyz, or that the rate of growth in calculation ability can never accelerate faster than abc. These constraints are defined by human limits, not physical ones.
In fact, it could be argued that Moore's law is slowing down progress, because investors see any technology that grows faster than it predicts as too good to be true, and therefore too risky to invest in. However, from time to time, when companies have been in dire straits to outdo their competitors, "magical" things have happened that seem to have surpassed it for at least brief periods. Also, from what I understand, the rate of growth in optical technology *IS* faster than Moore's law, but people expect it to fizzle out when it reaches the abilities of silicon. I doubt it.
The last time Intel was declaring the death of Moore's law was when they were under heavy attack from predictions that they couldn't match advances in RISC technology. Funny: when they finally redesigned their CPU with RISC underpinnings, these death predictions silently faded away (at least till now). I wonder what's holding them back this time?
Re:64bit matters, for Google, too (Score:3, Interesting)
You can't even memory-map files reliably anymore, because many of them are bigger than 4G, which means that pretty much no program that deals with I/O can rely on memory mapping. Shared memory, too, needs to be shoe-horned into 32 bits. 32-bit addressing has a profoundly negative effect on software and hardware architectures. We are back to PDP-11 style computing.
Why limit ourselves to 64-bit addresses? I can foresee valid applications for 128-, 256-, and 512-bit (and larger) address schemes (consider, for example, distributed grid computing).
Sorry, I can't. 64bit addressing is driven by the fact that we can have easily more storage on a single machine than can be addressed with 32 bits. With 64bit computing, we can have a global unified address space for every single computer in the world for some time to come. There will probably be one more round of upgrades to 128bit addressing at one point, but that's it.
The distinction I'd like to make is one of diminishing returns, for typical applications, of increasing the "default word length".
First of all, for floating point software, it makes sense to go to 64bit anyway: 32bit floating point values are a complicated and dangerous compromise.
But in general, you are right: 32bit numerical quantities are good enough for a lot of applications. But, as I was saying, we tried making machines that are mostly n-bits and have provisions for addressing >n-bits, and the software becomes a mess. Going to uniform 64bit architectures is driven by the fact that some software needs 64bits, and once some software does, the cost of only partial support is too high.
AMD has a good compromise: they give you blazing 32bit performance and decent 64bit performance, with very similar instruction sets. That way, you can keep running existing 32bit software in 32bit mode.
Re:But what if Moores law is too slow? (Score:3, Interesting)
Contrary to what some people think, quantum physics and thermodynamics define the limits of what a computer can do.
For example, you can't send data faster than the speed of light, and you can't have two memory blocks closer together than the Planck length. Likewise, you cannot store more information than the total amount of information in the universe, etc.
According to physics as it is today, there are dead ends for computers where they cannot get any faster, bigger, or more powerful. We may never reach those limits, but they still exist.
Re:What is an example that can't run in parallel? (Score:3, Interesting)
An example round from MD5 (line 198):
MD5STEP (F1, a, b, c, d, in[0] + 0xd76aa478, 7);
This expands to:
( w += (d ^ (b & (c ^ d))) + in[0] + 0xd76aa478, w = w<<7 | w>>(32-7), w += b )
Notice that there are two additions in the first subexpression. The addition (in[0] + 0xd76aa478) can be computed simultaneously with (d ^ (b & (c ^ d))).
This is the only spot where anything could be parallelized. Assuming that all the primitive operations take the same amount of time, you could potentially go 25% faster by computing the addition in parallel.
But that's the furthest you can go.