Forget Moore's Law?

Roland Piquepaille writes "On a day when CNET News.com is running a story titled "Moore's Law to roll on for another decade," it's refreshing to look at another view. Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous. "An extraordinary announcement was made a couple of months ago, one that may mark a turning point in the high-tech story. It was a statement by Eric Schmidt, CEO of Google. His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." Check this column for other statements by Marc Andreessen or Gordon Moore himself. If you have time, read the long Red Herring article for other interesting thoughts."

Comments Filter:
  • clustering (Score:5, Interesting)

    by mirko ( 198274 ) on Tuesday February 11, 2003 @09:39AM (#5278857) Journal
    he said, the company intends to build its future servers with smaller, cheaper processors

    I guess it's better to use interconnected devices in an interconnected world.

    Where I work, we recently traded our Sun E10k for several E450s, between which we load-balance requests.
    It works surprisingly well.

    I guess Google's approach is then an efficient one.
  • Sincere question (Score:1, Interesting)

    by KillerHamster ( 645942 ) on Tuesday February 11, 2003 @09:48AM (#5278904) Homepage
    Could someone please explain to me why this 'Moore's Law' is so important? The idea of expecting technology to grow at a certain, predictable rate seems stupid to me. I'm not trolling, I just would really like to know why anyone cares.
  • Re:clustering (Score:5, Interesting)

    by beh ( 4759 ) on Tuesday February 11, 2003 @09:52AM (#5278931)
    The question is always what you're doing.

    Google's approach is good for Google. If Google wanted to make good use of significantly faster CPUs, they would also need significantly more RAM in their machines (a CPU that is ten times faster can't yield a tenfold speed-up if the network can't deliver the data fast enough).

    For Google it's fine if a request takes, say, half a second on a slower machine; that is a lot cheaper than a machine that is 10x as fast and handles each request in 0.05 seconds but costs 50x as much as the slower machine (a rough back-of-the-envelope sketch of this trade-off follows at the end of this comment).
    On the other hand, if you have a job that can only be done sequentially (or can't be parallelized all that well), then having hundreds of computers won't help you very much... which leaves one question: is it really worthwhile having hundreds or thousands of PC-class servers working your requests, as opposed to a handful of really fast servers?

    The more expensive, faster machines will definitely cost more when you buy them; on the other hand, they might save you a lot of money in terms of lower rent for the offices (lower space requirements) or, perhaps even more importantly, energy...

    The company where I work switched all its work PCs to TFTs relatively early, when TFTs were still expensive. The company said this step was justified by the expected savings on power bills and on air conditioning in rooms with lots of CRTs...
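
    As that back-of-the-envelope sketch (all prices and latencies below are made up purely for illustration; they are not Google's numbers), compare hardware dollars per request/second of capacity:

        /* Back-of-the-envelope comparison: many cheap servers vs. a few fast
         * ones. All figures are hypothetical, for illustration only. */
        #include <stdio.h>

        int main(void)
        {
            double cheap_price = 2000.0,   cheap_latency = 0.5;   /* $, seconds per request */
            double fast_price  = 100000.0, fast_latency  = 0.05;  /* 50x the price, 10x the speed */

            /* requests per second one machine can serve, one request at a time */
            double cheap_rps = 1.0 / cheap_latency;   /*  2 req/s */
            double fast_rps  = 1.0 / fast_latency;    /* 20 req/s */

            /* hardware cost per request/second of capacity */
            printf("cheap: $%.0f per req/s\n", cheap_price / cheap_rps);  /* $1000 */
            printf("fast:  $%.0f per req/s\n", fast_price  / fast_rps);   /* $5000 */
            return 0;
        }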

  • by Macka ( 9388 ) on Tuesday February 11, 2003 @09:57AM (#5278957)
    I was at a customer site last week, and they were looking at options for a 64-node (128-CPU) cluster. They had a 2-CPU Itanium system on loan for evaluation from HP. They liked it, but decided instead to go with Xeons rather than Itanium. The reason: Itanium systems are just too expensive at the moment. Bang for buck, Xeons are just too attractive by comparison.

    The Itanium chip will eventually succeed, but not until the price drops and the performance steps up another gear.
  • by TomHoward ( 576101 ) <tom@howardf a m i ly.id.au> on Tuesday February 11, 2003 @10:00AM (#5278983) Homepage
    he said, the company intends to build its future servers with smaller, cheaper processors

    Just because Google (and, I assume, many other companies) is looking to use smaller, cheaper processors does not mean that Moore's Law will not continue to hold.

    Moore's Law is a statement about the number of transistors per square inch, not per CPU. Google's statement is more about the (flawed) concept of "One CPU to rule them all" than any indictment of Moore's Law or those who follow it.

  • Re:Misapprehensions (Score:4, Interesting)

    by Tim C ( 15259 ) on Tuesday February 11, 2003 @10:16AM (#5279085)
    Five or so years ago I was working on a PhD in plasma physics. I never finished it, but that's beside the point.

    The point is that it involved numerical simulations of a plasma - specifically, the plasma created when a solid is hit by a high-intensity, short-pulse laser. I was doing the work on an Alpha-based machine at uni, but having recently installed Linux on my home PC, I thought, "why not see if I can get it running on that, it might save me having to go in every day".

    Well, I tried, but always got garbage results, littered with NaNs. I didn't spend too much time on it, but my assumption at the time was that the numbers involved were simply too big for my poor little 32-bit CPU and OS. It looked like things quickly overflowed, causing the NaN results. (The code was all in Fortran, incidentally.) A tiny illustration of that kind of overflow follows at the end of this comment.

    I am now a programmer at a web agency, but I've not forgotten my Physics "roots", nor lost my interest in the subject. I'm currently toying with doing some simulation work on my current home PC, and would like to know that I'm not going to run into the same sorts of problems. Of course, I can scale things to keep the numbers within sensible bounds, but it would be easier (and offer less scope for silly mistakes) if I didn't have to.

    Not only that, of course, but simulations of physical situations are often memory-limited. Okay, so I can't currently afford 4 gig of RAM, but if I could, I could easily throw together a simulation that would need more than that. In the future, that limit might actually become a problem.

    Yes, I know I'm not a "typical" user - but the point is that it's not only video editing that would benefit from a move to 64 bit machines.
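
    As that tiny illustration (a generic single- vs. double-precision demo in C; the original code was Fortran, and the parent's NaNs may well have had another cause):

        /* Illustration only: squaring 1e30 overflows single precision
         * (max ~3.4e38) and becomes inf, while a double copes fine;
         * inf - inf then yields NaN. */
        #include <stdio.h>

        int main(void)
        {
            float  f = 1.0e30f;
            double d = 1.0e30;

            f = f * f;                          /* overflows float -> inf */
            d = d * d;                          /* 1e60 fits in a double  */

            printf("float:      %g\n", f);      /* inf   */
            printf("double:     %g\n", d);      /* 1e+60 */
            printf("inf - inf = %g\n", f - f);  /* nan   */
            return 0;
        }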
  • Moore's Law (Score:2, Interesting)

    by ZeLonewolf ( 197271 ) on Tuesday February 11, 2003 @10:18AM (#5279097) Homepage

    "Moore's Law" has been bastardized beyond belief. Take an opportunity to read Moore's Paper [wpi.edu] (1965), which is basically Gordon Moore's prediction on the future direction of the IC industry.
  • by jaaron ( 551839 ) on Tuesday February 11, 2003 @10:43AM (#5279304) Homepage
    Clustering has definitely won out in the United States mostly due to the appeal of cheap processing power, but that doesn't mean that clustering is always best. Like another poster mentioned, it depends on what you're doing. For Google, clustering is probably a good solution, but for high end supercomputing, it doesn't always work.

    Check out who's on top of the TOP 500 [top500.org] supercomputers. US? Nope. Cluster? Nope. The top computer in the world is the Earth Simulator [jamstec.go.jp] in Japan. It's not a cluster of lower-end processors. It was built from the ground up with one idea -- speed. Unsurprisingly it uses traditional vector-processing techniques developed by Cray to achieve this power. And how does it compare with the next in line? It blows them away. Absolutely blows them away.

    I recently read a very interesting article about this (I can't remember where - I tried googling) which basically stated that the US has lost its edge in supercomputing. The reason was twofold: (1) less government and private funding for supercomputing projects and (2) a reliance on clustering. There is communication overhead in clustering that dwarfs similar problems in traditional supercomputers. Clusters can scale, but the maximum speed is limited (a toy model of this follows at the end of this comment).

    Before you start thinking that it doesn't matter and that the Beowulf in your bedroom can compare to any Cray, recognize that there are still problems in science that would take ages to complete. These are very different problems from those facing Google, but they are nonetheless real and important.
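
    As that toy model of communication overhead (made-up constants, and the simplifying assumption that each added node contributes a fixed amount of communication cost):

        /* Toy model: T(n) = T_serial/n + c*n.  Speedup rises, peaks,
         * then collapses as communication cost dominates.  All numbers
         * are hypothetical. */
        #include <stdio.h>

        int main(void)
        {
            const double t_serial = 1000.0;  /* single-node run time, seconds        */
            const double comm     = 0.5;     /* per-node communication cost, seconds */

            for (int n = 1; n <= 4096; n *= 4) {
                double t = t_serial / n + comm * n;
                printf("%5d nodes: %8.1f s  (speedup %6.1fx)\n", n, t, t_serial / t);
            }
            return 0;
        }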
  • by Anonym0us Cow Herd ( 231084 ) on Tuesday February 11, 2003 @10:58AM (#5279426)
    I'm not disputing that they exist. But I'm drawing a blank. Can someone please give an example of a computing task that CANNOT be subdivided into smaller tasks and run in parallel on many processing elements? The kind of task that requires an ever faster single processor.

    I tend to be a believer that massively parallel machines are the (eventual) future. e.g. just as we would brag about how many K, and then eventually megabytes our memory was, or how big our hard di_k was, or how many megahertz, I think that in the future shoppers will compare: "Oh, the machine at Worst Buy has 128K processors, while the machine at Circus Shitty has only 64K processors!"
  • by diablobynight ( 646304 ) on Tuesday February 11, 2003 @11:12AM (#5279559) Journal
    The problem is that the processors out for home use, such as the new Barton chip and the newer Pentium 4s, are amazing processors. So the overall speed increase is not even close to a power of ten; I'd be surprised if it's even close to twice as fast, especially with nForce motherboards offering dual memory channels and server-board-like performance.

    A lot of companies switched away from overrated quad Xeon systems and other such expensive servers a couple of years ago. A local company last year switched from buying overly expensive servers, with RAID drives that cost ten times as much as IDE for the hard drive space and quad Xeons that gave them only a 10-15% performance increase from that fourth $560 processor, and now they run dual Athlon systems with standard ATA-133 drives inside. To deal with the lack of caching in an IDE setup, they installed an impressively low-priced IDE SAN sold by IBM and now have ten times as much hard drive space as they used to for slightly less money. The dual Athlon systems, built by me actually (which I admit I was scared to put in a server environment, because of previous instabilities in AMD chipsets), run superbly and were inexpensive to build compared to $8000 Dell servers or $$$$$$$ Sun servers. I am not going to tell you that a dual Athlon is a better server than a Sun, no way, I know Suns are rock solid; but since I could build the Athlons so cheap, we built 6 servers to share the load: they run Perl, SQL, their web server, and their domain controller, and I dedicated one system just to Lotus Notes, a program I despise but this company loves.

    Anyhow, the point is, the price was so low they had me build 3 extra servers and prepare a server image for them. Currently these servers share the web server load and supply the company with an added 400GB of file space per server (4 120GB IBM drives per server, with 80GB per system left empty, since keeping 20GB free per hard drive is probably a good idea so that defrag processes can run properly). I don't know if this is an answer for everyone, but for what most server rooms handle (mail, file space for the company, print server, web server, CAD drawing servers), these smaller powerhouses have been doing an excellent job. I am glad to see companies that no longer believe IBM, Dell, and Intel that more expensive is better.
  • Re:Upgrading Good... (Score:3, Interesting)

    by angel'o'sphere ( 80593 ) <angelo.schneider@oomento r . de> on Tuesday February 11, 2003 @11:15AM (#5279590) Journal
    You are right, but I disagree on one point:

    Google surely will upgrade to more modern processors when they start to replace older hardware. However, they will be driven purely by economics; there's no need to get the top-of-the-line processor. Power versus speed, for example, is an issue for them.

    The ordinary desktop user... well, MS Office or KDE runs quite well on anything available now. No need for a 64-bit processor for any of us.

    For gamers, current processing power is more than enough. The limit is the GPU, not the CPU, or the memory bandwidth between them, or, in the case of online gaming, the internet connection.

    Killer apps are a question of ideas, not of processing power. And after you have the idea, it's time to market, money, and workers (coders) that make it a killer app.

    The future will move toward more network-centric computing, with all information available everywhere, easily; if possible, with dynamically adapting interfaces between information stores (aka UDDI, WSDL and SOAP). A step toward that is ever-stronger interconnection. We are close to the point where a packet makes only 3 hops between arbitrary points on earth. With ping times around a tenth of a second approaching 10ms in the next few years, things like grid computing and other forms of pervasive computing will be the money makers.

    I expect that chip-producing companies will have a much smaller market in about 15 years than they do now, but until then they will grow. The next window to close is the interconnection industry, routers and such. When all points are connected with a ping below 10ms... only wireless networking will have further market share to gain. After that, something NEW is needed!

    So what's open? Not making faster and faster chips... but probably making different kinds of supercomputers: FPGA-driven ones, data-flow machines, more extensive use of multithreading on a die.

    Different versions of parallel computing: there are a lot of conceivable parallel machines that are not truly implementable, but they could be "simulated" closely enough to indeed yield a speedup for certain algorithms. (Instead of the infinite processors some of those machines require, a few million could be enough.)

    And of course tiny chips, like radio-activated (RFID?) chips with very low processing power. Implantable chips, likely with some extra hardware to stimulate brain areas in Parkinson's patients, to reconnect spines, etc.

    Chips suitable for analyzing chemical substances, monitoring water and air, detecting poisons, etc.

    Chips to track stuff: lent books, or paper files in big companies.

    I'm pretty sure the race is coming to an end, just as it did in supersonic air flight. At a certain point you gain nothing from more speed, but you still have plenty of room to make "smart" solutions/improvements.

    angel'o'sphere
  • by Waffle Iron ( 339739 ) on Tuesday February 11, 2003 @11:16AM (#5279595)
    But as CPUs get faster, more and more problems can be parallelized at the granularity of a commodity CPU. That leaves fewer problems that demand a new, faster CPU.

    Eventually, the dwindling number of remaining problems won't have enough funding behind them to sustain the frantic pace of innovation needed to keep Moore's Law going. I think that CPU development will hit this economic barrier well before it becomes technically impossible to improve performance.

  • by k2enemy ( 555744 ) on Tuesday February 11, 2003 @12:17PM (#5280108)
    moore's law says nothing about the speed or power of a chip, only the density of transistors. if you hold the size of the chip constant, the number of transistors can increase, or if you hold the number of transistors constant, the size of the chip can decrease.

    if google uses a large number of low speed chips, they will still benefit from smaller sized chips with lower power consumption and lower price.

  • by argoff ( 142580 ) on Tuesday February 11, 2003 @03:21PM (#5281988)

    Everyone seems to be acting like Moore's Law is too fast, that over the next century our technology could never grow as fast as it predicts. However, consider for a moment that perhaps it's too slow, that technology can and will grow faster than its predictions, like it or not. Yes, silicon has limits, but physics-wise there is no law I know of inherent in the universe that says mathematical calculations can never be performed faster than xyz, or that the rate of growth in calculation ability can never accelerate faster than abc. These constraints are defined by human limits, not physical ones.

    In fact, it could be argued that Moore's Law is slowing down progress, because investors see any technology that grows faster than it predicts as too good to be true, and therefore too risky to invest in. However, from time to time, when companies have been in dire straits to outdo their competitors, "magical" things have happened that seem to have surpassed it, at least for brief periods. Also, from what I understand, the rate of growth in optical technology *IS* faster than Moore's Law, but people expect it to fizzle out when it reaches the capabilities of silicon - I doubt it.

    The last time Intel was declaring the death of Moore's Law was when they were under heavy attack from predictions that they couldn't match advances in RISC technology. Funny, when they finally redesigned their CPUs with RISC underpinnings, those death predictions silently faded away (at least till now). I wonder what's holding them back this time?
  • by g4dget ( 579145 ) on Tuesday February 11, 2003 @05:34PM (#5283449)
    I don't, at present, see applications which would significantly benefit from register values of these sizes...

    You can't even memory-map files reliably anymore, because many of them are bigger than 4G, which means that pretty much no program that deals with I/O can rely on memory mapping. Shared memory, too, needs to be shoe-horned into 32 bits. 32-bit addressing has a profoundly negative effect on software and hardware architectures. We are back to PDP-11-style computing. (A small sketch of the 4G mapping problem follows at the end of this comment.)

    Why limit ourselves to 64-bit addresses? I can foresee valid applications for 128-, 256- and 512-bit (and larger) address schemes (consider, for example, distributed grid computing).

    Sorry, I can't. 64-bit addressing is driven by the fact that we can easily have more storage on a single machine than can be addressed with 32 bits. With 64-bit computing, we can have a global unified address space for every single computer in the world for some time to come. There will probably be one more round of upgrades to 128-bit addressing at some point, but that's it.

    The distinction I'd like to make is one of diminishing returns for typical applications of increasing the "default word length".

    First of all, for floating point software, it makes sense to go to 64bit anyway: 32bit floating point values are a complicated and dangerous compromise.

    But in general, you are right: 32bit numerical quantities are good enough for a lot of applications. But, as I was saying, we tried making machines that are mostly n-bits and have provisions for addressing >n-bits, and the software becomes a mess. Going to uniform 64bit architectures is driven by the fact that some software needs 64bits, and once some software does, the cost of only partial support is too high.

    AMD has a good compromise: they give you blazing 32bit performance and decent 64bit performance, with very similar instruction sets. That way, you can keep running existing 32bit software in 32bit mode.
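
    As that small sketch of the 4G mapping problem (standard POSIX calls; an illustration of the point above, not code from any particular program): with large-file support a 32-bit program can see a file bigger than 4G through a 64-bit off_t, but the mapping length is a size_t, so the whole file simply cannot fit in the 32-bit address space.

        /* Sketch: try to map a whole file.  On a 32-bit build any file
         * larger than the 32-bit address space cannot be mapped in one
         * piece, even though fstat() can report its true size. */
        #define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on 32-bit builds */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <sys/stat.h>

        int main(int argc, char **argv)
        {
            if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            if ((unsigned long long)st.st_size > (size_t)-1) {
                /* hit on 32-bit builds for files over ~4G */
                fprintf(stderr, "file too large to map in one piece here\n");
                return 1;
            }

            void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            /* ... use the mapping ... */
            munmap(p, (size_t)st.st_size);
            close(fd);
            return 0;
        }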

  • by jbischof ( 139557 ) on Tuesday February 11, 2003 @06:45PM (#5283917) Journal
    I read an article a while back about the theoretical limits of computers.

    Contrary to what some people think, quantum physics and thermodynamics define the limits of what a computer can do.

    For example, you can't send data faster than the speed of light, and you can't have two memory blocks closer together than the Planck length. Likewise, you cannot store more information than the total amount of information in the universe, etc. etc.

    According to physics as it is today, there are dead ends for computers, where they cannot get any faster, bigger, or more powerful. We may never reach those limits, but they still exist.

  • by pclminion ( 145572 ) on Tuesday February 11, 2003 @07:28PM (#5284135)
    Only part of the algorithm can be done in "parallel." For reference, see md5.c [l2tpd.org]

    An example round from MD5 (line 198):

    MD5STEP (F1, a, b, c, d, in[0] + 0xd76aa478, 7);

    This expands to:

    ( w += (d ^ (b & (c ^ d))) + in[0] + 0xd76aa478, w = w<<7 | w>>(32-7), w += b )

    Notice that there are two additions in the first subexpression. The addition (in[0] + 0xd76aa478) can be computed simultaneously with (d ^ (b & (c ^ d))).

    This is the only spot where anything could be parallelized. Assuming that all the principal operations can be performed in the same amount of time, you could potentially go 25% faster by computing the addition in parallel.

    But that's the furthest you can go. (The step macro behind the expansion above is sketched below.)
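
    As that sketch of why the steps chain sequentially (the macro below follows the usual Colin Plumb-style md5.c that the parent links, give or take formatting; this is an illustration, not a full MD5 implementation):

        #include <stdio.h>
        #include <stdint.h>

        /* Round-1 boolean function and the step macro, Plumb-style. */
        #define F1(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
        #define MD5STEP(f, w, x, y, z, data, s) \
            ( w += f(x, y, z) + (data), w = w << (s) | w >> (32 - (s)), w += x )

        int main(void)
        {
            /* Standard MD5 chaining values and a zeroed message block,
             * just so the two steps below have real inputs. */
            uint32_t a = 0x67452301, b = 0xefcdab89, c = 0x98badcfe, d = 0x10325476;
            uint32_t in[16] = {0};

            MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7);
            /* The next step updates 'd' but reads the 'a' computed just
             * above, so it cannot begin until the previous step is done;
             * only the small pieces inside one step can overlap. */
            MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12);

            printf("a=%08x d=%08x\n", (unsigned)a, (unsigned)d);
            return 0;
        }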
