Forget Moore's Law?

Roland Piquepaille writes "On a day when CNET News.com releases a story titled "Moore's Law to roll on for another decade," it's refreshing to look at another view. Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous. "An extraordinary announcement was made a couple of months ago, one that may mark a turning point in the high-tech story. It was a statement by Eric Schmidt, CEO of Google. His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." Check this column for other statements by Marc Andreessen and Gordon Moore himself. If you have time, read the long Red Herring article for other interesting thoughts."
  • BBC Article (Score:4, Informative)

    by BinaryCodedDecimal ( 646968 ) on Tuesday February 11, 2003 @09:38AM (#5278854)
    BBC Article on the same story here [bbc.co.uk].
    • by argoff ( 142580 ) on Tuesday February 11, 2003 @03:21PM (#5281988)

      Everyone seems to be acting like Moore's law is too fast, that over the next century our technology could never grow as fast as it predicts. However, consider for a moment that perhaps it's too slow, that technology can and will grow faster than its predictions, like it or not. Yes, silicon has limits, but physics-wise there is no law I know of inherent in the universe that says mathematical calculations can never be performed faster than xyz, or that the rate of growth in calculation ability can never accelerate faster than abc. These constraints are defined by human limits, not physical ones.

      In fact, it could be argued that Moore's law is slowing down progress, because investors see any technology that grows faster than it predicts as too good to be true, and therefore too risky to invest in. However, from time to time, when companies have been in dire straits to outdo their competitors, "magical" things have happened that seem to have surpassed it for at least brief periods. Also, from what I understand, the rate of growth in optical technology *IS* faster than Moore's law, but people expect it to fizzle out when it reaches the abilities of silicon - I doubt it.

      The last time Intel was declaring the death of Moore's law was when they were under heavy attack from predictions that they couldn't match advances in RISC technology. Funny, when they finally redesigned their CPU with RISC underpinnings, these death predictions silently faded away (at least till now). I wonder what's holding them back this time?
      • I read an article a while back about the theoretical limits of computers.

        Contrary to what some people think, quantum physics and thermodynamics define the limits of what a computer can do.

        For example, you can't send data faster than the speed of light, and you can't have two memory elements closer together than one Planck length. Likewise you cannot store more information than the total amount of information in the universe, etc. etc.

        According to physics as it is today, there are dead ends for computers where they cannot get any faster, bigger, or more powerful. We may never reach those limits, but they still exist.
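
        For a rough sense of how far away those limits are, here is a back-of-the-envelope Python sketch (the 300 K temperature and 3 cm die size are just illustrative assumptions, not figures from the article I read):

        import math

        K_B = 1.380649e-23   # Boltzmann constant, J/K
        C = 2.998e8          # speed of light, m/s

        # Landauer limit: minimum energy to erase one bit at temperature T.
        def landauer_joules_per_bit(temp_kelvin):
            return K_B * temp_kelvin * math.log(2)

        # Light-speed bound: a signal crossing a die of this size can make at most
        # this many round trips per second, which caps synchronous clock rates.
        def max_cross_die_round_trips(die_size_m):
            return C / (2 * die_size_m)

        print("Landauer limit at 300 K: %.2e J/bit" % landauer_joules_per_bit(300))
        print("Round trips across a 3 cm die: %.2e per second" % max_cross_die_round_trips(0.03))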

  • clustering (Score:5, Interesting)

    by mirko ( 198274 ) on Tuesday February 11, 2003 @09:39AM (#5278857) Journal
    he said, the company intends to build its future servers with smaller, cheaper processors

    I guess it is better to use interconnected devices in an interconnected world.

    Where I work, we recently traded our Sun E10k for several E450s, between which we load-balance requests.
    It works surprisingly well.

    I guess Google's approach is then an efficient one.
    • Re:clustering (Score:5, Interesting)

      by beh ( 4759 ) on Tuesday February 11, 2003 @09:52AM (#5278931)
      The question is always what you're doing.

      Google's approach is good for Google. If Google wanted to make good use of significantly faster CPUs, they would also need significantly more RAM in their machines (a CPU faster by a factor of 10 can't yield a speed-up factor of ten if the network can't deliver the data fast enough).

      For Google it's fine: if a request can be done in, say, half a second on a slower machine, that is a lot cheaper than a machine that is 10x as fast handling each request in 0.05 seconds but costing 50x more than the slower machine. (A rough sketch of this arithmetic is at the end of this comment.)
      On the other hand, if you have a job that can only be done sequentially (or can't be parallelized all that well), then having 100s of computers won't help you very much... ...on the other hand, there is one question left: is it really worthwhile having 100s or 1000s of PC-class servers working your requests as opposed to a handful of really fast servers?

      The more expensive servers will definitely cost more when you buy them - on the other hand, the faster machines might save you a lot of money in terms of lower rent for the offices (lower space requirements) or - perhaps even more important - savings on energy...

      The company where I'm working switched all their work PCs to TFTs relatively early, when TFTs were still expensive. The company said this step was taken based on the expected cost savings on power bills and on air conditioning in rooms with lots of CRTs...
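
      A quick sketch of that throughput-per-dollar arithmetic (the 0.5 s, 10x and 50x numbers are just the hypothetical ones from above, not real prices):

      def throughput_per_dollar(seconds_per_request, relative_cost):
          return (1.0 / seconds_per_request) / relative_cost

      cheap = throughput_per_dollar(0.50, relative_cost=1)    # 2.0 requests/s per cost unit
      fast = throughput_per_dollar(0.05, relative_cost=50)    # 0.4 requests/s per cost unit

      print("cheap box: %.1f req/s per cost unit" % cheap)
      print("fast box: %.1f req/s per cost unit" % fast)
      print("cheap boxes give %.0fx the throughput per unit spent" % (cheap / fast))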

      • With all those hands [club-internet.fr]

      • "His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors."

        The parent comment is correct, but the entire issue is confused. In a few years, the Itanium will be the cheapest processor available, and Google will be using it.
      • by Anonym0us Cow Herd ( 231084 ) on Tuesday February 11, 2003 @10:58AM (#5279426)
        I'm not disputing that they exist. But I'm drawing a blank. Can someone please give an example of a computing task that CANNOT be subdivided into smaller tasks and run in parallel on many processing elements? The kind of task that requires an ever faster single processor.

        I tend to be a believer that massively parallel machines are the (eventual) future. e.g. just as we would brag about how many K, and then eventually megabytes our memory was, or how big our hard di_k was, or how many megahertz, I think that in the future shoppers will compare: "Oh, the machine at Worst Buy has 128K processors, while the machine at Circus Shitty has only 64K processors!"
        • A one-off matrix inversion. Well, parts of it can't be done efficiently in parallel.

          Though the resulting matrix would probably be applied across a lot of data, and that can be done in parallel.

          A matrix inversion can be done very fast if you have a very MPP system (say effectively 2^32 processors!) like a quantum computer.

        • Matrix inversion comes to mind -- it is very difficult to parallelize.

          I found a nice little read about how to decide if any particular problem you are looking at is easily parallelizable.

          It is in pdf (looks like a power point presentation).

          http://cs.oregonstate.edu/~pancake/presentations/sdsc.pdf
        • by crawling_chaos ( 23007 ) on Tuesday February 11, 2003 @11:50AM (#5279881) Homepage
          Any problem that requires a big working set can benefit from running on big iron. If you can't subdivide the memory usage, you'll spend a lot of time whipping memory requests out over very slow links. Cray has a bunch of data on this. The short of it is that it's all about memory latency. The X1 series is built to have extremely low latency.

          That's not to say that every complex problem needs a supercomputer. That's why Cray also sells Intel clusters. Right tool for the right job and all of that.

        • I dream of the day when motherboard manufacturers sell cheap 4-CPU boards and AMD/Intel sell low-power/low-heat processors (something akin to Transmeta). Yeah, the quad Xeon exists, but Intel wants you to pay through the nose for those (and they don't run cool). I would love to have 4 (900 MHz "Barton") Athlon MP CPUs in a box that ran cool and reliably. It may not even run as fast as one Intel P4 3.06 GHz HT for many applications, but from what I have seen of SMP machines, they run much SMOOTHER. When SMP machines are dished out a lot of work, it does not affect the responsiveness of the whole system. Instead of having one servant who is on supersteroids and is the best at everything but can really only do one thing at a time, I would rather have four servants (who even get in the way of each other at times) who can't do as much but can all be doing different things at once.
        • Can someone please give an example of a computing task that CANNOT be subdivided into smaller tasks and run in parallel on many processing elements?

          The technical issue here is known as "linear speedup". Take chess, for example: the standard search algorithm for chess play is something called minimax search with alpha-beta pruning. It turns out that the alpha-beta pruning step effectively involves information from the entire search up to this point. With only a subset of this information, exponentially more work will be needed: a bad thing.

          How do parallel chess computers such as Deep Blue work, then? Very fancy algorithms that still get sublinear but interesting speedups, at the expense of a ton of clever programming. This is a rough explanation of why today's PC chess programs are probably comparable with the now-disassembled Deep Blue: the PC chess programmers can use much simpler search algorithms and concentrate on other important programming tasks. Also, a 10x speedup in uniprocessor performance yields a 10x search speed increase, whereas using 10x as many slow processors isn't nearly so effective. Note that Deep Blue was decommissioned largely because of maintenance costs: a lot of rework would have to be done to make Deep Blue take advantage of Moore's Law.

          That said, many tasks are "trivially" parallelizable. Aside from the pragmatic problem of coding for parallel machines (harder than writing serial code even for simple algorithms), there is also the silicon issue: given a transistor budget, are manufacturers better off spending it on a bunch of little processors or one big one? This is the real question, and so far the answer is generally "one big one". YMMV. HTH. (A minimal sketch of the serial alpha-beta loop is at the end of this comment.)

          (BTW, why can't I use HTML entities for Greek alpha and beta in my Slashdot article? What are they protecting me from?)
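
          Here is a minimal sketch of the serial algorithm over a hand-built toy tree, just to show why the cutoff depends on values already seen earlier in the search (this is the generic textbook version, not Deep Blue's code):

          def alphabeta(node, depth, alpha, beta, maximizing):
              children = node.get("children")
              if depth == 0 or not children:
                  return node["value"]
              if maximizing:
                  best = float("-inf")
                  for child in children:
                      best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
                      alpha = max(alpha, best)
                      if alpha >= beta:   # cutoff: depends on siblings already searched
                          break
                  return best
              else:
                  best = float("inf")
                  for child in children:
                      best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
                      beta = min(beta, best)
                      if alpha >= beta:
                          break
                  return best

          leaf = lambda v: {"value": v}
          tree = {"children": [
              {"children": [leaf(3), leaf(5)]},
              {"children": [leaf(2), leaf(9)]},   # leaf(9) is pruned: this min node already has 2 < 3
          ]}
          print(alphabeta(tree, 2, float("-inf"), float("inf"), True))   # prints 3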

          • by Anonym0us Cow Herd ( 231084 ) on Tuesday February 11, 2003 @02:33PM (#5281429)
            the standard search algorithm for chess play is something called minimax search with alpha-beta pruning.

            This algorithm is something I'm familiar with. (Not chess, but other toy games in LISP, like Tic Tac Toe, Checkers, and Reversi, all of which I've implemented using a generic minimax-alphabeta subroutine I wrote.) (All just for fun, of course.)

            If you have a bunch of parallel nodes, you throw all of the leaf nodes at them in parallel. As soon as leaf board scores start coming in, you min or max them up the tree. You may be able to alpha-beta prune off entire subtrees. Yes, at higher levels the process is still sequential. But many boards' scores at the leaf nodes need to be computed, and they could be done in parallel. Yes, you may alpha-beta prune off a subtree that has already had some of your processors thrown at its leaf nodes -- you abort those computations and reassign those processors to the leaf nodes that come after the subtree that just got pruned off.

            Am I missing anything important here? It seems like you could still significantly benefit from massive parallel processing. If you have enough processors, the alpha-beta pruning itself might not even be necessary. After all, alpha-beta pruning is just an optimization so that sequential processing doesn't have to examine subtrees that wouldn't end up affecting the outcome. But let's say each board can have 10 possible moves made by each player, and I want to look 4 moves ahead. This is 10,000 leaf boards to score. If I have more than 10,000 processors, why even bother to alpha-beta prune? Now, if I end up needing to examine 1 million boards (more realistic perhaps) and I can do them 10,000 at a time, I still may end up being able to take advantage of some alpha-beta pruning. And 10,000 boards examined at once, sequentially, is still faster than 1 at a time.

            Vector processors wouldn't be any more helpful here (would they?) than massively parallel?

            Of course, whether a mere 10,000 processors constitutes massively parallel or not is a matter of interpretation. Some people say a 4-way SMP is massively parallel. I suppose it depends on your definition of "massively".
        • by pclminion ( 145572 ) on Tuesday February 11, 2003 @02:18PM (#5281296)
          Can someone please give an example of a computing task that CANNOT be subdivided into smaller tasks and run in parallel on many processing elements? The kind of task that requires an ever faster single processor.

          Computing the MD5 sum of 1TB of data. :-) MD5 depends on (among other things) being non-parallelizable for its security.
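
          To make the sequential dependency concrete, here is a minimal streaming-MD5 sketch (the file name is hypothetical): each update() folds a chunk into the running digest state, so chunk N can't be processed until the state after chunk N-1 exists. You can hash many separate files in parallel, but not one stream with plain MD5.

          import hashlib

          def md5_of_file(path, chunk_size=1 << 20):
              digest = hashlib.md5()
              with open(path, "rb") as f:
                  chunk = f.read(chunk_size)
                  while chunk:
                      digest.update(chunk)   # depends on all previous chunks
                      chunk = f.read(chunk_size)
              return digest.hexdigest()

          # print(md5_of_file("some_large_file.bin"))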

      • Re:clustering (Score:3, Informative)

        by Zeinfeld ( 263942 )
        Google's approach is good for google. If Google would want to make good use of significantly faster CPUs, they would also need significantly more RAM in their machines (a CPU faster by a factor of 10 can't yield a speed-up factor of ten, if the network can't deliver the data fast enough).

        I think you have the right idea, slightly misstated. The crux for Google is that their problem is actually creating a huge associative memory, many terabytes of RAM. The speed of the processors is not that important; the speed of the RAM is the bottleneck. Pipelining etc. has little or no effect on data lookups, since practically every lookup is going to be outside the cache.

        That does not support the idea that Moore's law is dead. It merely means that Google is more interested in bigger and faster RAM chips than in bigger and faster processors.

        Long ago when I built this type of machine, the key question was the cost of memory. You wanted to have fast processors because you could reduce the total system cost if you had fewer, faster processors with the same amount of RAM. Today, however, RAM cost is not a big issue, and the faster processors tend to require faster RAM, so you can make savings by having 10 CPUs running at half the speed rather than 5 really fast processors at three times the cost.

      • Re:clustering (Score:3, Insightful)

        by SpikeSpiff ( 598510 )
        The question is how important is what you're doing?

        If Google screws up 1 in 1000 requests, I wouldn't even notice. Refresh and on my way.

        Citibank trades roughly $1 Trillion in currency a day. If they had 5 9's accuracy, they would be misplacing $10,000,000 a day. In that environment, commodity machines are unacceptable.

        And it scales down: paychecks? billing records? The video check-out at Blockbuster?

    • Re:clustering (Score:5, Informative)

      by e8johan ( 605347 ) on Tuesday February 11, 2003 @09:53AM (#5278935) Homepage Journal
      Google supports thousands of user request sessions, not one huge straight-line serial command sequence. This means that a big bunch of smaller servers will do the job quicker than a big super-server -- not only because of the raw computing power, but due to the parallelism that is extracted by doing so and the overhead that is avoided by not running too many tasks on one server.
    • NoW (Score:4, Informative)

      by Root Down ( 208740 ) on Tuesday February 11, 2003 @10:19AM (#5279104) Homepage
      The NoW (Network of Workstations) approach has been an ongoing trend over the last few years, as the throughput achieved by N distinct processors connected by a high-speed network is nearly as good as (and sometimes better than) that of an N-processor mainframe. All this comes at a cost that is much less than that of a mainframe. In Google's case, it is the volume that is the problem, and not necessarily the complexity of the tasks presented. Thus, Google (and many other companies) can string together a whole bunch of individual servers (each with its own memory and disk space, so there is no memory contention -- another advantage over the mainframe approach) quite (relatively) cheaply and get the job done by load balancing across the available servers. Replacement and upgrades -- yes, eventually to the 64-bit chips -- can be done iteratively so as to not impact service, etc. Lots of advantages...

      Here is a link to a seminal paper on the issue if you are interested:
      http://citeseer.nj.nec.com/anderson94case.html
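
      A toy sketch of the load-balancing idea (hostnames are made up): requests are spread round-robin across N independent boxes, so you add capacity by adding boxes rather than buying a bigger one.

      import itertools

      class RoundRobinBalancer:
          def __init__(self, servers):
              self._cycle = itertools.cycle(servers)

          def pick(self):
              return next(self._cycle)

      balancer = RoundRobinBalancer(["node01", "node02", "node03", "node04"])
      for request_id in range(6):
          print("request %d -> %s" % (request_id, balancer.pick()))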

    • by jaaron ( 551839 ) on Tuesday February 11, 2003 @10:43AM (#5279304) Homepage
      Clustering has definitely won out in the United States mostly due to the appeal of cheap processing power, but that doesn't mean that clustering is always best. Like another poster mentioned, it depends on what you're doing. For Google, clustering is probably a good solution, but for high end supercomputing, it doesn't always work.

      Check out who's on top of the TOP 500 [top500.org] supercomputers. US? Nope. Cluster? Nope. The top computer in the world is the Earth Simulator [jamstec.go.jp] in Japan. It's not a cluster of lower end processors. It was built from the ground up with one idea -- speed. Unsurprisingly it uses traditional vector processing techniques developed by Cray to achieve this power. And how does it compare with the next in line? It blows them away. Absolutely blows them away.

      I recently read a very interesting article about this (I can't remember where -- I tried googling) which basically stated that the US has lost its edge in supercomputing. The reason was twofold: (1) less government and private funding for supercomputing projects and (2) a reliance on clustering. There is communication overhead in clustering that dwarfs similar problems in traditional supercomputers. Clusters can scale, but the max speed is limited.

      Before you start thinking that it doesn't matter and that the beowulf in your bedroom can compare to any Cray, recognize that there are still problems within science that would take ages to complete. These are very different problems from those facing Google, but they are nonetheless real and important.
      • it is a cluster.... [jamstec.go.jp]

        640 processor nodes, each consisting of eight vector processors, are connected by a high-speed interconnection network.

        That makes it a cluster (640 processor nodes) of clusters (8 vector processors)
      • by Troy Baer ( 1395 ) on Tuesday February 11, 2003 @11:47AM (#5279854) Homepage
        Check out who's on top of the TOP 500 supercomputers. US? Nope. Cluster? Nope. The top computer in the world is the Earth Simulator in Japan. It's not a cluster of lower end processors. It was built from the ground up with one idea -- speed. Unsurprisingly it uses traditional vector processing techniques developed by Cray to achieve this power. And how does it compare with the next in line? It blows them away. Absolutely blows them away.

        It's worth noting that the Earth Simulator is actually a cluster of vector mainframes (NEC SX-6s) using a custom interconnect. You could do something similar with the Cray X-1 if you had US$400M or so to spend.

        I recently read a very interesting article about this (I can't remember where - I tried googling) which basically stated that the US has lost it's edge in supercomputing. The reason was two fold: (1) less government and private funding for supercomputing projects and (2) a reliance on clustering.

        If you're referring to the article I think you are, it was specifically talking in the context of weather simulation -- an application area where vector systems are known to excel (hence why the Earth Simulator does so well at it). The problem is that vector systems aren't always as cost-effective as clusters for a highly heterogeneous workload. With vector systems, a good deal of the cost is in the memory subsystem (often capable of several tens of GB/s in memory bandwidth), but not every application needs heavy-duty memory bandwidth. Where I work, we've got benchmarks that show a cluster of Itanium-2 systems wiping the floor with a vector machine for some applications (specifically structural dynamics and some types of quantum chemistry calculations), and others where a bunch of cheap AMDs beat everything in sight (on some bioinformatics stuff). It all depends on what your workload is.

        --Troy
  • Upgrading Good... (Score:4, Insightful)

    by LordYUK ( 552359 ) <jeffwright821@noSPAm.gmail.com> on Tuesday February 11, 2003 @09:39AM (#5278858)
    ... But maybe Google is more attuned to the mindset of "if it aint broke dont fix it?"

    Of course, in true /. fashion, I didnt read the article...
    • ... But maybe Google is more attuned to the mindset of "if it aint broke dont fix it?"

      Exactly. And if they run out of capacity, they just add more cheap nodes, rather than buy a crazily expensive supercomputer like eBay has.
    • by ergo98 ( 9391 ) on Tuesday February 11, 2003 @09:48AM (#5278899) Homepage Journal
      Google is of the philosophy of using large clusters of basically desktop computers rather than mega servers, and we've seen this trend for years; it hardly spells the end of Moore's Law (Google is taking just as much advantage of Moore's Law as anyone: they're just buying at a sweet point. While the CEO might forebodingly proclaim their separation from those new CPUs, in reality I'd bet it's highly likely that they'll be running 64-bit processors once the pricing hits the sweet spot).

      This is all so obtuse anyways. These articles proclaim that Moore's Law is some crazy obsession, when in reality Moore's Law is more of a marketing law than a technical law: If you don't appreciably increase computing power year over year, no new killer apps will appear (because the market isn't there) encouraging owners of older computers to upgrade.
      • by jacquesm ( 154384 )
        With all respect for Moore's law (and even if it is called a law, it's no such thing, since it approaches infinity really rapidly and that's a physical impossibility): killer apps and hardware have very little to do with each other. While hardware can enable programmers to make 'better' software, the basic philosophy does not change a lot, with the exception of gaming.


        Computers are productivity tools, and a 'google'-like application would have been perfectly possible 15 years ago; the programmers would just have had to work a little bit harder to achieve the same results. Nowadays you can afford to be reasonably lazy. It's only an economics thing, where the cost of development and the cost of hardware balance at an optimum.


        In that light, if Google had been developed 15 years ago it would use 286's, and if it were developed 15 years from now it would use whatever is in vogue and at the economically right price point for that time.

        • by ergo98 ( 9391 )
          Killer apps and hardware have very little to do with each other. While hardware can enable programmers to make 'better' software, the basic philosophy does not change a lot, with the exception of gaming.

          Killer apps and hardware have everything to do with each other. Could I browse the Internet on an Atari ST? Perhaps I could do lynx like browsing (and did), however the Atari ST didn't even have the processor capacity to decompress a jpeg in less than a minute (I recall running a command line utility to decompress those sample JPEGs hosted on the local BBS to ooh and ahh over the graphical prowess). Now we play MP3s and multitask without even thinking about it, and we wouldn't accept anything less. As I mentioned in another post I believe the next big killer app that will drive the hardware (and networking) industry is digital video: When every grandma wants to watch 60 minute videos of their grandchild over the "Interweeb" suddenly there will be a massive demand for the bandwidth and the computation power (I've yet to see a computer that can compress DV video in real-time).
      • I think you're exactly right, and I find it incomprehensible that the author of an article on Moore's law does not even know how it goes. It has always been an index of performance per unit of cost, and of how this ratio changes with time. The author seems to think it's all about how chips get faster and faster, and that's an oversimplification we don't even need to make for a schoolchild.

        Google are taking advantage of cheap, high-performing chips, exactly the things predicted by Gordon Moore.

      • You are right, but I disagree on one point:

        Google surely will upgrade to a more modern processor when they start to replace older hardware. However, they will be driven purely by economics. There's no need to get the top-of-the-line processor. E.g. power/speed is an issue for them.

        The ordinary desktop user... well, MS Office or KDE runs quite well on anything available now. No need for a 64-bit processor for any of us.

        For gamers, the current processing power is more than enough. The limit is the GPU, not the CPU, or the memory bandwidth between them, or in the case of online gaming, the internet connection.

        Killer apps are only a question of ideas, not of processing power. And after having the idea, it's time to market, money, and workers (coders) that make it a killer app.

        The future will move into more network-centric computing, having every piece of information available everywhere easily -- if possible, with dynamically adapting interfaces between information stores (aka UDDI, WSDL and SOAP). A step toward that is ever stronger interconnection. We are close to the point where we can have internet connections where a packet makes only 3 hops between arbitrary points on earth. With ping times around a tenth of a second, approaching 10ms in the next years, stuff like grid computing and any other form of pervasive computing will be the money maker.

        I expect that chip-producing companies will have a much smaller market in about 15 years than now, but until then they will grow. The next closing window after that is the interconnection industry, routers and such. When all points are connected with a ping below 10ms, only wireless networking will have a further market share. After that something NEW is needed!

        So what's open? Not making faster and faster chips... but probably making different kinds of supercomputers: FPGA-driven, or data flow machines, or more extensive usage of multithreading on a die.

        Different versions of parallel computing: there are a lot of conceivable parallel machines that are not truly implementable. But they could be "simulated" closely enough to indeed get a speedup for certain algorithms. (Instead of having infinite processors, which some of those machines require, some millions could be enough.)

        And of course tiny chips like radio-activated (RFID?) chips with very low processing power. Implantable chips, likely with some more hardware to stimulate brain areas for Parkinson's patients, to reconnect spines, etc.

        Chips suitable for analysis of chemical substances, water and air control, picking up poisons etc.

        Chips to track stuff, lend books or paper files in big companies.

        I'm pretty sure that the race is coming to an end, just like in supersonic air flight. At a certain point you gain nothing from more speed, but you still have plenty of room to make "smart" solutions/improvements.

        angel'o'sphere
  • Misapprehensions (Score:5, Insightful)

    by shilly ( 142940 ) on Tuesday February 11, 2003 @09:41AM (#5278864)
    For sure, Google might not need the latest processors...but other people might. Mainframes don't have fantastic computing power either -- 'cos they don't need it. But for those of us who are busy doing things like digital video, the idea that we have reached some sort of computing nirvana where we have more power than we need is laughable. Just because your favourite word processor is responsive doesn't mean you're happy with the performance of all your other apps.
    • by e8johan ( 605347 ) on Tuesday February 11, 2003 @09:50AM (#5278917) Homepage Journal

      This is where FPGAs and other reconfigurable hardware will enter. There are already transparent solutions, converting C code to both machine code and hardware (i.e. a bitstream to download into an FPGA on a PCI card).

      When discussing video and audio editing, you must realize that the cause of the huge performance need is not the complexity of the task, but the lack of parallel work in a modern PC. As a matter of fact, smaller computing units, perhaps thousands of CUs inside a CPU chip, would give you better performance when editing videos (if the code was adapted to take advantage of it) than a super chip from Intel. (A toy sketch of the idea is at the end of this comment.)

      If you want to test parallelism, bring together a set of Linux boxes and run Mosix. It works great!
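
      As a toy illustration of that point (the per-frame filter here is a stand-in, not a real codec step): video frames are independent, so the same work can be mapped across a pool of workers instead of waiting on one big CPU.

      from multiprocessing import Pool

      def process_frame(frame):
          # stand-in per-frame work: bump each pixel value, pretend it's a filter
          return [min(255, pixel + 1) for pixel in frame]

      if __name__ == "__main__":
          frames = [[i % 256] * 1000 for i in range(64)]   # 64 fake, tiny frames
          with Pool(processes=4) as pool:
              processed = pool.map(process_frame, frames)  # frames handled in parallel
          print("processed %d frames" % len(processed))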

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • I think you're putting the cart before the horse: the reason why only a relatively small fraction of the end-user population is fiddling with digital video is that the hardware (and software) to do so affordably has only recently become available. People like mucking around with pictures just as much as they like mucking around with words. If it's cheap and easy to do, they will. It's part of how Apple makes its money.
        • Re:But why? (Score:3, Insightful)

          by micromoog ( 206608 )
          Image editing has been around for many years now, and there's still a much smaller percentage of people doing that than basic email/word processing. Video editing will always be a smaller percentage still.

          Believe it or not, there's a large number of people that don't see their computer as a toy, and really only want it to do things they need (like write letters). Just because the power's there doesn't mean a ton of people will suddenly become independent filmmakers (no matter what Apple's ads tell you).

    • by LinuxXPHybrid ( 648686 ) on Tuesday February 11, 2003 @09:58AM (#5278968) Journal
      > For sure, Google might not need the latest processors...but other people might.

      I agree. Also, the article's conclusion that big companies have no future because Google has no intention of investing in new technology is premature. Google is a great company, a great technology company, but it is just one of many. Google probably does not represent the very edge of cutting-edge technology, either. Stuff like molecular dynamics simulation requires more computing power; I'm sure that people who work in such areas can't wait to hear Intel, AMD and Sun announce faster processors, 64-bit, and more scalability.
    • Re:Misapprehensions (Score:3, Informative)

      by SacredNaCl ( 545593 )
      "Mainframes don't have fantastic computing power either -- 'cos they don't need it." Yeah, but they usually have fantastic I/O -- where they do need it. There are still a ton of improvements in this area that could be made.

  • by dynayellow ( 106690 ) on Tuesday February 11, 2003 @09:42AM (#5278868)
    Is millions of geeks going catatonic over the thought of not being able to overclock the next, fastest chip.
    • <offtopic>
      It always seemed to me like the money you saved buying a reasonable cooling solution rather than a peltier would be better used buying a faster processor that you won't need to overclock.
      </offtopic>
  • by betanerd ( 547811 ) <segatech&email,com> on Tuesday February 11, 2003 @09:43AM (#5278873) Homepage
    Why is it called Moore's Law and not Moore's Theorem? Doesn't "Law" imply that it could be applied to all situations in all times and still be true? Or am I reading way too much into this?
  • by g4dget ( 579145 ) on Tuesday February 11, 2003 @09:46AM (#5278884)
    Assume, for a moment, that we had processors with 16bit address spaces. Would it be cost-effective to replace our desktop workstations with tens of thousands of such processors, each with 64k of memory? I don't think so.

    Well, it's not much different with 32bit address spaces. It's easy in tasks like speech recognition or video processing to use more than 4Gbytes of memory in a single process. Trying to squeeze that into a 32bit address space is a major hassle. And it's also soon going to be more expensive than getting a 64bit processor.

    The Itanium and Opteron are way overpriced in my opinion. But 64bit is going to arrive--it has to.
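
    The arithmetic behind the 4-Gbyte ceiling is just pointer width; a quick sketch:

    for bits in (16, 32, 64):
        print("%2d-bit addresses: %s bytes addressable" % (bits, "{:,}".format(2 ** bits)))
    # 16-bit:                     65,536
    # 32-bit:              4,294,967,296  (the 4 GB wall a single process hits)
    # 64-bit: 18,446,744,073,709,551,616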

      64-bit has been here for a while; it's called the Alpha processor, and they work very nicely.

      Why stay stuck in the Intel world? There's more to computers than what you buy from Dell.

    • by drix ( 4602 ) on Tuesday February 11, 2003 @10:04AM (#5279008) Homepage
      Right, thank you, glad someone else got that. No one is saying that Google has abandoned Itanium and 64-bit-ness for good. Read that question in the context of the article and what Schmidt is really being asked is how the arrival of Itanium will affect Google. And of course the answer is that it won't, since as we all know Google has chosen the route of 10000 (or whatever) cheap Linux-based Pentium boxes in place of, well, an E10000 (or ten). But that sure doesn't mean Google is swearing off 64-bit for good--just that it has no intention of buying the "superchip." But bet your ass that when Itanium becomes more readily available and cheap, a la the P4 today, when Itanium has turned from "superchip" to "standardchip," Google will be buying them just as voraciously as everyone else. So the doomsday prognostications that Malone flings about don't seem that foreboding to me--Itanium will sell well, just not as long as it's considered a high-end niche item. But that never lasts long anyways. One-year-ago's high-end niche processor comes standard on every PC at CompUSA today.
    • The interesting question is not when I can have 64-bit registers, but rather when I can have a larger address bus and VM address space. In my view the benefits of 64-bit computing (in a way analogous to 32-bit computing) are not clearly proven. I propose, though I don't offer empirical evidence here, that the vast majority of modern software has a property I will refer to loosely as locality - i.e. the idea that typically register values are small and that the bottlenecks in executing a properly optimised program will predominantly use a relatively small portion of the address space. If this is the case, I see no valid reason to want to manipulate 64-bit quantities atomically within the processor - wouldn't simply extending the 32-bit MMU architecture (with appropriate compiler optimisations) prove more cost-effective for the foreseeable future?

    • 64 bit will arrive, but the point of the article is that it may not arrive as fast as Moore said it would.

      Right now, we are at the point where it's just a waste to build bigger and bigger hammers when you can get 100 smaller hammers to do more than a few bigger hammers, and do it more quickly, cheaply and efficiently.

      Parallel computing is really coming of age now for consumers and small businesses. While in the past only a big megacorp or the government could afford a Cray-class machine, now you can build equivalent power (maybe not up to today's supercomputers, but certainly equivalent to ones 10 years ago, which is still pretty significant) in your basement with a few PowerMacs/PCs, some network cable and open source software for clustering.

      So it makes more sense for Google to invest in a load of current technology and use it in the most efficient way possible than to spend money on expensive and untested (in the "real world") hardware.

      After all, just take a look at what Apple's done with the Xserve. Affordable, small, efficient clustering capability for business. Two CPUs per machine and you can Beowulf them easily. Add in the new Xserve RAID and you have yourself a powerful cluster that probably (even at Apple's prices) will cost a lot less than a bunch of spanking new Itanium machines.

      64 bit will arrive (Probably when Apple introduces it ;), but it will just take a bit longer since we can get a lot out of what we already have.
  • Damn it! (Score:3, Funny)

    by FungiSpunk ( 628460 ) on Tuesday February 11, 2003 @09:46AM (#5278888)
    I want my quad 64GHz processor! I want it in 2 years time and I want quad-128Ghz ready by the following year!!!
  • well now... (Score:5, Funny)

    by stinky wizzleteats ( 552063 ) on Tuesday February 11, 2003 @09:47AM (#5278892) Homepage Journal

    This makes me feel a lot less like a cantankerous, cheap old fart for not replacing my Athlon 650.

  • It's a prediction that has held pretty true. It's a good benchmark but is not a true Law.

    And every 6 months it's either a) dead, b) going to continue forever, or c) dead real soon. Most often it's all three every week.

    • It's a prediction that has held pretty true. It's a good benchmark but is not a true Law.

      The majority of laws are empirical in nature. Even Newton's laws of motion don't come from the theory; rather, they are axioms that underlie it.
  • I've run into similar situations with clients of mine when trying to figure out for them what the best solution for their new servers/etc. would be.

    Time and time again, it always comes down to:

    Buy them small and cheap, put them all together, and that way if one dies, it's a hell of a lot easier and less expensive to replace/repair/forget.

    So Google's got the right idea, they're just confirming it for the rest of us! :)
  • Danger (Score:2, Funny)

    by Anonymous Coward
    "Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous."

    If only all dangerous things would go away as soon as we choose to forget them...
  • by The Night Watchman ( 170430 ) <smarotta@[ ]il.com ['gma' in gap]> on Tuesday February 11, 2003 @09:48AM (#5278903)
    I'm waiting for DNA Computers [udel.edu]! Shove a hamburger into where the floppy drive used to be, run gMetabolize for Linux (GNUtrients?), in a few hours my machine isn't obsolete anymore.

    Either that, or it mutates into an evil Steve Wozniak and strangles me in my sleep.

    /* Steve */
  • by Anonymous Coward on Tuesday February 11, 2003 @09:52AM (#5278930)
    I mean the guy was involved in Netscape.

    He hit the lottery. He was a lucky stiff. I wish I was that lucky.

    But that's all it was. And I don't begrudge him for it. But I don't take his advice.

    As for google. Figure it out yourself.

    Google isn't driving the tech market. What's driving it are new applications like video processing that, guess what... need much faster processors than we've got now.

    So while Google might not need faster processors, new applications do.

    And I say that loving Google, but it's not cutting edge in terms of hardware. They have some good search algorithms.
    • by shimmin ( 469139 ) on Tuesday February 11, 2003 @12:05PM (#5279981) Journal
      I lived in the apartment building he lived in during college, albeit after he left. When I was leaving the building, I asked the landlord what their guidelines were on how clean "clean" was for purposes of getting a damage deposit back. She told me her two largest damage deposit deduction stories.

      In the largest, a bunch of guys, the day before reporting to duty for boot camp, held a very wild party. It involved using a sofa as a battering ram. There was a stove-sized hole in one wall. There was a refrigerator-sized one in the other.

      Andreessen was the second largest. No major damage, but he just left EVERYTHING. Clothes, furniture, papers, food, everything. They had to clean out a man's entire life. She guessed he left town with a backpack, a change of clothes, and his portable.

      When he started Netscape, he saw the niche, left town, and dumped everything on it NOW. Maybe that's luck, but maybe it's being insightful enough to know what risks are worth leaving everything for. I'd give someone who showed that kind of insight a fair shake, if they had something else to say.

    • While you may validly question his business acumen, he has worked with RMS, JWZ, and knows everybody. He is a reasonable coder and a team player; we need more of him.

    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
  • If you have time, read the long Red Herring article...


    Of course we have time. Ain't we reading slashdot?

  • by praetorian_x ( 610780 ) on Tuesday February 11, 2003 @09:56AM (#5278955)

    "The rules of this business are changing fast," Mr. Andreessen says, vehemently poking at his tuna salad. "When we come out of this downturn, high tech is going to look entirely different."
    *gag* Off topic, but has *anyone* become as much of a caricature of themselves as Andreessen?

    This business is changing fast? Look entirely different? Thanks for the tip Marc.

    Cheers,
    prat
  • by Macka ( 9388 ) on Tuesday February 11, 2003 @09:57AM (#5278957)
    I was at a customer site last week, and they were looking at options for a 64-node (128-CPU) cluster. They had a 2-CPU Itanium system on loan for evaluation from HP. They liked it, but decided instead to go with Xeons rather than Itanium. The reason? Itanium systems are just too expensive at the moment. Bang for buck, Xeons are just too attractive by comparison.

    The Itanium chip will eventually succeed, but not until the price drops and the performance steps up another gear.
  • by rillian ( 12328 ) on Tuesday February 11, 2003 @09:57AM (#5278958) Homepage

    Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors.

    How is this not Moore's law? Maybe not in the strict sense of the number of transistors per CPU, but it's exactly the increase in high-end chips that makes mid-range chips "smaller, cheaper" and still able to keep up with requirements.

    That's the essence of Moore's law. Pretending it isn't is just headline-writing manipulation, and it's stupid.

    • I think you've hit it on the head here. Google still wants Moore's law to continue. The plus side to it would be that they can get the same amount of performance per processor they have now (which is sufficient for them) for *much* less money.

      Think about the "price shadow" of products - when a new product comes out, the older/slower/less sophisticated product becomes cheaper. If this happens *really quickly*, then the prices are likely to go down a lot, and very soon. If you've already got what you want, it's a great place to be in.

      This doesn't happen much with industries where there aren't many advances (think electric range). A two year old stove is pretty close in price to a brand new one. Whereas, a two year old processor (and 50 cents) will get you a cup of coffee.

    • by datadictator ( 122615 ) <ajventer&direqlearn,org> on Tuesday February 11, 2003 @10:10AM (#5279046) Homepage Journal
      And that day the spirits of Turing and Von Neumann spoke unto Moore of Intel, granting him insight and wisdom to understand the future. And Moore was with chip and he brought forth the chip and named it 4004. And Moore did bless the chip saying: "Thou art a breakthrough, with my own corporation have I fabricated thee. Thou art yet as small as a dust mote, yet shall thou grow and replicate unto the size of a mountain and conquer all before thee. This blessing I give unto thee: Every eighteen months shall thou double in capacity, until the end of the age." This is Moore's law, which endures to this day.

      Do not mess with our religion :-)

      Until the end of the epoch, Amen.

      PS. With thanks to a source which I hope is obvious.
  • by Hays ( 409837 ) on Tuesday February 11, 2003 @09:58AM (#5278965)
    They're not saying they don't want faster processors with larger address spaces -- who wouldn't? They're simply saying that the price/performance ratio is likely to be poor, and they have engineered a good solution using cheaper hardware.

    Naturally there are many more problems which can not be parallelized and are not so easily engineered away. Google's statement is no great turning point in computing. Faster processors will continue to be in demand as they tend to offer better price/performance ratios, eventually, even for server farm situations.

    • But as CPUs get faster, more and more problems can be parallelized at the granularity of a commodity CPU. That leaves fewer problems left that demand a new, faster CPU.

      Eventually, the dwindling number of remaining problems won't have enough funding behind them to support the frantic pace of innovation needed to support Moore's law. I think that CPU development will hit this economic barrier well before it becomes technically impossible to improve performance.

  • Mushy writing (Score:5, Insightful)

    by icantblvitsnotbutter ( 472010 ) on Tuesday February 11, 2003 @10:00AM (#5278982)
    I don't know, but am I the only one who found Malone's writing to be mushy? He wanders around, talking about how Moore's Law applies to the burst Web bubble, that Intel isn't surviving because of an inability to follow its founder's law, and yet that we shouldn't be enslaved by this "law".

    In fact, the whole article is based around Moore's Law still applying, despite being "unhealthy". Well, duh. I think he had a point to make somewhere, but lost it on the way to the deadline. Personally, I would have appreciated more concrete reasons why Google's bucking the trend is so interesting (to him).

    He did bring up one very interesting point, but didn't explore it enough for my taste. Where is reliability in the equation? What happens if you keep all three factors the same and use the cost savings in the technology to address failure points?

    Google ran into bum hard drives, and yet the solution was simply to change brands? The people who are trying to address that very need would seem to be a perfect fit for a story about why Moore's Law isn't the end-all be-all answer.
  • he said, the company intends to build its future servers with smaller, cheaper processors

    Just because Google (and I assume many other companies) are looking to use smaller, cheaper processors, it does not mean that Moore's law will not continue to hold.

    Moore's Law is a statement about the number of transistors per square inch, not per CPU. Google's statement is more about the (flawed) concept of "One CPU to rule them all", rather than any indictment of Moore's Law or those who follow it.

  • We should not simply and blindly follow Moore's law as a guide to producing CPUs. We are capable of crushing Moore's law; however, CPU companies are not interested in creating fast computers, they are interested in making a profit. This translates to small increments in CPU speed which they can charge large increments of price for.

    Other possibilities such as quantum computing are left to a number of small university groups to study and conduct research in -- small compared to the revenue of the chip companies.
  • by Jack William Bell ( 84469 ) on Tuesday February 11, 2003 @10:02AM (#5278999) Homepage Journal
    The problem is that cheaper processors don't make much money -- there isn't the markup on commodity parts that there is on the high end. The big chip companies are used to charging through the nose for their latest and greatest and they use much of that money to pay for the R & D, but the rest is profit.

    However profit on the low end stuff is very slight because you are competing with chip fabs that don't spend time and money on R & D; buying the rights to older technology instead. (We are talking commodity margins now, not what the market will bear.) So if the market for the latest and greatest collapses the entire landscape changes.

    Should that occur my prediction is that R & D will change from designing faster chips to getting better yields from the fabs. Because, at commodity margins, it will be all about lowering production costs.

    However I think it is still more likely that, Google aside, there will remain a market for the high end large enough to continue to support Intel and AMD as they duke it out on technological edge. At least for a while.
  • by binaryDigit ( 557647 ) on Tuesday February 11, 2003 @10:05AM (#5279017)
    For their application, having clusters of "smaller" machines makes sense. Let's compare this to eBay.

    The data Google deals with is not real-time. They churn on some data and produce indices. A request comes in to a server; that server could potentially have its own copy of the indices and can access a farm of servers that hold the actual data. The fact that the data and indices live on farms is no big deal, as there is no synchronization requirement between them. If server A serves up some info but is 15 minutes behind server Z, that's OK. This is a textbook application for distributed non-stateful server farms.

    Now eBay: ALL their servers (well, the non-listing ones) HAVE to be going after a single or synchronized data source. Everybody MUST have the same view of an auction, and all requests coming in have to be matched up. The "easiest" way to do this is by going against a single data repository (well, single in the sense that the data for any given auction must reside in one place; different auctions can live on different servers, of course). All this information needs to be kept up on a real-time basis. So eBay also has the issue of transactionally updating data in real time. Thus their computing needs are significantly different from those of Google.
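
    A toy contrast of the two designs (class and field names are invented): a search replica can serve a slightly stale copy of the index, while an auction store has to funnel bids through one synchronized point so every reader sees the same committed high bid.

    import threading

    class SearchReplica:
        """Periodically copied index; readers tolerate staleness."""
        def __init__(self):
            self.index = {}                  # last snapshot pushed to this replica
        def refresh(self, master_index):
            self.index = dict(master_index)  # may lag the master by minutes
        def lookup(self, term):
            return self.index.get(term, [])

    class AuctionStore:
        """Single synchronized source of truth for bids."""
        def __init__(self):
            self._lock = threading.Lock()
            self._high_bid = 0
        def place_bid(self, amount):
            with self._lock:                 # all writers serialize here
                if amount > self._high_bid:
                    self._high_bid = amount
                    return True
                return False
        def high_bid(self):
            with self._lock:
                return self._high_bid

    store = AuctionStore()
    store.place_bid(10)
    store.place_bid(7)                       # rejected: everyone must see the committed 10
    print(store.high_bid())                  # 10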
    • That's not entirely right. EBay isn't really any more synchronised than Google.

      You might have noticed when posting an auction that you can't search for it until quite a bit after posting. That's because the EBay servers don't synchronise and reindex as frequently as one might think. Their pages are kept as static as possible to reduce the load on their servers.

  • Eh? (Score:5, Funny)

    by Mr_Silver ( 213637 ) on Tuesday February 11, 2003 @10:08AM (#5279040)
    Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous.

    How can Moore's law become dangerous?

    If you break it, will you explode into billions of particles?

    • Re:Eh? (Score:3, Funny)

      by sql*kitten ( 1359 )
      If you break it, will you explode into billions of particles?

      The danger is that soon enough an Intel processor will get hot enough to trigger a fusion reaction in atmospheric hydrogen, turning Earth into a small star. We must abandon this dangerous obsession with Moore's law before it's too late!
  • Is it possible that chip manufacturers feel they have to deliver new products in accordance with ML but not exceed it? Apparently Intel have had 8GHz P4s running (cooled by liquid nitrogen, but you had to do this to get fairly modest overclocks not so long ago).

    I fully expect this to get modded down, but I still think chip manufacturers are deliberately drip-feeding us incremental speeds to maximise profits. There's not much evidence of a paradigm shift on the horizon; Hammer is an important step but it's still a similar manufacturing process. As a (probably flawed) analogy, if processing power became as important to national security as aircraft manufacture in WWII, look how fast progress could be made!

  • Whether you use a super chip or several low-cost chips, the computing power at your disposal still grows exponentially, I guess. So no refutation of Moore's law.
  • render farms (Score:3, Informative)

    by AssFace ( 118098 ) <`moc.liamg' `ta' `77znets'> on Tuesday February 11, 2003 @10:12AM (#5279059) Homepage Journal
    Google doesn't really do much in terms of actual hardcore processing - it just takes in a LOT of requests - but each one isn't intense, and it is short-lived.

    On the other hand, say you are running a render farm - in that case you want a fast distributed network, the same way Google does, but you also want each individual node as fast as freakin' possible.
    They have been using Alphas for a long time for that exact reason - so now, with the advent of the Intel/AMD 64s, prices will come down on all of it - so I would imagine the render farms are quite happy about that. That means that they can either stay at the speed at which they do things now, but for cheaper - or they can spend what they do now and get much more done in the same time... either way leading to faster production and arguably more profit.

    The clusters that I am most familiar with are somewhere in between - they don't need the newest fastest thing, but they certainly wouldn't be hurt by a faster processor.
    For the stuff I do though, it doesn't matter too much - if I have 20 hours or so to process something, and I have the choice of doing it in 4 minutes or 1 minute, I will take whichever is cheaper since the end result might as well be the same otherwise in my eyes.
  • what moore said.. (Score:5, Insightful)

    by qoncept ( 599709 ) on Tuesday February 11, 2003 @10:12AM (#5279063) Homepage
    I think people are missing the point of Moore's law. When he said he thought transistors would double every 2 years, that's what he thought would happen. That's not a rule set that anyone has to follow (which, as far as I can figure, is the only way it could be "dangerous," because people might be trying to increase the number of transistors to meet it rather than do whatever else might be a better idea..????). It's not something he thought would always be the rule, forever, no matter what. The fact that he's been right for 35 years already means he was more right than he could have imagined.
  • by Lumpy ( 12016 ) on Tuesday February 11, 2003 @10:14AM (#5279077) Homepage
    Software over the past 20 years has gotten bigger, not better. We don't do anything different from what I was able to do in 1993. And it doesn't affect just Windows and commercial apps. Linux and its flotilla of apps are all affected. Gnome and KDE are bigger and not better. They do not do the desktop thing any better than they did 5 years ago. Sure, small features have finally been fixed, but at the cost of adding 100 eye-candy options for every fix. Mozilla is almost as big as IE, Open Office is still much larger than it needs to be, and X Windows hasn't been on a diet for years.

    Granted, it is much, MUCH worse on the Windows side. Kiplinger's TaxCut is 11 megabytes in size for the executable... FOR WHAT?? Eye candy and other useless features that don't make it better... only bigger.

    Too many apps and projects add things for the sake of adding them... to look "pretty" or just for silly reasons.

    I personally still believe that programmers should be forced to run and program on systems that are 1/2 to 1/3 of what is typically used. This will force the programmers to optimize or find better ways to make that app or feature work.

    It sounds like google is tired of getting bigger and badder only to watch it become no faster than what they had only 6 months ago after the software and programmers slow it down.

    Remember, everyone... X Windows and a good window manager in Linux RAN VERY WELL on a 486 with 16 MB of RAM and a decent video card. Today there is no chance in hell you can get anything but Blackbox and a really old release of X to run on that hardware (luckily the Linux kernel is scalable and it happily runs all the way back to the 386).

  • Moore's Law (Score:2, Interesting)

    by ZeLonewolf ( 197271 )

    "Moore's Law" has been bastardized beyond belief. Take an opportunity to read Moore's Paper [wpi.edu] (1965), which is basically Gordon Moore's prediction on the future direction of the IC industry.
  • Let's face it: an Intel Pentium 4 or AMD Athlon is more than sufficient for 99% of the needs out there.

    If you need more power than what a single CPU has to offer, buy an SMP machine. Or make a Beowulf cluster.

    And no, this is not a joke: this is exactly what Google has been doing - build a humongous cluster and split everything between hundreds of machines, right? (A toy sketch of that scatter-and-merge idea follows at the end of this comment.)

    Since Linux and the *BSDs appeared, pretty much every task can be handled by cheap, standardized machines. It's entirely possible that, as the Red Herring article said, we'll see big chip makers 'go under' simply because their research costs balloon out of control.

    Very interesting articles. Moore's Law may end not because it's impossible to build a better chip, but because it has become uneconomical to build one.
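    Purely to illustrate the "split everything between hundreds of machines" idea above - this is a toy sketch, not Google's actual design; the shard contents, the scoring-free merge, and the thread-per-node stand-ins are all made up for the example:

        from concurrent.futures import ThreadPoolExecutor

        # Toy scatter/gather search. Threads stand in for cheap commodity nodes,
        # and each "shard" holds a slice of a pretend document index.
        SHARDS = [[f"doc-{i}-{j}" for j in range(3)] for i in range(8)]

        def search_shard(shard, query):
            # Each node scans only its own slice of the index.
            return [doc for doc in shard if query in doc]

        def distributed_search(query):
            # Scatter the query to every node in parallel, then merge the partials.
            with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
                partials = pool.map(lambda shard: search_shard(shard, query), SHARDS)
            return [hit for partial in partials for hit in partial]

        print(distributed_search("doc-3"))   # only shard 3 contributes hits

    The cheap-nodes half of the argument is that you scale a system like this by adding more boxes to the shard list, not by buying one bigger chip.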
  • Oh Really? (Score:2, Insightful)

    This article is certainly thought-provoking, and it is always worthwhile to challenge conventional wisdom once in a while. Nonetheless, I can't shake the feeling that this is a lot of sound and fury about nothing. As many others have pointed out, Google's case may not be typical, and in my long career in the computer industry I can remember countless similar statements that ended up as more of an embarrassment to the speaker than anything remotely prescient (anyone remember Bill Gates's supposed claim that no one would EVER need more than 640K of RAM?).

    I use a PC of what would have been unimaginable power a few short years ago, and it is still woefully inadequate for many of my purposes. I still spend a lot of my programming time optimizing code that I could leave in its original, elegant but inefficient state if computers were faster. And in the field of artificial intelligence, computers are finally starting to do useful things, but they are sorely hampered by insufficient processing power (try a few huge matrix decompositions -- or a backgammon rollout! -- and you'll see what I mean; there's a rough scaling sketch after this comment).

    Perhaps the most insightful comment in the article is the observation that no one has ever won by betting against Moore's Law. I'm betting it'll be around for another 10 years and change. Email me if you're taking...
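    A rough illustration of the matrix-decomposition point made a couple of paragraphs up (the flop rate and matrix sizes below are assumptions, not measurements): dense factorizations cost on the order of n^3 operations, so doubling the problem size needs roughly eight times the compute, which is why number-crunching workloads keep soaking up every speedup Moore's Law delivers.

        # Back-of-the-envelope cost of a dense LU-style factorization (~(2/3)*n^3 flops).
        def factorization_flops(n):
            return (2.0 / 3.0) * n ** 3

        ASSUMED_FLOPS_PER_SECOND = 1e9   # made-up round number for a circa-2003 desktop

        for n in (1_000, 2_000, 4_000, 8_000):
            seconds = factorization_flops(n) / ASSUMED_FLOPS_PER_SECOND
            print(f"n = {n:5d}: roughly {seconds:8.1f} s")

    On those assumed numbers, going from n = 1,000 to n = 8,000 takes you from under a second to several minutes, which is exactly the kind of wall the comment above is describing.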

    • Maybe at some point we will need to work with that much data, but do we need it today?


      Each previous generation of processors was released more or less in tandem with a new generation of apps that needed the features of the new chips. What app do you run that needs an IA64 or a DEC Alpha? If you want raw performance, you're better off getting either the fastest IA32 chip you can or, maybe, a PowerPC with AltiVec support. (Assuming you're writing your own app and can use the vector unit, of course....)

  • by porkchop_d_clown ( 39923 ) <mwheinz@nOSpAm.me.com> on Tuesday February 11, 2003 @10:23AM (#5279127)

    My experience with 64-bit chips is that they don't offer any compelling advantages over a multi-processor 32-bit system.


    The only real advantage they have is a bigger address space, and even that doesn't buy you much over a cluster of smaller systems.


  • by jj_johny ( 626460 ) on Tuesday February 11, 2003 @10:35AM (#5279225)
    Here is the real deal about Moore's law and what it means. If you don't take Moore's law into account, it will eventually change the dynamics of your industry and cause great problems for most companies.

    Example 1 - Intel - This company continues to pump out faster and faster processors. They can't stop making new processors, or AMD or someone else will. The cost of making each new processor goes up, but the premium for new, faster processors keeps dropping as fewer people need the absolute high end. Five years ago Intel always had a healthy margin at the high end. That is no longer the case, and if you extrapolate out a few years, it is tough to imagine that Intel will be the same company it is today.

    Example 2 - Sun - These guys always did a great job of providing tools to companies that needed the absolute fastest machines to get the job done. Unfortunately, Moore's law caught up and made their systems a luxury compared to what lots of other manufacturers offer.

    The basic problem that all these companies have is that Moore's Law eventually changes every business into a low end commodity business.

    You can't stop the future. You can only simulate it by stopping progress

  • no need for speed (Score:4, Insightful)

    by MikeFM ( 12491 ) on Tuesday February 11, 2003 @10:35AM (#5279229) Homepage Journal
    Seriously, at this point most people don't need 1 THz CPUs. What most people need is cheaper, smaller, more energy-efficient, cooler CPUs. You can buy 1 GHz CPUs now for the cost of going to dinner. If you could get THOSE down to $1 each, so they could be used in embedded apps from clothing to toasters, you would be giving engineers, designers, and inventors a lot to work with. You'd see a lot more innovation in the business at that price point. Once powerful computing has spread into every device we use, THEN new demand for high-end processors will grow. The desktop has penetrated modern life - so it's dead - time to adjust to the embedded world.
  • by imnoteddy ( 568836 ) on Tuesday February 11, 2003 @10:59AM (#5279432)
    According to this article [nytimes.com] the issue had to do with both price and power consumption.

    From the article:

    Eric Schmidt, the computer scientist who is chief executive of Google, told a gathering of chip designers at Stanford last month that the computer world might now be headed in a new direction. In his vision of the future, small and inexpensive processors will act as Lego-style building blocks for a new class of vast data centers, which will increasingly displace the old-style mainframe and server computing of the 1980's and 90's.

    It turns out, Dr. Schmidt told the audience, that what matters most to the computer designers at Google is not speed but power -- low power, because data centers can consume as much electricity as a city.

    He gave the Monday keynote at the "Hot Chips" [hotchips.org] conference at Stanford last August.
    There is an abstract [hotchips.org] of his keynote.
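    To put that power point in perspective, here is a back-of-the-envelope sketch; every input (server count, watts per box, electricity price) is an assumption picked for illustration, not a figure from the article:

        # Back-of-the-envelope data-center power math. All inputs are assumptions.
        servers = 10_000             # hypothetical cluster size
        watts_per_server = 250       # hypothetical draw per commodity box, incl. overhead
        dollars_per_kwh = 0.10       # hypothetical electricity price

        kilowatts = servers * watts_per_server / 1_000
        kwh_per_year = kilowatts * 24 * 365
        print(f"~{kilowatts:,.0f} kW of continuous draw")                         # ~2,500 kW
        print(f"~${kwh_per_year * dollars_per_kwh:,.0f}/year for electricity alone")  # ~$2.2M

    At that scale, shaving a few tens of watts per node is worth more than a modest clock-speed bump, which seems to be the point Schmidt was making.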

  • by Tikiman ( 468059 ) on Tuesday February 11, 2003 @11:25AM (#5279671)
    Apparently nobody has noticed the Sony Bono Moore's Law Extension Act, which retroactively extended Moore's Law an additional 10 years after Moore's Law was due to expire
  • by Noren ( 605012 ) on Tuesday February 11, 2003 @11:31AM (#5279732)
    This argument ignores a major point. Sure, home PCs, web servers, search engines, and databases may all get fast enough that further computational speed is irrelevant.

    But when computers are used for crunching numbers we still want machines to be as fast as possible. Supercomputers still exist today. Countries and companies are still spending millions to build parallel machines thousands of times faster than home PCs. They're doing this because the current crop of processors is not fast enough for what they want to calculate.

    Current computational modeling of the weather, a nuclear explosion, the way a protein folds, a chemical reaction, or any of a large number of other important real-world phenomena is limited by current computational speed. Faster computers will aid these fields tremendously. More power is almost always better in mathematical modeling- I don't expect we'll ever get to the point where we have as much computational power as we want.

  • by hey! ( 33014 ) on Tuesday February 11, 2003 @11:35AM (#5279758) Homepage Journal
    Moore's law is not quite what most people think. If I'm not mistaken, it isn't that processor power will double every eighteen months, but that transistor density will double. Processor speed doubling is a side effect of that.

    I think there will always be a market for the fastest chips possible. However, there are other ways for this trend to take us rather than powerful CPU chips. These would include lower power, lower size, higher system integration, and lower cost.

    The EPIA series mini-ITX boards are an example of this. Once the VIA processors get powerful enough to decode DVDs well, it is very likely that they won't need to get more powerful for most consumer applications. However, if you look at the board itself (e.g. here [viavpsd.com]),
    you'll see that the component count is still pretty high; power consumption, while small, still requires a substantial power supply in the case or a large external brick.

    When something like this can be put together - capable of DVD decoding, with no external parts other than memory (and maybe not even that), the whole thing running on two AAA batteries - then you'd really have something. Stir Bluetooth (or more likely its successors) into the mix and you have ubiquitous computing: devices capable of adapting to their environment and adapting the environment to suit human needs.

    • I'm about to rant a little here.

      IIRC, there was an article a while back (discussed here) that reviewed Moore's law as Moore used it over a number of years, and found that Moore himself seemed to redefine it every couple of years. It has become a marketing term describing the general phenomenon of faster computers getting cheaper in a regular way.

      You're right, though: Moore never really talked about doubling 'processor power'; he discussed things in terms of devices such as transistors. Trouble is, sometimes RAM was included in the 'device' total and sometimes not... it's easy to fudge a bit during the slowdowns and speedups if you change how the thing is defined.

      Top it off with the fact that the whole thing was eventually cast in terms of the cost-optimal solution. Given how much the market for computers has changed, I'd say that's a very difficult thing to define. As everyone is likely to point out, commodity desktop PCs have a very different optimum from massive single-system-image computers. And if you consider that a calculator is a computer - whether it's a $1 cheapo or the latest graphing programmable whoopdeedoo - there are so many markets for computers now, each with its own optimum, that it's pretty artificial to talk about Moore's law at all. I've never seen anyone plot Moore's law with a bunch of branches. Further, 'cost optimal' becomes pretty subjective across all those markets when there are so many variables.

      Finally, there are points where Moore's law breaks down... the number of devices in cheapo calculators probably hasn't changed much in the last few years, but the price keeps dropping. Moore's law doesn't really allow for this sort of behavior: there is a maximum necessary power for a certain kind of device, and if it doesn't have to do anything else, the complexity levels off and the price goes down. This may well happen at some point in the commodity sector. It's possible that the number of features in a conventional desktop will level off at some point. Hell, with a $200 WalMart cheapo PC, maybe we're there now...

      Intuitively, everyone applies Moore's law to desktops, but there's no particular reason to do so. Considering its history, the massive mainframe-style computer is probably the best application of it, but this is seldom done. Mainframes these days can be as complex as you're willing to pay for, which pretty much means there is a cost-optimal solution for any given problem, not just for fabrication, which is what Moore was talking about. It seems like we have turned a corner; time to redefine Moore's law yet again.
  • by HiThere ( 15173 ) <charleshixsn@@@earthlink...net> on Tuesday February 11, 2003 @12:49PM (#5280451)
    The mainframes basically stopped at 32 bits. There were models that went to 128 bits, and CDC liked 60 bits, but the workhorse (IBM 360, etc.) never went beyond 32 bits.

    Perhaps the next step will be towards smaller computers instead of larger ones. Moore's law remains just as important, but the application changes: instead of building faster computers, you build smaller, cheaper ones. Desktops will remain important for decades as the home of printers, large hard disks, etc., and the palmtops/wristtops/fingernail-tops/embedded devices will communicate with them for archiving and so on.

    This means networking is becoming more important, and that clusters need to be more tightly integrated. I conceive of future powerful computers as a net of nets, with a tightly integrated cluster of CPUs at the bottom of each net, each more powerful than the current crop. These are going to need a lot of on-chip RAM and RAM-attached caches, because their access to main RAM will be slow and mediated through gatekeepers. There will probably be multi-ported RAM whiteboards, where multiple CPUs can share their current thoughts, etc.

    For this scenario to work, computers will need to be able to take their programs in a sort of pseudo-code and rewrite it into a parallelized form. There will, of course, be frequent bottlenecks, so there will be lots of wasted cycles, but some of them can be used on other, lower-priority processes. And each cluster will probably have at least one CPU that spends most of its time on scheduling and the like.

    Consider the ratio between gray matter and white matter. I've been told that most of the computation is done in the gray matter, and the white matter acts as a communications link. This may not be true (it was an old source), but it is a good model of the problem. So to make this work, the individual processors need to get smaller and cheaper. But that's one of the choices that Moore's law offers!

    So this is, in fact, an encouraging trend. But it does mean that the high-end CPUs will tend to be short-term solutions: faster at any particular scale of the technology, but too expensive for most problems, and not developing fast enough to stay ahead of their smaller brethren, because they are too expensive to be used in a wasteful manner.

    Perhaps the "final" generation will implement these longer word length cpus, at least in places. And it would clearly use specialized hardware for the signal switchers, just as the video cards use specialized hardware, though they didn't at first. But the first versions will be built with cheap components, and the specialized hardware will only come along later, after the designs have stabilized.

  • by lamz ( 60321 ) on Tuesday February 11, 2003 @01:14PM (#5280672) Homepage Journal
    This article is full of overblown rhetoric. It goofily applies Moore's Law to too many other things, like dot-coms. Note that at no point in the article is Moore's Law clearly stated -- doing so would spoil too many of the article's conclusions.

    That said, I remember the first time I noticed that a technology was 'good enough' and didn't need to double ever again: with the introduction of CDs and, later, CD-quality sound cards. Most people are not physically capable of hearing the improvement if the sampling rate of CDs is increased, so we don't need to bother. Certainly people have tried, and the home-theatre-style multi-channel stuff is an improvement over plain stereo CDs, but it is insignificant compared to the jump CDs made over the older mono formats. Similarly, whatever the latest SoundBlaster cards add is insignificant next to the leap from the early beeps of computers and video games. (Dogs and dolphins might wish audio reproduction were improved further, but they don't have credit cards.)

    Back in the early 80s, when most bulletin board access was over a 300 baud modem, paging of long messages was optional, since most people can read that fast. Of course we need faster connections for larger files and applications, but as soon as, say, HD-quality video and sound can be streamed in real time, bandwidth will be 'enough.' (Some rough numbers on what 'enough' means follow below.)
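    For what it's worth, the 'CD-quality' and 'enough bandwidth' claims are easy to put rough numbers on; the CD figures below are the standard Red Book parameters, while the HD bitrate is an assumption since it depends entirely on the codec:

        # Uncompressed CD audio bitrate (standard Red Book parameters).
        sample_rate_hz = 44_100
        bits_per_sample = 16
        channels = 2
        cd_bits_per_second = sample_rate_hz * bits_per_sample * channels
        print(f"CD audio: ~{cd_bits_per_second / 1e6:.2f} Mbit/s")   # ~1.41 Mbit/s

        # HD video is codec-dependent; tens of Mbit/s compressed is a rough assumption.
        assumed_hd_mbit_per_second = 15
        print(f"Rough HD stream: ~{assumed_hd_mbit_per_second} Mbit/s (assumed)")

    So on these assumptions, 'enough' bandwidth for real-time HD streaming is on the order of tens of megabits per second per viewer.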

Our OS who art in CPU, UNIX be thy name. Thy programs run, thy syscalls done, In kernel as it is in user!
