Intel Technology Hardware Science

Forty Years of Moore's Law 225

kjh1 writes "CNET is running a great article on how the past 40 years of integrated chip design and growth have followed [Gordon] Moore's law. The article also discusses how long Moore's law may remain pertinent, as well as new technologies like carbon nanotube transistors, silicon nanowire transistors, molecular crossbars, phase change materials and spintronics. My favorite data point has to be this: in 1965, chips contained about 60 distinct devices; Intel's latest Itanium chip has 1.7 billion transistors!"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Keeping Count (Score:5, Informative)

    by ackthpt ( 218170 ) * on Tuesday April 05, 2005 @07:35PM (#12149170) Homepage Journal
    Intel's latest Itanium chip has 1.7 billion transistors!"

    That's Montecito, the dual-core Itanium with 24MB of cache (only about 120 million transistors per CPU core, with the balance largely that motherlode of cache), and one you could probably fry a steak on.

    "We can keep Moore's Law alive just by stuffing the cache!"
    "Brilliant!"
    "Brilliant!"
    Suddenly they were crushed by a giant can of Guinness containing not even an electronic sausage...

    • Re:Keeping Count (Score:4, Insightful)

      by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Tuesday April 05, 2005 @07:37PM (#12149191) Homepage Journal
      "We can keep Moore's Law alive just by stuffing the cache!"

      If it actually works, then there's little to complain about. Unfortunately, I don't think that things are quite so easy...
      • Re:Keeping Count (Score:3, Insightful)

        by Anonymous Coward
        Moore's law was about transistors, not computing power, as it has commonly been misinterpreted to be. I feel that using the phrase "stuffing the cache" somehow implies that using the transistors for cache is cheating. It is not cheating in any way, shape or form. Moore's law is about transistors, regardless of how you use them.

        • Re:Keeping Count (Score:5, Informative)

          by timeOday ( 582209 ) on Wednesday April 06, 2005 @01:48AM (#12151459)
          But this misinterpretation is the only reason anybody cares about the "law" in the first place. There's no reason to care about increasing transistor counts unless there's a payoff.

          The problem with bigger & bigger cache is that it has diminishing returns. This is why Intel's "Extreme" chips are a waste of money.

          The inability to do anything useful with all those transistors is why we're seeing the advent of multi-core chips, which are neat but fail to preserve the conventional single-threaded programming model. This places the burden of creating explicit parallelism on the programmer, and leads to more complicated code, which means it costs more to write and also contains more bugs.
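
          A minimal sketch (mine, not from the parent post) of what that shift looks like in practice: the serial version is one obvious line, while the parallel version makes you partition the data and merge the results by hand.

            from multiprocessing import Pool

            def partial_sum(chunk):
                return sum(x * x for x in chunk)

            if __name__ == "__main__":
                data = list(range(1_000_000))
                # Single-threaded model: one obvious, sequential line.
                serial = sum(x * x for x in data)
                # Multi-core model: explicit partitioning and explicit merging,
                # both of which are new places for bugs to hide.
                chunks = [data[i::4] for i in range(4)]
                with Pool(4) as pool:
                    parallel = sum(pool.map(partial_sum, chunks))
                assert serial == parallel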

      • Re:Keeping Count (Score:5, Insightful)

        by MOBE2001 ( 263700 ) on Tuesday April 05, 2005 @10:28PM (#12150370) Homepage Journal
        If it actually works, then there's little to complain about.

        It can only work for so long. The biggest problem that is keeping performance down is not the processor but the memory retrieval and writing system: only one memory location can be accessed at any one time. This is also known as the von Neumann bottleneck. Not even clustering can get around this problem because there is a need for inter-process communication that slows things down. If someone could come up with a system that allows unlimited random and simultaneous memory access, the physical limit to processor speed would not be such a big deal anymore. We would have found the holy grail of fast computing.
    • Re:Keeping Count (Score:2, Interesting)

      by Ruediger ( 777619 )
      I still find it amazing that they managed to fit 1.7 billion transistors in a chip.
    • I was looking for logic vs. cache breakdown numbers for a while; obviously Intel is not keen on providing them on their own.

      The way I see it: 24 MB = 24 * 1024 * 1024 * 8 bits * 6 transistors/SRAM cell ≈ 1.2B transistors for cache, still leaving ~500M for logic. Well, we can factor in address storage and cache access logic, but I'd still like to see some harder data than this.
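
      Reproducing that estimate in a couple of lines (same assumptions as above: a standard 6-transistor SRAM cell, ignoring tag and decode overhead):

        CACHE_MB = 24
        bits = CACHE_MB * 1024 * 1024 * 8           # data bits in the cache
        cache_transistors = bits * 6                # classic 6T SRAM cell
        print(f"{cache_transistors:,}")             # 1,207,959,552 ~ 1.2B
        print(f"{1_700_000_000 - cache_transistors:,} left for logic")  # ~492M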

      Paul B.
    • The Itanium story

      Number of transistors= 1.7 billion
      Number of units sold = 1.7K
      Money invested= gazillion dollars

      Tasting dirt from your puny competition (read AMD)= priceless

  • by gaber1187 ( 681071 ) * on Tuesday April 05, 2005 @07:36PM (#12149185)
    So many people really doubt Moore's law will die anytime soon. Just because Intel isn't jumping MHz every year doesn't mean it's ending... There are so many things left to do to squeeze more performance out of the same area or smaller. You can go to 3D stacks of transistors, higher-k gate dielectrics, the list goes on and on. I agree with the article that we could see it go into the 2020s... the main problems that will hinder Moore's law will be the economics of investing in new fabs and waning demand for chips, not research and technology limitations. I see more money being pumped into memory chips and special-purpose ARM-style chips with a focus on low power. Eventually, people will just say, "Moore's law just doesn't matter anymore, the market has changed".
    • by Anonymous Coward
      "Just because intel isn't jumping MHz every year, doesn't mean its ending..."

      Maybe not, but there's certainly been a bit of a bump in progress recently; no notable new desktop CPUs, and certainly no increase in the complexity, component count or speed - unless you want to count cache - nothing in the last 18 months has fulfilled the criteria set out in Moore's Law. Having said that, this anomaly only applies to CPUs.

      I would hazard a guess that the law still holds true in memory - major advances there in
      • GPUs already use massive parallelization.
      • http://www.theinquirer.net/?article=21648 [theinquirer.net]

        Physics to your heart's content.
      • by Doppler00 ( 534739 ) on Tuesday April 05, 2005 @09:08PM (#12149803) Homepage Journal
        This is a good point. I have money saved up just waiting to buy the latest greatest thing... yet it's not here. My 3.0GHz P4, bought in Jan 2004, is within 20% of the speed of any of Intel's offerings now (within the same class: desktop/consumer). And even when the dual-core devices are released, I'm not confident that they will provide a doubling of performance.

        And what about Nvidia? Their last product jump, from the 5900 to the 6800, was absolutely amazing. A very clear 100% increase in performance. I'd be very surprised to see Nvidia match that leap sooner than 4Q 2006.
        • And what about Nvidia? Their last product jump, from the 5900 to the 6800, was absolutely amazing. A very clear 100% increase in performance. I'd be very surprised to see Nvidia match that leap sooner than 4Q 2006.

          Maybe, maybe not.

          Clock speeds of GPUs have been inching upwards just like those of CPUs, but the number of pipelines has been growing rapidly - from 8 to 16 in your example. They haven't hit a practical limit there yet, though power consumption is getting to be a worry. You might well s
        • I hear that.
          My beef is that I'm still running a dual PIII setup (1.2GHz). The latest and greatest desktops from Intel or AMD really don't offer enough of a performance boost to warrant _upgrading_ from a dual to a single chip. There are no desktop dual-chip boards for the P4. So, I can get more raw speed but sacrifice awesome multitasking performance. Wtf? And no, a single Hyperthreaded P4 most certainly does not compare to 2 dedicated chips. It's just a neat trick that can help in some cases. (Been using HT Xen
    • special purpose ARM style chips with a focus on low power

      The ARM was designed in the mid-80s as a general-purpose CPU for the British Acorn computers (originally ARM stood for Acorn RISC Machine).

      Because of its very efficiently coded instruction set, it turned out to use very little power. This is why, in subsequent years, it started to find its way into embedded applications.

      After Acorn went bust, ARM remained as the only profitable part of the company, focusing mainly on embedded applications of i

    • I can remember when it was normal for myself, and all of my friends, to upgrade computers on an annual basis. As software developers, the increase in speed always paid for itself rather quickly in time saved waiting for compiles to finish. When I left 'big corp' and struck out on my own full time, I bought the fastest development workstation money could buy at the time, a 486DX2 running at a whopping 66MHz internally. My friends were in awe, it was the first time any of us had seen a desktop that needed
    • That it's mostly useless in real-world terms anyway.

      Sure, taking Moore's law literally, computers are 1 million times faster than 30 years ago. Arguably that should translate into _more_ than 1 million times more work per second, because compilers have evolved too, and expensive optimization techniques have become more affordable. (A compiler optimization technique that would have taken a week on a 70's mainframe, now takes seconds.) We also have better tools.

      But are we doing 1 million times more with the
  • Kinda obvious.... (Score:3, Insightful)

    by FalconZero ( 607567 ) * <FalconZero&Gmail,com> on Tuesday April 05, 2005 @07:40PM (#12149215)
    ...but the article doesn't point out that the law is based on silicon-transistor-based computing. Obviously, if we switch to other bases for computation, it probably won't apply, e.g. quantum or plasmonic (yes, I know the latter will probably be in silicon).

    Before anyone says "well, we've adjusted the length of time for doubling already, we'll do it again": for what it's worth, it's a bit silly calling X = 2^(Y/T) a law if you redefine T every time it doesn't fit.
    • Re:Kinda obvious.... (Score:4, Interesting)

      by fm6 ( 162816 ) on Tuesday April 05, 2005 @07:53PM (#12149318) Homepage Journal
      Strictly speaking, you're right. But Moore's law, despite the name, isn't a law of nature. It's an observation about the progress of the chip industry. And that progress is motivated by a simple feedback loop: other industries put ICs into their products, which motivates the IC industry to retool to make better, cheaper ICs, which motivates other industries to put ICs into their products...

      Moore's original observation, that transistor density doubles every 18 months, will obviously cease to apply once it becomes impossible to make transistors. But as long as that feedback loop continues to churn, it continues to make sense to talk about Moore's law.

      • by Temsi ( 452609 ) on Tuesday April 05, 2005 @08:53PM (#12149714) Journal
        Actually, you're wrong in assuming his law will cease to apply once it becomes impossible to make transistors, as the law didn't apply specifically to transistors in the first place.

        His observation was made to Electronics magazine, in the April 19th, 1965 edition.
        He didn't mention transistor density.
        He didn't mention processors (as microprocessors were still 6 years away from being invented).

        He was describing component integration on economical integrated circuits.
        He observed that component integration doubled approximately every 12 months. He increased that number to 24 months in 1975. Since then, other people have split the difference to 18 months.

        None of those figures, 12, 18 or 24 months, are accurate.
        If the 18 month figure was accurate, today's chips would have 75 Billion transistors.
        With his original 12 month figure, 27 Trillion.
        With his revised 24 month figure, 37 Million...

        Also, this isn't even a law... it's an observation.

        Please note... I relied on Tom R. Halfhill's column in Maximum PC (April 2005) "The Myths of Moore's Law" for this reply.
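
        Just to make that sensitivity concrete, here's the projection from 60 components in 1965 over 40 years under each doubling period (my arithmetic; the absolute figures differ from Halfhill's, which presumably assume a different baseline, but the spread across periods is the point):

          base, years = 60, 40
          for period in (1.0, 1.5, 2.0):              # doubling period in years
              count = base * 2 ** (years / period)
              print(f"{period} yr doubling -> {count:,.0f}")
          # 1.0 yr -> ~66 trillion; 1.5 yr -> ~6.4 billion; 2.0 yr -> ~63 million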
  • by Arcanix ( 140337 ) on Tuesday April 05, 2005 @07:41PM (#12149217)
    The number of articles mentioning Moore's law will double each year.
  • law? (Score:2, Insightful)

    by wpiman ( 739077 ) *
    Shouldn't it be Gordon's theorem if we are questioning it? People don't question the theory of relativity or the theory of evolution (ok - I meant educated people) - and we still refer to these as theories.
    • Re:law? (Score:3, Funny)

      by Taladar ( 717494 )
      People are questioning Copyright Law and it is not called theory because of that either.
    • Re:law? (Score:3, Informative)

      by kaosrain ( 543532 )
      A theory is an integrated set of principles that organizes and predicts observations.
    • Questioning the theory of relativity and the theory of evolution is something that is frequently done [fosters.com] by educated people.

      This is how we get a better and more refined understanding.
  • Solving problems. (Score:2, Insightful)

    by brejc8 ( 223089 ) *
    People always talk about the end of Moore's law, stating that we cannot solve some challenges. Other people always reply "well, we always managed to solve challenges and we probably always will".
    What I think is more interesting is how far ahead we can solve them. The clock distribution problem was foreseen and solved years ahead of it biting hard. Nowadays the problems arise and we have shorter and shorter time to react before they cause serious problems.
    This is the strongest proof I found that this te
  • by panaceaa ( 205396 ) on Tuesday April 05, 2005 @07:43PM (#12149236) Homepage Journal
    What about the Slashdot corollary? That is:
    Despite the fact that Moore's Law has been around for 40 years, and widely known about for almost as long, Slashdot will report about it at least once a month.
    It's almost as prevalent as the popular media corollary, which is:
    Popular media will always say that Moore's law is ending now, while ironically citing examples where such earlier predictions were premature.
    • Or the other popular geek corollary:
      BSD is dying. [google.com]
      Sometimes followed up by another corollary:
      Each slashdot story is repeated within a short time of the original posting, leading to a doubling in the number of Moore's law stories.
  • by sTalking_Goat ( 670565 ) on Tuesday April 05, 2005 @07:43PM (#12149240) Homepage
    at each iteration the time until the next "Death of Moore's Law" article is halved?

    If not I hereby proclaim it Goat's Law.

    • Final corollary: the methodology for determining that Moore's law applies will change every six months as well. (It was MHz, then MIPS, then...) Problem is... no matter how fast they get, I still can't do realtime music composition and playback the way I could on my Amiga 500.

      *sigh*
  • by Anonymous Coward on Tuesday April 05, 2005 @07:49PM (#12149281)
    My favorite data point has to be this: in 1965, chips contained about 60 distinct devices; Intel's latest Itanium chip has 1.7 billion transistors!

    Uh, wouldn't that be two data points?
    • by mangu ( 126918 )
      Uh, wouldn't that be two data points?


      The second point is a datum, the first point is a reference. If you say "this site is 150 meters above sea level", how many data points do you have?

    • Is the number of data points doubling between the time the article was posted and the time your comment was posted due to Moore's Law? It's NOT DEAD!!!
  • by OAB_X ( 818333 ) on Tuesday April 05, 2005 @07:53PM (#12149317)
    Intel's latest Itanium chip has 1.7 billion transistors!"

    No wonder they call it the Itanic! Both were huge and failed miserably.
  • It's not a law... (Score:5, Insightful)

    by GrahamCox ( 741991 ) on Tuesday April 05, 2005 @07:53PM (#12149322) Homepage
    It's not a law, it's an observation. Did you know the term 'law' for a scientific theory was coined by Isaac Newton, who felt that his 'Laws of Motion' were so right and pervaded the universe so deeply that they had to be a law? He wanted to convey that they had a deeper significance than a mere theory. In time, of course, even these 'laws' came to be shown to be incomplete, or only true for slow-moving objects. Ever since, every theory, both worthy and crackpot, has been called a 'law'. It's about time we returned to the humbler 'theory', 'theorem' or 'observation'. In the case of Moore's 'Law', it's not even a very good theory, since it only describes a very general trend; it cannot predict with any accuracy how fast or how many transistors or elements a chip will have at any time in the future.

    By the way, if the Itanium has 1.7 billion transistors, (I'll take the poster's word for it) then one has to ask - are they all pulling their weight? It seems a hell of a lot for what it does. Surely one way to squeeze more out of Moore's Observation is to come up with more efficient architectures and use fewer devices, working more efficiently/smarter/harder. Just a thought.
    • It's not a law

      That would be true, if it weren't for the fact that it is a law.

      A law is just a general or universal statement of the way things are. Some are imposed by man, some are imposed by nature, and others are based on the observation of trends.

      That's what makes Murphy's Law a law, what makes Godwin's Law a law, and, yes, what makes Moore's Law a law.

      Laws don't even have to be right to be a law.

      it's an observation

      It's that too. These aren't mutually exclusive things.

      Did you know the term 'l
  • by TimeTraveler1884 ( 832874 ) on Tuesday April 05, 2005 @07:54PM (#12149328)
    Michael Moore has a law now? Great, and I haven't even seen his film Rescue 911 yet. Now I understand why Disney tried to crush him and his law-making ego.

  • by snuf23 ( 182335 ) on Tuesday April 05, 2005 @07:55PM (#12149332)
    It's buried right next to BSD, adjacent to the freshly dug grave for World of Warcraft.
  • Moore was at Intel, and was pushing that goal for most of those years.
  • Self fulfilling (Score:5, Interesting)

    by Bifurcati ( 699683 ) on Tuesday April 05, 2005 @07:56PM (#12149340) Homepage
    One can't help but wonder whether there's a self-fulfilling element to these sorts of prophecies - do computer manufacturers feel pressure to adhere to Moore's law? Is it a challenge to keep up? Or is it really just chance?

    Also, for the record as a physicist, quantum computers won't remove the need for conventional computers in most areas - a big thing is (as I understand it) that they're not programmable, and have to be built to a certain specification. Therefore, classical computers will always have their use.

  • by highfreq2 ( 575192 ) on Tuesday April 05, 2005 @07:58PM (#12149358)
    Somewhere around there the number of transistors in a chip becomes equal to the number of atoms in the known universe.
    • Look at what the world was like even 150 years ago. Do you really think that we have any clue what the building blocks of society will be? 150 years ago the telegraph was pretty hot stuff.
    • 2 words: quantum computing.
    • by kesuki ( 321456 ) on Wednesday April 06, 2005 @12:01AM (#12150969) Journal
      If the number started at 60, then 40 years means ~27 doublings of 60, so today's processors should have 8 billion transistors; 200 more doublings of 8 billion is about 1.32*10^74. According to answers.com, earth is composed of roughly 10^50 atoms [answers.com] and the observable universe is estimated at 10^80 to 10^85 atoms, which is 335-356 years from now, not 300-400.

      Also, composing a transistor out of a single atom is pretty tough, plus you have to have gates etc. And if the whole observable universe is the processor, where is the rest of the system? ;) Obviously you could make a system on a chip, but even then valuable atoms are being used up and taken away from Moore's law, plus the atoms of the device used to fabricate the observable universe into a giant processor...

      On the plus side, with that many transistors, you can probably encode the entire history of the universe into a mathematically lossless codec that can fit the entire sum of knowledge into a single byte of data. Some people believe this already happened, and that the resulting processing caused the universe to collapse into a singularity and explode into a new universe.
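
      Redoing that arithmetic (same assumptions: 60 devices in 1965, one doubling per 18 months; my numbers come out slightly different from the figures above):

        import math
        now = 60 * 2 ** (40 / 1.5)                  # ~6.4 billion transistors today
        for atoms in (1e80, 1e85):                  # observable-universe estimates
            years = 1.5 * math.log2(atoms / now)
            print(f"{atoms:.0e} atoms reached in ~{years:.0f} years")
        # ~350 and ~375 years from now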
      • According to answers.com, earth is composed of roughly 10^50 atoms and the observable universe is estimated at 10^80 to 10^85 atoms, which is 335-356 years from now, not 300-400

        Thanks for the useful maths. However, why did you feel the need to correct the original poster's 300-400 year estimate? Your more refined estimate does not invalidate the OP. And besides, given the large amount of (intelligent) guesswork in the calculations, I think it'd be more realistic to say 300-400 years than 335-356 years.

        Anyway, app
  • Graphs???? (Score:4, Interesting)

    by King-Raz ( 51985 ) on Tuesday April 05, 2005 @07:58PM (#12149359)
    Has anyone got any pretty graphs of the performance of particular CPUs against time? It would be cool to have some sort of visual representation of the validity of Moore's law.
    • Moore's Law does not directly predict "performance", rather it predicts the number of transistors on a chip, e.g. "the number of transistors doubles every 18 months." Meaning we should have 67,108,864 times as many transistors as we did 39 years ago... if they have 1.7 billion now, then they should have had about 25 transistors per chip 39 years ago... it appears that we're slightly behind on keeping up with Moore's Law.
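
      Checking that arithmetic (assuming one doubling per 18 months over 39 years):

        import math
        doublings = 39 / 1.5                        # 26 doublings
        factor = 2.0 ** doublings                   # 67,108,864
        implied_1966 = 1.7e9 / factor               # ~25.3 transistors per chip
        print(f"{factor:,.0f} -> {implied_1966:.1f}")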
    • Re:Graphs???? (Score:2, Interesting)

      by CurbyKirby ( 306431 )
      First, to answer your question: yes, Tom's Hardware recently updated their CPU benchmark test to include over 100 CPUs from the last ten years. Starts here (graphs come later):

      http://www20.tomshardware.com/cpu/20041220/index.html [tomshardware.com]

      Now to explain why you're asking the wrong question: Moore's observation says nothing directly about performance. He merely suggested that the complexity of ICs double every 18 months or so. In general, this has nothing to do with a comparable trend in clock speeds on CPU
    • It's even cooler to look at the performance improvements of supercomputers [top500.org]. Their speed doubles roughly every 14 months with amazing regularity. The top 500 supercomputers had a total processing power of 1.12 Teraflops in 1993. By mid-2004 they were at 1127.41 Teraflops. Look at the graph [top500.org], it really is impressive.
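
      The implied doubling time from those two figures (my arithmetic, taking 1993 to mid-2004 as about 11.5 years):

        import math
        growth = 1127.41 / 1.12                     # total speedup over the period
        doublings = math.log2(growth)               # ~10 doublings
        print(f"{11.5 * 12 / doublings:.1f} months per doubling")  # ~13.8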
  • Bugs (Score:5, Interesting)

    by sicking ( 589500 ) on Tuesday April 05, 2005 @08:01PM (#12149381)
    What amazes me the most is the amount of bugs a device with 1.7 billion transistors has compared to the number of bugs in, say, Windows XP, GIMP or Firefox.

    And don't give me any crap about software somehow being inherently harder to keep bug-free. I develop both and there really is little difference when it comes to complexity.

    Sure, software performs more complex tasks, but when you add the 'parallel-ness' of hardware, as well as timing issues, temperature and manufacturing issues, clock distribution, leakage and crosstalk, hardware definitely is a pretty good match.

    The simple truth is that vastly more testing goes into hardware than most software (software in Mars rovers and lunar landers would be an exception). And I bet that there are better design methods and safety guards too.
    • Re:Bugs (Score:5, Insightful)

      by rbarreira ( 836272 ) on Tuesday April 05, 2005 @08:21PM (#12149525) Homepage
      Well, several reasons come to mind:

      - Software usually performs a more diverse set of operations.

      - The environment where hardware runs is more predictable than the software one.

      - Formal verification is probably easier to perform with hardware.

      - It's easier to verify low-level stuff than high-level abstractions.

      I'd add more, but I've got other things to do unfortunately...
      • - Formal verification is probably easier to perform with hardware.

        True. And this is the reason that we should be writing software pretty much the same way logic designers design logic circuits. That's the basic idea behind synchronous reactive programming languages like Esterel, Signal, Occam and others. Also check out Project COSA at the link below.
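
        A toy sketch of that style (my illustration in Python, not Esterel/Signal/Occam syntax): as in a clocked logic circuit, every value for the next cycle is computed only from the current cycle's values, so there is no hidden ordering between updates within a cycle.

          def tick(state):
              # All right-hand sides read the OLD state, like flip-flops
              # latching on a clock edge.
              return {
                  "count": state["count"] + 1,
                  "parity": not state["parity"],
                  "overflow": state["count"] == 255,  # based on the pre-tick count
              }

          state = {"count": 0, "parity": False, "overflow": False}
          for _ in range(256):
              state = tick(state)
          print(state["overflow"])                    # True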
    • The simple truth is that vastly more testing goes into hardware than most software

      The truth is not so simple. Given that the largest part of a modern CPU is cache, as opposed to logic, the transistor count does not reflect the net complexity. If one considers the ISA of a CPU to be its specification, a chip is a far less complex construct than a non-trivial piece of software. ISA evolution is measured in years and decades. An equivalent piece of software has a relatively small nu
      • Re:Bugs (Score:2, Insightful)

        by Anonymous Coward
        Excuses, excuses, excuses.

        While you're right about most of the transistors being cache, the fact is that chip designs do go through a lot more testing (i.e. simulation) than most software.

        Largely it's economics. It's been a few years since I was involved in chip design (0.25 um) stuff, but IIRC it cost a few hundred thousand dollars just to make the masks for a silicon rev. At least 90% of the effort went into simulations and testbenches that are run before you see first silicon. The only software that gets that kind of
    • Re:Bugs (Score:4, Insightful)

      by earthforce_1 ( 454968 ) <earthforce_1@y[ ]o.com ['aho' in gap]> on Tuesday April 05, 2005 @09:16PM (#12149862) Journal
      There are plenty of silicon bugs and I have seen many of them. Some were real ugly. (I currently do ASIC verification in my day job.) I remember seeing about 3 or 4 pages of errata on the 386. In most cases they had software workarounds - i.e. don't use these two instructions together, pad certain things with a nop, flush the cache if you cross a page boundary under certain conditions, etc. - except for the infamous FDIV bug.

      After the FDIV bug, they added a means of "patching" the instruction set in software as part of the BIOS boot procedure. Of course, there is no substitute for testing the hell out of it as much as possible before releasing.

      Software can be just as reliable if you put the effort into it. Usually it isn't done, because it is usually easy to patch the software on the fly, but a bad ASIC bug means an expensive respin.

      Hardware design is actually software design anyway - they have special languages for it such as Verilog and VHDL. If you have a foot in both camps, you would be surprised how little difference there is between hardware and software design methodologies.

    • Obviously there is nothing wrong with my simple programs. Any odd behavior can be explained by the complex hardware, you know, sun spots flipping bits and other errors induced in hardware, I know it is not my code ;-)
    • But in the device in question, the vast majority of the 1.7 billion transistors are all doing the same thing, i.e. being part of the cache. Software isn't written like that - in a 1.7 billion LOC program, most of the lines of code would be doing different things rather than forming a vast array of identical devices.
  • by Infinityis ( 807294 ) on Tuesday April 05, 2005 @08:03PM (#12149395) Homepage
    I can just see Dr. Evil now...

    "I demand the chip have...SIXTY TRANSISTORS!" (pinky lightly touches corner of mouth).

    The guys at Intel start laughing hysterically...

    "I've changed my mind...I demand the chip have...ONE POINT SEVEN BILLION TRANSISTORS!" (pinky lightly touches corner of mouth)

    Intel guys gasp in shock...
  • Rather than calculating this forward in time, didn't someone trace this backwards in time, i.e. that you can see it halving every 18 months going back to the nineteenth century? I can't find a link on Google but I swear I saw it somewhere...
  • by exp(pi*sqrt(163)) ( 613870 ) on Tuesday April 05, 2005 @08:10PM (#12149445) Journal
    ...the moment. It depends on your application of course. But for number crunching it's hard to beat the GPU on recent graphics cards. For non-graphics applications you can expect speedups of 5-15 times (not %) for things like linear algebra, option pricing and signal processing. This has been increasing faster than Moore's Law and will likely continue to. Code written for GPUs is inherently streaming code, and hence easily parallelisable, so many of the complex dependencies that make CPUs tricky to speed up go away. These are exciting times and a big shift in programming paradigm is taking place.
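
    A small sketch of the distinction (mine; NumPy standing in for the GPU's data-parallel execution model):

      import numpy as np

      x = np.random.rand(1_000_000)

      # Streaming: each output depends only on its own input element, so all
      # million operations can run in parallel - the shape of GPU workloads.
      y = 3.0 * x * x + 2.0 * x + 1.0

      # Loop-carried dependency: element i needs element i-1, so the work is
      # inherently sequential no matter how many pipelines you have.
      z = np.empty_like(x)
      z[0] = x[0]
      for i in range(1, len(x)):
          z[i] = 0.5 * z[i - 1] + x[i]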
    • Here's a GPU performance graph [plasma-online.de] which illustrates his point.
    • ...that specialized processors can temporarily exceed Moore's "law" by doing things smarter. It's kinda like finding a new algorithm that scales as n*log n instead of n^2 and claiming the processor got faster, though.

      Code written for GPUs is inherently streaming code, and hence easily parallelisable, so many of the complex dependencies that make CPUs tricky to speed up go away.

      I think you put the cart before the horse there. The task you're trying to solve must be easily parallelizable and thus free
      • People were already doing parallelisable problems before GPUs appeared. For example 3D rendering is highly parallelisable. But unless you had access to specialised hardware you were unable to exploit it. Consider GPU Gems 1 or 2. The applications are from quite a few different disciplines (computational chemistry to finance) and yet very little reference is made to parallel programming because the code to do these things was already completely in a streaming form.
  • by Saeger ( 456549 ) <farrellj@nosPAM.gmail.com> on Tuesday April 05, 2005 @08:14PM (#12149477) Homepage
    Few people realize that Moore's Law is just one component of an even greater overall exponential trend which has been called The Law of Accelerating Returns [kurzweilai.net] (by Ray Kurzweil).

    Basically, it has been observed that any evolutionary process (including technology) will progress exponentially as it builds on past progress, with barely perceptible slow-down/speed-up "S-curves" as paradigm shifts occur.

    Moore's Law is certainly an important component of this trend, as it relates to computing power and eventual AI/IA accelerating to Singularity [singinst.org] in ~25 years, but there are many others in parallel: storage space, networking bandwidth, # of internet nodes, transportation speed, etc.

    One thing that certainly ISN'T keeping pace with our technology is our old evolutionary psychology; hopefully we can fix [hedweb.com] some of the more disgusting aspects of human nature before it's too late [gmu.edu].

    • Hubbert's Curve (peak oil) is going to trump Moore's Law. There will be no accelerating returns.
    • Heard of it... (Score:3, Interesting)

      by Goonie ( 8651 )
      ..but think it's bunk. There is absolutely no evidence to suggest that more-than-human AI is an inevitable consequence of continued development of computer hardware. The last 50 years of faster computers haven't helped much so far. Nor am I aware of some brilliant AI technique that will be made possible by much faster conventional computers. Technological progress generally happens in fits and starts, with radical jumps separated by long periods of slow, gradual improvement. The chip industry is possibly
      • There is absolutely no evidence to suggest that you have given more than a cursory glance to these ideas and no evidence that you are qualified to speak about it at all.

        So just shut up and don't pretend to understand things that you really don't. Thanks.
  • Gates Law (Score:4, Funny)

    by xs650 ( 741277 ) on Tuesday April 05, 2005 @08:59PM (#12149758)
    Gates Law: MS code bloat will double at the same interval as Moore's law.
  • by Jhyrryl ( 208418 ) on Tuesday April 05, 2005 @09:44PM (#12150074)
    My favorite data point has to be this: in 1965, chips contained about 60 distinct devices; Intel's latest Itanium chip has 1.7 billion transistors!"

    From Popular Mechanics, March 1949:

    "...computers in the future may have only 1000 vacuum tubes and perhaps weigh only 1 1/2 tons."

  • Two-page story on the "new" Intel and Craig Barrett's successor... I thought it was an ad at first. Don't mistake my sniping at their PR machine for dislike or ill wishes... we all have a lot riding on Intel even if AMD is coming on strong.
  • 007 (Score:2, Funny)

    by Rixel ( 131146 )
    Somehow, sometime, Moore's law will fail.

    Then you will have Lazenby's, Connery's, Dalton's, then (perhaps) Brosnan's law fail as well. Some laws can be.....broken, and twisted, and....um suckey. That last illiterative is mine....all mine, Mr. Bond.
  • ...in 1956, when they managed to fit one component on to a device.

  • Intel's latest Itanium chip has 1.7 billion transistors!"

    And it's as hot as one vacuum tube!

    (insert drum-roll and cymbal hit)
  • 40? (Score:4, Funny)

    by Chainsaw Messiah ( 223587 ) on Wednesday April 06, 2005 @08:40AM (#12152825)
    40th anniversary? That's weird, I swear just about a year and a half ago it was the 20th anniversary.
  • The Itanium contains about 28,000,000 times the number of devices of the 1965 chips (60). That's roughly 2^24.8 over 480 months, or about 19 months per doubling.
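
    For the record, the arithmetic (mine):

      import math
      ratio = 1.7e9 / 60                          # ~28.3 million
      doublings = math.log2(ratio)                # ~24.8
      print(f"{480 / doublings:.1f} months per doubling")  # ~19.4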
