
Forty Years of Moore's Law

kjh1 writes "CNET is running a great article on how the past 40 years of integrated chip design and growth have followed [Gordon] Moore's law. The article also discusses how long Moore's law may remain pertinent, as well as new technologies like carbon nanotube transistors, silicon nanowire transistors, molecular crossbars, phase change materials and spintronics. My favorite data point has to be this: in 1965, chips contained about 60 distinct devices; Intel's latest Itanium chip has 1.7 billion transistors!"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:Keeping Count (Score:4, Insightful)

    by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Tuesday April 05, 2005 @07:37PM (#12149191) Homepage Journal
    "We can keep Moore's Law alive just by stuffing the cache!"

    If it actually works, then there's little to complain about. Unfortunately, I don't think that things are quite so easy...
  • Kinda obvious.... (Score:3, Insightful)

    by FalconZero ( 607567 ) * <FalconZero@Gm[ ].com ['ail' in gap]> on Tuesday April 05, 2005 @07:40PM (#12149215)
    ...but the article doesn't point out that the law is based on silicon-transistor-based computing. Obviously, if we switch to other bases for computation, it probably won't apply, e.g. quantum or plasmonic (yes, I know the latter will probably be in silicon).

    Before anyone says "well, we've adjusted the length of time for doubling already; we'll do it again": for what it's worth, it's a bit silly calling X = 2^(Y/T) a law if you redefine T every time it doesn't fit.
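The parent's complaint can be made concrete: with a free parameter T, the form X = 2^(Y/T) can be fitted to almost any growing series after the fact. A minimal sketch, using made-up (year, count) points rather than real chip data:

```python
import math

def fit_doubling_years(points):
    # points: list of (years_elapsed, count)
    # Fit log2(count) = a + y / T by least squares; return T.
    n = len(points)
    xs = [y for y, _ in points]
    zs = [math.log2(c) for _, c in points]
    mean_x = sum(xs) / n
    mean_z = sum(zs) / n
    slope = sum((x - mean_x) * (z - mean_z) for x, z in zip(xs, zs)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return 1 / slope  # doubling time in years

# Illustrative data that doubles roughly every year:
data = [(0, 60), (10, 60_000), (20, 60_000_000)]
print(f"fitted doubling time: {fit_doubling_years(data):.2f} years")  # ~1.00
```

Re-fitting T always succeeds on data like this, which is exactly why the parent argues it makes a poor sort of "law".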
  • law? (Score:2, Insightful)

    by wpiman ( 739077 ) * on Tuesday April 05, 2005 @07:42PM (#12149232)
    Shouldn't it be Gordon's theorem if we are questioning it? People don't question the theory of relativity or the theory of evolution (OK, I meant educated people), and we still refer to these as theories.
  • Solving problems. (Score:2, Insightful)

    by brejc8 ( 223089 ) * on Tuesday April 05, 2005 @07:42PM (#12149233) Homepage Journal
    People always talk about the end of Moore's law, stating that we cannot solve some challenges. Other people always reply, "well, we've always managed to solve challenges and we probably always will."
    What I think is more interesting is how far ahead we can solve them. The clock distribution problem was foreseen and solved years ahead of it biting hard. Nowadays the problems arise and we have shorter and shorter time to react before they cause serious problems.
    This is the strongest evidence I've found that this technology will (eventually) stagnate.
  • Re:Keeping Count (Score:3, Insightful)

    by Anonymous Coward on Tuesday April 05, 2005 @07:51PM (#12149295)
    Moore's law was about transistors, not computing power, as it has commonly been misinterpreted. I feel that the phrase "stuffing the cache" somehow implies that using the transistors for cache is cheating. It is not cheating in any way, shape or form. Moore's law is about transistors, regardless of how you use them.
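A back-of-the-envelope count shows how much of a modern die cache can claim. This sketch assumes a 24 MB on-die cache and a classic 6-transistor SRAM cell; both figures are illustrative assumptions, not numbers from the article:

```python
# Estimate what share of the Itanium's 1.7 billion transistors
# a large on-die cache could account for.
SRAM_TRANSISTORS_PER_BIT = 6          # 6T SRAM cell (assumed)
cache_bytes = 24 * 1024 * 1024        # assumed 24 MB cache
cache_transistors = cache_bytes * 8 * SRAM_TRANSISTORS_PER_BIT
total_transistors = 1_700_000_000

print(f"cache transistors: {cache_transistors:,}")                        # 1,207,959,552
print(f"share of the die:  {cache_transistors / total_transistors:.0%}")  # 71%
```

Under those assumptions roughly seven transistors in ten are cache, which supports the parent's point: counting them is not cheating, it is simply what the law counts.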

  • It's not a law... (Score:5, Insightful)

    by GrahamCox ( 741991 ) on Tuesday April 05, 2005 @07:53PM (#12149322) Homepage
    It's not a law, it's an observation. Did you know the term 'law' for a scientific theory was coined by Isaac Newton, who felt that his 'Laws of Motion' were so right and pervaded the universe so deeply that they had to be a law? He wanted to convey that they had a deeper significance than a mere theory. In time, of course, even these 'laws' came to be shown to be incomplete, or only true for slow-moving objects. Ever since, every theory both worthy and crackpot has been called a 'law'. It's about time we returned to the humbler 'theory', 'theorem' or 'observation'. In the case of Moore's 'Law', it's not even a very good theory, since it only describes a very general trend; it cannot predict with any accuracy exactly how fast a chip will be, or how many transistors or elements it will have, at any time in the future.

    By the way, if the Itanium has 1.7 billion transistors, (I'll take the poster's word for it) then one has to ask - are they all pulling their weight? It seems a hell of a lot for what it does. Surely one way to squeeze more out of Moore's Observation is to come up with more efficient architectures and use fewer devices, working more efficiently/smarter/harder. Just a thought.
  • by exp(pi*sqrt(163)) ( 613870 ) on Tuesday April 05, 2005 @08:10PM (#12149445) Journal
    ...the moment. It depends on your application, of course, but for number crunching it's hard to beat the GPU on recent graphics cards. For non-graphics applications you can expect speedups of 5-15 times (not %) for things like linear algebra, option pricing and signal processing. This has been increasing faster than Moore's Law and will likely continue to. Code written for GPUs is inherently streaming code, and hence easily parallelisable, so many of the complex dependencies that make CPUs tricky to speed up go away. These are exciting times, and a big shift in programming paradigm is taking place.
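The streaming-versus-dependent distinction the parent describes can be sketched in plain Python (the function names are illustrative, not part of any GPU API):

```python
# Streaming (data-parallel) code has no loop-carried dependencies:
# each output element depends only on its own inputs, so the work
# can be split across many GPU lanes. A recurrence cannot.

def saxpy(a, xs, ys):
    # Streaming: every element is independent -> trivially parallel.
    return [a * x + y for x, y in zip(xs, ys)]

def prefix_sum(xs):
    # Loop-carried dependency: each step needs the previous result,
    # so this naive version cannot be parallelised the same way.
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

print(saxpy(2, [1, 2, 3], [10, 20, 30]))  # [12, 24, 36]
print(prefix_sum([1, 2, 3, 4]))           # [1, 3, 6, 10]
```

Every element of saxpy could be computed by a separate GPU lane; prefix_sum, as written, cannot, because each step waits on the previous one.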
  • by Anonymous Coward on Tuesday April 05, 2005 @08:11PM (#12149454)
    Yes, but in a sense so do CPUs: multiple registers, pipelines, etc.

    I should have been clearer, but what I'm expecting is that when GPU designers hit a brick wall they'll take two cores (with their own internal parallelized structures) and bolt them together - more brute force than smart answer.

    In fact, now you mention it, I suppose SLI is pretty much that - use two cards rather than one...
  • by norkakn ( 102380 ) on Tuesday April 05, 2005 @08:21PM (#12149524)
    Look at what the world was like even 150 years ago. Do you really think that we have any clue what the building blocks of society will be? 150 years ago the telegraph was pretty hot stuff.
  • Re:Bugs (Score:5, Insightful)

    by rbarreira ( 836272 ) on Tuesday April 05, 2005 @08:21PM (#12149525) Homepage
    Well, several reasons come to mind:

    - Software usually performs a more diverse set of operations

    - The environment where hardware runs is more predictable than the software one

    - Formal verification is probably easier to perform with hardware.

    - It's easier to verify low level stuff than high level abstractions.

    I'd add more, but I've got other things to do unfortunately...
  • Re:Bugs (Score:2, Insightful)

    by Anonymous Coward on Tuesday April 05, 2005 @08:43PM (#12149658)
    Excuses, excuses, excuses.

    While you're right about most of the transistors being cache, the fact is that chip designs do go through a lot more testing (i.e. simulation) than most software.

    Largely it's economics. It's been a few years since I was involved in chip design (0.25 um) stuff, but IIRC it cost a few hundred thousand dollars just to make the masks for a silicon rev. At least 90% of the effort went into simulations and testbenches that are run before you see first silicon. The only software that gets that kind of testing effort is true hi-rel stuff (e.g. fly-by-wire).

    As far as ISA being the spec...that's the simple part. Modern CPU design puts a lot more effort into fun stuff like instruction scheduling, branch prediction, yada, yada, yada (not my specialty).

  • by s1234d ( 542588 ) on Tuesday April 05, 2005 @08:44PM (#12149664)
    Hubbert's Curve (peak oil) is going to trump Moore's Law. There will be no accelerating returns.
  • by Temsi ( 452609 ) on Tuesday April 05, 2005 @08:53PM (#12149714) Journal
    Actually, you're wrong in assuming his law will cease to apply once it becomes impossible to make transistors, as the law didn't apply specifically to transistors in the first place.

    His observation was made to Electronics magazine, in the April 19th, 1965 edition.
    He didn't mention transistor density.
    He didn't mention processors (as microprocessors were still 6 years away from being invented).

    He was describing component integration on economical integrated circuits.
    He observed that component integration doubled approximately every 12 months. He increased that number to 24 months, in 1975. Since then, other people have split the difference to 18 months.

    None of those figures, 12, 18 or 24 months, are accurate.
    If the 18-month figure were accurate, today's chips would have 75 billion transistors.
    With his original 12-month figure, 27 trillion.
    With his revised 24-month figure, 37 million...

    Also, this isn't even a law... it's an observation.

    Please note... I relied on Tom R. Halfhill's column in Maximum PC (April 2005) "The Myths of Moore's Law" for this reply.
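The compounding arithmetic behind those projections is easy to reproduce. This sketch assumes a baseline of roughly 60 components in 1965 and a 2005 endpoint; Halfhill's column may start from different figures, which would explain why its totals differ:

```python
def projected_count(base, years, doubling_months):
    # Straight compound doubling: base * 2^(elapsed / doubling period)
    return base * 2 ** (years * 12 / doubling_months)

for months in (12, 18, 24):
    count = projected_count(60, 40, months)
    print(f"{months}-month doubling over 40 years: {count:,.0f}")
```

Whatever the exact baseline, the three doubling periods land many orders of magnitude apart, which is the parent's real point: none of the popular figures is accurate.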
  • Re:Bugs (Score:4, Insightful)

    by earthforce_1 ( 454968 ) <earthforce_1 AT yahoo DOT com> on Tuesday April 05, 2005 @09:16PM (#12149862) Journal
    There are plenty of silicon bugs and I have seen many of them. Some were really ugly. (I currently do ASIC verification in my day job.) I remember seeing about 3 or 4 pages of errata on the 386. In most cases they had software workarounds, except for the infamous FDIV bug: i.e. don't use these two instructions together, pad certain things with a nop, flush the cache if you cross a page boundary under certain conditions, etc.

    After the FDIV bug, they added a means of "patching" the instruction set in software as part of the BIOS boot procedure. Of course, there is no substitute for testing the hell out of it as much as possible before releasing.

    Software can be just as reliable if you put the effort into it. Usually it isn't done, because it is usually easy to patch the software on the fly, but a bad ASIC bug means an expensive respin.

    Hardware design is actually software design anyway; there are special languages for it, such as Verilog and VHDL. If you have a foot in both camps, you would be surprised how little difference there is between hardware and software design methodologies.

  • by SageMadHatter ( 546701 ) on Tuesday April 05, 2005 @09:22PM (#12149919)
    I'm surprised that no one has spoken up and pointed out that Moore's law has not held for the past few years. In 2003, I purchased a P4 3.06 GHz, which I'm using right now to type this message. 2005 - 2003 = 2 years. Where are the 6.12 GHz machines?

    "Well, it's not about hertz, it's about performance!"

    Judging from benchmarks, the current top-of-the-line CPUs are not twice as powerful as my P4 3.06 GHz. Sooo... anyone care to explain how Moore's Law is still being used?
  • Re:Keeping Count (Score:5, Insightful)

    by MOBE2001 ( 263700 ) on Tuesday April 05, 2005 @10:28PM (#12150370) Homepage Journal
    If it actually works, then there's little to complain about.

    It can only work for so long. The biggest problem that is keeping performance down is not the processor but the memory retrieval and writing system: only one memory location can be accessed at any one time. This is also known as the von Neumann bottleneck. Not even clustering can get around this problem because there is a need for inter-process communication that slows things down. If someone could come up with a system that allows unlimited random and simultaneous memory access, the physical limit to processor speed would not be such a big deal anymore. We would have found the holy grail of fast computing.
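The bottleneck the parent describes behaves like Amdahl's law with memory access as the serial fraction. A toy model, where the 30% memory share is an assumed illustration, not a measured figure:

```python
def speedup(n_procs, mem_fraction):
    # Compute scales with the processor count; accesses through a
    # single memory port do not, so they act as a serial fraction.
    serial = mem_fraction
    parallel = (1 - mem_fraction) / n_procs
    return 1 / (serial + parallel)

for n in (1, 4, 16, 1024):
    print(f"{n:>5} processors -> speedup {speedup(n, 0.3):.2f}x")
```

With 30% of the work serialised behind the memory port, no number of processors pushes the speedup past about 3.3x; removing that serialisation is the "holy grail" the parent mentions.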
  • by Anonymous Coward on Tuesday April 05, 2005 @10:28PM (#12150375)
    You kind of forgot the major reason there ...
