Forty Years of Moore's Law
kjh1 writes "CNET is running a great article on how the past 40 years of integrated chip design and growth have followed [Gordon] Moore's law. The article also discusses how long Moore's law may remain pertinent, as well as new technologies like carbon nanotube transistors, silicon nanowire transistors, molecular crossbars, phase change materials and spintronics. My favorite data point has to be this: in 1965, chips contained about 60 distinct devices; Intel's latest Itanium chip has 1.7 billion transistors!"
Keeping Count (Score:5, Informative)
That's the Montecito dual-core Itanium, w/24MB of cache (only about 120 million transistors per core, with the balance largely that motherlode of cache), and one you could probably fry a steak on.
"We can keep Moore's Law alive just by stuffing the cache!"
"Brilliant!"
"Brilliant!"
Suddenly they were crushed by a giant can of Guinness containing not even an electronic sausage...
Don't hold your breath... (Score:5, Informative)
Re:Don't hold your breath... (Score:2, Informative)
Maybe not, but there's certainly been a bump in progress recently: no notable new desktop CPUs, and no increase in complexity, component count or speed - unless you want to count cache. Nothing in the last 18 months has fulfilled the criteria set out in Moore's Law. Having said that, this anomaly only applies to CPUs.
I would hazard a guess that the law still holds true in memory - major advances there in transistors per square inch - and almost certainly in graphics processors. I envisage more specialised chips appearing to take a lot of the core work from the CPU - World Physics Processors, anyone?
With the current workaround for these limitations being to cobble two cores together on a chip, could this also be the route that GPU manufacturers take in a few years?
Do you have a source for the 120M transistors ? (Score:3, Informative)
The way I see it, 24 MB = 1024*1024*8*24 * 6 transistors/SRAM cell = 1.2B transistors for cache, still leaving 500M for logic. Well, we can factor in address storage and cache access logic, but I'd still like to see some harder data than this.
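The parent's back-of-the-envelope figure checks out; here's the same arithmetic as a quick sanity check (assuming the standard 6-transistor SRAM cell and ignoring tag arrays and access logic, as the parent notes):

```python
MB = 1024 * 1024                # bytes per megabyte
cache_bytes = 24 * MB           # Montecito's 24 MB of cache
cache_bits = cache_bytes * 8
transistors = cache_bits * 6    # 6 transistors per SRAM cell
print(transistors)              # 1,207,959,552 -- about 1.2B
```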
Paul B.
Re:Do you have a source for the 120M transistors ? (Score:5, Informative)
I don't know how many additional SRAM cells Intel is planning in each of the cache levels, so the 1.2B transistors for cache can climb up to 1.4-1.6B.
Someone posted a number of 1.47B transistors for the L3 cache at Real World Tech [realworldtech.com]. I'm not sure how credible or accurate that number is.
Another article on RWT shows an approximate die floor plan and other info at:
http://www.realworldtech.com/page.cfm?ArticleID=R
Re:When was the last time Moore's law was correct? (Score:5, Informative)
First of all, Moore's Law implies that the number of transistors per integrated circuit will double every 18 months (which is not really what he said, see Understanding Moore's Law [arstechnica.com]).
Second of all, this has held true and is continuing to hold true.
Third of all, clock speed does not reflect transistor count or density, and none of those is the sole contributing factor to 'power' or 'performance'.
I don't know what's sadder; wondering if the parent was actually a joke, or wondering how it got +5 insightful. Damn.
Re:It definitely has less that 300 - 400 years. (Score:5, Informative)
In addition... (Score:2, Informative)
Peak Oil folks take one valid idea (oil is finite, and running out will be painful), but then devolve into irrational fear-mongering about it. If thermal depolymerization can net the US four billion barrels of oil from agricultural waste we currently throw away, running out of ground oil ain't going to be causing a new Stone Age.
Re:Keeping Count (Score:5, Informative)
The problem with bigger & bigger cache is that it has diminishing returns. This is why Intel's "Extreme" chips are a waste of money.
The inability to do anything useful with all those transistors is why we're seeing the advent of multi-core chips, which are neat but fail to preserve the conventional single-threaded programming model. This places the burden of creating explicit parallelism on the programmer, and leads to more complicated code, which means it costs more to write and also contains more bugs.
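To illustrate the burden the parent is describing, here's a minimal Python sketch (my own illustration, not from the article): summing a list goes from a one-liner to explicit thread management with shared state and locking - and because of CPython's GIL you don't even get a speedup for CPU-bound work like this.

```python
import threading

data = list(range(1_000_000))

# Single-threaded: trivial, one line.
total = sum(data)

# Multi-threaded: the programmer now owns the partitioning,
# the shared state, and the synchronization.
partials = []
lock = threading.Lock()

def worker(chunk):
    s = sum(chunk)
    with lock:          # protect the shared result list
        partials.append(s)

mid = len(data) // 2
threads = [threading.Thread(target=worker, args=(c,))
           for c in (data[:mid], data[mid:])]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sum(partials) == total   # same answer, several times the code
```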
You know what bothers me even more? (Score:3, Informative)
Sure, taking Moore's law literally, computers are 1 million times faster than 30 years ago. Arguably that should translate into _more_ than 1 million times more work per second, because compilers have evolved too, and expensive optimization techniques have become more affordable. (A compiler optimization technique that would have taken a week on a 70's mainframe, now takes seconds.) We also have better tools.
But are we doing 1 million times more with them? Nope.
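The "1 million times" figure above follows directly from the common 18-month statement of the law - a quick check, assuming 30 years and one doubling every 18 months:

```python
years = 30
doubling_period = 1.5                  # years, per the common formulation
doublings = years / doubling_period    # 20 doublings
speedup = 2 ** doublings
print(speedup)                         # 1,048,576 -- about a million
```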
Every time we get better tools, the accounting dept just gets the idea "w00t! Now we can _really_ hire untrained monkeys to use them." In fact, the better tools and computers you get, the worse code you get.
It's not just code _performance_ that went south; any clue about security or good design went south too. Actually analyzing what could go wrong got replaced at some point by magic talismans like "we use Java so we can't possibly have a security problem" or "we use HTTPS, so our site is by definition secure." Too bad that one only has to edit a URL to bypass all those magic talismans.
And then there's the BDA (Buzzword Driven Architecture) effect.
The whole computer industry is one big scam where marketing is in control, and the biggest outright liar and con artist wins the contract. So every single dud or unfinished (or outright _stupid_) idea is marketed as _the_ second coming of Christ, the cure for all enterprise problems, the cure for cancer, etc. And there's one born every minute who actually believes that drivel... yet again.
So programs are written with the sole purpose of having as many buzzwords in them as possible. Everything _must_ involve a SOAP call, to an EJB, which uses XSLT instead of just processing the damn data, etc.
True story: I've actually benchmarked one such crap buzzword-driven framework we were forced to use here. It took 1.1 seconds for a call to an empty method, on a 2.26 GHz P4 computer. No, not milliseconds. 1.1 _seconds_. A cool 2.5 billion CPU cycles just to call an empty function.
We've actually exceeded Moore's law. A computer in '70 may have been 1 million times slower, but we're using a _billion_ times more cycles to do the same thing. Yep, the modern version actually runs _slower_.
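The cycle count is easy to verify from the numbers given:

```python
clock_hz = 2.26e9       # 2.26 GHz P4
call_time = 1.1         # seconds per empty-method call
cycles = clock_hz * call_time
print(f"{cycles:.3e}")  # ~2.5e9 cycles burned per call
```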
Being an ex-assembly programmer, that realization hurt. I'm talking physical pain.
So to end this long rant: IMHO, I doubt Moore's law will become irrelevant any time soon. You could increase CPU speed another 100 times, and someone will just find the monkeys to write code 1000 times slower for it.
Re:Moore's Law is probably being exceeded at... (Score:1, Informative)
. 8086 . . . . : 0.03 million transistors (1978)
. 80286. . . . : 0.13 million transistors (1982)
. 80386DX. . . : 0.27 million transistors (1985)
. 486. . . . . : 1.2. million transistors
. Pentium. . . : 3. . million transistors
. Pentium Pro. : 5.5. million transistors
. Pentium 2. . : 7.5. million transistors
* Nvidia TNT2. : 9. . million transistors
. Alpha 21164. : 9.3. million transistors (1994)
. Alpha 21264. : 15.2 million transistors (1998)
. PPC G3 . . . : 22 . million transistors
* Geforce 256. : 23 . million transistors
. Pentium 3. . : 28 . million transistors
. PPC G4 . . . : 33 . million transistors
. Pentium 4. . : 42 . million transistors
. PPC G5 . . . : 52 . million transistors
. P4 Northwood : 55 . million transistors
* GeForce 3. . : 57 . million transistors
* GeForce 4. . : 63 . million transistors
* Radeon 9700. : 110. million transistors
* GeForce FX . : 125. million transistors
. P4 Prescott. : 125. million transistors
* Radeon X800. : 160. million transistors
. P4 EE. . . . : 178. million transistors
* GeForce 6800 : 220. million transistors
properly formatted at http://nothings.org/trans.txt [nothings.org]
I wish I'd bothered to keep citations for all these numbers, but I didn't realize when I started this how long it was going to go.
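For what it's worth, the dated entries in that list are enough to estimate the doubling time directly. A rough fit using just the two endpoints that carry years, the 8086 (0.03M, 1978) and the Alpha 21264 (15.2M, 1998):

```python
import math

t0, n0 = 1978, 0.03e6   # 8086
t1, n1 = 1998, 15.2e6   # Alpha 21264
doublings = math.log2(n1 / n0)
doubling_time = (t1 - t0) / doublings
print(f"{doubling_time:.2f} years per doubling")  # ~2.2 years
```

That lands close to the "about every two years" version of the law, for CPUs at least.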