Moore's Law set to continue
Chips are made by etching tiny wires and transistors onto a silicon substrate.
The process used is lithography, which resembles photography: layers of special
chemicals are added onto the silicon base. Shining light through a mask changes
the properties of the layers where the light hits, allowing further treatment
to produce transistors, wires, and other so-called features. Classical physics
limits the size of features achievable with a given wavelength lambda to the
Rayleigh diffraction limit of lambda/2, which is reached by exploiting optical
interference. In 1999, Yablonovitch and Vrijen suggested using two-photon
exposure techniques to improve this resolution. Their interference pattern
contained a high-frequency 4x term (allowing lambda/4-sized features), but also
a lower-frequency 2x term of greater intensity, which made it unusable for
lithography.
Now researchers at JPL (USA) and the University of Wales (UK) have
shown that using entangled photons removes the 2x term, allowing features
of lambda/4 to be created. Their paper goes on to show that, in general,
features as small as lambda/2N should be possible for N-photon absorbing
substrates. Slashdot contacted one of the authors, Jonathan Dowling, who told
us that experimental validation of these results is underway at UMD and is
looking good. This means that Moore's Law, the observation that chip speed doubles
roughly every 18 months, will probably not encounter a limit due to lithography.
Thanks to B1FFMaN for bringing the story
to our attention, and to Jonathan Dowling for emailing us the article in advance of its publication.
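For a rough sense of the numbers, here is a minimal Python sketch of the scaling claim above; the 193 nm wavelength is an illustrative deep-UV value chosen here, not a figure from the paper.

```python
# Minimum printable feature size under the classical Rayleigh limit
# versus the N-photon scheme described above (lambda / 2N).
def rayleigh_limit(wavelength_nm: float) -> float:
    """Classical one-photon limit: lambda / 2."""
    return wavelength_nm / 2

def n_photon_limit(wavelength_nm: float, n_photons: int) -> float:
    """Claimed limit for an N-photon absorbing substrate: lambda / 2N."""
    return wavelength_nm / (2 * n_photons)

if __name__ == "__main__":
    wavelength = 193.0  # nm, a common deep-UV lithography wavelength (illustrative)
    print(f"Classical limit: {rayleigh_limit(wavelength):.1f} nm")
    for n in (2, 3, 4):
        print(f"{n}-photon limit: {n_photon_limit(wavelength, n):.1f} nm")
```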
What happened to the electron barrier? (Score:2)
Re:Yup, (Score:2)
the human imagination and knowledge set.
By most accounts, physics hasn't changed over the past 100 years and won't over the next 100. Only our understanding changes.
what about tunneling? (Score:2)
-----------------
According to my calculations... (Score:1)
Re:Hard limit in lithography that is overlooked (Score:2)
Ouch (Score:2)
-jpowers
Re:Boyle's Law of Electronics (Score:2)
We'll all be back to programming everything directly in assembler, writing to bare metal, and economizing every last cycle of bandwidth and bit of cache. Programming will be painful, and software will be elegant. GUIs will be a federal offense.
Re:Moore's Law?? (Score:1)
Or was it Michael Moore, and his theory that twice as many dollars in corporate welfare is spent by the US gov't every 2 years? One of those two...
Re:Boyle's Law of Electronics (Score:2)
Streaming MP3s and video would sound like Star Trek to someone in the early 1980's!
Yeah. Streaming MP3s from the point of view of the early 1980s: "So, you mean it's like the radio, only it's bigger than a bread box and has a big-assed TV sitting on top? Good deal!"
Likewise, streaming video: "So it's like TV, only in a little teeny sub-section of a regular size TV, and the clips are a few seconds long and sometimes break up for no apparent reason."
Personally, I'm glad to be living in these enlightened times. I pity the poor saps from that primitive generation.
Re:So how long can Moore's Law continue? (Score:2)
Well, these guys claim they can switch a single hydrogen atom between two silicon atoms.
Check out the press release [mic.dtu.dk]
And the slashdot discussion about it [slashdot.org]
Re:Yup, (Score:2)
Did you read the last part of the comment? ONLY OUR UNDERSTANDING CHANGES. Physics hasn't changed. Our KNOWLEDGE OF PHYSICS has.
Re:It's the Substrate, Stupid (Score:1)
Re:Innovation (Score:1)
The current speed record for a digital flip-flop is 770GHz [sunysb.edu].
While this technique is nowhere close to going into mainstream (or even scientific) computing, it still shows that circuits operating in the close-to-one-THz range are possible. Things might be different in twenty years. And processors in the THz range would for sure be nowhere close to the CPUs we have today; probably heavily asynchronous processors using architectures like systolic arrays etc. would have to be used.
Also don't forget that mainstream CPUs are not made with the fastest technology available, but with the cheapest. By using GaAs, Cray was able to achieve clock speeds of around 1GHz at a time when a stock PC was clocked at 33MHz - so what might a GaAs CPU with current process technology scale up to today? Or how about BiCMOS? (Given that you have a personal power plant and some insane cooling device ;) )
Mod this up! (Score:1)
Of course, barring other difficulties, this is still an improvement from 0.14 to 0.02 micron, which improves the circuit density by a factor of forty-nine. After that, I'm not sure what the next leap would be. Nanotechnology? Those electricity-conducting DNA strands? Etching with electron beams?
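For reference, the density arithmetic above as a minimal sketch (assuming, as the comment does, that density scales with the inverse square of the feature size):

```python
# Circuit density scales roughly as 1 / (feature size)^2,
# so shrinking from 0.14 um to 0.02 um gives (0.14 / 0.02)^2 = 49x.
old_feature_um = 0.14
new_feature_um = 0.02
density_gain = (old_feature_um / new_feature_um) ** 2
print(f"Density improvement: {density_gain:.0f}x")  # -> 49x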
Re:What happened to the electron barrier? (Score:1)
Re:Moore's law and software (Score:1)
Not quite yet (Score:2)
Lambda (Score:2)
If you wanna explain it for us, do it so that the average grade 12 student can get it, please
The point is, this is potential solution to 1 prob (Score:1)
I think these people are missing the point. Sure there are thousands of really challenging problems that need to be overcome in order to keep Moore's Law on track through the next decade. But... each problem has to be solved individually and this is one potential solution to one problem.
EBeam, XRay and other Next Generation Lithography methods look promising but each of these has their own problems as well. The industry has long taken the approach of attacking problems from various angles and letting the best suited technology lead the way.
That's what this article is about. It's one possible approach to one problem that we know we need to solve.
Lithography ain't the problem... (Score:2)
Think of it another way: ever had a poorly shielded speaker wire crossing over a power cable? Remember that buzzing noise? Same concept, and it's true for processors too. In fact, companies like Intel and Motorola have lots of research money invested in finding out how slow a turn has to be, what a turn can be near, and so forth.
Pretty soon the lithography will be so small electrons will be useless. :-)
Re:Who moderated my post as 'funny'? (Score:1)
So how long can Moore's Law continue? (Score:2)
Re:Yup, nope (Score:2)
It's all true! ±5%
Re:Moore's law and software (Score:1)
As the processor speed increases, the amount a program can do in a given time increases. And it does more stuff because people like you ask it to:
'I want a command prompt'.
'I want a file shell'.
'I want a GUI'.
'I want multitasking'.
'I want true multithreading'.
'I want networkability'.
'I want a punk-ass paperclip to annoy the hell out of me'.
etc..
So yeah, of course it's going to take up the processor time - we're asking our operating systems and programs to do WAY more than they did even just a few years ago. It has nothing to do with bad code.
(ps, slashdotpeople: don't be anal and tell me that multitasking came before gui, or shit like that. i'm aware. and don't care.).
rhyac.
We'll be waiting for this one... (Score:1)
This new quantum approach looks really promising, but as the article states, the engineering challenges are going to delay the actual use of this tech for ... a long time. Guesses? 10 years? Who can say. I like the comment about Moore's Law pushing forward, but really... this tech will take a while to have any effect on our CPU purchases.
Sengan's back??!?!? (Score:1)
--
Re:Physical Limits (Score:1)
Noise and decay can also be fought by standard techniques, but I do suspect that before long we're either going to run into a size barrier using current methods, or at least technology advance will slow to a crawl. The question isn't whether our current fabrication methods will change, but what will take their place...
Re:Moore's Hypothesis (Score:2)
"Moore's Law" has always been considered more of a goal that a "Law".
Re:Finally!!! (Score:1)
--
Re:So how long can Moore's Law continue? (Score:1)
Moore's Law? (Score:1)
physical limits to computation... (Score:1)
THz computer? No Problem! (with caveats[*]) (Score:1)
Way, WAY, beyond Moore's Law.
Here is truly, The Last Computer [newscientist.com]
* "Admittedly, it might be a bit inconvenient putting a nuclear fireball on your desk."
It's the Fabs, stupid (Score:2)
Moore's Law in Wired (Score:2)
Re:Maybe not lithography .... (Score:1)
The speed of light limitation does not exist at the quantum level, significantly altering your back-of-envelope calculation.
Re:Maybe not lithography .... (Score:1)
Re:Lambda (Score:1)
Re:Maybe not lithography .... (Score:1)
Re:Innovation (Score:1)
Re:Sengan's back??!?!? (Score:1)
Where's pinguin?
IIRC, the /. login user ids started when ppl started posting using nicks like BOredAtWork. Some of you moderators are probably saying, huh? what the hell are you talking about.
I miss Meeept!!!
Re:We'll be waiting for this one... (Score:1)
I agree and I'll add that they've kept them there for a long time in the past. Industrial scale X-ray lithography looks as much like a pipe dream now as it did ten years ago. As an undergrad I took a course in quantum electronics (basically a course in quantum physics and electronic device applications). Tangentially, X-ray lithography was mentioned as a "Good thing", if the engineering details could be worked out. That was around 1990 and they still haven't been worked out to my knowledge. Any new approach intrigues me. Maybe enough new approaches will yield something that can be worked out in the near future. X-ray lithography is starting to look like fusion: something amazingly good that might be worked out at some indefinite point in the future. We need something quicker than that!
"It's the Wires" (Score:2)
Even assuming we can reach 35nm gate lengths (that's a .035um process), the speed of the wires will be problematic because (to a rough approximation) the delay of a wire increases as 1/(scale factor)^2. In other words, if you decrease the feature size of your chips by a factor of 0.5, the delay of global wires goes up 4x. (Transistors are roughly sped up by 2x, however.)
Global wires are used to connect big functional blocks of a uP, like the ALUs, Cache, register file, etc.
The delay of small local wires (connecting adjacent gates, for example) stays about the same, but this still poses a problem, since the ever-faster transistors will have to wait for the wires.
Wiring is already responsible for much of the delay of a uP, and it is only going to get worse. Even if transistors get to 35nm (which the SIA predicts will happen in 2014), they only get 7x faster. This corresponds to a 15% annual improvement rate, well short of Moore's Law's 58%.
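A minimal sketch of that scaling argument, using the comment's own first-order approximations (global wire delay growing as the inverse square of the scale factor, transistor speed as its inverse), not exact device physics:

```python
# First-order scaling model from the comment above:
#  - global wire delay ~ 1 / s^2  (s = feature scale factor, s < 1 shrinks)
#  - transistor delay  ~ s        (i.e. transistors get ~1/s faster)
def relative_wire_delay(scale: float) -> float:
    return 1.0 / scale**2

def relative_transistor_speedup(scale: float) -> float:
    return 1.0 / scale

for s in (0.5, 0.25):
    print(f"scale {s}: global wires {relative_wire_delay(s):.0f}x slower, "
          f"transistors {relative_transistor_speedup(s):.0f}x faster")
```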
A bunch of this is described here [wisc.edu].
Imagine a plot of the relative performances of the fastest uniprocessor machine on earth compared to the fastest uP, graphed vs. time; see the paper [wisc.edu]. What you'd see is that, in fact, the fastest computers in the world have been improving at a rate closer to 12-14% annually. uPs got a late start and were many orders of magnitude slower. uPs have been catching up, borrowing technologies from minis and supercomputers, which has yielded yearly gains of 50-60% over the last 20 years or so. But uPs are about to hit the same hurdles that have been bothering supers for a long time. (Supercomputers have been communication-bound for some time.) Until something fantastic happens (optical? organic?) uPs and supercomputers may be very similar in performance.
One last thing to note is that the bad news about future growth of uPs places an assumption on the microarchitecture--that it remain largely the same as today's. There are other possibilities being researched, for example, the RAW group at MIT. They may be able to cope with the wire delays in ways that a conventional uP cannot.
-Ed
Re:According to my calculations... (Score:1)
Bingo with the 1-d comment. The classical Rayleigh limit is inherently a 1-d measure. Halving that limit gives the potential for 4x as many transistors.
Re:#127 (Score:1)
Innovation (Score:2)
I doubt that we will see the slowdown of processors anytime soon. When lithography's run finally comes to a standstill, quantum computing will have matured enough to grab the baton and keep up the race. To what end, I don't know. Right now I'd just love to see a decent memory tech come out. DDR may beat RAMBUS, but that's not saying much.
Yup, (Score:1)
By the time processors hit the 2GHz mark, Windows 2002 will be requiring 1.66GHz. The only thing
that'll really wind up being the boundary will be physics. I wonder when
we'll end up with a small fusion reactor on top of the processor (:
Re:Moore's Law?? (Score:2)
Maybe not lithography .... (Score:1)
And then poof! .... nothing smaller than atoms ... Moore's law breaks down.
Re:Moore's Law?? (Score:1)
Processor speed would double every 18-21 months [Moore's Law], as opposed to Gates' Law:
software speed halves every 18-21 months.
Re:Moore's Law?? (Score:2)
--
I hate to be pedantic, but... (Score:1)
How many potholes on the road there, though? (Score:3)
The physical universe may not constrain us as much as we had feared, but it looks like gross human incompetence is filling that role quite nicely.
Re:Moore's Law?? (Score:1)
> foundation do we use to consider this a "law"???
Gordon Moore was one of Intel's founders. Unbeknownst to a lot of people, he didn't actually come up with this at Intel, but at his previous company, Fairchild Semiconductor.
Moore's law is a law in the same sense as Murphy's Law I suppose. Not like one of Newton's laws.
Take it as you will
YEAH (Score:1)
*raises fist*
Re:Maybe not lithography .... (Score:1)
By the time atoms pose the physical limit to Moore's Law, sub-atomic particles that we currently know nothing of will extend it.
Guaranteed.
what about interconnect, leakage, and power diss.? (Score:5)
But consider:
1. interconnect: as feature sizes diminish, the physical height of metal lines becomes greater than their width, making them look like skyscrapers, and the IC isn't so planar anymore. The problem then becomes the physical strength of the conductor, as it easily breaks as it is forced to bend over the surface of the chip. Copper interconnect is one partial solution to this problem, but it is not a magic bullet and things are getting worse all the time.
2. leakage: as transistors shrink, their gate oxide also scales. Therefore, for a given supply voltage, the electric field in the transistor increases until the gate blows out, so power supply voltages have to be scaled down as well. Unfortunately, this tends to slow down the transistor unless the threshold voltage is also reduced, but then we have increased leakage current. This is quite a trade-off, as increased leakage current not only increases the power dissipation (more on this next) but also makes it more difficult to design RAM and mixed-signal/analog blocks.
3. Power Dissipation: Even though the supply voltage is decreased, and the power dissipation of a single transistor decreases as the square of the supply voltage, overall power will increase for two reasons. First, there are many more transistors on the chip switching ever faster, and second, the reduced threshold voltages mean there will be significant static power drain even in CMOS logic. 1 nA of leakage/transistor in a 1 Volt, 1 Billion Transistor microprocessor of the future would burn a full Watt even without switching (a minimal sketch of this arithmetic follows below)! This is a very serious problem, and not only for portable applications, because it is difficult to package such a power-hungry chip cheaply and efficiently.
While this is an interesting development to optical lithography, I don't think it will have much impact on Moore's law. In fact, I'm much more worried about the power issue and The Interconnect Problem.
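The static-power arithmetic from point 3 above, as a minimal sketch using the comment's illustrative 1 nA / 1 V / billion-transistor figures:

```python
# Static power from leakage alone: P = V * I_leak * N_transistors
leakage_per_transistor_a = 1e-9   # 1 nA per transistor (illustrative)
supply_voltage_v = 1.0            # 1 V supply
num_transistors = 1e9             # a billion-transistor chip
static_power_w = supply_voltage_v * leakage_per_transistor_a * num_transistors
print(f"Static power with zero switching: {static_power_w:.1f} W")  # -> 1.0 W
```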
A few optics comments (Score:4)
- classical imaging is limited by wavelength; the shorter the wavelength, the better the resolution. Lithography, fundamentally, is imaging a mask at a reduced size onto reactive material. So, the approach has been to decrease the wavelength, to get smaller feature sizes.
- as the wavelength and feature sizes decreased, optical interference effects became more of a concern. But they also learned to play cool tricks with those effects. Instead of using a conventional 'binary' mask (either opaque or transparent), they implemented phase masks. Certain areas, usually at corners and line ends, have a different optical thickness, introducing a phase shift into part of the light and allowing interference, which lets certain feature sizes be reduced, approaching the lambda/2 limit.
- Other games they play, I think, involve the etching material itself. Because it does not react in a linear fashion, I think they have done things to modulate the image intensity more precisely, using the material's reaction with the light to achieve feature sizes that are smaller than expected based on the image quality itself. That is, the material is used as a thresholding device. (I'm not sure if they actually do this, but I thought I'd heard of it. Maybe not.)
- What's next? People have been declaring the death of "optical lithography" for years (decades?). Yet, the industry keeps finding ways to produce shorter wavelengths (in an industrial setting), and design/fabricate lens systems that can image at that wavelength. There have been predictions of x-ray and electron beam lithography, but 'optics' has so far held them off.
What about this new technique? I don't know anything about it. It could be a new necessary method. Or it might not pan out, faced with the multitude of other challenges, and the tremendous money & experience & effort thrown behind the current optical technologies.
- Parting thoughts:
Something often overlooked is the other parts of lithography. The stepper motors used to translate the silicon wafers are incredible! But that technology must be improved to provide sufficient resolution & accuracy as feature sizes decrease.
The masks themselves are also a fair feat, requiring some fabrication finesse.
The lens systems required for lithography are insane, as is the search for new materials as wavelength decreases. Further, as feature sizes decrease, lenses must have ever smaller tolerances, which pushes the measurement technology people to do amazing things.
I could go on, but I've rambled enough. Suffice to say that the lithography and related fields are really cool. The particular writing method is important, but there are a whole host of other challenges to face as well.
-----
D. Fischer
Re:How many potholes on the road there, though? (Score:3)
This is using a new high-tech process of press-release generation code-named "vapor". Motorola is said to be licensing this new technology from Intel to assist in ramping clock speeds for their PowerPC chips.
Re:YEAH (Score:1)
The fans are what make this all really worthwhile....
That and the biatches!
Hard limit in lithography that is overlooked (Score:4)
This problem is the size of the photosensitive compound molecule. Whatever wavelength you use, you have to impress a photosensitive resin with your ever-finer optical patterns. And the problem is that this molecule is big. We are already reaching a point where the size of the photoresist molecule is not negligible anymore.
In a few years, at around 0.02 microns, we'll reach the operational size of the smallest photoresist blob that can be physically impressed with a photon. So even if the wavelength keeps decreasing, we'll still have that blob size as the choke point.
Moreover, the new photoresists for the 0.113-micrometer laser are far from perfect. They are still way too temperamental for production use. And nobody has anything better coming up. None. No plans, no projects, no announcements.
Isn't that sad? For all the marvelous optical tricks that we pull in the micro-electronics industry, we are now roadblocked by a basic chemistry problem. Photoresist used to be a glorified paint job on top of a wafer that everyone was taking for granted, but it's back with a vengeance.
Conclusion: Unless we have a breakthrough in chemistry (not lasers, not optics), Moore's Law is dead when we reach 0.02 micron.
Boyle's Law of Electronics (Score:2)
I have to disagree strongly here. Applications expand, like a gas, to fit the available capability of any given technology. Could people using "powerful" $100,000 minicomputers in the 1970's ever dream how much computer power we have today, what we would use it for, or how cheap it would be? Streaming MP3s and video would sound like Star Trek to someone in the early 1980's!
Besides, even if the application doesn't change, new processing capability can be used for many things, such as automatic calibration of analog circuits (hard problem) and massively reconfigurable systems.
There'll always be things we can do to make stuff better, faster, or cheaper.
Re:Moore's Law?? (Score:1)
E-beam lithography and commercial use (Score:2)
Re:Yup, (Score:1)
Someone mod this guy up as +1 Funny.
Let's see...
in 1900, Relativity hadn't raised its head (Lorentz had made some moves in that direction, but it hadn't been fully postulated).
Planck had just (with great reluctance) postulated the quantum, but was unhappy with the concept.
The idea of the photon was still a few years away (I believe Einstein's seminal paper on the Photoelectric Effect was in 1905).
Yep, physics hasn't changed at all over the past 100 years!
Re:Innovation (Score:1)
We are already offloading video processing to specially built video hardware. We're not doing this very much with sound (yes, you can get your high-end cards to add bass and sound fields, but you can't do OpenGL-like sound calls - play the sound I uploaded to you before like it was coming from (x,y,z) with this ambient sound....)
Hard drives are pretty dumb, and general purpose - the way you would set up RAID for, say, video or audio editing (which is called nonlinear, but a 5 second clip is an eternity for a drive) is not the same way as you would for a database - and it would be different for different databases.
And this isn't even taking into consideration network applications - all the hard thinking would be done on a centralized host, with only the visualization being done locally. Your processor is idle 95% of the time, but your video card is busy 100% of the time. If you have a smooth distribution of tasks you could have 20 people using your CPU if you could have 20 video cards. And if you scale this up, globally, it would be close to a smooth distribution.
My point is that there are more solutions than just throwing processor power at the desktop.
Re:It's the Fabs, stupid (Score:1)
-_Quinn
Re:Yup, nope (Score:1)
I dunno about that one.. Never underestimate the power of Microsoft developers to write incredibly bloated programs and operating systems.
Re:Moore's Law?? (Score:1)
Gordon.
Re:Moore's law and software (Score:2)
It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
Booting takes 5 minutes (NT 4.0)
Shutting down takes several minutes, too.
It might be fun and all to bag on NT, but if you're running a PII-400 that's going that slow you've got problems that go way beyond what Microsoft may have done. I'm running a PII-350 here with NT 4.0 WS and it's been running non-stop for 442 hours. The only reason this number isn't significantly larger is that I shut it and my FreeBSD box down when I know I'm not going to be using them for an extended period of time.
I would strongly suggest you start looking at what kinds of services are running, and the very real possibility that you've got some serious hardware problems. From what little info you've given, I'd be looking at either the hard drive or video card as the primary suspects.
For myself, I've been quite happy with this PII-350 for everything from web browsing to editing print-quality photos in Photoshop. About the only thing that I'd be looking at a faster processor for is Bryce. Ah well, I'll probably need to crank things up to a 4GHz processor to get the next Doom to play decently though.
Re:Innovation (Score:1)
Re:Yup, (Score:2)
Oops, better not put it right ON the processor... that's where the 2000W, liquid nitrogen powered cooling unit goes.
Re:what about interconnect, leakage, and power dis (Score:1)
Re:what about tunneling? (Score:2)
An interesting, yet somewhat overly complex example. A bit more to the point would be talking about how a basic transformer works: one coil of wire inducing current into a nearby coil. You don't require an actual coil of wire to get this effect; you simply need the wires or circuit runs close enough to induce current into their neighbor.
with supposedly infinite input impedance
Just to get into the nit-picky here, but a MOSFET is only said to have a very high input impedance. In basic electronic components there's no such beast approaching "infinite" or "perfect" anything.
Jumping away from MOSFETs for a moment, I recall reading some articles a while back as the micron size dropped to 0.14. One of the problems the engineers were having to face was radiated electrons being generated by the solder on the board. Normally this radiation is so low as to be hard to even measure, yet it was causing these new sensitive circuits to trip gates and such.
As the size of these things drops, there are going to be all kinds of noise problems that wouldn't have been considered before. Coupling this with current-induction problems, which as you pointed out increase with frequency, these engineers have a LOT to work out. Simply inventing a more accurate carving knife is most likely only going to prove to be 30% of the overall problem and its solutions.
Re:Innovation (Score:1)
That's nice and all, but it really doesn't matter how fast a *single* transistor can work. The problem is that the whole circuit must work at a given speed. And that speed is limited by the speed of light.
For example, say our chip is 10mm x 10mm and we have to send a signal from side to side during a clock cycle: the time required for the signal to get to the other side is distance/speed = (0.01m)/(299792458m/s) = 3.34e-11s. Now if we need to send a signal like this every clock, the maximum clock speed we can achieve is 1/(distance/speed) = 100*299792458 Hz, which comes out to a chip of less than 30GHz - and that assumes the signal travels at the vacuum speed of light, which on-chip signals don't.
Of course chip designers are aware of this and design chips in a way that no signal needs to be sent across the whole chip, but even if the greatest distance a signal has to travel during one clock cycle is 1mm (one tenth of the chip) we can only get to about 300GHz - and only if this is our only bottleneck. And in that case the average throughput time is at least 10 clock cycles (the time for an "operation" to go through the chip).
In the end, one should notice that the problem of memory being too slow compared to the processor isn't going away in the future, because we surely need those signals from our memory chips, and yet again we are limited by the speed of light. Expect to see memory chips really near the CPU in the future...
The only way to keep doubling computing power in the (not so distant) future is to invent a way to transfer *information* faster than light. Whether that's possible - I don't know.
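A minimal sketch of that bound, assuming signals travel at the vacuum speed of light (real on-chip signals are slower, so these are optimistic upper limits):

```python
# Upper bound on clock frequency if a signal must cross a given distance
# at the speed of light within one clock cycle.
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_clock_hz(distance_m: float) -> float:
    return C / distance_m

for label, d in [("full 10 mm die", 0.010), ("1 mm path", 0.001)]:
    print(f"{label}: at most {max_clock_hz(d) / 1e9:.0f} GHz")
# -> ~30 GHz across the die, ~300 GHz for a 1 mm path (ignoring everything else)
```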
_________________________
Re:Innovation [offtopic] (Score:1)
How about OpenAL [openal.org]? I think we should also have OpenIL (Input Library) and perhaps OpenFL (Feedback Library) for control devices - think about it: you could treat keyboards, joysticks and insert-your-favorites-here input devices as one from a programming viewpoint.
_________________________
Corollary: Bill Gates Law (Score:2)
Re:what about interconnect, leakage, and power dis (Score:1)
Yeah, moderator, this post is hilarious. Right up there with Algorithms in C.
-Pete
Re:Yup, (Score:1)
Re:Moore's Law?? (Score:1)
--
Finally!!! (Score:1)
psxndc
Open source sig: You decide what this should say
Physical Limits (Score:3)
Quantum vs Classical physics (Score:1)
I particularly like this comment as it shows that the mysterious nature of quantum physics can be intriguing. Most of the time quantum physics is looked on as a hindrance of sorts for developing technology. Now, with this innovation in silicon lithography and the advent of quantum computer research, it looks as if the tables have turned. The strange nature of quantum physics is being harnessed to technology's advantage. Man has found ways to adapt. Soon it will be energy harnessing or communications. The quantum world is endless.
Even the samurai
have teddy bears,
and even the teddy bears
Re:Moore's Law?? (Score:1)
--
Moore's law and software (Score:1)
Yet it crashes often enough to be noticeable.
It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
Booting takes 5 minutes (NT 4.0)
Shutting down takes several minutes, too.
Maybe hitting a limit to processor power will encourage programmers to reintroduce the concept of "knowing how to write good code." Lord knows processor speed and cheap memory have made it possible for even the best programmers to stop thinking about code quality.
Re:Maybe not lithography .... (Score:2)
It's all true! ±5%
Re:Moore's Law?? (Score:1)
Re:Innovation (Score:1)
Now, does the cost of processors go down at anywhere near a similar rate? When we have 2 GHz processors, will the 1 GHz processor cost half (or close to half) as much?
Kierthos
Re:Moore's Law?? (Score:2)
Gordon Moore first suggested the law in 1965 (although the doubling time was twelve, rather than eighteen, months then), and co-founded Intel in 1968.
(End of summary).
It isn't really a law, but seems to have held for at least the past twenty years, and before that at the higher speed. (Strictly speaking, nothing should be considered a hard and fast "law" in most sciences - they are all unproven conjectures. They start being called laws if they hold for long enough to convince most scientists of their utility and accuracy. But I'm sure you knew that anyway).
Re:Sengan's back??!?!? (Score:1)
Hell, with a 4-digit user #, even I'm an old-timer these days!
Re:what about tunneling? (Score:4)
I've never heard of electrons tunneling between wires. This would be a severe, perhaps fatal, form of crosstalk, and even in a 0.1um technology, the wires aren't necessarily anywhere near that close together. What you do see, however, is something called induced gate current, where a MOSFET with supposedly infinite input impedance exhibits a bias current into its gate. This is because the silicon-dioxide layer between the gate and the channel is so thin that electrons in the channel can tunnel through the gate oxide and escape out the gate lead. This tends to make the MOSFET look a little like a Bipolar Junction Transistor, which people have been dealing with forever. The main effect of this induced gate current is increased power dissipation.
What is interesting is that a similar induced gate current can occur when operating a MOSFET at very high frequencies. The problem here is that when the frequency gets too high the capacitance between the gate and the channel tends to short out and provide a conducting path through the gate terminal. This is observed (and taken into account) in CMOS wireless/RF circuits.
SENGAN YOU DAMN BRIT! (Score:1)
Post more, please! Put something on the front page that goes against the slashdot party line, 'k? Just for old times sake?
______
Re:Sengan's back??!?!? (Score:1)
God I loved that guy. Well, actually I loved how everybody got all reactionary against him. Grits just ain't the same.
What do I do, when it seems I relate to Judas more than You?
Re:Moore's law and software (Score:2)
It has everything to do with bad code. The bad code is in layers. The Windows kernel has bad code in it, the GDI has bad code in it, the GUI layer has bad code in it, Explorer has bad code in it, applications have bad code in them. It snowballs. It is also difficult to avoid, unless you focus on the particular problems you are trying to solve, rather than just making a big desktop thingy that's self-referentially designed around manipulating and customizing a big desktop thingy. I think the KDE and Gnome people have started realizing this. Once you start running down that road, you end up in the same place.
We're definitely at the stage where re-architecting software can pay off much more than Moore's law. The Moore disciples are willing to put up with crap, because they know they can get 2x faster crap in under two years. They could get a 10x speed-up in less time if they just realized they were using crap and looked for alternatives.
Re:Yup, (Score:1)
LOL. Wonder what would happen if you tried to overclock something like that...would your computer become a mini-Chernobyl?
=================================
Moore's Hypothesis (Score:2)
Moore's is a Hypothesis in the classical sense. Seems to work right out of the gate, but who knows for how long? Not as long as Gravitation has held up, certainly. Evolution and Relativity are still theories, and Moore's Hypothesis is written on a Bazooka Joe wrapper compared to those.
OT- For all those people who complain that anime posts are not "news for nerds," this article is as close as
-jpowers
Re:Innovation (Score:1)
We used something like that on SGI's Onyx some years ago using an add-on box. You could download sounds, pitch them, specify full 3D positioning (great if you had enough speakers), speed (for Doppler effects), etc.
Maybe somebody would see a market in it if people start buying more than 2 speakers (8 perhaps? ;-)) for their PCs.
Re:Boyle's Law of Electronics (Score:2)
Personally, I'm glad to be living in these enlightened times. I pity the poor saps from that primitive generation.
Now that the Olympics are on, I'm watching more TV than the whole rest of the year (excepting college bowl season) combined. My TV is a dust magnet, and my PC, even with 10,000 channels of shit to choose from (unless there's a 24-hour Flintstones channel!), will suffer the same fate. The great outdoors beckons and the call of the wild is strong in this one. No tech substitute for that, never will be.
It's all true! ±5%
Re:Innovation (Score:4)
The single-chip CPU is arguably the most important development of the late 20th century, and its exponential improvement (Moore's Law) is what drives the information economy. So what happens when Moore's Law runs out?
If current trends are projected forward, by 2020 a bit of memory will be a single electron transistor, traces will be one molecule wide, and the cost of the fabrication plant will be the GNP of the planet. The speed of light imposes practical limits on how large you can make a chip and how fast you can clock one. This is why we'll have GHz chips, but fundamental physical laws prevent THz chips.
More importantly, the physical limits that shut down THz electronic computers apply to _any_ classical computing architecture; optical computing and other exotic technology can't beat the speed of light, or single-particle storage problems.
You can't win by going to SMP, because at best you get a linear increase with each processor; exponential increases in power require exponential increases in processor number, which require exponential increases in space and power consumption.
The only basis in physics for continuing Moore's law past classical computing is quantum computing. In a quantum computer N quantum bits (qbits) equals 2^N classical bits. This allows you to build a computer which scales exponentially with the physical resources of the computer. Quantum computing isn't a solved problem, but if and when it is it will be a revolution as big as the first single-chip CPU.
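To make the 2^N scaling concrete, here is a small sketch of how much classical storage it takes merely to describe the state of N qubits; the 16 bytes per complex amplitude (double precision) is an assumption, and this is bookkeeping only, not a quantum algorithm:

```python
# Describing the state of N qubits classically takes 2^N complex amplitudes.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision (assumed)

def classical_bytes_for_qubits(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (30, 40, 50):
    gib = classical_bytes_for_qubits(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB of amplitudes")
# 30 qubits already need 16 GiB; 50 qubits need roughly 16 million GiB.
```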
Other limitations to lithography? (Score:2)
One potential problem that has been solved (so far) is the problem of mechanically positioning things with a very high degree of accuracy. An actual IC is composed of several layers "printed" by several different masks, and each mask must be positioned over the wafer precisely so that the different features of a component (e.g. a transistor) are properly aligned.
How accurately can we position things today? How much better can we get? Are there other kinds of process limitations that have to be solved in order to take advantage of smaller features?