Technology

Moore's Law set to continue

Chips are made by etching tiny wires and transistors onto a silicon substrate. The process used is lithography, which resembles photography: layers of special chemicals are added onto the silicon base. Shining light through a mask changes the properties of the layers where the light hits, allowing further treatment to produce transistors, wires, and other so-called features. Classical physics limits the size of features achievable with a given wavelength lambda to the Rayleigh diffraction limit of lambda/2, which is achieved using optical interference. In 1999, Yablonovitch and Vrijen suggested using two-photon exposure techniques to increase this resolution. Their interference pattern contained a high-frequency 4x term (allowing lambda/4-sized features), but also a lower-frequency 2x term of greater intensity which made it unusable for lithography. Now researchers at JPL (USA) and the University of Wales (UK) have shown that using entangled photons removes the 2x term, allowing features of lambda/4 to be created. Their paper goes on to show that, in general, features as small as lambda/2N should be possible for N-photon absorbing substrates. Slashdot contacted one of the authors, Jonathan Dowling, who told us that experimental validation of these results is underway at UMD and is looking good. This means that Moore's law, which says the speed of chips will increase two-fold every 18 months, will probably not encounter a limit due to lithography. Thanks to B1FFMaN for bringing the story to our attention, and to Jonathan Dowling for emailing us the article in advance of its publication.
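
To put rough numbers on the scaling described above (my own illustrative sketch, not from the paper; the 193 nm wavelength is just an example of a lithography laser line):

    # Minimum printable feature size: the classical Rayleigh limit is lambda/2,
    # and the entangled-photon result generalizes this to lambda/(2N) for an
    # N-photon absorbing substrate. Illustrative numbers only.
    def min_feature_nm(wavelength_nm, n_photons=1):
        return wavelength_nm / (2 * n_photons)

    for n in (1, 2, 4):
        print(f"193 nm light, N={n}: {min_feature_nm(193, n):.1f} nm features")
    # N=1 -> 96.5 nm, N=2 -> 48.2 nm, N=4 -> 24.1 nm
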
  • The big problem when I was but a lowly undergrad was the electron barrier... which, as I understood it, meant the theoretical limit to the size of paths and gates was the diameter of the electron. All this other stuff is nice to know about, but when did the world stop worrying about the size of its electrons?
  • i don't think anyone ever said that anything other than physics would be the boundary. what else is there??

    the human imagination and knowledge set.

    By most accounts, physics hasn't changed over the past 100 years and won't over the next 100. Only our understanding changes.

  • As chip features get smaller and smaller, quantum tunneling effects will become very noticeable. If you will recall from your modern physics course back in the undergraduate days (or at least for those who took it), as you decrease the physical size of the box containing your particle, there is a greater probability that your particle will be found outside your box. For computer chips this means that as the physical features on the chip get smaller and closer together, the electrons will be able to tunnel from one wire to another. This is called a tunneling current. As features approach 100nm it becomes fairly noticeable, and you have to start taking it into account.


    -----------------

  • by Anonymous Coward
    lambda/4 instead of lambda/2 gives us feature sizes half as large, which is 2x as many features, which adds a whole 18 months to what can be achieved. It's not entirely clear, but if the lambda/4 refers to length in one dimension, we would get 4x as many features per unit area, for 3 more years of Moore's Law.
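
    A quick check of that arithmetic (my own sketch, assuming transistor density scales as the square of the linear resolution and an 18-month doubling period):

      import math

      # Extra Moore's-law lifetime bought by a given linear-resolution gain,
      # assuming density scales as the square of linear resolution.
      def extra_years(linear_gain, months_per_doubling=18):
          density_gain = linear_gain ** 2       # features per unit area
          doublings = math.log2(density_gain)   # Moore doublings gained
          return doublings * months_per_doubling / 12

      print(extra_years(2))   # lambda/2 -> lambda/4: 4x density, 3.0 years
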
  • Ultimately, maybe they can use a scanning-tunnelling microscope to physically etch out a mask in a thin layer of some metal, like gold or something. Since you can move one atom at a time with one of those things, you could theoretically program it straight off your CAD layout of the chip itself and drive the microscope like a CNC machine.
  • by jpowers ( 32595 )
    The rest of the posts in this thread after the parent and first two replies were clipped on my browser, sorry about that first sentence, y'all.

    -jpowers
  • Hopefully, in the future, after CPU technology has stagnated for a century or two, perhaps software technology will have to fill in the gaps.

    We'll all be back to programming everything directly in assembler, writing to bare metal, and economizing every last cycle of bandwidth and bit of cache. Programming will be painful, and software will be elegant. GUIs will be a federal offense.
  • by Anonymous Coward
    Roger Moore. He initially postulated 30 years ago that the budget of Bond films would double every 2 years.

    Or was it Michael Moore, and his theory that twice as many dollars in corporate welfare is spent by the US gov't every 2 years? One of those two...

  • Streaming MP3s and video would sound like Star Trek to someone in the early 1980's!

    Yeah. Streaming MP3s from the point of view of the early 1980s: "So, you mean it's like the radio, only it's bigger than a bread box and has a big-assed TV sitting on top? Good deal!"

    Likewise, streaming video: "So it's like TV, only in a little teeny sub-section of a regular size TV, and the clips are a few seconds long and sometimes break up for no apparent reason."

    Personally, I'm glad to be living in these enlightened times. I pity the poor saps from that primitive generation.

  • Even if we can make chips smaller and smaller, there must be a limit. I mean, what is the smallest number of atoms you need to build a transistor?

    Well, these guys claim they can switch a single hydrogen atom between two silicon atoms.

    Check out the press release [mic.dtu.dk] and the Slashdot discussion about it [slashdot.org].

  • Yep, physics hasn't changed at all over the past 100 years!

    Did you read the last part of the comment? ONLY OUR UNDERSTANDING CHANGES. Physics hasn't changed. Our KNOWLEDGE OF PHYSICS has.

  • Put your name on good info like this so you get modded up and the folks can read you. Informative.
  • If current trends are projected forward, by 2020 a bit of memory will be a single electron transistor, traces will be one molecule wide, and the cost of the fabrication plant will be the GNP of the planet. The speed of light imposes practical limits on how large you can make a chip and how fast you can clock one. This is why we'll have GHz chips, but fundamental physical laws prevent THz chips.

    The current speed record for a digital flip-flop is 770GHz [sunysb.edu].

    While this technique is nowhere close to going into mainstream (or even scientific) computing, it still shows that circuits operating in the close-to-one-THz range are possible. Things might be different in twenty years. And processors in the THz range would certainly be nowhere close to the CPUs we have today; probably heavily asynchronous processors using architectures like systolic arrays etc. would have to be used.

    Also don't forget that mainstream CPUs are not made with the fastest technology available, but with the cheapest. By using GaAs, Cray was able to achieve clock speeds of around 1GHz at a time when a stock PC was clocked at 33MHz - so what might a GaAs CPU with current technology scale up to today? Or how about BiCMOS? (given that you have a personal power plant and some insane cooling device ;) )

  • I have zero moderation points, and I feel truly helpless. As depressing as the message is, it's important.

    Of course, barring other difficulties, this still is an improvement from .14 to .02, which improves the circuit density by a factor of forty-nine. After that, I'm not sure what would be the next leap. Nanotechnology? Those electricity-conducting DNA strands? Etching with electron beams?
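
    A quick sanity check of that factor of forty-nine (my own arithmetic, assuming circuit density goes as the square of the linear shrink):

      # Density gain from shrinking features from 0.14 um to 0.02 um,
      # with density scaling as the square of the linear dimension.
      linear_shrink = 0.14 / 0.02
      print(linear_shrink ** 2)   # 49.0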

  • I thought the electron barrier problem was that as the pathways get smaller and smaller the electrons will be travelling too fast to make the corner. The diameter of the electron seems an unlikely problem, because even if the channel is only an atom or two wide, electrons are orders of magnitude smaller. Besides, electrons don't really travel like that; they sort of displace.
  • That's called Grosch's Law, after Herb Grosch, IBM and then ex-IBM gadfly from the big iron era. He gave three different statements of it, in descending order of rigor...for some reason I only remember the third, namely "No matter how fast the hardware gets, the software boys will [urinate] it away."
  • the covalent radius for silicon is about 0.2nm, which means that a 0.18um silicon feature is roughly 1000 atoms wide. the ratio of feature size to atom/molecule size isn't really an issue at the moment, although it eventually will be.
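
    That claim checks out as an order of magnitude (my arithmetic, taking the parent's ~0.2 nm figure at face value as the atomic length scale):

      # How many ~0.2 nm silicon atoms span a 0.18 um (180 nm) feature.
      feature_nm = 180
      atom_nm = 0.2
      print(feature_nm / atom_nm)   # 900.0, i.e. on the order of 1000 atoms
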
  • What is lambda? I can't be the only person who is wondering.

    If you wanna explain it for us, do it so that the average grade 12 student can get it, please :)
  • I see a lot of discussion claiming this doesn't matter because of this problem or that problem or some other problem.

    I think these people are missing the point. Sure there are thousands of really challenging problems that need to be overcome in order to keep Moore's Law on track through the next decade. But... each problem has to be solved individually and this is one potential solution to one problem.

    EBeam, XRay and other Next Generation Lithography methods look promising but each of these has their own problems as well. The industry has long taken the approach of attacking problems from various angles and letting the best suited technology lead the way.

    That's what this article is about. It's one possible approach to one problem that we know we need to solve.
  • Lithography ain't the problem. The problem is when "wires" on a processor get so small that they can't make turns. Yep, that's right: when a path on a processor gets too small, electrons flowing happily down a "wire" just keep on going when they reach a bend. They shoot right on through to the wire next door, or until they find something conductive.

    Think of it another way: ever had a poorly shielded speaker wire crossing over a power cable? Remember that buzzing noise? Same concept, and it's true for processors too. In fact, companies like Intel and Motorola have lots of research money invested in finding out how gradual a turn has to be, what a turn can be near, and so forth.

    Pretty soon the lithography will be so small electrons will be useless. :-)

  • Even if we can make chips smaller and smaller, there must be a limit. I mean, what is the smallest number of atoms you need to build a transistor?
  • Nah, at some point we'll actually run out of things which require such processing power. Video processing is probably about as bad as it'll get. Tho maybe we'll be running some nice AI, but I 'spect that's more a function of memory than CPU. (i.e. your very own Max Headroom)



    It's all true! ±5%
  • Of course it's going to be slow!

    As the processor speed increases, the amount a program can do in a given time increases. And it does more stuff because people like you ask it to:

    'I want a command prompt'.
    'I want a file shell'.
    'I want a GUI'.
    'I want multitasking'.
    'I want true multithreading'.
    'I want networkability'.
    'I want a punk-ass paperclip to annoy the hell out of me'.

    etc..

    So yeah, of course it's going to take up the processor time - we're asking our operating systems and programs to do WAY more than they did even just a few years ago. It has nothing to do with bad code.

    (ps, slashdotpeople: don't be anal and tell me that multitasking came before gui, or shit like that. i'm aware. and don't care.).

    rhyac.
  • IBM has had x-ray lithography for a while, but engineering challenges have kept that (shorter wavelength) technology in the small-scale arena for the time being.

    This new quantum approach looks really promising, but as the article states, the engineering challenges are going to delay the actual use of this tech for... a long time. Guesses? 10 years? Who can say. I like the comment about Moore's law pushing forward, but really... this tech will take a while to have any effect on our CPU purchases.

  • Daaaaaamn... so, who's going to catch poor Sengan up to speed on First Post!, natalie portman, hot grits, and pedophilia?

    --

  • Quantum tunneling isn't a factor in lithographic fabrication, because it doesn't produce features anywhere near small enough to succumb to quantum effects. That's only a concern in quantum computers, and there are researchers who believe quantum tunneling can even be used to our advantage.

    Noise and decay can also be fought by standard techniques, but I do suspect that before long we're either going to run into a size barrier using current methods, or at least technology advance will slow to a crawl. The question isn't whether our current fabrication methods will change, but what will take their place...
  • I don't think anyone on the face of the Earth thinks Moore's Law is a law in the same sense Snell's Law or Boyle's Law are laws. It was just a rather offhand comment that Gordon Moore made in the late 1970s at a VLSI conference in Caltech. It is very interesting that engineers have continuously innovated to keep Moore's Law going, but of course it will eventually stop. In fact, people were predicting the end of Moore's Law at 1 Micron, but the miracle of optical lithography has us down to 0.15 micron!

    "Moore's Law" has always been considered more of a goal that a "Law".

  • Hear, hear. Who is this sengan guy, btw? That is one kickass post he wrote.

    --

  • Only a few. Wasn't it IBM who, a few years ago, made a switch where the only moving part was one atom? Granted, there were still the two contacts, but hey.
  • IIRC, Moore's Law originally applied only to the number of transistors on a chip, but has subsequently been applied to the "power" of a chip, the speed, the capacity, and probably a whole bunch of other stuff.
  • Maybe this came up earlier and I missed it, but... There was an article by Seth Lloyd from MIT in Nature volume 406, pp. 1047-1054 last month exploring how much computing power you could theoretically expect to get out of 1kg of mass taking up a volume of 1 liter. The result was ~5x10^50 operations per second. You can (eventually) get to the original article here [nature.com] (but annoying registration required). Anyway, we've got a ways to go.
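
    If that figure comes from the Margolus-Levitin bound, which is what Lloyd's paper uses (so this is a reconstruction of the arithmetic, not a quote from the article), it is a short calculation:

      import math

      # Lloyd's "ultimate laptop": maximum operations per second for a system
      # with energy E is roughly 2E / (pi * hbar), with E = m*c^2 for 1 kg.
      hbar = 1.0546e-34   # J*s
      c = 2.998e8         # m/s
      E = 1.0 * c**2      # joules of rest energy in 1 kg of mass
      print(2 * E / (math.pi * hbar))   # ~5.4e50 operations per second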

  • Way, WAY, beyond Moore's Law.
    Here is truly, The Last Computer [newscientist.com]

    * "Admittedly, it might be a bit inconvenient putting a nuclear fireball on your desk."
  • Moore himself has proclaimed this Moore's Second Law: The limiting factor is fab facility cost. Five years ago a single Pentium fab facility cost $2 billion. The entire fab has to be suspended from pillars to isolate it from traffic vibration. Each generation of fab is getting more and more expensive to build, and this will ultimately limit device density, not the physical limits on the chip. Is Intel going to build $20 billion dollar fabs? $50 billion? $100 billion?
  • I'm wondering if anyone has read Wired magazine lately. They show Intel is in fact exceeding Moore's Law in both performance and transistor count (due to the huge L2 of the Xeons...) via a nifty graph. Very interesting story; I don't know if Wired has it on the website or not, though. I looked to no avail. It's on p. 122 of the October issue.
  • Uh, what?

    The speed of light limitation does not exist at the quantum level, significantly altering your back-of-envelope calculation.

  • only if you post as something other than the Anonymous Coward you are...
  • Wavelength of the light.
  • Even before that you run into speed of light problems. Take the inverse of, say, 10 GHz and multiply it by the distance light travels in a second: 1/10000000000 s * 299,792.458 km/s = 0.0000299792 km = 2.9 cm (or a little more than an inch). Just sending a signal across the processor takes more distance than that.
  • architecture.
  • Wow, BOredAtWork and jafac scored -1 redundant. Huh? As a high 4-digit person, I find this highly offensive.:) I would find your posts to be +1 informative. A sengan sighting is news worthy. Sort of like seeing a UFO or Elvis.

    Where's pinguin?

    IIRC, the /. login user ids started when ppl started posting using nicks like BOredAtWork. Some of you moderators are probably saying, huh? what the hell are you talking about.

    I miss Meeept!!!

  • IBM has had x-ray lithography for a while, but engineering challenges have kept that (shorter wavelength) technology in the small-scale arena for the time being.

    I agree and I'll add that they've kept them there for a long time in the past. Industrial scale X-ray lithography looks as much like a pipe dream now as it did ten years ago. As an undergrad I took a course in quantum electronics (basically a course in quantum physics and electronic device applications). Tangentially, X-ray lithography was mentioned as a "Good thing", if the engineering details could be worked out. That was around 1990 and they still haven't been worked out to my knowledge. Any new approach intrigues me. Maybe enough new approaches will yield something that can be worked out in the near future. X-ray lithography is starting to look like fusion: something amazingly good that might be worked out at some indefinite point in the future. We need something quicker than that!

  • I actually just gave a little talk on the subject of uPs continuing their rapid growth (the 58% or so that Moore's law implies). Bad news...

    Even assuming we can reach 35nm gate lengths (that's a .035um process), the speed of the wires will be problematic because (to a rough approximation) the delay of a wire increases as 1/scale factor squared. In other words, decrease the feature size of your chips by a factor of 0.5, the delay of global wires goes up 4x. (Transistors are roughly sped up by 2x, however.)

    Global wires are used to connect big functional blocks of a uP, like the ALUs, Cache, register file, etc.

    The delay of small little wires (connecting adjacent gates, for example) stays about the same, but this still poses a problem since the transistors will have to wait for the wires as they get even faster.

    Wiring is already responsible for much of the delay of a uP, and it is only going to get worse. Even if transistors get to 35nm (which the SIA predicts will happen in 2014), they only get 7x faster. This corresponds to a 15% annual improvement rate, well short of Moore's law's 58%.

    A bunch of this is described here [wisc.edu].
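
    A rough check of those numbers (my own arithmetic; the ~250 nm starting point is an assumption chosen to reproduce the parent's 7x figure, and gate delay is assumed to scale linearly with gate length):

      # Transistor speed gain from ~250 nm today to 35 nm (SIA target, 2014),
      # spread over the intervening years as a compound annual rate.
      gain = 250 / 35
      years = 2014 - 2000
      annual = gain ** (1 / years) - 1
      print(f"{gain:.1f}x over {years} years = {annual:.0%} per year")
      # ~7.1x over 14 years = ~15% per year, vs. the ~58% Moore's-law pace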

    Imagine a plot of the relative performances of the fastest uniprocessor machine on earth compared to the fastest uP, graphed vs. time. see paper [wisc.edu]. What you'd see is that, in fact, the fastest computers in the world have been improving at a rate closer to 12-14% annually. uP's got a late start and were many orders of magnitude slower. uP's have been catching up, borrowing technologies from minis and supercomputers which have been resulting in yearly advantages of 50-60% over the last 20 years or so. But uP's are about to hit the same hurdles that have been bothering supers for a long time. (Supercomputers have been communication bound for some time.) Until something fantastic happens (optical? organic?) uPs and supercomputers may be very similar in performance.

    One last thing to note is that the bad news about future growth of uPs places an assumption on the microarchitecture--that it remain largely the same as today's. There are other possibilities being researched, for example, the RAW group at MIT. They may be able to cope with the wire delays in ways that a conventional uP cannot.

    -Ed

  • Bingo with the 1-d comment. The classical Rayleigh limit is inherently a 1-d measure. Halving that limit gives the potential for 4x as many transistors.

  • "Ugliness is always the result of people trying to make something beautiful. While beautiful is achieved by those who aim at making something useful." Oscar Wilde
  • Humans are nothing if not innovative. Even at MS, they are reorganizing, programming, integrating, and innovating themselves right out of that pesky paper bag. Granted they're not there yet, but I'm sure with a few new versions they'll be free.

    I doubt that we will see the slowdown of processors anytime soon. When lithography's run finally comes to a standstill, quantum computing will have matured enough to grab the baton and keep up the race. To what end, I don't know. Right now I'd just love to see a decent memory tech come out. DDR may beat RAMBUS, but that's not saying much.
  • It's going to happen one way or another. Of course, this means that once we
    hit the 2GHz mark, Windows 2002 will be requiring 1.66GHz. The only thing
    that'll really wind up being the boundary will be physics. I wonder when
    we'll end up with a small fusion reactor on top of the processor (:
  • I forget his first name, but I'm sure somebody else will chime in soon enough. Strictly speaking, it might more appropriately be termed "Moore's Observation" but cut us some slack, eh?
  • but we're going to have to use something smaller than atoms if it's going to keep getting smaller - we already have chip features that are measurable in numbers of atoms wide - eventually we'll have to use something other than the existing technologies (wires and transistors) to move stuff around - quantum dots/wires - bucky-balls - nanotech etc. all that current pie-in-the-sky stuff will eventually be the only game in town

    And then poof! .... nothing smaller than atoms ... Moore's law breaks down.

  • Moore = Gordon Moore of Intel fame, who predicted that processor speed
    would double every 18-21 months [Moore's Law], as opposed to Gates' Law:
    software speed halves every 18-21 months.

  • Actually, Moore's Law refers to the density of transistors on a chip, not speed. It happens that speed follows transistor density because smaller transistors switch more quickly.

    --

  • Actually, Moore's Law refers to circuit density, not processor speed. Also, originally the time period was every year. It slowed down to 18 months after a couple of years.
  • by Froid ( 235187 ) on Friday September 22, 2000 @11:12AM (#760610)
    Already, companies like Intel are about six months behind Moore's law. It's nice to know that theoretically, Moore's law can continue for a few more years into the future, but how are these developments supposed to help struggling IC-manufacturers now?

    The physical universe may not constrain us as much as we had feared, but it looks like gross human incompetence is filling that role quite nicely.
  • > So help me out, Slashdot, who is it??? What
    > foundation do we use to consider this a "law"???

    Gordon Moore was one of Intel's founders. Unbeknownst to a lot of people, he didn't actually come up with this at Intel, but at his previous company, Fairchild Semiconductor.

    Moore's law is a law in the same sense as Murphy's Law I suppose. Not like one of Newton's laws.

    Take it as you will
  • Stick it to the MAN!!!!

    *raises fist*

  • "nothing smaller than atoms?"

    By the time atoms pose the physical limit to Moore's Law, sub-atomic particles that we currently know nothing of will extend it.

    Guaranteed.

  • by crgrace ( 220738 ) on Friday September 22, 2000 @11:51AM (#760614)
    While photolithography certainly is one of the potential limits to Moore's law, it is not the only one, nor is it the most difficult. For years we have had electron-beam lithography but it is expensive, and that is why we have pushed optical lithography to such dizzying heights. There is no technical reason not to use E-beam lithography, but there are economic reasons.

    But consider:

    1. interconnect: as feature sizes diminish, the physical height of metal lines becomes greater than their width, making them look like skyscrapers, and the IC isn't so planar anymore. The problem then becomes the physical strength of the conductor, as it easily breaks as it is forced to bend over the surface of the chip. Copper interconnect is one partial solution to this problem, but it is not a magic bullet and things are getting worse all the time.

    2. leakage: as transistors shrink, their gate oxide also scales. Therefore, for a given supply voltage, the electric field in the transistor increases until the gate blows out. So power supply voltages are scaled too. Unfortunately, this tends to slow down the transistor unless the threshold voltage is also reduced, but then we have increased leakage current. This is quite a trade-off, as increased leakage current not only increases the power dissipation (more on this next) but also makes it more difficult to design RAM and mixed-signal/analog blocks.

    3. Power Dissipation: Even though the supply voltage is decreased, and power dissipation of a single transistor decreases as the square of the supply voltage, overall power will increase for two reasons. First, there are many more transistors on the chip switching ever faster, and second, the reduced threshold voltages mean there will be significant static power drain even in CMOS logic. 1 nA of leakage/transistor in a 1 Volt, 1 Billion Transistor microprocessor of the future would burn a full Watt even without switching! This is a very serious problem, not only for portable applications, but also because it is difficult to package such a power-hungry chip cheaply and efficiently.
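
    A quick check of that static-power estimate (a straightforward restatement of the parent's numbers):

      # Static power from leakage alone: P = V_dd * I_leak_per_transistor * N
      v_dd = 1.0            # volts
      i_leak = 1e-9         # 1 nA of leakage per transistor
      n_transistors = 1e9   # a billion-transistor chip
      print(v_dd * i_leak * n_transistors)   # 1.0 W with no switching at all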

    While this is an interesting development to optical lithography, I don't think it will have much impact on Moore's law. In fact, I'm much more worried about the power issue and The Interconnect Problem.

  • by skoda ( 211470 ) on Friday September 22, 2000 @11:52AM (#760615) Homepage
    I'm not in the litho field, but I know a small bit about it, and here are a few more thoughts on the issues:

    - classical imaging is limited by wavelength; the shorter the wavelength, the better the resolution. Lithography, fundamentally, is imaging a mask at a reduced size onto reactive material. So, the approach has been to decrease the wavelength, to get smaller feature sizes.

    - as the wavelength and feature sizes decreased, optical interference effects became more of a concern. But they also learned to play cool tricks with those effects. Instead of using a conventional 'binary' mask (either opaque or transparent), they implemented phase masks. Certain areas, usually at corners and line ends, had a different optical thickness, introducing a phase shift into part of the light and allowing interference, so that certain feature sizes could be reduced, approaching the lambda/2 limit.

    - Other games they play, I think, involve the etching material itself. Because it does not react in a linear fashion, I think they have done things to modulate the image intensity more precisely, using the material reaction with the light to achieve feature sizes that are smaller than expected based on the image quality itself. That is, the material is used as a thresholding device. (I'm not sure if they actually do this, but I thought I've heard of it. Maybe not.)

    - What's next? People have been declaring the death of "optical lithography" for years (decades?). Yet, the industry keeps finding ways to produce shorter wavelengths (in an industrial setting), and design/fabricate lens systems that can image at that wavelength. There have been predictions of x-ray and electron beam lithography, but 'optics' has so far held them off.

    What about this new technique? I don't know anything about it. It could be a new necessary method. Or it might not pan out, faced with the multitude of other challenges, and the tremendous money & experience & effort thrown behind the current optical technologies.

    - Parting thoughts:
    Often overlooked are the other parts of lithography. The stepper motors used to translate the silicon wafers are incredible! But that technology must be improved to provide sufficient resolution & accuracy as feature sizes decrease.

    The masks themselves are also a fair feat, requiring some fabrication finesse.

    The lens systems required for lithography are insane, as is the search for new materials as the wavelength decreases. Further, as feature sizes decrease, lenses must be held to ever smaller tolerances, which pushes the measurement technology people to do amazing things.

    I could go on, but I've rambled enough. Suffice to say that the lithography and related fields are really cool. The particular writing method is important, but there are a whole host of other challenges to face as well.
    -----
    D. Fischer
  • by jafac ( 1449 ) on Friday September 22, 2000 @11:53AM (#760616) Homepage
    Intel will keep announcing newer and faster chips. However, when you try to buy one, they'll be "unavailable". But the announcements will keep up with Moore's law.

    This is using a new high-tech process of press-release generation code-named "vapor". Motorola is said to be licensing this new technology from Intel to assist in ramping clock speeds for their PowerPC chips.
  • thanks!

    The fans are what make this all really worthwhile....

    That and the biatches!

  • by SysKoll ( 48967 ) on Friday September 22, 2000 @12:01PM (#760618)
    I hate to rain on this parade of optimism, but there is a hard limit on the lithography process that everyone here seems to have overlooked.

    This problem is the size of the photosensitive compound's molecule. Whatever wavelength you use, you have to impress a photosensitive resin with your ever-finer optical patterns. And the problem is that this molecule is big. We are already reaching a point where the size of the photoresist molecule is not negligible anymore.

    In a few years, at around 0.02 microns, we'll reach the operational size of the smallest photoresist blob that can be physically impressed with a photon. So even if the wavelength keeps decreasing, we'll still have that blob size as the choke point.

    Moreover, the new photoresists for 0.113-micrometer laser are far from being perfect. They are still way too temperamental for production use. And nobody has anything better coming up. None. No plans, no projects, no announcements.

    Isn't that sad? For all the marvelous optical tricks that we pull in the micro-electronics industry, we are now roadblocked by a basic chemistry problem. Photoresist used to be a glorified paint job on top of a wafer that everyone was taking for granted, but it's back with a vengeance.

    Conclusion: unless we have a breakthrough in chemistry (not lasers, not optics), Moore's law is dead when we reach 0.02 micron.

  • Nah, at some point we'll actually run out of things which require such processing power.

    I have to disagree strongly here. Applications expand, like a gas, to fit the available capability of any given technology. Could people using "powerful" $100,000 minicomputers in the 1970's ever dream how much computer power we have today, what we would use it for, or how cheap it would be? Streaming MP3s and video would sound like Star Trek to someone in the early 1980's!

    Besides, even if the application doesn't change, new processing capability can be used for many things, such as automatic calibration of analog circuits (hard problem) and massively reconfigurable systems.

    There'll always be things we can do to make stuff better, faster, or cheaper.

  • Here [everything2.com] is yet another explanation of moore's law, and who Moore is, on Everything2 [everything2.com].
  • AFAIK E-beam lithography was simply not practical to actually MAKE anything until recently. It wasn't until the past year or two that projects such as Lucent's SCALPEL went from the research to equipment design stage. E-beam equipment is on the horizon for commercial use, while the stuff in this article is still research and still gets beat by E-beam techniques.
  • By most accounts, physics hasn't changed over the past 100 years and won't over the next 100. Only our understanding changes.

    Someone mod this guy up as +1 Funny.

    Let's see...

    in 1900, Relativity hadn't raised its head (Lorentz had made some moves in that direction, but it hadn't been fully postulated).

    Planck had just (with great reluctance) postulated the quantum, but was unhappy with the concept.

    The idea of the photon was still a few years away (I believe Einstein's seminal paper on the Photoelectric Effect was in 1905).

    Yep, physics hasn't changed at all over the past 100 years!

  • You can't win by going to SMP. But maybe you can win by fundamentally changing the architecture of computers.

    We already are offloading video processing to specially built video hardware. We're not doing this very much with sound (yes, you can get your high-end cards to add bass and sound fields, but you can't do OpenGL-like sound calls: play the sound I uploaded to you before as if it were coming from (x,y,z) with this ambient sound...).

    Hard drives are pretty dumb, and general purpose - the way you would set up RAID for, say, video or audio editing (which is called nonlinear, but a 5 second clip is an eternity for a drive) is not the same way you would for a database - and it would be different for different databases.

    And this isn't even taking into consideration network applications - all the hard thinking would be done on a centralized host, with only the visualization being done locally. Your processor is idle 95% of the time, but your video card is busy 100% of the time. If you had a smooth distribution of tasks, you could have 20 people using your CPU if you could have 20 video cards. And if you scale this up, globally, it would be close to a smooth distribution.

    My point is that there are more solutions than just throwing processor power at the desktop.

  • In the long-term future, fabrication plants will probably be moved into space, because a sizeable fraction of the expense is environmental isolation, which is minimized off-planet. This may not necessarily make them cheaper (:)), but they should upgrade more cleanly...

    -_Quinn
  • Nah, at some point we'll actually run out of things which require such processing power.

    I dunno about that one.. Never underestimate the power of Microsoft developers to write incredibly bloated programs and operating systems.
  • I forget his first name

    Gordon.

  • Yet it crashes often enough to be noticeable.
    It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
    Booting takes 5 minutes (NT 4.0)
    Shutting down takes several minutes, too.


    It might be fun and all to bag on NT, but if you're running a PII-400 that's going that slow you've got problems that go way beyond what Microsoft may have done. I'm running a PII-350 here with NT 4.0 WS and it's been running non-stop for 442 hours. The only reason this number isn't significantly larger is that I shut it and my FreeBSD box down when I know I'm not going to be using them for an extended period of time.

    I would strongly suggest you start looking at what kinds of services are running, and the very real possibility that you've got some serious hardware problems. From what little info you've given, I'd be looking at either the hard drive or video card as the primary suspects.

    For myself, I've been quite happy with this PII-350 for everything from web browsing to editing print quality photos in Photoshop. About the only thing that I'd be looking at a faster processor for is Bryce. Ah well, I'll probably need to crank things up to a 4Ghz processor to get the next Doom to play decently though.
  • Nah - way less than half. That's why I buy my stuff six months after it's new - it usually takes about six months for these things to halve in price.
  • I wonder when we'll end up with a small fusion reactor on top of the processor (:

    Oops, better not put it right ON the processor... that's where the 2000W, liquid nitrogen powered cooling unit goes.

  • everything you talk about is an issue of the transistor, whereas the lithography is a production solution to the moore's law problem. there is a lot of work being done on the interconnect problem by looking at different materials. the leakage and power problems are afaik more problems that will have to be dealt with after the introduction of electron-beam lithography. i'm not a device guy so i don't know what the biggest practical problem facing chip makers is, but my guess is that it's the power problem, regardless of the lithography used. (even now it's getting to be a problem)
  • called induced gate current where a MOSFET

    An interesting, yet somewhat overly complex example. A bit more to the point would be talking about how a basic transformer works: one coil of wire inducing current into a nearby coil. You don't require an actual coil of wire to get this effect; you simply need the wires or circuit runs close enough to induce current into their neighbors.

    with supposedly infinite input impedence

    Just to get into the nit picky here, but a MOSFET is only said to be very high input impedance. In basic electronic components there's no such beast approaching "infinite" or "perfect" anything.

    Jumping away from MOSFETs for a moment, I recall reading some articles a while back as the micron size dropped to 0.14. One of the problems the engineers were having to face was radiated electrons being generated by the solder on the board. Normally this radiation is so low as to be hard to even measure, yet it was causing these new sensitive circuits to trip gates and such.

    As the size of these things drops, there are going to be all kinds of noise problems that wouldn't have been considered before. Coupling this with current-induction problems, which as you pointed out increase with frequency, these engineers have a LOT to work out. Simply inventing a more accurate carving knife is most likely only going to prove to be 30% of the overall problem and its solution.
  • The current speed record for a digital flip-flop is 770GHz..

    That's nice and all, but it really doesn't matter how fast a *single* transistor can work. The problem is that the whole circuit must work at a given speed. And that speed is limited by the speed of light.

    For example, say our chip is 10mm x 10mm and we have to send a signal from side to side during a clock cycle: the time required for the signal to reach the other side is distance/speed = (0.01m)/(299792458m/s) = 3.34e-11s. Now if we need to send a signal like this every clock, the maximum clock speed we can achieve is 1/(distance/speed), which works out to about a 30GHz chip.

    Of course chip designers are aware of this and design chips so that no signal needs to be sent across the whole chip, but even if the greatest distance a signal must travel during one clock cycle is 1mm (one tenth of the chip), we only get to about 300GHz - and only if this is our only bottleneck, since signals in real on-chip wires propagate well below the speed of light. Also, in that case the average latency for an operation to cross the chip is at least 10 clock cycles.
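
    The same ceiling in a couple of lines (a sketch of the parent's argument; vacuum light speed is the absolute best case, and on-chip wires are considerably slower):

      # Maximum clock rate if a signal must cross a distance d in one cycle,
      # propagating at best at the speed of light.
      c = 299_792_458.0  # m/s

      def max_clock_ghz(distance_m):
          return c / distance_m / 1e9

      print(max_clock_ghz(0.010))   # across a 10 mm die: ~30 GHz
      print(max_clock_ghz(0.001))   # 1 mm longest path per cycle: ~300 GHz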

    In the end, one should notice that the problem of memory being too slow compared to the processor isn't going away in the future, because we surely need those signals from our memory chips, and yet again we are limited by the speed of light. Expect to see memory chips really near the CPU in the future...

    The only way to always double the computing power in the (not so distant) future is to invent a way to transfer *information* faster than light. If it's possible - I don't know.
    _________________________

    We already are offloading video processing to specially built video hardware. We're not doing this very much with sound (yes, you can get your high-end cards to add bass and sound fields, but you can't do OpenGL-like sound calls: play the sound I uploaded to you before as if it were coming from (x,y,z) with this ambient sound...).

    How about OpenAL [openal.org]? I think we should also have OpenIL (Input Library) and perhaps OpenFL (Feedback Library) for control devices - think about how you could treat keyboards, joysticks, and insert-your-favorites-here input devices as one from a programming viewpoint.
    _________________________

  • "MicroSoft bloatware shall also double every 18 months to fill available cycles and storage!"
  • Funny?
    Yeah, moderator, this post is hilarious. Right up there with Algorithms in C.

    -Pete

  • Our knowledge of physics is Physics. Physics is about building models. To "believe" in the models is, I suggest, unscientific. To suggest that Physics is reality is to ignore history. Newton's reality is not the reality of today. In another 100 years the models will have changed, so Physics will have changed. "Reality" will be the same, but that's not Physics.
  • Here's [tuxedo.org] a good explanation.
    --
  • Now _this_ is news for nerds and matters. This is what I read /. for: Stuff that I have to read and reread a time or two to fully digest. I feel more informed and enriched. Cool.

    psxndc

    Open source sig: You decide what this should say

  • by Detritus ( 11846 ) on Friday September 22, 2000 @11:20AM (#760639) Homepage
    Even if you have infinite resolution lithography, what about all of the other problems that become important when devices get smaller? I've read about several, such as quantum tunneling, higher noise, lower breakdown voltages and increased susceptibility to damage and electromigration. Not everything scales in a linear fashion.
  • Now, new research (Jonathan Dowling, JPL/Caltech, 818-393-5343, Jonathan.P.Dowling@jpl.nasa.gov) illustrates that the Rayleigh criterion is a limit of classical, pre-20th century physics--and not of the "quantum" physics discovered and explored since the 20th century.

    I particularly like this comment, as it shows that the mysterious nature of quantum physics can be intriguing. Most of the time quantum physics is looked on as a hindrance of sorts for developing technology. Now, with this innovation in silicon lithography and the advent of quantum computing research, it looks as if the tables have turned. The strange nature of quantum physics is being harnessed to technology's advantage. Man has found ways to adapt. Soon it will be energy harnessing or communications. The quantum world is endless.

    Even the samurai
    have teddy bears,
    and even the teddy bears

  • This guy just reads /. regularly and noticed that every 18 months there is a new article about CPUs that are 2 times more powerful than those in the previous one....

    --
  • I'm not particularly concerned about hitting a theoretical limit to hardware power. At the moment, I'm typing on a system that would have been unimaginable 20 years ago.

    Yet it crashes often enough to be noticeable.
    It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
    Booting takes 5 minutes (NT 4.0)
    Shutting down takes several minutes, too.

    Maybe hitting a limit to processor power will encourage programmers to reintroduce the concept of "knowing how to write good code." Lord knows processor speed and cheap memory have made it possible for even the best programmers to stop thinking about code quality.
  • IIRC Moore's law didn't directly address size or material. Assume at some point we get better semiconductors or move on to optical processors.


    It's all true! ±5%
  • by Anonymous Coward
    Click here [intel.com] for Moore's bio, or here [intel.com] for the summary of his original hypothesis and a couple of humorous corollaries.
  • Hrm... this time next year, 2 GHz processors? (The 1 GHz was first announced about 6 months ago, right?) So that means we're likely to have 4 GHz processors in March of 2003? Cool... maybe by then I can afford something better than a 233 MHz...

    Now, does the cost of processors go down in anywhere near a nominally similar relationship? When we have 2 GHz processors, will the 1 GHz processor cost half (or close to half) as much?
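
    For what it's worth, here is that extrapolation spelled out (pure doubling-every-18-months arithmetic from an assumed 1 GHz part in March 2000, not any vendor's roadmap):

      from datetime import date, timedelta

      # Naive Moore's-law clock extrapolation: double every 18 months.
      start, base_ghz = date(2000, 3, 1), 1.0
      for i in range(1, 4):
          when = start + timedelta(days=round(i * 18 * 30.44))
          print(when.isoformat(), f"~{base_ghz * 2**i:.0f} GHz")
      # ~2 GHz in the second half of 2001, ~4 GHz in early 2003, ~8 GHz in 2004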

    Kierthos
  • From the Jargon File (heavily summarised):
    Gordon Moore first suggested the law in 1964 (although the time was twelve, rather than eighteen, months then), and co-founded Intel in 1968.
    (End of summary).

    It isn't really a law, but seems to have held for at least the past twenty years, and before that at the higher speed. (Strictly speaking, nothing should be considered a hard and fast "law" in most sciences - they are all unproven conjectures. They start being called laws if they hold for long enough to convince most scientists of their utility and accuracy. But I'm sure you knew that anyway).
  • wow! sengan? BOredAtWork? All we need is Meeept! and we've got a frickin /. old-timer's reunion going here.

    Hell, with a 4-digit user #, even I'm an old-timer these days!
  • by crgrace ( 220738 ) on Friday September 22, 2000 @12:12PM (#760648)
    For computer chips this means that as the physical features on the chip get smaller and closer together, the electrons will be able to tunnel from one wire to another. This is called a tunneling current. As features approach 100nm it becomes fairly noticeable, and you have to start taking it into account.

    I've never heard of electrons tunneling between wires. This would be a severe, perhaps fatal, form of crosstalk, and even in a 0.1um technology the wires aren't necessarily anywhere near that close together. What you do see, however, is something called induced gate current, where a MOSFET with supposedly infinite input impedance exhibits a bias current into its gate. This is because the silicon-dioxide layer between the gate and the channel is so thin that electrons in the channel can tunnel through the gate oxide and escape out the gate lead. This tends to make the MOSFET look a little like a Bipolar Junction Transistor, which people have been dealing with forever. The main effect of this induced gate current is increased power dissipation.

    What is interesting is that a similar induced gate current can occur when operating a MOSFET at very high frequencies. The problem here is that when the frequency gets too high the capacitance between the gate and the channel tends to short out and provide a conducting path through the gate terminal. This is observed (and taken into account) in CMOS wireless/RF circuits.

  • Hey, whats up! Haven't seen you in forever... thought you were long gone when they sold out.

    Post more, please! Put something on the front page that goes against the slashdot party line, 'k? Just for old times sake?

    ______

  • Please let meept post. PLEASE let meept post!

    God I loved that guy. Well, actually I loved how everybody got all reactionary against him. Grits just ain't the same.


    What do I do, when it seems I relate to Judas more than You?
  • So yeah, of course it's going to take up the processor time - we're asking our operating systems and programs to do WAY more than they did even just a few years ago. It has nothing to do with bad code.

    It has everything to do with bad code. The bad code is in layers. The Windows kernel has bad code in it, the GDI has bad code in it, the GUI layer has bad code in it, Explorer has bad code in it, applications have bad code in them. It snowballs. It is also difficult to avoid, unless you focus on the particular problems you are trying to solve, rather than just making a big desktop thingy that's self-referentially designed around manipulating and customizing a big desktop thingy. I think the KDE and Gnome people have started realizing this. Once you start running down that road, you end up in the same place.

    We're definitely at the stage where re-architecting software can pay off much more than Moore's law. The Moore disciples are willing to put up with crap, because they know they can get 2x faster crap in under two years. They could get a 10x speed-up in less time if they just realized they were using crap and looked for alternatives.
  • I wonder when we'll end up with a small fusion reactor on top of the processor (:

    LOL. Wonder what would happen if you tried to overclock something like that...would your computer become a mini-Chernobyl?

    =================================
  • These other guys missed the real question: what makes Moore's Law a Law instead of a Theory or Hypothesis or whatever. Answer: nothing. Computers have had such an immediate and close relationship with our culture and society that linguistic rigor fell victim to slang and momentum. More specifically, the process seems to have been: Murphy's "Law" seems to work, so everything that seems to work will now be a Law.

    Moore's is a Hypothesis in the classical sense. Seems to work right out of the gate, but who knows for how long? Not as long as Gravitation has held up, certainly. Evolution and Relativity are still theories, and Moore's Hypothesis is written on a Bazooka Joe wrapper compared to those.

    OT- For all those people who complain that anime posts are not "news for nerds," this article is as close as /.'s gotten in a while. Count the posts and tell me why...

    -jpowers
    We already are offloading video processing to specially built video hardware. We're not doing this very much with sound (yes, you can get your high-end cards to add bass and sound fields, but you can't do OpenGL-like sound calls: play the sound I uploaded to you before as if it were coming from (x,y,z) with this ambient sound...).

    We used something like that on SGI's Onyx some years ago, using an add-on box. You could download sounds, pitch them, specify full 3D positioning (great if you had enough speakers), speed (for doppler effects), etc.

    Maybe somebody would see a market in it if people start buying more than 2 speakers (8 perhaps? ;-)) for their PCs.

  • Likewise, streaming video: "So it's like TV, only in a little teeny sub-section of a regular size TV, and the clips are a few seconds long and sometimes break up for no apparent reason."

    Personally, I'm glad to be living in these enlightened times. I pity the poor saps from that primitive generation.


    Now that the Olympics are on, I'm watching more TV than the whole rest of the year (excepting college bowl season) combined. My TV is a dust magnet, and my PC, even with 10,000 channels of shit to choose from (unless there's a 24 hours Flintstones channel!) will suffer the same fate. The great outdoors beckons and the call of the wild is strong in this one. No tech substitute for that, never will be.


    It's all true! ±5%
  • by Procyon101 ( 61366 ) on Friday September 22, 2000 @11:28AM (#760664) Journal
    It's amazing that something as revolutionary as the single chip computer could come out of an engineer staring at thirteen separate schematics and saying, "Ok, but what about doing this with one chip?" And then being in the right circumstances to do it.

    The single-chip CPU is arguably the most important development of the late 20th century, and its exponential improvement (Moore's Law) is what drives the information economy. So what happens when Moore's law runs out?

    If current trends are projected forward, by 2020 a bit of memory will be a single electron transistor, traces will be one molecule wide, and the cost of the fabrication plant will be the GNP of the planet. The speed of light imposes practical limits on how large you can make a chip and how fast you can clock one. This is why we'll have GHz chips, but fundamental physical laws prevent THz chips.

    More importantly, the physical limits that shut down THz electronic computers apply to _any_ classical computing architecture; optical computing and other exotic technology can't beat the speed of light, or single-particle storage problems.

    You can't win by going to SMP, because at best you get a linear increase with each processor; exponential increases in power require exponential increases in processor number, which require exponential increases in space and power consumption.

    The only basis in physics for continuing Moore's law past classical computing is quantum computing. In a quantum computer, N quantum bits (qubits) correspond to 2^N classical states. This allows you to build a computer whose power scales exponentially with its physical resources. Quantum computing isn't a solved problem, but if and when it is, it will be a revolution as big as the first single-chip CPU.
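
    A tiny illustration of why that scaling is so different from adding classical bits (my own sketch; simulating N qubits classically means tracking 2^N complex amplitudes):

      # State-vector size needed to describe N qubits classically.
      for n in (10, 20, 30, 40):
          print(f"{n} qubits -> {2**n:,} complex amplitudes")
      # 40 qubits already need about 10^12 amplitudes.
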
  • How many other potential and real limitations are there on lithography-like processes?

    One potential problem that has been solved (so far) is the problem of mechanically positioning things with a very high degree of accuracy. An actual IC is composed of several layers "printed" by several different masks, and each mask must be positioned over the wafer precisely so that the different features of a component (e.g. a transistor) are properly aligned.

    How accurately can we position things today? How much better can we get? Are there other kinds of process limitations that have to be solved in order to take advantage of smaller features?
