Lightning Rods for Nanoelectronics

dcunning writes "Over the last several years (in my short view) there has been a fairly constant hum as to whether or not processors will continue to be able to keep up with Moore's law. Usually this question (and the arguments answering it) is phrased in terms of the ability to continue to shrink transistors/wires/etc. and escape such things as electron tunneling. Scientific American has an interesting article titled Lightning Rods for Nanoelectronics discussing the hows and whats of another issue: handling electrostatic charges as devices become smaller (and hence more sensitive to both the shock and the resultant heat). After all, being able to build a 100GHz chip is useless if merely breathing on it will fry its circuitry."
This discussion has been archived. No new comments can be posted.

  • Design the chips to be self-repairing.
    • Good luck teaching those chips to solder. ;)
    • Design the chips to be self-repairing.

      If that is possible, the next logical step I see is self-building chips. I have had a weird idea for a long time; I know most people will say it is physically impossible, and they are probably right. But if my idea turns out to be possible, it is really going to make a fantastic chip. Imagine a chip that could build a copy of itself, not at the same size but smaller in area. If the chip could encapsulate a smaller copy into itself, we could start having fun. If the chip could make two smaller copies of itself, and the children keep up the same principle, it would be ready for business. I call this fractal computing. Imagine if it were possible on every layer to increase the speed by just a few percent.
  • insulation (Score:5, Insightful)

    by anotherone ( 132088 ) on Thursday September 26, 2002 @09:57AM (#4335871)
    being able to build a 100GHz chip is useless if merely breathing on it will fry its circuitry.

    Why? Couldn't you put it in a glass ball or something rather than a standard PGA type chip? A non-conductive oil bath maybe?

    • Well, the chip still has to communicate with the outside world somehow. Optical interconnects are still quite far off. Come to think about it, so are 100 GHz chips.
      • Posters' revenge... Read The Article!
        On page 4 (web version): "Already IBM has demonstrated 200-GHz transistors in the laboratory and is manufacturing 120-GHz technology."
        • Multiple-hundred-GHz transistors aren't necessarily that meaningful. It's multiple-hundred-GHz ICs that are at issue (even if this article includes this irrelevant quotation). Hell, 20GHz transistors have been commercially available for many years and are used in things like logic analysers and RF transmitters. They're not all that tiny, and not particularly more susceptible to ESD. Just because IBM can run a transistor at 200 GHz doesn't mean that they're anywhere near running ICs at that speed.
    • Re:insulation (Score:2, Insightful)

      The article also makes the point about handling during manufacture. Yes, you can control that environment better than the use environment, but you still have to have a way to deal with it.

      Actually, I was confused at first and thought this was about nano-technology, not nanometer scale integrated circuits. At some point, I would expect a major technology shift away from electrical circuits toward something else. After all, cells and neurons don't have these problems. The problem comes from having all those tiny and very long metal interconnects.

    • Actually, insulation is the last thing you need. It's precisely static fields, which require insulation to develop, that cause the problem. For instance, one of the smallest research and military high-voltage generator designs, a modified Van de Graaff, uses a flow of an insulating fluid instead of the rubber belt of the high-school version. Thermal flow in an oil bath could build up exactly the charge you want to avoid. And a glass ball could bring its own problems: a static field could go through it as if it weren't there.

      The other problem, of course, is interfaces which have to pierce the shield, and where the main electrical nasties tend to get in.

      Funnily enough, back around 1990 there was some interest in using those arrays of points mentioned in the article to go back to tube technology for hardened military devices. So much so that we tried to evaluate them as possible ESD protectors for telecoms applications. I have a feeling that they are one of those ideas which goes around every 10 years or so, starting when someone works out once again what the overvoltage protection spikes on overhead power cables are for, and works down the scale.

    • Re:insulation (Score:2, Informative)

      by theMightyE ( 579317 )
      Why? Couldn't you put it in a glass ball or something rather than a standard PGA type chip? A non-conductive oil bath maybe?

      Having non-conductive stuff surrounding your chip is the wrong way to go - you really want to have something that conducts electricity. Static is caused by the buildup of electrical charges that don't have anywhere to go. If you get enough of them, and if they happen to be in some inconvenient place such as on the gate of a transistor that isn't electrically well connected to anything (because, say, the next transistor up the line that controls this one happens to be turned off), the charges can break through the insulating layers between the transistor gate and the substrate of the chip (substrate -> the big hunk of silicon or whatever that the transistor is built on). This can cause catastrophic damage to the transistor gate, and then the chip don't work no more.

      It takes much more charge to break down a transistor gate than it does to simply turn it on, so the trick is to cover it in something that conducts well enough to bleed off excess (i.e. static) charge, but not so well that it shorts out the device. Add to that the fact that the material needs to be a good thermal conductor, not contaminate the chip with anything that messes up the semiconductor chemistry, etc., and it becomes a pretty tricky materials problem.

      At work (I make chips) we have conductive floors, conductive rubber pads for people to work on, and anyone who handles the chips needs to have a grounding strap on. We also sometimes use air ionizers in regions where chips are exposed so that the air itself becomes somewhat conducting. When I think about how much more sensitive a modern processor is than the devices I work with, it's amazing to me that they work at all by the time they make it to the average user's home.
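The "conducts well enough to bleed off charge, but not so well that it shorts out the device" trade-off above can be sanity-checked with back-of-the-envelope RC numbers. This is an illustrative sketch, not anything from the article: the capacitance, voltage, and resistances below are assumed, human-body-model-style values.

```python
# Why "conductive, but not too conductive" works for ESD control.
# All constants are illustrative assumptions.
import math

BODY_CAPACITANCE_F = 100e-12   # human-body-model capacitance (assumed)
CHARGE_VOLTS = 3000.0          # a typical walk-across-the-room charge (assumed)

def discharge_time(r_ohms, v_start, v_safe=10.0, c=BODY_CAPACITANCE_F):
    """Time for an RC discharge to decay from v_start down to v_safe."""
    tau = r_ohms * c
    return tau * math.log(v_start / v_safe)

# A hard short (say 10 ohms) dumps the charge almost instantly -- and the
# resulting current spike is exactly what damages gates.
peak_current_short = CHARGE_VOLTS / 10.0    # hundreds of amps, peak

# A static-dissipative path (e.g. the ~1 megohm resistor built into a wrist
# strap) bleeds the same charge in well under a millisecond, at harmless
# milliamp-level current.
peak_current_strap = CHARGE_VOLTS / 1e6
t_strap = discharge_time(1e6, CHARGE_VOLTS)

print(f"short: {peak_current_short:.0f} A peak")
print(f"strap: {peak_current_strap*1000:.1f} mA peak, drained in {t_strap*1e3:.2f} ms")
```

The megohm-range path drains the charge in a fraction of a millisecond while keeping the current harmless, which is the same "bleed, don't short" principle the conductive floors and ionizers serve.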

  • Evil ESD (Score:5, Informative)

    by resonance ( 106398 ) on Thursday September 26, 2002 @09:57AM (#4335872) Homepage
    This is a really important consideration. Most people don't even know how sensitive modern electronics are to ESD. Heck, you don't even have to TOUCH something to fry it these days; the electric field itself can be strong enough to zap CMOS devices.

    Taking a training class on ESD control was a real eye-opener; seeing it demonstrated before my eyes drove home the point that ESD safety precautions are CRITICAL when working on stuff.

    Since taking that class, we have implemented an ANSI/ESD S20.20-compliant service bay for PC hardware, and requested that all our distributors ship us parts manufacturer-sealed (they used to 'test' motherboards before they sent them to us). We have reduced our number of returns from customers immensely since then.
    • Re:Evil ESD (Score:2, Informative)

      by forged ( 206127 )
      And if some of your customers are still doubting, point them to the following literature:

      Memory Errors, Detection and Correction [] (The PC Guide)

      IBM experiments in soft fails in computer electronics (1978-1994) [] (IBM Research)

      IBM moves to protect DRAM from cosmic invaders [] (EETimes)

      All big electronic equipment manufacturers have ESD protection measures in place; however, consumers (and sometimes retailers too) don't even know what it is. I bought RAM the other day, and the clerk was handling the DIMMs with his bare hands right in front of me! I was shocked, and even though I tried to explain, he didn't give a shit :/
      (fortunately for him, the 2 DIMMs worked out fine).

      • Ummm, none of these articles are about ESD; they are about soft errors, caused by cosmic rays or radioactive compounds (usually in the lead in solder bumps). These don't cause any permanent damage to the circuits (unlike ESD can), just data corruption (if there is no mechanism to detect and correct errors).

        Back when I was working for HP designing PA-RISC microprocessors, I got to spend a weekend at a "facility" in Leadville, Colorado (>10,000ft elevation, more cosmic rays up there) to measure soft error rates in the large caches of the PA-8700. Note, I say "facility" because it looked like a building that should have been condemned, until you went inside and there were all kinds of computers and cosmic ray counters, but no AC, so it got damn hot! And we actually did see some soft errors.
      • I too handle DIMMs with my bare hands on a daily basis, yet, surprisingly, I've only fried one DIMM in the past two years.

        How is this possible, you ask? Well, even if your workplace isn't designed to limit ESD (our shop's working areas are), all you have to do is make sure that you ground yourself before you touch sensitive equipment. All of our workbenches have metal frames that are grounded, and they're all over the place, so all you have to do is tap one before you touch your equipment and you're set. You VERY rarely spontaneously generate a static charge just by standing (or sitting) around.

        On top of all that, I think you have to give the equipment designers more props for the good job they do designing ESD protection into the equipment. RAM sticks and such are not nearly as vulnerable, in my experience, as you suggest.
      • I bought RAM the other day, and the clerk was handling the DIMMs with his bare hands right in front of me! I was shocked, and even though I tried to explain, he didn't give a shit :/

        Handling RAM with your bare hands is NOT a big deal unless you are pulling it out of the socket while it's on, or you are in an incredibly high-static environment.

        ESD will really only cause damage when there is a constant source of power being applied to a semiconductor. Only then can the kind of cascade effects that the article talks about happen. The amount of ESD you generate in a normal environment by normal movement will not damage unpowered semiconductors.

        • If you walk one yard with a tray of IC's and without ESD protection, you can blow them all.
          • Maybe if you are walking on shag carpet in the middle of the desert on the least humid day of the year while shuffling your feet more than Michael Jackson, but under normal circumstances absolutely not.
    • Re:Evil ESD (Score:2, Informative)

      Yeah, the evil with ESD is the fact that the majority of problems aren't 'catastrophic', that is, they don't fail immediately. This means you can blast something with ESD and have it pass final test before you ship it to your customer, who then has it fail on site (which, if you follow the 10X law of manufacturing, is a complete bummer).
      The biggest bummer is that no matter how seriously you treat ESD, if anyone else in the chain of handlers/customers/suppliers hasn't treated it with the same care, it's still fuggled.
      • Exactly! That was some of the information we went over in the training class for it. Most of the damage only _weakens_ components, and causes them to fail in, say, six months instead of six years.

        Same thing is true with hard drives, which people don't realize are super-delicate. If you drop a drive more than an inch or two, the heads slap the platters and cause tiny scratches and dust to be released. The drive will test fine and work fine for a while, but will fail with dead sectors far sooner than a properly handled drive.

        I really wish the rest of our employees could have gone to this training class. They all think of me as the ESD Nazi because I'm always on their backs about using their wristbands and component handling. To them, it's the rumored unseen threat of the ESD monster; to me, it's the real threat of something I've *seen*.

        A demonstration by an ESD trainer is a great way to show people. When they see on a meter that they generate seven hundred volts of charge on their body by simply lifting their foot, it hits home. Our training was by a 3M guy, hosted by Contact East. I'd recommend it to anyone; he was a fantastic presenter.
        • When they see on a meter that they generate seven hundred volts of charge on their body by simply lifting their foot, it hits home.

          So what? It's excess current and heat that causes damage. A few hundred volts at 10 picoamps isn't going to do shit.

    • Well said. It reminds me of a business plan I once cooked up. ESD doesn't always destroy devices; it can degrade their performance, right? So you persuade audiophiles with more money than sense, the folks who happily cough up for gold-plated connectors, CD clamps, all that voodoo, that ESD is their worst enemy. You sell them gold-plated wriststraps, anti-static mats in fancy packaging, leakage testers, the works. All low-cost stuff, but at a loony price. All because ESD does have audible effects if it can damage electronics. Of course vacuum tubes are more rugged, but building a CD player out of valves would be scary ...

    • Re:Evil ESD (Score:2, Insightful)

      by Vinnie_333 ( 575483 )
      Another thing to keep in mind is that ESD issues are not the same the world over. Because of different climates, some areas of the world are virtually immune to ESD. Unfortunately, these are the regions of the world where we are having our sensitive circuit boards designed and built. They don't even understand our concern over ESD! When these parts are used in the USA, they get fried relatively easily.
  • did they do this on Star Trek? You know they had to be using at least 100GHz.
    • did they do this on Star Trek?

      They didn't. Surely you've seen how many times their electronic consoles would explode into a shower of sparks, knocking a redshirt against the opposite wall. Not shielded very well, I'm afraid.

    • If I remember correctly from the techno-babble of the STNG Technical Manual...

      They use a form of isolinear optical chips, somehow interacting with a subspace field to operate at faster-than-light speeds and give a 150% performance increase or so... God knows how that would work, but think about the data transfer needed to, say, convert a human into or from pure energy/data in the span of 5 seconds or so, and something semi-mystical's GOT to be required, I guess :)

      (Anyone with access to the manual, feel free to correct me, just working from memory :))
  • not to mention that 100GHz is silly to contemplate (mainly because of the frequencies involved)...
    the GHz trap is working, even here at slashdot....
  • Performance/Price (Score:2, Insightful)

    by e8johan ( 605347 )
    It is all about how much performance per dollar you can deliver. If you only get a 50% yield from your processes since the chip can't take the real world, you probably get a bad ratio.

    There was a similar discussion [] concerning clock frequencies earlier today, and I'd say that the same arguments work here too.

  • Yes, and? (Score:2, Informative)

    I'm sort of stuck for anything to say other than "and?".
    Basically, for this stuff to be a problem it needs to be in widespread manufacturing, and that's not going to happen for a long time (we are still using 0204 [2 mil x 4 mil] discrete components, for example, and 00501s are available but we aren't using them) due to the cost of production.
    Otherwise, yes, ESD is a problem, and the only answer is better ESD handling and better circuit design to counter ESD issues. Current TTL electronics can be utterly blown by someone touching them, so it won't be any different.
    • d'oh! As an aside, flip-chip devices and the like are still not in common industry use (considering the industry has barely got BGA down pat).
  • Dust (Score:1, Offtopic)

    by Kryptoff ( 611007 )
    Ever wanted to remove the dust that accumulates inside your computer? Think again about how to do it. :-) I know someone who tried to remove the dust with a vacuum cleaner. Guess what didn't work anymore... :-)
    • Well, I cleaned my CPU fan etc. with my vacuum cleaner... it's still working... and at a lower temperature due to less dust etc.
      • don't want something bad to happen.
        (it's actually less of a problem sucking as opposed to blowing)
      • Well I cleaned my CPU fan etc with my vacuum cleaner.

        I have tried that too. 1-2 years after I bought my computer, the CPU fan started getting very, very noisy. I tried cleaning it with a vacuum cleaner. It did get cleaner, and it was still spinning. But the noise remained. After trying this three times I realized that something more drastic had to be done. At last I looked at the small label on the back of the fan: "Low noise, long lifetime" it said. Yeah, right, I thought. I lifted one side of the label and gave the bearing a little oil. Since then it has been working better than ever before.
    • I know somebody who did this too. Actually, a vacuum works if the canister is kept far away from the computer, and you use proper attachments (for some, you can buy PC cleaning attachments). However, one person I knew used a dustbuster to clean inside the computer. It wasn't that it sucked up anything important or jarred something; it was the electromagnetic field caused by the dustbuster motor that fried a fair bit of sensitive circuitry. And this was back in the old days, when chips were not quite so sensitive as now.

      The moral of this story? It's good to get the dust out because it's bad for your computer. It's very bad to use anything that generates electromagnetic fields at close range...

      compressed air=usually ok
      vacuum=not great
      dustbuster=that ominous blank screen when you turn on a PC
      - phorm
  • Not frightened yet (Score:4, Insightful)

    by plover ( 150551 ) on Thursday September 26, 2002 @10:04AM (#4335935) Homepage Journal
    C'mon, people. It's like the corollary to Moore's Law: every eighteen months, someone has to publish an article explaining why processor development will stop keeping up with Moore's law within the next eighteen months.

    I remember once reading why they'd never be able to break the 25MHz barrier, and another piece bemoaning the fact that we'd never be able to produce submicron traces.

    While I know it won't be me, there will be some clever person somewhere who will wave their magic wand (figuratively) and dissipate static electricity problems. I refuse to believe that the market will let manufacturers STOP hunting for solutions.

    • by QEDog ( 610238 )
      I have seen many posts here that disregard the serious technical limitations imposed on classical computing by just saying 'Engineers will solve it, they always do'. That is like saying that faster-than-light travel is only an engineering problem. New computing paradigms are needed. Most predictions say that most of us will witness Moore's Law fail for quantum mechanical and thermodynamic reasons. Instead of blindly pretending that the engineers will magically solve the problem, it would be more proactive to start learning [] more about the prospects of the next generation of technologies []. We need to think, not hope for something magical to happen.
      • I applaud your hubris. You must be young.

        Once upon a time, way back in high school, I used to think that someday I'd be working for General Instruments coming up with the next revolutionary chip. (Yes, that's a clue to my age.)

        But I grew up. My field is now software. If they build it, I will come; but I can't build it myself anymore. I can barely hold up one end of a conversation regarding the damage static electricity might do to a chip.

        Don't get me wrong: I can read the article and appreciate the difficulties the engineers will go through in trying to solve their problems. But I can't solve this problem. I already have a day job, and coming up with crack ideas for chip fabricating isn't it. I know that.

        And I'm not alone. You don't think Gordon Moore is still in the lab saying, "well, if you developed a laser that operated in the X-ray spectrum, you could touch-up etch some smaller pathways to optimize the register pipeline", do you? News flash: his job is in the board room, promising shareholders that Intel's gonna make money this year, really, because they have great scientists who are on the verge of making a .08 micron breakthrough in three years.

        So get off your high keyboard. Either go work for a chip fabricator and do it yourself, or understand your own limitations. But don't tell me that my reading an article and rubbing a couple of dusty old neurons together is going to come up with Intel's next big breakthrough. I just trust that by offering them enough money for faster chips, they'll be pressured into developing something better than they have today.

        • To tell the truth, I'm a former computer engineer who quit a good software job to get a PhD doing physics research in the quantum information field. My point is that you shouldn't ignore the problem. It is important not to forget the big picture. Remember, in your and my professional lifetime we will see the fall of Moore's Law. And you will care about that then. Maybe too late.
    • Well, there is a limit.
      You will always need at least one electron to flip a gate.
  • by wackybrit ( 321117 ) on Thursday September 26, 2002 @10:04AM (#4335936) Homepage Journal
    Slashdot covered clockless chips briefly a few months ago. Why do they make sense? To learn why, let's compare computers to real life industry.

    In the 1800s, industry was limited to a few very large factories and workplaces. Over time, these factories became bigger and bigger and faster and faster, until eventually it became impractical to make everything in one place. So.. things were decentralized. Now when your car is built, the raw materials come from Brazil, the parts are made in Taiwan, then the cars are built in America.

    Processors are headed the same way. Things are becoming decentralized, and the load on the processor should, therefore, go down. The giant leaps and bounds with video cards have actually caused CPUs to have less work to do. No longer do CPUs have to do nasty 3D calculations.. the video cards do it!

    Clockless chips work very well in decentralized situations, since they operate based on incoming data, rather than to a clock. [] This means thousands of non-standard components can work together to produce the same result as one CPU.

    Even -car- engines are becoming decentralized now, with specialist automatic gearboxes, electric backup motors, and pseudo-petrol engines in the Prius and Insight. With processors it makes even more sense.


    Business 2.0 article on Clockless Computing []

    Economist article on Asynchronous/Clockless chips. []
  • by hatchet ( 528688 )
    There will be about a 20-year delay in Moore's law... but then it will be able to keep up again. Not with classical silicon technology, but rather with some other advanced technology... transistors at the quantum level? Or maybe advanced molecules able to compute with electrons. We shall see...
  • It's too early (Score:1, Offtopic)

    by Raul654 ( 453029 )
    Was I the only one who read that as "Lightning rods for narcoleptics" ?
  • Just Get the Mag! (Score:4, Insightful)

    by z84976 ( 64186 ) on Thursday September 26, 2002 @10:22AM (#4336061) Homepage
    Month after month, I see postings here on slashdot pointing out some thing or another in Scientific American. Just subscribe to the PRINT EDITION and get the same info weeks ahead of the "fast electronic web version"! This was on the cover of the print edition that came to my house a month ago!
    • Seriously, there should be a BAN on anything from SciAm or NYT, or ZDNET. I mean, for Chrissakes, I read slashdot for the more obscure, cool, and important news that I can't get from mainstream. If it's just going to be a NYT/ZDNET regurgitation, why read it?
      • Because some of us won't waste our time and money on SciAm any more. It has turned into Science Lite for Dummies. If there happens to be a relevant article, I depend on others here to let me know to take a look at it.
  • by Entropy_ah ( 19070 ) on Thursday September 26, 2002 @10:24AM (#4336074) Homepage Journal
    Nanoelectrons with friggin' lasers on their heads!!
  • by dfenstrate ( 202098 ) <dfenstrate@gma[ ]com ['il.' in gap]> on Thursday September 26, 2002 @10:32AM (#4336148)
    What's the point of being able to build a 100GHz chip if merely breathing on it will fry its circuitry?

    For that matter, what's the point of building a circuit so fast that a signal can only go 3mm in a tick? (3.0*10^8 m/s)/100GHz

    I know that signal speed is only a substantial fraction of lightspeed, which makes the problem even worse - can you make a viable processor that small (3mm)? Wouldn't you have to design it so the chip basically doesn't wait for the previous cycle to finish?

    I know 100GHz is just an off-the-cuff example, and I don't know much about processor design, so please enlighten me - it just seems like we're going to have to go completely different routes pretty soon.

    and no, I have not read the article.
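    The 3mm-per-tick arithmetic above can be sanity-checked in a couple of lines. This assumes the signal propagates at the full vacuum speed of light; real on-chip signals are slower, so the actual distance is even shorter.

```python
# Distance a signal can cover in one clock period, at vacuum lightspeed.
C = 3.0e8  # speed of light, m/s

def distance_per_cycle_mm(freq_hz, velocity=C):
    """Millimetres travelled in one clock period at the given frequency."""
    return velocity / freq_hz * 1000.0

print(distance_per_cycle_mm(100e9))   # 3.0 mm at 100 GHz
print(distance_per_cycle_mm(3e9))     # 100.0 mm at 3 GHz
```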
    • Well, by the time you can build 100GHz chips, your transistors will be so small that you'll be able to fit a LOT of transistors in that 3mm. So I don't think it will be a waste to build a chip like that.
    • I know that signal speed is a substantial fraction of lightspeed

      The signal goes around 2/3 of the speed of light (for those who don't know, the signal travels much faster than the electrons themselves).

      As for propagation delays, it's not a new problem. Super-computers have been having that problem for years because of their size. There's no real problem, as long as you account for the delays. In the case of super-computers, it was necessary to get the wire lengths right. For CPUs, manufacturers will have to be careful with the length of the traces. It'll add some challenge, but I doubt it'll be that hard to overcome...
      • Please explain? Suppose we do build a 100GHz CPU. Okay, the CPU itself works. Now, it will have a cache miss at some point. My DIMMs are maybe 5cm from my CPU. That's 32 lost clock cycles just while the signal gets to the chip and comes back from it. Even at 4GHz, a whole clock cycle is already lost to this delay.

        So what's the point of making a CPU so fast? Even if it gets to fit into 3mm how much cache can we put there?
        • Is that right? OK, so at 100GHz, that means each tick is, ummm, 0.00000000001 seconds.

          Speed of light = 299,792,458 m/s; convert that to centimeters for ease of use (*100), so
          = 29979245800 cm/s

          OK, but we know the signal only travels at 2/3 of that speed, so multiply by .66:

          SigSpeed = 29979245800 * .66 = 19786302228 cm/s

          Now, to find how long it takes to travel 5cm, we use s = d/t, so t = d/s:
          5 cm / 19786302228 cm/s = 0.0000000002527 s

          Now compare that time to the clock period:
          0.0000000002527 s / 0.00000000001 s = 25.3

          so it takes 25.3 clock cycles at 100GHz! Hey, you weren't kidding!

        • Right now, with DDR, it takes 5 bus (133 MHz) cycles just to send the row and column address (RAS & CAS) to the DIMM. That means 50 CPU cycles if you have a 1.33 GHz CPU. While it takes a lot of time to access one byte of data, current memory systems are designed so they can send the next bytes much faster. With DDR, all the other consecutive reads will only take half a cycle (that's why we say the bus is at 266 MHz, while the bus clock is really 133 MHz), which is much faster. That's how it's been working ever since the Pentium came out (with EDO memory, correct me if I'm wrong) and much longer than that with super-computers.

          If you want your computer to work fast, you need to access data in memory sequentially. The first byte takes lots of time but the rest comes fast. The speed of light will eventually impose a hard limit on the memory latency one can have in a PC. Despite that, there's nothing preventing bandwidth from continuing to increase. You can keep sending the bytes faster, all it means is that while the CPU is receiving the first byte, you might already be sending the 10th one. No big problem here. Once again the super-computers have been dealing with that for a while because their memory is often far from the processor...
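The latency-vs-bandwidth point above can be put in a toy model. All the numbers here are illustrative assumptions loosely inspired by the DDR figures in the comment, not real memory timings:

```python
# Toy model: the first access pays the full address/latency cost, then
# subsequent sequential bytes stream at the bus rate. Latency hits a hard
# floor (ultimately the speed of light), but bandwidth can keep growing.
FIRST_ACCESS_NS = 37.5     # ~5 cycles at 133 MHz to issue RAS/CAS (assumed)
BYTES_PER_NS = 2.128       # ~266 MT/s on an 8-byte bus, ~2.1 GB/s (assumed)

def sequential_read_ns(n_bytes):
    """Time to read n_bytes sequentially: full latency once, then streaming."""
    return FIRST_ACCESS_NS + (n_bytes - 1) / BYTES_PER_NS

one = sequential_read_ns(1)       # a lone byte pays the whole latency
burst = sequential_read_ns(64)    # a 64-byte cache line amortizes it
print(f"1 byte:   {one:.1f} ns -> {1/one:.3f} bytes/ns effective")
print(f"64 bytes: {burst:.1f} ns -> {64/burst:.3f} bytes/ns effective")
```

The lone byte gets a tiny effective rate, while the 64-byte burst approaches the raw bus bandwidth, which is exactly why sequential access matters so much.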
          • That's interesting, but it still makes me think that a 100GHz CPU would never be available in the consumer market. Sure, the clock speed competition still continues, but for how long? People seem to have already begun to realize they don't need a 3GHz machine for word processing. Who will buy a 100GHz CPU when it fails to be noticeably faster than a 75GHz one? Besides, most supercomputers have multiple CPUs, and I can't think of any game that can use more than one CPU.

            Of course, if what you use it for is cracking encryption, then it will be very useful. But for desktop use it will mostly be doing random memory accesses that will slow it to a crawl. Things like compression wouldn't be very fast on this CPU. For example, according to the bzip2 man page, the algorithm accesses memory quite randomly, and cache memory speeds it up more than clock speed.

            Also, the CPU isn't the only thing that affects performance; the RAM speed and FSB will have to catch up. If (when) they do, you'll still have to keep the CPU busy somehow, and unless you have all the data in RAM, the hard disk will slow it down a lot. Of course, maybe somebody will come up with something better than hard disks by that time.

            I think it's more probable that before we reach 100GHz we'll need to start exploring other ways of speeding things up, like quantum computing, or multiple CPUs in desktop computers.
    • In the article, they briefly mention that IBM has some 200GHz technology in the lab and is manufacturing 120GHz tech. In the context of the article, it seems they're saying that IBM can manufacture a 120GHz circuit, but this is not for a PC. I'm certainly no electrical engineer, so the article did make me dizzy, but it was interesting nonetheless.
    • Pipelining.
      You make the chip such that it does only part of the computation each cycle, and successive parts of the computation are adjacent to each other on the chip.
      You (theoretically) get the same throughput as without pipelining, just with longer latency - as soon as one instruction passes on to the second stage of the pipeline, you can stuff another one into the first.
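A minimal sketch of that fill-then-stream behavior, with hypothetical stage counts and delays; it idealizes away hazards and stalls:

```python
# Ideal pipeline: after n_stages cycles of fill, one instruction retires
# per cycle. Latency per instruction stays the same (or worsens slightly),
# but shorter stages mean a faster clock and far higher throughput.
def pipeline_time_ns(n_instructions, n_stages, stage_ns):
    """Total time to run n_instructions through an ideal pipeline."""
    return (n_stages + n_instructions - 1) * stage_ns

WORK_NS = 10.0  # total logic delay of one instruction (assumed)

unpipelined = pipeline_time_ns(1000, 1, WORK_NS)       # one big stage
five_stage = pipeline_time_ns(1000, 5, WORK_NS / 5)    # five short stages
print(unpipelined, five_stage)
```

With the assumed numbers, splitting the same 10 ns of logic into five 2 ns stages runs 1000 instructions roughly five times faster, even though each individual instruction still takes the full 10 ns to get through.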
    • Whats the point of building a circuit so fast that a signal can only go 3mm in a tick?

      You perhaps don't understand scaling. I once built an 11 million gate equivalent box with a volume of about one cubic foot clocked at 16 MHz. It was almost as difficult as it is to design at 160 MHz, because everything was bigger and slower, but the speed of light is non-negotiable. If you were crazy enough to build say a 1 MHz Pentium out of individual transistors, you'd probably have metre length wires, and you'd still be fighting the speed of light.

      3mm is a long way at 100GHz. Try to think like an ant, or a microbe.

  • IIRC, Moore's law is only concerned with transistor count and not clock speed. This might make it keep holding on. Just take a look at multi-core chips (e.g. POWER4)...
  • by drinkypoo ( 153816 ) <> on Thursday September 26, 2002 @11:14AM (#4336525) Homepage Journal
    Simple enough; You have a two-stage bus interface. You put more and more of the computer into the CPU and then you wrap the CPU up in a package (hopefully just a PGA or what have you, though I suppose you could make the argument for going back to slotted connections) which uses slower logic to do the bus communications.

    You need to put more cache on the CPU's substrate for this, vastly more L2 that is. And a wider memory bus will be necessary, but we're going that way anyway.

    If you got really froggy you could even do this with MEMS; Use a physically breakable connection to supply power to the really delicate stuff and optically isolate it from the bus interface circuitry.

    • 'Wrap the CPU up in a package.' Problem (and in a large sense the major concept that most of the posters here have lost sight of): ALL SEMICONDUCTORS HAVE TO BE PACKAGED! And that packaging has to be cost-effective, reliable, ad infinitum. I/O density is just as much a limiting factor in the package. And we have our magic wands, of course: THEY'RE CALLED PROCESS ENGINEERS! All the wiseacre ideas don't usually make it to production.
  • by Orne ( 144925 ) on Thursday September 26, 2002 @02:01PM (#4337976) Homepage
    I just ran across this article on Yahoo about zirconium tungstate []. It's a compound of zirconium, tungsten and oxygen, with the remarkable property that it shrinks when heated, almost in proportion to temperature, from near absolute zero up to the high 700s Fahrenheit.

    Immediate proposed applications are dental fillings (heat stress is a leading cause of chipped fillings), microchips, and fiber optics.
  • The day-to-day travails of the IBM programmer are so amusing to most of
    us who are fortunate enough never to have been one -- like watching
    Charlie Chaplin trying to cook a shoe.

    - this post brought to you by the Automated Last Post Generator...
