
DARPA Looks Beyond Moore's Law (217 comments)

ddtstudio writes "DARPA (the folks who brought you the Internet) is, according to eWeek, looking more than ten years down the road, to the point when, they say, chip makers will need totally new chip fabrication technologies. Quantum gates? Indium Phosphide? Let's keep in mind that Moore's Law was always more an observation than a predictive law of nature, despite how often people treat it as one."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • The Diamond Age (Score:3, Informative)

    by wileycat ( 690131 ) on Wednesday August 20, 2003 @05:52PM (#6748915)
    I"m pretty excited about the new man-made diamonds that are supposed to be able to keep moore's law going for decades when they come out. Wired had an article recently and a post here on /. too
  • by izto ( 56957 ) on Wednesday August 20, 2003 @05:55PM (#6748962) Homepage
    a) Chips are already "stacked". Layer over layer of silicon.

    b) If you are talking about stacking dice (that is, literally stacking chips inside the package), then the distance the information would have to travel when going through the "vertical interconnects" would be thousands or tens of thousands of times bigger than the distance of any on-chip interconnection. Which means the communication between layers of stacked chips would be thousands of times slower. Not very good for microprocessors.
  • Working link! (Score:0, Informative)

    by rapevictim ( 557748 ) <feedback@goatse.cx> on Wednesday August 20, 2003 @05:55PM (#6748963) Homepage Journal
  • by kfg ( 145172 ) on Wednesday August 20, 2003 @05:59PM (#6748993)
    An educated observation, which is why it basically works.

    Please note that the observation was well enough educated that it includes the fact that its validity will be limited in time frame and that before it becomes completely obsolete the multiplying factor will change, as it already has a couple of times.

    In order to understand Moore's Law one must read his entire essay, not just have some vague idea of one portion of it.

    Just as being able to quote "E=mc^2" in no way implies you have the slightest understanding of the Special Theory of Relativity.

    KFG
  • by binaryDigit ( 557647 ) on Wednesday August 20, 2003 @06:04PM (#6749037)
    hardware has progressed dramatically over the past decade and left software somewhere behind... there is not much use for faster and faster servers when software doesn't keep up the pace... this decade will be a "software decade"

    Not really. The functionality offered by software has pretty much flatlined (with the major exception being "media", e.g. mp3, mpeg, divx, etc.). HOWEVER, the bloat and overhead of software continues to keep pace with (and often surpasses) the speed of hardware. This trend has no end in sight (mo features, mo features, mo features. Lookat those scaled miniature window/icons sitting in my dock updating in realtime, oooh, aaaah. Lookat that 3D rotating desktop). Not meaning to pick on Apple here (I own several myself), but they are at the vanguard of eye-candy code bloat, with Microsoft quickly trying to catch up.
  • Re:Qubit (Score:1, Informative)

    by Anonymous Coward on Wednesday August 20, 2003 @06:06PM (#6749056)
    I thought that quantum computing was probably going to be viable within ten years, and will probably be far more advanced than any of the fabrication methods they listed in the article. Their web site talks a little bit about DARPA's quantum computing projects, but the page seems to be a little outdated. Anyone know if they're pursuing this as well?
    The quant-ph list might have fresher activity, a little more to your liking: quant-ph Aug 2003 [lanl.gov]. Or just check out xxx.lanl.gov [lanl.gov] - yes it's real, yes it's useful, no it's not goatse.

    That said, with the potential applications of quantum computing in cryptography (especially brute-force cracking and decryption), it's unlikely that anything close to the bleeding edge is in the public eye.
  • by Junks Jerzey ( 54586 ) on Wednesday August 20, 2003 @06:10PM (#6749082)
    Moore's Law is already ending. Intel's Prescott (i.e. Pentium 5) CPU dissipates 103 watts. That's beyond anything you can put in a laptop, and it's arguably beyond anything that should be in a workstation-class PC. But it also may not be that we're hitting CPU speed limits, just that we're hitting the limits of the types of processors being designed. Much of the reason the PowerPC line runs cooler than the x86 is that the instruction set and architecture are much cleaner: no dealing with calls to unaligned subroutines, no translation of CISC instructions into a series of RISC micro-ops, and so on. But the same fundamental issues remain: massive amounts of complexity dealing with out-of-order execution, register renaming, cache management, branch prediction, managing in-order writebacks of results, etc.

    Historically, designing CPUs for higher-level purposes, other than simply designing them to execute traditional assembly language, has been deemed a failure. This is because generic hardware advanced so quickly that the custom processors were outdated as soon as they were finished. Witness Wirth's Lilith, which was soon outperformed by an off-the-shelf 32-bit CPU from National Semiconductor (remember them?). The Lisp machine is a higher profile example.

    But now things are not so clear. Ericsson designed a processor to run their Erlang concurrent-functional programming language, a language they use to develop high-end, high-availability applications. The FPGA prototype outperformed the highly optimized emulator they had been using up to that point by a factor of 30. This was with the FPGA at a clock speed of ~20MHz and the emulator running on an UltraSPARC at ~500MHz (clock for clock, that's a factor of roughly 750). And remember, this was with an FPGA prototype, one that didn't even include branch prediction. Power dissipation was on the order of a watt or two.

    Quite likely, we're going to start seeing more of this approach. Figure out what it is that you actually want to *do*, then design for that. Don't design for an overly general case. For example, 90% of desktop CPU use could get by without floating point math, especially if there were some key fixed point instructions in the integer unit. But every Pentium 4 and Athlon not only includes 80-bit floating point units, but massive FP vector processing units as well. (Not to mention outmoded MMX instructions that are almost completely ignored.)
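
    The fixed-point suggestion above is easy to illustrate. A minimal sketch in C of 16.16 fixed-point math done with integer operations (my own example; the 16.16 format and helper names are assumptions, not anything the poster specified):

    #include <stdint.h>
    #include <stdio.h>

    /* 16.16 fixed point: 16 integer bits, 16 fractional bits. */
    typedef int32_t fix16;

    #define FIX_ONE (1 << 16)

    static fix16 fix_from_double(double x) { return (fix16)(x * FIX_ONE); }
    static double fix_to_double(fix16 x)   { return (double)x / FIX_ONE; }

    /* Multiply: widen to 64 bits, then drop the extra 16 fraction bits. */
    static fix16 fix_mul(fix16 a, fix16 b) {
        return (fix16)(((int64_t)a * b) >> 16);
    }

    /* Divide: pre-shift the dividend so the quotient keeps 16 fraction bits. */
    static fix16 fix_div(fix16 a, fix16 b) {
        return (fix16)(((int64_t)a << 16) / b);
    }

    int main(void) {
        fix16 pi = fix_from_double(3.14159);
        fix16 r  = fix_from_double(2.5);
        fix16 area  = fix_mul(pi, fix_mul(r, r));        /* pi * r^2          */
        fix16 ratio = fix_div(area, r);                  /* area / r          */
        printf("area   ~= %f\n", fix_to_double(area));   /* ~19.63            */
        printf("area/r ~= %f\n", fix_to_double(ratio));  /* ~7.85             */
        return 0;
    }

    Only the conversion helpers touch floating point, and those exist purely to make the demo readable; the arithmetic itself is plain integer multiply, divide, and shift, which is the kind of work a few dedicated fixed-point instructions in the integer unit could cover.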
  • by anzha ( 138288 ) on Wednesday August 20, 2003 @06:12PM (#6749099) Homepage Journal

    A lot of posters seem to think that DARPA, the US military, or the US government is a unified thing. It's not. Each part often has its own agenda, and research is very frequently driven by those agendas.

    However, DARPA often CYAs when it comes to research, too. If you come up with a whacky idea that might just work, they will often fund it even though it is in competition with another one they have, the reason being that they can then see which whacky idea actually works. Often none do. Or one does. Or another that seemed like a sure thing doesn't.

    Long story short: if quantum computing doesn't turn out to be all that, they've covered their techno @$$3$.

  • by Bender_ ( 179208 ) on Wednesday August 20, 2003 @06:13PM (#6749107) Journal
    a) Chips are already "stacked". Layer over layer of silicon


    False. There is just one active layer of single-crystal silicon that contains the devices; the remaining layers are interconnects.


    b) If you are talking about stacking dice (that is, literally stacking chips inside the package), then the distance the information would have to travel when going through the "vertical interconnects" would be thousands or tens of thousands of times bigger than the distance of any on-chip interconnection.


    How? Why? The lateral extent of any die is usually bigger than its height. In fact, the distance would be much shorter. Active layers would be separated by less than 100 micrometers.

  • IBM thinks so (Score:5, Informative)

    by roystgnr ( 4015 ) <royNO@SPAMstogners.org> on Wednesday August 20, 2003 @06:25PM (#6749195) Homepage
    They made an announcement about it [ibm.com] less than a year ago. They don't say if they'll be doing anything special about heat problems, though.
  • by pagley ( 225355 ) on Wednesday August 20, 2003 @06:25PM (#6749197)
    Thank Goodness someone has finally said something about it, even if it was just in passing. The bonus is that it is on the front page of Slashdot.

    "Moore's Law" is no more a "law" in the sense of physics (or anything else for that matter), than any other basic observation made by a scientist or physicist.

    Oddly, you'd have a hard time believing it wasn't a Law of Nature, given the apocalyptic cries from the technology industry whenever "Moore's Law" falls behind - spouting that something *has* to be done immediately for Moore's Law to continue, lest the nuclear reaction in the Sun cease. Or something.

    At the time it was coined by the *press* in 1965, only a small fraction of what we now know about the physics of integrated circuits and semiconductors was known. So, looking back, it's easy to see that the exponential trend in density would continue as long as the knowledge and ability to manipulate materials increased exponentially.

    Yes, it is rather surprising that Moore's observation has held true as long as it has. And this isn't to say that the growth trend won't continue, but it will certainly level off for periods while materials or manufacturing research comes up with some new knowledge to advance the industry.

    As the article indicates, things are likely headed for a plateau, possibly toward the end of this decade or start of the next. And at that point, Moore's observation will simply no longer be true or appropriate.

    Let the cries of armageddon begin as "Moore's Law" is finally recognized as an observation that will eventually be outlived.

    For a little "Moore" background, see http://www.intel.com/research/silicon/mooreslaw.htm
  • by randyest ( 589159 ) on Wednesday August 20, 2003 @06:57PM (#6749453) Homepage
    Right on a). Well, mostly -- IBM has a new process that does allow transistors in some area-IO to be placed over logic gate transistors. It's more trouble than it's worth, though (unavoidable interactions are hard to calculate accurately).

    And right on b) -- the distance between 2 stacked dice is much shorter than between 2 side-by-side. But this is mostly moot, as previous posters have noted :). See, the problem isn't that it's further to go vertical from one die to the next rather than packaging each individually and connecting them horizontally. The problem is that it's hard to go vertical. This is true from design, manufacturability, and reliability points of view.

    First, by area-IO I mean input/output (IO) drivers or receivers that can be placed anywhere in an area, rather than only around the circumference (peripheral IO). We have area-IO at the package level (such as BGA, or Ball-Grid-Array, and FCBGA, or Flip-Chip Ball-Grid-Array [best for area-IO, and expensive]) and area-IO at the die level. Do we connect the dice before or after packaging?

    Either way presents problems. Such as (for pre-packaging connections):

    How do you electrically connect 2 area-IO dice? Usually, a die has little square landing pads, only about 50um square, spaced every 200-250um or so on center in 2-D arrays of up to 70x70 and more (see the rough numbers sketched at the end of this comment). To be able to do anything with these tightly packed little signals, we drop special tiny drops of metal that stick to the pads and press this up against a package substrate (ceramic), which includes routes that space those signals out more, to every 1.0mm or so. Even this is expensive and hard to mount to a PCB, since it's hard to ensure both things are perfectly flat (package and PCB) so that all the balls connect.

    In fact, we rely on the package (often including an internal metal "stiffener") to keep the die nice and flat, which helps avoid de-lamination (layers peeling apart). Two dice pressed next to each other would require some space between them to make the connection (i.e., some bumps for the connections and valleys for the no-connect areas), and this, plus the elasticity of the electrical connection medium, would leave enough play to let the dice warp all over the place.

    It'd be even harder to tell which ball(s) aren't connected. We do this now by confirming that the PCB is OK (usually pretty easy, so it makes a good reference), making the chip send specially controllable data out (and take data in on its inputs), then checking what's right and wrong by measuring at the board level. If my "board" is another chip, how do I know which one I am debugging? This debugging (we call it mfg testing) happens to all chips, not just some samples. If it didn't, failure rates would go up to unacceptable levels (like 20-50% or more).

    Testability is hard whether you stack dice before or after packaging. Design is a bizzotch too, since you can't very well even model one whole chip at a time (and how the circuit performs depends on process, voltage, and temperature), much less two chips stacked with an insulator and some kind of very short, very small, very fragile, very noise- and crosstalk-susceptible hunk of 1000+ wires between them. One local hot spot at X,Y on die A can mess up operation at x,y on die B, and we'd never be able to practically predict that.

    Most importantly of all, part of the reason chip design even works at all, and that we can churn them out for pennies each (after massive design and capital outlay for a fab), is that we can simplify the design dramatically by making assumptions, modelling the target device in isolation, verifying it in isolation, and then safely assuming that this (truly wrong) assumption of isolation is close enough to true that the part will work in the system. A single packaged die is relatively infinitely insulated from everything except the I/O we carefully design. Stacked dice would not be -- they would interact strongly with each other in unpredictable ways.
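
    To put rough numbers on the pad geometry described above, a quick sketch in C (my own back-of-the-envelope using the 50um/250um/70x70 figures from this comment; everything else is assumed):

    #include <stdio.h>

    int main(void) {
        const double pad_pitch_um  = 250.0;   /* die-level landing pad pitch */
        const double ball_pitch_um = 1000.0;  /* package-level ball pitch    */
        const int    rows          = 70;      /* 70x70 area-IO array         */

        int    signals     = rows * rows;                    /* 4900 bumps   */
        double die_side_mm = rows * pad_pitch_um  / 1000.0;  /* 17.5 mm      */
        double pkg_side_mm = rows * ball_pitch_um / 1000.0;  /* 70.0 mm      */

        printf("signals: %d\n", signals);
        printf("bump field on the die: %.1f mm per side\n", die_side_mm);
        printf("same array at ball pitch: %.1f mm per side\n", pkg_side_mm);
        return 0;
    }

    The point of the numbers: fanning ~4900 signals out from a 250um pitch to a 1.0mm pitch quadruples the side length the array would need, which is why the substrate has to escape-route the inner rows on multiple layers (and part of why FCBGA packages cost what they do).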
  • Re:The Diamond Age (Score:5, Informative)

    by OneIsNotPrime ( 609963 ) on Wednesday August 20, 2003 @06:58PM (#6749467)
    The Slashdot article is here [slashdot.org] and the Wired article is here [wired.com].

    Since diamonds have a much higher thermal conductivity than silicon (i.e., they can take the heat), they'd make better chips if only they were more affordable. Industrial diamonds are expected to make the whole industry's prices fall drastically by increasing supply and breaking the De Beers cartel.

    More about the De Beers cartel:

    Page 1 [theatlantic.com] Page 2 [theatlantic.com] Page 3 [theatlantic.com]

    Everything2 link [everything2.com]

    Personally I think these are awesome feats of engineering, and a way to give your significant other a stone without feeling morally, and literally, bankrupt.

  • by iafrey ( 700092 ) <slashdot.e-frey@net> on Wednesday August 20, 2003 @07:24PM (#6749669)
    Well, according to this article on Wired [wired.com], the promise of molecular computing is far, far beyond Moore's Law. Not only in its processing power, but also in storage capacity, production, and speed of production. Biomolecular electronics will change everything within the next 20 years (hopefully). We can't even imagine or predict what will happen. I just hope that our current stupid IP laws do not hinder this. I wouldn't be surprised if some new SCO tries to stall these technologies, just in the name of making an undeserved profit.
  • Re:The Diamond Age (Score:1, Informative)

    by Anonymous Coward on Wednesday August 20, 2003 @08:21PM (#6750163)
    One of the 'problems', however, is that synthetically produced diamonds are actually too perfect. This somehow makes them end up glowing in the dark, and thus they can be distinguished from natural diamonds.

    On top of that, the established jewelry diamond houses etch their name/logo into the diamond at a microscopic level.

    So unless your girlfriend likes the glow-in-the-dark effect and/or doesn't care whether the diamond comes from an established house (and assuming the glow-in-the-dark effect hasn't, as some Russians have claimed, already been overcome), you'd still have to be the morally and financially bankrupt person to keep your girlfriend from having a fit*, since the monetary value of the diamond is the major factor (after aesthetics) in a diamond-fitted jewel.

    ( * http://ask.slashdot.org/article.pl?sid=02/08/13/2010256&mode=nested&tid=99 )

  • by Anonymous Coward on Thursday August 21, 2003 @09:01AM (#6753619)
    No, it's geometry, based on an assumption that you can decrease the size of components in a linear fashion over time. Double the number of components along the edge, and you quadruple the number of transistors on the chip. Hence the exponential scaling.
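
    That geometric argument is easy to make concrete. A minimal sketch in C (my own illustration; the ~0.7x linear shrink per process node is the commonly quoted figure, and the starting numbers are made up):

    #include <stdio.h>

    int main(void) {
        /* Each process node shrinks linear feature size by ~0.7, so the
           same die area holds ~1/(0.7*0.7) ~= 2x the transistors. */
        double feature_nm  = 250.0;  /* hypothetical starting node  */
        double transistors = 10e6;   /* hypothetical starting count */

        for (int gen = 0; gen <= 5; gen++) {
            printf("gen %d: %5.0f nm, ~%4.0f M transistors\n",
                   gen, feature_nm, transistors / 1e6);
            feature_nm  *= 0.7;        /* linear shrink              */
            transistors /= 0.7 * 0.7;  /* area scaling: ~2x per node */
        }
        return 0;
    }

    Two nodes halve the linear feature size and roughly quadruple the count, which is exactly the edge-doubling, transistor-quadrupling relationship described above.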
