DARPA Looks Beyond Moore's Law

ddtstudio writes "DARPA (the folks who brought you the Internet) is, according to eWeek, looking more than ten years down the road, to when, they say, chip makers will need totally new chip fabrication technologies. Quantum gates? Indium phosphide? Let's keep in mind that Moore's Law was more an observation than a predictive law of nature, even though people often treat it that way."
This discussion has been archived. No new comments can be posted.

  • by Keebler71 ( 520908 ) on Wednesday August 20, 2003 @04:47PM (#6748860) Journal
    First they want to get around privacy laws, now they want to break Moore's law...these guys know no bounds!
    • Exactly. Any new technology put out by these guys is quite likely to contain anti-privacy technology secretly embedded. My 486 running FreeBSD and lynx is still good enough for me.
    • I was waiting for the "breaking Moore's law" jokes; I didn't expect it to be the first post.
  • Why am I always being forced to upgrade, darnit?
  • Stacked chips (Score:4, Interesting)

    by temojen ( 678985 ) on Wednesday August 20, 2003 @04:48PM (#6748869) Journal
    perhaps stacked wafers with vertical interconnects might help... I'm not sure how you'd dissipate the heat, though.
    • Multilayer chips have been around a long time. Think it's up to 7 or 8 by now. This idea, which exists outside of time, has been discovered on earth independently of you. Neither you nor the earlier discoverer created it.
    • For a minute there I misread and thought your subject line was "Stacked chicks"! Then I realized you were just talking about some computer stuff. Dang!

      GMD

    • I saw this article [wired.com] about new diamond manufacturing techniques and it's an interesting read. Having diamond based processors looks like a viable technology in the near future and heat dissipation is one of the major reasons that they're considering diamond.

      I'm just worried about what my wife will say when the diamond in my machine is bigger than the one on her finger...
      -B
    • a) Chips are already "stacked". Layer over layer of silicon.

      b) If you are talking about stacking dice (that is, literally stacking chips inside the package), then the distance the information would have to travel when going through the "vertical interconnects" would be thousands or tens of thousands of times bigger than the distance of any on-chip interconnection. Which means the communication between layers of stacked chips would be thousands of times slower. Not very good for microprocessors.
      • by Anonymous Coward
        B-b-b-ut.. Hypertransport!

        (sorry, I just like the name)
      • by temojen ( 678985 ) on Wednesday August 20, 2003 @05:11PM (#6749093) Journal
        the distance the information would have to travel when going through the "vertical interconnects" would be thousands or tens of thousands of times bigger than the distance of any on-chip interconnection.

        But also thousands or hundreds of thousands of times smaller than going outside the package, which would make it ideal for multi-processors, array processors, or large local caches.

      • a) Chips are already "stacked". Layer over layer of silicon

        False, there is just one active layer of single crystalline silicon that contains the devices. The remaining layers are interconnects.

        b) If you are talking about stacking dice (that is, literally stacking chips inside the package), then the distance the information would have to travel when going through the "vertical interconnects" would be thousands or tens of thousands of times bigger than the distance of any on-chip interconnection.

        How, why? the lat

        • Right on a). Well, mostly -- IBM has a new process that does allow transistors in some area-IO to be placed over logic gate transistors. It's more trouble than it's worth, though (unavoidable interactions are hard to calculate accurately).

          And right on b) -- the distance between 2 dice stacked is much shorter than 2 side-by-side. But this is totally irrelevant, mostly due to previous posters :). See, it's not that it's further to go vertical from one die to the next, rather than packaging each individu
      • Some observations:

        1. My Athlon is about 1cm x 1cm (the chip part, not the package)
        2. Vacuum does not conduct well
        3. Semiconductor manufacturers have very precise fabrication methods

        Given that, I'm sure they could figure out a way to make the distance between any two points on two 1cm² wafers less than 0.5cm, say by making the interconnects gold studs a micron or so high all over the surface of the wafer, and aligning them face-to-face.

        • We're trying.

          But how do you get "micron high" little gold studs to stick to the die in exactly the right places? How do you make sure each gold stud is exactly the same height (can't have a short one anywhere, even by a femtometer)? Then, how do you physically/mechanically line them up exactly and keep them together perfectly for long periods of time under fairly wide ranges of vibration and temperature? How do you prevent the dice from warping if each stud isn't 100% identical (such as if you
      • Chips are already "stacked". Layer over layer of silicon.

        Not really. Modern CMOS design fabricates everything from a single silicon wafer, using a large number of photoresist layers to create different regions of silicon, and tosses several layers of metal interconnect on top. All the transistors are on one level, placing an upper limit on the number of gates within a given physical area. What would be exceedingly cool, though, is the ability to stack arbitrary layers of silicon on top, providing the capa

    • by Anonymous Coward

      perhaps stacked wafers with vertical interconnects might help... I'm not sure how you'd dissipate the heat, though.

      That's an easy one! Between each wafer, you place a delightful creme filling. The filling would be of such consistency that no matter how hot the wafers are allowed to get, the creme does not melt.

      I propose we call this new technology Overall Reduction of Exothermic Output, or OREO for short.


    • IBM thinks so (Score:5, Informative)

      by roystgnr ( 4015 ) <`gro.srengots' `ta' `yor'> on Wednesday August 20, 2003 @05:25PM (#6749195) Homepage
      They made an announcement about it [ibm.com] less than a year ago. They don't say if they'll be doing anything special about heat problems, though.
    • There was an article on slash a year ago about a guy who placed his mobo into a styrofoam cooler full of mineral oil. What about filling a heatpipe (à la Shuttle computers) with a non-conductive liquid and fitting the heat pipe in such a way that the chip is inside it?
    • perhaps stacked wafers with vertical interconnects might help... I'm not sure how you'd dissipate the heat, though.

      I've heard that this may be possible, somehow utilizing little channels through the inside of the chip that would carry liquid nitrogen. I think before fab technology approaches that point, however, we may have better technologies.
  • by BWJones ( 18351 ) on Wednesday August 20, 2003 @04:49PM (#6748872) Homepage Journal
    Moore's law, bah! Thinking about it, DARPA should get Steve Jobs on board to study his Reality Distortion Field. Think of the military aspects of.......oh, wait. We already have that.

    • It's called (Score:1, Funny)

      by Anonymous Coward
      It's called the Bush Method. It isn't as fast or elegant as a genuine Reality Distortion Field, but it gets the job done about as well most of the time and the great thing is, it's cheaper and anyone can do it.

      The Bush Method is so simple, it's amazing no one thought of it before 2000. All you have to do is take the thing about reality you want to distort, and state that it has changed, whether or not it has. The amazing thing is, if you say it enough times publicly, it actually becomes true.

      The Bush
  • by Thinkit3 ( 671998 ) * on Wednesday August 20, 2003 @04:50PM (#6748889)
    It's just a wild guess. It has absolutely nothing to do with physics, which gives the real laws we all live by. It has much more to do with human laws, such as patents and copyrights, that limit progress.
    • by kfg ( 145172 ) on Wednesday August 20, 2003 @04:59PM (#6748993)
      An educated observation, which is why it basically works.

      Please note that the observation was well enough educated that it includes the fact that its validity will be limited in time frame and that before it becomes completely obsolete the multiplying factor will change, as it already has a couple of times.

      In order to understand Moore's Law one must read his entire essay, not just have some vague idea of one portion of it.

      Just as being able to quote "E=mc^2" in no way implies you have the slightest understanding of the Special Theory of Relativity.

      KFG
    • It's just a wild guess. It has absolutely nothing to do with physics, which gives the real laws we all live by. It has much more to do with human laws, such as patents and copyrights, that limit progress.

      Though it's more than a "wild guess", you do have it right when you mention that it has no basis in physical reality.

      I don't think I'd blame IP so much as marketing, though. The major player in the field, Intel, holds most of the relevant IP.

      So why has Moore's Law worked for so long?

      Because Intel schedules
      • So true, and this same phenomenon has much to do with the problems in the markets, where making the numbers became more important than being honest about accounting. It's a confidence trick of enormous proportions, and we're watching it crumble at the expense of the US economy.
        Our markets have been totally manipulated by these made-up notions like Moore's Law and now that the game is up, people are acting shocked when the problem is obvious.
        I was very impressed with this article for putting the time
    • "Moore's Law" was always more useful in predicting where your business needed to be in 5 years than anything else. If three doublings of hardware would render the service you provide trival and cheaply performable by your customers . . .

      . . . then it'll soon be time to sell this company and start another.
    • It's not a physical law, an observation, or even a wild guess. This was Intel's Gordon Moore. It was a marketing plan.

    • Not a guess ! (Score:3, Insightful)

      It wasn't a guess, it was a statement of company policy.

      The manufacturers try to strike a balance between a high R&D investment (with rapid advances in technology) and keeping the technology in production long enough to generate a good return on that investment. Moore's Law represents the 'sweet spot' that manufacturers had settled on.

      While it's held quite well in recent decades, there's no guarantee it will continue to hold. If they hit a technological wall, or economic conditions cause a drop in inve
    • I think this is a product of the fact that, in spite of the best intentions of computer scientists and hardware engineers over the years, the massive commercialisation of the industry means that computing really lacks a scientific underpinning. Electronic engineering, on which all computing depends, of course, is applied physics, with all the laws and theories that implies. And computing brings in a branch of mathematics - information theory - which has its own laws and theorems (not theories, because it's m
  • by Kibo ( 256105 )
    Didn't some of the recent quantum gate breakthroughs come on the former heir apparent to silicon?
  • hardware has progressed dramatically over the past decade and left software somewhere behind... there isn't much use for faster and faster servers when software doesn't keep up the pace... this decade will be a "software decade"
    • hardware has progressed dramatically over the past decade and left software somewhere behind... there isn't much use for faster and faster servers when software doesn't keep up the pace... this decade will be a "software decade"

      Not really. The functionality offered by software has pretty much flatlined (with the major exception being "media", e.g. mp3, mpeg, divx, etc). HOWEVER, the bloat and overhead of software continues to keep pace with (and often surpasses) the speed of hardware. This trend has
      • Bloat is the consumption of additional computational resources WITHOUT a commensurate increase in utility.

        Those spinny flashy bits of eye candy in OS X make it significantly easier for me to use. Therefore OS X is not bloated. You may disagree, but you have the option to go command-line.

        Now, MS Office on the other hand...
    • Software has *pushed* that hardware development. The complexity of what we've been attempting to accomplish has skyrocketed, but in a "rapid-application-development-first-to-market" mentality that creates such utterly bloated programs that it now takes a high class system to do the same tasks that used to be in demand, but in a prettier way.

      I'd vote for more efficient software personally, but that's also because I'm a pack rat that can't let go of any of my old hardware.
  • Moore's law is of course set with the assumption of silicon being used as the underlying semiconductor technology. With other semiconductor tech and even alternatives to the whole concept of semiconductors emerging, it is bound to fail eventually.
  • The Diamond Age (Score:3, Informative)

    by wileycat ( 690131 ) on Wednesday August 20, 2003 @04:52PM (#6748915)
    I"m pretty excited about the new man-made diamonds that are supposed to be able to keep moore's law going for decades when they come out. Wired had an article recently and a post here on /. too
    • Re:The Diamond Age (Score:5, Informative)

      by OneIsNotPrime ( 609963 ) on Wednesday August 20, 2003 @05:58PM (#6749467)
      The Slashdot article is here [slashdot.org] and the Wired article is here [wired.com].

      Since diamonds have a much higher thermal conductivity (i.e., they can take the heat), they'd make better chips than silicon if only they were more affordable. Industrial diamonds are expected to make the whole industry's prices fall drastically by increasing supply and breaking the De Beers cartel.

      More about the De Beers cartel:

      Page 1 [theatlantic.com] Page 2 [theatlantic.com] Page 3 [theatlantic.com]

      Everything2 link [everything2.com]

      Personally I think these are awesome feats of engineering, and a way to give your significant other a stone without feeling morally, and literally, bankrupt.

  • What about Al Gore?
  • I don't know about you, but I'm starting to get fed up with this guy. His name has started to get on my nerves. That guy is everywhere. God damnit: it is not possible to have a decent discussion anymore without someone dragging in this guy and his so-called law.

    Therefore I propose: "Moore's Law 2: Anyone mentioning his name in a discussion about semiconductors, CPUs, or transistors has lost the discussion."

    • I think that would be "the Godwin-Moore law."
    • I have read too many off-base posts in this thread, and I can't help but post to at least one of them.

      ::rant

      Firstly, one really can't have a meaningful discussion of the semi-conductor industry without understanding Moore/Moore's "Law".

      Secondly, and I would have thought people would understand this by now: Moore's "Law" does not cover CPU speed!! It merely relates the density of transistors on silicon with respect to time. It does NOT attempt to take into account various computer architecture adv

  • What about diamonds? (Score:5, Interesting)

    by GreenCrackBaby ( 203293 ) on Wednesday August 20, 2003 @04:53PM (#6748931) Homepage
    This diamond article in Wired 'http://www.wired.com/wired/archive/11.09/diamond.html' seems to indicate that Moore's law is sustainable for much more than ten more years.

    Besides, I've been hearing about the death of Moore's Law for the last ten years.
    • Besides, I've been hearing about the death of Moore's Law for the last ten years.

      It's a popular filler topic for industry journalists who have nothing better to report about. They'll just point out that the latest processor from AMD/Intel/etc is "reaching the limits of current technology" and then progress ipso facto to "Moore's Law could be dead in a few years".

    • Actually, some people are predicting that Moore's law will fail, but not in the way that you think.

      The facts seem to show that we will have even faster development of processors than Moore's law states.

      The idea behind this is based on the rate of technological development in human history. If you were to graph the rate of technological advancement for recorded history you would see a long line of incremental yet minimal growth punctuated by the recent 100-150 years where the increase is almost geometric.
      • You've just described The Law of Accelerating Returns [kurzweilai.net]; it applies to the rate of technological growth in general, rather than just the specific case of Moore's transistor count observation.

        The funny thing (to me at least) is that very few people have fully digested the implications of exponential progress. They're in for a rude awakening over the next couple decades.


  • Human Brain Processors. Of course, we'll have to pick only the best, so no Slashdot editors.

    Example:
    10 GOTU 4o
    30 Re+URN; G0SUB 42
    40 Print "Welcom to Windoes!":PRINGT "JUS KIFFING! HAHAHA!"
    43 RUN
    50 REM Copyright SCO(TM)(R)(C) 2012, NOT! HAHAHAHA
    6o GOt0 14.3

    Hey, it's a joke! Relax - no angry human brains will be used either!

  • umm (Score:4, Insightful)

    by bperkins ( 12056 ) * on Wednesday August 20, 2003 @04:53PM (#6748940) Homepage Journal
    Let's keep in mind that Moore's Law was more an observation than a predictive law of nature, even though people often treat it that way.

    Let's not and say we did.*

    Seriously, I doubt that many people think that Moore's law is on an equal footing with, say, gravity and quantum mechanics. Still, an observation that has held more or less for nearly 40 years is worth considering as a very valuable guideline. Let's keep this in mind as well.

    (*Why do vacuous comments like this make it into slashdot stories?)
  • by L. VeGas ( 580015 ) on Wednesday August 20, 2003 @04:54PM (#6748943) Homepage Journal
    This idea of increasing processing speed is barking up the wrong tree and ultimately doomed to failure. We need to be focusing our attention on biochemistry and molecular biology. We already have drugs that slow your reaction time, thus making things appear to happen more quickly.

    See, if we get everybody to take xanax or zoloft, there's no limit to how fast computers will appear to be working.
    • > See, if we get everybody to take xanax or zoloft, there's no limit to how fast computers will appear to be working.

      Let's just kill everyone, then our computers will seem infinitely fast! Dude, if you're gonna dream, Dream Big!
  • just because of huge contract lead times, and this is just simple recognition of the fact. Any number of alternatives could pop up in the meanwhile (before anybody actually does anything), and that possibility needs to be accounted for.

    I bet that's what it really is, anyway.
  • . . . introduced by the PR whizzes behind Total Information Awareness name and logo [cafeshops.com], this new effort will be called either "SkyNet" or "Die Carbon Units," and feature a logo of a Borg drone ramming chips into the head of a howling toddler.

    Stefan "It's finally out!" [sjgames.com] Jones

  • Scientists are looking for alternatives to rats for experiments: If rats are experimented on they will develop cancer. --Morton's Law
  • by The Clockwork Troll ( 655321 ) on Wednesday August 20, 2003 @04:56PM (#6748965) Journal
    Every 18 months, someone will develop a new law to compute the rate at which the estimate of the rate at which the number of transistors on semiconductor chips will double will halve.
  • I thought that quantum computing was probably going to be viable within ten years, and would probably be far more advanced than any of the fabrication methods they listed in the article.


    Their web site [darpa.mil] talks a little bit about DARPA's quantum computing projects, but the page seems to be a little outdated. Anyone know if they're pursuing this as well?

    • A lot of posters seem to think that DARPA, the US military, or the US government is a unified thing. It's not. Each part often has its own agenda. Research is very frequently driven by those agendas.

      However, DARPA often CYAs when it comes to research too. If you come up with a wacky idea that might just work, they often will fund it even though it is in competition with another they have. The reason being that they can then see which wacky idea actually works. Often none do. Or one does. Or no

  • Murphy's laws are also more observation than prediction, but I think that technology changes will not have any effect on them.
  • I hate Moore's Law (Score:3, Insightful)

    by Liselle ( 684663 ) <slashdotNO@SPAMliselle.net> on Wednesday August 20, 2003 @04:58PM (#6748989) Journal
    Computer salesmen are using it like a club. You'd figure it would drive innovation; instead it drives CPU manufacturers to take advantage of consumer ignorance and do fairy magic with clock speeds. We should call it "Moore's Observation".
  • Parallel Computing (Score:2, Interesting)

    by lawpoop ( 604919 )
    When we absolutely cannot put any more transistors on a chip, we will start making computers that are massively parallel. In the future, you will have a desktop computer with 2, 4, 8, 16, etc. chips in it.

    All these other things they are talking about are vaporware. Parallel computing is here and in use now.

    • by Abcd1234 ( 188840 )
      Sounds like a nice idea for the desktop or for certain classes of research, but there will always be a place for massive computational capacity on a single chip since there is a large class of computing problems which are not easily parallelizable, and hence can not take advantage of parallel computing.

      Incidentally, there is also a limit to how fast your parallel computer will get... it's called the bus. If you can't build high-speed interconnects, or if your software isn't designed well (not as easy as it
    • by HiThere ( 15173 ) *
      Yes, but the current parallel computers have huge performance costs...they can easily spend over half their time coordinating themselves on many kinds of problems. Of course, on well structured problems there probably isn't a better solution possible.

      Two major answers occur to me:

      Answer one is that we figure out how to automatically decompose problems into independently solvable threads.. a quite difficult problem.

      Answer two is that we build special purpose parallel processors to handle parallelizable t
    • When we absolutely cannot put any more transistors on a chip, we will start making computers that are massively parallel. In the future, you will have a desktop computer with 2, 4, 8, 16, etc. chips in it.

      That's pointless. Why would I prefer 8 chips? Wouldn't it make sense to make a die that's 8 times as big? Then, at the same feature size (0.18 or whatever), you get the same number of total transistors in both systems, same area dedicated to CPU per rig, but less slow (ie, FSB) interconnect

  • by Junks Jerzey ( 54586 ) on Wednesday August 20, 2003 @05:10PM (#6749082)
    Moore's law is already ending. Intel's Prescott (i.e. Pentium 5) CPU dissipates 103 watts. That's beyond anything you can put in a laptop, and it's arguably beyond anything that should be in a workstation-class PC. But it also may not be that we're hitting CPU speed limits, just that we're hitting the limits of the types of processors that are being designed. Much of the reason the PowerPC line runs cooler than the x86 is because the instruction set and architecture are much cleaner. There's no dealing with calls to unaligned subroutines, no translation of CISC instructions to a series of RISC micro-ops, and so on. But there are the same fundamental issues: massive amounts of complexity dealing with out-of-order execution, register renaming, cache management, branch prediction, managing in-order writebacks of results, etc.

    Historically, designing CPUs for higher-level purposes, other than simply designing them to execute traditional assembly language, has been deemed a failure. This is because generic hardware advanced so quickly that the custom processors were outdated as soon as they were finished. Witness Wirth's Lilith, which was soon outperformed by an off-the-shelf 32-bit CPU from National Semiconductor (remember them?). The Lisp machine is a higher profile example.

    But now things are not so clear. Ericsson designed a processor to run their Erlang concurrent-functional programming language, a language they use to develop high-end, high-availability applications. The FPGA prototype was outperforming the highly-optimized emulator they had been using up to that point by a factor of 30. This was with the FPGA at a clock speed of ~20MHz, and the emulator running on an UltraSPARC at ~500MHz. And remember, this was with an FPGA prototype, one that didn't even include branch prediction. Power dissipation was on the order of a watt or two.

    Quite likely, we're going to start seeing more of this approach. Figure out what it is that you actually want to *do*, then design for that. Don't design for an overly general case. For example, 90% of desktop CPU use could get by without floating point math, especially if there were some key fixed point instructions in the integer unit. But every Pentium 4 and Athlon not only includes 80-bit floating point units, but massive FP vector processing units as well. (Not to mention outmoded MMX instructions that are almost completely ignored.)
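
    To give a rough idea of what "fixed point in the integer unit" could look like, here is a made-up 16.16 sketch (my own illustration, not how any shipping chip or library actually does it):

    #include <stdint.h>

    /* Hypothetical 16.16 fixed-point format: 16 integer bits, 16 fractional
       bits, handled entirely by the integer unit with no FPU involved. */
    typedef int32_t fixed;

    #define FIX_ONE (1 << 16)   /* represents 1.0 */

    static inline fixed fix_from_int(int x) { return (fixed)(x * FIX_ONE); }

    static inline fixed fix_mul(fixed a, fixed b)
    {
        /* widen to 64 bits so the intermediate product cannot overflow */
        return (fixed)(((int64_t)a * (int64_t)b) >> 16);
    }

    A software mixer or 2D rasterizer written this way never touches the FPU; whether that really covers 90% of desktop use is a separate argument.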
    • For example, 90% of desktop CPU use could get by without floating point math, especially if there were some key fixed point instructions in the integer unit. But every Pentium 4 and Athlon not only includes 80-bit floating point units, but massive FP vector processing units as well. (Not to mention outmoded MMX instructions that are almost completely ignored.)

      LOL! You MUST be trolling. Seriously! I'll bite anyway, though. How many people, with their computer:

      1) play audio/video
      2) edit audio/video
      3) he
    • Now that fast floating point hardware is standard on desktop CPUs, I take advantage of it whenever I can. Fixed point arithmetic is an error-prone kludge for CPUs without floating point hardware. I've waited decades for floating point hardware to become a standard feature of PCs. Take it away and I will have to break someone's legs.
    • by roystgnr ( 4015 ) <`gro.srengots' `ta' `yor'> on Wednesday August 20, 2003 @05:33PM (#6749258) Homepage
      For example, 90% of desktop CPU use could get by without floating point math

      Well, except for games.

      And anything that uses 3D.

      And audio/video playback and work.

      And image editing.

      And some spreadsheets.

      What's that leave, web surfing and word processing? No, even the web surfing is going to use the FPU as soon as you hit a Flash or Java applet.
      • by Elladan ( 17598 ) on Wednesday August 20, 2003 @06:01PM (#6749494)
        Well, except for games.
        And anything that uses 3D.

        Games and 3D make heavy use of FPU, but it's interesting to note that as time goes on, more and more of the heavy lifting FP work is being offloaded to the graphics processor.

        Given a few more generations, most of the FPU work in today's games may actually be executed in the GPU.

        Of course, this doesn't actually change anything, since tomorrow's games will just put that much more load on the CPU for physics processing and such!

        And audio/video playback and work.

        Video codecs are essentially all integer based. Audio codecs often use the FPU, but they really don't need to - fixed point implementations tend to be just as fast.

        And image editing.

        The vast bulk of image editing work tends to be integer-based, or easily convertible to integer-based.

        And some spreadsheets.

        Spreadsheet math calculations aren't really performance-related in any sense. 99.9% (remember, your statistics may be made up on the spot, but mine are based on sound scientific handwaving!) of the time a spreadsheet spends is in fiddling with the GUI, which is primarily an integer operation activity.

        That said, the parent poster's point sort of goes both ways. It's true that the FPU unit is heavily underutilized by most things outside of games, so it's not an unreasonable idea to strip it out and let the FPU be emulated in software or microcode or whatnot.

        However, that won't necessarily really help. Modern CPU cores are better able to manage their power routing than previous ones, so having an FPU on there doesn't necessarily cause any trouble. The CPU may be able to disconnect power to the FPU when it's not in use, thus making the whole thermal issue something of a moot point in this respect. If it doesn't generate heat, it's just a matter of wasted silicon - and silicon's becoming quite cheap!

        In fact, the FPU is an example of good design in CPU's, really. It's not too hard to fit a lot of computation units on one CPU core these days, hence having multiple ALU and FPU computation units being driven by complicated pipelining and SIMD engines. The difficulty is making efficient use of them - note the trouble getting SMP to work efficiently, and the whole idea of hyperthreading. While the FPU may get fairly low utilization, it is fantastically faster at floating point computation than the integer cores are, and putting a couple on a chip is thus generally a pretty good idea.

        • Of course, this doesn't actually change anything, since tomorrow's games will just put that much more load on the CPU for physics processing and such!

          Yes. And it should be a double-precision FPU. Trying to cram a physics engine onto the PS2, which has only 32-bit floating point, is not a happy experience.

  • The era of biological computing when I can just sneeze on my PC to double its RAM!
  • by pagley ( 225355 ) on Wednesday August 20, 2003 @05:25PM (#6749197)
    Thank Goodness someone has finally said something about it, even if it was just in passing. The bonus is that it is on the front page of Slashdot.

    "Moore's Law" is no more a "law" in the sense of physics (or anything else for that matter), than any other basic observation made by a scientist or physicist.

    Oddly, you'd have a hard time believing it wasn't a law of nature, given the apocalyptic cries from the technology industry when "Moore's Law" falls behind - spouting that something *has* to be done immediately for Moore's Law to continue, lest the nuclear reaction in the Sun cease. Or something.

    When it was coined by the *press* in 1965, only a small fraction of what we now know about the physics of integrated circuits and semiconductors was known. So, looking back, it's easy to see that the exponential trend in density would continue as long as the knowledge and ability to manipulate materials increased exponentially.

    Yes, it is rather surprising that Moore's observation has held true as long as it has. And this isn't to say that the growth trend won't continue, but it will certainly level off for periods while materials or manufacturing research comes up with some new knowledge to advance the industry.

    As the article indicates, things are likely headed for a plateau, possibly toward the end of this decade or start of the next. And at that point, Moore's observation will simply no longer be true or appropriate.

    Let the cries of armageddon begin as "Moore's Law" is finally recognized as an observation that will eventually be outlived.

    For a little "Moore" background, see http://www.intel.com/research/silicon/mooreslaw.ht m
  • DARPA (the folks who brought you the Internet)

    Shouldn't their acronym read FBI instead of DARPA, then?
  • Let's keep in mind that Moore's Law was more an observation than a predictive law of nature, even though people often treat it that way.

    Not entirely. The folks designing FooCorp's next generation of e.g. chip fabs generally use Moore's Law to tell them where the competition will be by the time the fab is built: FooCorp needs to be competitive at that point in the future. Then the folks designing e.g. PDAs use Moore's Law to tell them what processor power, memory capacity, etc will be available to them by th

  • Get rid of C! (Score:5, Interesting)

    by Temporal ( 96070 ) on Wednesday August 20, 2003 @05:58PM (#6749469) Journal
    Not many people know it, but one of the problems holding back processor technology today is the way programming languages are designed. Languages like C (or C++, Java, Perl, Python, Fortran, etc.) are inherently serial in nature. That is, they are composed of instructions which must be performed in sequence. However, the best way to improve the speed of processors is to increase parallelization; that is, make them do multiple things at once. And, no, threading isn't the answer -- threading is too large-scale, and can only usefully extend to 2-4 parallel processes before most software has trouble taking advantage of it.

    Think about this: Why is video graphics hardware so much faster than CPU's? You might say that it is because the video card is specifically designed for one task... however, these days, that isn't really true. Modern video cards allow you to write small -- but arbitrary -- programs which are run on every vertex or every pixel as they are being rendered. They aren't quite as flexible as the CPU, but they are getting close; the newest cards allow for branching and control flow, and they are only getting more flexible. So, why are they so much faster? There are a lot of reasons, but a big one is that they can do lots of things at the same time. The card can easily process many vertices or pixels in parallel.

    Now, getting back to C... A program in C is supposed to be executed in order. A good compiler can break that rule in some cases, but it is harder than you would think. Take this simple example:

    void increment(int* out, int* in, int count)
    {
        for (int i = 0; i < count; i++)
            out[i] = in[i] + 1;
    }

    This is just a piece of C code which takes a list of numbers and produces another list by adding one to each number.

    Now, even with current, mostly-serial CPU's, the fastest way to perform this loop is to process several numbers at once, so that the CPU can work on incrementing some of the numbers while it waits for the next ones to load from RAM. For highly-parallel CPU's (such as many currently in development), you would want, even more so, to work on several numbers simultaneously.

    Unfortunately, because of the way C is designed, the compiler cannot apply such optimizations! The problem is, the compiler does not know if the "out" list overlaps with the "in" list. If it does, then the compiler has to do the assignments one at a time to ensure proper execution. Imagine the following code that calls the function, for example:

    increment(myArray + 1, myArray, count);

    Of course, using the function in such a way would not be very useful, but the compiler has to allow for it. This problem is called "aliasing".

    ISO C99 provides for a "restrict" keyword which can help prevent this problem, but few people understand it, even fewer use it, and those who do use it usually don't use it everywhere (using it everywhere would be too much work). It's not a very good solution anyway -- more of a "hack" if you ask me.
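
    For what it's worth, here is roughly what the restrict version of the example above could look like (just a sketch assuming a C99 compiler; I haven't measured it):

    /* The restrict qualifiers are a promise from the programmer that "out"
       and "in" never overlap, which frees the compiler to load and increment
       several elements at once (e.g. with SIMD instructions). Calling it as
       increment(myArray + 1, myArray, count) would then be undefined behavior. */
    void increment(int* restrict out, const int* restrict in, int count)
    {
        for (int i = 0; i < count; i++)
            out[i] = in[i] + 1;
    }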

    Anyway, to sum it up, C generally requires the CPU to do things in sequence. As a result, CPU manufacturers are forced to make CPU's that do one thing at a time really, really fast, rather than lots of things at the same time. And, so, since it is so much harder to design a fast CPU, we end up with slower CPU's... and we hit the limits of "Moore's Law" far earlier than we should.

    In contrast, functional languages (such as ML, Haskell, Ocaml, and, to a lesser extent, LISP), due to the way they work, have no concept of "aliasing". And, despite what many experienced C programmers would expect, functional languages can be amazingly fast, despite being rather high-level. Functional languages are simply easier to optimize. Unfortunately, experienced C/C++/Java/whatever programmers tend to balk at functional languages at first, as learning them can be like learning to program all over again...

    So, yeah. I recommend you guy

    • The vast majority of my CPU's time is spent waiting for data to arrive, from memory, from the disk, from the keyboard and mouse, from the network (!), and from a wide variety of other sources, all orders of magnitude slower than my CPU.

      There's a damn good reason almost nobody cares about this, and the ones that do care already care, and that is that for the vast majority of what people do every day, none of that matters.

      You want to create Yet Another Functional Language? Hey, great, I'd hate to be the guy
    • Your post mixes up a bunch of things. For one thing, it implies that Python and Java allow pointer aliasing. They don't. Second, you include Lisp in your list of languages that are easy to optimize but give no indication why Lisp would be easier to optimize than (e.g.) Java. Neither allows pointer aliasing. Furthermore, Java compilers in general have more type information available to them at compile time and they do use that for optimization. Of course you can add type annotations to Lisp but if we're talk
    • A more important bottleneck to computing speed is machine architecture (the topology of the I/O, memory, cpu, etc.). It doesn't matter much if your code can be easily parallelized to 128 processors, if all 128 of them are blocked waiting for data on the same bus. 2-4 processors is probably the upper limit of the "shared memory bus architecture", not of the serial computation paradigm. Once we move to an architecture with more than one memory bus (like NUMA), then it would time to worry about the issues you
  • The true definition merely states - "The density of transistors on an IC will approximately double every 18 months". Many people seem to think that this implies a processing performance doubling, or a frequency doubling. It is nothing of the sort.

    The only direct effect is that the cost for a chip is halved every 18 months (assuming cost ~ die area). A side-effect is the fact that smaller transistors can be run at higher clocks than larger transistors, and/or dissipate less heat.
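
    As a back-of-the-envelope sketch of what that doubling rate implies (the starting figure below is an arbitrary example, not a real chip):

    #include <math.h>
    #include <stdio.h>

    /* Project transistor density forward, assuming it doubles every 18 months. */
    int main(void)
    {
        double transistors = 100e6;   /* made-up starting point: 100 million */
        for (int months = 0; months <= 72; months += 18)
            printf("after %2d months: about %.0f million transistors\n",
                   months, transistors * pow(2.0, months / 18.0) / 1e6);
        return 0;
    }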

    It is up to processor archit
  • It's still completely free using a link below the rest. here [divx.com].
  • Moore's Law is the observation that when people are allowed to freely interact without the burden of government, they produce at an exponential rate.

    We can apply it to other industries and see the same effects.

    For instance, before deregulation, long distance phone calls were expensive. Today, they are dropping in price, while QoS and coverage are expanding.

    The software industry, due to almost complete government non-interference, is able to take software to completely new levels every two to three yea
  • Oh great... now they will provide tiny little missiles to Iran.

    >:-(
  • Moore's law is based on a single technology - optical lithography on flat silicon. The limits of that can't be more than ten years away. Somewhere around 2012, gate sizes reach the atomic level.

    We may hit a wall before that. Power dissipation may limit device density before atomic sizes or fabrication technology does. In that case, memory devices, which use less power per unit area, will continue to be fabbed at smaller scales, while busier parts (CPUs, graphics engines) will not progress as much.

    Ther

"Marriage is low down, but you spend the rest of your life paying for it." -- Baskins

Working...