Intel Promises A Cool Billion (Transistors)

NevDull writes: "CNN is reporting that Intel has announced new semiconductor packaging which will lead to CPUs with a billion transistors running at 20GHz within 6 years. Yummy!" The advance here is removing the balls of solder between the chip's packaging and the microprocessor core, which leaves room for more transistors (or a thinner package). Like it says, though, this is years away from your pocket Cray.
  • I guess I'll have to replace my current system at least twice over the next six years to keep up...

    That's a lotta transistors!
    • I guess I'll have to replace my current system at least twice over the next six years to keep up...

      I don't know about you, but for me three years is a pretty normal turnaround time for going from new PC to obsolescence.
      • And now I have to step in and comment.

        Why? I really don't grasp all this obsolescence happening. The only ones that have some real reason for this to happen are the hardware makers and our dear friend Gatus of Borg.

        I still have some 486 machines running pretty well here (and pretty fast, so you can guess what OS I DON'T use). I have a really nice K6-II as my network server, with something like 70% idle CPU.

        I know it really gives a kick to have the ultimate, state-of-the-art machine to play with, but is it really needed?

        What is happening is not really obsolescence, as far as I'm concerned. It's only market economy, and companies trying to sell new models based on ego, not on need.

        I think it's time for us to review all our concepts of computer obsolescence and requirements.
  • Weird prediction... (Score:1, Interesting)

    by Hagabard ( 461385 )
    Why would any CPU manufacturer attempt to predict processor design & clock speed six years into the future? It will be 2007 before this statement can be tested for validity at which point processor design could have changed drastically. Perhaps I should phone Cleo and ask her what the bus speed of my motherboard will be in 2010?

    Hagabard
    • Because right now they're creating the core technologies that will be used for future processors. They know (from past experience, I would assume) that it's going to take them 6 years to get this chip into production and out to the masses.
      • here's an experiment (Score:1, Interesting)

        by Anonymous Coward
        Go back to 1995. Ask Intel when Merced (now Itanium) would be out. Now ask them how fast it would be in late 2001. Now understand that what they say about 6 years in the future isn't worth a flying fuck.
        • At Intel, the trend seems to be for new-generation processors to be released 2-3 years later than predicted, while still benefiting from Moore's Law in the meantime.

          For example, Merced was originally projected for release in 1998, at clock speeds around 300 MHz. (Source: Usenet postings from early 1995.)

          My guess is that it will be 2010 before we see the gigatransistor chip this article is talking about, and that it will be at least somewhat faster than 20 GHz when it does appear.
    • Even if the design for the P4 was set in stone 6 years ago, there have been changes implemented that make this nothing more than a press release. I agree with the poster below that they took Moore's Law and threw in a margin of error.
    • Why would any CPU manufacturer attempt to predict processor design & clock speed six years into the future? It will be 2007 before this statement can be tested for validity at which point processor design could have changed drastically. Perhaps I should phone Cleo and ask her what the bus speed of my motherboard will be in 2010?

      Well...let's take a look at this as it compares to Moore's Law, which says, essentially, that the top speed of microprocessors will double every 18 months.

      6 years = 6 * 12 = 72 months

      72 months / 18 months = 4 doublings

      Therefore, CPUs should be 2^4 = 16 times faster in 6 years. This means you'd see an Intel chip running at 32GHz, and an AMD chip running at 24.5GHz (but called the "Athlon 30K", of course, and benching faster than the Intel chip AND providing enough heat to warm a small city)

      Sounds like these predictions are a little lower than we'd like to see...
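The parent's back-of-the-envelope projection can be sketched in a few lines of Python (the 18-month doubling period and the 2 GHz starting clock are the comment's own assumptions, not Intel figures):

```python
def doublings(months, period=18):
    # Number of Moore's-law-style doublings in a span of months
    return months // period

def projected_clock(current_ghz, months):
    # Compound the doublings onto today's clock speed
    return current_ghz * 2 ** doublings(months)

# 6 years = 72 months -> 72 / 18 = 4 doublings -> 2^4 = 16x
intel_projection = projected_clock(2.0, 72)  # 2 GHz today -> 32 GHz
```

By this naive scaling, 20 GHz in 6 years would indeed land below the trend line the comment describes.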
      • Actually, Moore's law isn't about clock speeds, it's about the number of transistors.

        42million x 16 (four doublings) = 672 million.

        They're planning on slightly outpacing Moore's law, not lagging behind it.
      • Well...let's take a look at this as it compares to Moore's Law, which says, essentially, that the top speed of microprocessors will double every 18 months.

        Actually, what Moore's Law essentially says is that the number of transistors on a chip will double every 18 months. The speed somewhat follows, but we have seen that simple scaling of transistor size is not sufficient to increase the speed linearly.

        Take AMD for example. AMD stays with basically the same microarchitecture as when they first crossed the 1 GHz boundary, over 18 months ago. What are they at, 1400 MHz? That's a 40% increase in the past ~18 mos. Hmm...

        Then you look at Intel. Intel practically abandoned the P3 to work on the P4, knowing the P3 was a dead end due to critical paths when scaling up the speed. The reason is that there are some parts of the microarchitecture that simply don't scale linearly with the rest of the process, primarily the memory system. Intel realized that the GHz race will guarantee market share, and has effectively succeeded in maintaining "Moore's Law" in the speed realm by scaling from 1 GHz to 2 GHz in the same 18 mos. Sure, but it requires a reimplementation to do it.

        If you scale these rates over 6 years, Intel has, yes the 2^4=16x increase you are predicting. AMD on the other hand has but a 1.4^4=3.8x improvement over the next 6 years. End result, Intel would have the 32GHz machine, and AMD would have the 1.4GHz*3.8 = 5.32 GHz Athlon that they call the Athlon 30K which actually performs as well as a 7 GHz P4, (yet still heats the small city.)

        This really sounds bad for AMD, not to mention their incredibly-shrinking market share.
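The two growth rates in the parent comment compound like this (a rough sketch; the 2.0x and 1.4x per-18-month rates are the commenter's estimates, not measured figures):

```python
def scale_factor(rate_per_period, years, period_months=18):
    # Compound a per-18-month improvement rate over a span of years
    periods = years * 12 / period_months
    return rate_per_period ** periods

intel_factor = scale_factor(2.0, 6)  # doubling every 18 months -> 16x
amd_factor = scale_factor(1.4, 6)    # 40% every 18 months -> ~3.84x
```

A small difference in the per-period rate compounds into a huge gap over four periods, which is the whole point of the comparison.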

  • And as usual, Motorola and IBM will develop this technology first and promise chips as fast as 25 GHz. But in reality, the first IBM/Moto chips will come at 12 GHz at about the time that Intel releases their 20 GHz chips.

    Apple will introduce the chips in its new iHyperMac, which is the size of a quarter with a holographic display, but they will be running downclocked to 10 GHz for marketing reasons.

    • In other news, AMD releases a 16 GHz chip (although they call it the "AMD 16,000" for marketing reasons). It vastly outperforms the Intel 20 GHz chip and costs a third of the price, but the liquid-nitrogen-cooled case is just too damn tricky to build a window kit into, so nobody buys them.

  • At what cost? (Score:1, Insightful)

    Are they going to continue stretching the pipeline to get these "improved clock speeds"? I personally don't care if it runs at 20 GHz. I want performance.
    • IIRC, faster speeds can be achieved by smaller packaging, since data has shorter distances to travel.

      And if we can fit a billion transistors on a processor by then, think of the possibilities for other components like the motherboard and such.
    • It's a trade off. Intel has 2GHz P4's. Apple has 1GHz G4's. I don't think there's much question that MHz-for-MHz the G4 is a better processor (with some REALLY nice features like Altivec and power saving). But when Intel's P4 is over 1GHz faster, that ceases to matter quite so much.

      Scott
  • by ksw2 ( 520093 ) <[moc.liamg] [ta] [retaeyebo]> on Monday October 08, 2001 @11:21AM (#2401940) Homepage
    Intel already removed the balls from a processor... it was called "Celeron".
  • Like it says, though, this is years away from your pocket Cray.

    Agreed, but this would not be the case if we built computers for one specific purpose - which is exactly what most Crays do. The cheap and abundant processors today have a redundant instruction set with a lot of flaws, and are not made for any specific app.

    If we had RISC processors made for very specific purposes, I'm sure we'd be able to walk around with a Cray in our pocket :-)
    • I'm sure we'd be able to walk around with a Cray in our pocket :-)


      ...or are you just happy to see me?

    • "I'm sure we'd be able to walk around with a Cray in our pocket :-)"

      But if Intel's current crop is any indication, your bits would burst into flames unless you had Freon cooled undershorts.

      • ...your bits would burst into flames...

        No way! Because according to Intel, 1 bit is approximately 0.999999999999 bits which is a mathematical impossibility! :-P

    • With general-purpose processors becoming blindingly fast, there will be even less need for ASICs - over time, what can't be solved with repackaged commodity circuitry can simply be farmed off to the CPU or done in software.

      This will be key in driving down the cost of computing, as custom logic will always be more expensive than commodity logic.

      While I would expect these developments to also obviously drive down the cost of developing custom logic, volume production will always make commodity logic more cost effective.

  • by Anonymous Coward
    If I tried to put that thing in my pocket it'd burn a hole through my leg!
  • by BluePenguin ( 521713 ) on Monday October 08, 2001 @11:23AM (#2401949) Homepage
    to create an OS so bloated that you need a 20 GHz chip to run it. ::Sigh::
  • We have pocket Crays already, in a manner of speaking. How fast was a Cray in the middle/late 80s?
    • Re:Pocket Cray (Score:2, Informative)

      Right! The Cray-1, introduced in 1976, had 200,000 freon-cooled ICs and could perform 100 million floating point operations per second (100 MFLOPS). So, um, I don't think my Visor can do that, but is anything close?

      At the least, we have *Crays* on our desks...

    • We have pocket Crays already, in a manner of speaking.

      This is true. The original Crays were roughly cylindrical with a bigger portion at the base. About half of the people out there are already sporting pocket-sized hardware of a similar nature.

  • sounds like fun (Score:4, Interesting)

    by Lxy ( 80823 ) on Monday October 08, 2001 @11:24AM (#2401954) Journal
    Except that clock speed is becoming a useless benchmark. At what point do we realize that Intel's 20 GHz machine and AMD's 12 GHz machine have an unnoticeable speed difference? If they were talking about a pocket Cray as suggested, yes, I guess there is a use for it. They're not talking about supercomputing, they're talking about Pentium 4's!!! At 20 GHz you'd have to slow the thing down to play Diablo!!
  • Does this mean it will run cooler?
  • With that many transistors running at X GHz, will Intel be providing a fusion plant to run this thing? With some small duct works, you could even use it to heat your house!

    Seriously, though, who knows what other kind of breakthroughs are going to be made that may obsolete this? There are advances being made in optical and even quantum computing all the time. Someone is even working on a biological hard drive using DNA strands!

    My 1/50 $ (US)
    • You won't need a separate power supply to run it. In fact, the less metal, the less voltage needed, which means less power... but it may need more amps, which means that if the amps needed to run it are high enough to offset the voltage savings, it may need more power to run.
    • Or maybe just a way of converting the heat back into power for, say, the turbine that cools it, or even recycling the power by sticking thermocouples all over the heatsink and plugging it back into the power supply.
      Doing this might also prevent the otherwise inevitable brownouts caused by more than one person on your block running the Px 20 GHz proc. (or just blowing circuit breakers by running a vacuum in the house).
  • by kzanol ( 23904 ) on Monday October 08, 2001 @11:31AM (#2401979)
    A Cool Billion

    If only it were so - but looking back on the development of recent new CPU generations, I'd bet it's going to be a HOT billion...

    Requirements for cooling new CPUs are becoming ever more demanding; the CPU alone can burn in excess of 50W in existing CPUs.

    So, for my own requirements I'm more interested in getting an (energy) efficient system that can run with as few fans and as little noise as possible - it's practically impossible nowadays to get a box where CPU power is NOT sufficient for even the most demanding tasks. The downside is that most modern boxes seem to be best suited for running flight simulators - at least they sound like jet engines.

    Also, if you're working in an office with a lot of computers, the heat output of computers and monitors can be VERY noticeable, esp. in summer. (No, there's no air conditioning in my office.)

    Hopefully the new technology will not only be used to create overpowered energy hogs but will also find its way into (mobile?) processors - the same CPU core as existing CPUs, but with a smaller layout, lower core voltage, and correspondingly much cooler/quieter operation.

    • The same technology that makes chips faster also makes them run cooler. The only reason the latest T-bird needs that massive heatsink is because its performance is pushed to the limit, and that takes a lot of juice. Back the clock on it down to ~400 MHz, and your heat output would be significantly lower than a PII running at the same clock speed.

      These new processors require less power than older processors do when doing the same amount of work. However, the performance ceiling for the newer procs is much, much higher.
    • Eventually the masses will interact with computers by speech and video. Text and keyboards will be secondary. Current computers can't quite do this yet, but how much of that is software versus hardware?
    • Actually, it will be a challenge to do something with all these transistors. The problem is that as the number of transistors grows, the number of pins to the outside world can only grow by sqrt(# of transistors). So you can do more inside, but you don't have enough I/O. That is the current problem in computer architecture. An issue [wisc.edu] of IEEE Computer was dedicated to that. You need special access to see the articles at IEEE, but you could do a Google search with the title and the paper might pop up.
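The sqrt(# of transistors) point can be illustrated numerically (the square-root relation is the commenter's model, not a hard law, and the 42-million count is the P4 figure mentioned elsewhere in this thread):

```python
import math

def pin_growth(new_transistors, old_transistors):
    # If pin count scales with the square root of transistor count,
    # I/O capacity grows far more slowly than on-die logic does
    return math.sqrt(new_transistors / old_transistors)

# Going from 42 million to 1 billion transistors (~24x more logic)
growth = pin_growth(1e9, 42e6)  # only about 4.9x more pins
```

So a 24x jump in logic buys you under 5x the I/O, which is exactly the bottleneck the comment describes.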
      • just the cpu can burn in excess of 50W in existing cpus

      An Athlon [amd.com] needs 76W and runs at up to 95 degrees C die temperature. Ouchie!

      Funnily enough, in some areas, it's illegal to put an incandescent lightbulb of that power in a confined area, e.g. the closet under the stairs where I run my (pleasantly warm) P133 firewall. I don't know of any such restrictions for computers.

    • In a few years it will take more energy to calculate what happens to a new type of car when it crashes (using a mathematical model in a computer) than to build a car and actually crash it...
    • >Do we actually NEED this much CPU power?


      No. In fact, 640K ought to be enough for anyone!

    • The downside is that most modern boxes seem to be best suited for running flight simulators - at least they sound like jet engines.

      You just wrote my new mail sig - thanks! :-)

    • The downside is that most modern boxes seem to be best suited for running flight simulators - at least they sound like jet engines.
      I absolutely agree with you on two basic points:
      • raw processing power is way oversold
      • machines are too damn noisy
      I wonder, though if the CPU is the main culprit. A small, 50 watt gizmo doesn't generate that much heat. It's true that CPUs often have heat dissipation problems, but only because so much heat is generated in such a small space.

      On the other hand, we still use the basic IBM layout for PCs, where a huge transformer is mounted inside the box. That so-noisy fan is there primarily to cool the transformer. Even with hotter and hotter CPUs, the cooling needs of the rest of the computer have actually decreased over the years, because systems use fewer and fewer chips.

      This design was obsolete two decades ago, when it was first introduced. Manufacturers at that time were moving to external power supplies, which can dissipate heat through radiation. Unfortunately, any computer not profoundly compatible with IBM's original quick-and-dirty design is now commercially nonviable.

  • Pocket cray (Score:4, Funny)

    by smaughster ( 227985 ) on Monday October 08, 2001 @11:32AM (#2401982)
    A Cray in your pocket? It had better have some good cooling, or you'd get some nice pick-up lines.

    Him: Hey, baby, you make me feel *hot*.
    Her: Just take that cray out of your pants, geek boy.
  • Ok 20 gigahertz sounds like (well is) a lot of speed, but I refuse to believe that this won't just be wasted in the end by bloated code.

    I remember thinking when I owned a 486 that these new Pentium processors were going to have my box starting up in under 5 seconds and running my chosen apps soon after. WRONG. New and "essential" features combined with bloated code meant that just wasn't going to happen. None of the systems I use will boot in under 20 seconds.

    OK, someone could run early-'90s software on today's machines, but just don't expect your files to work with anybody else's software (without tweaking, anyway).

    I could be wrong but I think history says I won't be.

    • by Ars-Fartsica ( 166957 ) on Monday October 08, 2001 @12:10PM (#2402154)
      Your computer spends most of its time just waiting for you to do something. If you have purchased a PC in the last six months (Athlon or P4) you certainly have far more CPU capacity than your I/O merits.

      As for "code bloat" - deal with it, you are getting something back. Look at the memory consumption for KDE2 vs. blackbox. Sure, you are using ten times the memory, but in return you are getting a great deal of functionality. Your computer is there to be used, not preserved. Why not fill up that RAM? Why not saturate that CPU?

  • This just killed my high from ordering a 1.4Gig Athlon with a gig of RAM :-( I was hoping to be a stud at least for a day...
  • Great! (Score:2, Funny)

    by metlin ( 258108 )
    Now we will have a 20GHz processor which will tell us that 4+4 is 7.9999999999 approximately :-D
  • by ldopa1 ( 465624 )
    Well, it would be a Pocket SGI, wouldn't it? Palm OS with 1024 bit math! I can just imagine the sales pitch:

    "How many times have you been sitting on the bus, in need of some quick supercomputing? You're 20 minutes from the office and you just need to solve Planck's Theorem RIGHT NOW. That's why you need the Palm 2.5e+12!"
  • by Anonymous Coward on Monday October 08, 2001 @11:49AM (#2402064)
    I have to commend Intel for trying to tackle a problem that is daunting at best. But there are enough problems with existing IC packages that need to be taken care of between now and then. These include:

    1. High-speed signal isolation - two wires switching at enormous speeds like 10GHz are going to have effects on other signals in the package. There's enough trouble with this on high-speed multi-gigabit-per-second interfaces and even Rambus' crap TODAY. With signals packed in so close, how are they going to manage this tomorrow when the current memory bus is already at 3.2Gb/s? At 10GHz+, how hungry will the processor be for memory bandwidth? It's a fight between lower-speed highly-parallel signaling for density and higher-speed low-density serial signaling for signal integrity. A smaller package isn't going to help this. A larger package, even with fewer layers, will only aggravate signal coupling.

    2. Power delivery and consumption - on some packages, up to 30% of the total connections are for I/O and core power delivery. Making these smaller as Intel proposes will not help matters, considering that switching at 10GHz is going to make power consumption skyrocket. How do they expect to get the power to the chip? People have enough problems today trying to bump their processor voltages up when they attempt to overclock. This is only going to get exponentially more difficult.

    3. Die attach and reliability - I know they want to have solderless connections to the package. This is good - currently, alpha particles from solder will occasionally cause false switching in memory elements. But with lots of heat cycles from powering up and down, and questionable assembly yields (usually tolerant of less than 0.5% loss from raw die to package), reliability is a real question. We take for granted today that the die will stay attached to the package. How they will get the reliability to that point is beyond me, even if they've made a "major" stride. How do they account for field failures or age-related failures in a test lab?

    4. Substrate material - the package material itself is critical to thermal matching on the board as well as to signal integrity inside the package. At the speeds they propose will the current substrates be sufficient for reducing signal coupling? As usual, material science is again lagging behind the rest, and we need far more research into exotic materials to be able to get fast packages going.

    So, to me, there are going to have to be larger packages with advanced cooling. I'm not going to get too excited. I certainly don't think that Intel will be able to take this course alone. What I foresee happening is that new committees will be set up specifically for packaging, as there are for IC process technology today. It's too capital- and research-intensive to get away from having to use committees.
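The power-delivery worry in point 2 follows from the standard CMOS dynamic-power relation P = a·C·V²·f. A rough sketch with purely illustrative numbers (the activity factor, switched capacitance, and voltages below are made up for scale, not Intel data):

```python
def dynamic_power(activity, switched_capacitance, volts, freq_hz):
    # Classic CMOS dynamic power: P = a * C * V^2 * f
    return activity * switched_capacitance * volts ** 2 * freq_hz

# A ~2 GHz part at 1.5 V core (illustrative numbers)
p_today = dynamic_power(0.2, 50e-9, 1.5, 2e9)    # ~45 W
# 10 GHz even at a lower 1.0 V core more than doubles the power
p_future = dynamic_power(0.2, 50e-9, 1.0, 10e9)  # ~100 W
```

Even aggressive voltage scaling only partly offsets a 5x frequency jump, which is why getting power onto the die (those 30% of connections) gets harder, not easier.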
    • This is a *GREAT* comment. Please mod this up, it is worth it even though it was posted AC. It's a lot better than the standard, "Hey, look how fast I can run Diablo II now!" comments.

      Anyway, by committees do you mean standards organizations similar to IEEE groups? I completely agree with that point; it would really help to get the research moving along. Unfortunately, I think many of the IC manufacturers are too worried about squeezing every last cent out of their current technologies before they put the newer technologies on the market. Really, there is no rush to market new technologies as long as they are still making money and people are happy with the current products. That is what often causes the technology to stay behind closed doors for longer. A standards committee could help get things to market more quickly.
  • With a billion+ transistors, what will the heat resistances be, and what sort of cooling will be required? Can anyone throw out a few numbers? I suppose we shall move back to the boiling N2 of Cray fame...
  • I think it is great that processor technology is continuing to evolve and break through many technical limitations in the past few years. However, my larger concern is with the I/O bottlenecks that are becoming more and more of a problem now that chips are running faster and faster. When is the next great breakthrough in RAM technology going to come?

    Until we can start pumping 100+Gb/s to the processor, most of the power is wasted while it waits to fetch memory.
  • Huh? (Score:3, Interesting)

    by TheSHAD0W ( 258774 ) on Monday October 08, 2001 @11:58AM (#2402098) Homepage
    "Removing the balls of solder between the chip's packaging and the microprocessor core"...

    Well, sure, that'd give you lots more room for transistors... It'd also give you a lot more room for defects to creep in. This is functionally no different from expanding the die size to the point where the CPU size is the same. While it might be less expensive than cutting fewer chips per wafer, it does nothing to increase the reliability of the process.

    I think this is more of a pricing advance, and you'll see this lowering the cost of existing processor layouts, since you can decrease the die size without affecting the CPU design. But CPU size increases will still result in lower yield.
    • Re:Huh? (Score:2, Informative)

      by Zathrus ( 232140 )
      Kind of. If one of your spacing limitations is due to I/O, and the limitation on the I/O is due to the necessity of placing huge (relatively) gobs of solder between the output lines and the package pins, then removing the solder may allow you to space I/O lines closer together, giving you more die space for logic.

      But, yes, merely removing the solder doesn't change anything as far as the photolithography, deposition, or etching steps are concerned, and photo will still be one of the primary limitations in feature size (which then dictates just how many transistors you can pack into a square centimeter).

      Intel is merely expecting some reduced power consumption (and thus heat production), and that this is a "step in our march toward making processors with 1 billion transistors", not that this will itself allow such.
  • Pentium 5 (Score:4, Funny)

    by PMan88 ( 467902 ) on Monday October 08, 2001 @11:59AM (#2402103) Homepage
    Imagine.

    Just today, Intel announced the release of the Pentium 5 processor. The new processor runs at 50 GHz. It features a 300-stage pipeline. It will take 2 minutes for each instruction to complete on average. But to optimize, programmers can send 1800 instructions at a time, as long as they have no dependence on each other at all.
  • Before I upgrade my new laptop. Wow, just imagine how fast the games will be at that speed. It'll damn sure make for some fun LAN parties. No more having to worry about lugging a desktop around either! Hopefully by that time Gig fiber to the house won't be just a dream anymore either!
  • A billion somethings, 20 giga-whatevers... Someday... What does this new package do NOW? How much faster will it move heat off the die? How much did they drop the capacitance getting off to the fiberglass? Is there any REAL technical advantage, or is this just another attempt to shake AMD off their tail by requiring more proprietary hardware? Just wondering. ZH
  • by rice_burners_suck ( 243660 ) on Monday October 08, 2001 @12:06PM (#2402132)

    Intel today announced its new 1024-bit (1 kilobit) microprocessor architecture technology. Named the Shiitakeum, Intel's new processor core boasts powerful new technologies which will enable content providers to deliver compelling enterprise solutions.

    The Shiitakeum incorporates the following new features:

    * SingleAtom technology squeezes one thousand transistors into a single atom.

    * The processing pipeline has been broken down into 299,792,458 discrete steps, enabling Intel to remove the internal clock altogether and run the processor at the speed of light. One "cycle" represents the absolute cosmic measure unit of time, and all operations occur in one cycle. (Compete with that, AMD! Bwahaahahahaha!!)

    * 24,856 new instructions have been added since the previous model, bringing the new total to over 72 trillion instructions. The entire UNIX operating system can be programmed in one instruction!

    * RAM has been deprecated. 4 terabytes of internal general-use registers allow software to make more efficient data access, providing a more compelling Internet experience.

    * Intel (r) AnswerNow (tm) Technology bends the space-time continuum, allowing the results of branch instructions and mathematical operations to be used before they are computed. The computations take place during idle cycles at some future time.

    * Intel (r) CodeSpirit (tm) Technology processes machine code by its spirit, rather than its letter, completely eliminating software bugs and preventing malicious code, such as a virus, from executing.

    * Intel (r) AlienCode (tm) Technology, based on CodeSpirit, allows users to execute programs written for any other processor, without previous knowledge of that processor's instruction set. The technology examines and "deciphers" the instructions and data in much the same way that scientists decipher written languages used by past civilizations. Via AnswerNow and CodeSpirit technologies, programs written for other processors actually run faster and better on Intel platforms than on their native processor. As a side effect, the processor now directly executes programs and scripts written in Java or any P-code or text-based language. In fact, even instructions spelled out in English are understood and executed by the processor.

    * Intel (r) BrainWaves (tm) Technology allows the processor to read and write information in the user's mind. The processor is given away for free, and based on the user's thoughts, targeted advertisements are inserted directly into the user's mind. The process is painless, and simply feels like a song stuck in your head. A nominal (i.e., expensive) fee can be paid daily to eliminate the advertisements.

    The Intel Shiitakeum Processor. Mushrooms Inside.

  • Too bad IBM has promised 100 GHz within a year or two. Seriously.
  • by morcheeba ( 260908 ) on Monday October 08, 2001 @12:12PM (#2402162) Journal
    Intel has more info on this (both pdf's):

    This backgrounder (4 pages, 17kb) [intel.com] has a basic diagram showing the change.
    This briefing (18 pages, 2466kb) [intel.com] is a presentation, but actually has some nice detail. It has some photographs of the devices, better diagrams, and a picture of a naked man in the shower (really!).

    I'll summarize:
    PGA packaging (as used in many big processors) is basically a ceramic or fiberglass carrier board with pins on one side, wires in the middle (like a small PC board), and some method to directly attach to the chip. The chip is usually connected to the board with small solder balls, like BGAs, but on a smaller scale. The balls provide some flexibility and loose tolerances, but since they are bigger than the wires they connect, they require a fairly large pad on the chip. This technology is a way to eliminate these balls, allowing for smaller pads, freeing up more area on the die.

    But you should check out the pictures -- they describe it better than I do.
  • IIRC Moore's law refers to the number of transistors on a chip. And we already know that processing power does not exactly increase with GHz. Any figures on the real speed of that beast?

    On the other hand, I'm more interested in reducing power consumption. My laptop hogs at most 30W, modern desktops may use ten times that. I'm sick of hearing of California's power outages and the like, when the technology for power saving is already there.

  • Who cares! As expensive as Intel is over AMD now, that chip will probably be around the cost of a new car or something..

  • >The advance here is removing the balls of solder between the chip's packaging and the microprocessor core

    Was I the only one who twitched when reading this?

    oh man... I am a geek..... help!! :)


  • I really feel I need 20 GHz. Anything that shaves even a few minutes off my day is very welcome. Considering just the work I do now, a 20 GHz processor might make my day 10% shorter.

    If I had that speed I would do a lot of video processing. I also hope there would be good voice recognition. Long waits for compiling would disappear.
      • Long waits for compiling would disappear.

      Maybe it's a perception thing, but I feel like my compile times stay constant no matter how much I upgrade my machine. Perhaps it's memory bandwidth or hard drive access, or perhaps it just that I've moved from ASM to C to C++ to (god help me) C#...

  • Looking at the article, I noticed a couple things worth follow up questions/thoughts:
    1) "Because the distance the data must travel is shorter, the new packaging helps boost the overall speed and performance of the chip," and

    2) "Intel calls the new packaging technology, for which it has already secured a number of patents, bumpless build-up layer, or BBUL, packaging."

    My questions are mainly related to the first item, as follows:
    1. Has the signal distance reduction (fewer layers) been cut sufficiently to allow the 10X increase in speed?
    2. Is the density of transistors currently limited by the layers, and finally
    3. (sort of a cross between the first two questions) Assuming that the 10X increase is possible, doesn't it require that the same kind of technology be used for all of the remaining high speed chips?
    The observation is related to the second item. For the sake of discussion, let us assume that the "bumpless" technology is the absolute best state of the art for a while. Will the fact that Intel has patented the technology give them a de-facto monopoly on ultra high speed/high performance chips, and if so, is this really good news or not?
  • need a 500Mhz Athlon to run the cooling system?

    That, my friends would be the ultimate irony.

    (because you know darn well Intel's lowest-end proc available will be the 19.3333 Celeron IV).

    Moose.

    (top 2 reasons to mod me up:
    2) I'm funny, insightful and informative damnit
    1) /.'s database will eventually lose every one of my +3 or higher posts, sooner or later when there is database cor*&^%@
  • I almost dread computers going that fast. Just another excuse to make bloated code. But that might not be a bad thing. Mozilla might even be considered "lean and mean". Seriously though, this isn't addressing the other computer problems. Oh, like say the BUS SPEED. RAM isn't really making leaps and bounds compared to speed either. And if hard drives only manage to improve at the rate they have been, virtual memory will be a gigantic disaster in terms of performance loss.

    The other good news is that AMD processors will keep your computer clean as they will combust small particles that enter your computer case. Although hardware people will be very displeased at not being able to look directly at the CPU unless the computer has been shut off for 15 minutes to allow for cooling.
  • The P4 came out in 2000 and has 42,000,000 transistors. Six years from now is seven years from 2000. 7 years/18 months = 4.667 doubling cycles according to Moore's law. 42e6 * 2^4.667 = 1.07e9---just over a billion.


    While this will be cool, it's not amazing. (Neither is the fact that that computer will come with about 10GB of RAM.)

    --Ben
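    Ben's arithmetic is easy to reproduce. A quick sketch in Python, using the same 42-million-transistor and 18-month figures from the comment (the 7-year horizon is his, counting from the P4's 2000 debut):

```python
# Moore's-law projection: transistor count doubles every 18 months.
p4_transistors = 42e6            # Pentium 4 transistor count, 2000
years_out = 7                    # 2000 -> 2007, six years from the 2001 article
doublings = years_out * 12 / 18  # ~4.667 doubling periods

projected = p4_transistors * 2 ** doublings
print(f"{projected / 1e9:.2f} billion transistors")  # ~1.07 billion
```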

    • You seem to think that Moore's law is somehow a fixed law of nature - as if, even if we just sit around doing nothing, processor speed will still double every 18 months. Obviously this isn't true - it takes a lot of research to keep the speed doubling. This is one such thing. Hopefully there'll be more.


      -henrik

  • At clock speeds approaching 20 GHz, it seems there would be not only manufacturing problems to overcome, but physics problems as well.
    Some quick math:

    20 Gigahertz = 20*10^9 Hz (1/seconds)

    For time per cycle, 1/(20*10^9) = 5*10^-11 seconds per cycle

    Speed of light = 3*10^8 meters/sec

    In 1 cycle, light will travel (3*10^8) * (5*10^-11) = 0.015 meters = 1.5 centimeters per cycle.

    Correct me if I'm wrong, but isn't this roughly approaching the size of the die? I know Intel's a big company, but I think someone might get a little upset if they try and break the laws of physics.
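    The back-of-the-envelope numbers in the comment above check out; a few lines of Python, using the same 20 GHz clock and speed-of-light figures:

```python
# How far light travels in one clock cycle at 20 GHz.
freq = 20e9    # clock frequency, Hz
c = 3e8        # speed of light, m/s

cycle_time = 1 / freq      # seconds per cycle, 5e-11 s (50 ps)
distance = c * cycle_time  # meters light travels per cycle, ~0.015 m

print(f"{distance * 100:.1f} cm per cycle")  # ~1.5 cm
```

    At 1.5 cm per cycle, a signal cannot even cross a large die and back in one clock, which is why high-clock designs lean on pipelining rather than single-cycle global signals.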
  • I guess the moderation system works after all.

  • "When the lights go down in the city" to all the people in California/Silicon valley.

    (This is not a troll, if you don't get it that says more about you than it does me. It's meant to be funny...if it does not amuse you, then don't mod it up or mod it down.
    "I was raised to believe there was *some* good in everybody" Pacha in 'The Emp. New Groove'.
    Unfortunately, recent /. moderators have proven me wrong...metamoderation rules, obvious trolls = agree, otherwise don't moderate or *disagree*)

    Moose

    PS. If you've read this far, don't you agree it's sad when you have to put disclaimers up just trying to make a __JOKE__?
    When moderators attack, tonight at 8pm.
  • by cnelzie ( 451984 ) on Monday October 08, 2001 @12:58PM (#2402362) Homepage
    If I recall correctly the original specs for the P4 stated a much larger cache and higher FPU. Then Intel found out that they would have to sell them for some insane price, like 1200 bucks, to make any kind of profit.

    So, what did they do?

    They clipped the FPU down to practically nothing, cut down the cache. Broke the JIT functionality and made the chip only able to really churn out specially optimized C code with any kind of speed.

    Sorry, but MANY companies still use and program in COBOL, FORTRAN and PASCAL. Before any of you claim those are "dead" languages, remember that these languages run programs that have been in use on mainframes, companies spent millions/billions on, for more than 20 years. COBOL recently had some WWW extensions started or discussed a year or two ago as well.

    I honestly have to question Intel's future processor roadmaps and production products when they show off things that are really too pricey for them to mass produce. It would be awesome if Intel had released the P4 with the original specs; I would have one right now. As it is, the chip just ramps up the megahertz, but doesn't really do all that much more.

    --
    . sig separator
    --
  • Before wishing for a pocket Cray, according to: http://www.dg.com/about/html/cray-1.html the Cray-1 was a 160 MFlop machine.

    I'm not sure how to equate that to X86 floating point, or even what the Cray-1 clock speed was, and I realize that it was a quarter century ago. But I think that modern garden-variety PCs are in or above the Cray-1 performance realm.
  • Why Intel? (Score:2, Insightful)

    by tangent3 ( 449222 )
    Why buy an Intel 20GHz CPU for $n when you can get an AMD 14GHz CPU for $(n/2) which beats the Intel 20GHz CPU in almost all benchmark tests?

    Just don't forget not to remove the heat sink.
  • #!/usr/bin/perl -w
    use strict;

    print "\n\nThe Magic Perl will entertain some queries now.\n\n";

    # The pool of possible answers; one is picked at random per question.
    my @q_ans = (
        "Yes.",
        "No.",
        "Maybe",
        "My sources say, \"Yes.\"",
        "My sources say, \"No.\"",
        "These are not the droids you're looking for, move along.",
        "You are not ready to hear the answer for that.",
        "11.",
        "The answer you seek is within you.",
        "Certainly.",
        "No way.",
        "nowonmai...",
        "Doh!",
        "How the Hell should I know?",
        "You must learn control.",
    );

    my $quit = 0;
    until ($quit) {
        print "What is your yes/no question for the Magic Perl? \n";
        my $ques = <STDIN>;
        chomp $ques;

        # rand(@q_ans) returns a fractional index below the array size;
        # the array subscript truncates it to an integer.
        my $ans = $q_ans[ rand @q_ans ];

        print qq(\nYou dared to ask "$ques":\nThe Magic Perl says, "$ans"\nThe Magic Perl has spoken.\n);

        print qq(\nDo you have another question for the Magic Perl? Type "y" to ask.\n);
        my $again = <STDIN>;
        chomp $again;
        unless ($again eq 'y') {
            print "The Magic Perl grows weary of your queries anyway! \n\n";
            $quit = 1;
        }
    }
  • Easy Bake Processors*!!

    Cook your favorite goodies, and process your RC5/SETI packets fast! Purchase the space heater for those cold nights in the dorm/bachelor pad. *Keep away from combustible material; do not touch processor, case, or desk. Intel Corporation is not responsible for injury or death.
  • The cool billion concept is cool, but it also points out that the processor paradigm is locked in for another 6 years.

    It is my hope that within 6 yrs there is a greater focus on the -way- the little "ones" and "zeros" are processed, not necessarily how much faster it is done based on current standards.
  • Moore's Law (Score:4, Insightful)

    by Faux_Pseudo ( 141152 ) <Faux.Pseudo@gmail.cFREEBSDom minus bsd> on Monday October 08, 2001 @05:10PM (#2403623)
    20GHz in 6 years? Sounds slow to me.

    Current speeds are at 2 GHz. 2 x 2 = 4 GHz in 18 months.

    4 x 2 = 8 GHz in 3 years. 8 x 2 = 16 GHz in 4.5 years, and then 16 x 2 = 32 GHz in 6 years. So why is Intel falling behind?
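    The poster's extrapolation can be sketched as a short Python loop, assuming (as the comment does) a clean doubling of clock speed every 18 months starting from 2 GHz:

```python
# Clock-speed doubling every 18 months, starting from 2 GHz today.
speed_ghz = 2.0
for months in range(18, 73, 18):  # 18, 36, 54, 72 months
    speed_ghz *= 2
    print(f"{months} months: {speed_ghz:g} GHz")
# The naive projection reaches 32 GHz at 6 years (72 months),
# well past the 20 GHz Intel is promising on the same timescale.
```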

On the eighth day, God created FORTRAN.
