Technology

A Well-Chilled 750GHz Feasible Within 5 Years

drkhong writes: "...at least if you've got a good cooling system. IEEE Spectrum has an article about the next generation of ICs. Using superconducting materials cooled down to 5K, a peak of 750GHz has already been reached. Just think about how far light goes within one clock cycle, and then tell me you aren't impressed." These low-temperature devices are made of niobium (a superconducting metal), and use something called Josephson junction devices, resulting in chips for which the article states "there are no known physical barriers to decreasing size by a factor of 10 and thus increasing speed by a factor of 10, using lithography to move from today's 3-µm linewidth to 0.3 µm."


  • Information can travel no faster than the speed of light. Period.

    --Joe
    --
    Program Intellivision! [schells.com]
  • I mean it's cool and all... but I really don't want a desktop that can kill me if it has a coolant leak. It'd be great for supercomputers tho' =)
  • by yamla ( 136560 ) <chris@@@hypocrite...org> on Thursday December 14, 2000 @10:26AM (#558533)
    A peak of 750 Ghz has already been reached? That is not at all what the article says. It notes that data rates of 750 Gb/sec have already been reached, an impressive but totally different thing.

    This is, of course, very impressive but let us not forget that this requires cooling down to five degrees Kelvin. We are well past heatsinks and fans at this point. Unless the prices come down, it will cost around twenty THOUSAND dollars to cool the chip down this much.

    It will be a long time before you see a system like this on your desktop. Unless we develop room-temperature superconductors, of course. But that would change everything...

  • My karma is very high. booyeah! ;-)

  • ...what is the framerate of good old real existence?

    This would probably be limited by quantum physics. I know that there are only a finite number of positions and velocities allowed in a given range (i.e., it's physically impossible to travel at exactly 55 mph). I'm not sure if time is also quantized, but I imagine it is.

    To get some idea, let's say we're looking at photons of light in the visible range at about 500nm. This corresponds to about 6E14 Hz, or rather, 600 trillion `frames' per second. To get much better, I suppose you would have to develop eyes that can see gamma rays or something like that.

    So if the hottest video card out there (some time from now) claims it can do quake at more than like 10^15 fps, don't bother buying it.
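    The wavelength-to-frequency arithmetic above checks out; a quick sketch, using the comment's round 500nm figure:

```python
# Frequency of 500 nm light: f = c / wavelength
c = 299_792_458       # speed of light in vacuum, m/s
wavelength = 500e-9   # 500 nm, in meters

f = c / wavelength
print(f"{f:.2e} Hz")  # ~6e14 Hz, i.e. about 600 trillion 'frames' per second
```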

  • Intel to release new Pentium 42!

    750 GHZ of processing power!

    Guaranteed to really make your 56K dial up experience more multimedia intensive and enjoyable!

  • Are you kidding? Have you seen the estimates Microsoft has made with their products? I think it'll be....

    To install Windows 2005:

    486/66 CPU (only one, don't try two or more)

    150 Mb of Hard Drive space

    24 Mb of available RAM (32 Mb suggested)
  • crgrace,

    The article talks about a new Hypres (sp) AD converter that runs at 12 GS/s, and can dynamically change SNR for bandwidth, and vice versa (or at least # of bits, as the article says).

    Do you have any idea how many bits this puppy can do per sample? I didn't find this number, strangely, within the article.

    Thanks.

  • I feel compelled to chime in here. I studied Superconducting Electronics for a while as an EE grad student. The biggest issue was making memory out of superconductors of any reasonable density. Since we had so little memory, around 4K?, the efforts were concentrated on doing Digital Signal Processing. These operations do not require vast amounts of memory and are generally useful for real time digital signal filtering. Once the memory issue is resolved, I expect to see greater use of these processors for generic computing.

    Incidentally, a Josephson junction looks like this:
    S I S (where S is a superconductor and I is an insulator). The resulting junction provides nonlinear behavior and can act as an amplifier, a la a funky transistor.

    Cheers.
  • Shouldn't Moore's Law be taken in the same vein as Murphy's?

    It's more of a truism than a mathematical law, as it's only based on an observation of the way things seem to be.

    An example:
    If, in finishing a messy project, nothing went wrong when something very well should have, I don't think this has proved Murphy's Law invalid, or that we should absolutely conclude that something went wrong because the result wildly violated it. Perhaps the best answer is that we should be skeptical of the result (nothing went wrong), because it goes against the 'wisdom' of the law.

    To take this to the case at hand, we should only be skeptical of these claims because they seem to go against the guidelines stated in Moore's Law.

  • Imagine what the ENIAC engineers would have to say about "impossibility" if you could time warp back and demo a 1.2 GHz Athlon, 2 GB RAM, ultrawide ultrafast 100 GB SCSI HDD, and 21" LCD monitor... ??

    From a brochure for a car rental firm in Tokyo : When passenger of foot heave in sight, tootle the horn. Trumpet him melodiously at first, but if he still obstacles your passage then tootle him with vigor.

  • I think you've found your source of heat when you mentioned the lack of 'light superconductors'... as seems to be the case here (please forgive my lack of understanding), the heat would mostly come from circuits outside of the Josephson junctions.
  • Applying concepts like that to the brain is useless. The brain is unbelievably parallel, and runs asynchronously, so "how fast can we think?" really doesn't mean anything. I know you were just thinking of the sci-fi aspects of this and all, but this is one nut that just ain't gonna be cracked.
  • You forgot to allow for relativistic time-dilation.
    The currents here are particles moving through niobium, not light. They have a momentum, and thus are subject to special relativity (not to mention general relativity).
    I'm late for work, so I don't have time to do any calculations, but the gist of it is that _as far as the particles are concerned_ they can travel much further than .4mm in one cycle.
  • are jst an excse to gratitosly use the "µ" character.
  • This is as fallacious as the idea that I could have a long rigid rod from here to Alpha Centauri, and communicate with it in realtime by pushing and pulling on the rod.

    (Hmm, how did i come to talk about rigid rods and fallacio in one sentence..)
  • 1) It's not that bad at all. I'm fairly proficient at qwerty and I switched to Dvorak in no time, matching speeds in an hour. It's far easier the more layouts you learn, but even for a second layout there's no real problem.

    2) No-one dares touch my keyboard :) I still have qwerty keys on it, just a little applet to switch between the two. I can still type just as fast as before with qwerty.

    It's no big performance hit, and you gain a lot from it. Give it a try, you might be surprised.

  • $ dc
    186000 5280 * 12 * p
    11784960000
    20 k
    750000000000 / p
    .01571328000000000000
    q
    So 0.0157 inch is the distance light travels in a vacuum in one cycle; I don't know about the speed in these materials.
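    The same arithmetic in Python, for anyone without dc handy (same round 186,000 mi/s figure as the dc session):

```python
# Distance light travels in a vacuum during one 750 GHz clock cycle
inches_per_second = 186_000 * 5280 * 12  # miles/s -> feet/s -> inches/s
per_cycle = inches_per_second / 750e9    # inches per cycle

print(per_cycle)  # ~0.0157 inch, i.e. roughly 0.4 mm
```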
  • To install Windows 2005:

    749Ghz CPU (only one, don't try two or more)

    100Tb of Hard Drive space (see section on SCSI drivers)

    10Tb of available RAM (100Tb suggested)

  • Poor ol' Apple/Motorola hasn't even gotten to 750 MEGAhz yet!

    Cheers,

  • Any predictions about RAM or hard drives in five years? Compared to the progression of CPU speeds, increases in memory speed and capacity have been almost static. Perhaps RAMBUS can start suing for royalties even before they develop their PC750000 RIMM.
  • Er, just to be pedantic...Waters wrote all the lyrics for DSOTM, didn't he?
  • More information, please!

    I'm not entirely sure about that - it's not my field of knowledge :-) but iirc, brains are said to go at about a thousand hertz because that's about the maximum speed at which brain cells can output signals.

  • I will thanks!

  • Not *that* much faster. That's only 750 1GHz Thunderbirds; I'm sure distributed.net has far more CPU power than that. Though if you had a distributed.net-type setup with a few thousand 750GHz machines...
  • You're correct if you assume they're hard little balls. But they're not. Info can't travel faster than light.

    More clarification on the above point is that electrons through copper go @ about half the speed of light.
  • According to scientists (there ought to be a link, but I can't remember it..), the maximum framerate a human can actually perceive is 76 fps. Higher than that might be easier on the eye (I know 100 Hz is clearly much more comfortable than say 60 Hz), but it's not possible to perceive higher than 76 Hz (fps).
    /S
  • No, see, while the CPU will finally be fast enough, the amount of time it takes for the hard drive to read the needed 20 gigs of data (actually, it's only 500 MB, but due to Microsoft's patented read and re-read technology, it reads the data 40 times), it will still take an hour or more to boot. A spokesperson from Microsoft said they are looking into caching the data in a RAM drive so the subsequent reads will occur at a much higher rate. Computer experts warn that to be able to store the 500 MB of data, Windows will require an additional 4 GB of RAM, which will in turn drive up the costs of computers. Market analysts, however, are quick to point out the increased sales of RAM will help out our falling economy, giving it the boost it needs to recover.

    --
  • by carleton ( 97218 ) on Thursday December 14, 2000 @10:29AM (#558559)
    Nope. That's the whole point of 128 bits. Assuming you're doing brute force, and have 1000 of these computers overclocked to run at 1000 GHz (to make the math easier), and of course assuming that they can do one trial per cycle,

    2 ^ 128 trials * 1 cycle / trial * 1 second / (10^15 aggregate computer cycles) * 1 year / (3600 * 24 * 365) = 10790283070806014 years = 10 quadrillion years.
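    The same estimate as a sketch (1000 machines at 1000 GHz gives 10^15 trials per second, one trial per cycle):

```python
# Expected brute-force time for a 128-bit keyspace
trials = 2 ** 128
rate = 1000 * 1000e9         # 1000 machines x 1000 GHz = 1e15 trials/s
seconds = trials / rate
years = seconds / (3600 * 24 * 365)

print(f"{years:.3e} years")  # ~1.079e16, i.e. ten quadrillion years
```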
  • Can you imagine the processor spending most of its time sitting around waiting for memory?

  • Two words.... "Beowulf Clusters"

  • I can see it now:
    750Ghz PentiumXXXIV processor 1Ghz FSB 2Ghz memory 500Mhz access to storage and a graphics processor that is only capable of pumping out three frames per second in Quake 25.
    gazamm! a 750x multiplier? That's crazy... but it just might work... I think it'd be better to get that FSB up, tho... I think that would solve the apparent bottleneck causing that 3fps. Try it out and get back to me...
  • Does anyone else remember the ultimate RISC computer? There was only one instruction in the instruction set, I believe it was "subtract and branch on negative." Four operands, I think, the two things to subtract, a place to store the result, and a target for the branch. Appropriate sequences of this instruction could be used to build all the low level functions you needed -- subtraction, addition, shifts (either way), and, or, xor -- and from those you could build more complicated things. Some of those functions took dozens of "native" instructions to implement.

    Someone actually built one of these and the stupid thing worked. Ran like a dog because everything needed to make multiple memory accesses and you had to execute zillions of instructions to actually accomplish anything. But clocked at 750 GHz and combined with some really fast memory...
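    A minimal sketch of such a one-instruction machine. This is the common three-operand "subleq" (subtract and branch if the result is <= 0) formulation, not necessarily the exact four-operand instruction recalled above; program and data share one memory, and a negative branch target halts:

```python
def run(mem, pc=0):
    """One-instruction computer: mem[b] -= mem[a]; branch to c if result <= 0."""
    while pc >= 0:
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]                    # the only operation there is
        pc = c if mem[b] <= 0 else pc + 3   # conditional branch
    return mem

# Addition built from subtraction: mem[10] += mem[9], via a scratch cell.
mem = [9, 11, 3,     # scratch -= x       (scratch becomes -x)
       11, 10, 6,    # y -= scratch       (y += x)
       11, 11, -1,   # scratch -= scratch (zeroes it; result <= 0, so halt)
       2, 3, 0]      # data: x=2 at mem[9], y=3 at mem[10], scratch at mem[11]
run(mem)
print(mem[10])       # 5
```

    Everything else (AND, OR, shifts) gets built the same way, which is why it "ran like a dog."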

  • good one :)
  • I've heard of water, but that's about it.
  • There are sampling oscilloscopes, that can map out a repetitive waveform by taking a very small time-wise 'snapshot' of a signal, then wait an integer number of cycles + a small dt, and take another snapshot, then put the waveform together. However, the fastest of these I've seen (and that's not many) had a bandwidth of 50GHz. There may be faster ones available, though, possibly using techniques outlined below.

    There are methods that the optics community uses to measure high speeds / short times. One such idea is an autocorrelator, which interferes a fast signal with a delayed version of itself, thus allowing you to map out the waveform shape. Possibly the 750GHz team made a version of one of these.

    There are pulsed lasers that have pulse widths of less than a femtosecond (admittedly, you probably have to build these yourself and be careful with components). These are probably the best generators of delta functions, and are also probably used to probe some of these novel devices.

  • What about carbon nanotubes? they seem like the perfect single-electron superconductor to me...
  • by Anonymous Coward
    a Beowulf cluster of these?
  • uhhh...what does behind the curb mean? Did you intend to say curve? Regardless, they are shooting for 10GHz by then in a reasonably priced chip (of course, that still means a cool grand+). If superconductors were cheap, everything would be using them.
  • I submitted a more accurate review that, among other things, didn't confuse data rate and clock frequency, to Slashdot and Kuro5hin last week. Kuro5hin accepted it. You can read it and reader responses at:

    http://www.kuro5hin.org/?op=displaystory&sid=2000/12/10/0925/1544 [kuro5hin.org]
  • There's a difference between a signal and a fluxon. The fluxon couldn't travel faster than light, but it's possible for a signal to do so, since it's just information. A macroscopic example can be given with a well-trained marching line. Say that there is a gap between the first third of the line and the rest. Someone gives an order, and the middle third moves forward -- all the people moving simultaneously, since they're well trained -- to close the gap, while the back third remains in place. The gap has just instantaneously moved 1/3 the length of the marching line, without any actual object having to move faster than the speed of light.
  • I think it is very unlikely they will be able to do it cheaply, which is just as important as being able to do it at all. Otherwise, this technology will be of interest only to the military.

    Don't forget that the military has played a significant role in the birth of almost all technology to date. They have the (massive) funds to research fields without any saleable product in sight. The Internet (ARPANET), for example. Nuclear power. Rocket science. Jets. If they research superconducting enough, they may find a way (or help someone else find a way) to mass-produce it cheaply. In peacetime, really the only thing a military force is good for...!
  • I am really impressed, amazed, amused...

    I don't have moderator points left, please someone moderate it up.

    --ricardo

  • Didnt Lain already do this?

  • I have a PII/233, 256M RAM, Win2K. The Open dialogue on Media Player appears almost instantaneously, but I'm assuming it's been cached from yesterday.

  • Yea! Since I just saw a factoid saying Windows 2000 is the largest 'program' ever written (I'd say it's multiple programs...) - from Learning Kingdom - "A complete printout of its 29 million lines of source code would form a stack of pages 193 feet high (59 meters), about as tall as a 19-story building." So are faster processors a good thing? That is, doing a whole bunch of dumb things really fast is not necessarily better than doing a few smart things slower. Tell me about how x86 is being utterly thrown out the window and that will be exciting.
  • Well, now that we have macroscopic quantization of magnetic flux, and fluxon-switching devices, that Flux Capacitor can't be far off now can it? Anyone got a DeLorean sitting in the barn waiting to be refitted? Now's your chance!
  • When using light primarily instead of electricity, a lot less heat is generated. Would some of the more advanced chips be cool enough to run safely without a heat sink? If so, this would not only extend the life of the standard CPU but slightly reduce the amount of clutter in the standard PC box. //this message posted by powerpenguin from somewhere.
  • I find it's typing speeds that slow the user down, not the CPU. Surely Dvorak keyboards (once we're past the learning curve) will make the real difference. My typing speed tripled over a week...

    Not that faster CPUs are bad, mind, just they don't solve the real issue.

  • by crgrace ( 220738 ) on Thursday December 14, 2000 @10:33AM (#558582)
    Superconducting logic has been out for a VERY VERY long time. In fact, IBM burned tens of millions of dollars on the subject in the 1970s. The problems with superconductors are even WORSE than the problems with superfast III-V logic. UCSB has 70 GHz flip-flops made out of transferred-substrate heterojunction (III-V, Indium Phosphide and something else) transistors, but nobody thinks they will revolutionize computing, because they won't. So it is with superconducting logic.

    There are two huge problems with superconducting logic that don't seem solvable in the near future. They are:

    1. Cost: These things are enormously expensive to manufacture and operate, and it is the economy of scale of CMOS technology which has enabled, more than anything, the current computing revolution. Do you have any idea how expensive coolant, and the dewar to use it in, are to get something to 5K? Even the so-called "high-temperature" superconductors have to be pretty damn cold to function; they just don't need to go so close to absolute zero.

    2. Integration: This is probably the killer. It will be extremely difficult to integrate many devices together. Even if myriad technical difficulties are overcome, the solution is not likely to be inexpensive, as CMOS technology is. For III-V semiconductors (which use much less exotic materials than superconductors), high defect rates, problems with lattice matching of the materials, and the lack of a high-quality native oxide (like SiO2 in silicon) have made it impossible to achieve integration levels anywhere close to those achieved in silicon. Even GaAs, the most well-understood III-V semiconductor, can't be integrated to more than a few thousand devices. That's why we don't have 20 GHz GaAs microprocessors. And superconductors are even HARDER to deal with.

    In summary, even if researchers are able to overcome almost insurmountable odds to find a way to reliably integrate meaningful numbers of these devices on a single die, I think it is very unlikely they will be able to do it cheaply, which is just as important as being able to do it at all. Otherwise, this technology will be of interest only to the military.

    By the way, I know III-V semiconductors have a lot of very important uses, especially in optics and RF. It is a fact, however, that III-V logic is mainly of interest to the military and the space industry.

  • Even that's not correct. Electrons in copper move very slowly, on the order of centimeters/minute. Ripples, on the other hand travel at approximately half c (or thereabouts).
  • As another user already pointed out, even if this thing were one million times faster than Intel and AMD's current stuff (which it isn't), breaking 128 bit encryption would still take time well beyond the death of the universe.
  • Doesn't this wildly violate it? Right now we're at 1.5 GHz. Moore's law states that this will double every 18 months. That means it should double about three times in the next five years. La la la, David does some simple math:

    12GHz.

    Hmm. I know that Moore's law is just a rough estimate, but so far we've stuck to it pretty faithfully, right? If the rate of increase is increasing (aaaahhhhh! semantically difficult sentence!), I'm gonna be really impressed with where our technology goes!

    Unless they mis-estimated their release date (and we know that's never happened) by about six years.
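    A sketch of that extrapolation (the comment rounds five years down to three doublings; it's actually 3 1/3):

```python
import math

start_ghz = 1.5                 # today's clock
doubling_years = 1.5            # Moore's-law doubling period

# Where does the trend put us in five years?
ghz_in_5_years = start_ghz * 2 ** (5 / doubling_years)
print(f"{ghz_in_5_years:.0f} GHz")   # ~15 GHz

# And how long until 750 GHz at the same rate?
t = doubling_years * math.log2(750 / start_ghz)
print(f"{t:.1f} years")              # ~13.4 years
```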
  • Close, but actually the real market for 750Ghz in the home is Quake players who think polygons are 'not sufficient' and want real-time ray tracing at 60fps. I'm one of them. Is 750Ghz going to be sufficient for 'Toy Story'-level rendering in real time?
  • I have to go to the drugstore to buy a few more pounds of liquid helium, I'll be back after lunch.

    Hmmmm... then does that mean that if you never return you'll be missing, presumed fed?

  • Whoops, an entire day of caffeine and /. reading has impaired my mathematical abilities. Make that a nine year mis-estimate.
  • After the invention of the Josephson junction in the seventies, most of what was known about this either died from complexity (too many subsystems) or was taken black and never heard from again.

    I suspect that advances in insulating materials, most of which have been recently declassified, will make cryogenic circuitry usable in smaller, possibly even desktop-sized machinery.

    Consider that the entire integrated circuit set could be built onto a substrate (standard thick-film assembly modded for low temperature) and then the entire package surrounded with aerogel or foamed silica insulation. A heat pump or chiller (Peltier devices, Stirling-cycle fridge, whatever) is attached through a window, and then the heat removed from the device. Given the kind of heat transfer rates you can get now, the heat pump section would only have to be slightly larger than the max power consumption of the circuitry.

    Okay, so it will take ten minutes to cool down to the point where the main processor works. Would this be any different from waiting for Windoze to boot?

  • by pjrc ( 134994 ) <paul@pjrc.com> on Thursday December 14, 2000 @01:49PM (#558612) Homepage Journal
    In the early 90s, it was widely believed that GaAs transistors would replace good 'ole Si for microprocessors. They're a lot faster, after all.

    Well, several things happened:

    • Nobody figured out how to make reasonable P-channel devices.
    • Small geometries were much harder, because III/V and II/VI type (more than one element) semiconductors suffer from a whole bunch of problems which, in my limited understanding (I'm a circuits guy), are due to the wrong atom at a place in the crystal lattice, such as a Ga where an As should have been.
    • It's easy to take Silicon Dioxide (glass) for granted, until you try to figure out a good way to make insulators on other materials.
    • All the while, good ole Si kept getting better and better... not only faster, but higher densities. Today's CPU speed is as much a function of using lots of transistors as it is their speed. As more transistors became available, everyone invested a lot of research and thought into ways to use them to run code faster (superscalar architecture, branch prediction, out-of-order and speculative execution, etc.)
    Now I've been watching the J-junction for several years now, though I know much less about how it really works than I ought to. I do know there's a big difference between a test device, and processes that produce only thousands of them, and being viable for a modern microprocessor. GaAs transistors are hugely popular for RF applications, where you only need a small number of them. Today nobody believes the world will eventually be overtaken by GaAs based microprocessors.

    It seems unlikely the world will really be overtaken by J-Junction microprocessors, at least in our lifetimes. Maybe that's just wishful thinking, since I've got a lot of energy invested in transistors, and with a bit of luck that'll remain valuable for another 25 years... but then again, look what happened to all those guys who only knew about tubes!

    Anyways, the point is that there's a big difference between a small number of insanely fast test devices and a high-density processor with all the other requisites to make a reasonable microprocessor.

  • wass,

    First, the converter is clocked at 12.8 GHz, which is very different from running at 12 GS/s. It is definitely an oversampled converter, because they mention a digital decimation filter, but they don't give you the oversampling ratio. By oversampling, I mean the input is sampled higher than the Nyquist rate and noise-shaped. Then, it is digitally filtered back down to Nyquist with significantly less quantization noise than you would have had without oversampling. A common oversampling ratio is 128, which would put the converter at 90 MHz signal bandwidth, which is quite high for an oversampled converter. I'm totally guessing on the oversampling ratio, though.

    As for the number of bits it can do per sample, that is largely irrelevant in a communications context. What matters is the SNR. You can figure out the number of bits of linearity with the following equation: SNR = (6*B + 2) dB, where B = the effective number of bits. In other words, if you have an SNR of 60 dB, you have just about 10 bits of accuracy. The reason accuracy is specified as an SNR rather than a number of bits is that if the converter is nonlinear (as it always is in practice), then even if you have more bits of resolution, the additional bits are inaccurate and should be ignored. For example, if you have a so-called "12 bit" converter with an SNR of 63 dB, then although you have 4096 possible output codes, the uncertainty between them is enough that the lower 2 LSBs are garbage, and, although you have 12 bits of resolution, you only have 10 bits of accuracy. This is a common way for manufacturers to lie on data sheets. Keep in mind, though, there are situations where resolution is more important than accuracy (such as digital imaging) and you'd rather have a 12-bit / 10-bit-accurate converter than a 10-bit / 10-bit-accurate converter.

    Hope that cleared things up. If you have more questions, email me or reply to this post. I just love talking about data-converters!
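    The rule of thumb above as a sketch (using the comment's SNR ≈ 6*B + 2 dB approximation; the textbook coefficients are 6.02*B + 1.76):

```python
def effective_bits(snr_db):
    """Invert SNR = 6*B + 2 dB to get the effective number of bits B."""
    return (snr_db - 2) / 6

print(effective_bits(63))  # ~10.2: a "12-bit" part with 63 dB SNR is ~10 bits accurate
print(effective_bits(60))  # ~9.7: "just about 10 bits", as above
```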

  • by krlynch ( 158571 ) on Thursday December 14, 2000 @11:54AM (#558617) Homepage
    I wonder if there are enough particles in the universe to run a finite elements simulation for more than 4 hours in a 750 GHz CPU.

    There are more than enough to keep such a CPU busy for nearly all eternity....A 750GHz CPU (even assuming 1 flop/cycle average throughput) would still have a hugely difficult time just doing QCD calculations of the interactions inside a SINGLE proton in anything approaching days! (I have a colleague doing lattice QCD who was just telling me about their new algorithms for hacking time off of certain types of lattice simulations, and they are talking about running for 16 CPUyears on a brand new 90Gflop machine! At 750 Gflops, you're still talking 2 CPU years! Don't ask me for details, though, as I don't know any....not my field).

    For further consideration, there are about 10^80 particles in the universe (give or take a few orders of magnitude.....). Let's assume it only takes 10 flops to update a single particle for one timestep (not even close, but let's run with it shall we?) That means we update 75 x 10^9 particles every second...let's round up and call it 10^11. That means it would take about 10^69 seconds to update one time step. Or 10^61 years. Which is roughly 10^51 times the age of the universe. Not to mention the amount of RAM you'd need to run this simulation on (which would take more particles to build than there are in the universe itself, but I digress.....)

    Really monstrously fantastically mind-bogglingly large numbers are really really fun :-)
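    The estimate above, step by step (same deliberately generous round numbers as the comment):

```python
particles = 1e80          # particles in the universe, give or take
flops_per_update = 10     # absurdly optimistic cost per particle per step
flops = 750e9             # one 750 Gflop CPU

updates_per_second = flops / flops_per_update     # 7.5e10, call it ~1e11
seconds_per_step = particles / updates_per_second
years_per_step = seconds_per_step / 3.15e7        # seconds in a year

print(f"{years_per_step:.1e} years per timestep") # ~4e61
```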

  • use a 750GHz Beowulf to heat your office building!

    A few years ago in Minneapolis a company (Honeywell?) decided to shut down their old mainframe because they discovered they only had one job still running on it, easily ported to new machines. The day before the final shut-off, the janitors discovered that the building was built without heaters because the computer gave off enough heat to need cooling even in the coldest Minnesota winter. They ended up selling time on the mainframe (for peanuts, not even recovering energy costs) for 2 more years until they could install a heater.

  • "Honey, We're going to have to take out the kids room in order to put in the new 300 gallon liquid N2 dewar I just ordered... What?"

    Eric

  • by supabeast! ( 84658 ) on Thursday December 14, 2000 @10:41AM (#558629)
    The Freon!
  • by nachoworld ( 232276 ) on Thursday December 14, 2000 @10:44AM (#558633) Homepage
    This is absurd. The cost of keeping such a superconductor at 5 K is going to keep the general public, and even most corporations, from buying this technology. It's expensive to keep a box at that temperature in the lab (I should know, I'm a chemist). Only the US government would be willing to shell out the money for these low-maintenance devices (maybe). Corporations would rather just use the money to buy the computing speed in multiple CPUs rather than as one - it'd be a hellofa lot cheaper.

    Perhaps the people working on the project will eventually be able to use a superconducting material that works at liquid nitrogen temp instead of niobium (perhaps a yttrium complex like we use now? - I don't know the specifics of this 750GHz IC or whether it would be able to use yttrium complexes). In that case, the cost will go down and perhaps we'll see more corporations buying this tech. In order for personal consumers to buy a 750GHz computer, we'd have to have room-temp or near-room-temp superconductors.

    But then we run into one of the hugest physics problems of the late twentieth century. The scientific community no longer has the enthusiasm it once had for searching for that "perfect" superconductor.

    ---
  • "Honey, with this new cryogenics tower computer, I'll be able to get my computing done faster, so I can then spend more time on YOUR needs."

    Simple. :^)
  • At those speeds, interconnect latency is a major problem. A photon can only travel 40 microns during a single clock cycle of that puppy. So chip layout becomes extremely important.
  • Shit. I'm off by a factor of 10. I meant 400 microns.

    (1 ns = ~30cm)
    (1 ps = ~300um)
    750 GHz => 1.3ps/cycle = ~400um
  • Better start saving now, then.

    Twenty thousand dollars, 5 years -- that's just 330 dollars a month. What? You have other things to buy? You don't think it would be worth it? Think of how fast you could crunch SETI units, or play Quake 2005!

    Come on man, where are your priorities!

    :-)


    Torrey Hoffman (Azog)
  • Dec SciAm (not online) shows nanotubes at 1/20th the diameter of current wiring, and very fast.
  • There's an additional factor of 2 because, on average, you only need to check half the possibilities. So it's actually 5 quadrillion years. See? Now that seems much more achievable!

  • They used Fluorinert. The -100C overclockers were playing with that.

    I also think one of the Crays used artificial blood plasma as the coolant.
  • Rambus is to formally announce later this afternoon that they will be filing IP suits, as they claim to own the patents on 5K niobium 0.3 µm "Josephson junction" circuits.

    --

  • by dmatos ( 232892 ) on Thursday December 14, 2000 @10:47AM (#558658)
    With the advent of the 300MHz processor, the 233 I purchased became dirt cheap. Now that there are 1.5 GHz chips out, you can get an 800 MHz chip dirt cheap. When the 750GHz chips are produced, I will be lined up to buy an obsolete 500GHz chip that will be fast enough to start windows from boot in less than three minutes! Yay bleeding edge subsidizing second-stringers!
  • that you can rearrange the letters in "overclocker" to spell "clever crook"?
  • by mr_gerbik ( 122036 ) on Thursday December 14, 2000 @10:49AM (#558662)
    That's precisely why Strom Thurmond should use 256-bit encryption!

    2^256 trials * 1 cycle/trial * 1 second / (10^12 aggregate computer cycles) * 1 year / (3600 * 24 * 365 seconds) ≈ 3.7 × 10^57 years... he should be retired by then at least.
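    Redoing that arithmetic (Python's bignums make it painless; this assumes one trial per cycle at an aggregate 10^12 cycles/second, as in the parent):

```python
# Exhaustive search of a 256-bit keyspace at 10^12 trials/second.
SECONDS_PER_YEAR = 3600 * 24 * 365

trials = 2 ** 256        # every possible 256-bit key
rate = 10 ** 12          # aggregate trials per second
years = trials // rate // SECONDS_PER_YEAR
print(f"{years:.2e} years")  # on the order of 10^57 years
```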
  • Yes, it seems so...

    Naturally, all interconnects are superconducting and therefore lossless, at least at dc, and the losses remain low, compared to metals at room temperature, up to clock frequencies of about 750 GHz.

    It also says they _could_ achieve more than 100 GHz... and 750 Gbit/s _has been_ demonstrated experimentally.

    BTW, at 750 GHz that light goes _only_ 0.4mm (0.016 inches) in one cycle. That's impressive, but it also means there is still some margin to increase frequencies.

    I wonder if there are enough particles in the universe to run a finite elements simulation for more than 4 hours in a 750 GHz CPU. (nevertheless, we will need all that power to run Windows .NET 2010 Blackholesweeper ;-).

    --ricardo

  • The mention of Pentium-class in a post talking about 750Ghz is almost enough to make me want to throw up.

    There has got to be something better than x86. And if consumers are still stuck with x86 when processor speeds hit 750Ghz for the common computer, well, I will have lost my interest in computers for life.

    I can see it now:
    750Ghz PentiumXXXIV processor 1Ghz FSB 2Ghz memory 500Mhz access to storage and a graphics processor that is only capable of pumping out three frames per second in Quake 25.

    We have got to leave behind the baggage before we hit multi-Ghz speeds. Please god, don't keep the architecture.

  • Unless the prices come down, it will cost around twenty THOUSAND dollars to cool the chip down this much.

    Certainly affordable for any company or organization that needs computation at any cost (simulations, physics modeling, ray tracing, code cracking). Most mainframes cost MUCH more than that!

  • The way I read the article, they've already solved most of the problems you're talking about. So what's the problem?
  • by Mr Z ( 6791 ) on Thursday December 14, 2000 @11:02AM (#558670) Homepage Journal

    One problem with these high clock rates is that you end up having to pipeline things rather excessively all over the place. I'd imagine at 750GHz that even a single 64-bit ADD would be pipelined over multiple cycles, due to transport delay!

    Think about it: light travels about 1 foot per nanosecond (30cm). At 1GHz speeds, a signal could travel well across a die if it were unimpeded (e.g. could travel at the speed of light). In fact, it could theoretically travel most of the way across the motherboard in one clock period. At 750GHz, light travels 0.4mm per clock tick -- about 1/20th of the way across a typical CPU die (assuming a die in the range of 8mm x 8mm to 10mm x 10mm -- not too far off what we build today). We're talking 20 pipeline stages just to get from one edge of the die to the other, if we can travel at the full speed of light in a vacuum. And the bad news is that we probably can't -- just look at today's CPUs!

    What'll happen is that highly parallelizable problems will speed up, and inherently serial problems will end up staying the same. All of your number crunching for playing video games will rocket along since the calculations can be pipelined and parallelized, but the twisty, turny, five-instructions-and-a-branch control code won't speed up much.

    --Joe
    --
    Program Intellivision! [schells.com]
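    The parent's pipeline-depth figure is easy to reproduce (a sketch; it takes vacuum light speed as a best case and an 8 mm die edge, both illustrative):

```python
C = 299_792_458  # m/s -- best case; real on-chip signals are slower

def min_pipeline_stages(die_edge_mm: float, freq_hz: float) -> float:
    """Lower bound on clock cycles for a signal to cross the die."""
    metres_per_cycle = C / freq_hz
    return (die_edge_mm / 1000) / metres_per_cycle

print(round(min_pipeline_stages(8, 750e9)))  # ~20 stages edge to edge
```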
  • Now we just have to get the price down... Superconductors and 5K cooling systems are both insanely expensive. However, if I could get one of these today, I'd be willing to install the cryogenics facility in my house. That 750Ghz system would be a nice little boost up from my P133 :)
  • by clinko ( 232501 ) on Thursday December 14, 2000 @10:20AM (#558676) Journal
    Joe Consumer -

    "750GHZ! WOW! NOW I CAN RUN AOL EVEN FASTER!
    AND WITH 56k AOL IS FASTER THAN EVER!"


  • I just can't see explaining why I need a cryogenics tower for my computer to my wife...

  • Finally a subject on which I have a decent contribution to make. I wrote a technical report on the technologies behind the current fastest supercomputers and on up-and-coming innovations. This gives a high-level overview of ASCI Red, IBM's Blue Gene, and the HTMT (superconducting technology based) project. Follow this link to the LaTeX2HTML version [arizona.edu] or download the Postscript version [arizona.edu].
  • Yeah, well, the servers crashed because they're cryogenically cooled superconductors, and we ran out of liquid helium this week.

    I have to go to the drugstore to buy a few more pounds of liquid helium, I'll be back after lunch.
  • It's not surprising that the NSA would be interested in this technology, but I do find it striking that there's such a blatant connection. If I were to guess, I'd say they're probably way ahead of industry and academia on this one.

    Uh...thin indium wires are used routinely on any instrument running at very low temperatures to limit heat input. The only way you can get a signal out of an instrument at say 4K and keep the instrument at that temp is to use thin wires.

  • It isn't obvious that moving to "optical" computers would necessarily diminish the amount of heat involved; the problem is that the switching principle changes, and I'm not sure that we really have suitable "superconductors of light" to correspond to the electrical properties of superconductors.

    The idea isn't new; Congo [amazon.com] used a "special form of diamonds" that would be used in exactly this manner as the McGuffin that was the excuse for them to go to Africa. And the book dates back 20 years.

  • IIRC, isn't Moore's law a relationship between time and transistor count rather than time and speed? While one could find a correlation between transistor count and speed, it's not really as relevant as people think.

    I think it could, theoretically, be possible (although rather improbable) to reach 750GHz in 5 years and stay right on time with Moore's law; it would just be a matter of cooling.

    or maybe I'm an idiot. I dunno. i hope I'm right, because by posting I just lost the ability to moderate this thread :)

  • Doesn't this wildly violate it? Right now we're at 1.5 GHz. Moore's law states that this will double every 18 months. That means it should double about three times in the next five years. La la la, David does some simple math:

    How many times has this been said on /.? Moore's law is about the density of DRAM, and by implication, the density of other CMOS circuits. It says nothing about clock rate. It is true that smaller transistors are faster, but there are other problems with clocks that smaller transistors make more difficult, most notably clock skew. There is already logic out there that can go faster than 20 GHz, but it is LSI GaAs logic (flip-flops / gates / adders).

    Moore's law has nothing to do with superconductors, at all. We may never see mass-market superconducting logic. It will be just too expensive, and it could be impossible to integrate well enough for computers.
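    For what it's worth, the clock-rate extrapolation the parent is responding to is trivial to tabulate (a sketch; it naively applies the 18-month doubling to clock speed, which, as noted, isn't what Moore's law actually says):

```python
clock_ghz = 1.5                 # today's top desktop part
doublings = 60 // 18            # full 18-month periods in five years -> 3
projected = clock_ghz * 2 ** doublings
print(projected)                # 12.0 GHz -- nowhere near 750 GHz
```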

  • The cost of keeping such a superconductor at 5 K is going to keep the general public, and even most corporations, from buying this technology.

    What do you mean? The article says: "These days, about US $20 000 can buy a cryocooler that reaches down to 4-5 K and fits in the lower half of a standard 48-cm instrument rack. Commercial systems using off-the-shelf cryocoolers are now obtainable from Hypres to realize the SI definition of the volt; they require routine maintenance only once every 24 months. Further reductions in size, cost, and increased efficiency of cryocoolers should stem from increased volume of production and the availability of a cooler developed with cryogenic electronics as its specific application." This is certainly feasible in both cost & maintenance fees for any organization that has had to buy high-end workstations and/or mainframes.

    I don't see people buying this for their home desktop, but there would certainly be a great deal of interest from any company or organization that customarily deals with large amounts of computation.

  • With superconducting materials there is an impedance, but its magnitude is somewhere around 10^-18 times that of copper. I am guessing at that number, but the point is that it's tiny. It is, for an engineer, infinitesimal.

    Interestingly enough, not one of copper, silver, or gold - the best transition-metal conductors at room temperature - exhibits superconductivity, at any temperature. When I had to write a paper on this, the highest-temperature superconductors achieved by man were ceramics. Unusual elements like yttrium (Y) were in the compounds they synthesized.

    This just in: AuIn3@0.00005 K - The first ferromagnetic superconductor. So there's hope for eternal electromagnets after all. :)

  • How 'fast' does our brain work? Can we generalize the synaptic process into a 'Hertz' unit of speed? Can this thing 'think' faster than a human brain? Would it be simpler to tie it right into our lobe than to use something primitive like a keyboard and mouse? Obviously the technology to do so isn't here yet, but it's kinda like sci-fi coming to life...

    --
  • I've worked for 4 months at the National Research Council in Ottawa, Canada, and there's only one thing you can use to cool down to 5K: liquid helium.

    We used liquid helium to cool our experiment. Back in '93 it cost about $10 CDN per litre in bulk. And it evaporates instantly on contact with air. You need to use liquid nitrogen to make sure that the surfaces holding the liquid helium are cold enough so that the helium doesn't just completely evaporate on contact.

    I saw another post saying $20k to cool the machine. That might be the cost per month of operation. While super-fast chips may be feasible, the most cost-effective cold you're going to get is just from liquid nitrogen. I'd probably start from there as a benchmark.

    (This is the kind of thing I expect to read about some drunk New Zealanders doing in their basement. LHe is just a bit too expensive, I guess...)
  • If you thought people came up with extreme overclocking methods before, just wait until they try to reproduce this in their garages....

    The Free ODMG Project [sourceforge.net] needs volunteers.
  • If Moore's law holds, and I'm not one to predict whether or not an estimate will fail (who am I to do such a thing?), we should be somewhere around six or seven gigahertz by the time we're all scoffing at this article's headline.

    Six or seven gigahertz. We'll be finished simplifying the user interface FAR before then. We'll be fancifying it.

  • Just think about how far light goes within one clock cycle, and then tell me you aren't impressed.
    Wouldn't electric signals going through a wire at that frequency emit light???

    --
    Game over, 2000!

  • I could just see it: Intel starts competing in the microwave business with their new super-hot 750GHz chips, which flash-fry food in seconds. Hell, use a 750GHz Beowulf cluster to heat your office building!

    Aside from that, I don't see what the point is. Without RAM to match the 750GHz clock speed, the chip's value would be seriously reduced. But I digress.

    Maskirovka

    History is on the move: those who fail to keep up will be left behind. Those who get in the way won't survive at all.
  • by crgrace ( 220738 ) on Thursday December 14, 2000 @11:18AM (#558728)
    Disclaimer: I design analog-to-digital converters for a living.

    The market the article talks about is the analog to digital converter market, not the desktop market.

    True enough. I stand by my statements, however. For one thing, the article discusses an A/D converter fabricated in the technology. The die size was 1 cm^2. That is truly enormous and would be extremely expensive as a product. Even if they can bring down the lithography, it is still very expensive. Second, while they say it runs at 12.8 GHz, because of the decimation filter it is obviously an oversampled converter, but they don't give the oversampling ratio, so we have no idea of the actual conversion rate. I drew parallels between III-V materials (which have been around since the 1960s and must have achieved some kind of maturity) and superconducting electronics (which have been around since the late 1970s), and I think they still stand.

    My belief is that this is a laboratory curiosity with little commercial potential. I'm sure the military is very interested in using it with radar, however.

    By the way, the IEEE is well known for pumping up "cutting edge" technologies that never reach their potential. Remember "fuzzy logic"?

  • I can get 100,000 frames / second on Q3. Dammit, I can see the difference!!


    --

  • Didn't know that. I had thought the first working device was from around seventy-two or three.

    I wasn't really thinking of resistive heating, rather inductive heating of dielectrics and substrates surrounding the conductors. I honestly haven't taken the time to find out if this is a problem with superconducting circuitry.
