Engineers Report Breakthrough in Laser Beam Tech
petralynn writes to tell us the New York Times is reporting that Stanford engineers have discovered a method to modulate a beam of laser light up to 100 billion times a second. The new technology apparently uses materials that are already in wide use throughout the semiconductor industry. From the article: "The vision here is that, with the much stronger physics, we can imagine large numbers - hundreds or even thousands - of optical connections off of chips," said David A.B. Miller, director of the Solid State and Photonics Laboratory at Stanford University. "Those large numbers could get rid of the bottlenecks of wiring, bottlenecks that are quite evident today and are one of the reasons the clock speeds on your desktop computer have not really been going up much in recent years."
More informative article: (Score:5, Informative)
The NYT story is pretty light on the technical details....a more detail-oriented write-up can be found here [eurekalert.org]... and you don't have to register to read it.
BugMeNot shortcut for 'ya ... (Score:3, Informative)
This work was funded by Intel and DARPA with some assistance from an HP researcher and uses something called the Quantum-Confined Stark Effect [google.com] with primary application in optical networking gear ... but hey, maybe we'll see a 100 GHz PC in the not-too-distant future.
The halloween webcam is up [komar.org] ... but X10 technology isn't capable of 100 Billion times/second updates ... ;-)
Re:Moore's Law Finally Broken?!?!?!? (Score:2, Informative)
From the article:
Several industry executives said the advance was significant because it meant that optical data networks were now on the same Moore's Law curve of increasing performance and falling cost that has driven the computer industry for the past four decades.
Doh! Don't you hate it when you get all high and mighty posting about people who don't know what they're talking about and then find out that you don't know what you're talking about?
Re:Modulating Laser... (Score:3, Informative)
Re:Speed of light vs. speed of electrons in wire? (Score:3, Informative)
Re:Speed of light vs. speed of electrons in wire? (Score:4, Informative)
The electromotive force, or voltage, travels at about the speed of light.
Picture a hose full of water. The water (the electrons) takes a long time to get from one end to the other... but the effect of pushing water in at one end is seen at the other end almost immediately (within reason).
With AC, the electrons never really gain any ground in a balanced load situation: they just oscillate back and forth around the same spot.
Re:Speed of light vs. speed of electrons in wire? (Score:5, Informative)
Normal signal: ____----____----____----
                 0   1   0   1   0   1
New hawtness:  _-_-_-_-_-_-_-_-_-_-_-_-
               010101010101010101010101
Both took the same amount of time to travel down the pipe. But one conveyed 4x the information.
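To put numbers on that, here's a tiny Python sketch; all the rates are made up for illustration. The propagation delay is the same either way, but the faster-toggling signal delivers 4x the bits in the same window.

```python
# Toy comparison of two links with identical propagation delay but
# different toggle (symbol) rates. All numbers here are made up.

def bits_delivered(symbol_rate_hz: float, duration_s: float) -> float:
    """Bits delivered in duration_s, assuming one bit per symbol."""
    return symbol_rate_hz * duration_s

normal_rate = 25e9            # the "normal signal" toggles 25 billion times/s
new_rate = 4 * normal_rate    # the "new hawtness" toggles 4x as fast
window = 1e-6                 # compare bits delivered in one microsecond

# Propagation delay down the "pipe" is the same for both, so it cancels;
# only the toggle rate changes how much information arrives per second.
print(bits_delivered(normal_rate, window))  # 25000.0
print(bits_delivered(new_rate, window))     # 100000.0
```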
Re:Speed of light vs. speed of electrons in wire? (Score:5, Informative)
Optical interconnects alleviate many of these problems. With a laser, the ramp-up time is significantly shorter, there's effectively no capacitance in the system, and it is far less prone to interference. So, on a 100 GHz optical link you can multiplex 100 1 GHz pins (essentially running a P4's FSB on two wires instead of something like 180), thereby significantly reducing the pin count. Or you could run the pins 100 times as fast, meaning much less processor waiting on RAM or bus data.
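The pin-count arithmetic can be sketched in a few lines of Python; the pin count, clock, and one-bit-per-pin-per-clock assumption are rough illustrative guesses, not real P4 specs.

```python
import math

# Rough pin-count comparison: a wide electrical bus vs. a few fast
# optical links. All figures are illustrative assumptions.

bus_pins = 180            # ballpark FSB signal-pin count from the post
bus_clock_hz = 1e9        # 1 GHz per pin, 1 bit per pin per clock
optical_rate_hz = 100e9   # one 100 GHz optical link

bus_bandwidth = bus_pins * bus_clock_hz               # aggregate bits/s
links_needed = math.ceil(bus_bandwidth / optical_rate_hz)

print(bus_bandwidth)   # 180000000000.0 bits/s
print(links_needed)    # 2 -- two optical links replace ~180 electrical pins
```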
Re:Speed of light vs. speed of electrons in wire? (Score:5, Informative)
But that's not really the problem. Transmit time is still quite low (I've heard about 1 ns per 6 inches of trace on a board), so latency isn't really the problem. The problem is: how fast can you change the signal? That's bandwidth. Here electrical conductors suffer because of parasitic capacitance and inductance, skin effect, reflections, induced currents from nearby conductors, and a whole host of other signal-integrity issues. It gets worse the longer the channel is and the more things you have connected to it. If you're wondering why the MP Pentium 4s have been on a 100 MHz QDR front-side bus since they were released, this is why. It's also why even a point-to-point interconnect like AMD's has only recently broken 1 GHz.
Optics don't really have this issue. Two fiber optic cables next to each other don't interfere with each other. You don't have to overcome the capacitance of the channel to change from one value to the next. You just send photons of one frequency, and then switch to the next. As fast as you can switch is how much bandwidth you can get.
Alright, I'm not really liking this explanation anymore. To just directly answer your question: the advantage is 100 GHz interconnect in a way that could potentially be built into chips.
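Using that 1 ns per 6 inches rule of thumb, a quick sketch shows why latency isn't the bottleneck but toggle rate is. The trace length and signaling rates below are just example values.

```python
# Latency vs. bandwidth, using the ~1 ns per 6 inches rule of thumb.

def trace_delay_ns(length_in: float) -> float:
    """Propagation delay in ns for a board trace of length_in inches."""
    return length_in / 6.0

def bit_time_ns(rate_hz: float) -> float:
    """Time per bit in ns at a given signaling rate."""
    return 1e9 / rate_hz

trace = 12.0                      # a 12-inch trace (illustrative)
print(trace_delay_ns(trace))      # 2.0 ns of latency, fixed either way
print(bit_time_ns(100e6))         # 10.0 ns per bit on a 100 MHz bus
print(bit_time_ns(100e9))         # 0.01 ns per bit on a 100 GHz optical link
```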
Re:Speed of light vs. speed of electrons in wire? (Score:4, Informative)
You can build what's called an "artificial transmission line" in just such a manner. It simulates the effect of a much longer pair of wires for lab purposes.
Re:I'm still betting on qubits (Score:5, Informative)
That being said, the problems that can be solved by quantum computers tend to be the ones that would take a regular CPU until the end of the universe to perform (breaking strong encryption, large traveling-salesman problems, etc.). At some point, if we can make a quantum computer compact enough, we might end up having quantum co-processors built into our PCs, but we'll probably never see the CPU of our PC replaced by a quantum computer.
The tech being discussed in the article would be directly applicable to making generic PCs run faster (though it could also have the potential to improve communication speeds with a hypothetical quantum computer as well). Another tech that will probably be leveraged to make generic systems faster is the replacement of silicon in computer chips with diamond. Since diamond can handle vastly higher temperatures than silicon without melting, it is theoretically possible to push the clock speed on a diamond-based CPU much higher than on today's silicon CPUs.
-GameMaster
Re:Speed of light vs. speed of electrons in wire? (Score:4, Informative)
Now for the fun part - What is the velocity of propagation?
For frequencies where the inductive reactance of the conductor is significantly larger than the resistance of that conductor at that frequency (think skin effect), the velocity of propagation is c divided by the square root of the effective relative dielectric constant. This is often referred to as an LC transmission line, since propagation is dominated by the series inductance and shunt capacitance. LC lines have a propagation velocity independent of frequency (at least to first order). As an example, coaxial cable with a solid polyethylene dielectric will have a propagation velocity of 0.66c, which would be valid from a few hundred kHz to several GHz.
When the conductor resistance is greater than the inductive reactance, the line becomes an RC line, where the "propagation velocity" is dependent on frequency (dispersive) and the time for a transition to propagate is proportional to the square of the line length. The effective "propagation velocity" is going to be a lot less than c. It turns out that the interconnects on chips are RC lines, and it is often necessary to insert inverters on a line to speed things up (recall that propagation time varies with the square of the line length); a good rule of thumb is to space the inverters so that the propagation delay equals the gate delay.
The RC problem is why loading coils were put on phone lines - the inductive reactance of the coils is larger than the resistance and the line becomes an LC. The loading coils are bad news for DSL - and an unloaded line looks like an LC line at the frequencies used by the DSL modems.
A good reference for this is High-Speed Digital Design: A Handbook of Black Magic by Johnson and Graham.
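The two regimes above can be sketched numerically. The LC velocity formula is the one from the post; the RC delay uses an Elmore-style estimate with made-up per-unit-length values, just to show the quadratic scaling.

```python
import math

C = 2.998e8  # speed of light, m/s

def lc_velocity(eps_r: float) -> float:
    """Propagation velocity on an LC line: c / sqrt(effective eps_r)."""
    return C / math.sqrt(eps_r)

# Solid polyethylene, eps_r ~ 2.25, gives ~0.67c, matching the 0.66c figure.
print(lc_velocity(2.25) / C)

def rc_delay(r_per_m: float, c_per_m: float, length_m: float) -> float:
    """Elmore-style RC delay estimate, ~0.5*R*C*L^2: quadratic in length."""
    return 0.5 * r_per_m * c_per_m * length_m ** 2

# Doubling an RC line's length quadruples its delay; hence the repeaters.
d1 = rc_delay(1e5, 2e-10, 0.001)   # hypothetical on-chip R and C per metre
d2 = rc_delay(1e5, 2e-10, 0.002)
print(d2 / d1)                      # 4.0
```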
Re:Moore's Law Finally Broken?!?!?!? (Score:2, Informative)
In any case, while Moore's Law is specific to transistor-based circuitry, the pattern is applicable to other technologies, such as Kryder's Law, which covers rigid magnetic media (hard drives). In fact, looking at these cases in general within a field of technology suggests a more abstract pattern. After all, the original component technologies with which Moore worked when he made his observations have been replaced over the years, some of them multiple times, with the common thread to all of them being that they ultimately deal with transistors.
If optical technologies get pulled in by the same economic factors that drive Moore's and Kryder's Laws, they'll very likely fall into a similar pattern: a doubling of a particular characteristic over constant intervals.
Of course, all of this also depends on how close a class of technology is to its fundamental physical limits. For instance, the density of transistors is ultimately limited by the size of atoms; the limit there may be somewhere around a "one-molecule transistor." In the particular case of the article, the technology is optical modulators and the measure is switching rate. For that, one limit may be the frequency of the transmitted light. The visible spectrum runs from 384-769 THz, with the higher frequencies (in general) more difficult to generate. All this in turn suggests an upper limit of around 700 trillion switchings per second. With a Moore's or Kryder's Law-like rate, say doubling the bit rate every two years, today's 10 billion bps goes to 700 trillion in about 32 years.
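That last calculation checks out, give or take a year; the doubling period is the post's own assumption.

```python
import math

# How long until a 10 Gb/s modulator reaches a ~700 THz switching limit,
# if the rate doubles every two years (the post's assumption)?

start_rate = 10e9       # today's 10 billion switchings/s
limit_rate = 7e14       # ~700 trillion/s, near the visible-light frequency cap
doubling_period = 2.0   # years per doubling

doublings = math.log2(limit_rate / start_rate)
years = doublings * doubling_period
print(round(years, 1))  # 32.2
```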
Re:Desktop power not going up much? (Score:3, Informative)
More efficient processors are only just closing in on 3 GHz... pretty bad when the P3 (also with reasonably good IPC) came out at 1 GHz *5 years* ago.
Intel and AMD have clearly indicated that the good old days are over by introducing dual-core chips... nice if your workload needs that, but complicating the programming model (to multithreaded) is a concession to the physical limitations that are imposing themselves.
Re:Desktop power not going up much? (Score:3, Informative)
Re:Speed of light vs. speed of electrons in wire? (Score:2, Informative)
Slightly more correctly, the drift velocity of electrons in standard copper cable is tiny, on the order of hundredths of a millimetre per second for ordinary currents. The individual electrons do move quickly between collisions (the Fermi velocity in copper is around 10^6 m/s, still well short of c) as they bounce around in a cable, but what actually travels at a large fraction of c is the electromagnetic signal, not the electrons themselves.
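A back-of-envelope drift-velocity calculation is v = I/(nqA); the wire size and current below are illustrative assumptions, and the answer scales with current density.

```python
# Drift velocity estimate for copper: v = I / (n * q * A).
# n and q are standard textbook values; I and A are illustrative.

n = 8.5e28        # free electrons per m^3 in copper (~1 per atom)
q = 1.602e-19     # electron charge, coulombs
I = 1.0           # 1 A of current
A = 2.0e-6        # 2 mm^2 cross-section (typical household wire gauge)

v_drift = I / (n * q * A)
print(v_drift)    # ~3.7e-5 m/s, i.e. centimetres per hour
```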
Re:Desktop power not going up much? (Score:3, Informative)
Who cares? They're more efficient. They don't need to run at 3 GHz to be faster than the old stuff. Just because the clock speed isn't there yet doesn't mean performance hasn't gone up. Look how many times AMD has pulled ahead of Intel in performance, and they've never even shipped a 3 GHz CPU. The only thing that has fallen off is the power of Intel's old marketing. The only reason there's a 3 GHz number to "catch up to" is that so much performance was given up to hit those timings.
IPC isn't really that good a measure of efficiency either. What kind of instruction? How much work does that single instruction do? How long did it take to get the data for that instruction? IPC numbers are always calculated with "fast" instructions that have no, or few, wait states.
If all you're comparing is clock speed, or "IPC", you're not getting a very good performance comparison.
Intel and AMD have clearly indicated that the good old days are over by introducing dual-core chips...
Sounds more to me like the good ol' days are finally here. Multi-core has been a goal for years, but was available only to those with the deepest pockets. Intel and AMD bringing multi-core to the masses doesn't mean they've run out of ideas for increasing single core performance so much as it means they've figured out how to throw a few more cores on a die in a cost effective manner. As for multi-threading (a far too simplistic way to describe what you need to do to properly process in parallel), the fastest computers have always been highly parallel. We know how to do that stuff now. It's not rocket science. End users don't typically program their machines anyway, and we could do with weeding a bunch of the crappy engineers out of the job pool.