Engineers Report Breakthrough in Laser Beam Tech

petralynn writes to tell us the New York Times is reporting that Stanford engineers have discovered a method to modulate a beam of laser light up to 100 billion times a second. The new technology apparently uses materials that are already in wide use throughout the semiconductor industry. From the article: "The vision here is that, with the much stronger physics, we can imagine large numbers - hundreds or even thousands - of optical connections off of chips," said David A.B. Miller, director of the Solid State and Photonics Laboratory at Stanford University. "Those large numbers could get rid of the bottlenecks of wiring, bottlenecks that are quite evident today and are one of the reasons the clock speeds on your desktop computer have not really been going up much in recent years."
This discussion has been archived. No new comments can be posted.

  • by TripMaster Monkey ( 862126 ) * on Wednesday October 26, 2005 @04:02PM (#13883633)

    The NYT story is pretty light on the technical details... a more detailed write-up can be found here [eurekalert.org], and you don't have to register to read it.
  • by xmas2003 ( 739875 ) * on Wednesday October 26, 2005 @04:03PM (#13883643) Homepage
    NYT registration required to read this John Markoff (infamous at Slashdot because of his "sensational" coverage of Kevin Mitnick) article ... but fortunately, BugMeNot [bugmenot.com] comes to the rescue with the username/password "twernt/twernt".

    This work was funded by Intel and DARPA with some assistance from an HP researcher and uses something called the Quantum-Confined Stark Effect [google.com] with primary application in optical networking gear ... but hey, maybe we'll see a 100 GHz PC in the not-too-distant future.

    The Halloween webcam is up [komar.org] ... but X10 technology isn't capable of 100 billion updates per second ... ;-)

  • by Anonymous Coward on Wednesday October 26, 2005 @04:12PM (#13883714)
    I hate all the people who post that without knowing shit about it. Since this applies to optics and not semiconductors, it really doesn't have anything to do with Moore's Law.

    From the article:

    Several industry executives said the advance was significant because it meant that optical data networks were now on the same Moore's Law curve of increasing performance and falling cost that has driven the computer industry for the past four decades.

    Doh! Don't you hate it when you get all high and mighty posting about people who don't know what they're talking about and then find out that you don't know what you're talking about?
  • by nullset ( 39850 ) on Wednesday October 26, 2005 @04:18PM (#13883772)
    Real Genius is the movie you're looking for, starring Val Kilmer. I love the promotional poster of Val Kilmer wearing an Einstein shirt, and Einstein wearing a Val Kilmer shirt... God: "Kent, have you been touching yourself?" Kent: "Yes... I mean NO!"
  • by Incongruity ( 70416 ) on Wednesday October 26, 2005 @04:20PM (#13883795)
    You need to differentiate between the drift speed of the individual electrons (this can be quite slow, especially in AC) and the speed of propagation of energy, which if I recall is damn fast (near c, but not quite there... granted, 1/10 of c is still astoundingly fast, so my poor memory of freshman physics may not contradict you, though I think your guess is off)... the real advantage is that the switching speed is far beyond what we can do with current metal/electron-based circuits (RTFA). Additionally, this is big because using electrons generates more heat and is subject to induction/capacitance effects that light isn't. So those would be the main advantages, as I understand it... but I only play a physicist on /., so feel free to correct me, cruel world.
  • by DarthStrydre ( 685032 ) on Wednesday October 26, 2005 @04:21PM (#13883800)
    The speed of the electrons is on the order of cm/s, and is related to the current density.

    The electromotive force, or voltage, travels at about the speed of light.

    Picture a hose full of water. The water (the electrons) takes a long time to get from one end to the other... but the effect of pushing water in at one end is seen at the other end almost immediately (within reason).

    With AC, electrons never really gain ground in a balanced load situation. Back and forth, and back and forth...
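
    To put numbers on that: a rough sketch in Python of the textbook drift-velocity formula v = I / (n * q * A). The current and wire size below are made-up illustrative values, not from this post.

        import math

        I = 1.0             # current in amperes (assumed for illustration)
        n = 8.5e28          # free-electron density of copper, per m^3
        q = 1.602e-19       # elementary charge, in coulombs
        r = 0.5e-3          # wire radius in meters (1 mm diameter, assumed)
        A = math.pi * r**2  # cross-sectional area, m^2

        v_drift = I / (n * q * A)  # drift velocity, m/s
        print(f"drift velocity: {v_drift * 100:.4f} cm/s")  # a small fraction of a cm/s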

  • by Red Flayer ( 890720 ) on Wednesday October 26, 2005 @04:24PM (#13883833) Journal
    The modulation. The signal travels at about the same speed, but you can turn it on and off much, much faster... so the density of bits per unit of time is much higher.

    Normal signal:  ____----____----____----
                     0   1   0   1   0   1

    New hawtness:   _-_-_-_-_-_-_-_-_-_-_-_-
                    010101010101010101010101

    Both took the same amount of time to travel down the pipe. But one conveyed 4x the information.
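
    In code, the same point: over an identical window, a faster modulation rate packs in proportionally more bits. (A toy sketch; the rates are arbitrary round numbers, not from the article.)

        window = 1e-9        # one nanosecond of link time (assumed)
        slow_rate = 4e9      # 4 Gb/s modulation
        fast_rate = 16e9     # 16 Gb/s modulation, same wire, same window
        print(round(window * slow_rate), "bits vs",
              round(window * fast_rate), "bits")  # 4 vs 16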
  • by joe_bruin ( 266648 ) on Wednesday October 26, 2005 @04:31PM (#13883891) Homepage Journal
    The speed of electricity in a wire is not really the issue (it's about half the speed of light, I think. I'm sure someone will correct me). The real issue is signal propagation. When a transistor switches from closed to open or back, the electrical signal travelling through the wire is not a perfect on/off. The voltage ramps up or ramps down as some function of the length of the connection, the width of the wire, conductivity, leakage from the transistor, inductance... The system needs a bit of time to "settle" into the new high or low state. This is a big limiting factor in the clocking of modern CPUs. For communication off the chip, it's far worse. Now the lines are no longer 90nm (or whatever the chip was made at) in width, and have to travel a far longer distance. That's why today's processors are limited to around 1 GHz to the outside world, while internally they can be faster.

    Optical interconnects alleviate many of these problems. With a laser, the ramp up time is significantly shorter, there's no capacitance in the system, and it is far less prone to interference. So, on a 100 GHz optical link you can multiplex 100 1GHz pins (essentially running a P4's FSB on two wires instead of something like 180), thereby significantly reducing the pin count. Or you could run the pins 100 times as fast, meaning much less processor waiting on RAM or bus data.
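
    The pin-count arithmetic in that last paragraph, sketched out (the 180-pin figure is the poster's rough number; the rest are illustrative assumptions):

        bus_pins = 180            # roughly a P4 FSB's signal pins, per the post
        pin_rate_ghz = 1.0        # per-pin electrical signaling rate
        link_rate_ghz = 100.0     # optical modulation rate from the article
        pins_per_link = int(link_rate_ghz / pin_rate_ghz)  # one link carries 100 pins' worth
        links_needed = -(-bus_pins // pins_per_link)       # ceiling division -> 2
        print(f"{links_needed} optical links could replace {bus_pins} electrical pins")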
  • by Chris Burke ( 6130 ) on Wednesday October 26, 2005 @04:35PM (#13883911) Homepage
    Yeah, that's not true. I don't know how fast an electron moves (I'm assuming not the speed of light, since they have mass, and that quantum physics I know little about probably comes into play), but in a normal conductor they don't move very far before slamming into something. Individual electrons don't move that far or fast on their own, it's the aggregate and resulting field that really moves.

    But that's not really the problem. Transmit time is still quite short (I've heard 1 ns per 6 in of trace on a board). Latency isn't really the problem. The problem is: how fast can you change the signal? That's bandwidth. Here electrical conductors suffer because of parasitic capacitance and inductance, skin effects, reflections, induced current from nearby conductors, and a whole host of other signal integrity issues. It gets worse the longer the channel is and the more things you have connected to it. If you're wondering why the MP Pentium 4s have been on a 100 MHz QDR front side bus since they were released, this is why. It's also why even point-to-point interconnects like AMD's have only recently broken 1 GHz.

    Optics don't really have this issue. Two fiber optic cables next to each other don't interfere with each other. You don't have to overcome the capacitance of the channel to change from one value to the next. You just send photons of one frequency, and then switch to the next. As fast as you can switch is how much bandwidth you can get.

    Alright, I'm not really liking this explanation anymore. To just directly answer your question: the advantage is 100 GHz interconnect in a way that could potentially be built into chips.
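
    As a sanity check on the "1 ns per 6 inches" rule of thumb above, the implied propagation speed works out to about half the speed of light, which is typical for traces on FR-4 board material:

        c = 2.998e8                    # speed of light in vacuum, m/s
        inch = 0.0254                  # meters per inch
        v = 6 * inch / 1e-9            # 6 inches per nanosecond, in m/s
        print(f"{v:.3e} m/s = {v / c:.2f} c")  # ~0.51 c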
  • by Rob the Bold ( 788862 ) on Wednesday October 26, 2005 @04:36PM (#13883922)
    The speed at which an electric signal will propagate in a transmission line is somewhat less than c. The value of 0.1c in a sibling post is a good rule of thumb. Think of your transmission line as a bunch of inductors in series and a bunch of capacitors in parallel (imagine a ladder with inductor legs and capacitor rungs). At each step along the way you need to charge up the capacitor before current will move to the next inductor, where your current will charge up the magnetic flux, and then on to the next cap, etc.

    You can build what's called an "artificial transmission line" in just such a manner. It simulates the effect of a much longer pair of wires for lab purposes.
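
    A rough model of that LC ladder (component values are made up for illustration): each section contributes a delay of sqrt(L*C) and the ladder presents a characteristic impedance of sqrt(L/C), so a handful of parts can stand in for a long cable on the bench.

        import math

        L_sec = 1e-6      # series inductance per section, henries (assumed)
        C_sec = 100e-12   # shunt capacitance per section, farads (assumed)
        sections = 20

        t_sec = math.sqrt(L_sec * C_sec)  # delay per section, seconds
        z0 = math.sqrt(L_sec / C_sec)     # characteristic impedance, ohms
        print(f"total delay: {sections * t_sec * 1e9:.0f} ns, Z0: {z0:.0f} ohms")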

  • by GameMaster ( 148118 ) on Wednesday October 26, 2005 @04:46PM (#13884008)
    Quantum computers are great, in theory, but even if we are able to figure out how to build one that actually works, they are only capable of solving certain types of problems. Our present understanding of quantum physics tells us that you can't design a quantum computer that can do all the same math problems as a generic Intel/AMD CPU (i.e., run Windows, play Counter-Strike, etc.).

    That being said, the problems that can be solved by quantum computers tend to be the ones that would take a regular CPU until the end of the universe to perform (breaking strong encryption, large traveling salesman problems, etc.). At some point, if we can make a quantum computer compact enough, we might end up having quantum co-processors built into our PCs, but we'll probably never see the CPU of our PC replaced by a quantum computer.

    The tech being discussed in the article would be directly applicable to making generic PCs run faster (though it could also have the potential to improve communication speeds with a hypothetical quantum computer as well). Another tech that will probably be leveraged to make generic systems faster is the replacement of silicon in computer chips with diamond. Since diamond can handle vastly higher temperatures than silicon without melting, it is theoretically possible to push the clock speed on a diamond-based CPU much higher than on today's silicon CPUs.

    -GameMaster
  • by IvyKing ( 732111 ) on Wednesday October 26, 2005 @04:47PM (#13884016)
    First off, the electron velocity in wire is much less than the propagation velocity through the same wire.

    Now for the fun part - What is the velocity of propagation?

    For frequencies where the inductive reactance of the conductor is significantly larger than the resistance of that conductor at that frequency (think skin effect), the velocity of propagation is c divided by the square root of the effective relative dielectric constant. This is often referred to as an LC transmission line, since propagation is dominated by the series inductance and shunt capacitance. LC lines have a propagation velocity independent of frequency (at least to first order). As an example, coaxial cable with a solid polyethylene dielectric will have a propagation velocity of 0.66c, which would be valid from a few hundred kHz to several GHz.

    When the conductor resistance is greater than the inductive reactance, the line becomes an RC line, where the "propagation velocity" is dependent on frequency (dispersive) and the time for a transition to propagate is proportional to the square of the line length. The effective "propagation velocity" is going to be a lot less than c. It turns out that the interconnects on chips are RC lines, and it is often necessary to insert inverters on a line to speed things up (recall that propagation time varies with the square of the line length); a good rule of thumb is to space the inverters so that the propagation delay equals the gate delay.

    The RC problem is why loading coils were put on phone lines - the inductive reactance of the coils is larger than the resistance and the line becomes an LC. The loading coils are bad news for DSL - and an unloaded line looks like an LC line at the frequencies used by the DSL modems.

    A good reference for this is High-Speed Digital Design: A Handbook of Black Magic by Johnson and Graham.
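
    The two regimes above, sketched with numbers (the polyethylene figure is from the post; the on-chip RC values are assumptions for the sake of the example):

        import math

        c = 2.998e8  # m/s

        # LC line: velocity set by the dielectric, independent of frequency.
        eps_r = 2.25                                     # solid polyethylene
        print(f"LC line: {1 / math.sqrt(eps_r):.2f} c")  # ~0.67 c, the coax example

        # RC line: Elmore delay ~ 0.38 * R * C * length^2, so doubling the
        # length quadruples the delay -- hence the repeater inverters.
        r_per_m = 1e6    # on-chip wire resistance, ohms/m (assumed)
        c_per_m = 2e-10  # on-chip wire capacitance, F/m (assumed)
        for length in (1e-3, 2e-3):
            delay = 0.38 * r_per_m * c_per_m * length**2
            print(f"RC line, {length * 1e3:.0f} mm: {delay * 1e12:.0f} ps")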

  • by Salvage ( 178446 ) on Wednesday October 26, 2005 @04:56PM (#13884089) Homepage
    It's not all that accurately worded, but it is relevant. The lack of accuracy is likely due to trying to keep that comment short.

    In any case, while Moore's Law is specific to transistor-based circuitry, the pattern is applicable to other technologies, such as Kryder's Law, which covers rigid magnetic media (hard drives). In fact, looking at these cases in general within a field of technology suggests a more abstract pattern. After all, the original component technologies with which Moore worked when he made his observations have been replaced over the years, some of them multiple times, with the common thread to all of them being that they ultimately deal with transistors.

    If optical technologies get pulled in by the same economic factors that drive Moore's and Kryder's Laws, they'll very likely fall into a similar pattern: doubling of a particular characteristic over constant intervals.

    Of course, all of this also depends on how close a class of technology is to its fundamental physical limits. For instance, density of transistors is ultimately limited by the size of atoms; the limit there may be somewhere around a "one-molecule transistor." In the particular case of the article, the technology is optical modulators and the measure is switching rates. For that, one limit may be the frequency of the transmitted light. The visible spectrum runs from 384-769 THz, with the higher frequencies (in general) more difficult to generate. All this in turn suggests an upper limit of around 700 trillion switchings per second. With a Moore's or Kryder's Law-like rate, say doubling bit rate every two years, today's 10 billion bps goes to 700 trillion in about 33 years.
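
    That closing estimate checks out with a couple of lines of Python (same assumptions as the paragraph above: start at 10 Gb/s, cap near the ~700 THz optical ceiling, double every two years):

        import math

        start_bps = 10e9        # today's ~10 Gb/s modulators
        limit_bps = 700e12      # the optical-frequency ceiling suggested above
        doubling_years = 2      # assumed Moore/Kryder-style doubling period

        years = doubling_years * math.log2(limit_bps / start_bps)
        print(f"about {years:.0f} years to reach the limit")  # ~32, matching the post's "about 33"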
  • by timeOday ( 582209 ) on Wednesday October 26, 2005 @05:13PM (#13884225)
    I disagree; the slowed progress in PC speedups over the last couple of years has been disappointing. Things really started to fall off at about 3 GHz. 3 GHz chips were released in 2002(!), over 2 1/2 years ago, and we still haven't hit 4 GHz. That says it all with respect to clock speed.

    More efficient processors are only just closing in on 3 GHz... pretty bad when the P3 (also reasonably good IPC) came out at 1 GHz *5 years* ago.

    Intel and AMD have clearly indicated that the good old days are over by introducing dual-core chips... nice if your workload needs that, but complicating the programming model (to multithreaded) is a concession to the physical limitations that are imposing themselves.

  • by a_ghostwheel ( 699776 ) on Wednesday October 26, 2005 @05:26PM (#13884309)
    Moore's Law has nothing to do with clock speed, I think. If I remember correctly, it states that the number of transistors on a chip will double every 18 months. Improved clock speeds are just a side effect.
  • by osobear ( 761394 ) on Wednesday October 26, 2005 @05:26PM (#13884313) Homepage
    The speed of the electrons is on the order of cm/s, and is related to the current density.

    Slightly more correctly, the drift velocity of electrons in standard copper cable is on the order of (tens of) cm/s. Actual electron velocity is close to c (as they bounce around in a cable), and electron drift velocities can be on the order of 10^7 m/s in some media.

  • by ivan256 ( 17499 ) * on Wednesday October 26, 2005 @11:55PM (#13886702)
    More efficient processors are only just closing in on 3ghz...

    Who cares? They're more efficient. They don't need to run at 3 GHz to be faster than the old stuff. Just because the clock speed isn't there yet doesn't mean the performance hasn't gone up. Look how many times AMD has pulled ahead of Intel in performance, and they've never even shipped a 3 GHz CPU. The only thing that has fallen off is the power of Intel's old marketing. The only reason there's a 3 GHz number to "catch up to" is that so much performance was given up to hit those timings.

    IPC isn't really that good a measure of efficiency either. What kind of instruction? How much work does that single instruction do? How long did it take to get the data for that instruction? IPC numbers are always calculated with "fast" instructions that have no, or few, wait states.

    If all you're comparing is clock speed, or "IPC", you're not getting a very good performance comparison.
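
    To illustrate with made-up numbers (not benchmarks of any real chips): what matters is instructions retired per second, which is roughly IPC times clock, so a lower-clocked chip can still come out ahead.

        cpus = {
            "high clock, low IPC": {"ghz": 3.8, "ipc": 0.9},
            "low clock, high IPC": {"ghz": 2.4, "ipc": 1.6},
        }
        for name, spec in cpus.items():
            gips = spec["ghz"] * spec["ipc"]  # billions of instructions per second
            print(f"{name}: {gips:.2f} billion instructions/s")  # the 2.4 GHz chip wins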

    Intel and AMD have clearly indicated that the good old days are over by introducing dual-core chips...

    Sounds more to me like the good ol' days are finally here. Multi-core has been a goal for years, but was available only to those with the deepest pockets. Intel and AMD bringing multi-core to the masses doesn't mean they've run out of ideas for increasing single core performance so much as it means they've figured out how to throw a few more cores on a die in a cost effective manner. As for multi-threading (a far too simplistic way to describe what you need to do to properly process in parallel), the fastest computers have always been highly parallel. We know how to do that stuff now. It's not rocket science. End users don't typically program their machines anyway, and we could do with weeding a bunch of the crappy engineers out of the job pool.
