Production of Photon Processors Expected in 2006

ThinSkin writes "Photon processors that transmit data via light, not electrons, are slated to enter production in mid-2006, ExtremeTech reports. Headed by a UCLA professor and a Nobel Prize winner, startup Luxtera claims that its optical modulator clocks in at 10 GHz, ten times the speed of the optical modulator Intel researchers talked about last year. Since the optical modulator currently exists as a separate module, the remaining work is to integrate the optical waveguides into standard CMOS processes. Luxtera has worked closely with Freescale Semiconductor to develop this technology."
This discussion has been archived. No new comments can be posted.
  • by the_mad_poster ( 640772 ) <shattoc@adelphia.com> on Tuesday March 29, 2005 @09:12PM (#12084307) Homepage Journal
    Electrons ARE light particles.
    • by Anonymous Coward on Tuesday March 29, 2005 @09:16PM (#12084350)
      Actually, you're wrong--electrons are particles of mass. They travel in waves, just like electromagnetic radiation (that is, light), and have a distinct De Broglie wavelength, but they are not, themselves, electromagnetic radiation.
      • by Anonymous Coward on Tuesday March 29, 2005 @09:17PM (#12084368)
        The only thing I know about duality is that I'll be seeing this article again tomorrow.
      • by TummyX ( 84871 ) on Tuesday March 29, 2005 @09:29PM (#12084487)
        Yes, but they are light. Certainly lighter than protons which are themselves light.
        • by Pla123 ( 855814 )
          I think you're confusing protons with photons.

          Photons are light. Protons are not.

          A Proton is a neutron with a positron.
          Electrons, positrons, protons, and neutrons are particles with mass and they are not light.

          When electrons or positrons move (current) they produce electromagnetic waves, which are light.

          What they're developing is the ability of devices like the CPU and memory to communicate using light, thus giving them more bandwidth.

          It's a small step toward faster computing and eventually quantum computing...
          • by BeBoxer ( 14448 )
            Electrons, positrons, protons, and neutrons are particles with mass and they are not light.

            So they are what, heavy? It's a joke. Perhaps a little too subtle, but a joke none the less. Laugh. ;-)
          • I think the parent meant "light" as in "not heavy".

            A failed joke; don't ask for too much out of it.
          • by novakyu ( 636495 ) <novakyu@novakyu.net> on Wednesday March 30, 2005 @03:55AM (#12086740) Homepage
            ... There's the saying, "Don't feed the trolls," but since this is marked "Informative", I should correct it on a few points:

            A Proton is a neutron with a positron.

            No, it's not. A proton is three quarks. From Wikipedia [wikipedia.org]:
            Protons are classified as baryons and are composed of two Up quarks and one Down quark, which are also held together by the strong nuclear force, mediated by gluons. The proton's antimatter equivalent is the antiproton, which has the same magnitude charge as the proton but the opposite sign.

            A neutron may decay into a proton+electron pair, but a proton is most definitely not composed of neutron+something else. If nothing else, this should be the proof: neutron is heavier than proton---by conservation of mass and energy, neutron cannot be a component of proton.

            When electrons or positrons move (current) they produce electromagnetic waves, which are light.

            No, it's not when they move that they produce EM waves. It's when they accelerate that they do (if you had been a physicist, this difference would have been carved into your very being). A moving charge only creates a magnetic field, which doesn't necessarily propagate as an oscillating field in space (i.e. an EM wave). What you need is not just a current but, for example, an alternating current.

            It's a small step toward faster computing and eventually quantum computing...

            Er... I know that you don't know what you are talking about, but this has nothing to do with quantum computing. (O.K. I haven't RTFA (nor do I have the interest or time to do so), so I may be wrong on this, but...) This development is analogous to moving from copper cables to fiber optics---it does use a less "lossy" and perhaps faster medium, but it is in no way related to quantum computing.

            • While most of what you say is correct, the following is wrong:

              If nothing else, this should be the proof: neutron is heavier than proton---by conservation of mass and energy, neutron cannot be a component of proton.

              The nucleus of every atom (except for hydrogen, obviously) has a smaller mass than the sum of the masses of its nucleons. If your proof were valid, then they couldn't consist of the nucleons they consist of. The point is that the binding energy also contributes to the mass, and since the binding energy
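              For the free-particle mass bookkeeping being argued over here, a quick back-of-the-envelope check (Python; standard particle masses quoted from memory, so a sketch rather than a derivation):

              ```python
              # Free-particle mass bookkeeping for neutron beta decay (MeV/c^2, rounded).
              M_NEUTRON = 939.565
              M_PROTON = 938.272
              M_ELECTRON = 0.511

              # n -> p + e- + antineutrino releases the mass difference as kinetic energy.
              q_value = M_NEUTRON - (M_PROTON + M_ELECTRON)
              print(f"free-neutron decay releases ~{q_value:.3f} MeV")  # ~0.782 MeV

              # For bound nucleons the binding energy shifts this bookkeeping, which is
              # exactly the caveat raised in the comment above.
              ```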

    • by Anonymous Coward
    • I think you meant to say electricity is light. This is somewhat right, but not exactly. Light /is/ made up of electromagnetic fields, and the electric force between charges /is/ transmitted by photons, but they are two separate ideas. Modern quantum physics has electricity sent via virtual photons while normal light is sent by real photons. Real photons involve both an electric field and a magnetic field perpendicular to each other and perpendicular to the direction of the wave propagation. The reason fiber op
    • You forgot to include several lines explaining that you were joking, and that by light you meant "light as in not heavy". All good comedians take ten minutes to explain every joke, don't they?
  • by Anonymous Coward on Tuesday March 29, 2005 @09:13PM (#12084320)
    people start talking about the GHz Myth?

    My photons are faster than yours!

    • Re:How long until... (Score:5, Interesting)

      by Rei ( 128717 ) on Tuesday March 29, 2005 @09:22PM (#12084420) Homepage
      Well, it is a multiplier, even if it's not the only factor...

      Of course, it just amazes me to think about. With a main clock cycle of 10 billion cycles per second, there would actually be fractional cycles going on at hundreds of billions of cycles per second. The number is staggering; a couple hundred billion times the width of the outer layer of your skin would reach to the moon. The photons will travel through hundreds of thousands of hand-designed gates at the tiniest of scales.

      And, of course, the most common usage for this marvel of modern engineering will be to provide better lighting effects in video games. :P
      • by ciroknight ( 601098 ) on Tuesday March 29, 2005 @09:28PM (#12084472)
        No offense, but we're more likely to see this kind of technology being used to make movies before video games. Hear me out.

        When newer processor technologies are developed, they're almost always delegated to server processors before they trickle down to desktop processors. (Of course, there are exceptions: MMX and its spawn, etc.)

        I can't wait to see Pixar pick up the Apple Xserves based on an optically interconnected chip. The movies they'd make would only get more spectacular.
        • "I can't wait to see Pixar pick up the Apple Xserves based on an optical interconnected chip. The movies they'd makewould only get more spectacular."

          most likely they'll be used by the younger graphic artists in order to obtain [(fp!)at the speed of l16][t, beeoztches].

          shame, but it's true.

        • No offense, but I'd rather we leverage this kinda thing in the pursuit of curing fucking diseases before we make the videogames.

          ...or the movies, whichever. Probably the movies first, since they don't need to render in real-time.

          • I guess you haven't seen the 3dfx ads [libero.it]? "You could use the technology to save lives.... or play games [libero.it]".
          • I would too, but the fact is, we don't have the computational power to cure diseases, and protein folding has yet to provide anything useful. But, if you want to spend multiple hundreds of thousands of dollars on machines that will take literally a year to produce any result at all (whether or not it would be valid could easily take another two or three years to confirm), go right on ahead.

            Fact: protein folding is best implemented via grid computing. That way, as newer technologies become available, you ca
        • Re:How long until... (Score:3, Interesting)

          by danila ( 69889 )
          There doesn't seem to be anything technologically spectacular about Pixar movies these days. Toy Story was impressive. Finding Nemo was impressive. But Incredibles and Robots are generic 3D animation (with supposedly excellent stories, characters, etc.). Pixar is not a 3D graphics pioneer and the only thing Apple Xserves will do is drive the costs down (or up) a bit. Graphically it will all look the same.

          I am much more impressed with Kaena, Immortel, Sky Captain, Advent Children and the like. Pixar is pass
          • Comment removed based on user account deletion
              I should have made myself clearer. Definitely, Pixar was a pioneer in 3D animation a decade ago. And clearly, it still makes a lot of process innovations that improve the rendering technology, management and storage of art assets, etc. It's just that the visual quality (i.e. realism and special effects) of its animated films is nothing special compared with other studios.
              • Comment removed based on user account deletion
                  OK, you are the second person to tell me about Violet's hair today... I know that Pixar makes a big point of having developed an ultra-cool hair technology, but IMHO it's mostly just PR. The technology is certainly nice, but overall relatively unimportant (just as Sully's fur was). From a technological point of view the amazing Polar Express is much more advanced (the performance capture, the realistic humans, the IMAX release).
                  • Check how the realistic hair was done in the Matrix [virtualcin...graphy.org] Revolutions, for example.
                    • Comment removed based on user account deletion
                    • Not exactly. It's not Pixar that did it, it's a particular researcher. I am not saying that Pixar doesn't have brilliant computer graphics scientists and programmers - they certainly have. I am just saying that currently Pixar doesn't appear to concentrate on technological innovations.

                      They are an animation studio, so that might not be a bad thing. But if you are looking for breakthroughs in CGI, it might not make sense to look at Pixar.
                    • That particular researcher works for Pixar, making it Pixar's work. ;)
          • Alright, fair enough. The reason I would say Apple xServes is because they're most likely the next computer to get said technology (Freescale's a PPC shop, could easily see them licensing the technology to IBM to use in the G*, which would go to an Apple machine before it would arrive in any other machine). The reason I said Pixar is because they're the most likely to buy Apple hardware.

            It is your opinion that Pixar is passe, but you must realize that development on the films you mentioned (The Incredible
            • I saw totally perfect metals in 2002, in DVD extras to Two Towers. There was a video of two sets of armour, side by side. One was 100% CGI, another was 100% real. They were indistinguishable. And I stopped being impressed with technological aspects of water surfaces after I saw The Perfect Storm and The Day After Tomorrow.

              I am not trying to bash Pixar or claim that they don't make good stories (they do) and successful films (though that doesn't mean that Incredibles was better than Immortel: Ad Vitam). I a
        • Of course, there are exceptions: MMX and its spawn, etc.

          Ever hear about GPUs?

  • Uh, okay (Score:4, Insightful)

    by Anonymous Coward on Tuesday March 29, 2005 @09:14PM (#12084329)
    And who gets to use these? Are these like only special coprocessors for million-dollar supercomputers? Are they going to be x86-compatible? MIPS compatible? What?
    • This makes a lot of sense for an interconnect between chips on a single board.
    • Re:Uh, okay (Score:5, Informative)

      by TopSpin ( 753 ) * on Tuesday March 29, 2005 @09:27PM (#12084471) Journal
      And who gets to use these?

      Whoever can afford them.

      Are these like only special coprocessors for million-dollar supercomputers?

      No. These are not "processors" of any sort. It is a new way to modulate signal between CMOS and optical at high frequency and small scale. It may provide faster bus speeds, assuming the reality matches the funding hype.

      Are they going to be x86-compatible? MIPS compatible? What?

      It will be "compatible" with any CMOS device that needs a bus to communicate with some other device. Since that includes all useful CMOS devices, it will be compatible with everything!

      • No. These are not "processors" of any sort. It is a new way to modulate signal between CMOS and optical at high frequency and small scale. It may provide faster bus speeds, assuming the reality matches the funding hype.


        I suppose then that using them as the data bus to memory would be the best first thing to do. Imagine being able to read memory at register-read speed; that would be great even if you keep your same Pentium IV processor.

        • No you can't. Light is limited to c, just like the E field in the wires. At 3e8 m/s and a distance of 50cm to a memory bit, that would take 0.5/3e8 ~ 1.7ns minimum for a one-way trip. That takes over 3ns to get to memory and back (ignoring any switching delays).

          3ns makes the memory access time equivalent to about 333MHz.

          What light gives you is more *bandwidth*. That also means your CPU will not run any faster, but it should be able to access more memory at once. Multi-core/multi-thread processors like wh
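          A quick sketch of that round-trip arithmetic (Python; the distance and speed are the illustrative numbers from the comment above, not measurements):

          ```python
          # Propagation latency for a hypothetical 50 cm optical path to memory.
          C = 3e8          # speed of light in vacuum, m/s
          DISTANCE = 0.5   # metres, CPU to memory (illustrative)

          one_way_ns = DISTANCE / C * 1e9          # ~1.7 ns
          round_trip_ns = 2 * one_way_ns           # ~3.3 ns
          naive_rate_mhz = 1e3 / round_trip_ns     # if you waited out every round trip

          print(f"one way: {one_way_ns:.2f} ns, round trip: {round_trip_ns:.2f} ns")
          print(f"'one access per round trip' rate: ~{naive_rate_mhz:.0f} MHz")
          ```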

          • Latency != Frequency (Score:5, Informative)

            by Corpus_Callosum ( 617295 ) on Wednesday March 30, 2005 @02:56AM (#12086513) Homepage
            No you can't. Light is limited to c, just like the E field in the wires. At 3e8 m/s and a distance of 50cm to a memory bit, that would take 0.5/3e8 ~ 1.7ns minimum for a one-way trip. That takes over 3ns to get to memory and back (ignoring any switching delays). 3ns makes the memory access time equivalent to about 333MHz.
            I think you are confusing latency and frequency. There are serious problems with long wires and high frequency because of parasitic effects. Light eliminates these parasitic effects, enabling a much higher bus frequency (clock rate).

            It may take a few nanoseconds for the light to bounce around, but that light can be modulated at extremely high rates (that electrical wires cannot). Managing latency is a well understood problem, generally solved by using speculation, buffering, etc..

            The fact is, if these parts are running at 10ghz, you will have 10ghz connections between connected parts (with a few nanoseconds of latency, which is mostly irrelevant).

            What light gives you is more *bandwidth*. That also means your CPU will not run any faster, but it should be able to access more memory at once. Multi-core/multi-thread processors like what SUN is advertising would benefit a lot from this technology. Single thread processors like P4 will not see any benefit.
            Bandwidth is a measure of frequency and number of communication channels. This advancement does indeed provide more bandwidth, mostly because it can be clocked higher. All computer configurations could see substantial benefits because current electrical designs have highly limited bus speeds (it is not signal propagation that matters, but signal modulation speed "frequency").

            Anyway, current access times are now limited by the speed of light, so I guess it will not be getting too much faster.
            Again, signal propagation speed is mostly irrelevant. Signal modulation speed is what is important. Latency != Frequency.
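            One way to see the latency/frequency distinction is the bandwidth-delay product: how many bits are in flight on the link at once. A minimal sketch (the 10 GHz modulation rate is the article's figure; the latency is the rough estimate above, not a measured value):

            ```python
            # Bandwidth-delay product for one optical channel: a high modulation rate with
            # a few nanoseconds of propagation delay just means many bits are in flight.
            modulation_hz = 10e9      # per-channel signalling rate (from the article)
            round_trip_s = 3.3e-9     # propagation latency, rough estimate from above

            bits_in_flight = modulation_hz * round_trip_s
            print(f"~{bits_in_flight:.0f} bits in flight per channel over one round trip")

            # Throughput stays at 10 Gbit/s per channel as long as requests are pipelined
            # (speculation, buffering), even though each reply arrives a few ns late.
            ```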
            • It may take a few nanoseconds for the light to bounce around, but that light can be modulated at extremely high rates (that electrical wires cannot). Managing latency is a well understood problem, generally solved by using speculation, buffering, etc..

              The extra bandwidth does indeed allow more in-flight memory accesses, but there are many problems involved with this.

              First of all, there are implicit problems in the memory-level parallelism in applications. How many memory accesses are independent of eac

                At the end of the day, this provides a mechanism for the bus to operate at the same speed as the core in the CPU. When you have cache misses, you will still get a latency penalty (potentially a few cycles in this case vs. 20 or more with traditional techniques). Because the CPU doesn't have to wait for the data to pipe in at 1/6 its clock, the downtime is greatly reduced.

                To see the effects of greater bus speed, just look at a G5 vs a G4. The difference would be much more pronounced when you move to th
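                A rough sketch of how the bus clock shows up as core stall cycles on a miss (all numbers are illustrative assumptions, not figures for any real chip):

                ```python
                # Core stall cycles for one memory round trip, comparing a bus clocked at
                # core speed with one at 1/6 of core speed. Purely illustrative.
                CORE_GHZ = 3.0
                ROUND_TRIP_NS = 3.3       # propagation estimate used earlier in the thread
                WORDS_PER_LINE = 8        # bus transfers needed to fill a cache line

                def stall_cycles(bus_ghz: float) -> float:
                    transfer_ns = WORDS_PER_LINE / bus_ghz   # one word per bus cycle
                    return (ROUND_TRIP_NS + transfer_ns) * CORE_GHZ

                print(f"bus at core speed: ~{stall_cycles(CORE_GHZ):.0f} core cycles")
                print(f"bus at core/6:     ~{stall_cycles(CORE_GHZ / 6):.0f} core cycles")
                ```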
    • Re:Uh, okay (Score:2, Informative)

      by PxM ( 855264 )
      These aren't processors. They're more like modems. They convert the optical signal into electrons and let a normal electronic CMOS CPU process the data. The article is about the fact that this modulator can be done on the same chip as the processor and is ten times as fast as the next best thing.

    • Are they going to be x86-compatible? MIPS compatible? What?

      Since it's Freescale (née Motorola) that's mentioned in the article, any general-purpose CPU appearing from this effort will probably be ARM-based. However, the most likely application will be specialized processors for multi-gigabit network routers.

      Schwab

      • Re:Uh, okay (Score:3, Informative)

        by BitchKapoor ( 732880 )
        Since it's Freescale (née Motorola) that's mentioned in the article, any general-purpose CPU appearing from this effort will probably be ARM-based. However, the most likely application will be specialized processors for multi-gigabit network routers.

        Really? I wasn't aware that Freescale made ARM processors, too. After all, when it comes to microprocessors, they're primarily known for 68k and PowerPC.

    • Let's try...

      Routers -- Freescale does a significant amount of business [disclaimer]I believe[/disclaimer] supplying embedded PPC processors to the communications industry. Stuff like this helps to make optical fiber connection cheaper and faster. The Freescale involvement means that the photonic-to-electronic chips get a cost-effective integration with the electronic logic in the router.

      Maybe eventually, we'll see direct fiber communications connecting to our home PCs, at commodity prices, through speci
    • Indeed. I don't think they offer a shred of evidence to back up their 10GHz claim either. Which is sad, because if they've got processes 10x faster than Intel, it's saying something about Intel's R&D into this particular sector. Of course, it says nothing at all if they have no proof.

      Truthfully, I'd love to see optical processor technology, but I don't think we're ready. But if this company can provide, then I will consume :-)
    • Don't you mean buzznumber?
  • by Anonymous Coward
    to use light for your processor.

    Imagine Intel chips with this technology. Now instead of heating your whole room - you have an extremely bright night light. "Sleep" or "hibernate" will have a new meaning when you use it to turn off the main light in your room.
    • And this research will lead to FTL travel because the damned photons won't go faster than c!!
    • by Anonymous Coward
      Anytime you're using light for any purpose-- say, to light a parking lot, or to communicate between links in a processor-- and the light is visible to anyone or anything not explicitly served by that purpose, that's bad. If you're lighting a parking lot, and your lights are visible to planes passing overhead, that's bad, because that means you're paying to light the night sky for no reason. Similarly the chip manufacturers are going to want to make sure the photons stay inside of the microchip; if any is vi
    • hmmm...that would suck for those of us who like hacking in the dark...
  • Not a "Processor" (Score:5, Insightful)

    by TopSpin ( 753 ) * on Tuesday March 29, 2005 @09:15PM (#12084345) Journal
    It's high bandwidth (10Gbit/sec) small scale (130nm) modulation from CMOS to optical. This is not "processing" in the sense of optical logic.
    • So it's a processor bus? Makes more sense, but I'd really like to see the proof from this little startup company.
    • It is a light, portable screen usually circular and supported on a short-term scale, but ultimately, they're just masking the real problem, which can only be solved by the level of thinking that created them.

      The average girl would rather have beauty than brains because she knows that the nature in which ramanujan was referred to as "indian math guy" in the sense of optical logic.
  • 10Ghz? (Score:5, Interesting)

    by DrKyle ( 818035 ) on Tuesday March 29, 2005 @09:17PM (#12084359)
    At first, I thought "Wow! That's like blazing fast speed!" And then I thought "Well, that sure beats having a couple PS3 cell processors hooked up together." And then I read the article... and was promptly disappointed. The 10GHz speed is how fast it can turn electrons into photons, but the chip is still primarily electron-based, so what is the real performance gain? They don't tell you, because there probably isn't any yet.
    • Re:10Ghz? (Score:2, Informative)

      by Anonymous Coward
      Blehh.... you're not making sense.

      The performance gain is up to the chip designers, who will design a chip as fast as they can. That wasn't the problem they were trying to solve.

      Rather, the problem this addresses is off-chip interconnect. Today chips communicate with the rest of the system via solder joints; this provides for very limited bandwidths, far far less than 10 Giga items per second. This is mostly because process improvements that have allowed us to shrink our chips have not allowed us to sh
    • by tubbtubb ( 781286 ) on Tuesday March 29, 2005 @09:31PM (#12084500)
      Actually this is less disappointing than I originally thought --
      A major problem as CMOS processes get smaller and smaller is wires and wiring. It's really bad at 90nm, and it looks like it's going to be way worse at 65nm.
      Even if optical interconnects can just be used for long intra-unit busses (think L1 cache to fetch/decode unit, and there to integer unit and float unit, etc) we could see great performance gains.
      Something like when the upper metal layers in CMOS went to copper a few years ago.
      • Your suspicions are correct:

        65nm wiring is really slow. What we're seeing from TSMC is still bouncing around a bit for 65nm low power, but wires are slower than we were hoping.

        It looks as though people won't switch to 65nm because 65nm produces faster devices, instead people will go to 65nm for cost and capacity.

    • The biggest advantage I could see coming from this would be an external or memory bus using an optical interconnect. Since light waves can be pushed a lot closer together than electron channels, without interfering with each other in the process, you could build quite the large memory bus. Imagine running 16 pipes of this technology, even clocked at 5GHz. That's an 80GHz memory bus.
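      Reading "16 pipes... clocked at 5GHz" as 16 parallel optical channels, the aggregate figure is additive bandwidth rather than a single 80GHz clock; a tiny sketch of that arithmetic (hypothetical numbers from the comment above):

      ```python
      # Aggregate bandwidth of parallel optical channels, one bit per channel per
      # clock. Hypothetical numbers, not vendor specifications.
      channels = 16
      clock_hz = 5e9

      aggregate_gbit_s = channels * clock_hz / 1e9
      print(f"{aggregate_gbit_s:.0f} Gbit/s aggregate "
            f"(each channel still clocked at {clock_hz/1e9:.0f} GHz)")
      ```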
      • One of the interesting things about light is that it is non-interfering. In other words, it should be possible to lay down multiple signals in a single fiber instead of having to bundle multiple fibers together. There have been many advancements in the communications industry around this topic that could be relevant here.

        By loosening the restrictions imposed by the PCB, it should also be possible to have much more ingenious designs. What this tech could do for SMP alone is staggering in its implication
  • Article Text (Score:5, Informative)

    by Anonymous Coward on Tuesday March 29, 2005 @09:17PM (#12084360)

    minus the omniture spyware tracking and massive banners
    ________

    Startup Luxtera has announced its plans to enter the CMOS photonics market, anticipating the day when microprocessors will transmit information via light, not electrons.

    The company claims that its optical modulator for transforming electrons into photons runs at 10-GHz, ten times the speed of an optical modulator Intel Corp. researchers began talking about last year. Beginning in mid-2006, Luxtera hopes to enter production of photonic devices using standard CMOS manufacturing processes.

    Although the majority of chip-to-chip communications are conducted using copper-based interconnects, researchers are already looking toward the day when the balance shifts toward optical transmissions, initially for chip-to-chip interfaces between microprocessors, or between a microprocessor and memory device. Fibre optics are a standard component of modern telecommunication infrastructures, and interfaces such as Fibre Channel also use optical fibre interconnects to link up devices.

    Although light slows down by some degree when transmitted through an optical medium, shifting to optical-based components is still too expensive than relying solely on copper, even when factoring in the additional power, heat, and crosstalk issues.

    "The problem here that we can solve is a matter of bandwidth," said Gabriele Sartori, Luxtera's vice president of marketing and a former advocate for the HyperTransport protocol developed by Advanced Micro Devices.

    Part of the relatively high cost of photonics comes from the fact that converting electrons to photons requires an intermediary device, such as the modulator Luxtera is designing. Today, that device exists as a separate module. Intel, Luxtera, and others are trying to integrate the optical waveguides within standard CMOS processes, so that they can be controlled by the standard voltage swings of a microprocessor.

    However, doing so requires that the optical vendor have close ties to a microprocessor manufacturer. At Intel, that's no problem. Luxtera, on the other hand, has worked closely with Freescale Semiconductor to develop the technology. Finding a partner like Freescale is "necessary," Sartori said. "You must walk before you can run."

    Freescale has taped out several engineering samples of the optical technology, including a chip with the optical interface built into one side. The sample chips use a 130-nm SOI process, the same technology used to fabricate the G4 microprocessor. Part of Luxtera's job has been to develop silicon libraries, the files used to design the photonic chips in the same way other libraries serve as the blueprint for making more conventional semiconductors.

    The 32-employee startup originally received $7 million in funding from Sevin Rosen Funds and August Capital in 2001, followed by an additional $15 million from New Enterprise Associates in 2003. Eli Yablonovitch, a professor at UCLA who pioneered photonic crystals, sits on the company's board, while Arno Penzias, who won the 1978 Nobel Prize for the discovery of the cosmic microwave background, serves in an advisory role. Other board members include Andy Rappaport of August Capital, which funded Transmeta, among others.
  • IBM, better info (Score:4, Interesting)

    by tubbtubb ( 781286 ) on Tuesday March 29, 2005 @09:21PM (#12084412)

    IBM is working in this area also [ibm.com] . . .

    Will be interesting to see a PowerPC with the guts of the VMX unit running at 10Ghz . . .
  • But.... (Score:1, Offtopic)

    by bob_herzog ( 788938 )
    Does it run Linux?

    Also, while I'm here...

    Imagine a Beowulf.....
    In Soviet Russia photons....
    3. PROFIT!!
    Only old people use photons....
  • by Anonymous Coward on Tuesday March 29, 2005 @09:30PM (#12084497)
    http://www.forbes.com/forbes/2005/0411/068.html

    Interestingly, the 10Ghz figure comes from a measurement made by a researcher at Sun Labs, which has been working with Luxtera for more than a year now. The article also talks about what other companies such as Intel and IBM are up to.
  • by Nova Express ( 100383 ) <lawrenceperson.gmail@com> on Tuesday March 29, 2005 @09:30PM (#12084498) Homepage Journal
    For those not up on the twists and turns of the semiconductor industry, be aware that Freescale is Motorola's semiconductor division spinoff. They were responsible (with IBM) for the PowerPC, and developed the AltiVec (aka "Velocity Engine") vector processing technology used in current Apple PowerMacs. They still do a lot of microcontrollers for embedded devices.

    Just thought I'd clear up that potential confusion...

      For those not up on the twists and turns of the semiconductor industry, be aware that Freescale is Motorola's semiconductor division spinoff.

      So Freescale != Motorola. :-) Actually, I used to work at Motorola, and owned some stock. I recently heard about the spinoff by receiving some shares of Freescale. Go Freescale!!

      1. Work at a company where innovation moves at the speed of a glacier.
      2. Purchase stock while working there, watch it decrease in value by 75%.
      3. Receive stock in spinoff comp

  • by karvind ( 833059 ) <karvind.gmail@com> on Tuesday March 29, 2005 @09:41PM (#12084574) Journal
    The summary is misleading (as pointed out by other readers), as this is more of an optical interconnect technology.

    Other groups working on optical interconnects: (incomplete list)

    Heriot Watt [hw.ac.uk]

    Cornell University [cornell.edu]

    IBM Zurich [ibm.com]

    Delft [tudelft.nl]

    UIUC [uiuc.edu]

    Intel [intel.com]

    Stanford [stanford.edu]

  • This will be like Crusoe. Solid technology basis. Involves significant tradeoffs. Market fails to materialize. Technology goes back on the shelf where it can be "discovered" again in 5 or 10 years.
  • by erroneus ( 253617 ) on Tuesday March 29, 2005 @09:46PM (#12084605) Homepage
    Maybe someone smarter than I can explain how it all works.

    Okay, I am down with light based switching mechanisms and all that. But in my mind, I'm wondering how registers are "storing" information. Light, to my knowledge, cannot be effectively stored. I recall from a couple of years ago someone attempting to make progress in that area but I don't recall hearing that they were successful.

    I guess it's time for me to go back to school on this new technology 'cause I *just* don't understand it. Anyone who does understand it care to spit out a few paragraphs to summarize how it works assuming the reader already understands the basics of digital electronics?
    • by katharsis83 ( 581371 ) on Wednesday March 30, 2005 @12:23AM (#12085784)
      "But in my mind, I'm wondering how registers are "storing" information. Light, to my knowledge, cannot be effectively stored."

      That's not an issue here, from what I can tell. The 10 GHz number is the rate for modulating electrical signals onto light. All the actual storage and processing will be done just as before; you still have your flip-flops storing the bits. The only difference here is that instead of copper interconnects, we use light pulses. The benefit of this new technology is that it can be done with normal CMOS fabrication techniques.

      Anyone with more experience with this stuff is free to correct/clarify.
      • Yup, this is only for communication between components (or processors). BTW, for an optical processor, I guess you'd probably want to think of it more like function composition (e.g., the register file could be threaded between transformations instead of actually being stored).
  • by G4from128k ( 686170 ) on Tuesday March 29, 2005 @09:48PM (#12084620)
    At 10 GHz and an index of refraction of 1.5, each 2 centimeters of light pipe adds 1 clock cycle of latency to the system (2 clock cycles to the round trip). Put an optically-connected device a foot (30 cm) from the processor and you have 15 clock cycles of latency (or a 30 clock cycle response time) just due to the fiber, let alone any delay in the devices at either end of the fiber-optic pipe.

    It's always interesting to see what happens when the relative speeds of processor, memory, and interconnects change.
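    The per-centimetre figure falls out directly; a small sketch of the same arithmetic (constants as assumed in the comment above, purely illustrative):

    ```python
    # Distance light covers in the waveguide during one 10 GHz clock cycle.
    C = 3e8               # speed of light in vacuum, m/s
    N = 1.5               # assumed index of refraction of the light pipe
    CLOCK_HZ = 10e9

    cm_per_cycle = (C / N) / CLOCK_HZ * 100       # ~2 cm per cycle
    cycles_for_30cm = 30 / cm_per_cycle           # ~15 cycles one way

    print(f"~{cm_per_cycle:.0f} cm of light pipe per clock cycle")
    print(f"30 cm run: ~{cycles_for_30cm:.0f} cycles one way, "
          f"~{2 * cycles_for_30cm:.0f} round trip")
    ```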
  • Perfect! (Score:5, Funny)

    by RobertKozak ( 613503 ) on Tuesday March 29, 2005 @10:02PM (#12084727) Homepage

    Now when I find a bug in my code I can just reconfigure the photonic matrix and reverse the polarity of the power coupling.

    And if that doesn't work I'll try modulating the field harmonics.

    This can really save me in a tight situation.

    Robert
  • by Esion Modnar ( 632431 ) on Tuesday March 29, 2005 @10:15PM (#12084832)
    Where's the kaboom? There was supposed to be an earth-shattering kaboom! --Marvin the Martian
  • what are they going to call this baby? Goethe?

  • Slashdot mislead (Score:5, Informative)

    by photon317 ( 208409 ) on Wednesday March 30, 2005 @01:56AM (#12086206)

    If you read the article carefully (which is laced with marketing hype and was obviously written by someone only passingly familiar with the technologies involved), you will see that nobody's promising optical CPUs in 2006. In anticipation of future optical chips and other technologies, Intel has begun developing one of the stepping stones toward this technological era, which is an optical/electrical gateway of sorts that can be built on a standard electrical chip to allow it to interface with optical components. Think of a modern CPU with some low-level optical/electrical interface on the edge of it, so that a row of optical "pins" can stick out one side in addition to the normal electrical pins on the bottom.

    This little startup company is working on the same thing, and hopes to have it out soon. Their marketing article is trying to build hype so they can get more cash. Nobody will be selling anyone an all-optical cpu in 2006 (or 2007, or 2008, etc).
  • The tech support calls for this will trump all:

    Tech Answer 1: Data loss? Ma'am, it says clearly in the instructions that this device is not to be used near any singularity of any kind. It's been known to warp and bend results.

    Tech Answer 2: Sir, the machine is acting slowly? Are you by chance going 299,792,458 m/s? That drops performance to 286 levels. What's that? You're running BSD? So why are you complaining? Plenty of horsepower for that.
  • Just a modulator (Score:5, Interesting)

    by Laaserboy ( 823319 ) on Wednesday March 30, 2005 @02:16AM (#12086337)
    From the Article:

    The company claims that its optical modulator for transforming electrons into photons runs at 10-GHz

    I may not have a Nobel Prize, but I do have a Ph.D. in physics. Electrons do not transform into photons. They may produce photons, but not turn into them.

    I see these articles that claim the creation of optical processors. But read the article, and all the researchers have to do is add a silicon processor and BOOM, we have an optical processor. It's not that easy.

    I remember the researcher who created an optical computer that was the size of a room. Why is this? Electrons are small. They bend around corners. They stay put. They move when you want them to. Photons do not bend well around small corners, do not support CMOS-like circuits and generally fail at most tasks of that versatile, tiny doer of great deeds, the electron.

    As usual, it's just an optical modulator. Boring old modulator.
    • PhD or not, you don't need the attitude. It's fairly obvious what they meant by "transforming electrons into photons." Consider a modem. It would be perfectly reasonable to say a modem "transforms" digital signals into analog signals. The digital signal doesn't literally "turn into" the analog signal, but it's still a good high-level description. Likewise, "transforming electrons into photons" is a solid description of what their modulator does.

      And while this tech can't be used to create an optical pr
  • Maybe now I'll be able to crank all the settings up in Doom 3.
  • by Alioth ( 221270 ) <no@spam> on Wednesday March 30, 2005 @05:00AM (#12086931) Journal
    From TFA:

    Although light slows down by some degree when transmitted through an optical medium, shifting to optical-based components is still too expensive than relying solely on copper, even when factoring in the additional power, heat, and crosstalk issues.

    Is it just me or is this a really badly constructed sentence? It changes subject halfway through (from the speed of light in optical medium to the cost of copper).
  • As far as I can tell from the mediocre article, all they've done is gotten a tiny LED to flash at a rate of 10 giga flashes per second. I guess you could call that a "modulator". But it's just a fast LED. And they haven't explained how it's going to be an economical and compact way to shuttle data around.
  • Hmmm...I wonder how "big" one of these interconnects is. Currently chips have pins that are visible to the human eye, and even solderable by the human hand. If it were possible to have optical "pinouts" that were really small, you could decrease the size of a chip package/circuit board. Of course, I suppose there would still have to be pins for power, but ya' know.

    And since an optical interconnect wouldn't need solder, these chips would need a completely different process for connection to a circuit boa
  • Everybody calls them "electrical" computers. So once we move on to light, will people change the name? I always thought it was a stupid distinction, personally. I mean, you wouldn't call mechanical engineering "metal engineering".
