Researchers Develop Photonic Processors

TheCybernator writes to mention a New Scientist story about scientists who are developing a light-based processor by actually storing and delaying photons. These 'optical buffers' may one day be used to make super-fast microchips based on light instead of electrons. From the article: "A decade from now ... there [may] be not seven cores but hundreds on a chip ... Connecting these cores using light could solve this problem. Until now, the lack of optical buffers has been a key roadblock to these kinds of light connections. The way information is transmitted means that buffers must hold packets of data while a router decides where they are to be sent. Buffers are also needed to delay optical pulses - so they do not collide at switching points - and to synchronise streams of data coming from different places."
  • If the ionization-rate is constant for all photonic entities, they can really bust some heads... in an optical sense, of course.
  • I'd like to know how, and for what kind of computer science, 100 CPU cores would be useful.
    • Re: (Score:2, Interesting)

      by yaminb ( 998189 )
      I look at my process list. There are a hell of a lot of processes and threads. Even without any special programming for any given application, you still get the very handy benefit of less context switching. Maybe even 1 process per core :P I'll take that any day.

      • I've never seen circumstances where context switches significantly harm performance. Besides, if there were, there would need to be an automatic way to lock a process to a core so it doesn't get spread about, because shuffling processes between cores is worse than a context switch: a context switch doesn't necessarily mean a full cache flush, while moving a process to a core it has not run on before effectively does, since it always has to re-load its data and instructions.
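As an aside on the pinning idea above, a minimal sketch, assuming Linux and Python, of locking the current process to one core with os.sched_setaffinity (the core number is arbitrary):

import os

def pin_to_core(core_id):
    # Restrict the calling process (pid 0 means "self") to a single core,
    # so the scheduler stops migrating it and its cached data stays warm.
    os.sched_setaffinity(0, {core_id})

if __name__ == "__main__":
    pin_to_core(2)    # core id 2 is only an example
    print("allowed cores:", os.sched_getaffinity(0))

In practice most schedulers already try to keep a process on the core it last ran on, so explicit pinning is mainly worth it for latency-sensitive work.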
    • by maynard ( 3337 )
      Cellular Automata
    • Re: (Score:2, Insightful)

      by xirtap ( 955611 )
      Isn't that like saying "640KB ought to be enough for anyone"?
      • Re: (Score:3, Insightful)

        by MSTCrow5429 ( 642744 )
        Aside from the mythical status of that statement, no. The logistical difficulties of threading across that many cores and using them efficiently, especially in personal computing, are extreme. I have no doubt that at some point this will be done, but a number of breakthroughs in computer science and a paradigm shift in programming techniques are required first.
    • With 100 cores, one would assume that they would develop several for more specialized tasks.

      Say a bunch that are basically GPUs, some that deal with physics well, some that do great with protein folding (BPU?)

      • by piojo ( 995934 )

        First off, 100 is an arbitrary number. A million could be possible, using photons to carry signals.

        We may not have many uses yet. This is a problem for everybody who tries to predict the future: we do not yet know what we will do with newer technology. When we first invented computers, nobody could imagine word processors or GUI applications, much less digital images or video. The inventor of steam engines probably didn't think they would lead to airplanes. Once we have the technology, people will think of things to do with it.

    • Re: (Score:2, Informative)

      by DamonHD ( 794830 )
      Hi,

      1) You can have 32--64 "cores" (depending on how you define it) on a Si chip from Sun now. I'm using a 24-thread T1000 now and it's great.

      2) I assume this was a troll post, since there are many many many "embarrassingly parallel" scientific/financial/Web problems...

      Rgds

      Damon
    • by Anonymous Coward
      silly question, silly answer.
    • Re:100 Cores? (Score:4, Informative)

      by mbrx ( 525713 ) on Monday December 25, 2006 @01:01PM (#17360344)
      The important distinguision to make when comparing the benefits of going massively parallel is that it becomes possible to solve NEW problems in real time with these processors. E.g., we don't need to run Word 100 times faster, but we can get, say, games and scientific simulations (two sides of the same coin) that use detailed physics engines and real-time raytracing. Raytracing can be almost naively parallelised with up to as many processors as screen pixels. I remember using a computer with 65536 processors called the MasPar, built in the early 90's. Our main use for it was image processing, which could also easily be parallelized. It just took a bit of a shift of perspective to learn how to program it, since it was SIMD (Single Instruction Multiple Data), but boy was it fast for its time.

      Physics is a bit more difficult, but there are techniques for parallelization there too, exploiting the fact that object interactions form islands of connected parts. E.g., when simulating your hair in a realistic way, don't test for interactions with objects in a distant part of the game. Physics engines are just starting to be used for these purposes, but they can easily soak up as much CPU power as you want to give them. Simulating, say, the clothes on game characters, or dynamic subdivision of parts as they break or bend under forces (do you want realistic dents in your car after hitting that pedestrian?), would require an order of magnitude more CPU power than what we can do in real-time physics today.

      So, to summarize: yes, we can always find new tricks for even more computing power. Give me a cluster of a million processors and I would still complain that it's too slow for what I want to do.
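To make the "one processor per pixel" point concrete, here is a minimal sketch (not from the article) of an embarrassingly parallel per-pixel loop in Python; trace_pixel is only a stand-in for a real ray caster:

from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 320, 240

def trace_pixel(xy):
    # Stand-in shading: each pixel depends only on its own coordinates,
    # so there is no shared state between tasks.
    x, y = xy
    return (x ^ y) & 0xFF

if __name__ == "__main__":
    coords = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with ProcessPoolExecutor() as pool:
        image = list(pool.map(trace_pixel, coords, chunksize=WIDTH))
    print(len(image), "pixels computed independently")

Because every task is independent, the same code scales from a handful of cores up to, in principle, one task per pixel.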
      • E.g., we don't need to run Word 100 times faster

        well, 1 core to handle your typing, 1 core to handle the spellcheck, 1 core to handle the grammar check, etc etc, and Word would go faster. However, they'd all be accessing the same memory and would probably bottleneck there instead - so yes, you're quite right, it'd still be too slow.

        I think 2 cores is about right for desktop machines - who needs more than that, given the apps we currently have (niche or specialist apps are not considered here as they're ... well
        • However, they'd all be accessing the same memory and would probably bottleneck there instead

          We may start to see RAM chips having more and more pins (they may get thicker and thicker - end up looking like many RAM chips glued together?), and perhaps external dongles from the CPU (or a socket located on the motherboard beside the CPU) to memory. Compared to the complexity of these photonic processors and massively parallel cores, the memory bottleneck issue is quite trivial. I'm sure back in ye olden days, they ...

      • The important distinguision


        The *word* is "distinguishment".
      • by Prune ( 557140 )
        >> Raytracing can be almost naively parallelised with up to as many processors as screen pixels.

        What the hell? You cannot produce high quality images with a single ray per pixel. Even with the best importance sampling, you still need on the order of a dozen rays per pixel on average.
    • Never heard of Windows Vista?
    • Re: (Score:3, Informative)

      Well, it may not be CS, but you'll be able to run Microsoft Vista on it...

      OK. Now that the (semi-) joke is out of the way, a 100-core processor would have a ton of uses - large-scale Monte Carlo simulations (used in everything from AI, to biostatistics, to computational chemistry); verification of logic circuits, microcode, and tests for both; large-scale optimization problems; high-speed rendering for scientific visualization and entertainment purposes; and the list goes on. Oh yeah, if you had more cores ...

    • Well, much depends on what we use the 100 cores for anyhow. Most scientific and imaging/video applications run loops, where the code inside the loops, the number of nests, etc. is not very large, on the order of hundreds of lines. Inter- and intra-loop dependencies do not occur very often, so there is potential for huge parallelism in the code just by executing loop iterations in parallel, and we are limited only by the parallelism available in the machine we are using. There are also compiler code transformations ...
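The loop-dependence point is the crux, so here is a minimal illustration (an editorial sketch, not from the comment): the first loop's iterations are independent and could be handed to separate cores, while the second carries a dependence from one iteration to the next and cannot be split up so naively.

def independent_loop(a, b):
    # No iteration reads another iteration's result.
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = 2.0 * a[i] + b[i]
    return out

def loop_carried(a):
    # Prefix sum: iteration i needs iteration i-1's result.
    out = list(a)
    for i in range(1, len(a)):
        out[i] = out[i - 1] + a[i]
    return out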
    • trying to find the end of Pi?
    • The sort that needs more than 640K of RAM, I should think.

      No one needs more than a 6502 processor [wikipedia.org] and 32K of RAM; the last 20 years have been an exercise in self-indulgence. And MS Word: pah! Long live WordWise(tm)

  • Cool! I should still be alive then!
  • Does this mean Santa will finally have time with the family?
  • by __aaclcg7560 ( 824291 ) on Monday December 25, 2006 @12:28PM (#17360184)
    The world market for photonic computers will only be five or six.
  • Another technology that sounds cool, but we'll never see it because we'll have to wait 20 years for the patents to expire before it's put into practice.

    Whatever happened to Carbon Nanotubes?
  • ... a New Scientist story about scientists who are developing a light-based processor by actually storing and delaying photons.

    I'd be more impressed if they'd developed an optical processor that actually stored and speeded up photons.
  • Using light would let chips run at the speed of light! Or is that the speed of electricity? They both run at the same speed. What is the real benefit to using optical chips? Three dimensional optical storage I can see. Long distance cabling runs I can see. Transfers across tiny traces on a chip... not so clear to me. Especially considering the size increase that could be expected by moving to optics. Is it the same lack of attenuation seen in optical fiber at work on a small scale and making a noticeable difference when the effect is considered across billions or trillions of pulses? Will there be fewer heat problems when scaling the chips to higher speeds?
    • by imsabbel ( 611519 ) on Monday December 25, 2006 @01:37PM (#17360512)
      Speed of electricity != speed of light.
      Speed of light inside a metal != c.

      In copper lines like those on modern CPUs, the signal speed is about 30-35% of c.
      Photonic crystals and optical fibers, otoh, can have a refractive index that allows speeds near c for photons.
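A back-of-the-envelope illustration of those percentages (the 30 mm path, 4 GHz clock and 0.33c figure are assumptions for the example, not numbers from the article):

C = 3.0e8        # speed of light in vacuum, m/s
DIE = 0.03       # assumed signal path across a chip, m
CLOCK = 4.0e9    # assumed clock rate, Hz

for name, speed in [("copper trace (~0.33 c)", 0.33 * C),
                    ("optical guide (~c)", C)]:
    t = DIE / speed    # one-way propagation time, seconds
    print(f"{name}: {t * 1e9:.2f} ns = {t * CLOCK:.1f} clock cycles")

At these numbers the copper path costs roughly a clock cycle of flight time and the optical path well under half a cycle, which is the scale of advantage being argued about here.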
      • There is no such thing as "speed of light inside a metal", because, you know, metals are pretty much non-transparent, as in, light does not propagate through them! If in doubt, find a metal object and try to see through it! :)

        The "speed of electricity" that the GP was referring to was a cute (if not entirely scientific) way to trick the reader into thinking that "speed of light" and "speed of EM wave" in a given medium are somehow different -- nope, they are not.

        Speed of E/M wave in SiO2 insulator between two sides of c
      • by zCyl ( 14362 )

        In copper lines like those on modern CPUs, the signal speed is about 30-35% of c.
        Photonic crystals and optical fibers, otoh, can have a refractive index that allows speeds near c for photons.

        Of course, this is only an advantage when the photonic processor components become the same size as the smallest modern electronic components and with equally fast switching times. If the components are three or more times larger, or have significantly slower switching times, then there is no gain.

    • Re: (Score:3, Informative)

      by rebelcool ( 247749 )
      electricity moves at about a third of the speed of light, and is now one of the big bottlenecks in high-performance computing.

      I.e., one of the reasons the cache on a processor is so much faster than going out to RAM is that it is physically nearby. Processors today run fast enough that they idle for several cycles just waiting for RAM to return data through the winding conductor path.
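To put that in rough numbers (ballpark latencies assumed for illustration, not measurements from the thread):

CLOCK_HZ = 3.0e9    # assumed 3 GHz core

LATENCY_NS = {"L1 cache": 1.0, "L2 cache": 4.0, "main RAM": 60.0}

for level, ns in LATENCY_NS.items():
    cycles = ns * 1e-9 * CLOCK_HZ
    print(f"{level}: {ns:.0f} ns -> ~{cycles:.0f} cycles spent waiting")

At 3 GHz a 60 ns trip to RAM is on the order of 180 idle cycles, which is why on-die caches matter so much.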
      • "electricity is about a third the speed of light and is now one of the big bottlenecks in high performance computing"

        Under practical circumstances perhaps but according to physics that is false. Individual electrons move much slower than the speed of light, but changes in the electrical state that take place on one end of the wire are transferred to the other end of the wire at the speed of light.
    • by hab136 ( 30884 )
      What is the real benefit to using optical chips?

      If nothing else, they'll run cooler. Heat is one of the main problems with designing better chips.
    • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Monday December 25, 2006 @01:56PM (#17360644) Homepage
      Using light would let chips run at the speed of light! Or is that the speed of electricity? They both run at the same speed. What is the real benefit to using optical chips? Three dimensional optical storage I can see. Long distance cabling runs I can see. Transfers across tiny traces on a chip... not so clear to me. Especially considering the size increase that could be expected by moving to optics. Is it the same lack of attenuation seen in optical fiber at work on a small scale and making a noticeable difference when the effect is considered across billions or trillions of pulses? Will there be fewer heat problems when scaling the chips to higher speeds?


      We are starting to get to the point where the capacitance of the tiny little wires is a genuine concern, and crosstalk between them is also significant. Also, the amount of space taken up by wiring is annoying. You can use a single waveguide carrying several frequencies of light to replace several wires and solve all those problems at once. At least, in theory. In practice, it's really hard to build. But it'll be pretty sweet when we get it all sorted out.
      • Except that these tiny wires are a fraction of the size that optical waveguides would have to be. Optical chips are going to need to be quite a bit larger to do the same work as existing chips. Light might be faster, but the engineering issues involved in making an optical chip worth a damn are quite large. It's like every time we hear of X technology: it's not going to win anytime soon, because the existing tech usually has twenty or more years of hard work behind it.
  • Already out (Score:2, Informative)

    by dl748 ( 570930 )
    I guess the author didn't know 100+ core chips are already out, http://www.rapportincorporated.com/ [rapportincorporated.com], with a 256-core chip already for sale. They are already coming out with a 1024-core chip. In fact, IBM has already entered a partnership with them to create a multi-chip part: a PowerPC core plus 1024 cores, for a 1025-core chip.
    • Yeah, because the whole point of this article was the number of cores, nothing to do with oh say, PHOTONIC processing of signals or anything like that.

      To your comment I say: Ho hum.

  • by rollingcalf ( 605357 ) on Monday December 25, 2006 @02:25PM (#17360794)
    Hopefully using light instead of electrons would cut down on the amount of heat that is dissipated. Otherwise, a 256-core processor could serve double duty as a furnace for a 3000 sq foot home in winter.
    • One immediate application would be in high-speed (> 100 terabit/sec) routers. Right now the transport is photonic, and the switching can be done using MEMS. But because of the lack of buffers for photons, the classification has to be done electronically.

      • ..., unlike electrons. There goes your dream of an all-optical transparent switch. You can still control light with electrical signals, or reflect it with a MEMS micromirror, and I guess you can achieve total bandwidth in 100 terabit range right now -- what is problematic (especially with MEMS approach) is packets becoming too long: terabit with 1ms MEMS switch time -> gigabit packet.

        Anyway, someone who really needs small-packet switching at fiber speed in the 100 terabits/sec range might as well go with superconducting ...
  • Functional programming is ideal for the kind of CPUs the article describes. 100s of computing tasks executing in parallel? a dream come true...
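A minimal sketch of that appeal (invented function names, Python standing in for a functional language): because the two sub-expressions below share no mutable state, they can be evaluated on separate cores and combined afterwards without locks.

from concurrent.futures import ProcessPoolExecutor

def expensive_a(n):
    # Pure: the result depends only on the argument.
    return sum(i * i for i in range(n))

def expensive_b(n):
    return sum(3 * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        fa = pool.submit(expensive_a, 1_000_000)    # independent sub-expression
        fb = pool.submit(expensive_b, 1_000_000)    # independent sub-expression
        print(fa.result() + fb.result())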
  • Doctor: "Arm the photonic cannon!"
  • So the OCPN research group here [ucsb.edu] has already gotten our all-optical packet routing to work. All-optical in the sense that the signal is never converted out of the optical domain. The switching signals are still electronic, but an integral part of the system is the packet delay (so the signal is delayed while the switches are set).

    We at first literally used strands of fiber to delay the signal (so a non-variable delay); now we're using the same fiber delay, but between the multiple strands of fiber are the typical 2x2 optical switches ...

