IBM Discovery May Lead To Exascale Supercomputers

alphadogg writes "IBM researchers have made a breakthrough in using pulses of light to accelerate data transfer between chips, something they say could boost the performance of supercomputers by more than a thousand times. The new technology, called CMOS Integrated Silicon Nanophotonics, integrates electrical and optical modules on a single piece of silicon, allowing electrical signals created at the transistor level to be converted into pulses of light that allow chips to communicate at faster speeds, said Will Green, silicon photonics research scientist at IBM. The technology could lead to massive advances in the power of supercomputers, according to IBM."

  • I hope they're also talking about standard GPU supercomputers (which are as we know pretty cheap at less than £100 for the low end!)
    • It seems the definition of a supercomputer keeps changing

      http://www.youtube.com/watch?v=gzxz3k2zQJI [youtube.com]

      • by RulerOf ( 975607 )
        Power Mac G4 as a weapon? It must have had on-chip 128 bit encryption. [wikipedia.org]
      • Sometime around 1975 I decided I wasn't going to play with low-powered computers anymore. I went looking for a job on a Cray or one of the big Control Data supercomputers.

        I never got the job, but I did get the compute power; it's in my pocket.

        • by Chrisq ( 894406 )

          Sometime around 1975 I decided I wasn't going to play with low-powered computers anymore. I went looking for a job on a Cray or one of the big Control Data supercomputers.

          I never got the job, but I did get the compute power; it's in my pocket.

          I keep getting spam emails promising me that

      • by hitmark ( 640295 )

        As long as it crunches massive numbers quickly, who cares how it's defined?

    • by JanneM ( 7445 ) on Wednesday December 01, 2010 @05:10PM (#34410960) Homepage

      GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not. If you need to frequently share data with other computing nodes (neural network simulations come to mind), then the communications latency between the card and the host node eats up much of the speed increase. And as much of the software you run on this kind of system is customized or one-off stuff, the added development time in using GPUs is a real factor in determining the relative value. If you gain two weeks of simulation time but spend an extra month on the programming, you're losing time, not gaining it. (A rough back-of-the-envelope sketch of this tradeoff follows at the end of this thread.)

      Think about it this way: GPUs are really the same thing as the specialized vector processors long used in supercomputing. And they have fallen in and out of favour over the years depending on the kind of problem you try to solve, the relative speed boost, and the cost and difficulty of using them. The GPU resource at the computing center is used much less than the general clusters themselves, indicating most users do not find it worth the extra time and trouble.

      It is a good idea, but it's not some magic bullet.

      • the big problem is once you solve an exascale problem, how do you write the answer to disk fast enough? Petabytes of data are hard to handle/visualize/analyze/copy around.
        • You use RAM for disk.

          • No, that won't work. I'm talking about persistent storage. What do you do when you want to view the results of timestep 45 in a 1000 timestep simulation, each timestep of which generates a few petabytes of data?
        • Since we're talking about discoveries that may lead to faster computers, these are the solutions it may use:
          * Texas A&M Research Brings Racetrack Memory a Bit Closer -> http://hardware.slashdot.org/story/10/12/01/0552254/Texas-AampM-Research-Brings-Racetrack-Memory-a-Bit-Closer [slashdot.org]
          * SanDisk, Nikon and Sony Develop 500MB/sec 2TB Flash Card -> http://hardware.slashdot.org/story/10/12/01/1322255/SanDisk-Nikon-and-Sony-Develop-500MBse [slashdot.org]

          • "Texas A&M Research Brings Racetrack Memory a Bit Closer"

            I groaned audibly at the terrible pun.

          • Those are indeed memory and disk improvements, but they are not keeping up with Moore's law the way CPU speeds are. So there is an increasing gap. I am one who believes that the gap must ultimately be closed, but right now, it is not being paid much attention because FLOPS are still king marketing-wise.
        • by dAzED1 ( 33635 )
          You use the CPU to employ insane compression algorithms and thus need to write a lot less to disk?
          • Insane, OK, but there is a limit to information compression. Believe me, all of these tricks are being used to their max and it's still a problem. There is no exponential increase in compression efficiency. If you take compression, faster disks and RAM, and improvements in networking technology into consideration, you still have a huge gap arising between CPUs and storage/memory in the fairly near future.
      • by afidel ( 530433 )
        The major problem with adoption is probably that most of the people running jobs on SC's are scientists not computer scientists. They use large piles of ancient, well tested libraries and only tweak small parts of the code that are specific to their problem. This means that most of those libraries will need to be ported to OpenCL and CUDA before adoption really picks up.
        • by Titoxd ( 1116095 )

          The major problem with adoption is probably that most of the people running jobs on SC's are scientists not computer scientists. They use large piles of ancient, well tested libraries and only tweak small parts of the code that are specific to their problem. This means that most of those libraries will need to be ported to OpenCL and CUDA before adoption really picks up.

          And we have a winner!

          Most people do not want to write their own eigensolvers, Poisson system solvers, matrix multiplication routines, and the like. They just want to use code that already does that, and that has been tested to do its job well. Code verification is important. So, the libraries that do so need to be ported before anyone in HPC switches to GPU architectures seriously. (Remember: this is the land where FORTRAN is still king...)

      • GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not.

        Where do you get the idea that GPUs have been oversold? Is the loudest mouth breather in the room representative of the general consensus? One vain, overreaching guy from 1960 who had spent too many hours hunched over a keyboard predicts human level AI within the decade, and the entire endeavour is tainted forever? All to alleviate one slow news day?

        2000 BC called, and wants their sampling procedure back.

        Sixteen lanes of PCIe 3.0 have an architectural bandwidth of 16 GB/s, and we're looking at about 4GB

      • by mikael ( 484 )

        You forget the cost of purchasing hardware or leasing supercomputing time. If you want your simulation running on a supercomputer at the nearest research unit, then you would have to get your algorithm or simulation written in OpenMP, parallel Fortran or whatever language was optimised for that system, and you would be tied to that hardware. Have a desktop system under your desk, and you have the freedom to use the system when you want.

        I'll agree, it is bad that software has to be wrapped around one piece o

        • by JanneM ( 7445 )

          This is what I meant about overselling this idea. A GPU is not an alternative to a cluster. The practical speedup you get is 2-3 times a normal dual-core cpu (a bit more for specific problems, less for others). A GPU is in other words very roughly equal to adding another 2-core cpu to your system, which makes the cost-benefit tradeoff clearer: cheaper node, but longer development time. If you need a cluster for your problem you're going to be using MPI no matter what*; one or two GPUs will make no material

          • by mikael ( 484 )

            That's my experience as well. I played around with TMS340x0 (TIGA) graphics accelerator cards back in the early 1990s. One half was the regular VGA standard of the time (256 colors). The other half was 2 Megabytes of memory, a TMS34020 32-bit graphics coprocessor with (if you were lucky) one or more TMS34082 floating point coprocessors. It was really only intended to accelerate paint programs like Tempra and the 2D rendering of AutoCAD. There were some 3D demos written for it like 'flysim', but tha
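
A rough back-of-the-envelope sketch of the tradeoff JanneM describes above, using the 16 GB/s PCIe 3.0 x16 figure quoted above in this thread. The workload numbers (10 s of CPU work, a 20x kernel speedup, 4 GB moved per exchange) are purely illustrative assumptions, not measurements:

```
# Sketch: how host<->device transfer time eats into a GPU's raw speedup.
# Only the 16 GB/s PCIe figure comes from the thread; everything else is assumed.

def effective_speedup(cpu_time_s, gpu_speedup, data_bytes, pcie_bw_bytes_per_s):
    """Overall speedup once one round of data movement is included."""
    gpu_compute_s = cpu_time_s / gpu_speedup
    transfer_s = data_bytes / pcie_bw_bytes_per_s
    return cpu_time_s / (gpu_compute_s + transfer_s)

# Lots of compute per transfer: 10 s of CPU work, 4 GB over a 16 GB/s link.
print(effective_speedup(10.0, 20.0, 4e9, 16e9))   # ~13x instead of the raw 20x

# Little compute per transfer: 0.1 s of CPU work for the same 4 GB exchange.
print(effective_speedup(0.1, 20.0, 4e9, 16e9))    # ~0.4x -- slower than the CPU
```

The exact numbers are made up; the point is the shape of the curve: the less computation done per byte shuffled across the bus, the less of the GPU's raw advantage survives, which is the argument about tightly coupled problems above.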

  • by martas ( 1439879 )
    Just yesterday I was talking with someone about how IBM have devolved into patent trolls that do no worthwhile research. Have I been proven wrong, or is this just vaporware? Anyone with the knowledge to do so, please enlighten me!
    • by Anonymous Coward

      From the article

      "In an exascale system, interconnects have to be able to push exabytes per second across the network,"

      "Newer supercomputers already use optical technology for chips to communicate, but mostly at the rack level and mostly over a single wavelength. IBM's breakthrough will enable optical communication simultaneously at multiple wavelengths, he said."

      The sad part:

      "IBM hopes to eventually use optics for on-chip communication between transistors as well. "There is a vision for the chip level, but

    • They do file more patents than anyone else, and have done a couple of nasty things against open source software, but they do do a lot of research, especially, it seems, at the low level on hardware, and they do a lot of good open source work (Eclipse, etc.). I'd be OK with them as long as I didn't work at places that are stuck using their software. I really wish people would stop buying it; it just encourages them.
    • Re:Huh (Score:5, Insightful)

      by Amorymeltzer ( 1213818 ) on Wednesday December 01, 2010 @05:33PM (#34411248)

      IBM may be patent-happy, but it's only reasonable to protect their "inventions". There's a huge difference between a patent troll who buys patents solely for litigation purposes, and IBM, who has been among the leading tech innovators for decades, defending their investments using the legal system. We may not love the current state of affairs for patents, but it's important to distinguish between bottom feeders out for a dirty buck and successful entities making use of their R&D department.

    • by geekoid ( 135745 )

      While not like it was, IBM does do a lot of R&D.

    • Any good they do in research is negated by the sheer amount of frustration, inefficiency, and anger they produce by inflicting Lotus Notes on millions of unfortunate customers.

  • by Anonymous Coward

    IBM's press release is http://www-03.ibm.com/press/us/en/pressrelease/33115.wss [ibm.com]

    One interesting bit is that the new IBM technology can be produced on the front-end of a standard CMOS manufacturing line and requires no new or special tooling. With this approach, silicon transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all

  • A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...

    • by Anonymous Coward

      Exascale is not a word. ... A whole dictionary full of perfectly good words and they have to make one up to mean "very large"...

      Finally someone who might agree with my proposal to replace the overcomplicated SI system with a much simpler 'big'/'small' size classification system! I'm still not sold on the need for adjectives, though I'm open to debate.

      • It's not worth debating. As long as the most common adjectives used are also sexual terms,* the public WILL have its adjectives.

        *and they are, as in "really f**king big!", "Muthaf**kin gynormous", and "awesomer than F**kzilla stompin the s**t outa Tokif**kio"

    • It's a portmanteau!

      Exa (obviously being a step above Peta which is above Tera which is above Giga and so on and so forth)

      and scale. Which is self-explanatory.

      The best part about English is silly quirks like portmanteaus. Don't try to be pedantic.

      • by splutty ( 43475 )

        I absolutely agree. Pedantic is also a beautificious portmanteau. Ped (from the Greek for foot), and antic. So you're doing antics with your foot. Foot in mouth!

    • by uburoy ( 1118383 )
      See http://en.wikipedia.org/wiki/Exascale_computing [wikipedia.org]. Not very well explained in the wiki, but I think it has a precise meaning. It means "reaching performance in excess of one exaflop". Also, http://en.wikipedia.org/wiki/Exa [wikipedia.org]. (A quick unit check follows at the end of this thread.)
    • by Hatta ( 162192 )

      Obviously what they meant was "Exo-scale" computing. Which is, of course, computing technology received from extraterrestrial reptilians.

    • Yes, exascale is a word. No, exascale does not mean "very large." It means "1000 times as big as petascale." You might want to check Google before you post. I know, this is /.
      • You might want to check Google before you post.

        I checked Webster’s. And if it means “exa-flop scale”, people should just say exa-flop scale.

    • by WrongSizeGlass ( 838941 ) on Wednesday December 01, 2010 @05:33PM (#34411246)

      Exascale is not a word

      A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...

      Exascale is a perfectly cromulent word.

    • Whatever, so long as it brings back the TURBO button I'm buying one!

      • by geekoid ( 135745 )

        haha, the turbo button. Man, that was awesome. As if anyone would not run in turbo.

        • As if anyone would not run in turbo.

          Um, that was actually exactly what anyone would need to do, sometimes. The turbo button existed to slow the computer down. Necessary for running some really-old games that implemented hardware-sensitive timers and ran much too fast on “fast” computers (such as the 16 MHz box I cut my teeth on).

          Is there a -1 for computing history fail?

        • As if anyone would not run in turbo.

          Not a tetris fan, I see?

    • And still nowhere near hellascale. Lame.
    • For some reason, when people invent new and innovative technologies that have never existed before, they feel this inexplicable need to come up with a new name to describe it. I for one don't see why they couldn't just call it a really-really-fast-scale computer, but alas the English language evolves along with the new developments, and we wind up with exascale [wikipedia.org], which - though it is not a word - has its own Wikipedia page for some unknown reason.
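
A quick unit check tying the two definitions given in this thread together; nothing here beyond SI prefixes and the thread's own figures:

```
# exa = 1e18, peta = 1e15, so an exaflop machine is 1000x a petaflop machine.
exaflop  = 1e18   # floating-point operations per second
petaflop = 1e15

print(exaflop / petaflop)   # 1000.0 -- "1000 times as big as petascale"
```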
  • I can play Crysis.
  • by chemicaldave ( 1776600 ) on Wednesday December 01, 2010 @05:03PM (#34410884)

    ...that the metal connections between individual components would not be fast enough.

    I only wonder how long before this sort of technology makes its way to the consumer market, if only for show. Of course I can't see a use for an exascale databus on the mobo anytime soon.

    • by olsmeister ( 1488789 ) on Wednesday December 01, 2010 @05:11PM (#34410972)
      It's obviously not the same, but in some ways it sounds similar to Intel's Light Peak. [wikipedia.org] I guess it is the next logical step once you get to that point.
      • It does. It also sounds similar to the idea of the Quantum Connected CPUs from Travis S. Taylor's book "The Quantum Connection" (from Baen; the first few chapters are available free online here: http://www.webscription.net/chapters/0743498968/0743498968.htm?blurb [webscription.net]). The idea of the Quantum Connected CPUs is built up in chapters 4, 5 and 6, which are included in the free sample.

        ( now all we need is the AI, and a few other things, and the Galaxy is our oyster :D )

    • Of course I can't see a use for an exascale databus on the mobo anytime soon.

      An exascale databus ought to be enough for everyone, at least for the five who comprise the world market for computers.

    • by John Whitley ( 6067 ) on Wednesday December 01, 2010 @06:14PM (#34411680) Homepage

      ...that the metal connections between individual components would not be fast enough.

      If you bothered to RTFA (emphasis mine):

      Multiple photonics modules could be integrated onto a single substrate or on a motherboard, Green said.

      I.e. they're not talking about hooking up individual gates or even basic logic units with optical communications. Anyone who's actually dealt with chip design in the past several decades realizes that off-chip communications is a sucky, slow, power-hungry, and die-space-hungry affair. Most of the die area and a huge amount (30%-50% or more) of power consumption of modern CPU's is gobbled up by the pad drivers -- i.e. off-chip communications. Even "long distance" on-chip communications runs into a lot of engineering challenges, which impacts larger die-area chips and multi-chip modules.

      • I wonder if optical interconnects couldn't be a great boon to volumetric (3d) chip design. Unlike wires, lasers going different directions can pass right through each other (can't they?) Think about a bunch of people on different levels of a big atrium in a tall hotel, all signalling to each other with lasers. You make a CPU with a hole in the middle which is ringed with optical ports aimed up and down at different angles. Now stack them up (like a roll of Life Savers candy) alternating with ring-shaped
    • The metal connections are certainly fast enough; after all, the signals on the metal lines will travel at a fraction of the speed of light (c divided by the index of refraction of the surrounding dielectric medium), same as in an optical waveguide.

      But there are two important problems which this does not address: loss and crosstalk.

      Because the conductor loss is very significant for metal interconnects, much more power is consumed in long interconnects. This power consumption only increases with transistor density a

      • and you can multiplex multiple color channels

        And presumably polarizations of the same color. To augment your excellent post: this will make electronics cooler and get better battery life. Or should I say photoelectronics?

    • >>I only wonder how long before this sort of technology makes its way to the consumer market, if only for show.

      Photonic interconnects have been studied for a long time. One of my faculty advisers at UC San Diego, Dr. Clark Guest (brilliant man) has been doing work on optical computing since... the 80s? (I don't know, but a long time.)

      They really do represent the next step in computer evolution, but are really tricky to get working right at a consumer-grade level. For example, holographic storage has b

  • the difference between this and Intel's technology, other than the obvious chip-to-chip vs machine-to-peripheral difference.
    It's all variations on silicon (nano)photonics, right? The article says "Intel is also researching silicon nanophotonics at the silicon level, but has not yet demonstrated the integration of photonics with electronics"...but that makes me wonder what the big deal about Light Peak is, then... is the only difference the "nano"?
    • Forget Light Peak, Intel has already demonstrated an on-chip CMOS laser and used it for optical links: press release here [intel.com]. I really don't know what the IBM guy meant with his claim.

      • by Bengie ( 1121981 )

        Light Peak is meant for up to ~100m, scaling up to 100 Gbit in the future, and is meant to replace USB/SATA/HDMI/etc. I somehow doubt on-chip CMOS lasers are meant for anything beyond a meter, as they're meant for chipset-to-chipset.

  • by nickersonm ( 1646933 ) on Wednesday December 01, 2010 @05:14PM (#34411000)

    We have reached an informational threshold which can only be crossed by harnessing the speed of light directly. The quickest computations require the fastest possible particles moving along the shortest paths. Since the capability now exists to take our information directly from photons travelling molecular distances, the final act of the information revolution will soon be upon us.
    -- Academician Prokhor Zakharov, "For I Have Tasted The Fruit"

    Now I just need room temperature superconductors to build my gatling laser speeders.

  • Where we get a Positronic Brain [wikipedia.org] from this.
    • by Surt ( 22457 )

      Well, besides the fact that moving you there will be inconvenient for us, there won't be any such location because positronic would be a step backward from photonic in terms of performance, assuming you're more interested in calculation power than explosive power.

  • The interconnects are not the entire problem. Faster transmit helps, of course. But the information still has to come in from storage; it's still held in slow memory banks; it still has to propagate across the swarm. Software still has to be able to access that data in a way that makes sense and can scale to half a million nodes. Connectionless distributed computation is nontrivial, and while lower-latency intranode communication might get us the last 5% it won't get us the first 95%.

    • by LostOne ( 51301 ) *

      There is a factor you may have neglected to consider: with a faster transmit, the distance between components can be longer with the same speed components. That is, the communication latency introduced by path length is lower.

  • by Anonymous Coward

    Optoelectronics really is the holy grail of computing. There are no crosstalk problems, no magnetic fields to worry about, and you can multiplex the hell out of a communication link. The current record [wikipedia.org] is 155 channels of 100 Gbit/s each. (!)
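
For scale, a quick check of what that record implies. Only the 155 channels and 100 Gbit/s per channel come from the comment above; the rest is arithmetic:

```
# Aggregate throughput of the quoted WDM record: 155 channels x 100 Gbit/s.
channels = 155
per_channel_bps = 100e9                       # 100 Gbit/s per wavelength

aggregate_bps = channels * per_channel_bps    # 1.55e13 bit/s = 15.5 Tbit/s
aggregate_Bps = aggregate_bps / 8             # ~1.9 TB/s

petabyte = 1e15                               # bytes
print(aggregate_bps / 1e12, "Tbit/s")         # 15.5
print(petabyte / aggregate_Bps, "s per PB")   # ~516 s, roughly 8.6 minutes
```

Which also puts the "petabytes per timestep" storage worry raised earlier in perspective: even a record-setting single link needs minutes per petabyte.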

  • by blair1q ( 305137 ) on Wednesday December 01, 2010 @05:41PM (#34411348) Journal

    He's sped up links between chips from something like one-third c to c.

    Architecturally that reduces inter-chip latency by 66%, which does indeed open up a new overall speed range for applications that are bandwidth-limited by interconnects. But in no sense does it imply a 1000-fold increase in overall performance. It's only a 3X improvement in bandwidth of the physical layer of the interconnect to which the speedup applies. (A quick worked example of these numbers follows this sub-thread.)

    It may allow architectures that pack in more computing units, since light beams don't interfere physically or electrically the way wires do. And light can carry multiple channels in the same beam if multiple frequency or phase or polarization accesses can be added. Those will further improve bandwidth and possibly allow a further increase in the number of computing units, which could help get to the 1000X number.

    BTW, didn't Intel have an announcement on optical interconnects just a while ago? Yes. [intel.com] They did [bit-tech.net].

    • by drerwk ( 695572 )
      Current switching speed is not limited by signal propagation speed in metal (~1/3 c). More likely by the capacitance in the line.
      • by blair1q ( 305137 )

        That's a good point, but it means he's done even less to improve computing performance.

        • by drerwk ( 695572 )
          No, it means he's done much more. As I understand it, getting signal off chip is power limited because you have to drive great big pads with wires attached that either end up going to another die in the package, or off package. The point is that there is an impedance mismatch with respect to the small transistors and the big pads. With light there is no impedance mismatch, provided you can get enough photons.
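
A quick worked example of the propagation-delay argument in this sub-thread, taking the c/3-to-c figures above at face value. The 10 cm chip-to-chip distance is an assumption for illustration, not something from the article:

```
# Propagation delay over an assumed 10 cm chip-to-chip path, comparing the
# ~c/3 copper figure from the parent comment with an idealized c optical link.
c = 3.0e8          # speed of light, m/s
length = 0.10      # 10 cm path, assumed

t_copper  = length / (c / 3)   # ~1.0 ns
t_optical = length / c         # ~0.33 ns

print(f"copper:  {t_copper * 1e9:.2f} ns")
print(f"optical: {t_optical * 1e9:.2f} ns")
```

Saving roughly two-thirds of a nanosecond per hop is real but nowhere near a 1000x system-level gain, which is the point made above: the big wins have to come from bandwidth (more channels, wavelength multiplexing, denser packing), not from raw propagation latency.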
  • Are they implying that signals travel faster through a fiber optic cable than a copper cable? Or just that there is less interference between the lines?
    • Re: (Score:3, Insightful)

      by wurp ( 51446 )

      Well, electricity does travel slightly slower than light (physical electrons, which have mass, do move, although not from one end of the wire to the other). However, I suspect what they're after is improved switching speed. High frequency photons can switch on & off more sharply (i.e. in less time) than electrons in a typical electrical flow.

      • by wurp ( 51446 )

        Goddamit, I hate the way "No karma bonus" sticks until I turn it off again now.

    • by geekoid ( 135745 )

      not faster, more.

  • Well, at least they are using their smarts to actually invent the things they claim instead of sitting on patents like some other companies. Now to remember the new password standard, minimum 90 characters.
  • Windows will finally be usable?

  • ... the Singularity must almost be upon us. I, for one, welcome our new supercomputing overlords!



    (No it isn't, Ray Kurzweil is an idiot, and don't call me Shirley!)
  • FTA:

    The photonics technology [will] help IBM to achieve its goal of building an exascale computer by 2020

    So I guess IBM is in line with the International Technology Roadmap for Semiconductors [itrs.net].
    There has been a lot of research done by the major players in the industry, individual components have been developed (light sources, couplers, photodetectors, optical waveguides, etc.), and IBM just showed they can produce them on-die with standard semiconductor production methods.
    That's not the kind of breakthrough the article claims; it is the usual incremental progress. And I am quite happy with that.

  • I remember when IBM's future PowerPCs would be a gajillion times faster than anything from Intel and then Apple would finally rule the desktop. ...

    Oh wait.
