IBM Discovery May Lead To Exascale Supercomputers 135
alphadogg writes "IBM researchers have made a breakthrough in using pulses of light to accelerate data transfer between chips, something they say could boost the performance of supercomputers by more than a thousand times. The new technology, called CMOS Integrated Silicon Nanophotonics, integrates electrical and optical modules on a single piece of silicon, allowing electrical signals created at the transistor level to be converted into pulses of light that allow chips to communicate at faster speeds, said Will Green, silicon photonics research scientist at IBM. The technology could lead to massive advances in the power of supercomputers, according to IBM."
GPU = supercomputer? (Score:2)
Re: (Score:2)
It seems the definition of a supercomputer keeps changing
http://www.youtube.com/watch?v=gzxz3k2zQJI [youtube.com]
Re: (Score:2)
Re:GPU = supercomputer? (Score:4, Funny)
Re: (Score:3, Funny)
Re: (Score:2)
Power Mac G4 as a weapon? It must have had on-chip 128 bit encryption.
It wasn't on-chip encryption; it was reaching a gigaflop that crossed the threshold for export restrictions. Of course, what the industry considered supercomputers had already progressed far beyond that level.
http://findarticles.com/p/articles/mi_qn4182/is_19990907/ai_n10131702/ [findarticles.com]
But how does it all compare to a cloud/botnet of smartphones?
More and more computing power everywhere, but the Earth still has plenty of problems left to solve.
Re: (Score:2)
Sometime around 1975 I decided I wasn't going to play with low-powered computers anymore. I went looking for a job on a Cray or one of the big Control Data supercomputers.
I never got the job, but I did get the compute power; it's in my pocket.
Re: (Score:2)
Sometime around 1975 I decided I wasn't going to play with low-powered computers anymore. I went looking for a job on a Cray or one of the big Control Data supercomputers.
I never got the job, but I did get the compute power; it's in my pocket.
I keep getting spam emails promising me that
Re: (Score:2)
As long as it crunches massive numbers quickly, who cares how it's defined?
Re:GPU = supercomputer? (Score:5, Informative)
GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not. If you need to frequently share data with other computing nodes (neural network simulations come to mind), then the communications latency between card and main node eats up much of the speed increase. And as much of the software you run on this kind of system is customized or one-off stuff, the added development time in using GPUs is a real factor in determining the relative value. If you gain two weeks of simulation time but spend an extra month on the programming, you're losing time, not gaining it.
Think about it this way: GPUs are really the same thing as specialized vector processors, long used in supercomputing. And they have fallen in and out of favour over the years depending on the kind of problem you try to solve, the relative speed boost, and the cost and difficulty of using them. The GPU resource at the computing center is used much less than the general clusters themselves, indicating most users do not find it worth the extra time and trouble to use.
It is a good idea, but it's not some magic bullet.
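The parent's cost-benefit point can be put in numbers with a quick sketch (all figures are illustrative assumptions, not measurements from the post):

```python
# Back-of-envelope break-even for porting a simulation to a GPU.
# All numbers are illustrative assumptions.

def net_days_saved(sim_days_cpu, gpu_speedup, extra_dev_days):
    """Days gained (positive) or lost (negative) by porting to GPU."""
    sim_days_gpu = sim_days_cpu / gpu_speedup
    return (sim_days_cpu - sim_days_gpu) - extra_dev_days

# Gain two weeks of simulation time but spend an extra month porting:
print(net_days_saved(sim_days_cpu=21, gpu_speedup=3, extra_dev_days=30))  # -16.0
```

A negative result means the port cost more calendar time than it saved, which is exactly the "losing time, not gaining it" scenario above.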
Re: (Score:2)
Re: (Score:2)
You use RAM for disk.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You are talking about IBM here. It will get built and sold/leased.
Re: (Score:1)
Since we're talking about discoveries that may lead to faster computers, these are the solutions it may use:
* Texas A&M Research Brings Racetrack Memory a Bit Closer -> http://hardware.slashdot.org/story/10/12/01/0552254/Texas-AampM-Research-Brings-Racetrack-Memory-a-Bit-Closer [slashdot.org]
* SanDisk, Nikon and Sony Develop 500MB/sec 2TB Flash Card -> http://hardware.slashdot.org/story/10/12/01/1322255/SanDisk-Nikon-and-Sony-Develop-500MBse [slashdot.org]
Re: (Score:1)
"Texas A&M Research Brings Racetrack Memory a Bit Closer"
I groaned audibly at the terrible pun.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
The major problem with adoption is probably that most of the people running jobs on SC's are scientists not computer scientists. They use large piles of ancient, well tested libraries and only tweak small parts of the code that are specific to their problem. This means that most of those libraries will need to be ported to OpenCL and CUDA before adoption really picks up.
And we have a winner!
Most people do not want to write their eigensolvers, Poisson system solvers, matrix multiplication routines, and the like. They just want to use code that already does that, and that has been tested to do its job well. Code verification is important. So, the libraries that do so need to be ported before anyone in HPC switches to GPU architectures seriously. (Remember: this is the land where FORTRAN is still king...)
relentless progress oversold (Score:2)
GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not.
Where do you get the idea that GPUs have been oversold? Is the loudest mouth breather in the room representative of the general consensus? One vain, overreaching guy from 1960 who had spent too many hours hunched over a keyboard predicts human level AI within the decade, and the entire endeavour is tainted forever? All to alleviate one slow news day?
2000 BC called, and wants their sampling procedure back.
Sixteen lanes of PCI-e v3.0 have an architectural bandwidth of 16 GB/s and we're looking at about 4GB
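For reference, the 16 GB/s figure for a PCIe 3.0 x16 link can be derived from the per-lane raw rate and line coding (a quick sketch; exact figures come from the PCIe 3.0 spec):

```python
# Where the "16 GB/s" figure for PCIe 3.0 x16 comes from.
GT_PER_S = 8.0        # PCIe 3.0 raw rate per lane, gigatransfers/s
ENCODING = 128 / 130  # 128b/130b line coding overhead
LANES = 16

gbytes_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(round(gbytes_per_s, 2))  # ~15.75 GB/s, commonly rounded to 16
```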
Re: (Score:2)
You forget the cost of purchasing hardware or leasing supercomputing time. If you want your simulation running on a supercomputer at the nearest research unit, then you would have to get your algorithm or simulation written in OpenMP, parallel Fortran or whatever language was optimised for that system, and you would be tied to that hardware. Have a desktop system under your desk, and you have the freedom to use the system when you want.
I'll agree, it is bad that software has to be wrapped around one piece o
Re: (Score:2)
This is what I meant about overselling this idea. A GPU is not an alternative to a cluster. The practical speedup you get is 2-3 times a normal dual-core cpu (a bit more for specific problems, less for others). A GPU is in other words very roughly equal to adding another 2-core cpu to your system, which makes the cost-benefit tradeoff clearer: cheaper node, but longer development time. If you need a cluster for your problem you're going to be using MPI no matter what*; one or two GPUs will make no material
Re: (Score:2)
That's my experience as well. I played around with TMS340x0 (TIGA) graphics accelerator cards back in the early 1990s. One half was the regular VGA standard of the time (256 colors). The other half was 2 Megabytes of memory, a TMS34020 32-bit graphics coprocessor with (if you were lucky) one or more TMS34082 floating point coprocessors. It was really only intended to accelerate paint programs like Tempra and the 2D rendering of AutoCAD. There were some 3D demos written for it like 'flysim', but tha
Re: (Score:2)
We're talking supercomputers, not PCs. A lot of the software used there really is written for one single task or one single project. Once the project is over and the original user is done with it, it's never used again. While you may want to do something similar at a later date, your new task is different
Re: (Score:2)
Huh (Score:2)
Re: (Score:1)
From the article
"In an exascale system, interconnects have to be able to push exabytes per second across the network,"
"Newer supercomputers already use optical technology for chips to communicate, but mostly at the rack level and mostly over a single wavelength. IBM's breakthrough will enable optical communication simultaneously at multiple wavelengths, he said."
The sad part:
"IBM hopes to eventually use optics for on-chip communication between transistors as well. "There is a vision for the chip level, but
Re: (Score:2)
Re:Huh (Score:5, Insightful)
IBM may be patent-happy, but it's only reasonable to protect their "inventions". There's a huge difference between a patent troll who buys patents solely for litigation purposes, and IBM, who has been among the leading tech innovators for decades, defending their investments using the legal system. We may not love the current state of affairs for patents, but it's important to distinguish between bottom feeders out for a dirty buck and successful entities making use of their R&D department.
Re: (Score:2)
While not like it was, IBM does do a lot of R&D.
Re: (Score:2)
Any good they do in research is negated in the sheer amount of frustration, inefficiency, and anger they produce from inflicting Lotus Notes on millions of unfortunate customers.
Karma Whoring AC (Score:1)
IBM's press release is http://www-03.ibm.com/press/us/en/pressrelease/33115.wss [ibm.com]
One interesting bit is that the new IBM technology can be produced on the front-end of a standard CMOS manufacturing line and requires no new or special tooling. With this approach, silicon transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all
Exascale is not a word. (Score:1, Funny)
A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...
Re: (Score:1)
Exascale is not a word. ... A whole dictionary full of perfectly good words and they have to make one up to mean "very large"...
Finally someone who might agree with my proposal to replace the overcomplicated SI system with a much simpler 'big'/'small' size classification system! I'm still not sold on the need for adjectives, though I'm open to debate.
Re: (Score:2)
It's not worth debating. As long as the most common adjectives used are also sexual terms,* the public WILL have its adjectives.
*and they are, as in "really f**king big!", "Muthaf**kin gynormous", and "awesomer than F**kzilla stompin the s**t outa Tokif**kio"
Re: (Score:3)
It's a portmanteau!
Exa (obviously being a step above Peta which is above Tera which is above Giga and so on and so forth)
and scale. Which is self-explanatory.
The best part about English is silly quirks like portmanteaus. Don't try to be pedantic.
Re: (Score:2)
I absolutely agree. Pedantic is also a beautificious portmanteau. Ped (from the Greek for foot), and antic. So you're doing antics with your foot. Foot in mouth!
Re: (Score:1)
Re: (Score:2)
Obviously what they meant was "Exo-scale" computing. Which is, of course, computing technology received from extraterrestrial reptilians.
Re: (Score:2)
Re: (Score:1)
You might want to check Google before you post.
I checked Webster’s. And if it means “exa-flop scale”, people should just say exa-flop scale.
Re:Exascale is not a word. (Score:5, Funny)
Exascale is not a word
A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...
Exascale is a perfectly cromulent word.
Re: (Score:2)
Personally, I'm trasmodic with eucompipulation about Exascale.
Re: (Score:3)
Re: (Score:2)
Whatever, so long as it brings back the TURBO button I'm buying one!
Re: (Score:2)
haha, the turbo button. Man, that was awesome. As if anyone would not run in turbo.
Re: (Score:2)
As if anyone would not run in turbo.
Um, that was actually exactly what anyone would need to do, sometimes. The turbo button existed to slow the computer down. Necessary for running some really-old games that implemented hardware-sensitive timers and ran much too fast on “fast” computers (such as the 16 MHz box I cut my teeth on).
Is there a -1 for computing history fail?
Re: (Score:2)
Not a tetris fan, I see?
Re: (Score:1)
Re: (Score:3)
Re: (Score:1)
which - though it is not a word - has its own Wikipedia page for some unknown reason
Funny, there’s a Wiki project page for that [wikipedia.org].
Re: (Score:2)
No, there isn't. You seem to think that the only place you will find real words is in a dictionary. It turns out there are also real words in encyclopedias.
Re: (Score:2)
Re: (Score:1)
Great, exactly why making up words is dumb. Now I’m not even sure whose interpretation of it was correct, mine or yours.
Re: (Score:2)
His, I'm pretty sure. Exa flop scale computers (short: exascale) are a big area of research right now.
Re: (Score:1)
Exa flop scale computers (short: exascale)
Great, take the one thing out that actually tells people what you’re talking about (flop, floating-point operations per second).
Why not just say “exa-flop scale”? It’s an additional, what, 5 characters? Well, depending on whether or not you hyphenate “exa-scale”, which you probably should.
Re: (Score:2)
Great, exactly why making up words is dumb. Now I'm not even sure whose interpretation of it was correct, mine or yours.
Yours is wrong. It's an industry-specific term and there is precedent going back a couple of decades. We are currently at petascale levels (i.e. computers able to hit over 1 petaflop on the LINPACK benchmark); before that, terascale. I don't think gigascale was ever a common term, even before the high end was able to do a gigaflop.
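The "-scale" terms map straight onto the SI prefixes for LINPACK performance; a minimal sketch of the ladder (the "scale" usage itself is informal industry shorthand):

```python
# FLOPS thresholds behind the informal "-scale" terms.
FLOPS_SCALE = {
    "gigascale": 1e9,   # rarely used as a term
    "terascale": 1e12,
    "petascale": 1e15,  # current high end in the article's era
    "exascale":  1e18,  # IBM's stated target
}

# Each step is a factor of 1000:
print(FLOPS_SCALE["exascale"] / FLOPS_SCALE["petascale"])  # 1000.0
```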
Fuck I feel old... (Score:2)
I remember reading an article in the IEEE Spectrum called "Reaching for the megaflop" in the nineteen-seventies.
I was working for CDC building power supplies (at their facility in Dorval, PQ, Canada) and keeping up with technology.
Exaflop computing is just blowing me away...
Finally! (Score:1)
Re: (Score:2)
But still not on its maximum settings.
Who would've thought... (Score:4, Insightful)
...that the metal connections between individual components would not be fast enough.
I only wonder how long before this sort of technology makes its way to the consumer market, if only for show. Of course I can't see a use for an exascale databus on the mobo anytime soon.
Re:Who would've thought... (Score:4, Informative)
Re: (Score:2)
It does. It also sounds similar to the idea needed for Quantum Connected CPUs from Travis S. Taylor's book "The Quantum Connection" (from Baen, the first few chapters are available free online here: http://www.webscription.net/chapters/0743498968/0743498968.htm?blurb [webscription.net]). The idea of the Quantum Connected CPUs is built up in chapters 4, 5 and 6, which are included in the free sample.
( now all we need is the AI, and a few other things, and the Galaxy is our oyster :D )
Re: (Score:3)
Of course I can't see a use for an exascale databus on the mobo anytime soon.
An exascale databus ought to be enough for everyone, at least for the five who comprise the world market for computers.
Re:Who would've thought... (Score:4, Interesting)
...that the metal connections between individual components would not be fast enough.
If you bothered to RTFA (emphasis mine):
Multiple photonics modules could be integrated onto a single substrate or on a motherboard, Green said.
I.e. they're not talking about hooking up individual gates or even basic logic units with optical communications. Anyone who's actually dealt with chip design in the past several decades realizes that off-chip communication is a sucky, slow, power-hungry, and die-space-hungry affair. Most of the die area and a huge amount (30%-50% or more) of the power consumption of modern CPUs is gobbled up by the pad drivers -- i.e. off-chip communications. Even "long distance" on-chip communications runs into a lot of engineering challenges, which impacts larger die-area chips and multi-chip modules.
Re: (Score:2)
Re: (Score:2)
The metal connections are certainly fast enough, after all the signals on the metal lines will travel at a fraction of the speed of light divided by the index of refraction of the surrounding dielectric medium, same as an optical waveguide.
But there are two important problems which this does not address: loss and crosstalk.
Because the conductor loss is very significant for metal interconnects, much more power is consumed in long interconnects. This power consumption only increases with transistor density a
Re: (Score:2)
and you can multiplex multiple color channels
And presumably polarizations of the same color. To augment your excellent post: this will make electronics cooler and get better battery life. Or should I say photoelectronics?
Re: (Score:2)
>>I only wonder how long before this sort of technology makes its way to the consumer market, if only for show.
Photonic interconnects have been studied for a long time. One of my faculty advisers at UC San Diego, Dr. Clark Guest (brilliant man) has been doing work on optical computing since... the 80s? (I don't know, but a long time.)
They really do represent the next step in computer evolution, but are really tricky to get working right at a consumer-grade level. For example, holographic storage has b
Re: (Score:2)
I'm trying to figure out (Score:1)
It's all variations on silicon (nano)photonics, right? The article says "Intel is also researching silicon nanophotonics at the silicon level, but has not yet demonstrated the integration of photonics with electronics"...but that makes me wonder what the big deal about Light Peak is, then... is the only difference the "nano"?
Re: (Score:2)
Forget Light Peak, Intel has already demonstrated an on-chip CMOS laser and used it for optical links: press release here [intel.com]. I really don't know what the IBM guy meant with his claim.
Re: (Score:3)
Light Peak is meant for runs up to ~100 m, scaling up to 100 Gbit in the future, and is meant to replace USB/SATA/HDMI/etc. I somehow doubt on-chip CMOS lasers are meant for anything beyond a meter, as they're meant for chipset-to-chipset.
Re: (Score:2)
No they're not, this is between chips.
Finally, Optical Computers! (Score:4, Funny)
We have reached an informational threshold which can only be crossed by harnessing the speed of light directly. The quickest computations require the fastest possible particles moving along the shortest paths. Since the capability now exists to take our information directly from photons travelling molecular distances, the final act of the information revolution will soon be upon us.
-- Academician Prokhor Zakharov, "For I Have Tasted The Fruit"
Now I just need room temperature superconductors to build my gatling laser speeders.
Re: (Score:2)
I love that game, still play it. Wish they'd make a sequel.
Re: (Score:2)
When I last played that old gem I had this real neat moment of "who the fuck let Sarah Palin aboard the Unity?".
Wake me... (Score:2)
Re: (Score:3)
Well, besides the fact that moving you there will be inconvenient for us, there won't be any such location, because positronic would be a step backward from photonic in terms of performance, assuming you're more interested in calculation power than explosive power.
Re: (Score:2)
he's got 88 petabytes in his brain!
Just part of the problem (Score:2)
The interconnects are not the entire problem. Faster transmit helps, of course. But the information still has to come in from storage; it's still held in slow memory banks; it still has to propagate across the swarm. Software still has to be able to access that data in a way that makes sense and can scale to half a million nodes. Connectionless distributed computation is nontrivial, and while lower-latency intranode communication might get us the last 5% it won't get us the first 95%.
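The "last 5% / first 95%" framing above is essentially Amdahl's law: if interconnect latency is only a small fraction of total runtime, speeding it up has a bounded payoff. A sketch with illustrative fractions (not figures from the article):

```python
# Amdahl's law: overall speedup when a fraction p of the work
# is accelerated by factor s, and the rest is unchanged.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# If inter-node communication is 5% of runtime and optics makes
# that part 3x faster, the overall gain is modest:
print(round(amdahl_speedup(p=0.05, s=3.0), 3))  # 1.034
```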
Re: (Score:1)
There is a factor you may have neglected to consider: with a faster transmit, the distance between components can be longer with the same speed components. That is, the communication latency introduced by path length is lower.
The holy grail (Score:1)
Optoelectronics really is the holy grail of computing. There are no crosstalk problems, no magnetic fields to worry about, and you can multiplex the hell out of a communication link. The current record [wikipedia.org] is 155 channels of 100 Gbit/s each. (!)
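The aggregate rate behind that record works out like this (channel counts as quoted in the parent):

```python
# Aggregate throughput of a wavelength-division-multiplexed link.
channels = 155
gbit_per_channel = 100

total_gbit = channels * gbit_per_channel
print(total_gbit / 1000, "Tbit/s")   # 15.5 Tbit/s
print(total_gbit / 8 / 1000, "TB/s") # just under 2 TB/s
```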
Not sure this guy understands the problem. (Score:4, Informative)
He's sped up links between chips from something like one-third c to c.
Architecturally that reduces inter-chip latency by 66%, which does indeed open up a new overall speed range for applications that are bandwidth-limited by interconnects. But in no sense does it imply a 1000-fold increase in overall performance. It's only a 3X improvement in bandwidth of the physical layer of the interconnect to which the speedup applies.
It may allow architectures that pack in more computing units, since light beams don't interfere physically or electrically the way wires do. And light can carry multiple channels in the same beam if multiple frequency or phase or polarization accesses can be added. Those will further improve bandwidth and possibly allow a further increase in the number of computing units, which could help get to the 1000X number.
BTW, didn't Intel have an announcement on optical interconnects just a while ago? Yes. [intel.com] They did [bit-tech.net].
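The 66% figure in the parent checks out: if signal velocity goes from roughly c/3 to c over the same distance, propagation time drops to a third. A quick sketch (the c/3 figure is the parent's estimate for electrical traces):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_ns(distance_m, velocity_fraction_of_c):
    """One-way propagation delay in nanoseconds."""
    return distance_m / (C * velocity_fraction_of_c) * 1e9

d = 0.5  # e.g. half a metre of interconnect
t_electrical = propagation_ns(d, 1 / 3)  # ~c/3 on a copper trace
t_optical = propagation_ns(d, 1.0)       # ~c in the ideal optical case

print(round(1 - t_optical / t_electrical, 2))  # 0.67, i.e. ~66% lower latency
```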
Re: (Score:2)
Re: (Score:2)
That's a good point, but it means he's done even less to improve computing performance.
Re: (Score:2)
Re: (Score:2)
That's why I talked about multiplexing with colors and phases.
But here's a question: while it's true that the light has a much higher frequency and can support faster modulation, is there a transistor capable of switching it that fast?
Why is this faster? (Score:2)
Re: (Score:3, Insightful)
Well, electricity does travel slightly slower than light (physical electrons, which have mass, do move, although not from one end of the wire to the other). However, I suspect what they're after is improved switching speed. High frequency photons can switch on & off more sharply (i.e. in less time) than electrons in a typical electrical flow.
Re: (Score:2)
Goddamit, I hate the way "No karma bonus" sticks until I turn it off again now.
Re: (Score:2)
not faster, more.
SUper 1337 hax0r machines (Score:1)
But does this mean... (Score:1)
Windows will finally be usable?
Surly now... (Score:2)
(No it isn't, Ray Kurzweil is an idiot, and don't call me Shirley!)
ITRS roadmap (Score:1)
FTA:
The photonics technology [will] help IBM to achieve its goal of building an exascale computer by 2020
So I guess IBM is in line with the International Technology Roadmap for Semiconductors [itrs.net].
There has been a lot of research done by the major players in the industry, individual components have been developed (light sources, couplers, photodetectors, optical waveguides, etc.), and IBM just showed they can produce them on-die with standard semiconductor production methods.
That's not the kind of breakthrough the article claims, it is usual incremental progress. And I am quite happy with that.
Yeah, always the future (Score:2)
I remember when IBM's future PowerPCs would be a gajillion times faster than anything from Intel and then Apple would finally rule the desktop. ...
Oh wait.