Network Technology

Exabit Transmission Speeds May Be Possible

Posted by timothy
from the increasing-oscillations dept.
adeelarshad82 writes "Scientists at UC Berkeley were able to shrink a graphene optical modulator down to 25 square microns in size (small enough to include in silicon circuitry) and were able to modulate it at a speed of 1GHz. The researchers say that modulation speeds of up to 500GHz are theoretically possible. According to the research, due to the high modulation speeds, a graphene modulator can transmit a huge amount of data using spectral bandwidth that conventional modulators can only dream of. Professor Xiang Zhang, in an attempt to boil his group's new findings into consumer-speak, puts it this way: 'If graphene modulators can actually operate at 500GHz, we could soon see networks that are capable of petabit or exabit transmission speeds, rather than megabits and gigabits.'"
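The petabit/exabit claim rests on combining fast modulators with wavelength-division multiplexing. A quick back-of-envelope sketch (the channel count and modulation format below are illustrative assumptions, not figures from the research):

```python
# Back-of-envelope check of the petabit claim. All parameters here are
# assumptions for illustration, not values from the paper.
def aggregate_rate_bps(symbol_rate_hz, bits_per_symbol, wdm_channels):
    """Aggregate link rate: per-channel symbol rate x bits/symbol x channels."""
    return symbol_rate_hz * bits_per_symbol * wdm_channels

# One 500 GHz modulator, simple on-off keying (1 bit/symbol), single channel:
single = aggregate_rate_bps(500e9, 1, 1)

# Hypothetical dense WDM system: 100 wavelengths, 4 bits/symbol (e.g. 16-QAM):
wdm = aggregate_rate_bps(500e9, 4, 100)

print(f"single channel: {single / 1e12:.1f} Tbit/s")   # 0.5 Tbit/s
print(f"100-channel WDM: {wdm / 1e15:.1f} Pbit/s")     # 0.2 Pbit/s
```

Getting from there to exabits would need still more channels, more bits per symbol, or more fibers, so the quote is clearly about theoretical headroom rather than a demonstrated link.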
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    I fail to see the point unless we also get processing speeds able to keep up with the data.
    And especially storage speeds. SSDs don't cut it.

    • by Nukedoom (1776114)

      Just like airplanes, telephones, and cars. All of them. Completely. Pointless.

      • I have no idea why people come to slashdot just to post as an AC and denounce every new bit of technology.

        Luddites are alive and well in the 21st century, it's just amazing how much effort goes into it these days.

        • by Flyerman (1728812)

          Considering the speed of tech, it requires a lot of effort. When you carry a mobile multifunction computer in your pocket, it's a little hard to denounce progress.

    • Duh [wikipedia.org]

    • With transmission speeds that fast, processing and storage can both be dumped on someone else without worrying about filling the pipe.

    • by AlecC (512609)

      Many systems connected to many systems. It doesn't have to be to/from a single CPU or storage device. You can put your datacentre where it is most efficient in energy or cooling terms, but have it appear to be where you want it operationally. Or you can aggregate datacentres scattered across the globe into a unified system, load sharing as the peak load moves round the globe. It makes the physical attributes of "the cloud" more possible.

    • Because you've got to move data around on chips faster to achieve those faster processing speeds.
    • by vlm (69642)

      And especially storage speeds. SSDs don't cut it.

      Oh of course they do. You just have to use more than one, in parallel / striping mode. Think of a "real" NAS or an IBM DASD with dozens of drives in parallel.

      Probably this will be used mostly for DWDM-style stunts... Find the fastest system and its press release. Insert two in a box twice as big. Issue a press release to the mass media (and, sadly, /.) reporting a "new world record of twice the Libraries of Congress per second". The general public responds with "who cares" because that kind of press release
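The striping point can be sanity-checked quickly; the per-drive throughput below is an assumed round number, not a benchmark:

```python
import math

def drives_needed(link_bits_per_s, drive_bytes_per_s):
    """Minimum number of striped drives to keep a link busy (illustrative)."""
    link_bytes = link_bits_per_s / 8
    return math.ceil(link_bytes / drive_bytes_per_s)

# Assumed figures: a 1 Tbit/s link and SSDs sustaining 500 MB/s each.
print(drives_needed(1e12, 500e6))  # 250 drives striped in parallel
```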

    • Think routers.
    • by athlon02 (201713)

      Uhmm, need I point out the obvious? ... 1 Exabit/sec / 100,000,000 users = 10 Gigabit/sec of bandwidth per user. Yes, I know there's overhead, distribution across large distances, etc., that would lower the realistic bandwidth. But it means each user could still have a crazy amount of bandwidth.
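For what it's worth, the division works out to 10 gigabits per user rather than terabits, which is still a crazy amount:

```python
# Checking the arithmetic: splitting one exabit per second among
# 100 million users.
exabit_per_s = 1e18          # bits per second
users = 100_000_000

per_user = exabit_per_s / users
print(f"{per_user / 1e9:.0f} Gbit/s per user")  # 10 Gbit/s per user
```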

    • by Ultra64 (318705)

      At those speeds, why would you need to store anything locally?

    • by sgt scrub (869860)

      I agree with you that it is pointless, mostly because this statement, "we could soon see networks that are capable of petabit or exabit transmission speeds", incorrectly uses "we". "We" being large ISPs and people in academia, maybe. "We" being the people getting bandwidth to our homes, completely unlikely. "We", being the latter in most U.S. locations, can't even get fucking FiOS in our neighborhood!

    • I haven't done the math, but at 500 GHz it seems like dispersion [wikipedia.org] would make any network longer than a single chip fundamentally unable to use that kind of frequency.

      For a mesh network-on-a-chip though, you could probably dumb down the routers a lot (you'd have to, to let them operate at that freq), and basically trade inefficient routing for a way higher link rate... basically operate the network such that you can deliver a message 100 times faster than you can send 1 message. The routers may not even n

  • by empiricistrob (638862) on Wednesday May 11, 2011 @04:18AM (#36091448)
    So in theory, if you can get an electrical signal to the graphene, you can use it to modulate laser light at up to 500GHz. Awesome!

    That just leaves two fatal flaws:
    1. You need to modulate the electric signal with useful information at 500GHz. I'm not an expert, but it seems like we're a long way off from being able to do that. Can anyone comment?
    2. How do you demodulate such a signal?
    • by drolli (522659) on Wednesday May 11, 2011 @04:41AM (#36091564) Journal

      1. There is a logic family which is nearly fast enough. It's called RSFQ (rapid single flux quantum), but interfacing it to graphene may be difficult.

      2. With RSFQ ADCs.

      If it's about analog mixing, you could use bolometer mixers, interfacing to RSFQ circuits.

      • by Anonymous Coward

        1. there is a logic which is nearly fast enough. It's called RSFQ, but interfacing it to graphene may be difficult.

        Dude, Reading Something Freakin Quick won't cut it.

      • by Luckyo (1726890)

        Building it at a meaningful scale is hard enough, as it requires superconductivity. Costs would be astronomical.

        That said, the tech mentioned in the OP, unlike RSFQ, doesn't even exist yet. It's just a working theory. By the time it's working, there are bound to be ways to use it.

    • by mpoulton (689851) on Wednesday May 11, 2011 @05:00AM (#36091638)
      The modulation problem can probably be solved with clever use of current technology. Initially at least, the only application for links with this bandwidth would be in aggregated data transmission, accumulating dozens or hundreds of lower-bandwidth connections. A clever modulation method would utilize multiple separate electrical modulation signals to control the optical modulation, possibly by using multiple separate modulator elements in the optical path, each operating at a lower modulation rate (but synchronized with the others and phase shifted).

      In the long run it will be interesting to see how data transmission technology evolves to accommodate high data rates like this. 500GHz is hardly even an electrical signal; it's almost light-like. Wires don't work at those frequencies; it's waveguide-only territory. It can really only be handled easily as a modulated optical signal. If we are to progress to a point where data rates like these are practical for individual computing devices, we will have to switch to all-optical protocols for networking, and probably also for internal data transport within computing devices.

      Demodulation of an optical signal with this much modulation bandwidth is pretty much an unsolved problem for now, AFAIK. As with the modulation process, I'd probably try to split it into multiple channels, each covering a narrower bandwidth. Unlike the modulation process, I can't think of an obvious way to do that off the top of my head.

      It's also worth noting that the professor seems to be contemplating the use of many optical modulators (each at 500GHz), each operating on a different fundamental wavelength to multiply the link bandwidth. Hence the prospect of petabit and exabit data rates from 500GHz modulation.
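The "multiple slower modulators, synchronized and phase shifted" scheme described above is essentially time interleaving. A toy sketch (illustrative only, not anything from the paper):

```python
# Illustrative sketch: time-interleaving N slower electrical streams into
# one fast modulation stream. Each sub-modulator runs at 1/N of the line
# rate, offset by one slot, and the merged output toggles at the full rate.
def interleave(streams):
    """Round-robin merge of N equal-length bit streams into one fast stream."""
    n = len(streams)
    length = len(streams[0])
    return [streams[slot % n][slot // n] for slot in range(n * length)]

# Four "modulators", each carrying 4 bits at a quarter of the line rate:
subs = [[1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 1, 1, 1]]
print(interleave(subs))
# [1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1]
```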
      • by Soft (266615)

        the professor seems to be contemplating the use of many optical modulators (each at 500GHz), each operating on a different fundamental wavelength to multiply the link bandwidth. Hence the prospect of petabit and exabit data rates from 500GHz modulation.

        And that's the key problem: you can't just replace the 40-GHz modulators in a 50-channel x 40-Gbit/s fiber system, because the optical frequencies of the channels must be spaced widely enough that the channels won't overlap. These ultra-high-speed modulato
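The spacing objection in numbers: the C-band is roughly 4.4 THz wide, so wider channels crowd each other out fast (the 1.2x guard-band factor below is an assumption):

```python
# Rough illustration: fewer wide channels fit in the same amplifier band.
# C-band width (~4.4 THz) is a standard figure; the spacing factor is assumed.
C_BAND_HZ = 4.4e12

def channels(symbol_rate_hz, spacing_factor=1.2):
    """Channels that fit if each needs symbol_rate * spacing_factor of spectrum."""
    return int(C_BAND_HZ // (symbol_rate_hz * spacing_factor))

print(channels(40e9))    # ~91 channels at 40 GHz
print(channels(500e9))   # ~7 channels at 500 GHz
```

So a 500GHz modulator buys a faster channel, not more of them; total fiber capacity is bounded by the usable spectrum either way.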

    • by AlecC (512609)

      You fan in and fan out in several stages, the later of which are graphene based.

    • by FST777 (913657)
      Modulating broad-spectrum light at 1GHz is still way better than modulating an electrical signal at 1GHz. So for on-chip silicon circuits there would still be a huge gain at current "clockspeeds".

      It's nice to think about this tech on a large (optic-fiber networks) scale, but the applications on a small (silicon wafer) scale are, IMHO, more interesting.
    • by MattskEE (925706)

      1. By the time graphene is ready to be modulated at 500GHz we will almost certainly have the technology to modulate the graphene at 500GHz. It requires a transistor with unity gain at roughly 1000GHz to do that. State of the art university and military researchers are building both analog and (very simple) digital circuits in the 300+GHz region using transistors with unity gain frequencies at 1THz and above. They are using group III-V heterostructure devices such as InGaAs/InP heterojunction bipolar tran

  • Will graphene computing be the new quantum computing henceforth?

    • Not really. Even 500GHz processors would still be extremely slow compared to quantum computers for crypto stuff, though this technology could presumably be used to make a very nice system bus, useful for graphics processing and other bandwidth-intensive applications.

      • Quantum computers are a whole lot faster for some specific tasks, not faster communications. But they'd easily be able to outperform existing tech by many orders of magnitude on prime factorisation, database lookups, statistical modeling, and a lot of other things used for scientific computing. A quantum computer would actually be a hybrid technology - a largely conventional computer, with just a quantum co-processor added on.
      • I assume he meant in the sense of being the "new hot technology that's just over the horizon" for the next couple decades.
  • It's all well and good having super-fast transmission capabilities, but do we have anything that can process/store data as quickly? It's an honest question, as I've always been led to believe that data storage is the bottleneck.
    • Re:data storage? (Score:5, Interesting)

      by Puff_Of_Hot_Air (995689) on Wednesday May 11, 2011 @04:30AM (#36091514)

      It's all well and good having super-fast transmission capabilities, but do we have anything that can process/store data as quickly? It's an honest question, as I've always been led to believe that data storage is the bottleneck.

      Infrastructure is where this is important. There are these extremely expensive cables made of glass under the ocean connecting various land masses. It's extremely convenient to be able to upgrade the boxes at either end instead of laying more tubes (*warning* simplification!). You don't need to store the data (at least not in one box), you just need to switch it. This is why fiber is so awesome; people just keep on discovering new ways to jam more down those pipes!

      • by Kjella (173770)

        There are these extremely expensive cables made of glass under the ocean connecting various land masses.

        More importantly, the fibers themselves aren't that expensive anymore as you can see from FiOS/FTTH deployment. Getting new cables in place is what costs an arm and a leg. So more capacity over same cable is very, very cost efficient.

        Also a comment to the GP:

        It's an honest question as I've always been lead to believe that data storage is the bottleneck.

        If you have streaming, is storage really all that necessary? With Spotify etc. for music, Netflix etc. for movies - and assuming you can stream BluRay quality effortlessly - what do most people need TBs of local storage for? Yes, there are niches like when

        • Thanks... just goes to show how limited my view was as I was only thinking of the cable into my home!

          Can these modulators replace existing ones or does the entire cable network need an upgrade for this? If it's just the modulators, then wow!

          • by Kjella (173770)

            Good question; as far as I can tell there are some variations in fiber optic cables, so it's impossible to say. Most cables are built to work in some rather narrow bands at extreme speeds; you might get the 500GHz with regular cables, but probably not the full spectrum. This is mostly guesswork though.

        • what do most people need TBs to local storage for?

          For all the software they use but never paid for and have no intention of ever paying for, porn, personal documents, porn, backups, porn, backups of porn, music, porn, backups of music and porn, home movies, porn, games, porn, porn games, pictures, porn, emails, porn, porn emails, and of course, porn.

          Not everyone wants their stuff in "the cloud". Having something at your location gives you faster access than going to a site, no matter the transmi
      • This is why fiber is so awesome; people just keep on discovering new ways to jam more down those pipes!

        Same reason why porn is so awesome.

    • Not at all levels (Score:4, Informative)

      by Sycraft-fu (314770) on Wednesday May 11, 2011 @04:35AM (#36091536)

      You have to remember that the more bandwidth you want to deliver to the end user, the more you've got to have in the backhaul. Like if at work you want to deliver true 1 gigabit to 1000 people's desktops, you can't very well then have a 1 gigabit connection out to your data center. They won't get a gigabit of performance.

      So while speeds like this wouldn't be needed for servers or such, they could be for big links. You want to link big_router_a with big_router_b which have all sorts of very fast connections to smaller routers then maybe this interests you.
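The backhaul sizing argument, sketched with an assumed oversubscription ratio (real networks pick this from measured traffic, not a constant):

```python
# Sketch of backhaul sizing under statistical multiplexing. The 20:1
# oversubscription ratio is an assumption for illustration.
def backhaul_gbps(users, per_user_gbps, oversubscription):
    """Backhaul needed so each user sees their full rate often enough."""
    return users * per_user_gbps / oversubscription

# 1000 desktops at 1 Gbit/s each, 20:1 oversubscription:
print(backhaul_gbps(1000, 1, 20))  # 50.0 Gbit/s of backhaul
```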

      • by Kjella (173770)

        No, but you probably don't need 1000 gigabit either. At the giant LAN party "The Gathering" this Easter they had 5200 people and a 100 Gbps uplink, but the traffic mostly stayed in the 10-12 Gbps range. True, the ~140 table routers were limited to 2 Gbps each, but that is still only 5-6 of those maxed. And those are pretty much all computer enthusiasts spending their Easter there.

        The NIX (Norwegian Internet eXchange) in Norway tops out at about 70-80 Gbps maximum for 4.96 million people - that isn't all Interne
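The Gathering numbers above imply an average per-attendee rate that's tiny compared to the access speed, which is the whole statistical-multiplexing point:

```python
# Average per-attendee traffic from the figures cited in the comment.
attendees = 5200
observed_gbps = 12          # upper end of the 10-12 Gbps range

avg_mbps = observed_gbps * 1000 / attendees
print(f"{avg_mbps:.1f} Mbit/s average per attendee")  # ~2.3 Mbit/s
```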

        • True, you don't need 100% backhaul. The more people you have, and the faster the connection, the more you can pack into a given connection. People use it in spurts and it all kind of evens out. However, it still gets to be pretty massive. The connections the tier-1 ISPs have between each other and between big points are massive.

          Also I think we'll be able to find a use for quite a bit more bandwidth to the home. I've got a 50mbit line (officially, actually seems to be more like 100mbit most of the time) and I

  • How long is it until we can keep a local copy of all human culture and data on a thumb drive? A long long time ago, back in the dark ages of the intertubes called the 90's, people started downloading individual mp3 tracks. Then individual albums, then artist collections, then music video collections and soon entire genres. Of course now people don't bother much with downloading music and just stream it from wherever is convenient at the moment (youtube, music stores, illegal streaming sharing sites etc.). T
    • by Deaddy (1090107)

      Actually you can have a torrent of torrents; at least rtorrent has the ability to scan specified directories for new .torrent files and automatically add them to your queue (and move them to destination folders when finished), so you can download your torrent-torrent to that directory and automatically add them to your conventional torrent dir. However, I'd go for a simple zip file and a web interface where you can check the torrents you want to download, and then download and unpack the zip file with all th
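The watch-directory idea described above, as a generic sketch (this is not rtorrent's actual mechanism or config syntax; directory names and the loader callback are hypothetical):

```python
# Generic watch-directory poller: scan a directory on an interval and hand
# each new .torrent file to a loader callback exactly once.
import os
import time

def watch(directory, load, interval=5, iterations=None):
    """Poll `directory`; call load(path) once per new .torrent file.

    `iterations=None` polls forever; a number limits the polling rounds.
    """
    seen = set()
    count = 0
    while iterations is None or count < iterations:
        for name in os.listdir(directory):
            if name.endswith(".torrent") and name not in seen:
                seen.add(name)
                load(os.path.join(directory, name))
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval)
```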

    • You could put .torrent files in a torrent.
    • And how long will it be until information interconnection is so fast that ALL information can feasibly be anywhere within milliseconds...thus degrading the value of information ownership into nearly nothing, approaching what could be quantum memory -> all of human knowledge is contained on a memory structure entangled throughout the world....so everybody has a complete copy of all knowledge constantly updated as fast as knowledge is produced....and with Moore's law, this will happen in...about 50 years?
  • I had no idea conventional modulators had the ability to dream.
  • With all this extra bandwidth, AT&T will up their quotas from 150GB to 200GB!

      Are you kidding? Now that AT&T has bandwidth caps in place, nothing short of the second coming of Jesus will get them to move them. It's cha-ching time and they know it.
      • by operagost (62405)
        They learned from the government. Have you noticed how many taxes are on your telecom bill from Federal, state, county, and sometimes local government? My $54.95 phone/internet bundle comes out to $74.
    • by djdanlib (732853)

      If anything, they will increase a few cryptic line items on your bill to "support" the rollout of the technology, then cackle all the way to the bank while you use up your monthly cap in only a few seconds.

  • This technology should be ready for market in about 13.5 years. Going from 1GHz to 500GHz with a doubling every 18 months will take 9 periods. Let's add a few years for developing the tools necessary to mass-produce, and we're at 15-20 years. Obviously there is no reason to believe that this technology will follow a similar growth curve. It likely will be substantially worse. It's nice to know we have some theoretical headroom, but there is even less to get excited about here than when there is the pr
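The 13.5-year figure checks out under the stated assumptions (one doubling per 18 months, from 1GHz to 500GHz):

```python
import math

# Doublings needed from 1 GHz to 500 GHz, at one doubling per 18 months.
doublings = math.log2(500 / 1)          # ~8.97, round up to 9 periods
years = math.ceil(doublings) * 1.5

print(math.ceil(doublings), years)      # 9 13.5
```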

  • Can we just transmit data through a few kilometers of fiber wound on a spool, demodulate the data, and then resend it as a storage mechanism? At 300MHz you store 1 bit per meter, or 1 Kbit per km. At 300GHz you get 1 Mbit per km. At 300THz you get 1 Gbit per km. At 300 petabit/s you store 1 Tbit per km. Of course this storage medium loses data when the power is turned off, but that's OK for some applications. And with 1 km of fiber, your data is never more than 3.3 µs away.
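The delay-line arithmetic checks out, using c in vacuum as the comment implicitly does (light in fiber travels roughly a third slower, which would actually store more bits per km, at the cost of a longer access delay):

```python
# Bits stored "in flight" in a fiber delay line at a given bit rate,
# using the vacuum speed of light as the comment does.
C = 3e8  # m/s

def bits_in_flight(bit_rate_hz, fiber_m):
    """Bits resident in `fiber_m` meters of fiber at `bit_rate_hz`."""
    return bit_rate_hz * fiber_m / C

print(bits_in_flight(300e6, 1000))   # 1000.0 -> ~1 Kbit per km at 300 Mbit/s
print(bits_in_flight(300e12, 1000))  # 1e9    -> ~1 Gbit per km at 300 Tbit/s
```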
    • The delay line memory [wikipedia.org] inventors would be very happy to see their technology used again....
      Seriously though, such delay lines are actually used in routing to avoid storing incoming packets in memory.

  • They might be possible, but then how would the telecomm giants justify drastically inflated bandwidth prices?

    Always look for the money...

  • How many porn movies per second is this?

  • by LS (57954)

    Yo Dawg,

    I heard you like me so we put my music videos on Xiang Zhang's graphene modulator network so you can watch Xzibit on Exabit.
