Terabit-Per-Second Class Connections over FTTH 117

Big Fat Dave writes "Thanks to research from Japan's Tohoku University, an article at Tech.co.uk wonders if someday the megabit and gigabit classes of net connections will join kilobits in the 'antique tech' bin. By doing some advanced mathematics and 'tweaking' existing network protocols, researchers may be able to enable standard fiber-optic cables to carry data at hundreds of terabits per second. 'At that speed, full movies could be downloaded almost instantaneously in their hundreds. At the heart of the development is a technique already used in some digital TV tuners and wireless data connections called quadrature amplitude modulation (QAM). One glance at the Wikipedia explanation shows that it's no easy science, but the basics of QAM in this scenario require a stable wavelength for data transmission. As the radio spectrum provides this, QAM-based methods work fine for some wireless protocols, however the nature of the optical spectrum means this has not been the case for fibre-optic cables ... until now.'"
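The summary's point about QAM is that each transmitted symbol carries several bits by combining amplitude and phase levels. As a rough illustration (mine, not from TFA), here is a minimal sketch of a square M-QAM constellation and the bits-per-symbol figures the cable-TV comments below refer to:

```python
# Illustrative sketch of square M-QAM: each symbol is one point on an I/Q
# amplitude grid, encoding log2(M) bits. Not taken from the article.
import math

def qam_constellation(m):
    """Return the I/Q points of a square M-QAM constellation (M a perfect square)."""
    side = math.isqrt(m)
    assert side * side == m, "square QAM only"
    # Symmetric odd-integer amplitude levels, e.g. [-3, -1, 1, 3] for 16-QAM
    levels = [2 * k - (side - 1) for k in range(side)]
    return [(i, q) for i in levels for q in levels]

def bits_per_symbol(m):
    return int(math.log2(m))

print(len(qam_constellation(16)), bits_per_symbol(16))   # 16 points, 4 bits/symbol
print(bits_per_symbol(64), bits_per_symbol(256))         # 6 and 8, as used in cable TV
```

The 64- and 256-QAM variants discussed in the comments carry 6 and 8 bits per symbol respectively, which is where the cable payload figures come from.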
This discussion has been archived. No new comments can be posted.


  • Re:ya but.. (Score:5, Informative)

    by Ogun ( 101578 ) on Saturday November 17, 2007 @03:47PM (#21391721) Homepage
    Fastest backbone router that I know of is the Cisco CRS-1. It can scale to a system capacity of 92 Tbps in total, using 72 42U racks as one large router. Still, the fastest interfaces on that machine are OC-768 at roughly 40 Gbps.
  • by iampiti ( 1059688 ) on Saturday November 17, 2007 @03:48PM (#21391727)
    I'm not sure if this is still the case, but a networks teacher of mine told me some years ago that the bottleneck of the internet was the routers.
  • Academic work (Score:4, Informative)

    by Bananatree3 ( 872975 ) on Saturday November 17, 2007 @03:51PM (#21391751)
    Multi-terabit connections are an absolutely wonderful thing to have in some academic research fields; science and computing research can both benefit. For some dude downloading movies and music? 100 Mbit would be absolutely wonderful, and gigabit would be more than enough.
  • by Kohath ( 38547 ) on Saturday November 17, 2007 @03:56PM (#21391787)
    The story is about doing it over fiber optics -- using an optical signal instead of an electrical one.

    It seems like something that might be useful 20 years from now.
  • Re:ya but.. (Score:4, Informative)

    by Kjella ( 173770 ) on Saturday November 17, 2007 @05:43PM (#21392531) Homepage
    The practicality and economics: in all larger construction projects here in Norway today, whether it's apartment blocks or new housing developments, they lay fiber connections. There are approximately two million households, and about 150,000 (7.5%) of them can get fiber connections. Each year 30,000 new houses are built, and many of them will have fiber connections, though lone houses don't qualify. If we say 25,000 a year (30,000 less lone houses, plus retrofits), then over the next decade I expect that to rise to 150,000 + 10*25,000 = 400,000 (20%) as a conservative estimate. Oh yeah, and we're considerably more sparsely populated than the US. Fiber has good last-mile economics as long as you're putting down cables anyway. Now, that wouldn't make a terabit last mile useful, but if you want real capacity and not US "unlimited" capacity, then it's really nice if actually delivering is very cheap. And a few thousand people on gigabit connections add up to terabits quite fast...
  • by dsgrntlxmply ( 610492 ) on Saturday November 17, 2007 @06:08PM (#21392749)
    Neither this article, nor anything linked from it and accessible without subscription, describes the result in any useful detail.

    What is routinely done today in hybrid fiber/coaxial (HFC) cable TV systems is to use a linear RF band, often 50-750 MHz, divided into 6 MHz (North American standard) bands corresponding to television channels. Both 64-QAM (6 bits/symbol) and 256-QAM (8 bits/symbol) modulation standards are used. 64-QAM has been around since maybe 1996.

    256-QAM requires a better signal/noise ratio through the transmission path, better A/D resolution, and more demodulation work in the receiver. 256-QAM gives around 38.8Mb/s payload rate after subtracting TV standard (ITU J.83B) ECC and packet overheads. 256-QAM is seeing increasing use as better chip technology makes the demodulators cheap, and as cable plant is upgraded to push fiber farther out toward the end subscribers with better signal quality.

    700MHz / 6MHz = 116 TV channels * 38.8Mb/s = roughly 4.5Gb/s digital capacity for QAM on a 700MHz RF bandwidth. Again, this is done routinely today, except of course a TV receiver only selects and demodulates a single 6MHz channel at a time.
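The parent's arithmetic checks out; as a quick back-of-the-envelope (using the figures quoted above, including the ITU J.83B payload rate):

```python
# Sanity check of the parent's HFC capacity numbers. All figures are the
# ones quoted in the comment above, not independently verified.
rf_bandwidth_mhz = 700      # usable linear RF band, ~50-750 MHz
channel_mhz = 6             # North American channel width
payload_mbps = 38.8         # 256-QAM payload after ECC/packet overhead (ITU J.83B)

channels = rf_bandwidth_mhz // channel_mhz       # 116 full TV channels
total_gbps = channels * payload_mbps / 1000      # aggregate digital capacity
print(channels, round(total_gbps, 1))            # 116 channels, ~4.5 Gb/s
```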

    One could WDM a number of 700MHz RF ensembles onto a fiber, but this of course requires source lasers (ones designed for wideband linear modulation, or with $$$ external modulators) with precisely tuned and stabilized wavelengths, and corresponding optical splitter/filters, individual optical receivers for each wavelength, and RF-band demodulators for however many channels the RF band has been divided into.

    Terabit through this conceptually straightforward WDM approach would require over 200 such optical carriers (a couple of racks of very expensive equipment). It's feasible, but not something you will have on the side of your house (even receive-only) in the near future.

  • by Crypto Gnome ( 651401 ) on Saturday November 17, 2007 @07:17PM (#21393239) Homepage Journal

    They've been doing way more than QAM for the last decade; they've been doing 64-way amplitude modulation with frequency spectra (cable) for ages. How the fuck are they using multi-frequency modulation techniques on light rays (fibre)?

    Are you aware that "radio waves" and "light rays" are fundamentally the same thing [wikipedia.org]?

    <Massive generalization> anything we have worked out how to do "with radio" is something that there is no fundamentally intrinsic reason why we should not (one day) be able to work out how to do "with light"</Massive Generalization> (and don't bother saying things like passing 'radio" through a sheet of cardboard which obviously blocks "light" - I'm talking about *uses* ie modulation/signalling techniques, not "modifying the laws of physics" issues)

    Or do you think that a 1kHz audio wave is in some *magic* way fundamentally and intrinsically different from a 5kHz audio wave? or a 25kHz wave?
  • by Tom Womack ( 8005 ) <tom@womack.net> on Saturday November 17, 2007 @08:39PM (#21393809) Homepage
    It looks as if

    http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=4348615&isnumber=4348298 [ieee.org]

    is something like the work being reported on; 'A 1 Gsymbol/s, 64 QAM coherent signal was successfully transmitted over 150 km using heterodyne detection with a frequency-stabilized fiber laser and an optical phase-locked-loop technique. The spectral efficiency reached as high as 3 bit/s/Hz.'

    Masato YOSHIDA's list of papers at

    http://db.tohoku.ac.jp/whois/Tunv_Title_All.php?&user_num=LTU0OA==&sel1=1&sel2=1&sel3=1&sel4=2&page=1&lang=E [tohoku.ac.jp]

    looks very plausible in the context of this work; 'coherent optical transmission' is I think the relevant buzz-word. Going from 1Gsymbol/s to 10Tsymbol/s is clearly a lot more work, but being able to do optical QAM at all is pretty spectacular.
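The abstract's numbers are internally consistent; a quick check of my own arithmetic (not from the paper itself):

```python
# Sanity check on the quoted abstract: 64-QAM carries log2(64) = 6 bits per
# symbol, so a 1 Gsymbol/s signal is 6 Gb/s raw. The quoted spectral
# efficiency of 3 bit/s/Hz then implies roughly 2 GHz of occupied optical
# bandwidth. My arithmetic, not figures from the paper.
import math

symbol_rate_gbaud = 1.0
bits_per_symbol = math.log2(64)            # 6.0 bits/symbol for 64-QAM
raw_gbps = symbol_rate_gbaud * bits_per_symbol
occupied_ghz = raw_gbps / 3.0              # from the quoted 3 bit/s/Hz
print(raw_gbps, occupied_ghz)              # 6.0 Gb/s over ~2.0 GHz
```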
  • Re:ya but.. (Score:4, Informative)

    by funkboy ( 71672 ) on Saturday November 17, 2007 @09:32PM (#21394101) Homepage
    Probably because you haven't seen a Juniper T1600 [juniper.net]. It has 2.5x the per-slot bandwidth of the CRS-1. The Cisco marketing literature may go to 92tbps, but I challenge you to show me a production CRS multishelf system with more than one fabric shelf. Once T1600 modules are available for the TX Matrix the system will provide 6.4tbps in two and a half racks, using far less power than the equivalent real estate worth of CRS hardware (2.4tbps max), at about the same cost. BTW a fully configured 72-rack CRS-1 would require about 0.8 megawatts of power and belch about 2.5 million BTUs of heat per hour...

    Erm, not that that's a biased viewpoint or anything (heh)...

    Anyway, IMHO far more important to router scalability is the per-slot and per-watt bandwidth, not how many systems you can chain together (as long as you can chain some reasonably useful number, but I don't see a need for more than 8 chassis in a system). The CRS-1 won't be able to handle 100gE without a system-wide fabric upgrade or double-width cards or something. The T1600 (and for that matter, the Foundry NetIron X series, though not in the same class of capabilities or scalability as the Juniper) will be able to slot in 8 100gE linecards the day they're available.
  • Re:ya but.. (Score:3, Informative)

    by funkboy ( 71672 ) on Saturday November 17, 2007 @10:06PM (#21394331) Homepage
    > True, but the routers and repeaters on the backbone have buses don't they?

    The 750hp 2.4L V8 engine in an F1 car produces about 3-4x the amount of power of a production car engine of the same displacement, but you don't see even high-end mfrs like Porsche putting that sort of thing in street cars (for reasons I hope I don't need to explain).

    The data plane in high-end routers is built on custom-designed switch fabrics [wikipedia.org], which technically are not buses and operate in a different (more scalable) fashion. The wiki article is actually on fibre channel, but the concept is the same. Cost alone precludes use of such components in PC hardware, not to mention various other factors.

    That said, PCI Express is pretty damn nice when you start talking performance vs. cost (both per $ and per watt) when the number of high-bandwidth devices on the bus is low, and the existing plethora of 8 & 16 lane devices & motherboards and the potential to scale to 32 lanes (64gbit/sec) in the future mean that the bus in a modern COTS PC is not the bottleneck in high-performance networking on such hardware. The two things that are:

      - The ability of the operating system & host processor to handle the load offered by the networking stack at such speeds. This is mitigated by techniques such as TOE [wikipedia.org] and interrupt mitigation & hardware polling [linux-foundation.org]: done in hardware and getting cheaper, but widespread implementation in common NICs is either not there yet or crappy (ahem, Realtek).

      - The bandwidth to the user's machine, which is what TFA is about...
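For anyone wondering where the parent's "32 lanes (64gbit/sec)" figure comes from, here's the arithmetic (assuming PCIe 1.x signaling rates and 8b/10b line coding):

```python
# Derivation of PCIe usable bandwidth per lane (PCIe 1.x assumed):
# 2.5 GT/s raw signaling, with 8b/10b coding spending 10 line bits per
# 8 data bits, leaves 2.0 Gb/s of usable bandwidth per lane per direction.
gt_per_lane = 2.5            # GT/s raw, PCIe 1.x
coding_efficiency = 8 / 10   # 8b/10b line coding overhead
usable_gbps_per_lane = gt_per_lane * coding_efficiency   # 2.0 Gb/s

for lanes in (8, 16, 32):
    print(lanes, "lanes:", lanes * usable_gbps_per_lane, "Gb/s")
# 32 lanes works out to the 64 Gb/s the comment above mentions
```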
