Optical Fiber Capacity Growth

kastaverious writes: "I found this on Scientific American. It talks about developments in all-optical switching and the growth in capacity of optical fiber. The article has some interesting graphs of bandwidth demand and the growth in bandwidth availability. There is also a good explanation of some of the technical issues involved in increasing switching capacity, and efforts underway to overcome these problems." The article also has lots of good SciAm-style graphics. This short article at Janes sheds some light on the world of undersea cable laying, which recalls the article Neal Stephenson wrote for Wired a few years ago.
This discussion has been archived. No new comments can be posted.


  • From the article:

    Every day installers lay enough new cable to circle the earth three times. If improvements in fiber optics continue, the carrying capacity of a single fiber may reach hundreds of trillions of bits a second just a decade or so from now--and some technoidal utopians foresee the eventual arrival of the vaunted petabit mark.

    Good gravy! Too bad the ISPs will divvy that up into a billion megabit lines for you and me.


    Sometimes nothing is a real cool hand.
  • All those post-modern hippies buying fibre optic cable lamps for their home? This is most likely the cause of severely limited bandwidth on the Internet today. Combine that with the fact that each Katz posting uses up over 50 TB of bandwidth on a daily basis to transmit to Slashdot readers who blindly worship him, and we have some serious issues. There is a solution, though, and it's one some might call old-fashioned. But it will work: stop using up bandwidth to view pornography.
  • by LordArathres ( 244483 ) on Wednesday January 24, 2001 @12:38AM (#485544) Homepage
    One day we won't have internet lag.
    One day our connection will be several gigs/sec.
    One day there will be no keyboards.
    One day etc etc...
    This is all fine and good, and one day it will happen; however, I do not think that fibre optic cabling will let us do all of this. First of all, it is VERY expensive and not easy to repair if it sits on the bottom of the ocean. I think better satellite systems and/or wireless will be the future. But currently fibre is what we have, and we are making it work.
    It impresses me how much cable has already been laid and how much more will be. The cost and resources must be staggering. I've been on ships, and just trying to imagine laying cable behind the ship for THOUSANDS of miles is, well, and I quote...
    "Whoa"
    I still guess it is easier than laying down fibre optic networks on land because of all the construction etc. necessary.
    Until the day that we all have a direct fibre optic connection or a satellite connection (not the current crap satellites but good ones), I guess I'll be stuck with my 18 kb/sec DSL connection. I should not complain, as I remember back when the 2400 baud modems came out and then the 9600 was the revolutionary one that we HAD to have, even while thinking "How can they make this better?"

    just my 1:40 am half asleep at work opinion.
    Lord Arathres
  • by Anonymous Coward
    Will fiber optics ever make it (meaning in the next 5 to 10 years) to the last mile? That is, will fiber optics be laid down as a replacement for copper telephone wires and coaxial cable lines anytime soon, on that last stretch immediately before the home?

    Anyone have a supported answer?
  • Good gravy! Too bad the ISPs will divvy that up into a billion megabit lines for you and me.

    Why is this bad? What's wrong with broadband connections for everyone?
  • I've always hated how you can't easily repair broken optic lines... Unless they address that, I assume laying thousands of miles of lines will be a daunting task...

    Copper wire (which is hardly fragile) gets broken seemingly often... I would assume this would only be worse with optic lines...

    Of course, this is my "I'm about to go to sleep, can't use my moderation points since not too many people have posted, and I would really like some good discussion" post.
  • I've been following this whole HDTV thing, but I just don't get it. I mean, why spend $1000+ on a TV that you can only use in certain areas and that costs way too much money? I don't NEED an HDTV, I don't WANT an HDTV; I have a hard enough time leaving my house as it is with just computers. Had I a good TV, I might never leave.

    Depending on who you are that might be a good thing.

    Lord Arathres
  • If you've got the money for it: sure! There was an article about (filthy rich) geek houses on /. a few weeks (months?) ago. There was fiber all over. And where I live (Belgium) any company can buy/lease an optical connection from various telcos. I guess they would sell it to a private person too (if you've got the money, of course :-). One example: http://www.kpnbelgium.be [kpnbelgium.be], or more directly, SDH leased lines [kpnbelgium.be].
  • by Anonymous Coward
    How do we show our appreciation? By worrying about the resolution on our HDTV unit.

    You mean by challenging the brains that God took the time to design? By showing our appreciation by occupying them with things like HDTV, and using the resources God gave us to make those designs happen? Somehow, I think even God doesn't design every snowflake; he just wrote a really good algorithm for making them. (In Perl, of course.)

  • hey, my parents had one of those lamps somewhere. I wonder what happened to it.
  • What would be your criteria, then, for deciding who should be allowed to have a broadband connection? Would it really be practical or fair to implement these criteria?
  • by Devout Capitalist ( 94813 ) on Wednesday January 24, 2001 @01:24AM (#485553)
    The article misses the key technologies of the future, summarized in the 'smoke and mirrors' story I came up with over at Sun Microsystems. We used this to talk about honking bandwidth, the need for big servers, and why a portable language like Java makes sense.

    DWDM is a start, but there are two major problems:

    • Smoke: Right now we can throw a lot of bandwidth across a long-haul fiber, but these systems use expensive lasers that run only one or two protocols. There are a lot of separate networks: HTTP/TCP/IP, SONET, some voice stacks, even Telex. Each of these networks has its own protocol stack, right down to some fiber-based Ethernet standard or hacked-up 1990s protocol. The best solution is to make a 'smoke' box that allows splitting by frequency, so that I can run a dozen frequencies as SONET, a dozen as voice, and twenty as TCP/IP. The magic 'smoke' box splits the incoming fiber into several separate fibers, each carrying a distinct set of frequencies that can leverage existing equipment. By combining different inputs, I can use a single long fiber for multiple networks. One order of magnitude.
    • Mirrors: This is the area where Lucent is making some progress. I need to do some nifty tricks with routing, or my gross bandwidth buries my useful bandwidth. All the ATM switch fabric with IP cache in the world won't help if I need to cross the optical/electrical boundary for every packet. A 'mirror' could be the simple stuff from Lucent, using a physical switch to reimplement timesharing (1 cycle for SF to NY, 1 for SF to Boston, 1 for San Jose to NY, ...). The mythical mirror solution is to hit a lattice with a signal such that its reflection property sends it to a different destination. You would only need to cross the boundary for the destination part of a packet or routable stream. This 'mirror' magic would be an independent improvement from DWDM or 'smoke'. Most likely, you would use DWDM, split to fibers with smoke, and route with mirrors. Another order of magnitude; maybe two. (A rough sketch of the 'smoke' idea follows this list.)
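    A minimal sketch of the 'smoke' idea, in Python: a toy model where each wavelength on an incoming long-haul fiber is tagged with the protocol it carries, and the 'smoke' box fans the wavelengths out to one output fiber per protocol family. The wavelengths, protocol names and payloads are invented for illustration, not taken from any real product.

    ```python
    # Toy model of the 'smoke' box: split one DWDM fiber into per-protocol fibers.
    # Wavelengths, protocols, and payloads are made-up illustrative values.
    from collections import defaultdict

    # Each channel on the incoming long-haul fiber: (wavelength_nm, protocol, payload)
    incoming_fiber = [
        (1550.12, "SONET",  "OC-48 frame"),
        (1550.92, "TCP/IP", "packet 1"),
        (1551.72, "voice",  "DS3 trunk"),
        (1552.52, "TCP/IP", "packet 2"),
    ]

    def smoke_box(channels):
        """Demultiplex by protocol: one output fiber per protocol family,
        so each output can feed existing SONET/voice/IP gear unchanged."""
        output_fibers = defaultdict(list)
        for wavelength, protocol, payload in channels:
            output_fibers[protocol].append((wavelength, payload))
        return output_fibers

    for protocol, channels in smoke_box(incoming_fiber).items():
        print(protocol, "->", channels)
    ```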

    Finally, give up on rewiring the last mile. The DSP and other signal processing tricks will get faster and cheaper more quickly than any solution that requires rewiring. It makes financial sense to swap end point electronics rather than rip open walls. You may see many more COs making shorter runs to the houses, but either existing coax or twisted pair into the house will carry our future bandwidth. (Thanks to Brent and Richard for convincing me.)

    I miss Sun, they had more interesting problems than running a non-profit. See the non-profit at TrueGift Donations [truegift.com].

    Cheers!

    Charles

  • Take another look at how fast high tech has grown. 100 years ago the first radio broadcast over 100 miles took place. Since then it has gone from radio telegraph to teletype to voice to TV to color TV to stereo FM to stereo TV to packet radio to satellite TV to digital direct-broadcast TV to cell radio (phones, GSM and trunked radio) to digital voice phones to satellite phones.

    Now project forward 10 years. 20 years ago direct-broadcast satellite on a small dish for the home was thought impossible.

  • What would be your criteria, then, for deciding who should be allowed to have a broadband connection? Would it really be practical or fair to implement these criteria?

    Well, first, we're gonna need some sort of moderation system, so that we can rate the worthiness of people's Internet usage... then we can set up a kind of 'karma' system for determining who should get how much bandwidth...

    - cicadia

  • At least here, the major bottleneck for web connections is the "last mile" of copper wire between the user and the optical fibres already used by the phone companies. So what would really help is bringing down the cost of fibre (or ADSL) installation, and I think the major cost of fibre installation is the wages of the people doing it. More fibre capacity will not bring down the cost of getting a fibre connection; once connected, one simply gets more bps than before.

    So this will not give web access to those who cannot afford it now. I think wider web access is more important than more bps. Also, this will not increase the quality of the web, except for those who 'download large image files from the busiest servers of the web'.

  • by Arkleseizure ( 251525 ) on Wednesday January 24, 2001 @01:35AM (#485557)
    They keep referring to unlimited bandwidth in this article. People seem to fail to realise that minds operate in an infinite-bandwidth environment, and that any resource which can be measured in bits or bits/s can easily be consumed by a person. Even if you cover the entire globe in optics, with optical switches and routers with mind-boggling information rates, will that cope with every high-res videoconference call, every TV broadcast and movie on demand, every book sold, every office which ceases to exist physically and becomes virtual, etc., etc.? Unlimited, my arse.
  • Yupyup... very possible... in fact, in The Netherlands the so-called GigaPort project is already doing trials with 100 Mbit/s optical connections to the home for a cost price of around 20-30 guilders. That works out to roughly 15-20 US dollars... I want one!! ;-)
  • Hmm if memory serves, our (still quite monopolistic) British Telecom (BT) was toying with the idea of fibre to the home, as was our government...

    However, since they (BT) spent so long dragging their feet over unbundling the local loop for ADSL, I wouldn't bank on bits of glass ever poking through the skirting board...

    While they can make money feeding us incremental improvements why do anything else?
  • That's the point! Wider, easier and cheaper access to the web. It's almost ridiculous how information technology is hampered in Germany by the monopolistic Telekom. They are unable to provide proper installation services for any communication technology (DSL, ISDN, analog, doesn't matter), and they even want money for this incompetence. And they stop any other infrastructure provider that shows up and might do better. As a result there is still no real flat rate in Germany. And this in a technically progressive country (at least it once was)! I only want a 56k flat-rate connection. Then I would be able to use apt with Debian FTP servers or use online help properly. I don't want streamed porn videos and GBs of MP3s. I just want the time to read without paying for every word.
  • HTTP isn't really a protocol separate from TCP/IP; on the contrary, it's a protocol most commonly run on top of TCP/IP.

    Having only one protocol at the bottom is only a problem if it's impossible, or undesirable, to wrap the other protocols in it.

    Witness the many articles here on Slashdot mentioning odd or silly tunnelling schemes like TCP/IP over DNS. I'd not be the least bit surprised if transporting SONET over IP works fine, and even if it doesn't, transporting the payload of SONET over IP shouldn't be impossible.

  • This is of course true: cabling is nearly impossible to repair on the bottom of the ocean - it always was and always will be. So broken cable is not repaired; if it breaks, you lay new cable - it's the cheapest and easiest way. Look how much it cost to lay the railroads or the initial electrical grids. Costs will fall as the networks grow, and when the costs of switching and cabling fall sufficiently, bandwidth will drop in price and in many ways will be free in near-unlimited quantities.
  • I think better satellite systems and/or wireless will be the future

    Too much latency in satellite communication vs. fiber optics.

    -jerdenn

  • Late last fall there were some TV reports about a new Toronto housing development that had fibre installed in each house as standard.

    The report did not go into detail about the hardware required to use the fibre or where/how it tied into the local service provider.

    Answer: new construction within five years will probably have fibre installed. Existing homes/developments will likely not get the technology for 10-15 years due to the rewiring required.

  • http isn't really a protocol separate from tcp/ip, on the contrary it's a protocol most commonly run on top of tcp/ip.

    While you are correct that HTTP is most commonly run over TCP/IP, please note that HTTP is completely separate from TCP/IP.

    RFC 2068 (HTTP/1.1) - "HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used"

    -jerdenn
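    A minimal sketch of that point, assuming Python's standard http.client: HTTP spoken over a Unix domain socket instead of a TCP connection. The socket path is hypothetical; any HTTP server listening on it would do.

    ```python
    # Sketch: HTTP over a Unix domain socket rather than TCP/IP, illustrating that
    # HTTP only presumes a reliable byte stream. The socket path is hypothetical.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")  # host only ends up in the Host: header
            self.socket_path = socket_path

        def connect(self):
            # Swap the usual TCP connect for a Unix-domain stream socket.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/tmp/example-httpd.sock")  # assumes a server listens here
    conn.request("GET", "/")
    print(conn.getresponse().status)
    ```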

  • It mentions huge bandwidth usage by 'metacomputing' and 'web agents'. What IS metacomputing and what ARE web agents!?!
  • I agree that 'unlimited' bandwidth is a misnomer, but I also believe that so would saying we live in an 'infinite bandwidth' environment. Maybe 'incredibly massive, even by ten years from now, way way outside anything we can do for just one person', but not infinite. You have a limited number of cells in your eyes, ears, tongue, nose and skin. They fire at speeds lower than some number I can't state (because I don't know it)...
    Or put another way, there is a finite number of particles either in the universe, or that we can interact with (my fall-back position).

    My suggestion: replace 'unlimited' in the article with 'unimaginable [to most non-dreamers]' or 'incredible [for 2001]'.
  • by grammar nazi ( 197303 ) on Wednesday January 24, 2001 @04:13AM (#485568) Journal
    Everyone on Slashdot thinks that the cost of laying new fiber optic cable is the only cost associated with cables. Let me inform you:

    1. Copper cable is heavy. One mile of copper cable weighs a few thousand pounds, compared to one mile of optical fiber, which weighs under one hundred pounds (depending on the type).

    2. When used in telecommunications systems, copper wire needs repeater stations every 2-3 miles. These are stations that people have to routinely check and fix when something breaks. Fiber optic, in contrast, only needs one repeater station every 300-500 miles.

    3. I forget the actual figures, but you can send more bandwidth over one little optical fiber than you can send over a large-diameter bundle of copper wires.

    4. Glass is cheaper than copper. Once the manufacturing technology of glass fibers catches up to that of copper wire, the price of optical cable will be lower than that of copper.

    Finally, I fail to see how copper wire is any easier or cheaper to repair than optical fiber when it is on the bottom of the ocean. This is an argument for a wireless system, but I think that there would be too much latency in a wireless system.
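    A back-of-the-envelope comparison using the figures above; the 3,000-mile route length is just an illustrative assumption, and the weights are the rough per-mile numbers from point 1.

    ```python
    # Rough copper-vs-fiber comparison using the figures from the post above.
    # The 3,000-mile route length is an assumption for illustration only.
    route_miles = 3000

    copper_repeater_spacing = 2.5   # one repeater every 2-3 miles
    fiber_repeater_spacing = 400    # one repeater every 300-500 miles
    copper_lbs_per_mile = 3000      # "a few thousand pounds" per mile
    fiber_lbs_per_mile = 100        # "under one hundred pounds" per mile

    print("copper repeaters:", round(route_miles / copper_repeater_spacing))  # ~1200
    print("fiber repeaters:", round(route_miles / fiber_repeater_spacing))    # ~8
    print("copper weight (tons):", route_miles * copper_lbs_per_mile / 2000)  # 4500.0
    print("fiber weight (tons):", route_miles * fiber_lbs_per_mile / 2000)    # 150.0
    ```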

  • by Cato ( 8296 ) on Wednesday January 24, 2001 @04:17AM (#485569)
    You are making this more confusing than it really is, by not using any technical terms that might make sense (e.g. add-drop multiplexer, optical switch). A shame, since your points are valid...

    - 'Smoke' - it's hard to work out what you are talking about here - seems like the 'smoke' box is an add-drop multiplexer for DWDM, which puts multiple frequencies (aka wavelengths) from various input fibres on a single output fibre. DWDM is inherently multi-protocol of course, as each wavelength can carry a unique protocol.

    - 'Mirrors' - this is just one of the many possible all-optical switching technologies under development. These include MEMS (tiny mirrors that can reflect light onto different fibres), electro-holographic Bragg gratings (completely solid state and with useful testing/monitoring features), and even a bizarre technology from Agilent that uses inkjet techniques to blow bubbles in and out of place, thereby performing the switching.

  • A good site for further reading is www.optastic.com [optastic.com].

    Nicely written analogies making many of the optics issues much easier to grasp.

  • ... all the world looks like a nail.

    I was discussing SciAm's article about optical switching technology (which I read weeks ago in dead-tree format) with a friend who did his M.Sc. in laser crystallography. He pooh-poohed the bubble refraction/reflection switches, the nano-mirror switches, the delay loop/phase change switches... the only thing that will work, he insisted, is a set of specialized, pre-pumped lasing crystals that would boost signal power as they change the signal direction from one light path to another by refraction.

    I observed that the crystals he was talking about would only work for one wavelength (so you couldn't stack signals) and in any case don't exist now, and won't exist for some time. "Doesn't matter," he said, "that's still the only technology that'll work."

    He doesn't work on lasers and optics anymore, but if that's the kind of attitude the telecom companies have, I'm glad there's more than one group trying to solve the problem.
  • Repairing a satellite is as difficult, if not more difficult, than repairing a transatlantic cable. Fiber is dirt cheap - a lot cheaper than coax/copper right now when bought in serious quantities. Splicing fibre is brain-dead easy. I have used the box that fuses the fiber: lay in end 1, lay in end 2, press fuse, done. (The box costs $40,000 now, but hey.) And fibre is not intended for the end user; it is for infrastructure.

    Internet lag will always exist, as the fiber will always be faster than the switches and routers.

    I hope that our connections will not be several gigs per sec. 100baseT is more than anyone at home would ever need; one can watch HDTV video via that link while talking and surfing. (NOTE: your set-top boxes will be more like TiVos with a digital cable box and cable modem combined. This is coming VERY soon... I've seen the beta units in testing.)

    No keyboards? Why? I can type faster than most people can talk.

    Changes will happen... but most everything is already here. Just for the rich right now.
  • All those post-modern hippies buying fibre optic cable lamps for their home? This is most likely the cause of severely limited bandwidth on the Internet today.

    You forgot the fiber-optic christmas trees, and the little handheld flashlights with the fibers that you wave at night events :)
  • A quick Google search reveals that metacomputing seems to be related to the Grid, i.e. it's a way of gluing disparate systems together (usually supercomputers) with a single set of middleware that makes it much easier to write large-scale distributed applications.

    Such systems will be used for all kinds of scientific calculations, as well as telemedicine, distributed virtual reality caves, and so on. The Grid will eventually impinge on us all (e.g. running massive simulations to make a medical diagnosis as you sit in your doctor's surgery).

    Depending on the sort of application, the bandwidth demands can be enormous.
  • When I say an infinite-bandwidth environment I really mean an analogue one. True, nerve cells either fire or don't, but the time at which they fire, and the factors at a synapse deciding whether a neuron will fire or not, are continuous variables. I think a human is capable of responding to a signal without converting it to information. I believe this is the level on which instinctive and intuitive responses occur. On that level, in order to make a group of humans respond identically twice, you would have to give them the exact same (analogue) input signal both times. To perfectly specify an analogue signal requires an (OK, it's a dodgy use of the word, but...) infinite amount of information (unless the signal is a combination of elementary functions or something).
  • Internet growth will continue until there is interactive video of broadcast-TV quality or better everywhere - office, school, home, vehicle. This is the natural human-communications-computer interface. We still have a way to go to figure out computer-video interfaces. Text interfaces are a passing form, mainly for academic use.
  • No company or group of companies can afford such a capital outlay in today's short-term-obsessed stock market. Investors would severely punish any stock that staked its profits on an infrastructure plan that would take at least a decade to pay off.

    The days of home-installed telco equipment are coming to an end. It is expensive and problematic for telcos to maintain equipment in consumers' homes, be it for phone or data. Added to which, rapidly changing standards prohibit any telco from dedicating its strategy to any particular technology. Consider the current state of optical networking - SONET is currently the main standard, but probably on the way out in the next few years. Hence no telco is going to roll out a SONET network to consumers' homes, because much of the equipment driving the network would become obsolete.

    The better approach for voice and data is wireless. Not only does this allow location independence, but it also allows the telco to avoid the costly business of maintaining the line into the consumer's home.

  • The authors of this article fail to address the obvious issue with the ongoing build-outs of fiber networks - the imminent oversupply of fiber.

    In a bandwidth-starved world it seems odd to think that there is a glut of fiber, but there very soon will be, if there isn't already.

    If all of the fiber in the ground right now were lit, the cost of transmission would effectively drop to zero - it's just a matter of who can ride out the inevitable shakeout in the market and consolidate the networks of those who can't compete. In the mid-term, consumers could actually see reduced capacity as the market consolidates.

  • Hmm if memory serves, our (still quite monopolistic) British Telecom (BT) was toying with the idea of fibre to the home, as was our government...

    I live on a new housing estate (my house will be 3 years old this month). When the BT engineer came to 'install' the phone[1], I asked him whether it was copper or fibre to the house. He said it was copper all the way from the exchange; they tried to lay fibre in the next estate over, but they couldn't get it to work.

    So BT are not only monopolistic and slow, they are also incompetent[2]



    [1] 168 pounds to 'install' my line, all he did was come in, stick a handset in the socket and make sure there was a dial tone. Amazing what you can charge when there is no competition[3]

    [2] Okay, so the estate was started _after_ BT announced they would do ADSL, so you would think that all new houses would be capable, wouldn't you? No such luck - too far from the exchange and too much line impedance.

    [3] The only services allowed to put connections into a new property are gas, electric, water and BT. No other telco is allowed, no cable company is allowed. They have to wait until the top surface is put on the road, and then come and dig it all up again...

  • This company at:
    http://www.fibercabletohome.com

    is in the starting blocks and just about ready to start their run.

    I think the site isn't quite ready for primetime, there are still a few uncompleted links.

    Their main financial premise seems to be the equity value of owning the 'last mile' to the customer. A customer on fiber is not likely to switch back to cable or DSL, so the company can capitalize on the long-term recurring revenues from the customers.

    Maybe it will work for them, who knows?


    To the Moon!
    http://www.beefjerky.com
  • The author did have a sense of humor, however:

    PowerPoint slides at industry conferences emphasize why the deluge is yet to come.

    I think he hit the nail on the head, considering my only PowerPoint effort yielded a 75-megabyte monster. When you understand this, 'metacomputing', 'web agents' and IT will all make sense.

  • The article says 'every 120 miles an optical signal has to be converted to electrical...' This is not correct. With optical amplifiers, you can put 5-6 of them in line to amplify a signal before you need to regenerate it, so you only have to convert optical to electrical more like every 600 miles or so.
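    A quick check of that arithmetic, using only the spacing and amplifier counts from the comment above (the counting convention is rough, but it reproduces the "every 600 miles or so" figure):

    ```python
    # Distance between optical-electrical-optical regenerations, given the
    # comment's figures: a boost every ~120 miles, 5-6 optical amps before regen.
    span_miles = 120
    for amps in (5, 6):
        print(amps, "amplifiers -> regenerate every", amps * span_miles, "miles")
    # 5 amplifiers -> regenerate every 600 miles
    # 6 amplifiers -> regenerate every 720 miles
    ```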
  • The cost of patching in a new line to even a small number of homes is astronomical.

    Unless there is a major telco behind this, or another form of long term capital, this cannot take off.

  • Wireless doesn't need to work like cell phones. Surely your own company has been involved in the trials of shooting light over the open air. It's optical without the fiber, it's in development right now, and it refutes all of your spectrum arguments.

    Cable TV was the last great wiring build-out to consumers - no one is taking on that cost ever again. The last mile will be wireless.

  • The 'glut' exists because it is easier for providers to maintain prices, rather than cut them.

  • The decision/need to regenerate the signal is strongly dependent on many factors (error tolerance, fiber type, cost of transmitters, number of wavelengths, etc.) For submarine systems regeneration may only occur after 1000's of km, but the cost of these systems is much higher than land based systems. There exists a data rate/distance/cost trade-off.

    If you are in the SF bay area and interested in this subject, Photonics West is currently happening at the San Jose Convention Center (through Thursday.) For information check here [spie.org].

  • Any such solution would be effectively limited to line of sight. While this would work wonderfully in (eastern) North Dakota, most places have things like hills, trees and buildings that would get in the way. A satellite system would theoretically be able to reach everywhere, but would be at the mercy of the weather. In other words, optical communication without fiber is extremely limited.
  • All of those reasons are valid, and possibly "Everyone on Slashdot" doesn't understand this, but the real reason fiber optics hasn't completely replaced copper is that it truly is more expensive. One of the main reasons it is so expensive is that they can't make enough of it, fast enough, to meet the demand.

    All major communications companies want to be laying fiber. It's widely accepted that it will be the communications medium of the future. I worked for Lucent Technologies as a summer intern, directly in optical switching and networking. I was told on a number of occasions that Lucent couldn't even produce enough fiber to meet its internal needs, much less fill the huge back order on cable.

    This is why fiber optics are expensive - especially when you look at multi-mode versus single-mode fiber, where the price increase is huge. They simply can't make it quickly enough to meet demand.

  • Why wireless won't work

    Wireless communication is great for cell phones and GPS and a bunch of other things, but when you start talking monstrous bandwidth, you need cable. Say, for instance, that 5 million New Yorkers want internet connections of 2 Mb/s each, and that your wireless technology can pack 10 bits per Hertz (that's really tight packing!). 5 million x 2 million / 10 = 1x10^12 So to give those folks their internet access, you need 1 THz (terahertz) of electromagnetic spectrum. But the whole usable spectrum is only about 300 GHz, and the FCC probably wants some of it for little things like radio stations, air traffic control, military communication, etc. ;) Wireless connections won't work because there are too many people.

    I don't agree with your example. You are assuming that all 5 million New Yorkers are simultaneously connected to the same base-station. Your example seems to "prove" even cell-phones are impossible. In reality, a network of base-stations, each connected to the others and to the backbone by fiber, would be used to implement the last mile. If one base-station were used per 1000 customers, then only (1e3 * 2e6) / 10 = 2e8 Hz is required. So, using your assumption of 10 bits per Hertz, only 200 MHz of bandwidth is needed to implement the network city-wide. Agreed, it would be damn hard to find an economical A/D converter if we needed a high SNR, but it certainly IS possible.
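    To make both sides of that calculation concrete, here is a small sketch using only the numbers from the two posts above (5 million users, 2 Mb/s each, 10 bits per Hertz, 1000 users per base-station); it ignores frequency-reuse planning between adjacent cells.

    ```python
    # Spectrum needed with one giant base-station vs. a cellular layout,
    # using the figures quoted in the thread above.
    users = 5_000_000        # New Yorkers wanting service
    rate_bps = 2_000_000     # 2 Mb/s each
    bits_per_hz = 10         # generous spectral efficiency
    users_per_cell = 1000    # assumed customers per base-station

    single_station_hz = users * rate_bps / bits_per_hz     # 1e12 Hz = 1 THz
    per_cell_hz = users_per_cell * rate_bps / bits_per_hz  # 2e8 Hz = 200 MHz
    cells_needed = users // users_per_cell                 # 5000 base-stations

    print(f"one base-station: {single_station_hz / 1e12:.0f} THz of spectrum")
    print(f"per cell: {per_cell_hz / 1e6:.0f} MHz of spectrum (reused in every cell)")
    print(f"base-stations needed: {cells_needed}")
    ```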

  • by Anonymous Coward
    Actually, I think it's very reasonable to assume that everyone will be online at once.

    For starters, people spend a lot of time on the internet. The average length of a phone call has changed in the last few years from something like 10 minutes to something like an hour, and websurfers are the reason.

    But websurfing aside, once services like video-on-demand or plain old TV are offered over these networks, people will stay connected all the time. And if you doubt the potential market for those services, think how often people rent movies, and how much more often they'd rent if they didn't have to leave their Barcaloungers!

  • You don't seem to understand how a wireless system is partitioned.

    All 5 million people could be online at once, but they are not served by a single base-station. There would be one base-station for every 1000 or so people. So, we would need 5e6/1000 = 5000 base-stations for all 5 million people to be online at once and still only use 200 MHz of bandwidth.

    Cell-phone networks work the same way. Each "cell" is served by a base-station that can handle maybe 100 simultaneous calls in its area. When a call is made from cell-phone to cell-phone, each cell phone is actually communicating with its nearest base-station and the two base-stations are communicating with each other over a fiber-optic link. Because base-stations dynamically assign bandwidth to individual cell-phones requesting a connection, when two cell phones are talking to each other they could be on totally different frequencies.

    In summary, when a wireless network is partitioned using cells, the bandwidth requirement is dependent on the number of users per base-station, not on the total number of users. Therefore, increasing the number of users from, say, 10000 to 5 million, only requires additional base-stations, NOT additional spectrum.

  • A number of wireless alternatives coming on the market now are line-of-sight. Starband and Sprint do require clear access to the satellite (not as trivial as you think - installation folks told me that in urban areas a substantial number of locations cannot receive the signal).

    I think it would behoove you to do some research on the system I am discussing - it has been used successfully in trials to transmit in the gigabit range.

  • The article also has lots of good SciAm-style graphics.

    Since when are SciAm's graphics good? I've always found them obtuse and esoteric. Other, less "elite" science publications have a graphics style that communicates concepts better.

    Is anyone with me on this?

  • Something I don't think you've considered is population density. In some places you could get away with 1000 people per base station, with those stations fairly far apart (transmitting with a decent power output). Somewhere like New York or Tokyo, you're talking about a thousand people in a couple of blocks. That's a lot of base stations close together. To keep them partitioned you would need to reduce the transmission wattage to the point where some people would have to settle for slower speeds due to signal loss. Wireless communications are way too limited to enable 5 million New Yorkers to get 2 Mbps connections all at once. If you don't agree, you've never gotten a "Network Busy" message while trying to make a cell phone call in a metropolitan area.
  • Sure - I did 100 splices in one day. Your procedure is correct for the old, outdated system. Try the newer boxes that do everything for you (plus they have a nice LCD display so you can watch the fiber being fused and measure the splice loss).

    The new stuff makes fusing super easy. You just need the money to get to that level.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...