Optical Fiber Storage 71

TypeCast writes "When you've got Canada's elbow room, perhaps you can squeeze in a 'disk drive' 5,000 miles in diameter. But the plan by Canada's CANARIE researchers for a Wavelength Disk Drive (WDD) within optical networks suggests all of Universal Music's library would still make for a tight squeeze as light-speed storage. Here's a white paper on the WDD for those who aren't afraid of MS Word documents."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    [judge]Do you know you are mortally nerdy?
    [geek]Yes, sir.
    [judge]I sentence you to 15 years in a federal ass-pounding facility.
  • by Anonymous Coward
    [judge]Did you know we can externally correlate any router logs in existence against your post times?
    [geek]No sir.
    [judge]Did you realize we really hate jokes aboot ass-pounding?
    [geek]Oh, no sir!
    [judge]Did you know that I caress statues?
    [geek]yes sir.
    [judge]Did you finish your supper?
    [geek]fuck no, sir.
    [judge]Where is the stolen data?!
    [geek]it travels around as light waves, sir.
    [judge]This is no excuse. Pokey
  • by Anonymous Coward on Thursday February 08, 2001 @11:49PM (#444315)
    Our friendly geek is once again in a state court...
    [judge] So does your controversial web page reside in California?
    [geek] It resides in California 15 milliseconds every 350 milliseconds, your honor.
    [judge] Pardon me?
    [geek] My web page is served on optical fiber storage. It goes around the country in a big circle.
    [judge] B-but it's stored somewhere in California, r-right?
    [geek] No sir, it's encoded in photons travelling at the speed of light, your honor.
    [judge] [thinking for a few seconds] Goodness, I'd rather be put on a simple divorce case.
  • Wasn't there an article on here a few weeks ago about how scientists had managed to stop light and suspend it until they released it by using some gas? Are we going to see that be used for storage? :-)
  • Crowded? If you put every human on the planet in Texas, everyone would have more square feet of space than you have in your dorm room. Don't get out much, do you?

    I don't believe you, not even a tiny bit. According to Britannica [britannica.com], Texas is 266,807 square miles. That's 1,408,740,960 square feet. According to Ask Jeeves [askjeeves.com], the world's population is currently at 6,127,565,379 people. Dorm rooms are usually about 10 feet by 12 feet and are designed for two people, which works out to about 60 square feet per person. Without going into exact calculations we can immediately see that there would be under a square foot per person which directly contradicts your statement that each person would have more room than he does in his dorm.

    Now if you take everyone on the planet and cram them into Ontario, Canada... There is significantly more breathing room! Ontario occupies 412,581 square miles, or 2,178,427,680 square feet. That's almost twice the square footage per person. Ontario is only the second biggest province in Canada... Kinda puts Texas to shame considering it's twice as large!

    Let's expand to the entire U.S. If you were to cram every person on the planet into the United States (3,679,192 square miles, or 19,426,133,760 square feet) you end up with a mere 3.17 square feet per person, or about 1/20 the room you'd have if you were in his dorm (assuming he has an average dorm room as given above).

    Texas is the biggest state, sure... That don't mean shit when you're talking about six billion people though.

  • Simon Travaglia has gone over this idea [iinet.net.au] previously in his BOFH series.
  • So, uhmm, the idea here is to waste a bunch of bandwidth for 10GB of storage. Hell, I just bought a 46GB drive for $150. And what happens when there's more demand on the circuits? Do they start deleting data?

  • This has been done before, but not with light. With sound! That's right, back in the 1950's they actually made memories from long mercury-filled tubes conducting sound impulses. The impulses would be amplified and "squared-up" each time around. They could only store a few thousand bits that way, but you could always have several of them running in parallel.

    -Ed
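For the curious, the recirculating principle is easy to model. This toy, noise-free sketch (my own illustration, not from the white paper) treats the tube as a fixed-length queue in which each emerging bit is regenerated to a clean 0/1 and re-injected:

```python
from collections import deque

class DelayLine:
    """Toy model of a mercury delay-line memory: bits circulate through
    a fixed-length delay and are "squared up" on every pass."""
    def __init__(self, length_bits):
        # noise-free model; a real tube re-shaped an analog pulse each pass
        self.line = deque([0] * length_bits, maxlen=length_bits)

    def tick(self):
        # one bit emerges, is regenerated to a clean 0/1, and is re-injected
        bit = self.line.popleft()
        self.line.append(1 if bit else 0)
        return bit

    def write(self, bit):
        # overwrite the bit that was just re-injected on this tick
        self.line.pop()
        self.line.append(1 if bit else 0)

mem = DelayLine(8)
for b in [1, 0, 1, 1, 0, 0, 1, 0]:
    mem.tick()
    mem.write(b)
# after another full circulation the same pattern re-emerges in order
out = [mem.tick() for _ in range(8)]
print(out)  # [1, 0, 1, 1, 0, 0, 1, 0]
```

The same structure models the fiber loop: swap "tube length in bits" for "bandwidth times propagation delay".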
  • by yabHuj ( 10782 ) on Friday February 09, 2001 @12:51AM (#444321) Homepage
    The concept is absolutely not new. Every FDDI ring topology could be used for/with this. Whereas Token ring only allows one single packet to circulate within the ring, this "new" concept allows, no, requires the ring to be filled up.

    The "new" idea was not the ring as network topology nor the storage itself (has been done acoustically as stated above), but to coordinate client-server clusters with this. But clusters organized in ring-topology are not new either.

    In the proposed topology the master server just injects "to-be-done" packets into the ring. The clients pick up (and remove) one packet from the ring each time they want to start crunching the next work packet. The other packets will be circulating the ring until solved.

    Main problem is that one will be either wasting a lot of transmission capacity for idle data circulation - or be running into capacity problems. There is a reason why most detail work when designing clusters goes into designing the optimum network architecture for the specific problem...
  • Oh man... just imagine...

    /dev/fibre0 contains a file system with errors, check forced.
    /dev/fibre0:
    Inode 23271 has illegal blocks.
    /dev/fibre0: unexpected inconsistency; run fsck manually.

    Every time some farmer puts a hoe through a cable...
  • What with the cost of optical networking dropping by a factor of 2 every 100 days or so (see Scientific American, Dec 00), soon that 10G will be 100T and very accessible. Also, with the advent of being able to stop light pulses and restart them (see Nature, 25 Jan 01), we can have a disk drive with yottabyte capacity (for a definition of lesser-known SI units, see http://physics.nist.gov/cuu/Units/prefixes.html [nist.gov]).
  • | Tolken ring actually allows for several
    | packets to circulate...

    Well, duh... Of course a Tolkien ring has to have several packets circulating at the same time.

    Three packets for the Elven-Kings under the sky,
    Seven for the Dwarf-Lords in their halls of stone,
    Nine for Mortal Men doomed to die,
    One for the Dark Lord on his dark throne
    in the land of Mordor where the Shadows lie.
    One Packet to rule them all, One Packet to find them,
    One Packet to bring them all and in the darkness bind them
  • Absolutely. In the mid-70's I worked with an SDS-920 which had a vector display unit using an acoustic delay line memory. Here are pictures of another delay line device: http://www.cs.rpi.edu/~collinsr/p203/ [rpi.edu]. Free space is even cheaper than fiber as a storage medium. If you bounce a 1 Gb/s bit stream off the Moon, you have stored something over 200 MB.

    All serial storage has an access time problem, however. You may have to wait a couple of seconds to get the data you want from the moonbounce recirculation.

    -Martin
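The moonbounce figure checks out roughly; the distance and rate below are the usual approximations, not values from the comment:

```python
# Rough check of the "moonbounce" storage figure
moon_distance_km = 384_400          # mean Earth-Moon distance (approximate)
c_km_s = 299_792                    # speed of light in vacuum
round_trip_s = 2 * moon_distance_km / c_km_s   # about 2.56 s

bitrate_bps = 1e9                   # the 1 Gb/s stream from the comment
bits_in_flight = bitrate_bps * round_trip_s
megabytes = bits_in_flight / 8 / 1e6
print(f"{round_trip_s:.2f} s round trip -> {megabytes:.0f} MB in flight")
```

About 320 MB is "in flight" at any moment, consistent with the "over 200 MB" claim.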

  • When you put data on this fiber ring, within a very short time all the computers on the ring have seen the data. So if you want a bunch of computers to cooperate on a job, this would be a great way for them to update each other on what they are doing.

    Isn't this almost the definition of DNS?
    All your base are belong to us.
  • by NateH ( 26302 )
    Wasn't there an old BOFH piece mocking this sort of idea?
  • by gattaca ( 27954 ) on Friday February 09, 2001 @01:34AM (#444329)
    I remember seeing an article about nano-motors that used vaporised water to move a piston that made a shaft rotate. A friend pointed out it was a steam engine. Just very small.

    Now people are talking about fibre optic delay lines as storage devices. Some of the earliest computers stored data as sound waves in mercury [cam.ac.uk] and
    nickel wires [science.uva.nl]. A speaker injected sound in one end, it was picked up by a microphone at the other, re-shaped and squirted back in.

    Same idea, different medium.

  • Crowded? If you put every human on the planet in Texas, everyone would have more square feet of space than you have in your dorm room. Don't get out much, do you?

    Oh, you're worried about a hundred years from now? Become a teacher. More education reduces population growth rates.

  • Make sure you don't make a backup using entangled photons...
  • "Texas is 266,807 square miles. That's 1,408,740,960 square feet."

    You multiplied the number of square miles times 5,280, the number of feet in a mile. But that's only the number of square feet along a one-foot-wide strip of a square mile.

    266,807 square miles times 5,280 feet (one side of a square mile) times 5,280 feet (the number of one-foot strips in a square mile) is 7,438,152,268,800 square feet. Now do the division by the number of people. 1,213 square feet per person.
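The corrected arithmetic can be verified directly (area and population figures as given in the thread):

```python
sq_miles = 266_807                  # Texas area from the parent post
feet_per_mile = 5_280
sq_feet = sq_miles * feet_per_mile ** 2   # square feet, not a one-foot strip

population = 6_127_565_379          # world population figure from the parent post
per_person = sq_feet / population
print(sq_feet, int(per_person))     # 7438152268800 1213
```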

  • That's not 'empty space.' That land has great value - some would argue even more than the richest apartment in NYC, or wherever else you can think of. Man destroys so incredibly much through self-righteousness without thinking about what is of true 'value.'

    A hundred years from now, when the earth's human population is sky-high, and there are more of us than there are roaches and rats, that 'empty space' might be all that's left for people to go to in order to get away from the insanity of constant, close space contact with other humans.

    Living in a dorm, I find such close-space contact near traumatic. It really takes a hit on my performance as a student and a human being. If you're like me, such a society would be unlivable, and would quickly drive you to insanity. (At least there are some cliffs nearby where I can get away from it all and relax every once in a while.)

    While magnificent skyscrapers are quite beautiful in their own way, they still can't compare with the natural beauty of millions of differently shaped and sized trees, all (mostly) unharmed by humans.

    -------
    CAIMLAS

  • And here's a working link [telus.net] to an HTML version. Dunno what happened to the original HTML version. Looks like a possible typo in the domain name (or the machine may have gone down). If that breaks, then you can try pulling it off my personal box. [bcgreen.com]

    This html translation was generated (blindly) by StarOffice [sun.com].
    --

  • If he was Canadian, he would have been Frozen Coward, not Anonymous Coward.
    --
  • A couple of things to remember:

    This protocol essentially eats bandwidth. If you have 10GB of storage available in 80Gbits of bandwidth on CA*net3, and you're using 5GB of storage, you're sucking 40Gbits of bandwidth across the whole network before you even start to count "normal" data on the network.

    Granted, they mention making this bandwidth low priority, but you're still eating a LOT of bandwidth. Since it's across the whole network, it's actually more of a strain on the network than a 40Gbit point-to-point feed. This could, in some instances, affect the maximum burst bandwidth of the network for normal data and/or the data loss of 'normal' data when the saturation point is reached.

    The time savings is very theoretical. In fact, it's statistical. Mostly, what you're going to save is the cost of injecting data into the network. Beyond that, the savings you get improve as the space between nodes increases.

    example
    Let's take the example of a 100ms loop in Canada. The average delay for a piece of data in the loop is going to be 50ms. I've got an ADSL link in Vancouver. My ping time to UBC is 20ms. My ping time to UMontreal is 100ms. My ping time to what I presume is Telus's backbone appears to be 14ms. I'm going to presume that the 14ms is overhead (my distance to the loop). This gives me an average access time of 50+14=64ms for data in the loop.

    If my data originates in Montreal, then I'm going to save something like 100-64=36ms over calling the machine at UMontreal directly. (This presumes that the UMontreal machine has the data in RAM.)
    If my data originates at UBC, we have 20-64 = -44ms. In other words, I LOSE 44ms over directly contacting the UBC machine for the data. (Note that reducing my distance to the loop doesn't change the savings/loss, since it also reduces my distance to both UBC and UMontreal.)

    Mostly, where the savings lie are in the difference between sucking data out of the loop vs pulling it off of disk. You can also get some savings if the data is being essentially multicast. Those savings do not appear to be available to the current protocol which seems to remove a packet from the loop once a machine requests it. The data then needs to be re-injected for a second recipient.
    --
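The break-even logic above can be sketched in a few lines; all ping figures (in ms) come from the comment:

```python
# Ping figures (ms) from the comment; the loop round trip is 100 ms
loop_rtt = 100
overhead = 14                    # presumed distance from the ADSL link to the loop
avg_loop_access = loop_rtt / 2 + overhead    # 64 ms on average

def saving_vs_direct(direct_ping_ms):
    """Positive: the loop beats fetching directly from the origin host."""
    return direct_ping_ms - avg_loop_access

print(saving_vs_direct(100))     # UMontreal case: the loop saves 36 ms
print(saving_vs_direct(20))      # UBC case: the loop LOSES 44 ms
```

The loop only wins when the origin is farther away than the average position of the circulating data.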

  • by snookums ( 48954 ) on Friday February 09, 2001 @01:24AM (#444337)

    And here is the HTML version [vitrualave.net] kindly generated by freviewer [freeviewer.com].

  • Ordinary glass strands do not a fiber optic make. Look into the edge of a piece of ordinary window glass. Green, huh? Not that clear, really. Unless you roll it thin.

    Nor will the light stay confined to a simple glass strand. Not if it touches another piece of itself. Glass optical fibers are a core of one type of glass surrounded by another of a different refractive index, and light never reaches the inside edge of the fiber.

  • In May 1949, Maurice Wilkes' team at Cambridge University completed the "EDSAC" ("Electronic Delay Storage Automatic Computer"), closely based on the EDVAC design report from von Neumann's group.

    Instead of fiber it used 16 mercury delay lines that yielded 256 35-bit words (or 512 17-bit words) of storage.

    Source: Chronology of Digital Computing Machines [best.com]
  • That's the speed of light in vacuum. In solid materials, it travels quite a bit slower.
  • when that extra bandwidth is needed? This storage system utilizes unused network bandwidth, but network traffic load is rather dynamic throughout the day. Does it just start deleting files at random to make room for its primary function, network traffic? This whole concept seems terribly impractical unless a provider has absolute control over each point in their network and can manage this storage without causing latency on other traffic... I believe there would be considerable demand for such high-speed distributed storage for small-size, high-traffic, distributed database services.
  • Nah, they'll keep backups on a huge stack of 32 megabyte floppies...

    "Everything you know is wrong. (And stupid.)"
  • hey; with a data packet cache this could use simple rules:
    • 1) if the next in sequence packet is in the cache put it into the stream next. (move to next packet)
    • 2) if the next in sequence packet is in the stream repeat it in its place in the stream. (next)
    • 3) if the cache is full (this packet is out of sequence) repeat this packet in its place (next)
    • 4) if there's space for this packet (out of seq) in the cache, cache it for being put in sequence.
    after a few loops, everything will be in sequence. Of course, with access of the whole loop taking only 100ms, it might be okay to just cache the sequence you're looking for, and you'll have a worst case of 100ms access. This approach would be bad though for application across the whole network.

    -Daniel
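A loose sketch of those rules, treating packets as bare sequence numbers; the cache/stream structure here is a toy assumption, not the proposed protocol:

```python
def reorder_loop(stream, cache_size):
    """Toy model of the re-sequencing rules: out-of-order packets are
    cached if there is room and re-injected once their turn comes;
    otherwise they keep circulating for another loop."""
    total, out, cache, want = len(stream), [], {}, 0
    while len(out) < total:
        next_pass = []
        for pkt in stream:
            if pkt == want:                  # next-in-sequence packet seen in the stream
                out.append(pkt)
                want += 1
                while want in cache:         # drain cached packets that are now in order
                    out.append(cache.pop(want))
                    want += 1
            elif len(cache) < cache_size:    # out of sequence, cache has room
                cache[pkt] = pkt
            else:                            # cache full: packet circulates again
                next_pass.append(pkt)
        stream = next_pass
    return out

print(reorder_loop([3, 0, 2, 4, 1], cache_size=2))  # [0, 1, 2, 3, 4]
```

Even with a tiny cache everything sequences after a few passes, at the cost of extra circulations, which matches the "few loops" claim above.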

  • ...we can have a disk drive with yottabyte capacity (for a definition of lesser known si units, see http://physics.nist.gov/cuu/Units/prefixes.html).

    A yottabyte is 10 to the 24th power bytes; the binary counterpart, 2 to the 80th power bytes, is properly a yobibyte. I found it interesting how we have adopted the SI metric terms, which are decimal based, for information units, which are binary based. It is all explained here http://physics.nist.gov/cuu/Units/binary.html [nist.gov]

  • Connectivity can be lost and restored. Data lost on a fiber network cannot.

    But they aren't planning to store anything long-term on it! It's only intended for very short-term data, such as which computer is working on what part of a large job, or the results of one piece of the job that finished running.

    They only will have 10GB for the whole ring; that wouldn't be much for all of Canada if people try to store MP3 files on it!

    And anyway, if you are going to make a peer-to-peer, massively parallel computer, you need to make the system robust. Forget the backhoe; suppose a power failure takes out an entire town's worth of computers all at once?

    P.S. It would be serious overkill, but I keep picturing this being used to release Linux kernel 2.6! 10:00, it ships; 10:01, every computer in Canada has a copy...

    steveha

  • by steveha ( 103154 ) on Friday February 09, 2001 @12:29AM (#444346) Homepage
    They aren't doing this in an attempt to re-invent the hard disk. This is about peer-to-peer, massively parallel computation.

    SETI@home [berkeley.edu] works in client-server fashion: your desktop computer asks the main server for a chunk of data, then chews on the data and talks to the server again. This is massively parallel computation, but it isn't peer-to-peer, it's client/server.

    When you put data on this fiber ring, within a very short time all the computers on the ring have seen the data. So if you want a bunch of computers to cooperate on a job, this would be a great way for them to update each other on what they are doing. If you did it right, you would have massively parallel distributed processing: all the computers in Canada tied into a single InterComputer. And just as Napster [napster.com] can spread popular songs around where a single FTP server would be hammered, an InterComputer potentially could handle truly large computations that any single computer (or even Beowulf [beowulf.org] cluster) couldn't.

    Multicast data packets aren't new; that's why they said it takes only a few changes to try out their ideas. Multicast packets are currently designed to die fairly quickly so they can't clog a network up too much; these guys want the packets to go all the way around the ring.

    P.S. That joke about the backhoe chopping the fiber was only a little bit funny, and then only the first time. When a backhoe hits a cable today, half of Canada does not lose Internet service! It isn't a trivial ring; it has some redundancy.

    steveha

  • So I had an idea a few years ago. Why bother with SPEED of access? I really don't look at most of my files (like, why bother? they are really not important).

    Get myself a USENET site. Deliberately insert junk into messages and recirculate them. I think I could jam many gigabytes into the system without too many people noticing.

    Use steganography to hide the messages... Or, SPAM with the info hidden in the anti-SPAM defeaters. Or, encode into junk emails that will be bounced.

    Ok, it's silly. But I really have to do something about the 30GB of music that I am collecting but not listening to...

    Liberate my hard disc!

  • No, not really. With DNS, non-authoritative servers keep a cache of lookups, but the lookups don't keep floating around on the cable. If another non-authoritative server needs to look up the same record, it has to establish another connection.

    With this system, the data stays on the line. If it were used for DNS, that would mean something like the root servers would broadcast their _entire_ database, and the packets would keep circling the loop for a while until the root servers update them again. Any lookups, then, would only have to look at the packets already there; no connection necessary.

    Of course, 10GB isn't nearly enough storage capacity to hold all the root servers' data. And that doesn't even include any subdomains. Their data would have to be there too if DNS really worked like that.

    It could be really cool for distributed processing, though. Reserve a certain portion of the available bandwidth as shared memory, so that every box can read simultaneously. I think that's more what the article had in mind.

    --
    To go outside the mythos is to become insane...
  • The article gives some minor technical details on how this is supposed to work, but it didn't really explain how the data or signal is supposed to stay alive continuously within the loop. Is there some sort of signal generator that reads the incoming signal and then pumps it back out again?

    Nathaniel P. Wilkerson
    Domain Names for $13
  • Trees? In Canada? Ok, I admit there are trees, especially in British Columbia, but if you have ever lived there like myself, you know full well that this landscape is not the pristine paradise so many make it out to be. What with the clearcut logging that goes on throughout the whole northwest, it's sometimes uglier than the rocky desert landscape of Arizona. I don't buy that crap about Canada. The reason why it isn't full of people yet is 'cause no one wants to live there. However, I don't really think it's because of the geography; probably the fact that it's too cold and the economy and government do not provide an economically sound infrastructure for growth. On the weather side of things, take a look at California: the weather along the coast is practically 70 degrees all year round, no wonder there are over 30 million people living there. It was paradise until everybody realized it... Anyhow, I have no idea what I'm rambling about, it's 3:12am, and I just needed to reply to the ridiculous comments of this AC.

    Nathaniel P. Wilkerson
    Domain Names for $13
  • I converted the white paper to PDF format.

    You can download it here:
    Wavedisk White Paper (PDF) [dyndns.org]

    Cheers,
    Chase

  • I loaded it with abiword 0.7.12 and saved it as wavedisk.zabw

    ls -l wave*

    -rw-r--r-- 1 msevior users 64000 Feb 10 00:04 wavedisk.doc

    -rw-r--r-- 1 msevior users 12057 Feb 10 00:05 wavedisk.zabw

    Get your FREE abiword for your platform of choice at http://www.abisource.com

    Abi is cool :-)

    Martin [msevior@chaos ~]$

  • If more latency means increased capacity for one of these devices, as the article suggests, then I am betting that they could achieve capacity that would rival that of holographic media if they use DALnet as part of the path of the packets. I have seen ping times in the hundreds of seconds.
  • An old calculator, the Friden EC-130, used sound traveling down a wire as memory. You can read about it at http://www.geocities.com/oldcalculators/friden130.html [geocities.com].

    It is a wonderfully convoluted machine.

  • Whoops, power failure! We just lost every bit of data on the network! Guess we'll just have to start over by chiseling symbols into stone tablets...
  • I guess that's one way to use up all of that empty space Canada's got lying around...
  • Let's say I take a glass monofilament several miles or hundreds of miles long. I wind it on a spool and bring both ends out. I have a fiber transceiver at each end. I have then created one of these drives, but for local use.

    Glass strands are cheap; in fact, they are sold for fiberglass "chopper guns". The strand should not require any sheathing, as the light should remain confined in the fiber strand. A spool of this glass can be bought at a fiberglass supply house... How the inner end is accessed is a problem, which may require you to unwind it from one spool onto another without breaking the strand. Or, chop into the side of the spool and get a strand close to the inner end.

    With no moving parts, and encapsulated, this should be really reliable.

    Lots of hacking potential here... how many miles of cheap plain glass strand can you have for a cheap fiber transceiver?
  • by GodSpiral ( 167039 ) on Friday February 09, 2001 @03:43AM (#444358)
    that the data access time is comparable to RAM.

    So shared data between processes executing in multiple computers from vancouver to halifax can be quickly accessed and updated.

    The speed increase of requesting the data from the ether, over it being in some computer's cache, depends on how fast that computer's connection is to a backbone's router.

    Although now that I do some quick calcs, the max latency turns out to be 8000 (km) / 180,000 (km/s), about 1/22nd of a second = 45ms, and avg latency 22ms, which is slower than HD.

    You'd be saving 5ms to 10ms maybe over getting the data from a cpu behind a router, so the applications are indeed pretty narrow, but not non-existent.
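The back-of-envelope latency above can be checked directly; the 180,000 km/s figure for light in glass fiber (roughly 2/3 c) is taken from the comment:

```python
# Propagation delay around an ~8000 km national fiber loop
loop_km = 8000
speed_km_s = 180_000   # light in glass, roughly 2/3 c (figure from the comment)

max_latency_ms = loop_km / speed_km_s * 1000   # full circulation
print(round(max_latency_ms, 1), round(max_latency_ms / 2, 1))   # 44.4 22.2
```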
  • Well, dark moon could work but why not aim higher...

    --

  • Weird, This guy is a student AND human. Must be a freshman.
  • Maybe this could be used to cure the Slashdot effect - for a few hours after an article is posted, leave all the pages referenced in the article floating around on the network. Instead of slashdotting an unsuspecting server, readers get a copy from the nearest router.
  • Did anyone ever read the Bastard Operator From Hell that was very close to this? The story went along the lines of the BOFH convincing his boss to buy a huge quantity of CAT5 cable and store it on the roof. All the data would be looping around the network on the CAT5 cable on the roof. I just about spit Mountain Dew all over the screen when I made the connection.

  • This is true, however the capacity can be increased not by increasing the latency, but by increasing the buffer size of the routers, and by increasing the number of routers. The side effect of this increase would be increased latency. Doing this would put a certain percentage of the data on the fiber network, and a certain percentage in the buffers.
    For instance, 10 gigs/sec with a latency of 100 ms would result in 1 gig of data, but put in ten routers each with a (hypothetical) gig of buffer, and you have 11 gigs of space at a latency of 1.1 seconds. There is sort of a space/speed ratio that has to be met. Increase the speed, and you lose space on the network. However, increase the speed of the connections (100 gig/sec, if that's possible) and you can bypass this restriction.
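A quick check of the buffer arithmetic, using the figures in the comment (10 gigabits/s, ten routers, one hypothetical gigabit of buffer each):

```python
# Capacity = bandwidth-delay product of the fiber plus router buffers
bandwidth_gbps = 10          # link speed, gigabits/s
fiber_latency_s = 0.1        # one trip around the bare loop
n_routers = 10
buffer_gb = 1                # hypothetical buffer per router, gigabits

in_flight = bandwidth_gbps * fiber_latency_s       # 1 gig on the fiber itself
total = in_flight + n_routers * buffer_gb          # total circulating storage
latency = fiber_latency_s + n_routers * buffer_gb / bandwidth_gbps
print(total, latency)   # 11.0 1.1
```

Each gigabit of buffering adds 0.1 s of circulation time at this line rate, which is the space/speed trade-off described above.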
  • Lesseee.. the article says that it takes 100ms for a packet to circle the entire network. That means an average latency of 50ms. Jeez! That's slower than a Winchester disk from 30 years ago!!
  • But the point is that the memory already exists, and (I believe) the buffers are often separate from the routing tables. In one buffer, out another. As long as the router can route at full wire speed, there is no problem. You're just using memory that wasn't fully used before.
  • > Whoops, power failure!

    Talk to Microsoft. They'll set you up with some five-nines reliability.

  • What with the rolling blackouts in Happy-Fun-Land, I wouldn't even risk storing data this way there for 13ms out of 350.

    ObNote: A method of data storage that relies on continuous uninterrupted power for maintenance of storage is BAD. Yeah, the umpteen terabyte hard drive is still out of the hands of your average User, but if the power goes out, it doesn't necessarily wipe my drive. (That's what the magnets are for!)

    Kierthos
  • Would you prefer that he had it be a road crew that, oops, accidentally cut the fiber lines whilst repairing a culvert drainage?

    Big deal, he picked on farmers. If he had been talking about French farmers, it would have been completely merited. Anyway, how many times have we heard stories about Joe Farmer cutting fiber lines? It seems to make the news (at least here in the wilds of South Carolina) more than any news of road crews, inept technicians or drunk college students playing with the fiber lines.

    Finally, it seems just a touch odd that someone using a quasi-Roman name (Solidus) would call anyone gwailo....

    Just my 2 shekels.

    Kierthos
  • Millions of people across China were unable to access [mercurycenter.com] much of the Internet on Friday after an undersea cable was severed, and an official at China Telecom said it could take 10 days to fix the problem.
  • This'll do wonders for trojans, worms and the like. Imagine having one virus available to ALL computers on a network at the same time (give or take 100ms). Talk about propagation. Wow.

    What other ways do you think people will find to abuse this new method of storage? Only time will tell...
  • This was funny, until you started making fun of farmers. That's just not cool. The master says 'be there many producers, few consumers', in deference to the farmer/peasant's role at the top of the Confucian hierarchy (and I paraphrase, as you gwailo would not be able to functionally comprehend the subtle nuances).

    Solidus Fullstop, Esq.
  • Tolken ring actually allows for several packets to circulate...depends on the MAUs you are using.
  • Currently, the High School which I attend (Holy Heart Of Mary [k12.nf.ca], St. John's, NF) has access to the Ca*net 3 (whereas I always thought it was called CA3net...) for the purpose of participating in a project called "Learn Canada".

    Through this project, we are able to videoconference with schools across the country with an extremely high-quality video feed. This way, one of our teachers can teach a class in Victoria, BC, without leaving the province, teachers can be evaluated by teachers elsewhere in the country, etc.

    Unfortunately, though, I'm not allowed to play with the new server we received for the project. The entire project is running Red Hat 7.2, whereas my school uses NT servers... I was looking forward to using a Linux box at school...



    Yours etc.
  • Umm, isn't c == 3.0E8 m/s == 3.0E5 km/s? That would make this 8000 km / 300,000 (km/s) (it's distance / velocity) = 26.7 ms in vacuum, and slower still in fiber; comparable to a hard drive rather than much faster.
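For reference, the loop time works out as follows; the in-fiber speed is an assumption of roughly 2/3 c, not a figure from this comment:

```python
# Time for light to circle an 8000 km loop: vacuum vs glass fiber
loop_km = 8000
c_vacuum_km_s = 300_000
c_fiber_km_s = 200_000    # assumed ~2/3 c in fiber

vacuum_ms = round(loop_km / c_vacuum_km_s * 1000, 1)
fiber_ms = round(loop_km / c_fiber_km_s * 1000, 1)
print(vacuum_ms, fiber_ms)   # 26.7 40.0
```

Note the easy trap: 3.0E8 m/s is 3.0E5 km/s, not 3.0E6, which is how a 2.67 ms figure would slip out.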
  • I remember discussing this with two programmer friends in a pub in The Strand in 1981.

    The originator of the idea proposed bouncing radio waves or a laser beam off the moon - giving about a 4 second latency.

    After much discussion (and beer) we agreed that optical fibre wound onto a drum would be more reliable.


    So if Jim Murray or Kirk Whelan are still alive out there, get in touch!


    George
  • After reading both of the posted articles I am still not even mildly convinced that any of this is realistically applicable.

    Someone please calculate - what is the worldwide capacity of the network to hold data based in this article?

    Sure it's a neat technical idea and I am not against that, but get real guys - storage should be on STORAGE devices - disk, tape, whatever... What level of reliability, backup etc. could ever be given to this concept?

    how could this be used in the real world?

    what are the capacity implications?

    what backup could ever be made available?

    Go on...
    Tell me what this is really about!!!!

  • probably time for the federal gov't to step in and grab control of the data waves before free enterprise gets too far. Must protect certain frequencies for national defense... I can see it now - data radios with lots of dials... someone better patent this before CMGI does...
  • An interesting concept I've previously considered myself... about which I'd like to make a few observations:

    The paper correctly asserts that TCP as a transport introduces significant overheads to IPC in distributed systems, but glosses over the detail. The generic nature of TCP assumes that the underlying network technology is imperfect, which as far as I can tell, the WDD does not. Hence, the cost of TCP's ability to tolerate failures is an overhead of protocol processing to establish a reliable stream transport. TCP is not ideal for high performance message passing systems, but this should not come as any surprise - just look at the inefficiencies inherent in the HTTP protocol (particularly early versions).

    It would be fairer to compare the likely performance of the WDD with a messaging system based upon a transport without retransmission support - message passing over UDP being an obvious contender. Here, protocol processing is drastically reduced, but a new communications issue must be raised: what happens if a packet is lost in transit owing to corruption? (I admit that this is far less likely in typical optical networks of today than yesterday's less robust CSMA/CD offerings, but the problem must be considered, particularly if the stability of a large-scale distributed processing system depends upon it!)

    When I read the WDD proposal I see a clear comparison with token ring systems, the only significant difference being that the "transmitting node" is able to anticipate what data needs to be sent. To me, the storage capacity of a WDD makes it more comparable to a shared memory. This highlights the need for a strategy to decide what to store in the shared medium and what to store on traditional media... a page (packet) replacement algorithm, if you will. A successful PRA would present a considerable performance improvement, as it would significantly reduce latency - the curse of distributed computing! Conversely, such a PRA could equally well be applied to all communications, as it effectively provides a mechanism to anticipate what data requires transmission, and everyone likes clairvoyant systems.

    This aside, my primary objection to the usefulness of this sort of facility is that the authors assume that a WDD would present a use for spare bandwidth. I would argue that there is no such thing. When utilisation of a shared resource (the transport) is low, then latencies are lower, and this is of fundamental importance to the performance of processes interacting over a communications channel. Cf. write-times on an empty vs. full hard disk:-)

    To summarise - an interesting idea from which I can imagine many practical spin-offs, but surely it is frivolous to suggest that this will revolutionise computer communication or shared data storage.
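The unreliable-datagram transport suggested above as the fairer comparison can be sketched in a few lines with Python's stdlib sockets. There is no connection setup and no retransmission: a datagram lost in transit is simply gone, which is exactly the failure mode the comment raises (the port choice and payload here are illustrative, not from the paper):

```python
import socket

# Minimal UDP message passing over loopback: no handshake, no
# acknowledgements, no retransmission. A datagram dropped in
# transit is silently lost -- the sender never finds out.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"page-0042", addr)      # fire and forget

data, _ = recv.recvfrom(1024)        # b'page-0042' (if not dropped)
print(data)

send.close()
recv.close()
```

All of TCP's stream-reassembly and retransmission machinery is absent, which is where the protocol-processing savings come from - and why the loss-handling burden moves up to the application.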

  • I follow, but feel that you are taking me out of context. When I brazenly stated that lower utilisation results in lower latencies, I accept that this requires several (not unrealistic) assumptions. I agree whole-heartedly that it is possible to statically schedule without loss of performance (providing that capacity exceeds demand), and can see your perspective when you state that latencies are not lower in synchronous systems.

    My fundamental objection is that the overwhelming majority of communications systems, and all general-purpose systems, are not synchronous... and for good reason. While we see a proliferation of synchronous transmission media, we must also consider the nature of the processes on the computers effecting communication over these networks, which are almost universally asynchronous, not to mention the effects of truly external events, such as communication triggered by user actions.

  • Back to the backhoe... I'm not talking about the redundancy redundancy redundancy of the actual connection. I'm talking more about the actual data.

    How many copies are floating around? If a small segment of the network gets cut off and my important data (silly me) happens to be in that network segment and then, as another reader mentioned, the power goes off, what happens to my data? Connectivity can be lost and restored. Data lost on a fiber network cannot.
  • I wonder what kind of redundancy is built into the system? It would not be very neat to have your data flying across the fiber and run smack into a severed line cut by some poor guy's backhoe. If not redundancy, will they be backing this stuff up somewhere?

    The glass just seems a little fragile...
  • This makes no sense at all. Also, the analogy with routers as the heads of a harddrive and the wavelengths as tracks is flawed.

    Let's see: the ring is 5000 miles, or roughly 8000 km, and a full lap takes 0.1 sec. Without any delays, the signal would travel the 8000 km in 8000 / 300000 = 0.027 sec. The delay comes from the routers, which have to convert each packet to an electronic signal, read it, and convert it back to an optical signal. To be able to do this, a router has a buffer, so that it can hold a couple of megs of data and doesn't have to drop packets too soon. It buys the router some time to do its decode/recode work. It is exactly this memory that is used as a 'disk drive'.

    So let's just put in more routers, increase the latency of the signals, and thus increase the capacity of your "drive".

    The suggestion that an optical signal stores any info is very misleading.
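The split between in-flight time and router-buffer time claimed above is easy to check. One caveat worth noting: the 300,000 km/s figure is the vacuum speed of light; in glass fibre the signal propagates at roughly 200,000 km/s, which would put somewhat more of the data in flight and somewhat less in buffers. A sketch of both:

```python
RING_KM = 8000.0
C_VACUUM_KM_S = 300_000.0   # speed used in the parent's estimate
C_FIBRE_KM_S = 200_000.0    # ~c/1.5; light is slower in glass

t_vacuum = RING_KM / C_VACUUM_KM_S   # ~0.027 s, matching the parent
t_fibre = RING_KM / C_FIBRE_KM_S     # ~0.040 s in real fibre

# If a full lap takes 0.1 s, the remainder must sit in router buffers:
buffered = 0.1 - t_vacuum            # ~0.073 s worth of data
print(f"{t_vacuum:.3f}s in flight, {buffered:.3f}s in buffers")
# -> 0.027s in flight, 0.073s in buffers
```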
  • Just scanned their whitepaper (thx for the PDF version). They briefly mention that router memory is not used, but then they go on without explaining what does store the data. It has to be stored somewhere, and it cannot be in the light signal.

    Look at it this way. If there were no delays in the network - only optical repeaters, no routers - the signal would travel at the speed of light and cover the 8000 km in 0.027 sec. To put 10 GB of data in that ring, you have 0.027 sec, so you would need a throughput of 375 GBytes/sec; divided over 8 wavelengths, that is still 375 Gbits/sec per wavelength (the factor of 8 from bytes to bits cancels the 8 wavelengths). Good luck! So the light signal alone simply cannot store 10 GB.

    Now, this ring has a latency of 0.1 sec instead of 0.027. This latency is introduced by the routers. Since the ring itself can only store 0.027 sec worth of data, the other 0.1 - 0.027 = 0.073 sec has to be stored somewhere else - and that can be in no place other than the routers' memory.

    (Or have they invented optical repeaters that delay the speed of light and have a memory of their own?)

    You could also buy 20 routers, hook them up with Ethernet in a circle, pump data around it as fast as you can, and enjoy an 'ethernet' drive at home. Be sure to buy routers with large buffers for increased storage!
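The 375 GB/s figure in the comment above does check out, including the neat coincidence that the per-wavelength number comes out the same in Gbit/s (dividing by 8 wavelengths and multiplying by 8 bits per byte cancel). A quick sketch of the arithmetic:

```python
DATA_BYTES = 10e9             # 10 GB target to hold in flight
IN_FLIGHT_S = 8000 / 300_000  # ~0.027 s: one lap at light speed
WAVELENGTHS = 8

total_Bps = DATA_BYTES / IN_FLIGHT_S        # aggregate bytes/s needed
per_wl_bps = total_Bps * 8 / WAVELENGTHS    # bits/s per wavelength
print(f"{total_Bps/1e9:.0f} GB/s total, "
      f"{per_wl_bps/1e9:.0f} Gbit/s per wavelength")
# -> 375 GB/s total, 375 Gbit/s per wavelength
```

Orders of magnitude beyond any single wavelength of the era - which is the comment's point: the bulk of the "stored" data must live in router buffers, not in the light.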
