
UDP + Math = Fast File Transfers

Wolfger writes: "A new file transfer protocol talked about in EE Times gets data from one place to another without actually transmitting the data..." Well, the information is transmitted, just not in its original form. I think there are a couple of other places working on similar ideas - wasn't there a company using this for a fast file download application? User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name.
  • Kazaa does that (Score:2, Informative)

    by DOsinga ( 134115 )
    User would go to download a game demo or something, receive pieces from several different places, and knit them together?

    The file-sharing networks based on FastTrack technology do that. You download a movie or game from different users at the same time, and Kazaa stitches it back together.
    • A program by the name of Download Accelerator does that also. It requests different chunks of the same file from a single location and downloads them at the same time. Worked pretty well in my limited experience.
      • A Win32 FTP client called GetRight has been capable of doing this for several years.

        They called it "Segmented Downloads", ie the program would hit multiple ftp sites for the same file, doing a resume to get different parts at the same time. Heck, it would even locate the mirrors for you.

        And yes, it caused a substantial improvement in download speed. It would thus seem that the bottleneck is seldom in the last mile.

        But this has little to do with the article, which as far as I can tell is mostly gibberish.

  • getright (Score:4, Informative)

    by awptic ( 211411 ) <infinite@com[ ]x.com ['ple' in gap]> on Thursday December 13, 2001 @08:55AM (#2698425)
    The program michael is referring to is called GetRight, which can connect to several known mirrors of a file and download separate fragments from each.
    • Maybe, but comparing what GetRight et al. do (parallel downloading from FTP mirrors) to how Digital Fountain would achieve the same thing [stanford.edu] (erasure codes) is like comparing cups and string to IP telephony. Sure, they achieve the same thing, but the comparison is a bit insulting...

      With the GetRight solution, 100 people downloading from my FTP mirror = 100 TCP streams worth of bandwidth and CPU consumption. With the Digital Fountain solution, 100 people downloading from my mirror = 1 stream worth of bandwidth and CPU consumption.


  • GetRight uses multiple sites to download from and then pieces them back together.

    http://www.getright.com [getright.com]

  • by Tsar ( 536185 ) on Thursday December 13, 2001 @08:57AM (#2698435) Homepage Journal
    The Transporter Fountain sits alongside a switch or router, and one Transporter Fountain is needed at the sending and receiving ends of a connection. Prices will range between $70,000 and $150,000.

    Oh, boy, I'm gonna stop by CompUSA on the way home and grab one of these.
    • No kidding - It's kind of like a FAX machine: That'll be about $70,000 if you just want to send or receive, or about $150,000 if you want to send and receive, give or take $10k.

      -"Zow"

  • Those would transfer the fastest, since they don't consist of bitmapped data, but just the instructions to create the image.

    I wonder what equations are used to convert raw, unpredictable streams of data into formulas, and why the formulas aren't bigger than the packets themselves. They mentioned XOR, but on its own that sounds silly, because XOR just applies a reversible operation to the data, which neither shrinks nor grows it.

    Does anyone have more info? It does sound interesting though...

    • Re:Vectors... (Score:5, Informative)

      by hburch ( 98908 ) on Thursday December 13, 2001 @09:21AM (#2698544)
      Consider the following (almost certainly bad, but workable) scheme:
      • Convert a chunk of the file into an order-k polynomial (use the coefficients of the polynomial to encode the chunk)
      • Send the evaluation of the polynomial at several distinct locations (more than k+1).
      • Receiver gets at least k+1 packets.
      • Using math, it recreates the original polynomial, and thus the chunk.


      Please note that I'm not saying this is a good scheme. It is just an example one, and one that doesn't detail the chunk polynomial conversion, which would be very important. There are several papers describing schemes where people have actually worked at making them tenable.

      Modulo compression, if you want such a system to require only receiving k packets (although you send more than that), the sum of the size of the k packets must be at least the size of the original file (otherwise, you could use such a scheme to compress the file).
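      To make the scheme above concrete, here is a minimal Python sketch (a toy only: GF(257), the chunk size, and the number of points are arbitrary choices for illustration, and this is certainly not Digital Fountain's actual algorithm). The sender evaluates the chunk's polynomial at more points than strictly needed; the receiver rebuilds the coefficients from any k+1 of them by solving the Vandermonde system:

      import random

      P = 257  # prime just above 255, so every byte value fits in the field

      def encode(chunk, n_points):
          """Return (x, poly(x)) pairs for x = 1..n_points."""
          def poly(x):
              return sum(c * pow(x, i, P) for i, c in enumerate(chunk)) % P
          return [(x, poly(x)) for x in range(1, n_points + 1)]

      def decode(points, k):
          """Recover the k+1 coefficients from any k+1 received points."""
          pts = points[:k + 1]
          # Vandermonde system A * coeffs = y (mod P), solved by Gauss-Jordan.
          A = [[pow(x, i, P) for i in range(k + 1)] for x, _ in pts]
          y = [v for _, v in pts]
          for col in range(k + 1):
              pivot = next(r for r in range(col, k + 1) if A[r][col])
              A[col], A[pivot] = A[pivot], A[col]
              y[col], y[pivot] = y[pivot], y[col]
              inv = pow(A[col][col], P - 2, P)       # modular inverse
              A[col] = [a * inv % P for a in A[col]]
              y[col] = y[col] * inv % P
              for r in range(k + 1):
                  if r != col and A[r][col]:
                      f = A[r][col]
                      A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
                      y[r] = (y[r] - f * y[col]) % P
          return y

      chunk = list(b"UDP+math")              # 8 bytes -> degree-7 polynomial
      packets = encode(chunk, 16)            # send twice as many points as needed
      received = random.sample(packets, 8)   # receiver catches ANY 8 of them
      assert decode(received, 7) == chunk

      Any 8 of the 16 transmitted points recover the 8-byte chunk, regardless of which 8 arrive or in what order.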
  • Compression? (Score:3, Interesting)

    by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Thursday December 13, 2001 @09:00AM (#2698444) Homepage Journal

    In essence, isn't this the same as file compression? The amount of information is the same
    (for those who remember what a bit is). It is just that the usual one character per byte is awfully wasteful, which is why the various compressors are so effective.

    Add a modern data transfer protocol and you may
    get some start-up money :-)

    • Re:Compression? (Score:4, Informative)

      by s20451 ( 410424 ) on Thursday December 13, 2001 @09:20AM (#2698531) Journal

      In essence, isn't this the same as file compression? The amount of information is the same (for those who remember what a bit is).

      It is more than merely compression. The received data is compressed, which saves transmission time, but this technology is already well known (and the company isn't claiming a compression rate better than entropy [data-compression.com], or anything else silly). The innovation here is the elimination of acknowledgement or ARQ packets. I'm speculating here, but it looks like they are encoding the data by transforming a file into a huge "codeword" -- when the codeword is transmitted, the receiver waits for enough packets to correctly decode the codeword, which results in the recovery of the file. There's no need for ARQ or TCP because transmitting extra codeword elements will automatically correct any errors incurred in transmission.

  • Flow Control (Score:3, Informative)

    by Detritus ( 11846 ) on Thursday December 13, 2001 @09:00AM (#2698448) Homepage
    You still need some form of flow control or rate limiting, otherwise a large percentage of the UDP packets are going to get dropped. Plus, you have the problem of UDP streams stealing bandwidth from TCP streams on a limited bandwidth link.
    • Re:Flow Control (Score:3, Interesting)

      by Omnifarious ( 11933 )

      Quite correct. This protocol does not sound at all TCP friendly [yahoo.com]. It needs some way of dynamically responding to network conditions to be that way. Even something so simple as doing an initial bandwidth test, then rate limiting the UDP packets to 90% of that would be a big help, though for a large file that would still create a lot of congestion problems.

      Does anybody know if IPv6 has any kind of congestion-notification ICMP messages, so stacks can know to throttle applications when the applications are blasting out data that's congesting the network?

      • Multicast (Score:4, Interesting)

        by srichman ( 231122 ) on Thursday December 13, 2001 @10:20AM (#2698866)
        Quite correct. This protocol does not sound at all TCP friendly [yahoo.com]. It needs some way of dynamically responding to network conditions to be that way.
        Well, "this protocol" is just multicast (from a network perspective). Though there have been research attempts to introduce pruning in multicast streams, it is inherent a non-flow controllable transmission protocol. If you take offense to the lack of "TCP friendliness" in multicast, then I suggest you complain to Steve Deering, not Digital Fountain.

        Multicast is congestion friendly in that it can inherently seriously reduce bandwidth consumption, but it's obviously not congestion adaptive. I think the easiest (and probably best) way to introduce congestion control in a multicast is to have multiple streams at different rates, e.g., you have five simultaneous multicast webcasts of a Victoria's Secret show, and folks can join the multicast group with the video quality that best suits the bandwidth and congestion on their pipes. This idea works very well for Digital Fountain-style software distribution: rather than having a server serving the same stream at 10 different rates, you can have 10 servers streaming their own unsynchronized Tornado encodings, and clients can subscribe to however many their pipes can support. With 10 streams of data coming at you, you can reconstruct the original data ~10 times as fast.

    • It doesn't matter if you lose data in the stream. You actually send more data than the original file - but the receiver only has to receive approximately the same amount of data as the original file (about 5% more, if I remember correctly). Thus you can send 100 kbytes (for a 50k file) but the receiver only needs to get half the packets - so loss is not a problem.
      Think about simultaneous equations. If I have 2 unknowns I can solve them if I have two equations. Now imagine that I send you 4 equations, each with the two variables. You only need to receive 2 of the equations to be able to reconstruct the original two pieces of data.
      The other neat thing about this is that you can multicast traffic - and each receiver can start listening when it wants - so if a receiver starts listening halfway through it can still get the whole file!
      • Re:Flow Control (Score:2, Interesting)

        by kmacleod ( 103404 )
        I think you missed the point. Sure, you need only to sip from the fire hose to finally quench your thirst, but that wastes a lot of water.

        Where TCP acks and controls flow on every single packet, and packets must be delivered to the receiver in order, a protocol like this can occasionally send back a report on what kind and how many packets it is receiving (fragmented, packet loss due to bandwidth, packet loss due to congestion).
      • You can only solve a linear system of equations with two unknowns and two equations if the equations are independent. And you can't have more independent equations than you have variables. So your calculation is very optimistic. Imagine if I send you the following data:

        X+Y=4
        X+2Y=5
        2X+4Y=10
        2X+2Y=8

        With 50% packet loss (as opposed to a 50% chance of loss per packet*) you only get two useful equations 2/3 of the time. Of course, these boxes are expensive and can do a lot more number crunching than just linear algebra. But of course, I wouldn't really have expected a good comparison from a company whose veep of marketing said "FedEx is a hell of a lot more reliable than FTP when you're running 20 Mbytes" Clearly all those game demo sites on the net need to get with the program.

        *Computing it with a 50% chance of loss per packet instead gives:

        1/16 chance of getting all four, 100% success
        4/16 chance of getting 3 of 4, 100% success
        6/16 chance of getting 2 of 4, 66.7% success
        4/16 chance of getting 1 of 4, 0% success
        1/16 chance of getting none, 0% success

        for an overall success rate of 9/16 (27/48), somewhat worse odds than if you always got exactly 2 of 4.
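        A quick brute force confirms the 9/16 figure (a sketch; the rows below are just the coefficient pairs of the four equations above):

        from itertools import combinations
        from fractions import Fraction

        rows = [(1, 1), (1, 2), (2, 4), (2, 2)]   # X+Y, X+2Y, 2X+4Y, 2X+2Y

        def solvable(kept):
            # rank 2 <=> some pair of surviving rows is linearly independent
            return any(a1 * b2 - a2 * b1 != 0
                       for (a1, b1), (a2, b2) in combinations(kept, 2))

        total = Fraction(0)
        for mask in range(16):                     # each packet kept or lost, p = 1/2
            kept = [rows[i] for i in range(4) if mask >> i & 1]
            if len(kept) >= 2 and solvable(kept):
                total += Fraction(1, 16)
        print(total)                               # prints 9/16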
    • It looks to me like they don't care if packets get dropped. The sender just keeps jamming stuff down the pipe until the receiver gets enough. Yeah, seems to me they need some kind of flow control, and I definitely think it'd be a bandwidth hog.
    • You still need some form of flow control or rate limiting, otherwise a large percentage of the UDP packets are going to get dropped.
      The whole point of Digital Fountain's erasure encoding scheme is that it doesn't care how many packets get dropped on the floor; a client can reconstruct the original data just as efficiently with whatever packets it receives as if the stream had been sent at an optimal rate to begin with.
      Plus, you have the problem of UDP streams stealing bandwidth from TCP streams on a limited bandwidth link.
      This is a problem. You want to satisfy the people with the fat pipes, but not destroy all the little pipes and hose people's TCP windows. See my other post [slashdot.org] for a possible solution.
  • Kinda like IFS? (Score:2, Interesting)

    by pointym5 ( 128908 )
    I mean it's not for image compression specifically, but it definitely reminds me of IFS image compression in some ways. I'll bet that compression is very time consuming, but that's fine if you're warehousing data. I wonder if the clients are pre-loaded with a body of parameterized functions, so that the server just sends information describing what functions to run and what the parameters are. I guess if it's all based on polynomials all it needs to send are vectors of constants.

    Neat idea. Patents: here [uspto.gov] and here [uspto.gov].
  • heh.. (Score:5, Insightful)

    by Xzzy ( 111297 ) <`gro.h7urt' `ta' `rehtes'> on Thursday December 13, 2001 @09:02AM (#2698456) Homepage
    > These files routinely are mailed on tape rather
    > than transmitted electronically. "FedEx is a
    > hell of a lot more reliable than FTP when
    > you're running 20 Mbytes,"

    Having worked in the industry they mention, I'd hazard that they avoid FTP more because of the illusion of security than anything else. People in the EDA world (which is where I worked, and which has a close relationship with chip manufacturers) are immensely paranoid about people getting ahold of their chip designs, because if someone steals that.. you not only lose your next chip, you enable someone else to make it for you.

    These people just don't trust firewalls and ftp yet, but they do trust putting a tape in an envelope and snail mailing it. At the very least it makes someone liable if the letter gets stolen, which you can't do with electronic transfers..

    At any rate, FTP is plenty reliable for transferring 20 MB files.. I do it every time a new game demo comes out. :P Maybe they meant 20 GB, 'cuz I've seen chip designs + noise analysis + whatever take dozens of gigs.
    • by swb ( 14022 )
      These people just don't trust firewalls and ftp yet, but they do trust putting a tape in an envelope and snail mailing it.

      I've heard this said about the diamond business and the postal service. Diamond couriers, who are carrying just diamonds, can be tracked and robbed easily. Once a package enters the postal stream it's nearly impossible to steal that specific package.

      I dunno if it's really true or not, but it has a certain counterintuitive logic that makes it believable.
    • Re:heh.. (Score:4, Interesting)

      by regen ( 124808 ) on Thursday December 13, 2001 @10:57AM (#2699073) Homepage Journal
      This is really funny. I used to work for the NYSE (Stock Exchange) and it wasn't uncommon to transmit 10GB of data over night via FTP to a single brokerage. The data that they were transmitting was at least as valuable as chip designs. It would allow you to alter trade position after trades had been transacted but before they cleared. (e.g. cancel a bad transaction or increase the amount of a good transaction)
      • Please tell me that data was encrypted first.

      • Re:heh.. (Score:3, Interesting)

        Data transferred to various financial institutions is routinely transmitted using insecure protocols in an unencrypted format. I've worked in at least two positions, one at a credit card clearinghouse, another at an ECN. This data goes over unsecured channels all the time.
    • Re:heh.. (Score:3, Informative)

      by MeerCat ( 5914 )
      TCP/IP adds a 16-bit checksum to the packets. This will generally detect an error burst of 15 bits or less; if the data is uniformly distributed, it will accept a corrupt segment with probability 1 / (2^16-1) [Snader p70]. This was designed to catch bugs in routers [postel.org] etc. (which may write the wrong data when forwarding packets) rather than to catch all data corruption on the wire.

      Depending on how much noise goes undetected at your physical layers, you should expect a TCP session to pass through incorrect data roughly once per 10^9 to 10^12 bytes transferred (that's the metric I use) - and if this is unacceptable then your application layer should check and/or correct data itself, bearing in mind the end-to-end argument [reed.com] for system design.

  • "User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name."

    Uh... doesn't something like Download Accelerator Plus (yeah yeah, I know it's a hive of spyware) already do that (download from multiple locations only to recombine the file later)?
  • by Chirs ( 87576 ) on Thursday December 13, 2001 @09:05AM (#2698467)
    Guys, this is nothing like Kazaa. Kazaa will let you download from several sources simultaneously, but only because it just requests different parts of the file from each source. At that point there are still send/ack type protocols in use.

    This technology (from the write-up anyway) uses some kind of proprietary technique to re-map the data into another domain and send the information required to reproduce it. It sounds kind of like sending a waveform as a series of Fourier coefficients rather than as actual data samples. By changing to a different domain, it is possible to send metadata from which the original information can be recreated.

    I have no idea exactly how they would do it, but it doesn't sound completely impossible.

    However, it's nothing like Kazaa or GetRight.
    • This sounds a lot like what PostScript is to a rasterized file. A set of descriptions of what the picture looks like, which are small and easy to transmit, which are then drawn to produce the picture.

      With real vector PS it's easy, since you start out by creating vectors (e.g., Adobe Illustrator). How you get from a non-vector "destination" to the metadata you really want to transmit sounds complicated.
    • IANADP (i am not a dsp programmer...), but if I remember correctly, the process of the Fourier transform doesn't reduce data. The transform brings our data into the frequency domain, which, in terms of audio and video data, is where we do all of the tricks to eliminate what we've learned our "ears" can't hear or "eyes" can't see.
    • Tornado codes (Score:5, Informative)

      by srichman ( 231122 ) on Thursday December 13, 2001 @10:08AM (#2698793)
      This technology (from the write-up anyway) uses some kind of proprietary technique to re-map the data into another domain and send the information required to reproduce it. It sounds kind of like sending a waveform as a series of Fourier coefficients rather than as actual data samples.
      Actually, they use [digitalfountain.com] Tornado codes [berkeley.edu] (or a proprietary update thereof), an erasure code. That is, they use forward error correction to encode streaming data or a software distribution over a single (or multiple) client-independent streams of multicast. After a client grabs enough packets, it can reconstruct the source file.
  • Swarmcast (Score:4, Informative)

    by Webmonger ( 24302 ) on Thursday December 13, 2001 @09:08AM (#2698480) Homepage
    I believe Michael's talking about OpenCola's Swarmcast [slashdot.org].
  • by Omnifarious ( 11933 ) <eric-slash AT omnifarious DOT org> on Thursday December 13, 2001 @09:10AM (#2698485) Homepage Journal

    The company's name is OpenCola [opencola.com] and the name of the product was SwarmCast. The guy who did SwarmCast, Justin Chapweske, is now at a company he started named Onion Networks [onionnetworks.com]. OpenCola appears to have completely abandoned its original Open Source approach to its software.

    Apparently, Justin has taken the GPL portions of Swarmcast and is improving them at Onion Networks.

  • Debian does something similar with the Pseudo Image Kit [debian.org].

    It gets all the parts of the install ISO cd image, from disparate sources, stitches them together, and then uses rsync to patch it to exactly make a duplicate of the original install image.

    Very nifty.

  • The quirk is that none of the data is ever transmitted; the receiving end creates its own copy of a file based on a complete set of mathematical equations.

    This is called compression. Everybody is doing it and it has been done before.

    When you download a ZIP file, you are not downloading the content. You are downloading a mathematically transformed version of it. You then translate it back. Modems have been compressing and decompressing on the fly since the late 1980s.

    Maybe they have a better compression scheme? (Fractal based?) That would be news. Everything else is a distraction.
    • Re:Not a new concept (Score:4, Informative)

      by Omnifarious ( 11933 ) <eric-slash AT omnifarious DOT org> on Thursday December 13, 2001 @09:26AM (#2698564) Homepage Journal

      UDP drops packets. What they are saying is they can packetize things in such a way that as soon as you pick up any N packets, you get the file, no matter what. They are also implying that anything less than N packets leaves you gibberish. This is quite different from file compression. It may be related to fractal file compression, but I think it's probably more related to cryptographic key sharing schemes.

    • Maybe they have a better compression scheme? (Fractal based?) That would be news. Everything else is a distraction.

      I remember back in the good old BBS days of a program called OWS which did "fractal compression"... In reality, it deleted the file and stored the sector information of the deleted blocks in the "compressed file" -- if you moved the .ows file off the disk, you'd get "decompression" errors. :-)

    • How it works (Score:5, Insightful)

      by gargle ( 97883 ) on Thursday December 13, 2001 @09:38AM (#2698643) Homepage
      The EETimes article is extremely poorly written.

      The technique used by Digital Fountain is called Forward Error Correction. It allows a message M with m parts to be encoded into n parts, where n > m. The interesting thing about this is that any m of the n parts will allow the original message M to be reconstructed.

      This means that if a part of the message is missed, the receiver doesn't have to request a resend .. it just continues listening. This is especially cool for multicast transmission since even if receivers A and B miss different parts of the message, the broadcaster doesn't have to send the missed parts of the message to the different receivers - it just continues broadcasting since any part can substitute for any other part.
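      Here's a toy simulation of why that matters for multicast (a sketch: the decoder isn't modelled, only the "any m packets will do" property is assumed). Two receivers with very different loss rates both finish from the same feedback-free stream, with no per-receiver retransmissions:

      import random

      m = 100                        # encoded packets needed to rebuild the file
      loss = {"A": 0.10, "B": 0.30}  # two receivers with different loss rates
      got = {r: 0 for r in loss}
      sent = 0

      while min(got.values()) < m:
          sent += 1                  # broadcaster emits one fresh encoded packet
          for r, p in loss.items():
              if got[r] < m and random.random() > p:
                  got[r] += 1        # any received packet counts, in any order

      print(f"both receivers done after {sent} packets from the sender")
      # roughly m / (1 - worst loss rate), with no ARQ and no per-receiver state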
      • The RFC (Score:5, Informative)

        by gargle ( 97883 ) on Thursday December 13, 2001 @09:42AM (#2698668) Homepage
        http://www.ietf.org/internet-drafts/draft-ietf-rmt-info-fec-01.txt

        The use of Forward Error Correction in Reliable Multicast

        Enjoy.
      • The technique used by Digital Fountain is called Forward Error Correction.

        Several Forward Error Correction techniques are used in cellular networks, for example. If you have a GPRS or (soon) UMTS phone, it is already using some of the techniques that appear to be incredibly advanced if you read the EETimes article. But in fact, all these things are well known. I think that several DAB/DVB (Digital Audio/Video Broadcasting) protocols do the same as well.

    • A tech paper [digitalfountain.com] on their "tornado codes". Also, the link to their tech papers website. [digitalfountain.com]

      Didn't have much chance to look over the algorithms, but we really should have compression used more frequently in network transport.
  • For anyone who just wants the gist of the article:

    "The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. "

    So basically, it's not much more than UDP with a single reply telling the server to stop transmitting.

    Not bad, but you'd better have some good timeouts worked into this thing. UDP by definition is a non-replying, "if it gets dropped, who cares?" protocol. If the receiver's connection were to go down, wouldn't the server just keep flooding all the in-between routers with packets for a while? That's not good for traffic congestion.
  • This could be done easily without the proprietary algorithms. Just send UDP packets with a header in each one stating that it is packet number N and that there are X total packets. Then request missing packets when you get towards the end, and put them all together in order when you have them all.
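    A rough sketch of that framing (the 8-byte header layout here is made up purely for illustration):

    import struct

    HEADER = struct.Struct("!II")      # packet index, total packet count
    CHUNK = 1024

    def packetize(data):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        return [HEADER.pack(i, len(chunks)) + c for i, c in enumerate(chunks)]

    def reassemble(datagrams):
        """Return (data, missing); the caller re-requests the missing indices."""
        pieces, total = {}, 0
        for dg in datagrams:
            idx, total = HEADER.unpack(dg[:HEADER.size])
            pieces[idx] = dg[HEADER.size:]
        missing = [i for i in range(total) if i not in pieces]
        data = b"".join(pieces[i] for i in sorted(pieces)) if not missing else b""
        return data, missing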

    Somewhat unrelated --- does anyone else miss Z-Modem? We need a zmodem-like program that works over telnet so we don't have to open a separate FTP session. In the BBS days, you just typed rz filename and it came to you.
  • my take (Score:5, Informative)

    by bpowell423 ( 208542 ) on Thursday December 13, 2001 @09:17AM (#2698516)
    There have been lots of comments along the lines of, "this is just a novel compression/transmission scheme". In a way, that looks to be true, but here's my take.

    Judging from this:

    The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. The receiving box then performs an XOR operation on the symbols to derive the original data.

    It appears to me that the transmitting side generates the symbols (parameters of the equations, I guess) and begins sending them to the receiving side as fast as it can. Apparently there are multiple solutions to the equations that will arrive at the same answer, so when the receiving end has received enough symbols to make it work it says, "stop sending already!" Apparently they're getting their speed because A) things don't have to go in any order (that's how the 'net is supposed to work, right?) and B) Alice and Bob don't have to keep up this conversation: Alice: Hey, Bob, can you send me X? Bob: Okay, are you ready? A: Yes, go ahead. B: Okay, here it comes. A: I'm waiting. B: Here's the first packet. A: What? That packet didn't make it over. B: Okay, here it is again. A: Okay, I got that packet. B: Good. A: Okay, I'm ready for the second packet. B: Okay, here's the second packet.

    Okay, I had too much fun with the Alice and Bob conversation there. Anyway, it looks like their scheme is compressing things in the form of their equations, and then just sending them in a burst until the receiver is happy.

    Sounds like it might work, but it'll generate a ton of network traffic, I'd bet!

    • Re:my take (Score:3, Interesting)

      by mr3038 ( 121693 )
      It appears to me that the transmitting side generates the symbols (parameters of the equations, I guess) and begins sending them to the receiving side as fast as it can. ...don't have to keep up this conversation: [long conversation removed]

      It seems to me that the article is really talking about a network that has high latency and high bandwidth, with some loss. How about simply compressing [redhat.com] the data and using bigger packets to transmit it? If you can use a big enough window while sending data, you can push all the data onto the network at the beginning. The conversation comes to: A) Here it comes A) Done [64 packets, 125MB] B) Okay, listening B) Resend 2, 7 and 25 A) Done [3 packets, 6MB] B) OK. Note that A starts sending before getting a reply from B. In fact, with a fast long-distance connection, A could reach the end before B has even received "Here it comes".

      I think if we want to speed up file transfer we need an API to tell the OS that we're going to send lots of data, so it should use big packets - or the opposite. Currently we just open a socket connection to the destination and start write()ing. The OS has no way to guess whether we're going to write 100 or 10^8 bytes. We need a way to tell the OS that the data we're sending isn't worth a dime before it's all done, so it should use big packets to minimize bandwidth wasted on TCP control traffic.

      You can opt to waste bandwidth to reduce perceived latency, and that's what I think is done here. A sends the file twice, and in case some packets were lost, the second copy is used to fill in the missing parts. A has sent the missing packet before B even knew it was missing. Yeah, A wasted half the bandwidth on redundant data that got to the destination correctly the first time, but we aren't interested in that. The key here is to use UDP so that lost packets are really lost instead of automatically resent. This kind of setup increases overall throughput only if latency is the only problem in your network. Perhaps this is needed in the future, because increasing bandwidth is easy - not necessarily cheap - but the light crawling inside fiber is so damn slow!

  • fastest (Score:4, Funny)

    by cr@ckwhore ( 165454 ) on Thursday December 13, 2001 @09:20AM (#2698532) Homepage
    Someday when we all have extraordinarily fast computers, we'll simply be able to send somebody an MD5 sum and the computers will be able to "crack" it back into the original file. At that point, commercial software wouldn't even have to come on a CD... just print the hash on a slip of paper and the user could type it in.

    word.
    • Someday when we all have extraordinarily fast computers, we'll simply be able to send somebody an MD5 sum and the computers will be able to "crack" it back into the original file.

      Not familiar with hashes? You can't do it quite like that, because the same MD5 hash is shared by any number of datasets. Trying to un-do a hash is pretty idiotic at that level.

      However, think about this scenario: send the MD5, byte count and CRC64. Now rattle through all the possible combinations of "byte count" bytes of data that generate the same MD5 and CRC64 values as were sent. It's still not foolproof, but you've reduced the dataset considerably. Of course, now you compare your new file to the GPG sig sent along to ensure you got it right.

      Perhaps as we approach quantum computing it will become more realistic to brute-force a many-hundred-megabyte file from a number of its hashes.

  • Yeah, I don't think FTP is so slow that anyone is going to pay $70k for their proprietary 'Transporter Fountains'. Seems like anyone with a little common sense and math ability could easily cobble together a software UDP-based transfer protocol that has all of the properties described in the article.

    The key is to build in redundancy without increasing the amount of data sent so much that you counteract the speed gains you get by using UDP.

    -josh
  • Just send 'restart at' commands to many different servers, then cat the files onto each other. This is how Download Accelerator does it, and FastTrack works on the same theory. The programs just take all the mental work out of it.
  • Article is wrong (Score:4, Interesting)

    by saridder ( 103936 ) on Thursday December 13, 2001 @09:22AM (#2698545) Homepage
    The article quotes that "...FTP requires packets to arrive in sequence, and TCP requires a receiving end to acknowledge every packet that arrives, so that dropped packets can be resent..."

    This is incorrect. TCP has a concept of sliding windows where once a number of packets has been received successfully, the receiver increases the number of packets that can be sent without an ack. This is an exponential number, so if it receives 2 packets successfully, it will then tell the sender that it will take 4 before an ack is needed. The only time you get a 1 for 1 ack ratio is if you miss a packet and the window slams shut.

    Furthermore, UDP for data is highly unreliable, and I wouldn't trust it across WANs. Frame Relay switches may drop packets if you exceed your CIR and begin bursting, so the whole transfer will never succeed. Therefore you actually waste bandwidth, because the whole transfer is doomed to fail and the sender will never know it.

    Also some routers have WRED configured in their queues, purposely dropping TCP packets to increase bandwidth on a global scale. This would damage the file transfer process as well if it was UDP based, as this system is.

    Stick with the RFC's and the tried and true TCP transport system. This company will fail.
    • Sliding windows is flow control, not error recovery.

      I would trust UDP with anything I would trust TCP with, as long as the application does the error checking on the data, which is exactly what they are saying their product does. TCP is really high overhead compared to UDP, and not always necessary. One of the reasons for TCP was so that programmers wouldn't have to deal with as much, but if you can make something that handles it more efficiently, then you only have to send a retransmit request whenever there is lost data, and not after every window.

      Maybe it's my tendency to fight for the underdog, but I feel UDP has gotten the shaft. It's a great way to slam traffic around, and it's as secure as your application is written to make it.

      Nice little doc over TCP and sliding windows for anyone that might want one. [cisco.com]
    • Re:Article is wrong (Score:5, Interesting)

      by Minupla ( 62455 ) <minupla@gmail.PASCALcom minus language> on Thursday December 13, 2001 @10:31AM (#2698921) Homepage Journal
      Stick with the RFC's and the tried and true TCP transport system. This company will fail.

      You may be right, they may fall on their noses. Or the win with their system might be large enough that we decide that we need to rework the way we do some things. Hard to say at this point.

      I do take issue, though, with your 'Stick with the RFCs' comment. If we stick with what we know works, we will never evolve. If the folks at CERN had had this attitude, we'd still be using GOPHER (side note: my favorite interview question for testing people who claim to have long-time Net experience: 'Explain the difference between Archie and Veronica'.)

      GOPHER was the tried and true text retrieval system. Besides, UDP has an RFC, and is a perfectly legit method of moving data, provided you accept its limitations and ensure you correct for them. TCP has a lot of overhead that is not always required. If our network breaks because someone sends UDP, it's we who need to reread our RFCs.

      'Nuff Said.
    • The article quotes that "...FTP requires packets to arrive in sequence, and TCP requires a receiving end to acknowledge every packet that arrives, so that dropped packets can be resent..." This is incorrect. TCP has a concept of sliding windows where once a number of packets has been received successfully, the receiver increases the number of packets that can be sent without an ack. This is an exponential number, so if it receives 2 packets successfully, it will then tell the sender that it will take 4 before an ack is needed.
      TCP still acknowledges every packet; it just batches acknowledgements as an optimization. An important point, though, is that if you start dropping packets (or they arrive out of order) at a moderate rate (which could even be a small loss rate, depending on your delay and bandwidth), fast retransmit [google.com] (introduced in TCP Reno) will cause you to acknowledge every packet.

      In contrast, UDP never acknowledges (or disacknowledges) a packet.

    • Re:Article is wrong (Score:4, Interesting)

      by seanadams.com ( 463190 ) on Thursday December 13, 2001 @12:35PM (#2699596) Homepage
      Furthermore, UDP for data is highly unreliable, and I wouldn't trust it across WAN's.

      You're missing the point of UDP. UDP [faqs.org] is just a *tiny* layer on top of IP, which adds a little extra information (basically the port number) so that the OS can deliver a packet to the right application. UDP can not be compared with TCP - it doesn't provide reliability and flow control, and it has absolutely no notion of a stream of data. If desired, these can be provided in the application layer (see TFTP, NFS, etc.)

      TCP [ibiblio.org] is a reliable transport, but it's much, much more than that. First off, the fact that you're using TCP doesn't make the path between sender/receiver any more reliable. Your packets get dropped just the same as if they were UDP or any other protocol over IP. TCP provides reliability by retransmitting lost packets, but you knew that. It also provides flow control and congestion avoidance - this means detecting when the receiving end (and the router queues in between) are ready for more data, and throttling your transmission rate to match that capacity. It also means being "fair" to other streams sharing the same bottleneck(s). It does this by "backing off" the number of packets in flight, i.e. halving the congestion window, to reduce the data rate. These algorithms are a very active field of research - there's a *lot* more to TCP than meets the eye of a socket programmer.

      When TCP loses a packet, that packet must be retransmitted. This is expensive because it means another RTT.

      Anyhoo...

      You can think of FEC for data transmission as being analogous to RAID 5 for storage. By adding one extra bit (the XOR of the rest of the word) you can lose any single bit and still know its value. It's very simple. If the word is:

      0 1 0 1

      And I add an extra "parity" bit, where the parity bit is 1 is the number of ones in the rest of the word is odd, zero if it's even:

      0 1 0 1 [0]

      I can now lose any one bit (including of course the parity bit). Eg if I have only

      0 1 X 1 [0]

      Then I know the missing bit is a '0', because if it were '1' then the parity bit would have been a one, not the zero I received.

      Applying this to data transmission, you can see that by sending just one extra packet, you greatly reduce the chance of having to retransmit anything.

      E.g., if I have to send you 10 packets over a link with 10% packet loss, there's a 65% chance that I'll have to retransmit at least one of those packets (and a 10% chance that each retransmitted packet will have to be sent again, and so on).

      However, if I'm using FEC and I send one extra "parity" packet, then I only have to retransmit if TWO OR MORE packets are lost. The chance of losing two or more of the eleven packets is only 30%, so you can see that for an overhead of 10%, I've reduced the number of retransmits by a factor of more than two! I hope those figures are right. I used this tool [vassar.edu] to calculate them. Of course there are a lot of knobs you can turn depending on how much overhead you can afford for the parity information, and what degree of packet loss you want to be able to tolerate.

      Anyway, you can see that there are lots of possible improvements/alternatives to TCP - it's an old protocol and a lot of research has been done since RFC 793.
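      The single-parity case is easy to demonstrate at packet granularity (a sketch with made-up packet contents), and the same few lines check the retransmit figures above:

      from functools import reduce

      def xor(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      packets = [bytes([i]) * 8 for i in range(10)]    # ten 8-byte data packets
      parity = reduce(xor, packets)                    # the one extra parity packet

      lost = 3                                         # say packet #3 is dropped
      survivors = [p for i, p in enumerate(packets) if i != lost] + [parity]
      assert reduce(xor, survivors) == packets[lost]   # rebuilt by XORing the rest

      p = 0.10                                         # 10% per-packet loss
      print(1 - (1 - p) ** 10)                         # ~0.65: some loss among 10
      print(1 - (1 - p) ** 11 - 11 * p * (1 - p) ** 10)  # ~0.30: 2+ lost of 11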
  • Udpcast [linux.lu] in FEC mode does this too: in addition to the original data, it can transmit an arbitrary amount of "FEC" blocks which are a linear combination of the data blocks. If some data blocks are lost in transit, udpcast can recalculate them from the FEC blocks by multiplying the vector of received data by the inverse encoding matrix.
  • Quoted from the article:
    In this case, the Transporter Fountain creates not equations but hundreds of millions of "symbols" which can be used to reconstruct the data. The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. The receiving box then performs an XOR operation on the symbols to derive the original data.
    So, assuming that each "symbol" is at least one byte, then creating "hundreds of millions" of these symbols would result in hundreds of megabytes of data. Furthermore, the guy quoted 20MB as being a large amount of data to send.

    Conclusion: Only sales & marketing would try to sell a product that turns 20MB into 100MB, sends it via UDP, only in order to have the results XOR'd together.

    Where do they get these people?

  • Tornado Codes (Score:5, Informative)

    by Jonas Öberg ( 19456 ) <jonas@gnu.org> on Thursday December 13, 2001 @09:24AM (#2698560) Homepage
    While not actually related, John Byers, Michael Luby and Michael Mitzenmacher wrote a paper on using Tornado codes to speed up downloads. Basically, what they propose is clients accessing a file from more than one mirror in parallel and using erasure codes to make the system feedback-free. The abstract:

    Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. Our scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. Our approach applies naturally to wireless networks and satellite networks as well.

    I don't have the paper in a computer format, but the number is TR-98-021 and John and Michael were both at Berkeley at the time (1998), so it should be fairly easy to find if someone is interested. Doubtless, a number of other reports on the subject also exist that deal with the same problem but with different solutions.
    • by Anonymous Coward

      While not actually related, John Byers, Michael Luby and Michael Mitzenmacher wrote a paper on using Tornado codes to speed up downloads. Basically, what they propose is clients accessing a file from more than one mirror in parallel and using erasure codes to make the system feedback-free.

      Those "Tornado Codes" you mentioned are coauthored by one of the executives of this company (Luby is CTO).
  • Sorry, but the whole article seems to make some magic mumbo-jumbo out of the process. Apparently the file is transformed, but how does that transformation help? The main difference between UDP and TCP in this case is that TCP maintains the sequence of packets, so after splitting a file up, sending it as TCP packets and combining it again, all parts (sent as packets) are in the right place. UDP does no such thing, and UDP also doesn't check whether a packet really reached its destination. This frees UDP of some overhead TCP has. But to send a large file (with a simple approach), you now have to label each UDP packet with a sequential number and, at the end, check whether all packets arrived (and maybe request missing packets again), then rearrange them according to the sequence numbers.

    Now I don't see how a transformation of the content helps here: instead of adding the information about where in the file the packet goes (a kind of serial number), you now have to label where in the equation it should go (a kind of coefficient index), so the receiving end knows whether it has all the information, and which information is still missing and must be requested again.
    • That's not it. Because of their algorithm, order doesn't matter, and neither do dropped packets. The receiver only needs any n packets, so the transmitter keeps sending until the receiver says it got enough. Then the results are magically XORed to get the original file. So, no, you don't need a sequential number, you don't need to check if all packets arrived, and you don't have to rearrange them.
      • Yup, reading a few answers here, I'm already reading up on FEC (why didn't the EETimes mention this?). The nifty thing about this is that it's also a nice way to store information in a distributed way (think Freenet, Gnutella, ...), as far as I understand it. This opens a whole new perspective on the whole P2P theme, since a piece of information wouldn't be stored on one host alone, and also because of the implied redundancy, meaning that the information would still survive in the network until a critical number of hosts were disconnected. Also, the network load would be distributed on the sending end, opening some interesting ways for congestion control.
  • It's an encoding scheme that sends you the instructions on how to build something rather than the stuff itself. Not as special as they make it sound. Saying that you get the data without it being sent to you is the biggest troll for clueless mid-level managers who want to download their "repr0ns" faster. Not that I'm even sure it will work that well....
  • This is basically what the guys doing reliable multicast get up to plus what you do for tape backups. It isn't particularly new.

    You create your data in records and groups. Each group contains a longitudinal XOR of the other records within the block. This comes from tape backups that were proof against bad spots, and was later used in RAID.

    You then sequence and shoot your data in records across the network. If one record is dropped, it can be recreated through the XOR redundancy record. If two records are dropped, you need a rerequest mechanism. This can be either on UDP or via a separate TCP link.

    If you want an example of prior art, go to the trading room of a bank. How do you think all those prices are delivered to every workstation?

  • ...wasn't there a company using this for a fast file download application? User would go to download a game demo or something, receive pieces from several different places, and knit them together?

    michael, this is not what the product does. From the article:
    By translating a packet stream into mathematical elements, the company eliminates the back-and-forth transactions that confirm whether data has reached its destination. In the Digital Fountain approach, the receiving end waits until it has received a certain number of packets, then signals the transmitting side to stop sending. The operation doesn't require a network processor, but relies instead on the computational power of standard PC processors.

    The quirk is that none of the data is ever transmitted; the receiving end creates its own copy of a file based on a complete set of mathematical equations.

    It appears as though the signal is broken down into equations that, when combined, produce the original data. These equations are all sent from the same server to the destination client. The speed increase then comes from the fact that the size of the equations is less than the size of the data.

    The article does not mention that the equations come from multiple servers, which is a very big difference! IMO, this technology is much more newsworthy than yet another multi-server downloading tool like Kazaa.
  • ok, people, you seem to be far away from your thinking apparatus.
    compression takes data and applies an algorithm to it to generate new data that is a representation of the whole.

    contrary to that, this transforms the data into an algorithm that simulates the whole.

    it is a minor point of order, but one worth thinking about because it is theoretically different.

    this is also not distributed, it is point to point. I don't know where the submitter got the distributed model from - I suspect he pulled it out of the air - but this is very clearly not that. It requires a machine at each end of the point-to-point link.

    however, logically, all data that is not random can be represented by equations that are smaller than the data. However, the real load is in the generation of the equations from the data, not the reconstruction of the data itself, so to me this seems quite a possible project, though i suppose it will take quite a bit of optimization.

    one other thing: the article does not say that it uses a standardized technique, and it would be interesting if they did not use standard vector analysis or the like. If they used vectors, then this could just reduce to a system of file conversion from one standard to another. I think it would be far more interesting if it were what it claims to be: a conversion of the file into a mathematical simulation of the file.
  • From the article:
    "FedEx is a hell of a lot more reliable than FTP when you're running 20 Mbytes," said Charlie Oppenheimer, vice president of marketing at Digital Fountain.

    Maybe I'm misunderstanding the quote, but I semi-regularly download 600+ MB iso images and other multiple-hundred-meg files over ftp links at work, and I've never had problems. It's probably just me, but that really sounds like a load of bull, designed to whip up interest in an otherwise lack-lustre product (given its high price).

    Cheers,

    Tim
  • See http://www.digitalfountain.com/getDocument.htm/technology/DF_MetaContentWhitePaper_v4.pdf
  • The technique is called Forward Error Correction [google.com]. I don't know much about it, but I know that you can do things like break up a file of N*B bytes into 2N blocks of B bytes each and then be able to reconstruct the file from any N of the 2N blocks. The GPL'ed Mojonation [mojonation.net] system uses it, if I recall correctly, to distribute each 32kB chunk of data into eight 8kB blocks allowing reconstruction from any four of them.
  • Luby Transform Codes (Score:5, Interesting)

    by Detritus ( 11846 ) on Thursday December 13, 2001 @09:52AM (#2698715) Homepage
    After looking through some of the material on the company's web site, this product appears to be based on LT (Luby Transform) coding. Each encoded packet is the result of XORing a randomly selected set of segments from the original file. When sufficient packets have been received, they can be used to reconstruct the original file. Insert magic algorithm. The nice thing about this is that the transmitter can continually stream packets, and a receiver can start collecting packets at any point in time. When the receiver has collected sufficient packets, it can reconstruct the original file. Packet ordering is totally irrelevant. You just need enough packets to generate a unique solution. The math for the code has not been published yet, but this is supposed to be a successor to tornado codes, which have been described in the literature.
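    A stripped-down sketch of that idea in Python (purely illustrative: each encoded packet XORs a few randomly chosen source blocks and carries their indices, and the receiver "peels" degree-one packets until everything is resolved; the real LT codes use a carefully tuned degree distribution, which is exactly the unpublished part, so the uniform random degrees here are far less efficient):

    import random

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(blocks):
        """Endless fountain of (indices, payload) packets."""
        while True:
            idx = random.sample(range(len(blocks)), random.randint(1, 3))
            payload = blocks[idx[0]]
            for i in idx[1:]:
                payload = xor(payload, blocks[i])
            yield set(idx), payload

    def decode(packets, k):
        """Peel degree-one packets until all k source blocks are known."""
        recovered = [None] * k
        work = [[set(idx), payload] for idx, payload in packets]
        progress = True
        while progress and None in recovered:
            progress = False
            for pkt in work:
                idx, payload = pkt
                for i in [i for i in idx if recovered[i] is not None]:
                    payload = xor(payload, recovered[i])   # subtract known blocks
                    idx.discard(i)
                pkt[1] = payload
                if len(idx) == 1:                          # one unknown left: solved
                    recovered[idx.pop()] = payload
                    progress = True
        return recovered

    blocks = [bytes([65 + i]) * 4 for i in range(8)]   # 8 source blocks
    stream, received = encode(blocks), []
    while True:                                        # listen until we have enough
        received.append(next(stream))
        out = decode(received, len(blocks))
        if None not in out:
            break
    assert out == blocks

    Packet ordering never matters here, and a receiver that tunes in late simply keeps collecting until the decoder finishes.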
  • security (Score:2, Offtopic)

    by night37 ( 543185 )
    I wonder how hard it would be to hijack a UDP-based session like this. What if bogus packets are injected along with the stream of valid ones? Does the math include any form of encryption? Or is this a tunnel for other protocols? Damn it, we need to move away from clear-text protocols, not create new ones!
  • Swarmcast (Score:5, Informative)

    by Orasis ( 23315 ) on Thursday December 13, 2001 @09:59AM (#2698753)
    The "Math" they use is called Forward Error Correction (FEC) and is the same stuff that the Swarmcast distributed download system is based off of (http://sf.net/projects/swarmcast/ [sf.net]).

    I am the creator of Swarmcast, and we at Onion Networks (http://onionnetworks.com/ [onionnetworks.com]) already have a product that can provide file transfers just as fast as Digital Fountain, but ours is 3-5.5% more efficient and much much cheaper.

    On the open source front, we are working on the next generation of the Swarmcast technology which will combine parallel downloads over HTTP, Multicast IP, and UDP (for tunnelling through NAT). We have started work defining a standard for this system called the "Content-Addressable Web" [onionnetworks.com] and hope to see it implemented in everything from Apache to Mozilla.

    Please mod this up, people shouldn't be paying $150k for these types of technologies.
    • FEC Library (Score:5, Informative)

      by Orasis ( 23315 ) on Thursday December 13, 2001 @10:17AM (#2698854)
      Oh yeah, and you can download our completely patent-free and open source FEC library from here [onionnetworks.com] and build your own Multicast or UDP based download system very quickly (provided you get the flow control right :)

      --
      Justin Chapweske, Onion Networks [onionnetworks.com]
    • shouldn't be paying $150k for these types of technologies.

      You haven't looked at the outrageous prices of Cisco equipment lately, have you? :-)

      Assuming these boxes are for accelerating multicast applications and preserving WAN bandwidth, then US$70,000 per box sounds competitive with Cisco. Put a large one of these boxes at the headend of a multicast, and one box for each LAN where there are a number of receiving clients, and there could be considerable savings in WAN usage.

      /. readers don't realize how much a WAN link costs, and how those costs jump in large multiples when a PHB demands more bandwidth for his pet project. Especially outside of the US, where international WAN links cost upwards of $10,000/month for 2 Mbit/second, and adding another 2 Mbit/sec will cost another $10,000 plus a long wait for turn-up. If the PHboss absolutely has to roll out a multicast application immediately, I'd throw a handful of these boxes on the network and not bother with buying more bandwidth for a while.

      the AC
  • by tonywestonuk ( 261622 ) on Thursday December 13, 2001 @10:13AM (#2698818)
    I expect this system is a little like RAID 5, used on hard drives. E.g., with 5 disks, 1 goes down and you still have enough data to restore the failed drive. This seems similar to being sent 5 million UDP packets, where 1 million get lost on the way, yet you can still piece together perfectly what you wanted from the remaining 4 million.
  • There is nothing "proprietary" here. The techniques for reliable transmission of digital data over unreliable media has been a central area of EE research for at least three decades. The unreliable media is now UDP instead of broadcast RF or transmission lines.

    Line reliability in the "normal" sense is classified by bit error rates. Here, the analogous rating would be packets dropped per second. So, it seems like a straightforward application of FEC. It is useful for the reasons that they state. TCP transmissions look like this:


    Client: Request Data Packet #00001
    Server: Transmit Data Packet #00001 then wait for Client to say OK or Retransmit.
    Client: Request Data Packet #00002
    Server: Transmit Data Packet #00002 then wait for Client to say OK or Retransmit.
    ....
    Client: Request Data Packet #20343
    Server: Transmit Data Packet #20343 then wait for Client to say OK or Retransmit.
    DONE

    The FEC/UDP form would look like this:

    Client: Request Data FILE
    Server: Transmit FEC Data Packet #00001
    Server: Transmit FEC Data Packet #00002
    ...
    Server: Transmit FEC Data Packet #20343
    Client: OK...I got it. Stop Transmitting.
    DONE

    There is no per-packet dialog between server and client. This removes network latency from the transfer equation. Note that FEC over UDP does require redundant information to be transmitted, so there is no free lunch here, but it is certainly likely to be faster than any TCP/IP implementation.
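    A skeletal version of that second dialog over real UDP sockets (a sketch only: the encoding step is elided, and the single "DONE" control datagram is invented for illustration):

    import select
    import socket

    def send_file(encoded_packets, dest, port=9999):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for pkt in encoded_packets:              # a long (or endless) encoded stream
            sock.sendto(pkt, (dest, port))
            ready, _, _ = select.select([sock], [], [], 0)   # poll for the stop signal
            if ready and sock.recv(16) == b"DONE":
                break
        sock.close()

    def receive_file(needed, port=9999):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        got, sender = [], None
        while len(got) < needed:                 # any packets will do, in any order
            pkt, sender = sock.recvfrom(2048)
            got.append(pkt)
        sock.sendto(b"DONE", sender)             # the single ack for the whole file
        sock.close()
        return got                               # hand off to the FEC decoder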
  • by TheSync ( 5291 ) on Thursday December 13, 2001 @10:21AM (#2698873) Journal
    OK folks, here is the "real deal."

    Digital Fountain's core technology is called "meta-content (tm)". The meta-content engine produces packets based on the original content, such that if the receiver receives enough packets (just slightly more than the size of the original content), the original content can be recreated. The neat part is that it doesn't matter which meta-content packets you get. If you need to receive 10 packets, you can get 5, miss 5, get another 5, and it works. Or you can get 1, miss 10, get 9, and it works as well. As long as you receive some 10 packets from the "fountain," you can recreate the original content.

    Why is this cool? Several reasons. Digital Fountain claims that TCP connections with RTT of 200ms and 2% packet loss have a bandwidth limitation of 300kbps, no matter how fast the actual transport channel is. So you just go to town to full bandwidth with UDP to use up the entire channel, and use Digital Fountain technology so it doesn't matter which 2% of packets get lost, just as long as you transmit enough packets to make up for the packet loss.

    OK, why else is this cool? Imagine a Digital Fountain continuously transmitting meta-content on a multicast address. If you want to receive a file, just listen to the multicast address. It doesn't matter when you start listening, just as long as you listen for enough packets to recreate the original file. Multicast file distribution.

    Interestingly enough, Digital Fountain has also pioneered multicast on-demand streaming, but the real secret sauce there is something besides meta-content, but meta-content makes it easier.

    As some people have mentioned, you can use UDP with FEC to achieve some error correction. But meta-content can handle long burst errors, whereas FEC is only appropriate for short, random errors. You can literally unplug the ethernet, wait a while, and plug it back in, and you're still good to go with Digital Fountain, as long as you listen long enough.

    I should mention, DF has something called "Fair Layered Increase Decrease Protocol," or FLID, to keep their UDP burst from blowing away other TCP traffic on the network.

    For more information on the basic Digital Fountain technology, see: A Digital Fountain Approach to Reliable Distribution of Bulk Data [berkeley.edu].
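    The real codes are proprietary, but you can get the flavor of "any sufficient set of packets will do" with a toy version: each packet is the XOR of a random subset of source blocks, and the receiver decodes by "peeling" packets that are left with a single unknown block. Everything below, including the uniform degree choice, is a simplification of my own; a real code uses a carefully tuned degree distribution so the overhead beyond the minimum number of packets stays tiny.

    import random

    def encode(blocks, n_packets, seed=1):
        """Each packet = (which source blocks went in, XOR of those blocks)."""
        rng = random.Random(seed)
        k, blen = len(blocks), len(blocks[0])
        packets = []
        for _ in range(n_packets):
            idxs = frozenset(rng.sample(range(k), rng.randint(1, k)))
            payload = bytearray(blen)
            for i in idxs:
                for j in range(blen):
                    payload[j] ^= blocks[i][j]
            packets.append((idxs, bytes(payload)))
        return packets

    def decode(packets, k, blen):
        """Peeling decoder: repeatedly solve packets that reference one unknown block."""
        recovered = {}
        pending = [(set(idxs), bytearray(payload)) for idxs, payload in packets]
        progress = True
        while len(recovered) < k and progress:
            progress = False
            for idxs, payload in pending:
                for i in [i for i in idxs if i in recovered]:   # substitute known blocks
                    for j in range(blen):
                        payload[j] ^= recovered[i][j]
                    idxs.discard(i)
                if len(idxs) == 1:                              # one unknown left: solved
                    recovered[idxs.pop()] = bytes(payload)
                    progress = True
        return [recovered[i] for i in range(k)] if len(recovered) == k else None

    blocks = [bytes([65 + i]) * 8 for i in range(10)]   # 10 source blocks of 8 bytes
    packets = encode(blocks, 40)
    random.shuffle(packets)                             # which packets arrive, and in what order, shouldn't matter
    got = decode(packets[:25], k=10, blen=8)            # keep any 25 of the 40
    print("rebuilt the file" if got == blocks else "toy decoder needs a few more packets")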
    • Digital Fountain claims that TCP connections with RTT of 200ms and 2% packet loss have a bandwidth limitation of 300kbps, no matter how fast the actual transport channel is.

      First, 2% packet loss is terrible, even on a WAN.

      Second, 200ms is terrible latency, unless you're crossing an ocean.

      Neglecting packet loss (which ought to be negligible, though admittedly isn't at 2%), your maximum TCP throughput will be (TCP window size)/RTT or the link bandwidth, whichever is less. With a 32 KB window that comes to about 1300 kbps on a 200 ms round trip if you aren't using TCP window scaling options, and much higher if you do.
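      Back-of-the-envelope, for anyone who wants to check both limits (rough Python; the 32 KB window is just an assumed pre-window-scaling default, and the loss-limited figure is the well-known Mathis et al. approximation, not Digital Fountain's exact model):

      from math import sqrt

      rtt    = 0.200        # seconds, round trip
      loss   = 0.02         # 2% packet loss
      window = 32 * 1024    # bytes (assumed default, no window scaling)
      mss    = 1460         # bytes

      window_limit = window * 8 / rtt                       # window/RTT ceiling
      loss_limit   = (mss * 8 / rtt) * (1.22 / sqrt(loss))  # Mathis et al. approximation

      print(f"window/RTT limit: {window_limit / 1e3:.0f} kbps")   # ~1300 kbps
      print(f"loss limit:       {loss_limit / 1e3:.0f} kbps")     # ~500 kbps

      With these numbers it's the loss term, not the window, that binds, and it lands within a factor of two of Digital Fountain's quoted 300 kbps (the exact constant depends on the TCP flavor and how timeouts are modeled).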
    • You can easily use FEC for burst errors; you just use a cross-interleaved encoding on top of the basic encoding. That's what CD players do, and they can eat a 4000-bit-long burst error without a hiccup. And CD-ROM encodes another layer on top of CD Audio, and thus can sustain even larger burst errors.
    • As some people have mentioned, you can use UDP with FEC to achieve some error correction. But meta-content can handle long burst errors, whereas FEC is only appropriate for short, random errors.

      This depends on the parameters of your FEC algorithm. Most FEC algorithms do indeed divide the data to be transmitted into "slices" of n blocks, to which they add k blocks. If more than k blocks are lost per slice, you're SOL, even if enough extra blocks are available in other slices. However, there is a way around that: just tune your FEC algorithm so as to make n large enough that all of your file fits into one slice.

      The downside of this is that the larger the slice is, the more computationally intensive it is. If the only thing you're concerned about is burst errors, just interleave your slices (a sketch of this follows below).

      You can literally unplug the ethernet, wait a while, and plug it back in, and you're still good to go with Digital Fountain, as long as you listen long enough.

      This is possible with run-of-the-mill FEC algorithms as well, as long as you put your entire file into a single giant slice.
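      A sketch of the interleaving trick (toy Python, made-up sizes): write the blocks out slice by slice but transmit column by column, so a burst of consecutive losses on the wire lands as at most one loss per slice, which the per-slice FEC can then repair.

      # 4 slices of 4 blocks each; assume the code repairs 1 lost block per slice
      slices = [[f"s{s}b{b}" for b in range(4)] for s in range(4)]

      # transmit column-by-column instead of slice-by-slice
      wire_order = [slices[s][b] for b in range(4) for s in range(4)]

      # a burst wipes out 4 consecutive packets on the wire...
      lost = set(wire_order[5:9])

      # ...but no slice loses more than one block, so every slice is repairable
      for s, slc in enumerate(slices):
          print(f"slice {s}: {sum(blk in lost for blk in slc)} lost block(s)")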

  • NOT NEWS. (Score:5, Informative)

    by tqbf ( 59350 ) on Thursday December 13, 2001 @10:23AM (#2698882) Homepage
    Digital Fountain has been around, with product, for a long time. The technique they are building on for file transfer has been around even longer. Their protocols are even IETF drafts/standards now.

    The concept of "sending equations" instead of data is extremely well-known. It's called Forward Error Correction (FEC). FEC is a very simple idea: take the source data and encode it with parity data so that you can reconstruct the original message from any N chunks of it. One form of FEC that even your stereo components might already do is Reed-Solomon encoding; you can look this up in CS textbooks. If you Google for "Luigi Rizzo Vandermonde", you'll find a fast, free C library for FEC that you can use in your own applications.

    Digital Fountain was started by Mike Luby, who is something of a luminary in information theory/cryptography. The kernel of their company's IP is "tornado codes", an FEC codec that is leaner and faster than competing codecs. The basis for their protocols, last time I checked, is FLID/DL and RLC. These protocols set up multiple channels (or "sources" if you must), each transmitting random chunks of FEC-encoded files.

    The drawbacks to FEC are that it can take a long time to encode data for transfer, which makes FEC hard for some real-time applications like mail, and that the amount of data transferred is going to be some percentage greater than the original file (owing to parity information). But the drawback to FEC file transfer protocols is much more significant: they aren't TCP-friendly.

    The whole Internet depends on protocols that have built-in congestion responses that mimic those of TCP. Protocols that don't either starve TCP flows, or are starved by them. Protocols with no real congestion response at all rapidly destabilize Internet links by consuming all available resources. Digital Fountain originally targeted multicast media transfer. Congestion avoidance in multicast schemes is still an open research question. Does this protocol really scale?

    More to the point, what's the benefit? There's obvious payoff for FEC in multicast, where backtraffic from ACKs and NAKs quickly overwhelms the group and kills performance. But in unicast-world, where we will all be living for the next decade, TCP and TCP-friendly forward-reliable protocols like SCTP already provide good performance.

    Slow news week at EETimes I guess. Or Digital Fountain just hired a PR firm.

    • Re:NOT NEWS. (Score:2, Informative)

      by adadun ( 267785 )
      The whole Internet depends on protocols that have built-in congestion responses that mimic those of TCP. Protocols that don't either starve TCP flows, or are starved by them. Protocols with no real congestion response at all rapidly destabilize Internet links by consuming all available resources. Digital Fountain originally targeted multicast media transfer. Congestion avoidance in multicast schemes is still an open research question. Does this protocol really scale?

      Yes, their protocol uses TCP-friendly congestion control - read about it in their SIGCOMM paper [acm.org].
  • I was a skeptic... (Score:5, Informative)

    by DaoudaW ( 533025 ) on Thursday December 13, 2001 @11:21AM (#2699166)
    For example, consider the transfer of a 1 GB file with an average Internet packet loss rate (2%) and global average RTT (just over 200 msec). For this case TCP has an inherent bandwidth limitation of 300 Kbps regardless of link speeds and thus will take more than 7 hours to transfer the file. A Digital Fountain server sending Meta-Content allows almost complete bandwidth utilization. With Digital Fountain, on a fully utilized 1.5 Mbps (T1) line the transfer of the 1 GB file will take about 1.5 hours and on a fully utilized 45 Mbps (T3) line the transfer time will be a little over 3 minutes.

    I was a skeptic, but now that I've read the Digital Fountain white papers (the Meta-Content Technology White Paper [digitalfountain.com] and the TCP White Paper [digitalfountain.com]), I'm a True Believer.

    I don't pretend to understand all the details, but the technology is based on the Luby transform [google.com], which was invented by the company's founder. Essentially, the content is broken into segments, which are then randomly XORed to produce the meta-content. This provides redundancy, since each meta-content segment contains information from several of the original segments. Also, the meta-content segments do not require sequential transmission.
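    The arithmetic behind the quoted transfer times is easy to check (rough numbers, ignoring protocol overhead; 1 GB taken as 8e9 bits):

    for label, rate_bps in [("TCP at ~300 kbps", 300e3),
                            ("full T1 (1.5 Mbps)", 1.5e6),
                            ("full T3 (45 Mbps)", 45e6)]:
        seconds = 8e9 / rate_bps
        print(f"{label:20s}: {seconds / 3600:.1f} h ({seconds / 60:.0f} min)")
    # -> about 7.4 hours, 1.5 hours, and 3 minutes respectively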
  • by Animats ( 122034 ) on Thursday December 13, 2001 @01:54PM (#2700025) Homepage
    The article claims that there's no overhead added, in terms of extra data, to transmit data in this way. That has to be wrong. If you add redundancy, you have to add bits. Maybe not very many, but some. Basic theorem from Shannon.

    I'd guess they have to send about 2x as much data as the original for this to work. Maybe they're applying a compressor (like gzip) up front, and taking credit for that.

    As others have pointed out, there are lots of other schemes in this class. Bear in mind that this is a protocol for broadcast and multicast, not general file transfer.
