UDP + Math = Fast File Transfers 449
Wolfger writes: "A new file transfer protocol talked about in EE Times gets data from one place to another without actually transmitting the data..." Well, the information is transmitted, just not in its original form. I think there are a couple of other places working on similar ideas - wasn't there a company using this for a fast file download application? User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name.
Kazaa does that (Score:2, Informative)
The file sharing networks based on FastTrack technology do that. You download a movie or game from different users at the same time. Kazaa stitches it back together.
getright (Score:4, Informative)
Other Applications (Score:2, Informative)
GetRight uses multiple sites to download from and then pieces them back together.
http://www.getright.com [getright.com]
Flow Control (Score:3, Informative)
Michael, did you even read it? (Score:5, Informative)
This technology (from the write-up anyway) uses some kind of proprietary technique to re-map the data into another domain and send the information required to reproduce it. It sounds kind of like sending a waveform as a series of Fourier coefficients rather than as actual data samples. By changing to a different domain, it is possible to send metadata from which the original information can be recreated.
I have no idea exactly how they would do it, but it doesn't sound completely impossible.
However, it's nothing like Kazaa or GetRight.
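For what it's worth, the Fourier analogy is easy to demo. Here's a toy round trip in Python - a naive DFT/IDFT pair, nothing to do with whatever proprietary transform the company actually uses - showing that a waveform can be shipped as coefficients in another domain and recreated exactly:

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform: samples -> coefficients."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def idft(coeffs):
    """Inverse transform: coefficients -> original samples."""
    n = len(coeffs)
    return [sum(coeffs[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n
            for t in range(n)]

waveform = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
coeffs = dft(waveform)      # the "other domain" representation
restored = idft(coeffs)     # recreated from the coefficients alone
assert all(abs(a - b) < 1e-9 for a, b in zip(waveform, restored))
```

Of course, this by itself buys you nothing on the wire (same number of values either way); the interesting part is what domain you pick.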
Swarmcast (Score:4, Informative)
Name of company and product (Score:4, Informative)
The company's name is OpenCola [opencola.com] and the name of the product was SwarmCast. The guy who did SwarmCast, Justin Chapewske, is now at a company he started named Onion Networks [onionnetworks.com]. OpenCola appears to have completely abandoned its original Open Source approach to their software.
Apparently, Justin has taken the GPL portions of Swarmcast and is improving them at Onion Networks.
Not a new concept (Score:2, Informative)
This is called compression. Everybody is doing it and it has been done before.
When you download a ZIP file, you are not downloading the content. You are downloading a mathematically transformed version of it. You then translate it back. Modems have been compressing and decompressing on the fly since the late 1980s.
Maybe they have a better compression scheme? (Fractal based?) That would be news. Everything else is a distraction.
Re:OOPS, name of person slightly wrong (Score:3, Informative)
Oops, make that Justin Chapweske. That's what I get for typing out an odd name from memory. :-)
my take (Score:5, Informative)
Judging from this:
The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. The receiving box then performs an XOR operation on the symbols to derive the original data.
It appears to me that the transmitting side generates the symbols (parameters of the equations, I guess) and begins sending them to the receiving side as fast as it can. Apparently there are multiple solutions to the equations that will arrive at the same answer, so when the receiving end has received enough symbols to make it work, it says, "stop sending already!" Apparently they're getting their speed because A) things don't have to go in any order (that's how the 'net is supposed to work, right?) and B) Alice and Bob don't have to keep up this conversation:
Alice: Hey, Bob, can you send me X?
Bob: Okay, are you ready?
A: Yes, go ahead.
B: Okay, here it comes.
A: I'm waiting.
B: Here's the first packet.
A: What? That packet didn't make it over.
B: Okay, here it is again.
A: Okay, I got that packet.
B: Good.
A: Okay, I'm ready for the second packet.
B: Okay, here's the second packet.
Okay, I had too much fun with the Alice and Bob conversation there. Anyway, it looks like their scheme is compressing things in the form of their equations, and then just sending them in a burst until the receiver is happy.
Sounds like it might work, but it'll generate a ton of network traffic, I'd bet!
Re:Compression? (Score:4, Informative)
In essence, isn't this the same as file compression? The amount of information is the same (for those who remember what a bit is).
It is more than merely compression. The received data is compressed, which saves transmission time, but this technology is already well known (and the company isn't claiming a compression rate better than entropy [data-compression.com], or anything else silly). The innovation here is the elimination of acknowledgement or ARQ packets. I'm speculating here, but it looks like they are encoding the data by transforming a file into a huge "codeword" -- when the codeword is transmitted, the receiver waits for enough packets to correctly decode the codeword, which results in the recovery of the file. There's no need for ARQ or TCP because transmitting extra codeword elements will automatically correct any errors incurred in transmission.
FTP already provides for this (Score:2, Informative)
Re:Vectors... (Score:5, Informative)
Please note that I'm not saying this is a good scheme. It is just an example one, and one that doesn't detail the chunk polynomial conversion, which would be very important. There are several papers describing schemes where people have actually worked at making them tenable.
Modulo compression, if you want such a system to require only receiving k packets (although you send more than that), the sum of the size of the k packets must be at least the size of the original file (otherwise, you could use such a scheme to compress the file).
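To make that concrete, here's a minimal Python sketch of the simplest such scheme - k data chunks plus one XOR parity packet, so any k of the k+1 packets recover the file. Note the total bytes on the wire exceed the file size, exactly as the bound above requires. This is an illustration, not anyone's actual protocol:

```python
def xor_chunks(chunks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def make_packets(data, k):
    """Split data into k equal chunks and append one XOR parity chunk."""
    assert len(data) % k == 0
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [xor_chunks(chunks)]   # k + 1 packets on the wire

def recover(packets, lost_index, k):
    """Rebuild the file when any single packet (index lost_index) is dropped."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    if lost_index == k:                    # only parity lost: data intact
        return b"".join(survivors)
    missing = xor_chunks(survivors)        # XOR of survivors = the lost chunk
    chunks = survivors[:-1]                # drop parity, reinsert missing chunk
    chunks.insert(lost_index, missing)
    return b"".join(chunks)

data = b"twenty-four byte file!!!"
packets = make_packets(data, k=4)
assert all(recover(packets, lost, 4) == data for lost in range(5))
```

Surviving two losses per group needs something stronger than a single parity chunk (Reed-Solomon, etc.), but the counting argument is the same.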
XOR = advanced algorithm (Score:2, Informative)
Conclusion: Only sales &amp; marketing would try to sell a product that turns 20MB into 100MB and sends it via UDP, just to have the results XOR'd together.
Where do they get these people?
Re:Link to product (Score:3, Informative)
Oh, yes, here's a link to the SourceForge project for SwarmCast [sourceforge.net].
Tornado Codes (Score:5, Informative)
I don't have the paper in a computer format, but the number is TR-98-021 and John and Michael were both at Berkeley at the time (1998), so it should be fairly easy to find if someone is interested. Doubtless, a number of other reports should also exist that deal with the same problem but with different solutions.
Re:Not a new concept (Score:4, Informative)
UDP drops packets. What they are saying is they can packetize things in such a way that as soon as you pick up any N packets, you get the file, no matter what. They are also implying that anything less than N packets leaves you gibberish. This is quite different from file compression. It may be related to fractal file compression, but I think it's probably more related to cryptographic key sharing schemes.
Re:Vectors... (Score:1, Informative)
I bet these guys have something like that: add a bit of redundancy, split things up, blast it out over UDP to the destination. The receiving end puts it back in order and tries to recover from missing packets by using the XOR operations. It takes more bandwidth but still may be faster than FTP since it doesn't spend most of its time doing handshakes.
I don't get why the box costs 70-150 kilobucks though. Yikes.
read then speak, not distributed, not compression (Score:2, Informative)
Compression takes data and applies an algorithm to it to generate new data that is a representation of the whole.
Contrary to that, this transforms the data into an algorithm that simulates the whole.
It is a minor point of order, but one worth thinking about, because it is theoretically different.
This is also not distributed; it is point to point. I don't know where the submitter got the distributed model from - I suspect he pulled it out of the air - but this is very clearly not that. It requires a machine at each end of the point-to-point link.
However, logically, all data that is not random can be represented by equations that are smaller than the data. The real load is in generating the equations from the data, not in reconstructing the data itself, so this seems to me quite a possible project, though I suppose it will take quite a bit of optimization.
One other thing: the article does not say that it uses a standardized technique, and it would be interesting if they did not use standard vector analysis or the like. If they used vectors, then this could just reduce to a system of file conversion from one standard to another. I think it would be far more interesting if it is what it claims to be: a conversion of a file into a mathematical simulation of the file.
More details of their method (Score:2, Informative)
The RFC (Score:5, Informative)
The use of Forward Error Correction in Reliable Multicast
Enjoy.
Forward Error Correction, MojoNation (Score:2, Informative)
Might Be more Related than you think (Score:3, Informative)
Those "Tornado Codes" you mentioned are coauthored by one of the executives of this company (Luby is CTO).
Re:Article is wrong (Score:1, Informative)
Look at Google for information about forward error correction, erasure codes, tornado codes, fcast and/or Luigi Rizzo. There is more to this topic than PR from business types.
I understand that Microsoft used fcast to multicast nightly builds over a LAN and found the rate-limiting step to be the write speed of the hard drives - the math to generate the message is not a serious strain for modern CPUs. If all of us could speak fcast, getting the latest Linux/*BSD distro via cable modem would go a lot faster.
Re:A good idea, but old (Score:3, Informative)
BTW, it helps to read the article before posting "insightful" comments.
Re:What's going on here? (Score:3, Informative)
Swarmcast (Score:5, Informative)
I am the creator of Swarmcast, and we at Onion Networks (http://onionnetworks.com/ [onionnetworks.com]) already have a product that can provide file transfers just as fast as Digital Fountain, but ours is 3-5.5% more efficient and much much cheaper.
On the open source front, we are working on the next generation of the Swarmcast technology which will combine parallel downloads over HTTP, Multicast IP, and UDP (for tunnelling through NAT). We have started work defining a standard for this system called the "Content-Addressable Web" [onionnetworks.com] and hope to see it implemented in everything from Apache to Mozilla.
Please mod this up, people shouldn't be paying $150k for these types of technologies.
Re:Michael, did you even read it? (Score:3, Informative)
Tornado codes (Score:5, Informative)
It's called Forward Error Correction (Score:2, Informative)
Line reliability in the "normal" sense is classified by bit error rates. Here, the analogous rating would be packets dropped per second. So, it seems like a straightforward application of FEC. It is useful for the reasons that they state. TCP transmissions look like this:
Client: Request Data Packet #00001
Server: Transmit Data Packet #00001 then wait for Client to say OK or Retransmit.
Client: Request Data Packet #00002
Server: Transmit Data Packet #00002 then wait for Client to say OK or Retransmit.
....
Client: Request Data Packet #20343
Server: Transmit Data Packet #20343 then wait for Client to say OK or Retransmit.
DONE
The FEC/UDP form would look like this:
Client: Request Data FILE
Server: Transmit FEC Data Packet #00001
Server: Transmit FEC Data Packet #00002
...
Server: Transmit FEC Data Packet #20343
Client: OK...I got it. Stop Transmitting.
DONE
There is no per-packet dialog between server and client. This removes network latency from the transfer equation. Note that FEC over UDP does require redundant information to be transmitted, so there is no free lunch here, but it is certainly likely to be faster than any TCP/IP implementation.
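Some back-of-envelope arithmetic on the two dialogues above (all numbers hypothetical; real TCP pipelines packets with a window rather than true stop-and-wait, but the latency point stands):

```python
# Hypothetical link: 100 ms RTT, 20,343 packets, 1 ms to serialize each packet.
rtt = 0.100
packets = 20343
serialize = 0.001

# Stop-and-wait as sketched above: every packet costs a full round trip.
stop_and_wait = packets * (rtt + serialize)

# FEC-over-UDP as sketched above: one request round trip, then a continuous
# burst, plus some redundancy overhead (assume 10% extra packets).
fec_udp = rtt + packets * 1.10 * serialize

print(f"stop-and-wait: {stop_and_wait:.0f} s, FEC/UDP burst: {fec_udp:.0f} s")
```

The burst version pays once for latency instead of once per packet, which is exactly the "removes network latency from the transfer equation" point.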
FEC Library (Score:5, Informative)
--
Justin Chapweske, Onion Networks [onionnetworks.com]
All about Digital Fountain (Score:5, Informative)
Digital Fountain's core technology is called "meta-content (tm)". The meta-content engine produces packets based on the original content, such that if the receiver receives enough packets (just slightly more than the size of the original content), the original content can be recreated. The neat part is that it doesn't matter which meta-content packets you get. If you need to receive 10 packets, you can get 5, miss 5, get another 5, and it works. Or you can get 1, miss 10, get 9, and it works as well. As long as you receive some 10 packets from the "fountain," you can recreate the original content.
Why is this cool? Several reasons. Digital Fountain claims that TCP connections with an RTT of 200ms and 2% packet loss have a bandwidth limitation of 300kbps, no matter how fast the actual transport channel is. So you just go to town at full bandwidth with UDP to use up the entire channel, and use Digital Fountain technology so it doesn't matter which 2% of packets get lost, as long as you transmit enough packets to make up for the packet loss.
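That 300kbps figure is about what the well-known Mathis et al. steady-state approximation for TCP throughput predicts. A quick sketch (the constant C varies with ACK strategy, and the function name and parameter choices here are mine):

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=0.93):
    """Mathis et al. approximation: throughput <= (MSS/RTT) * C / sqrt(p).
    C is roughly 0.9-1.3 in the literature depending on ACK behavior."""
    return (mss_bytes * 8 / rtt_s) * c / sqrt(loss_rate)

# The figures quoted above: 200 ms RTT, 2% loss, typical 1460-byte segments.
limit = tcp_throughput_bps(1460, 0.200, 0.02)
print(f"TCP ceiling ~ {limit / 1000:.0f} kbps")  # a few hundred kbps,
                                                 # the ballpark of the claim
```

Note the bound doesn't mention link speed at all - only RTT and loss - which is why a fat lossy pipe gets you nowhere with stock TCP.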
OK, why else is this cool? Imagine a Digital Fountain continuously transmitting meta-data on a multicast address. If you want to receive a file, just listen to the multicast address. It doesn't matter when you start listening, just as long as you listen for enough packets to recreate the original file. Multicast file distribution.
Interestingly enough, Digital Fountain has also pioneered multicast on-demand streaming, but the real secret sauce there is something besides meta-content, but meta-content makes it easier.
As some people have mentioned, you can use UDP with FEC to achieve some error correction. But meta-content can handle long burst errors, whereas FEC is only appropriate for short, random errors. You can literally unplug the ethernet, wait a while, and plug it back in, and you're still good to go with Digital Fountain, as long as you listen long enough.
I should mention, DF has something called "Fair Layered Increase Decrease Protocol," or FLID, to keep their UDP burst from blowing away other TCP traffic on the network.
For more information on the basic Digital Fountain technology, see: A Digital Fountain Approach to Reliable Distribution of Bulk Data [berkeley.edu].
NOT NEWS. (Score:5, Informative)
The concept of "sending equations" instead of data is extremely well-known. It's called Forward Error Correction (FEC). FEC is a very simple idea: take the source data and encode it with parity data so that you can reconstruct the original message from any N chunks of it. One form of FEC that even your stereo components might already do is Reed-Solomon encoding; you can look this up in CS textbooks. If you Google for "Luigi Rizzo Vandermonde", you'll find a fast, free C library for FEC that you can use in your own applications.
Digital Fountain was started by Mike Luby, who is something of a luminary in information theory/cryptography. The kernel of their company's IP is "tornado codes", an FEC codec that is leaner and faster than competing codecs. The basis for their protocols, last time I checked, is FLID/DL and RLC. These protocols set up multiple channels (or "sources" if you must), each transmitting random chunks of FEC-encoded files.
The drawbacks to FEC are that it can take a long time to encode data for transfer, which makes FEC hard for some real-time applications like mail, and that the amount of data transferred is going to be some percentage greater than the original file (owing to parity information). But the drawback to FEC file transfer protocols is much more significant: they aren't TCP-friendly.
The whole Internet depends on protocols that have built-in congestion responses that mimic those of TCP. Protocols that don't either starve TCP flows, or are starved by them. Protocols with no real congestion response at all rapidly destabilize Internet links by consuming all available resources. Digital Fountain originally targeted multicast media transfer. Congestion avoidance in multicast schemes is still an open research question. Does this protocol really scale?
More to the point, what's the benefit? There's obvious payoff for FEC in multicast, where backtraffic from ACKs and NAKs quickly overwhelms the group and kills performance. But in unicast-world, where we will all be living for the next decade, TCP and TCP-friendly forward-reliable protocols like SCTP already provide good performance.
Slow news week at EETimes I guess. Or Digital Fountain just hired a PR firm.
Re:All about Digital Fountain (Score:2, Informative)
Re:NOT NEWS. (Score:2, Informative)
Yes, their protocol uses TCP-friendly congestion control - read about it in their SIGCOMM paper [acm.org].
Not quite like Kazaa (Score:1, Informative)
I was a skeptic... (Score:5, Informative)
I was a skeptic, but now that I've read the Digital Fountain white papers (Meta-Content Technology White Paper [digitalfountain.com] and TCP White Paper [digitalfountain.com]), I'm a True Believer.
I don't pretend to understand all the details, but the technology is based on the Luby transform [google.com] which was invented by the company's founder. Essentially the content is broken into segments which are then randomly XORed to produce the metacontent. This provides redundancy since each segment of metacontent contains information from several of the original segments. Also the metacontent segments do not require sequential transmission.
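Here's a toy Python sketch of that "randomly XORed segments" idea: random subsets of the chunks, decoded by Gaussian elimination over GF(2). Real LT/Tornado codes use a carefully tuned degree distribution and a much faster belief-propagation decoder, and this has nothing to do with Digital Fountain's actual proprietary codec - but it shows the key property: enough symbols, in any order, recover the file, and the receiver just waits until decoding succeeds and then says stop.

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_symbol(chunks, rng):
    """One symbol: XOR of a random nonempty subset of the file's chunks.
    The subset is described by a bitmask sent alongside the payload."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(len(chunks))
    payload = bytes(len(chunks[0]))
    for i, chunk in enumerate(chunks):
        if mask >> i & 1:
            payload = xor_bytes(payload, chunk)
    return mask, payload

def decode(symbols, k):
    """Gaussian elimination over GF(2): succeeds once k linearly
    independent symbols are in hand, regardless of order or gaps."""
    pivots = {}  # highest set bit of a reduced row -> (mask, payload)
    for mask, payload in symbols:
        while mask:
            top = mask.bit_length() - 1
            if top not in pivots:
                pivots[top] = (mask, payload)
                break
            pmask, ppayload = pivots[top]
            mask, payload = mask ^ pmask, xor_bytes(payload, ppayload)
    if len(pivots) < k:
        return None                  # not enough yet: keep listening
    out = [None] * k
    for top in sorted(pivots):       # back-substitute, lowest pivot first
        mask, payload = pivots[top]
        for j in range(top):
            if mask >> j & 1:
                payload = xor_bytes(payload, out[j])
        out[top] = payload
    return b"".join(out)

# Sender: split a file into k chunks and pour out symbols endlessly.
data = b"0123456789abcdef0123456789abcdef"
k, size = 4, 8
chunks = [data[i * size:(i + 1) * size] for i in range(k)]

# Receiver: collect symbols until decoding succeeds, then say stop.
rng = random.Random(7)
received = []
result = None
while result is None:
    received.append(make_symbol(chunks, rng))
    result = decode(received, k)
assert result == data
```

Notice the receiver never asks for any specific packet - it only needs enough of them, which is why missed or duplicated symbols don't matter.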
Re:heh.. (Score:3, Informative)
Depending on how much noise goes undetected at your physical layer, you should expect a TCP session to pass through incorrect data about once in 10^9 to 10^12 bytes (that's the metric I use) - and if this is unacceptable then your application layer should check and/or correct the data itself, bearing in mind the end-to-end argument [reed.com] for system design.
Standardization Efforts (Score:1, Informative)
http://www.digitalfountain.com/technology/libra
Re:Tornado Codes (Score:2, Informative)
The folks at Digital Fountain are indeed using a highly-tuned (and proprietary) version of tornado codes. I also recommend the following papers if you're interested in what I think has the potential to be the greatest thing since TCP: tornado codes [bu.edu] + end-system multicast [cmu.edu]
<shameless plug>
I'm currently working on a research project [bu.edu] with John and others that integrates tornado codes and end-system multicast into a Freenet-like system. Best of all, it's GPL'd!
</shameless plug>
Bandwiz software of Digital Fountain appliance ?! (Score:1, Informative)
Swarm Cast (Score:1, Informative)
http://www.opencola.com/products/4_swarmcast/
Enjoy,l8r.
Re:Integer math -- even simpler than that! (Score:2, Informative)
Overall, 6 "ticks" of stereo music contain 192 bits of data, which are expanded to 588 bits written to the disc. The end result is that any error burst of 8871 or fewer bits of data on the disc can be corrected.
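The arithmetic behind those figures, assuming the standard CD frame layout (six stereo 16-bit samples per frame work out to 192 audio bits, and 588 channel bits are physically written per frame):

```python
# One CD frame: 6 stereo samples at 16 bits per channel.
samples, channels, bits = 6, 2, 16
audio_bits = samples * channels * bits   # 192 bits of actual audio
channel_bits = 588                       # bits physically written per frame

print(f"{audio_bits} audio bits -> {channel_bits} channel bits "
      f"({channel_bits / audio_bits:.1f}x overhead)")
```

That roughly 3x overhead buys the CIRC error-correction parity plus the EFM modulation and sync patterns, which is what makes the burst-error correction possible.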