ARPANET Co-Founder Calls for Flow Management 163
An anonymous reader writes "Lawrence Roberts, co-founder of ARPANET and inventor of packet switching, today published an article in which he claims to solve the congestion control problem on the Internet. Roberts says, contrary to popular belief, the problem with congestion is the networks, not Transmission Control Protocol (TCP). Rather than overhaul TCP, he says, we need to deploy flow management, and selectively discard no more than one packet per TCP cycle. Flow management is the only alternative to peering into everyone's network, he says, and it's the only way to fairly distribute Internet capacity."
inventor of packet switching (Score:5, Informative)
Re:And where can I buy this flow management? (Score:5, Informative)
Re:toss one packet?! (Score:2, Informative)
Should be able to get it anywhere. (Score:3, Informative)
I have been told that the ability to do this has been around since the 1970s. Don't all equipment makers have some version?
Re:Why not now? (Score:5, Informative)
You also have problems tracking flows; routes change, so while a router may be tracking an active flow, the flow may choose another path. The router has no way of knowing this, so it has to keep track of the flow until it times out (and the timeout would have to be more than just a few seconds).
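To make the cost concrete, here's a toy sketch (my own illustration, not from any real router) of the per-flow state a router would have to carry, including the idle timeout the parent mentions. The 5-tuple key and 30-second timeout are assumptions for the example:

```python
import time

# Hypothetical flow table: a router must keep each entry alive until an
# idle timeout expires, even if the flow has silently moved to another
# path. The timeout has to be generous, not "just a few seconds".
FLOW_TIMEOUT = 30.0  # seconds (illustrative value)

class FlowTable:
    def __init__(self, timeout=FLOW_TIMEOUT):
        self.timeout = timeout
        self.flows = {}  # (src, dst, sport, dport, proto) -> last-seen time

    def touch(self, key, now=None):
        # record that a packet for this flow was just seen
        self.flows[key] = time.monotonic() if now is None else now

    def expire(self, now=None):
        # evict flows idle longer than the timeout; until then they
        # consume memory even if the route has changed underneath them
        now = time.monotonic() if now is None else now
        stale = [k for k, seen in self.flows.items() if now - seen > self.timeout]
        for k in stale:
            del self.flows[k]
        return len(stale)

table = FlowTable()
table.touch(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), now=0.0)
table.touch(("10.0.0.3", "10.0.0.2", 5678, 80, "tcp"), now=25.0)
# At t=40s the first flow has been idle 40s (> timeout) and is evicted;
# the second is only 15s idle and must stay in memory.
evicted = table.expire(now=40.0)
```

Multiply that by millions of concurrent flows on a core router and the memory and lookup cost is why these architectures stay out of ISP cores.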
There are flow-based router architectures, but they are not generally used for ISP core/edge routers because there are too many ways they can break.
Re:And where can I buy this flow management? (Score:4, Informative)
Re:toss one packet?! (Score:3, Informative)
IMHO hacks like this don't help enough to be worth the trouble of installing, and if they do help, they likely need both endpoints to cooperate, in which case you might as well use a custom UDP protocol.
Linux already has per-flow fairness (Score:2, Informative)
Re:Why not now? (Score:3, Informative)
Re:toss one packet?! (Score:2, Informative)
So it quickly drops below the available bandwidth, then slowly grows the speed back up to it.
This normally happens auto-magically between the two ends of a TCP connection, growing the connection to the capacity of the smallest link in the chain as a result of random drops or FIFO queues. By tracking each flow and its window management, the window size, and thus the speed of the flow, can be controlled by any hop in the chain.
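The grow-slowly/back-off-on-drop behavior is TCP's additive-increase/multiplicative-decrease (AIMD). A rough sketch of the sawtooth it produces (capacity and window units are illustrative, not from the comment):

```python
# Toy AIMD model: the congestion window grows by one segment per RTT
# until a drop is detected, then halves (multiplicative decrease).
def aimd(capacity_segments, rtts):
    window = 1.0
    history = []
    for _ in range(rtts):
        if window > capacity_segments:     # queue overflows: a packet is dropped
            window = max(1.0, window / 2)  # multiplicative decrease
        else:
            window += 1.0                  # additive increase per RTT
        history.append(window)
    return history

# Window oscillates in a sawtooth around the bottleneck capacity.
trace = aimd(capacity_segments=10, rtts=30)
```

Any hop that drops (or delays) one packet in a flow triggers exactly this halving at the sender, which is the lever flow management pulls.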
Re:That's all fine... (Score:2, Informative)
RED is tricky to set up, but neither Blue nor PI require much tuning, if any. (I'm running Blue on all of my 2.6 Linux routers, and RED on all the 2.4 ones.)
Yes, I have. On all of my hosts and routers. It's a big win for interactive connections, but doesn't matter that much for bulk throughput.
Unless you're running some fancy link technology, you don't get to tune your MTU. If, like most of us, you're running Ethernet and WiFi only, you're stuck with 1500 bytes.
As for window sizes, they're pretty much tuned automatically nowadays, at least if you're running a recent Linux or Windows Vista.
Re:And where can I buy this flow management? (Score:2, Informative)
Anyway, in 99% of cases you could achieve the same thing using Linux with SFQ queueing.
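For anyone unfamiliar with it, SFQ (Stochastic Fairness Queueing) hashes each flow into one of a fixed number of buckets and dequeues the buckets round-robin, so one greedy flow can't starve the rest. A toy model of the idea (real SFQ also perturbs the hash periodically, which this sketch omits; the bucket count is arbitrary):

```python
from collections import deque

# Simplified SFQ: per-bucket FIFO queues served round-robin.
class SFQ:
    def __init__(self, buckets=8):
        self.queues = [deque() for _ in range(buckets)]
        self.next = 0

    def enqueue(self, flow_id, packet):
        # hash the flow identifier into a bucket
        self.queues[hash(flow_id) % len(self.queues)].append(packet)

    def dequeue(self):
        # round-robin over buckets: each active flow gets a turn
        for _ in range(len(self.queues)):
            q = self.queues[self.next]
            self.next = (self.next + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None

sfq = SFQ()
sfq.enqueue(0, "a1")  # flow 0 sends two packets
sfq.enqueue(0, "a2")
sfq.enqueue(1, "b1")  # flow 1 sends one packet
# flow 1's packet goes out before flow 0's second packet
first_three = [sfq.dequeue() for _ in range(3)]
```

On a real box this is one line of tc configuration attached to the egress interface, which is a lot less exotic than "flow management" appliances.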
Re:inventor of packet switching (Score:3, Informative)
If you ask Larry Roberts, he would say that the honor belongs to Kleinrock.
Personally, I don't think you can say there is a sole inventor, because several people contributed to the seminal idea.
Re:toss one packet?! (Score:4, Informative)
Let's say you have 500 hosts sharing a "fat pipe." If, during peak times, the combined throughput of the TCP applications consumes all available bandwidth on the link, then at the instant the link fills, packets get dropped suddenly and indiscriminately. This means that all 500 hosts lose a slew of packets at once.
Per the TCP specification, when packets aren't acknowledged, all 500 hosts back off for a moment and then retransmit at approximately the same time, causing another sudden burst in bandwidth usage and more dropped packets.
This problem compounds until all hosts are doing nothing but bursting packets, dropping packets, backing off, and repeating (a failure mode known as global synchronization). The solution to this was a technique called RED (Random Early Detection).
What this does is essentially detect when bandwidth is almost completely utilized, and then starts selectively and "fairly" dropping packets from the TCP streams. This causes the hosts to gradually back off, until bandwidth consumption is back in check. The result is that the whole "synchronization" issue is avoided, and the link is better utilized, as throughput is constant and reliable.
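The core of RED is just a drop probability that ramps up as the average queue length moves between a minimum and a maximum threshold, so different flows get nudged to back off at different times instead of all at once. A sketch, with illustrative threshold values (real deployments tune these per link):

```python
import random

# RED drop probability: zero below min_th, ramps linearly up to max_p
# as the average queue length approaches max_th, then drops everything.
def red_drop_probability(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    if avg_qlen < min_th:
        return 0.0   # queue is short: never drop
    if avg_qlen >= max_th:
        return 1.0   # queue is full: always drop (tail-drop regime)
    # linear ramp between the thresholds
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def should_drop(avg_qlen, rng=random.random):
    # each arriving packet is dropped with the current probability,
    # so drops land on different flows at different moments
    return rng() < red_drop_probability(avg_qlen)
```

Because the randomness spreads drops across flows over time, the senders desynchronize and the sawtooth of each flow averages out into steady link utilization.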
There is a variation called WRED or "Weighted Random Early Detection", in which certain types of packets get cut before others. This would allow the router to avoid dropping VoIP traffic, while implementing RED on non-realtime streams instead.
You can read more about this technique here: http://www.cisco.com/en/US/docs/ios/12_0/qos/configuration/guide/qcconavd.html [cisco.com]
Re:That's all fine... (Score:3, Informative)
The mechanism used is MPLS, using RSVP TE.
Essentially, traffic is classified based on chosen parameters (protocol, port, etc) and placed into logical tunnels, and each can reach the same destination via a different path. Every so often (depending on administrator configuration, often 15 minutes), the router looks at utilisation of each tunnel on each interface, and can signal a different path for various tunnels in case of congestion.
With suitably fine grained tunnels and hysteresis configured, oscillations can be kept at bay.
For a large network with no defined central backbone, it can result in very even distribution of traffic, even when source and destination networks are the same.
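The periodic re-optimisation step can be sketched in a few lines. This is my own toy model of the decision, not router code: move a tunnel to a less-utilised path only when the gain beats a hysteresis margin, so tunnels don't flap between paths. The 15% margin is an assumed example value:

```python
# Decide whether to re-signal a TE tunnel onto a different path.
# utilisation maps each candidate path to its current load fraction.
def pick_path(current_path, utilisation, hysteresis=0.15):
    best = min(utilisation, key=utilisation.get)
    if best == current_path:
        return current_path
    # only re-signal if the alternative is clearly better; the margin
    # is what keeps tunnels from oscillating between near-equal paths
    if utilisation[current_path] - utilisation[best] > hysteresis:
        return best
    return current_path

# A 40-point gap justifies a move; a 10-point gap does not.
moved = pick_path("path_a", {"path_a": 0.9, "path_b": 0.5})
stayed = pick_path("path_a", {"path_a": 0.6, "path_b": 0.5})
```

Run that evaluation per tunnel every interval and, with fine-grained tunnels, load spreads out without any flow ever oscillating.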