
ARPANET Co-Founder Calls for Flow Management

An anonymous reader writes "Lawrence Roberts, co-founder of ARPANET and inventor of packet switching, today published an article in which he claims to solve the congestion control problem on the Internet. Roberts says, contrary to popular belief, the problem with congestion is the networks, not Transmission Control Protocol (TCP). Rather than overhaul TCP, he says, we need to deploy flow management, and selectively discard no more than one packet per TCP cycle. Flow management is the only alternative to peering into everyone's network, he says, and it's the only way to fairly distribute Internet capacity."
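
As a rough illustration of the policy the summary describes, the sketch below models a router that, under congestion, discards at most one packet per flow per "TCP cycle" (approximated here as one round-trip time) and forwards everything else. The flow key, cycle length, and congestion test are assumptions for illustration, not details from Roberts' article:

```python
import time
from collections import defaultdict

class FlowManager:
    """Toy model of the summarized policy: when the link is congested,
    discard at most one packet per flow per 'TCP cycle' (approximated
    here as one round-trip time)."""

    def __init__(self, assumed_rtt=0.2):
        self.assumed_rtt = assumed_rtt                       # stand-in for one TCP cycle
        self.last_drop = defaultdict(lambda: float("-inf"))  # flow key -> time of last drop

    def admit(self, flow_key, congested, now=None):
        """Return True to forward the packet, False to discard it."""
        if not congested:
            return True
        now = time.monotonic() if now is None else now
        # Discard only if this flow has not already lost a packet this cycle.
        if now - self.last_drop[flow_key] >= self.assumed_rtt:
            self.last_drop[flow_key] = now
            return False
        return True

# Under congestion a flow loses one packet, then is left alone for the
# rest of the cycle, so TCP halves its rate once rather than collapsing.
fm = FlowManager()
flow = ("10.0.0.1", 54321, "192.0.2.7", 80)
print([fm.admit(flow, congested=True, now=t) for t in (0.0, 0.05, 0.1, 0.25)])
# -> [False, True, True, False]
```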
This discussion has been archived. No new comments can be posted.
  • by gnutoo ( 1154137 ) on Friday April 04, 2008 @07:03PM (#22968900) Journal

    He seems to agree [stallman.org]. This surprised me but it seems that equipment can do this fairly.

  • The problem is, some people will start throwing away 2 packets instead of 1 so that they can get more "throughput" on more limited hardware. Someone else will compete by tossing 3, and the arms race for data degradation will begin.

    Will this method really offset the retransmits it triggers? Only if not everyone does it, unless I'm missing something.

    What might work better is scaled drops: if a router and its immediate peers are nearing capacity, they start to drop a packet per cycle, automatically causing the routers at their perimeter to route around the problem, easing up on their traffic.

    It still seems like a system where an untrusted party could take advantage, however, dropping packets in this manner from non-preferred sources or to non-preferred destinations.
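
The "scaled drops" idea in the comment above is essentially what Random Early Detection does: the fuller the queue, the more likely a packet is to be discarded, so senders back off before the buffer overflows. A rough sketch of such a policy; the occupancy thresholds and linear ramp are illustrative assumptions:

```python
import random

def drop_probability(queue_len, capacity, low=0.5, high=0.95):
    """Scale drop probability with queue occupancy: no drops below `low`
    occupancy, ramping linearly up to 100% at `high` occupancy."""
    occupancy = queue_len / capacity
    if occupancy < low:
        return 0.0
    if occupancy >= high:
        return 1.0
    return (occupancy - low) / (high - low)

def should_drop(queue_len, capacity):
    return random.random() < drop_probability(queue_len, capacity)

# Example: a 1000-packet queue starts shedding load once it is half full.
for qlen in (200, 500, 700, 900, 960):
    print(qlen, round(drop_probability(qlen, 1000), 2))
# 200 -> 0.0, 500 -> 0.0, 700 -> 0.44, 900 -> 0.89, 960 -> 1.0
```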
  • Why not now? (Score:3, Interesting)

    by eldorel ( 828471 ) on Friday April 04, 2008 @07:14PM (#22969012)
    Can someone explain why this hasn't already been implemented?
    Seems like there would have to be a good reason; otherwise this would already be the obvious thing to do, right?

  • Re:toss one packet?! (Score:5, Interesting)

    by shaitand ( 626655 ) on Friday April 04, 2008 @07:30PM (#22969114) Journal
    'I'm not big on networking but if I'm sending data to someone and some "flow management" dumps one of the packets, won't my computer or modem just resend it?'

    Yes, and by the time the retransmission occurs the router may be able to handle your packet. The router won't be overloaded forever, after all.

    The bigger part of the equation is that with TCP, the more packets are dropped, the slower you transmit. With this solution the heaviest transmissions would have the most packets dropped and therefore be slowed down the most.

    I admit I'd have to check the details of the protocol to see if this is open to abuse by those with a modified TCP stack. The concern is that the packets are dropped in a predictable manner, so a modified TCP stack could be designed to 'filter out' those deliberate drops (not backing off for them) while still backing off on other losses and still providing a reliable connection.
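
To make the parent's point concrete: a TCP sender's additive-increase/multiplicative-decrease loop means its average window, and hence its throughput, shrinks as its loss rate rises, so the flows that are dropped most are slowed most. A toy simulation of that behaviour (the loss rates and window sizes are made-up illustrations):

```python
import random

def aimd_avg_window(loss_rate, rounds=10_000, start_cwnd=10.0):
    """Toy AIMD model: grow the congestion window by one packet per round,
    halve it whenever a loss occurs. The average window is a rough proxy
    for throughput."""
    cwnd, total = start_cwnd, 0.0
    for _ in range(rounds):
        if random.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
        else:
            cwnd += 1.0                 # additive increase otherwise
        total += cwnd
    return total / rounds

# Flows that see more drops end up with much smaller average windows,
# i.e. the heaviest (most-dropped) transfers are slowed the most.
for p in (0.001, 0.01, 0.05):
    print(f"loss rate {p:.3f}: average window ~ {aimd_avg_window(p):.1f}")
```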
  • Reduce hop count. (Score:2, Interesting)

    by suck_burners_rice ( 1258684 ) on Friday April 04, 2008 @07:31PM (#22969128)
    This does not sound like a correct solution. Rather, emphasis should be placed on installing more links, both in parallel to existing links and as "bypass" links that shorten the number of hops from one given location to another. Whether based on copper, fiber, satellite, or other technology, the sheer number of separate paths and additional routing points will make a huge difference. Special emphasis should be placed on shortening the hop count between any two given areas.
  • Privacy issues? (Score:3, Interesting)

    by jmac880n ( 659699 ) on Friday April 04, 2008 @07:34PM (#22969150)

    It seems to me that by moving knowledge of flows into the routers, you make it easier to tap into these flows from a centralized place - i.e., the router.

    Not that tapping connections can't be done now by spying on packets, of course, but it would make it much cheaper to implement. High-overhead packet matching, reassembly, and interpretation are replaced by a simple table lookup in the router.

    Donning my tinfoil hat, I can foresee a time when all routers 'must' implement this as a backdoor...

  • by markov_chain ( 202465 ) on Friday April 04, 2008 @07:36PM (#22969170)
    Routing does not change based on traffic on that short a timescale; it changes if a link goes down, a policy agreement changes, an engineer changes some link allocation, etc. Doing traffic-sensitive routing is hard because of oscillations: in your example, would the perimeter nodes switch back to the now congestion-free router?
  • by shaitand ( 626655 ) on Friday April 04, 2008 @07:36PM (#22969174) Journal
    That brings up a question of entitlement. It suggests that there are users who should be punished.

    Those who engage in low-bandwidth activities are not entitled to more bandwidth, nor are those engaging in high-bandwidth activities entitled to less. Both are entitled to equal bandwidth and have the right to utilize it or not accordingly.
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday April 04, 2008 @08:05PM (#22969318) Homepage Journal
    Ok, here's the theory. Two packets have travelled some distance along two distinct paths p1 and p2. If nothing is done, then at least one packet is guaranteed lost, and quite likely both will be. Thus you will need to retransmit both packets, so every node along p1 and p2 sees its total traffic over a given time period increase. When traffic levels are low enough, the extra traffic is absorbed into the flows and there's no impact beyond a slight fluctuation in latency.

    If the total traffic is above some certain threshold, but below a critical value, then a significant number of packets will be retransmitted. This causes the load to increase the next cycle around, causing further packet loss and further retransmits. There will be a time - starting with a fall in fresh network demand - in which observed network demand actually rises, due to accumulation of errors.

    There will then be a third critical value, close to but still below the rated throughput of the switch or router. Provided no errors occur, the traffic will flow smoothly and packet loss should not occur. This isn't entirely unlike superheating - particularly on collapse. Only a handful of retransmits would be required - and they could occur anywhere in the system for which this is merely one hop of many - to cause the traffic to suddenly exceed maximum throughput. Since the retransmitted packets will add to the existing flows, and since the increase in traffic will increase superlinearly, that node is effectively dead. If there's a way to redirect the traffic for dead nodes, there is then a high risk of cascading errors, where the failure will ripple out through the network, taking out router/switch after router/switch.

    Does flow management work? Linux has a range of RED and BLUE implementations. Hold a contest at your local LUG or LAN gamers' meet to see who can set it up the best. Flow management also includes ECN. Have you switched that on yet? There are MTUs and window sizes to consider - the defaults work fine most of the time, but do you understand those controls and when they should be used?

    None of this stuff needs to be end-to-end unless it's endpoint-active (and only a handful of such protocols exist). It can all be done usefully anywhere in the network. I'll leave it as an exercise to the readership to identify any three specific methods and the specific places on the network where they'd be useful. Clues: two, possibly all three, are described in detail in the Linux kernel help files. All of them have been covered by Slashdot. At least one is covered by the TCP/IP Drinking Game.
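
Since ECN comes up: the mechanism is that a congested router marks ECN-capable packets instead of dropping them, and the sender backs off when the mark is echoed back in the ACKs (on Linux it is toggled with the net.ipv4.tcp_ecn sysctl). Here is a minimal sketch of that interaction; the classes, threshold, and window arithmetic are illustrative assumptions, not the kernel's implementation:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    flow: str
    ecn_capable: bool = True
    ce_mark: bool = False          # "Congestion Experienced"

class EcnRouter:
    """Mark ECN-capable packets instead of dropping them once the queue
    passes a threshold; fall back to dropping for non-ECN packets."""
    def __init__(self, mark_threshold=50):
        self.queue = []
        self.mark_threshold = mark_threshold

    def enqueue(self, pkt):
        if len(self.queue) >= self.mark_threshold:
            if pkt.ecn_capable:
                pkt.ce_mark = True  # signal congestion without loss
            else:
                return False        # legacy behaviour: drop
        self.queue.append(pkt)
        return True

class Sender:
    """On seeing an echoed CE mark, halve the window as if a packet had
    been lost -- but without the retransmission."""
    def __init__(self):
        self.cwnd = 64.0

    def on_ack(self, ce_echo):
        self.cwnd = max(1.0, self.cwnd / 2) if ce_echo else self.cwnd + 1.0

router, sender = EcnRouter(mark_threshold=2), Sender()
for _ in range(4):
    pkt = Packet(flow="a")
    router.enqueue(pkt)
    sender.on_ack(ce_echo=pkt.ce_mark)
print(sender.cwnd)   # the window shrank once the queue filled, with no drops
```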

  • by Anonymous Coward on Friday April 04, 2008 @08:26PM (#22969424)
    when a big hose will do the job better

    just build the capacity, then they will come
  • by Burz ( 138833 ) on Friday April 04, 2008 @08:38PM (#22969484) Homepage Journal
    ...from one's own TCP stack.

    I think this proposal is a bit reckless and naive at the same time. Not a good combination. Add to that he is trying to set a precedent for data degradation when none is needed.

    If networks want to reduce traffic in a civil manner, they will price their service similar to the way hosting providers do: Offer a flat rate up to a set cap measured in Gb/month, with overages priced at a different rate. People would then pay for their excesses, allowing the ISP to spend more on adding capacity.

    End-users like this arrangement for cellphone service. They would understand and appreciate such a thing coming to their Internet service, especially if it meant that most of them ended up paying $10 less on their monthly bill.

    I think we are not seeing a transition to a cap-and-overage pricing structure because the ISPs are more interested in becoming monopolies than in competing the way wireless carriers naturally do. Verizon is turning its back on many lower-middle-income areas while it tears out common-carrier POTS lines from the neighborhoods it does serve. Comcast winks, nods and accepts its role for the lower-end neighborhoods, degrading throughput and behaving as a paternalistic manipulator of data in the process. They are carving up the market, so don't expect rational and time-tested solutions that benefit the customer.
  • by gnutoo ( 1154137 ) on Friday April 04, 2008 @08:52PM (#22969552) Journal

    Everyone's got their favorite experts, and they are often a shortcut to lots of research you don't have time for. He's an independent expert who cares more about your rights than other things, happens to be an expert in OS design who's been working since the early 70s, and knows something about networking as well. Finally, he likes to answer email.

  • Re:Why not now? (Score:2, Interesting)

    by klapaucjusz ( 1167407 ) on Friday April 04, 2008 @09:03PM (#22969610) Homepage

    'To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router.'

    Only if you want to be fair.

    In practice, however, you only want to be approximately fair: to ensure that the grandmother can get her e-mail through even though there's a bunch of people on her network running Bittorrent. So in practice it is enough to keep track of just enough flow history to make sure that you're fair enough often enough, and no more.

    A number of techniques have been developed to do that with very little memory; my favourite happens to be Stochastic Fair Blue [wikipedia.org].
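
For anyone curious what Stochastic Fair Blue actually does with very little memory: flows are hashed into a few small arrays of bins (Bloom-filter style), each bin keeps a marking probability that rises when its backlog overflows and decays when the bin drains, and a packet is dropped with the minimum probability across its bins, which protects light flows that happen to share a bin with a heavy one. A much-simplified sketch; the level and bin counts, increments, and hash are illustrative assumptions rather than the published parameters:

```python
import random

class StochasticFairBlue:
    """Simplified SFB: L independent hash levels, N bins per level.
    Aggressive flows saturate their bins and see their drop probability
    rise; well-behaved flows sharing some bins escape via the min()."""

    def __init__(self, levels=2, bins=16, bin_capacity=20,
                 increment=0.02, decrement=0.002):
        self.levels, self.bins = levels, bins
        self.bin_capacity = bin_capacity
        self.increment, self.decrement = increment, decrement
        # Per-bin state: current backlog and marking probability p.
        self.backlog = [[0] * bins for _ in range(levels)]
        self.p = [[0.0] * bins for _ in range(levels)]

    def _bins_for(self, flow_key):
        # One hash per level; real SFB uses independent hash functions.
        return [hash((level, flow_key)) % self.bins for level in range(self.levels)]

    def on_packet(self, flow_key):
        """Return True if the packet should be dropped."""
        idx = self._bins_for(flow_key)
        for level, b in enumerate(idx):
            self.backlog[level][b] += 1
            if self.backlog[level][b] > self.bin_capacity:
                self.p[level][b] = min(1.0, self.p[level][b] + self.increment)
        p_min = min(self.p[level][b] for level, b in enumerate(idx))
        return random.random() < p_min

    def on_departure(self, flow_key):
        """Packet left the queue: shrink backlog, relax idle bins."""
        for level, b in enumerate(self._bins_for(flow_key)):
            self.backlog[level][b] = max(0, self.backlog[level][b] - 1)
            if self.backlog[level][b] == 0:
                self.p[level][b] = max(0.0, self.p[level][b] - self.decrement)

# Rough illustration (results vary with the hash): a flow that keeps its
# bins over capacity accumulates drop probability; lighter flows hashed
# into other bins mostly do not.
sfb = StochasticFairBlue()
print(sum(sfb.on_packet("torrent-peer") for _ in range(2000)), "drops for the heavy flow")
```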

  • Re:Why not now? (Score:3, Interesting)

    by skiingyac ( 262641 ) on Friday April 04, 2008 @09:07PM (#22969638)
    I have to think that tracking/throttling the rate per IP is already done by ISPs where bandwidth is a problem. Otherwise all the P2P sessions, as well as UDP traffic (which has no congestion control and so doesn't respond to a loss by reducing its rate), would clobber most TCP sessions. Fixing TCP will just lead people to use UDP. Skype, worms, and P2P applications already exploit UDP for exactly this reason. So cut straight to the chase: don't bother with making TCP fairer or whatever, just do per-IP rate control at the ISP level.

    It isn't clear why this needs to be done in the core routers at all instead of just at the endpoints, if the goal is to make P2P traffic more manageable. P2P using so many flows (and different routes) will probably just get around any core-based solution anyway.

    Also "loss unfairness" is already solved by ECN, but ECN (which is implemented) isn't really used as much as it should/could be, because some routers drop your packets if you use it (I believe either chase.com or chaseonline.com still does this?), and nobody really cares. Why add something else to the client stacks that nobody will end up using either? That is basically why this hasn't been implemented.
  • by Anonymous Coward on Friday April 04, 2008 @09:29PM (#22969784)
    In this case he's not necessarily wrong, but certainly misleading. What he describes ("lower priority to large data transfers [...] as long as that is done fairly for all large data transfers") can't be done. It would require some form of trustworthy tagging (and while we're dreaming, let's have world peace). Otherwise a clogged router in the middle would throttle the traffic of a business, which uses one external IP address (a web proxy, for example) on an expensive and fast uplink, harder than it would throttle the traffic of a measly DSL line.

    The information "large data transfer" is not available, plain and simple. Any attempt to use "number of TCP streams", "traffic per IP", "packet size" or other metrics as a substitute can easily be worked around, and ultimately it doesn't solve the problem, which is that users regularly can't use the network for the intended application because some higher-ups at the ISPs gave themselves a bonus or invested in technology for managing congestion instead of building a faster network.

    The edge routers could perform some sort of fair queuing, but the edge routers are at a point where there shouldn't be a bandwidth shortage anyway: all customers pay for a defined uplink, and if network congestion is a regular problem at that point, then the ISP oversold its capacity.
  • by m.dillon ( 147925 ) on Friday April 04, 2008 @09:33PM (#22969806) Homepage
    What I've noticed the most, particularly since I'm running about a dozen machines over a DSL line (having just now switched from the T1 I had for many years), is that packet management depends heavily on how close the packet is to the end points. It also depends very heavily on whether the size of your pipe near the end point is large relative to available cross-country bandwidth, or small (like a DSL uplink).

    When the packet is close to an end point it is possible to use far more sophisticated queueing algorithms to make the flow do precisely what you want it to do. That matters to me because my outgoing bandwidth is pegged 24x7. Packet loss is not acceptable that close to the end point, so I don't use RED or any early-drop mechanism (and frankly they don't work that close to the end point anyway... they do not prevent bulk traffic from seriously interfering with interactive traffic), and it is equally unacceptable to allow a hundred packets to build up on the router where the pipe constricts down to T1/DSL speeds (which completely destroys interactive responsiveness).

    For my egress point I've found that running a fair share scheduler works wonderfully. My little Cisco had that feature and it works particularly well in newer IOS releases. With the DSL line I couldn't get things working smoothly with PF/ALTQ until I sat down and wrote an ALTQ module to implement the same sort of thing.

    Fair share scheduling basically associates packets with 'connections' (in this case using PF's state table) and is thus able to identify those TCP connections with large backlogs and act on them appropriately. Being near the end point I don't have to drop any of the packets, but neither do I have to push out 50 TCP packets for a single connection and starve everything else that is going on. Fair share scheduling on its own isn't perfect, but when combined with PF/ALTQ and some prioritization rules to assign minimum bandwidths the result is quite good.

    Another feature that couples very nicely with queueing in the egress router is turning on (for FreeBSD or DragonFly) the net.inet.tcp.inflight_enable sysctl. This feature is designed to specifically reduce packet backlogs in routers (particularly at any nearby bandwidth constriction point). While it can result in some unfair bandwidth allocation it can also be tuned to not be quite so conservative and simply give the egress router a lot more runway in its packet queues to better manage multiple flows.

    The combination of the two is astoundingly good. Routers do much better when their packet queues aren't overstressed in the first place, dropping packets only in truly exceptional situations and not as a matter of course.

    The real problem lies in what to do at the CENTER of the network, when your TCP packet has gone over 5 hops and has another 5 to go. Has anyone tried tracking the hundreds of thousands (or more) of active streams that run through those routers? RED seems to be the only real solution at that point, but I really think dropping packets in general is something to be avoided at all costs, and I keep hoping something better will be developed for the center of the network.

    -Matt
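
For anyone who wants to experiment with the fair-share idea without a Cisco or an ALTQ module, the usual textbook building block is deficit round robin: one queue per connection and a per-round byte quantum, so a connection with a 50-packet backlog cannot starve an interactive session. A small sketch under those assumptions (the quantum and flow keys are arbitrary, and this is not Matt's ALTQ code):

```python
from collections import defaultdict, deque

class DeficitRoundRobin:
    """One FIFO per flow; each round a flow may send up to `quantum`
    bytes plus any unused deficit carried over, so bulk flows take
    turns instead of monopolizing the link."""

    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = defaultdict(deque)     # flow key -> queued packet sizes
        self.deficit = defaultdict(int)

    def enqueue(self, flow, size):
        self.queues[flow].append(size)

    def dequeue_round(self):
        """Serve every backlogged flow once; return (flow, size) pairs sent."""
        sent = []
        for flow, q in list(self.queues.items()):
            if not q:
                continue
            self.deficit[flow] += self.quantum
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
            if not q:
                self.deficit[flow] = 0       # no hoarding credit while idle
        return sent

# A bulk transfer with 50 queued packets and an interactive session with
# two small packets share the link: the small packets still get out promptly.
drr = DeficitRoundRobin(quantum=1500)
for _ in range(50):
    drr.enqueue("bulk", 1500)
for _ in range(2):
    drr.enqueue("ssh", 100)
print(drr.dequeue_round())   # one bulk packet plus both small ssh packets
```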
  • by Somecallmechief ( 1103905 ) <crf AT the-shades DOT net> on Friday April 04, 2008 @11:53PM (#22970502) Homepage Journal
    When I lived in Germany, if I drove the exact speed limit (no more, no less) and if conditions were largely normal, I would rarely encounter stop lights in transit through the city. Traveling above or below the speed limit, which is to say breaking the rules, yielded different results.

    I'm no advocate of Internet regulation. However. In an *ideal* environment, in which the regulatory body constantly revises its rules to match real-world parameters AND fosters independent, third-party groups to design faster, safer and more reliable methods of transit--traffic CAN flow *more* efficiently. Traffic is traffic, whether by bit or car; and for a country to achieve safe and reliable speeds of 300 km/h on its autobahn, something must be right. Of course, accidents happen (inevitably). The rules (implicit or explicit) are broken by select individuals with radar detectors and jammers, drunks, or the careless. The existence of the rule has no causal relationship to its enforcement.

    *If*, however, a regulatory system were divinely crafted; and *if* transport methodology were improved... Well, that's a big if. At least someone, somewhere, somehow is taking a stab at the tofu. I applaud the sentiment, if not the practicality.
