
ARPANET Co-Founder Calls for Flow Management

An anonymous reader writes "Lawrence Roberts, co-founder of ARPANET and inventor of packet switching, today published an article in which he claims to solve the congestion control problem on the Internet. Roberts says, contrary to popular belief, the problem with congestion is the networks, not Transmission Control Protocol (TCP). Rather than overhaul TCP, he says, we need to deploy flow management, and selectively discard no more than one packet per TCP cycle. Flow management is the only alternative to peering into everyone's network, he says, and it's the only way to fairly distribute Internet capacity."
  • by gnutoo ( 1154137 ) on Friday April 04, 2008 @07:03PM (#22968900) Journal

    Stallman seems to agree [stallman.org]. This surprised me, but it seems that equipment can do this fairly.

  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Friday April 04, 2008 @07:07PM (#22968956) Homepage
    Oh, from his company of course.
    • by langelgjm ( 860756 ) on Friday April 04, 2008 @07:17PM (#22969028) Journal
      I was about to chastise you for being overly cynical, but then I visited the website [anagran.com] of the author:

      Anagran eliminates congestion in the world's busiest networks with Fast Flow Technology™, developed from the ground up to specifically eliminate and resolve congestion created by the proliferation of today's broadband applications such as video, P2P, voice, gaming, YouTube etc., anywhere in the network.
    • by interiot ( 50685 ) on Friday April 04, 2008 @07:42PM (#22969200) Homepage
      That's an ad hominem, and an unnecessary one at that. A proposal to change something as important as TCP is bound to fail unless it has significant technical merit. To make things simple, let's just assume that the proposer openly admits they were motivated by self-interest to make the proposal. And the result is: nothing changes. Heck, Al Gore could make this proposal, and it wouldn't change the fact that deeply technical proposals will be evaluated on a technical basis.
      • Re: (Score:3, Insightful)

        I was being somewhat sarcastic. In reality I believe that Roberts decided that flow routing is a good idea and then started Anagran to implement it, so he's not a total opportunist. But even on a technical level, I'm having trouble finding people who like flow routing. So we have one expert with an idea that most of the other experts reject. So I don't quite trust this idea on a technical level and I don't entirely trust the guy who's selling it either.
        • Re: (Score:2, Informative)

          by zolf13 ( 941799 )
          There is no flow routing in the Anagran solution - it is just a per-(TCP)-flow shaper/policer that you put at the ingress/egress of your network.
          Anyway, in 99% of cases you could achieve the same thing using Linux with SFQ queueing.
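          (As a rough illustration of what SFQ does - a toy Python sketch, not the kernel's implementation: flows are hashed into buckets and the buckets are served round-robin, so one greedy flow can't starve the rest.)

              from collections import deque

              class ToySFQ:
                  """Toy stochastic fair queueing. Real SFQ also perturbs the
                  hash periodically and bounds queue depth; this shows only
                  the core idea."""

                  def __init__(self, nbuckets=16):
                      self.buckets = [deque() for _ in range(nbuckets)]
                      self.turn = 0  # round-robin pointer

                  def enqueue(self, pkt):
                      # pkt["flow"] is the (src, dst, sport, dport, proto) 5-tuple
                      self.buckets[hash(pkt["flow"]) % len(self.buckets)].append(pkt)

                  def dequeue(self):
                      # Serve non-empty buckets in rotation: each hashed flow
                      # gets one packet per turn, regardless of its backlog.
                      n = len(self.buckets)
                      for i in range(n):
                          b = self.buckets[(self.turn + i) % n]
                          if b:
                              self.turn = (self.turn + i + 1) % n
                              return b.popleft()
                      return None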
  • by Anonymous Coward on Friday April 04, 2008 @07:09PM (#22968966)
    Larry Roberts was a co-founder of the ARPAnet, but he did NOT invent packet switching. Credit for that invention goes to Donald Davies of the National Physical Laboratory in the UK. His work was well credited by the ARPAnet designers.
    • Re: (Score:3, Informative)

      It is not quite that simple. There were multiple researchers working in that area at the same time, including Kleinrock and Baran. Kleinrock has a good claim based on a 1961 publication and his PhD dissertation. Baran clearly developed the concept in conjunction with his ideas about secure networks. Davies has a good case because he built the first working example.

      If you asked Larry Roberts, he would say that the honor belongs to Kleinrock.

      Personally, I don't think you can say that there is a sole inventor.
  • The problem is, some people will start throwing away 2 packets instead of 1 so that they can get more "throughput" on more limited hardware. Someone else will compete by tossing 3, and the arms race for data degradation will begin.

    Will this method really offset the retransmits it triggers? Only if not everyone does it, unless I'm missing something.

    What might work better is scaled drops: if a router and its immediate peers are nearing capacity, they start to drop a packet per cycle, automatically causing the routers at their perimeter to route around the problem, easing up on their traffic.

    It still seems like a system where an untrusted party could take advantage, however, dropping packets in this manner from non-preferred sources or to non-preferred destinations.
    • It still seems like a system where an untrusted party could take advantage, however, dropping packets in this manner from non-preferred sources or to non-preferred destinations.
      Sounds like something that Comcast might do...
    • by markov_chain ( 202465 ) on Friday April 04, 2008 @07:36PM (#22969170)
      Routing does not change based on traffic on that short a timescale; it changes if a link goes down, a policy agreement changes, an engineer changes some link allocation, etc. Doing traffic-sensitive routing is hard because of oscillations: in your example, would the perimeter nodes switch back to the now congestion-free router?
      • Re: (Score:3, Informative)

        by riflemann ( 190895 )

        Routing does not change based on traffic on that short a timescale; it changes if a link goes down, a policy agreement changes, an engineer changes some link allocation, etc. Doing traffic-sensitive routing is hard because of oscillations: in your example, would the perimeter nodes switch back to the now congestion-free router?

        Actually, many large scale networks *can* change routing based on congestion.

        The mechanism used is MPLS, using RSVP TE.

        Essentially, traffic is classified based on chosen parameters (protocol, port, etc.) and placed into logical tunnels, and each can reach the same destination via a different path. Every so often (depending on administrator configuration, often 15 minutes), the router looks at the utilisation of each tunnel on each interface, and can signal a different path for various tunnels in case of congestion.

      • by db32 ( 862117 )
        Only distance-vector routing protocols are limited to reacting when a link goes down. Link-state protocols can factor in a huge variety of metrics: link speed, bit error rate, arbitrary numbers (policy), or (drum roll...) congestion. Link-state routing protocols aren't exactly new or uncommon these days either.
    • by jd ( 1658 ) <imipak@@@yahoo...com> on Friday April 04, 2008 @08:05PM (#22969318) Homepage Journal
      OK, here's the theory. Two packets have travelled some distance along two distinct paths p1 and p2. If nothing is done, then at least one packet is guaranteed lost, and quite likely both will be. Thus you will need to retransmit both packets, so every node along p1 and p2 will see its total traffic over a given time period increase. When traffic levels are low enough, the extra traffic is absorbed into the flows and there's no impact beyond a slight fluctuation in latency.

      If the total traffic is above a certain threshold, but below a critical value, then a significant number of packets will be retransmitted. This causes the load to increase the next cycle around, causing further packet loss and further retransmits. There will be a time - starting with a fall in fresh network demand - in which observed network demand actually rises, due to the accumulation of errors.

      There will then be a third critical value, close to but still below the rated throughput of the switch or router. Provided no errors occur, the traffic will flow smoothly and packet loss should not occur. This isn't entirely unlike superheating - particularly on collapse. Only a handful of retransmits would be required - and they could occur anywhere in the system for which this is merely one hop of many - to cause the traffic to suddenly exceed maximum throughput. Since the retransmitted packets will add to the existing flows, and since the increase in traffic will increase superlinearly, that node is effectively dead. If there's a way to redirect the traffic for dead nodes, there is then a high risk of cascading errors, where the failure will ripple out through the network, taking out router/switch after router/switch.

      Does flow management work? Linux has a range of RED and BLUE implementations. Hold a contest at your local LUG or LAN gamers' meet to see who can set it up the best. Flow management also includes ECN. Have you switched that on yet? There are MTUs and window sizes to consider - the defaults work fine most times, but do you understand those controls and when they should be used?

      None of this stuff needs to be end-to-end unless it's endpoint-active (and only a handful of such protocols exist). It can all be done usefully anywhere in the network. I'll leave it as an exercise to the readership to identify any three specific methods and the specific places on the network they'd be useful on. Clues: Two, possibly all three, are described in detail in the Linux kernel help files. All of them have been covered by Slashdot. At least one is covered by the TCP/IP Drinking Game.

      • Re: (Score:2, Informative)

        Linux has a range of RED and BLUE implementations. Hold a contest at your local LUG or LAN gamers' meet to see who can set it up the best.

        RED is tricky to set up, but neither Blue nor PI require much tuning, if any. (I'm running Blue on all of my 2.6 Linux routers, and RED on all the 2.4 ones.)

        Flow management also includes ECN. Have you switched that on yet?

        Yes, I have. On all of my hosts and routers. It's a big win for interactive connections, but doesn't matter that much for bulk throughput.

        There's

        • by jd ( 1658 )
          There are MTUs and window sizes to consider - the defaults work fine most times, but do you understand those controls and when they should be used?

          Unless you're running some fancy link technology, you don't get to tune your MTU. If, like most of us, you're running Ethernet and WiFi only, you're stuck with 1500 bytes. As for window sizes, they're pretty much tuned automatically nowadays, at least if you're running a recent Linux or Windows Vista.

          You'll find the Web100 patch for Linux provides much better a

    • Re: (Score:2, Interesting)

      When I lived in Germany, if I drove the exact speed limit (no more, no less) and if conditions were largely normal, I would rarely encounter stop lights in transit through the city. Traveling above or below the speed limit, which is to say breaking the rules, yielded different results. I'm no advocate of Internet regulation. However. In an *ideal* environment, in which the regulatory body constantly revises its rules to match real world parameters AND fosters independent, third party groups to design fa
  • toss one packet?! (Score:2, Insightful)

    by ILuvRamen ( 1026668 )
    I'm not big on networking but if I'm sending data to someone and some "flow management" dumps one of the packets, won't my computer or modem just resend it? Seems like not such a good idea to me.
    • Re: (Score:2, Informative)

      by denormaleyes ( 36953 )
      Yes, your computer will resend it. But it does even more! TCP interprets dropped packets as congestion and will reduce its load on the network, generally by 50%. Dropping more than one packet per round trip just serves to confuse TCP a bit more but the net effect is the same: reduce the load by 50%.
    • It will get resent via a different path, hopefully one that is less congested.
    • Re:toss one packet?! (Score:5, Interesting)

      by shaitand ( 626655 ) on Friday April 04, 2008 @07:30PM (#22969114) Journal
      'I'm not big on networking but if I'm sending data to someone and some "flow management" dumps one of the packets, won't my computer or modem just resend it?'

      Yes and when the retransmission occurs the router may be able to handle your packet. The router won't be overloaded forever after all.

      The bigger part of the equation is that with TCP the more packets are dropped the slower you transmit packets. With this solution the heaviest transmissions would have more packets dropped and therefore be slowed down the most.

      I admit, I'd have to check the details of the protocol to see if this is open to abuse by those with a modified TCP stack. The problem is that the packets are dropped in a predictable manner, and a modified TCP stack could be designed to 'filter out' these predictable drops yet still back off when other packets are lost, and still provide a reliable connection.
      • Re: (Score:3, Informative)

        Yes, TCP congestion control relies on everyone following the protocol. If you hack your TCP stack to send each packet twice, not cut down the congestion window, etc., you can get better performance. In practice, anyone doing this on a scale large enough to be noticed (think Apple) would get yelled at by the ISPs. Big players wouldn't do it because if the majority of users tried to cheat their performance would get worse.

        IMHO hacks like this don't help enough to go through the trouble of installing, and i
        • Big players wouldn't do something like use a hacked TCP stack, but a P2P application might. Just as there are P2P applications that use a hacked version of their own protocol to thwart fairness efforts.

          20 million P2P users with hacked stacks in this scenario would probably result in poorer performance and greater congestion than we have now.
          • Ultimately, cheating doesn't help if enough people do it, whether it's by using hacked TCP stacks, or roll-your-own UDP protocols. The only real way to improve performance is to grow the network capacity.
      • ...from one's own TCP stack.

        I think this proposal is a bit reckless and naive at the same time. Not a good combination. Add to that he is trying to set a precedent for data degradation when none is needed.

        If networks want to reduce traffic in a civil manner, they will price their service similar to the way hosting providers do: offer a flat rate up to a set cap measured in Gb/month, with overages priced at a different rate. People would then pay for their excesses, allowing the ISP to spend more on adding capacity.
        • 'End-users like this arrangement for cellphone service.'

          I am not sure what world you live in, but I don't know many people who are happy with the pricing schemes of cell phones.

          All the same problems would be shared with the scheme you propose. First, you would be charged for incoming bandwidth. Second, the rates are never lower than unlimited service; people pay the higher rates because cell phones are more convenient. Third, you have to constantly track your usage and would have to refrain from using your connection
          • by Burz ( 138833 )

            The only people who like the cell phone schemes and would like this scheme are those who do not fully utilize their connection.
            You said it right there. Where Internet service is concerned, most people come nowhere near reaching the unspoken transfer limits on their 'unlimited' plans. Only a tiny minority of subscribers generate the vast majority of traffic.
        • Re: (Score:3, Insightful)

          by stephanruby ( 542433 )

          End-users like this arrangement for cellphone service. They would understand and appreciate such a thing coming to their Internet service, especially if it meant that most of them ended up paying $10 less on their monthly bill.

          You're making the assumption that ISPs/cell phone companies base their prices on their ongoing cost. That's not entirely correct. The price of a service is often a function of supply and demand. And an ISP/cell phone company will often manipulate the perception of that supply and demand

      • Re:toss one packet?! (Score:4, Informative)

        by fltsimbuff ( 606866 ) * on Saturday April 05, 2008 @12:47AM (#22970790) Homepage
        This actually looks like a form of something a lot of Cisco equipment already does to prevent "synchronization."

        Let's say you have 500 hosts sharing a "fat pipe." During peak times, the combined throughput used by TCP applications causes all available bandwidth on the link to be consumed. The result is that, at the instant all available bandwidth is consumed, packets get dropped suddenly and indiscriminately. This means that all 500 hosts lose a slew of packets.

        Per TCP specifications, when packets aren't acknowledged, all 500 hosts back off for a moment, and then retransmit at approximately the same time, causing another sudden burst in bandwidth usage, and more dropped packets.

        This problem compounds until all hosts are simply bursting packets, dropping packets, backing off, and repeating. The solution to this was a technique called "RED" (Random Early Detection).

        What this does is essentially detect when bandwidth is almost completely utilized, and then starts selectively and "fairly" dropping packets from the TCP streams. This causes the hosts to gradually back off, until bandwidth consumption is back in check. The result is that the whole "synchronization" issue is avoided, and the link is better utilized, as throughput is constant and reliable.

        There is a variation called WRED or "Weighted Random Early Detection", in which certain types of packets get cut before others. This would allow the router to avoid dropping VoIP traffic, while implementing RED on non-realtime streams instead.

        You can read more about this technique here: http://www.cisco.com/en/US/docs/ios/12_0/qos/configuration/guide/qcconavd.html [cisco.com]
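        (The shape of that RED curve, as a minimal Python sketch - the thresholds here are illustrative, and real routers feed it an exponentially weighted average of queue depth rather than the instantaneous value:)

            import random

            def red_should_drop(avg_qlen, min_th=10, max_th=30, max_p=0.1):
                # Below min_th never drop; above max_th always drop; in
                # between, drop with probability rising linearly toward
                # max_p. The randomness is what desynchronizes the hosts.
                if avg_qlen < min_th:
                    return False
                if avg_qlen >= max_th:
                    return True
                return random.random() < max_p * (avg_qlen - min_th) / (max_th - min_th)

            # WRED is the same curve with per-class parameters, e.g. (hypothetical):
            #   VoIP -> min_th=25, max_th=35, max_p=0.02  (dropped last)
            #   bulk -> min_th=5,  max_th=20, max_p=0.20  (dropped first)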

      • I admit, I'd have to check the details of the protocol to see if this is open to abuse by those with a modified TCP stack. The problem is that the packets are dropped in a predictable manner, and a modified TCP stack could be designed to 'filter out' these predictable drops yet still back off when other packets are lost, and still provide a reliable connection.

        You'd have to modify the stack at both ends of the connection, as each end expects defined TCP behaviour. If you are a downloader, and a packet you're downloading gets lost, the other end will need to retransmit, and it _will_ slow down if it has to retransmit, so there's no way around this.

        Of course, if you have access to both ends, then there's no reason at all for you to use a defined protocol (TCP); you could just blast data between them using any mechanism, and get around congestion control mechanisms.

    • Re: (Score:2, Informative)

      by dynchaw ( 1188279 )
      Yes, your computer will resend it, but due to the sliding window protocol http://en.wikipedia.org/wiki/Sliding_window [wikipedia.org] it will also reduce the speed at which it is sending. By dropping a packet, TCP will detect congestion and reduce the size of the window. For every dropped frame it will halve the window. Each time an entire window is sent without a drop, it increases the size of the window by one.
      So it quickly drops down to below the available bandwidth, then slowly grows the speed back up to it.
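      (That behaviour is easy to see in a toy model - illustrative Python only; real TCP adds slow start, fast retransmit, and byte-based accounting:)

          def aimd_step(cwnd, loss):
              # Additive increase, multiplicative decrease, per round trip.
              if loss:
                  return max(1, cwnd // 2)  # halve the window on a drop
              return cwnd + 1               # grow by one segment per clean RTT

          cwnd = 10
          for loss in [False]*5 + [True] + [False]*5:
              cwnd = aimd_step(cwnd, loss)
          # cwnd climbs 11..15, halves to 7 on the single drop, then climbs to 12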
    • It works, kind of, but it's murder on things like games, which need a good ping time.

      In short, I'd drop any ISP that did this.

  • Why not now? (Score:3, Interesting)

    by eldorel ( 828471 ) on Friday April 04, 2008 @07:14PM (#22969012)
    Can someone explain why this hasn't already been implemented?
    Seems like there would have to be a good reason, otherwise this would just make more sense, right?

    • I dunno but he said the flow management idea was just presented recently.
    • Re:Why not now? (Score:5, Informative)

      by Burdell ( 228580 ) on Friday April 04, 2008 @07:32PM (#22969140)
      Overhead. Right now, routers just track individual packets: receive a packet, look up the next-hop IP in the forwarding table (which might have 250,000 entries), and send it on its merry way. To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router. For an active router, there would be millions of active flows at any one time, so the overhead would be huge. This would be like a NAT or stateful firewall device that could do line-rate forwarding at gigabit, 10G, or 100G port speeds.

      You also have problems tracking flows; routes change, so while a router may be tracking an active flow, the flow may choose another path. The router has no way of knowing this, so it has to keep track of the flow until it times out (and the timeout would have to be more than just a few seconds).

      There are flow-based router architectures, but they are not generally used for ISP core/edge routers because there are too many ways they can break.
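      (Roughly the per-flow state such a box would have to carry - a sketch of the bookkeeping only, not any vendor's implementation; at millions of concurrent flows, the flow dictionary and the expiry scans are the overhead in question:)

          import time

          class FlowTable:
              """One entry per live TCP connection seen by the router."""

              def __init__(self, idle_timeout=60.0):
                  self.flows = {}  # 5-tuple -> last-seen timestamp
                  self.idle_timeout = idle_timeout

              def touch(self, five_tuple):
                  self.flows[five_tuple] = time.monotonic()

              def expire(self):
                  # Routes can shift a flow onto another path without telling
                  # this router, so stale entries can only be reclaimed by
                  # timing them out - and not after just a few seconds.
                  now = time.monotonic()
                  for k in [k for k, t in self.flows.items()
                            if now - t > self.idle_timeout]:
                      del self.flows[k]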
      • Re: (Score:3, Informative)

        Comment removed based on user account deletion
        • Re: (Score:3, Informative)

          by Burdell ( 228580 )
          Netflow/Jflow are for statistics, and on the larger routers, they are just sampled (not every packet/flow is monitored, just one out of every N packets). Originally netflow could be used as a packet switching method on Cisco, but it is just for statistics now.
      • Good post. Another issue is that many flows are way too short for flow-tracking to help.
      • Re: (Score:2, Interesting)

        To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router.

        Only if you want to be fair.

        In practice, however, you only want to be approximately fair: to ensure that the grandmother can get her e-mail through even though there's a bunch of people on her network running BitTorrent. So in practice it is enough to keep track of just enough flow history to make sure that you're fair enough often enough, and no more.

      • by ghjm ( 8918 ) on Saturday April 05, 2008 @12:43AM (#22970772) Homepage
        Yes, you are precisely right. As things stand now, there's no good reason why tier one networks need to buy vast amounts of new routing infrastructure - what they already own works just fine. With this new plan, they will all have to buy ten times more hardware, which will return us all to the glory days of the dot com era. Or at least will return Lawrence Roberts to the glory days of the dot com era.

        Why, did you think this plan had something to do with providing better service to end-users? When does that ever happen?

        -Graham
      • by Z00L00K ( 682162 )
        And don't forget that a flow doesn't necessarily have to take one path; it can actually take multiple paths. This means that directing a flow through a single path can be a bad idea. Allowing it to trickle through multiple paths is a better idea.

        Just compare it with how water can flow through a maze with multiple ways through the maze. The trick is to avoid bottlenecks.

        Another issue is that even though the path that seems to have the best performance from the view of the end-routers may actually be the worst s

      • To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router. For an active router, there would be millions of active flows at any one time, so the overhead would be huge. This would be like a NAT or stateful firewall device that could do line-rate forwarding at gigabit, 10G, or 100G port speeds.

        Bollocks.

        Any modern router worth its salt, _especially_ ones used in large networks, uses flow-based mechanisms for routing.

        Let's say a router has three possible, equal-cost paths to a destination network. Which path will it take? In the old days, it would pick one of those paths and stick with it. But that results in 2/3 of the available capacity going unused.

        In the case of many destination routes, it might seem to be even, but if one of the destination networks has a much higher traffic flow than the others, yo

        • by Burdell ( 228580 )
          Equal cost multipath is rarely used in large Internet routers because even that decreases the packets-per-second throughput of the router. Also, using a hash is not the same as tracking flows; an individual TCP connection flow will always take the same path (unless the routes change) because the hash is always the same (but the hash is not stored anywhere). For the "flow management" scheme to work, the router actually has to keep track of all active flows to know which ones are eligible for packet drops.
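          (The distinction in a toy Python sketch: the path choice is a pure function of the packet header, so nothing is stored or expired. The hash and names are illustrative, not what real hardware uses:)

              import hashlib

              def ecmp_next_hop(five_tuple, next_hops):
                  # Stateless: the same 5-tuple always hashes to the same
                  # index, so a TCP flow stays on one path, yet no per-flow
                  # entry is ever created.
                  digest = hashlib.md5(repr(five_tuple).encode()).digest()
                  return next_hops[digest[0] % len(next_hops)]

              hops = ["next-hop-A", "next-hop-B", "next-hop-C"]  # hypothetical
              flow = ("10.0.0.1", "192.0.2.9", 51234, 80, "tcp")
              assert ecmp_next_hop(flow, hops) == ecmp_next_hop(flow, hops)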
    • Re:Why not now? (Score:4, Insightful)

      by Anonymous Coward on Friday April 04, 2008 @08:27PM (#22969440)

      Can someone explain why this hasn't already been implemented?


      It has been implemented and abandoned already because it doesn't scale. Serious routers today use the concept of interface adjacency: for a given inbound packet there are only a few possible destinations, namely each of the interfaces on the router.


      When a route is installed into the FIB, you can recursively follow that route until you find the egress interface and the layer 2 address of the next hop - those will typically never change! So long as the router always keeps this adjacency information up to date, individual packets never need to have a route lookup performed - the destination prefix is checked in the adjacency table, the layer 2 header is rewritten, and the packet is queued for egress on the appropriate interface.


      This allows for substantially higher throughput (in packets per second) than other methods because the adjacency table can be cleverly stored in content-addressable memory that provides constant time answers. A prefix will be installed in a content-addressable memory circuit as a lookup key. The value associated with that key is a pointer into the adjacency table that holds the interface and layer 2 information for that prefix.


      By reconsidering the routing problem, and by using some smart circuits, the route lookup for a single packet has been reduced from O(k) to O(1), where k is the length of the longest prefix. For IPv4, that's up to 32 bits - so that means you do a single fetch and lookup instead of 32 or so comparisons for each packet. At a million packets per second, that's a huge difference.


      Traditional flow-based routing requires creating in-memory structures for each flow, collectively called the flow cache. Each packet requires an initial full route lookup, which builds the structures for that flow. Then, subsequent packets in that flow can be matched against the cache and switched directly to the egress interface. This operation is much closer to that of a contemporary firewall. The good thing about this method is that it gives you a lot of visibility into the traffic. The bad side is that it requires a very large amount of memory for all of these structures. When that memory is exhausted, you can't route any more flows!


      This comparison is a bit apples to oranges - the adjacency table described above is pretty much state-of-the-art for off the shelf gear, while the flow cache architecture is highly dated. But without some substantial advances in the ways flows are created, tracked, and expired, no flow router is going to reach the number of packets per second that are required for very large installations in the Internet.
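      (A toy contrast of the two lookups in Python, illustrative only - real gear answers the first in CAM hardware in one probe, and the second is the flow cache whose memory runs out:)

          import ipaddress

          FIB = {ipaddress.ip_network("10.0.0.0/8"): "if0",
                 ipaddress.ip_network("10.1.0.0/16"): "if1"}  # prefix -> egress

          def lpm(dst):
              # Longest-prefix match: try /32 down to /0, first hit wins.
              # This loop is the O(k) work that a CAM collapses to O(1).
              addr = ipaddress.ip_address(dst)
              for plen in range(32, -1, -1):
                  net = ipaddress.ip_network(f"{addr}/{plen}", strict=False)
                  if net in FIB:
                      return FIB[net]
              return None

          flow_cache = {}  # 5-tuple -> egress; one entry per live flow

          def forward(pkt):
              key = pkt["flow"]
              if key not in flow_cache:            # full lookup once per flow...
                  flow_cache[key] = lpm(pkt["dst"])
              return flow_cache[key]               # ...then O(1) per packet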

    • Re: (Score:3, Interesting)

      by skiingyac ( 262641 )
      I have to think tracking/throttling the rate per IP is already done by ISPs where bandwidth is a problem. Otherwise all the P2P sessions, as well as UDP traffic (which doesn't have congestion control and so doesn't respond to a loss by reducing its rate), would clobber most TCP sessions. Fixing TCP will just lead people to use UDP. Skype, worms, and P2P applications already exploit UDP for exactly this reason. So, cut straight to the chase, don't bother with making TCP fairer or whatever, just do p
      • by tuomoks ( 246421 )
        Yes, a very good point. I have designed network systems which, even though they still look like TCP to applications, actually use UDP - TCP was too inefficient over wireless routed through public networks. We just couldn't get a steady flow when the number of users went over 10K. There are other benefits such as message bundling, chaining, etc., but they could be done in TCP if the system didn't have to wait some arbitrary timeout or ack for a window. People seem to forget that TCP is a transport protocol
    • by Z00L00K ( 682162 )
      Actually, it is possible to do - it comes down to routing protocols.

      There are a few problems with doing good routing:

      1. The address allocations don't reflect the topology of the network; it has just grown organically. Therefore the routing tables are horrible.
      2. Routing protocols were designed for robustness, not for the most effective path. Robustness and effectiveness intersect, so we do have a relatively good network anyway. Some protocols are better than others too, and there are others that are p
    • by tuomoks ( 246421 )
      They did, but networks were still thought to be a (relatively) slow medium. There were many other reasons, probably because this behavior was already known on hardware channels, buses, pipelines, etc. but was not seen as a real problem in networks. TCP especially, because it is seen as a "streaming" protocol instead of using datagrams, messages, blocks, whatever, when in reality at the lower level it isn't that (except when you have the pleasure of using IP traffic over old comm. protocols, modems, UARTs, etc.)
      Back to subject, I also
  • by 3-State Bit ( 225583 ) on Friday April 04, 2008 @07:16PM (#22969016)
    So now we actually DO need to make the Internet more like a series of tubes??? brain asplode
  • This does not sound like a correct solution. Rather, emphasis should be placed on installing more links, both in parallel to existing links and "bypass" links that will shorten the number of hops from one given location to another. Whether based on copper, fiber, satellite, or other technology, the sheer number of separate paths and additional routing points will make a huge difference. Special emphasis should be placed on shortening the hop count between any two given areas.
  • by nweaver ( 113078 ) on Friday April 04, 2008 @07:33PM (#22969146) Homepage
    Any device sophisticated enough to do the flow fairness described can also do "user" fairness by averaging behavior across multiple flows from the same source, and the behavior of the source over time.

    This solves the P2P problem, and has a bunch of other advantages.

    Note, also, you only need to do this at the edges, as the core is pretty overprovisioned currently.
    • by shaitand ( 626655 ) on Friday April 04, 2008 @07:36PM (#22969174) Journal
      That brings up a question of entitlement. It suggests that there are users who should be punished.

      Those who engage in low-bandwidth activities are not entitled to more bandwidth, nor are those engaging in high-bandwidth activities entitled to less. Both are entitled to equal bandwidth and have the right to utilize it or not accordingly.
      • The problem is, without user fairness, the heavy users get MORE bandwidth. This is the multiflow problem.

        Your neighbor and you share a common bottleneck. You're websurfing; he's got 6 torrents downloading. He is going to have at least 24 active flows running full bore, and you will have 1 or 2 (which are bursty, even). Thanks to how TCP works, without traffic shaping you will receive 1 packet for every 24 he gets.

        User fairness needs to be implemented in the network to keep his traffic from walking all over yours.
        • Re: (Score:3, Insightful)

          by shaitand ( 626655 )
          The multi-flow problem is already solved with the level of management he has proposed. It equalizes the total bandwidth coming from one IP, not just a specific flow.

          'You're websurfing; he's got 6 torrents downloading. He is going to have at least 24 active flows running full bore, and you will have 1 or 2 (which are bursty, even). Thanks to how TCP works, without traffic shaping you will receive 1 packet for every 24 he gets.'

          As things stand now, yes. But under the scheme he is suggesting my flows would be slowed
    • From the article:

      Control each flow so that the total traffic to each IP address (home) is equally and fairly distributed no matter how many flows they use.

      This sounds like a pretty good idea until you start thinking about NAT'ed networks. Is it really fair to treat an entire office or dorm (or even a small country) the same as a single user who happens to have a unique IP from their ISP? And what about the transition to IPV6, when presumably IP addresses are no longer going to be scarce?
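      (What TFA's per-address fairness amounts to, as a toy Python sketch: one queue per subscriber IP, served round-robin, so 24 torrent flows from one home share a single turn with a neighbor's lone web flow. It also makes the NAT objection concrete - the whole dorm becomes one queue. Names are illustrative; this is not Anagran's product:)

          from collections import deque, defaultdict

          class PerIPFairQueue:
              def __init__(self):
                  self.queues = defaultdict(deque)  # subscriber IP -> packets
                  self.active = deque()             # round-robin of backlogged IPs

              def enqueue(self, pkt):
                  ip = pkt["dst_ip"]  # the (home) address, NAT'ed or not
                  if not self.queues[ip]:
                      self.active.append(ip)
                  self.queues[ip].append(pkt)

              def dequeue(self):
                  if not self.active:
                      return None
                  ip = self.active.popleft()
                  pkt = self.queues[ip].popleft()
                  if self.queues[ip]:        # still backlogged: back of the line
                      self.active.append(ip)
                  return pkt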

      • by TheLink ( 130905 )
        "Is it really fair to treat an entire office or dorm (or even a small country) the same as a single user who happens to have a unique IP from their ISP"

        The company I currently work for already does this (we provide "premium" aka $$$ internet services at various hotels and airports around the world).

        We do the traffic shaping at end points. At the end points we know who the users are, so we can give them different treatment.

        Possible scenario: the normal users get their fair share. The VIP users get their fair
        • You can do that at the edge ISP level because you know who your customers are. The core routers (which is what the article was about) don't have that kind of information available to them.
  • Privacy issues? (Score:3, Interesting)

    by jmac880n ( 659699 ) on Friday April 04, 2008 @07:34PM (#22969150)

    It seems to me that by moving knowledge of flows into the routers, you make it easier to tap into these flows from a centralized place - i.e., the router.

    Not that tapping connections can't be done now by spying on packets, of course, but it would make it much cheaper to implement. High-overhead packet matching, reassembly, and interpretation is replaced by a simple table lookup in the router.

    Donning my tinfoil hat, I can foresee a time when all routers 'must' implement this as a backdoor...

  • Weird solution (Score:4, Insightful)

    by Percy_Blakeney ( 542178 ) on Friday April 04, 2008 @07:38PM (#22969180) Homepage

    I couldn't help but laugh a bit at his solution. He talks about "flow management" being put into the core of the network to solve TCP's unfairness problem, but at the end of the article he says:

    Although the multi-flow unfairness that P2P uses remains, flow management gives us a simple solution to this: Control each flow so that the total traffic to each IP address (home) is equally and fairly distributed no matter how many flows they use.

    So, in other words, his solution to the "P2P problem" is just a fancy version of a token bucket.
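    (For reference, the generic token bucket being alluded to - a sketch with hypothetical rate/burst numbers; per-IP fairness then amounts to keeping one such bucket per subscriber address:)

        import time

        class TokenBucket:
            def __init__(self, rate_bytes_per_sec, burst_bytes):
                self.rate = rate_bytes_per_sec
                self.burst = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, nbytes):
                # Tokens accrue at the configured rate, capped at the burst
                # size; a packet passes only if it can pay its size in tokens.
                now = time.monotonic()
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return True
                return False  # over rate: drop (police) or delay (shape)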

    • Good luck doing that on a core router passing millions of flows, like another comment above said.

      Another point is that TFA seems to imply that fixing TCP, whatever that means, will somehow solve the network congestion problems. However, the only real way to fix congestion is to grow capacity, which seems to have worked thus far.
      • Re: (Score:2, Funny)

        by Bill Dog ( 726542 )

        However, the only real way to fix congestion is to grow capacity, which seems to have worked thus far.

        To use an obligatory car analogy, think of congestion on the roadways. What you're talking about is widening the highways. But that just encourages more packets to be placed onto the network. We need to stop letting people use the Internet for what they want, when they want (freedom is bad). What we really need is the equivalent of, you guessed it, public transportation. Here's how it would work: As we all

    • So, in other words, his solution to the "P2P problem" is just a fancy version of a token bucket.
      His (server side) solution is cheaper than requiring everyone, everywhere to upgrade/change their TCP stack.

      Hell, the idea of asking everyone to change their stack practically invites a "Your solution advocates a..." response. You'd either have to lock out people running the 'old' stack or... wait for it... throttle them server-side.
  • by Anonymous Coward
    When using Linux as a router there are already several ways to get per-flow fairness. The simplest and most obvious one is Stochastic Fair Queueing. The problem is that commercial routers can't do that in hardware.
  • I wonder if routing algorithms themselves aren't contributing to the problem.

    BGP and the intra-domain routing protocols assume there is at most one correct route from a given source address to a given destination address. That assumption could give rise to unnecessary congestion. For example, suppose the source wants to use bandwidth of 100 units and the destination is capable of keeping up. But between them there are two routers, in parallel, each of which can supply only 50 units. If there's exactly one
    • One reason why that might be problematic in practice is that, IIRC, TCP doesn't like getting packets out of order, and tends to respond to out-of-order packets similarly to dropped packets. If you have packets taking multiple paths, they are very likely to arrive out of order.

      One could mitigate this, I suppose, by making sure all packets that are part of the same flow take the same path.

    • by Animats ( 122034 )

      BGP and the intra-domain routing protocols assume there is at most one correct route from a given source address to a given destination address.

      Interestingly, the original ARPANET didn't make that assumption and would load-share across links. The ARPANET was a denser mesh than the Internet today, but with much lower bandwidth links, only 56Kb/s.

      Incidentally, the original MILNET, which was a purely military network using ARPANET nodes, was about six-connected; that is, each node had connections to about

  • TCP is mostly controlled by round trip time measurement and window size. Response to packet loss is a backup mechanism. If packet loss were the primary control mechanism, TCP would never work.

    It's much better to throttle back before packet loss occurs, since any lost packet has to be resent and uses up resources from the sender to the drop point. Since the main bandwidth bottleneck is at the last mile to the consumer, the drop point tends to be close to the destination.
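    (Throttling back before loss is what delay-based schemes in the spirit of TCP Vegas do - a toy per-RTT sketch, not the poster's exact mechanism; alpha/beta are illustrative thresholds in queued segments:)

        def vegas_like_step(cwnd, base_rtt, cur_rtt, alpha=2, beta=4):
            # Compare the throughput expected at the uncongested base RTT
            # with what the current RTT implies; the gap estimates how many
            # segments are parked in queues along the path.
            expected = cwnd / base_rtt
            actual = cwnd / cur_rtt
            queued = (expected - actual) * base_rtt
            if queued < alpha:
                return cwnd + 1  # path underused: grow
            if queued > beta:
                return cwnd - 1  # queues building: back off before any loss
            return cwnd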

    Don't trust the "Clean Slate I [stanford.edu]

  • Like all technologies, the effects depend on how the technology is used. While there are issues of unfairness with random drops, one can *imagine* ways that (from TFA) "what is really necessary is to detect just the flows that need to slow down" - however, it would seem networks could just as easily "detect just the flows that need to slow down" based on who is paying more for that flow (the sender or the receiver) - leading to even more "unfairness" (read: a non-neutral network a la net neutrality) than we
  • by m.dillon ( 147925 ) on Friday April 04, 2008 @09:33PM (#22969806) Homepage
    What I've noticed the most, particularly since I'm running about a dozen machines over a DSL line (just now switched from the T1 I had for many years), is that packet management depends heavily on how close the packet is to the end points. Packet management also very heavily depends on whether the size of your pipe near the end point is large relative to available cross country bandwidth, or small (like a DSL uplink).

    When the packet is close to an end point it is possible to use far more sophisticated queueing algorithms to make the flow do precisely what you want it to do. It's important for me because my outgoing bandwidth is pegged 24x7. Packet loss is not acceptable that close to the end point, so I don't use RED or any early-drop mechanism (and frankly they don't work that close to the end point anyway... they do not prevent bulk traffic from seriously interfering with interactive traffic), and it is equally unacceptable to allow a hundred packets to build up on the router where the pipe constricts down to T1/DSL speeds (which completely destroys interactive responsiveness).

    For my egress point I've found that running a fair share scheduler works wonderfully. My little cisco had that feature and it works particularly well in newer IOS's. With the DSL line I couldn't get things working smoothly with PF/ALTQ until I sat down and wrote an ALTQ module to implement the same sort of thing.

    Fair share scheduling basically associates the packets with 'connections' (in this case using PF's state table) and is thus able to identify those TCP connections with large backlogs and act on them appropriately. Being near the end point I don't have to drop any of the packets, but neither do I have to push out 50 tcp packets for a single connection and starve everything else that is going on. Fair share scheduling on its own isn't perfect, but when combined with PF/ALTQ and some prioritization rules to assign minimum bandwidths the result is quite good.

    Another feature that couples very nicely with queueing in the egress router is turning on (for FreeBSD or DragonFly) the net.inet.tcp.inflight_enable sysctl. This feature is designed to specifically reduce packet backlogs in routers (particularly at any nearby bandwidth constriction point). While it can result in some unfair bandwidth allocation it can also be tuned to not be quite so conservative and simply give the egress router a lot more runway in its packet queues to better manage multiple flows.

    The combination of the two is astoundingly good. Routers do much better when their packet queues aren't overstressed in the first place, only dropping packets in truly exceptional situations and not as a matter of course.

    The real problem lies in what to do at the CENTER of the network, when your TCP packet has gone over 5 hops and has another 5 to go. Has anyone tried tracking the hundreds of thousands (or more) of active streams that run through those routers? RED seems to be the only real solution at that point, but I really think dropping packets in general is something to be avoided at all costs, and I keep hoping something better will be developed for the center of the network.

    -Matt
  • Roberts has been harping on the same thing since 2000, probably earlier. Guess why he has built several failed companies around the concept. It seems that Slashdot forgets this every few months and posts another one of his rants.
  • by funkboy ( 71672 ) on Friday April 04, 2008 @11:43PM (#22970452) Homepage
    I have a lot of respect for Larry Roberts. The idea of only discarding a single packet per flow on a congested interface in order to slow things down is a good one.

    If WRED [wikipedia.org] didn't exist on every production-grade router made in the last 10+ years then there would certainly be a need for this technology. However, I'm not really sure how much benefit the "multi-flow fairness" concept would provide vs. just configuring WRED to discard only payload packets & not TCP control traffic. The tradeoff is the added complexity of the congestion avoidance mechanism having to be flow-aware, which increases cost, time to market, heat & power consumption, etc.

    Such a technique combined with microflow policing [ciscopress.com] would come closer to what he describes. In fact one could probably refer to the congestion avoidance technique described in the article as "adaptive microflow policing".

    A pretty standard config used with OpenBSD's PF firewall [openbsd.org] is to prioritize ACKs in both directions so that a line congested in one direction is still useful in the other.

    BTW, TCP has already been re-engineered; it's called SCTP [wikipedia.org]. If you've got a custom high-bandwidth point-to-point application where you have complete control over both ends (mostly research stuff at this point), check it out.

    A different approach to bandwidth management being developed by the major router vendors is the application-aware network. Imagine a router smart enough to read a field in an XML stream indicating that this particular flow requires 64kbps or should be dropped, that it should have 256kbps to work well, and that giving it more than 1mbps is not useful, and you start to get the idea. That's just the tip of the iceberg.

    Anyway, congestion control is useful & necessary, but "quality of service is no substitute for quantity of service"...
