New Router Manages Flows, Not Packets

An anonymous reader writes "A new router, designed by one of the creators of ARPANET, manages flows of packets instead of only individual packets. The router recognizes packets that follow the first packet of a flow and sends them along faster than if it had to route each one individually. When overloaded, the router can also make better choices about which packets to drop. 'Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems. More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems. First, routers discard packets somewhat randomly, causing some transmissions to stall. Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay). These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.'"
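The parenthetical claim that TCP throughput is inversely proportional to delay can be made concrete with the well-known Mathis et al. approximation. The sketch below uses made-up MSS, RTT, and loss-rate values purely for illustration; none of the numbers come from the article.

```python
# Illustrating "TCP throughput is inversely proportional to delay" with the
# well-known Mathis et al. approximation:
#   throughput ~ (MSS / RTT) * (C / sqrt(loss_rate)), with C ~ 1.22.
# MSS, RTT, and loss-rate values below are made-up illustrative numbers.
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

for rtt_ms in (20, 40, 80, 160):
    bps = mathis_throughput_bps(mss_bytes=1460, rtt_s=rtt_ms / 1000, loss_rate=0.001)
    print(f"RTT {rtt_ms:>3} ms -> ~{bps / 1e6:5.1f} Mbit/s")
# Doubling the RTT (e.g. via queueing delay in a congested router) halves throughput.
```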
  • by girlintraining ( 1395911 ) on Friday July 10, 2009 @03:35PM (#28653897)

    So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this? Because it sounds to me like encrypted packets, UDP, and peer-to-peer (three things that certain well-funded groups have been trying to kill or restrict for a while) would be the worst affected here.

  • Some thoughts (Score:4, Interesting)

    by intx13 ( 808988 ) on Friday July 10, 2009 @04:06PM (#28654263) Homepage

    First, routers discard packets somewhat randomly, causing some transmissions to stall.

    While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process, it is unfair to call it "random". Based on a model of the queue within the router and estimates of the input parameters, the probability of a packet being discarded can be calculated. In fact, that's how routers are designed: you pick a bunch of different situations, decide how often you can afford to drop packets, and then design a queueing system to meet those requirements. Queueing theory is a well-established field (the de facto standard textbook was written in 1970!) and networking is one of its biggest applications.
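    As a minimal illustration of that point, the drop ("blocking") probability of a textbook M/M/1/K queue is a closed-form function of the offered load and buffer size. The buffer size and loads below are illustrative assumptions, not figures for any real router.

```python
# Drop ("blocking") probability of a textbook M/M/1/K queue: computable from a
# model, not arbitrary. Buffer size and offered loads are illustrative values;
# real router queues and traffic are considerably more complicated.

def mm1k_drop_probability(rho, k):
    """Blocking probability for offered load rho and a buffer of K packets."""
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1 - rho) * rho ** k / (1 - rho ** (k + 1))

for rho in (0.5, 0.9, 0.99, 1.1):
    print(f"offered load {rho:4.2f}, 64-packet buffer -> "
          f"drop probability {mm1k_drop_probability(rho, 64):.2e}")
```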

    Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays

    You wouldn't expect uniform delays. A queueing system with a uniform distribution on the number of customers in the queue would be a very strange system indeed. Those sorts of systems are usually related to renewal processes and don't often show up in networking applications. That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.

    "Substantial" is the key word here. Effectively the concept of managing "flows" just means that the router is caching destinations based on fields like source port, source IP address, etc. By using the cache rather than recomputing the destination the latencies can be reduced, thus reducing the number of times you need to use the queue. In queueing theory terms you are decreasing mean service time to increase total service rate. Note however that this can backfire: if you increase the variance in the service time distribution too much (some delays will be much higher when you eventually do need to use the queue) you will actually decrease performance. Of course assumedly they've done all of this work. In essence "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.

    Personally, I'm not sure how much of a benefit this can provide. Does it work with NAT? How often do you drop packets based on incorrect routing compared to those you would have dropped if you had put them in the queue? If this were a truly novel queueing-theory application, I would have expected to see it in an IEEE journal, not Spectrum.

    And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical. Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gunna cut it in my book.

  • by RichiH ( 749257 ) on Friday July 10, 2009 @04:08PM (#28654299) Homepage

    > TCP's congestion control algorithm, which causes congestion and then backs off, is the real culprit here

    In a dumb network with intelligence on the edges, you can:

    1) cause congestion and then back off (TCP)
    2) hammer away at whatever rate you think you need (UDP)
    3) use a pre-set limit (which might be too high as well, so no one does that on public networks)

    Stateful packet switching at Internet scale is simply not feasible, fixed-path routing is not desirable for the reason you stated above, and I would not want anyone to inspect my traffic _by design_ anyway.

    TCP may not be perfect, but I fail to see an alternative.

  • by elbuddha ( 148737 ) on Friday July 10, 2009 @04:09PM (#28654315)

    Yippee.

    Cisco (and probably several others) has done this by default for many, many moons now. By way of practical demonstration, notice that equal-cost routes load balance per flow, not per packet. What it allows is for subsequent routing decisions to be offloaded from the route processor down to the ASICs at the line-card level. And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.
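    A toy illustration of per-flow (rather than per-packet) balancing across equal-cost next hops, in the spirit of what the parent describes; this is not Cisco's actual hashing scheme, just a sketch of the idea.

```python
# Per-flow load balancing across equal-cost next hops: hash the flow identifier
# so every packet of a flow takes the same path (no reordering). Illustrative
# only; not Cisco's actual CEF hashing algorithm.
import hashlib

def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["ge-0/0/1", "ge-0/0/2"]
# All packets of this flow map to the same hop; other flows may take the other path.
print(pick_next_hop("10.0.0.2", "192.0.2.9", "tcp", 51514, 443, hops))
```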

  • by John.P.Jones ( 601028 ) on Friday July 10, 2009 @04:11PM (#28654335)

    TCP's congestion control backs off exponentially because it has to. There is a stability property: if the network is undergoing increasing congestion (this is how TCP learns the available throughput and utilizes it) and the senders do not back off exponentially, then their backing off will not be fast enough to relieve the congestion and stabilize the system. If this router is selectively stalling individual flows, I do not believe that will be fast enough to deal with growing congestion from many greedy clients.
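    For concreteness, here is a toy additive-increase/multiplicative-decrease loop, the window behavior behind the backoff being described; the constants and loss pattern are made up, and real TCP adds slow start, timeouts, and more.

```python
# Toy additive-increase/multiplicative-decrease (AIMD) window evolution.
# Constants and the loss pattern are made up for illustration; real TCP also
# has slow start, retransmission timeouts with exponential backoff, etc.
def aimd(rounds, loss_rounds, cwnd=1.0, increase=1.0, decrease=0.5):
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd * decrease)   # multiplicative decrease on loss
        else:
            cwnd += increase                   # additive increase otherwise
        history.append(cwnd)
    return history

print(aimd(rounds=12, loss_rounds={5, 9}))
# The sharp cuts are what drain congestion quickly when many senders share a link.
```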

    Basically, eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively, initiating TCP backoffs from randomly selected flows and resulting in the current behavior. So, of course, in that gray area between the first dropped flow and when we need to revert to normal behavior, we may see improved network performance for some flows, but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes.

    The end result will be delaying the backing off of many TCP flows (which will speed them up, creating more congestion) at the expense of completely trashing a few flows (which will stall anyway due to packet reordering), and so the resulting system will be less stable.

  • by Anonymous Coward on Friday July 10, 2009 @04:26PM (#28654531)

    He has re-invented the layer 3 switch... now with less jitter and latency because:

    The FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down. And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input. If there's bandwidth available, the equipment increases the flow rates or accepts more flows at the input; if bandwidth is scarce, the router reduces flow rates or discards packets.

    So we are going to implement WRED on a per-flow basis, get rid of the queuing, and force the TCP stream to scale back its window size when we run out of bandwidth by dropping a packet out of that conversation...

    I misspoke, this is a layer 2 and a half switch!
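    A rough sketch of the per-flow "drop a single packet as a signal" idea quoted above; the one-second windows, byte counters, and class names are assumptions for illustration, not the FR-1000's actual logic.

```python
# Per-flow rate check that drops a single packet as a slow-down signal when a
# flow exceeds its allowed rate. Window length, accounting, and names are
# illustrative assumptions, not the FR-1000's actual algorithm.
class FlowRatePolicer:
    def __init__(self, max_bytes_per_s):
        self.max_rate = max_bytes_per_s
        self.flows = {}   # flow key -> (window_start, bytes_seen, already_signaled)

    def admit(self, flow_key, pkt_len, now):
        """Return True to forward the packet, False to drop it as a signal."""
        start, nbytes, signaled = self.flows.get(flow_key, (now, 0, False))
        if now - start >= 1.0:                       # start a new measurement window
            start, nbytes, signaled = now, 0, False
        nbytes += pkt_len
        if nbytes > self.max_rate and not signaled:  # over the per-flow rate: drop
            self.flows[flow_key] = (start, nbytes, True)   # one packet so TCP backs off
            return False
        self.flows[flow_key] = (start, nbytes, signaled)
        return True

# Usage with made-up numbers: a 1 MB/s per-flow cap and 1500-byte packets.
policer = FlowRatePolicer(max_bytes_per_s=1_000_000)
drops = sum(0 if policer.admit("flowA", 1500, i * 0.001) else 1 for i in range(800))
print("packets dropped as signals:", drops)   # exactly one drop for this flow
```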

  • by OeLeWaPpErKe ( 412765 ) on Friday July 10, 2009 @05:00PM (#28654893) Homepage

    All older Cisco equipment worked this way. This was nice, and worked very well for the first router(s) closest to the end customer. However, for routers meant to route for large numbers of users, this turned out to be a disaster.

    Just to give you an idea, this was EOS (end of support) before I turned 10 [cisco.com] (look for "netflow routing")

    There are a number of very problematic properties:
    -> trivial to DDoS (just generate too many flows to fit in memory, or generally increase the per-packet lookup time; see the sketch after this list)
    -> not p2p compatible (p2p will cause flow-based routers to perform at a snail's pace, because they open so many connections)
    -> possible triple penalty for every new flow (first a failed flow lookup, followed by a failed route lookup, going to the default route)
    -> very hard to have a good QoS policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per flow.
    -> if you divide bandwidth per flow over TCP, then a large overload will "synchronize" everything. To see what happens, say 3 users are happily surfing about and another user starts BitTorrent. Bandwidth gets divided over all the flows, and *every* connection closes due to timeouts.
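    As a sketch of the first point above: a bounded flow table under a flood of random 5-tuples evicts the state for legitimate flows. The table size and LRU eviction policy are illustrative assumptions, not any vendor's design.

```python
# Toy bounded flow table with LRU eviction, showing how a flood of random
# 5-tuples pushes out legitimate flows' state. Capacity and eviction policy are
# illustrative assumptions, not any vendor's actual design.
import random
from collections import OrderedDict

class BoundedFlowTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()          # flow key -> cached forwarding decision

    def touch(self, key, decision="fwd"):
        if key in self.table:
            self.table.move_to_end(key)     # hit: refresh LRU position
            return True
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # full: evict least recently used flow
        self.table[key] = decision
        return False

ft = BoundedFlowTable(capacity=10_000)
legit = ("10.0.0.2", "192.0.2.9", 6, 51514, 443)
ft.touch(legit)                                        # one legitimate flow
for _ in range(50_000):                                # attacker: random 5-tuples
    ft.touch(("198.51.100.%d" % random.randrange(256), "192.0.2.9",
              17, random.randrange(65536), random.randrange(65536)))
print("legitimate flow still cached:", legit in ft.table)   # almost certainly False
```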

    There are a number of advantages:
    -> very extensive QoS is trivial to implement
    -> stateful firewalling is almost laughably easy to implement, and very advanced firewalling can be done (e.g. easy to block SSH but not HTTPS: just filter on the string "openssh" anywhere in the connection. Added bonus: hilarity ensues if you email someone the text "openssh" and his POP3 connection keeps getting closed)

    Here's the deal: with per-packet switching, a router has to look up each packet in a table of about 300,000 entries (excepting MPLS P routers). My PC is, at this moment, holding open 331 flows to various destinations, each sending an average of 5 packets (probably a lot of DNS requests are dragging this number down), but you have to keep in mind that a flow-based router has to look up first in the "flow table" AND then in the route table (which still has 300,000 entries).

    As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router's. That's not a lot. If a single client runs torrents or other p2p, you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.

    So: flow-based routing ... for your wireless access point ... perhaps. For anything more serious than that? No way in hell.

  • by raddan ( 519638 ) * on Friday July 10, 2009 @05:17PM (#28655059)

    TCP's congestion control backs off exponentially because it has to.

    Sure, but it's looking at the problem from the wrong end. IP has no feedback mechanism to allow for flow control (i.e., to prevent the sender from overrunning the receiver), so TCP uses congestion control to react after the problem has already happened. Since TCP has no way of knowing what the available bandwidth is, it goes looking for it by causing congestion and then backing off. And since packet-switched traffic is "bursty", it resumes increasing the rate until it hits the ceiling again (because maybe you just happened to have an abnormally low ceiling when you checked before), and backs off, ad infinitum.

    This is analogous to saying "I have this problem where cars keep crashing into my house!" and so, designing your house so it can dodge cars.

  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Friday July 10, 2009 @05:37PM (#28655223) Homepage

    Anagran has a paper on just this topic; they claim to do better than WRED because they track the rate of every TCP connection.

    http://www.packet.cc/files/IFD2c.pdf [packet.cc]
