
New Router Manages Flows, Not Packets 122

An anonymous reader writes "A new router, designed by one of the creators of ARPANET, manages flows of packets instead of only managing individual packets. The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals. When overloaded, the router can make better choices of which packets to drop. 'Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems. More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems. First, routers discard packets somewhat randomly, causing some transmissions to stall. Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay). These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.'"
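The parenthetical claim that TCP throughput is inversely proportional to delay matches the standard Mathis et al. approximation for steady-state TCP. A quick back-of-envelope in Python (illustrative numbers, not from the article):

```python
# Back-of-envelope illustration of why TCP throughput falls as delay rises,
# using the Mathis et al. approximation: rate ~ MSS / (RTT * sqrt(loss)).
# All numbers below are illustrative, not taken from the article.

import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

base = tcp_throughput_bps(1460, 0.020, 0.001)   # 20 ms RTT
slow = tcp_throughput_bps(1460, 0.200, 0.001)   # 200 ms RTT (queuing delay added)
print(f"{base/1e6:.1f} Mbit/s vs {slow/1e6:.1f} Mbit/s")  # 10x the delay -> 1/10 the rate
```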
  • Well duh (Score:5, Funny)

    by Em Emalb ( 452530 ) <ememalbNO@SPAMgmail.com> on Friday July 10, 2009 @02:31PM (#28653845) Homepage Journal

    Damn right, they manage flows. It keeps the tubes from clogging.

    Duuuurrrrrr.

    • Re: (Score:3, Funny)

      by nine-times ( 778537 )

      I don't know if I trust this guy with my interweb tubes, though. Did you notice the mess of cables [ieee.org] behind him?

      If we can't trust him to keep his wiring closet organized, how can we trust him to clean the tubes?

    • by OeLeWaPpErKe ( 412765 ) on Friday July 10, 2009 @04:00PM (#28654893) Homepage

      All older Cisco equipment worked this way. This was nice, and worked very well for the first router(s) closest to the end customer. However, for routers meant to route for large numbers of users, this turned out to be a disaster.

      Just to give you an idea, this was EOS (end of support) before I turned 10 [cisco.com] (look for "netflow routing")

      There are a number of very problematic properties:
      -> trivial to DDoS (just generate too many flows to fit in memory, or generally increase the per-packet lookup time)
      -> not p2p compatible (p2p will cause flow-based routers to perform at a snail's pace, because they open so many connections)
      -> possible triple penalty for every new flow (first a failed flow lookup, followed by a failed route lookup, going to the default route)
      -> very hard to have a good QoS policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per-flow.
      -> if you divide bandwidth per-flow over TCP then a large overload will "synchronize" everything. So let's explain what happens if 3 users are happily surfing about and another user starts BitTorrent. Bandwidth gets divided over all the flows, and *every* connection closes, due to timeouts.

      There are a number of advantages
      -> easy, very extensive QOS is trivial to implement
      -> stateful firewalling is almost laughably easy to implement, and very advanced firewalling can be done (e.g. easy to block ssh but not https, just filter on the string "openssh" anywhere in the connection. Added bonus : hilarity ensues if you email someone the text "openssh", and his pop3 connection keeps getting closed)

      Here's the deal: a router doing per-packet switching has to look up in a table of about 300,000 entries (excepting MPLS P routers). My PC is, at this moment, opening 331 flows to various destinations, each sending an average of 5 packets (probably a lot of DNS requests are dragging this number down), but you have to keep in mind that a flow-based router has to look up first in the "flow table" AND in the route table (which still has 300,000 entries).

      As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router. That's not a lot. If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.

      So: flow-based routing ... for your wireless access point ... perhaps. For anything more serious than that? No way in hell.
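The flow-table-then-route-table double lookup described above can be sketched roughly as follows; the table sizes and structures here are toy stand-ins for illustration only:

```python
# Toy sketch of the flow-cache miss penalty described above: a new flow misses
# the flow table, then needs a (slow) longest-prefix route lookup, and only
# then gets cached. All structures are simplified illustrations.

route_table = {"10.0.0.0/8": "if0", "10.1.0.0/16": "if1", "0.0.0.0/0": "if2"}
flow_table = {}   # (src, dst, proto, sport, dport) -> egress interface

def ip_to_int(ip: str) -> int:
    return int.from_bytes(bytes(int(o) for o in ip.split(".")), "big")

def longest_prefix_match(dst: str) -> str:
    """Linear-scan LPM stand-in for the ~300,000-entry table mentioned above."""
    d, best_len, best_if = ip_to_int(dst), -1, None
    for prefix, egress in route_table.items():
        net, plen = prefix.split("/")
        plen = int(plen)
        mask = ((1 << plen) - 1) << (32 - plen) if plen else 0
        if (d & mask) == (ip_to_int(net) & mask) and plen > best_len:
            best_len, best_if = plen, egress
    return best_if

def forward(pkt: dict) -> str:
    key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
    if key in flow_table:                      # fast path: one hash lookup
        return flow_table[key]
    egress = longest_prefix_match(pkt["dst"])  # slow path: full route lookup
    flow_table[key] = egress                   # cache for subsequent packets
    return egress

pkt = {"src": "192.0.2.1", "dst": "10.1.2.3", "proto": 6, "sport": 1234, "dport": 80}
print(forward(pkt))  # first packet: flow miss + LPM -> "if1"
print(forward(pkt))  # later packets in the flow: flow-table hit
```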

      • Thanks for the excellent debunking. I have very little to do with the mechanics of networks, but I do remember "netflow routing"; IIRC there was quite a debate back in the '90s about packet vs. flow management, and flow lost. From a naive point of view, I could never see how adding an extra lookup table would make things more efficient.
      • by hitmark ( 640295 )

        "-> very hard to have a good qos policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per-flow."

        "-> easy, very extensive QOS is trivial to implement"

        is it me or is that contradictory?

        • Well, it simply means that the router is capable of very, very extensive QOS options with very fine-grained control.

          Compare it with saying "assembly gives you total control over the computer, and you *can* write enterprise applications directly in assembly. It is however very very hard to write anything even slightly oversized in assembly".

          The problem is the disconnect between pipe capacity and flow "capacity". This type of routers gives you total and complete control over flow bandwidth ... which does not

          • by hitmark ( 640295 )

            Sounds to me like the classical problem of ISPs overselling capacity at one end, and crying foul when the customer actually wants to make use of said capacity...

            • So let me guess: you have 5 computers at home, each on a 100 mbit connection, and you do not "oversell" ... so you have (and pay for) a 500 mbit synchronous internet connection, right?

              After all, you're just an evil ISP that wants to prevent its customers from "actually using the available capacity".

              • by hitmark ( 640295 )

                no, but then again i am fully aware of the capacity of the connection out of my home.

                however, i never recall hearing about any isp that has told a customer about the capacity of the line after the dslam or whatever.

                if they had, i may be more willing to cut back on my expectations, and maybe order a lesser connection that will fit inside their capacity.

                and that i think is the crux of the issue. they still accept payment for the higher speed, but hide behind "best effort" when i wonder why i do not ever get a

      • by copec ( 165453 )

        As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router. That's not a lot. If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.

        Where did you get these numbers from? In the article they claim their device can do a whole lot more than that.

  • So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this? Because it sounds a lot to me like encrypted packets, UDP, and peer-to-peer, three things that certain well-funded groups have been trying to kill or restrict for awhile, would seem to be the worst-affected here.

    • by 0racle ( 667029 ) on Friday July 10, 2009 @03:03PM (#28654235)
      What you describe (packet inspection and prioritizing traffic based on internal rules) is QoS. No one in their right mind is against that. The net neutrality debate is about ISPs throttling some traffic in order to extort money from both their customers and content providers that otherwise have no relationship with the ISP. The position is that ISPs should be nothing more than the tubes the content is delivered over, not gatekeepers of content.

      That an ISP may prioritize services like VOIP over http or bittorrent is not what net neutrality is about and quite frankly is something that a good network engineer would look into and would probably implement.
      • Re: (Score:2, Insightful)

        That an ISP may prioritize services like VOIP over http or bittorrent is not what net neutrality is about and quite frankly is something that a good network engineer would look into and would probably implement.

        QoS isn't a bad thing, but the user should be in control of it, not the ISP. Who's to say that an encrypted packet doesn't need a low-latency link more than the unencrypted VoIP connection? The ISP doesn't know -- it has to guess based on protocol data that may or may not be accurate. But that's a lot more work to implement, and so most ISPs won't do it...

        • The problem is, there are WAY too many people out there that think QOS stands for Queen of the Stone-Age and not Quality of Service.

          The ISP is in a no win situation, IMO. On the one hand, they have potentially hundreds of thousands of users using VOIP services who don't know the first thing about QOS, but do know the effects of jitter or packet loss, so they complain.

          What's the ISP supposed to say? "Turn on QOS?" Not that simple. On the other hand, if they do prioritize packets, then they get people who

          • For starters, hardware that handles VoIP should be taking advantage of the TOS bits, and setting their packets for Minimum Latency. Secondly, QoS implementation should be taking advantage of this "opt in priority request" This is exactly the sort of situation that TOS, traffic class, and so on were designed for.
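The "opt-in priority request" described here amounts to the application marking its own packets. A minimal sketch, assuming a Linux-style sockets API, of a VoIP application requesting DSCP EF (Expedited Forwarding, RFC 3246) via the TOS byte; whether any router honors the marking is entirely up to the network:

```python
# Sketch of an application marking its own packets via the IP TOS byte.
# DSCP EF (Expedited Forwarding, RFC 3246) is the conventional marking for
# interactive voice; honoring it is entirely up to the routers in the path.

import socket

DSCP_EF = 46                   # Expedited Forwarding code point
TOS_EF = DSCP_EF << 2          # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# From here on, outgoing datagrams carry TOS 0xB8; RTP media would be sent
# with sock.sendto(payload, (addr, port)).
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0xb8
sock.close()
```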
        • by babyrat ( 314371 ) on Friday July 10, 2009 @04:16PM (#28655033)

          QoS isn't a bad thing, but the user should be in control of it

          Exactly! That way MY packets (not some of them, ALL OF THEM) need to be prioritized.

          Kind of reminds me of the good old days when I had access to print queue priorities. No-one ever understood why my printouts always came out first...I maintained I was just lucky.

          • Comment removed based on user account deletion
            • by Lars T. ( 470328 )

              And this is why we can't have nice things.

              And why "nice" doesn't allow the users negative values.

          • by raddan ( 519638 ) *
            Well, what the user (where 'user' is more likely the application) should be in control over is the QoS parameters. Like, "I want low jitter, occasional packet loss is OK". Leave the mechanism up to the intermediaries. If someone asks for "all of the above", treat it as invalid input and ignore it, or even better, treat their traffic like shit, since they don't know how to play nice. "Priority" is only one QoS parameter, and one that the end-user should have no control over.
        • by pyite ( 140350 )
          <cite>QoS isn't a bad thing, but the user should be in control of it, not the ISP; </cite>

          The problem is that I'd be afraid of other people prioritizing all traffic rather than just some. So now I'm gonna prioritize all of my traffic. So now everything is in a gold queue and nothing gets prioritized. It is somewhat of a prisoner's dilemma. Contrast this with a network with end to end control where you can trust DSCP or COS values along the way. A possible solution is maybe to allow end users to
          • by Apocros ( 6119 )

            I was thinking this same thing... Say 50% of packets have to go into a normal/bulk queue, 35% into a medium queue, and the rest into high. Adjust the thresholds as you like. For users that don't know better, default to normal, and maybe promote to higher levels based on the destination port.

            Then, if the rolling average (over a few days or so) for packets marked "high" exceeds the threshold, all subsequent "high" packets get demoted until the average gets back to the required level. If high and medium qu
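A toy version of the rolling-average demotion scheme sketched above; the threshold and window size are arbitrary illustrations, not a proposal:

```python
# Toy version of the scheme above: track what fraction of recent packets were
# marked "high", and demote further "high" requests while that share is over
# budget. Threshold and window size are arbitrary illustrations.

from collections import deque

class PriorityPolicer:
    def __init__(self, high_budget: float = 0.15, window: int = 1000):
        self.high_budget = high_budget
        self.recent = deque(maxlen=window)   # rolling window of granted marks

    def classify(self, requested: str) -> str:
        high_share = (sum(1 for m in self.recent if m == "high") / len(self.recent)
                      if self.recent else 0.0)
        granted = requested
        if requested == "high" and high_share > self.high_budget:
            granted = "normal"               # demote until the average recovers
        self.recent.append(granted)
        return granted

p = PriorityPolicer()
marks = [p.classify("high") for _ in range(100)]
# A greedy sender who marks everything "high" still ends up with roughly the
# budgeted share; the rest is demoted to "normal".
print(marks.count("high"), marks.count("normal"))
```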

          • Not "prisoner's dilemma" but "tragedy of the commons". When a resource is free and "unlimited", individuals will use way more of it than they actually need. User-specified high QoS should come with a cost per GB. That would solve your problem. The optimal cost is left as an exercise to the reader.
        • QoS isn't a bad thing, but the user should be in control of it, not the ISP

          This statement shows you know fuckall about physical networking.

          How is Joe Blow supposed to prioritize his -extremely- time-sensitive VOIP packets over Bob Schlob's very non-time-sensitive BitTorrent packets if Bob Schlob sets their priority to the same level? Voice needs to have higher priority to function, because the usage is very sensitive to delays. Delay a packet more than a few milliseconds and you might as well drop it, because the person on the other end will notice it. You should not drop that

    • So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this?

      I dunno. If the router is designed to look at packet flow rather than the contents of said packets or their source and destination, then you can still have net neutrality.

    • by B'Trey ( 111263 ) on Friday July 10, 2009 @03:07PM (#28654291)

      Exactly how is this different from what we currently have?

      Consider a conventional router receiving two packets that are part of the same video. The router looks at the first packet's destination address and consults a routing table. It then holds the packet in a queue until it can be dispatched. When the router receives the second packet, it repeats those same steps, not "remembering" that it has just processed an earlier piece of the same video.

      Uh, no. This is called process switching. It hasn't been used in anything but the most low-end routers for quite some time. CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow caching. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speed. MPLS adds a label to the packet which identifies the flow, so it isn't even necessary to check the packet for the five components which define the flow. Just look at the label and send it on its way.

      QOS (Quality Of Service) has multiple modes of operation and multiple queue types which address the issues of which packets to drop. It may or may not include deep packet inspection to attempt to determine the type of packet.

      Perhaps they've come up with some new innovations that aren't obvious in the write-up because it's written at a relatively high level, but there's nothing here that isn't already implemented and that I don't already work with on a daily basis in production networks.
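The label-switching shortcut mentioned above (an exact-match label lookup instead of a longest-prefix search) can be illustrated roughly; the labels and next hops here are made up:

```python
# Minimal sketch of the MPLS label-swap idea mentioned above: once a label is
# established for a flow, forwarding is a direct exact-match table lookup
# rather than a longest-prefix search. Labels and next hops are made up.

lfib = {
    17: ("swap", 42, "if1"),   # in-label 17 -> out-label 42 via if1
    23: ("pop", None, "if0"),  # penultimate-hop pop: strip the label
}

def label_forward(in_label: int, payload: bytes):
    action, out_label, egress = lfib[in_label]   # O(1) exact-match lookup
    if action == "swap":
        return (out_label, payload, egress)
    return (None, payload, egress)               # label removed on "pop"

print(label_forward(17, b"video-frame"))  # (42, b'video-frame', 'if1')
```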

      • by bogd ( 912084 )
        CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow caching. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speed.

        It's more than that. The older technologies ("fast switching" in the Cisco world) used to do this - route the first packet, then switch the other packets in the flow. However, CEF goes one step further, and allo
    • I believe he's talking about managing flows to keep streams from getting interrupted, so if it sees data transfers that have been going on between 2 points consistently, it's going to be less likely to drop packets from that stream than it is some other random ping or small packet. Basically, the idea is to keep things working that are streaming, instead of dropping a packet and stalling them or sending packets out of order due to queuing. The benefit is downloads and streams of data are more likely to stay working wh
    • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Friday July 10, 2009 @03:12PM (#28654357) Homepage Journal

      No, it doesn't break net neutrality in and of itself, any more than a traffic light or a roundabout breaks road neutrality. The idea of routing flows, rather than packets, permits more packets to get through for the same bandwidth.

      So long as all flows are treated fairly, this will actually BOOST network neutrality as network companies will have less justification to throttle back protocols which take disproportionate bandwidth - as they will no longer do so. Users will also have less cause to complain, as the effective bandwidth will move closer to the theoretical bandwidth.

      The only concern is if corporations and ISPs use this sort of router to discriminate against flows (ie: ensure unfair usage) rather than to improve the quality of the service (ie: ensure fair usage).

      The belief by ISPs that you cannot have high throughput unless you block legitimate users is nothing more than FUD. It has no basis in reality. It is possible, by moving away from best-effort and towards fair-effort, to get higher throughput for everyone.

      Congested networks can be modeled as turbulent flow in a river. Blocking streams is like damming up some of the tributary streams. It causes a lot of grief and isn't really that effective.

      On the other hand, smoothing out the turbulence will improve the throughput without having to dam up anything. QoS services are intended as smoothing mechanisms, not dams. For the most part, at least.

      Most "net neutrality" advocates would be advised to focus only on the efforts to build gigantic dams, rather than to be unkind or unfair on those merely smoothing the way, with no bias or discrimination intended.

    • by mysidia ( 191772 )

      Flow-based QoS, in the form of Flow-based WRED [cisco.com] is not a new concept.

      Furthermore, Flow-based routing is not a new concept, it's a very old one.

      Perhaps what has happened is that general-purpose computing hardware, CPUs, and memory have gotten a lot cheaper, at much higher speeds and capacities, than in years past.

      It may now be possible to build routers that have the capacity to do it. Flow-based routing is extremely expensive, especially in terms of CPU and memory for bookkeeping all those flows.

      Thin

  • by raddan ( 519638 ) * on Friday July 10, 2009 @02:36PM (#28653907)
    It just makes the packet switching faster. But really, we're talking about the same idea here: datagram networks. Congestion avoidance has been known to be a difficult problem in datagram networks for a long time [wikipedia.org].

    TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here, and this router does nothing to fix that. The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer. That requires lots of memory on intermediaries, because you need all the hosts along the data path to cooperate with each other to communicate about flow control, and that means keeping state. At which point, we're not talking about datagram networks anymore. And that means dumping the other desirable thing about datagram networks: fault tolerance. Packets are path-independent.

    Anyway: getting back to TCP's congestion control: his article even says that "During congestion, it adjusts each flow rate at its input instead." Wait, what? "If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down." That's how it works right now! The only difference that I can see is that he's being a little smarter about which packets to discard, unlike RED, which is what he's comparing this to. If so, that's an improvement, but it doesn't solve the problem. It will still take a while for TCP to notice the problem, because the host has to wait for a missed ACK. TCP can only "see" the other host-- it does not know (or care) about flow control along the path. Solving the problem requires flow control along that path, i.e., in the network layer, but IP lacks such a mechanism.
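The "discard a single packet to signal the transmission to slow down" behavior being discussed is TCP's additive-increase/multiplicative-decrease loop. A toy model of the resulting sawtooth (illustrative only, ignoring slow start and timeouts):

```python
# Toy AIMD model of the behavior under discussion: the sender grows its
# window until a drop, halves it, and repeats. The drop signal takes a full
# RTT (the missed ACK) to have any effect. Illustrative numbers only.

def aimd(rounds: int, capacity_pkts: int) -> list[int]:
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity_pkts:      # queue overflows -> a packet is dropped
            cwnd = max(1, cwnd // 2)  # multiplicative decrease on loss signal
        else:
            cwnd += 1                 # additive increase per RTT
    return history

print(aimd(20, 10))  # sawtooth: ramps past 10, halves, ramps again
```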
    • Re: (Score:1, Informative)

      by Anonymous Coward

      Routing doesn't even concern itself with TCP. TCP issues such as windowing & acks are a host to host concern.

      There are some advanced QoS features on routers that will send an explicit congestion notification to hosts if their TCP flow is in a class that's exceeding its bandwidth. The end hosts are supposed to back off their transmission rate without all the missed ACKs, window shortening, and retransmissions that would otherwise be involved if traffic was dropped. That's about as gracefully as a router

      • The thing is though, QoS policies are the antithesis of the net neutral / traditional best effort internet.

        That's nonsense, QoS is about prioritizing protocols from high time sensitivity to low time sensitivity. I.e. live voice and video are both very highly time sensitive. So much so that it is better to discard a delayed packet than to send it. A jpeg is not.

        Net Neutrality is about blocking -access- so you can charge more for certain parts of the web, or throttling competitors service so you can drive customers to your service. Putting something like VOIP at the highest priority and the bittorrent protocol

      • That's absolutely correct - routers can benefit from understanding what the end-to-end service is doing, but ultimately what they're doing is routing, and it's the endpoints that are doing the TCP or UDP.

    • by Cyberax ( 705495 )

      Flow control can be greatly improved by adding NACKs to the protocol. I.e., a router will (try to) send a NACK packet after it drops your packet.

      This NACK might get lost, sure, so a timeout mechanism is still required. But in general NACKs give much better flow control. Another variant is heartbeat ACKs (used in SCTP), they allow a range of other optimizations.

      It's possible to do better than TCP. Though of course, circuit-switched networks are still superior in flow control.

      • by raddan ( 519638 ) *
        Of course, when you have the router inserting NACKs into the transport stream, you're violating your layers (although it could be argued that they're already violated because congestion avoidance got put into TCP). You're fixing lame congestion control in TCP by breaking IP. Wouldn't it be better to remove congestion control from TCP and put real flow control in IP?

        I think the real interesting thing to see happen would be LLC [wikipedia.org], because then you could run datagrams and virtual circuits on the same wire.
        • by Cyberax ( 705495 )

          NACKs in the IP layer won't be a violation of layered model if we move NACKs down to the IP level (as an optional feature, of course) :)

          In fact, we already have half-assed attempts like ICMP Source Quench. They are not sufficient, though.

          Virtual circuits (I mean real circuits with guaranteed bandwidth) are cool, but they are inefficient - you have to reserve bandwidth even if the circuit is not used right now. That makes sense for telecom apps, but not for HTTP and other 'bursty' protocols.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Doesn't WRED (Weighted RED) already do this?

    • by RichiH ( 749257 ) on Friday July 10, 2009 @03:08PM (#28654299) Homepage

      > TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here

      In a dumb network with intelligence on the edges, you can:

      1) cause congestion and then back off (TCP)
      2) hammer away at whatever rate you think you need (UDP)
      3) use a pre-set limit (which might be too high as well so no one does that on public networks)

      Stateful packet switching is literally impossible, fixed-path routing is not desirable for the reason you stated above, and I would not want anyone to inspect my traffic _by design_, anyway.

      TCP may not be perfect, but I fail to see an alternative.

    • by John.P.Jones ( 601028 ) on Friday July 10, 2009 @03:11PM (#28654335)

      TCP's congestion control backs off exponentially because it has to. There is a stability requirement: if the network is undergoing increased congestion (this is how TCP learns the available throughput and utilizes it) and the senders do not back off exponentially, then their backing off will not be fast enough to relieve congestion and stabilize the system. If this router is selectively stalling individual flows, I do not believe that will be fast enough to deal with growing congestion from many greedy clients.

      Basically, eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively hence initiating TCP backoffs from randomly selected flows, resulting in current behavior. So, of course in that gray area between the first dropped flow and when we need to revert back to normal behavior we may see improved network performance for some flows but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes.

      The end result will be delaying the back-off of many TCP flows (which will speed them up, creating more congestion) at the expense of completely trashing a few flows (which will stall anyway due to packet reordering), so the resulting system will be less stable.

      • Re: (Score:3, Interesting)

        by raddan ( 519638 ) *

        TCP's congestion control backs off exponentially because it has to.

        Sure, but it's looking at the problem from the wrong end. IP has no feedback mechanism to allow for flow control (i.e., to prevent the sender from overrunning the receiver), so TCP has congestion control instead to stop it from happening when it does. Since TCP has no way of knowing what the available bandwidth is, it goes looking for it by causing the problem and then backing off. And since packet-switched traffic is "bursty", it resumes increasing the rate until it hits the ceiling again (because maybe

    • by B'Trey ( 111263 ) on Friday July 10, 2009 @03:15PM (#28654387)

      This has already been addressed in the IP specs: ECN [wikipedia.org]

      One of the big problems with getting ECN adopted has been that Windows hasn't supported it. Vista does, and I haven't seen anything specific but I'm reasonably certain that Windows 7 does as well. Mac OS X 10.5 supports it too. Linux has supported it for quite a while. It's usually disabled by default, so that may be an issue in getting it widely supported. But the issue isn't that we don't know how to do it better. It's just overcoming the inertia.
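For reference, ECN (RFC 3168) lives in the low two bits of the old TOS byte: a congested ECN-capable router marks CE instead of dropping, and the receiver echoes the mark back to the sender. A minimal sketch of the marking decision:

```python
# The ECN mechanism referenced above (RFC 3168) uses the low two bits of the
# IP TOS/traffic-class byte. A congested ECN-capable router sets CE
# ("Congestion Experienced") instead of dropping the packet.

NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11   # RFC 3168 codepoints

def mark_congestion(tos_byte: int) -> int:
    """Set CE on an ECN-capable packet; a non-ECT packet must be dropped instead."""
    ecn = tos_byte & 0b11
    if ecn in (ECT0, ECT1):
        return (tos_byte & ~0b11) | CE   # keep the DSCP bits, mark CE
    raise ValueError("not ECN-capable: router falls back to dropping")

print(hex(mark_congestion(0xB8 | ECT0)))  # DSCP EF + ECT(0) -> 0xbb (EF + CE)
```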

      • by swmike ( 139450 ) *

        The problem with ECN is that no major core router platform supports it, and neither do the smaller routers. The only "core" networking devices I know of that support ECN are the CPU-type routers (7200 et al) that Cisco makes. Their 12000 and CRS-1 do not.

        There is little reason to have ECN turned on at end systems if the intermediate devices that move/buffer packets don't set/use the ECN flag.

    • I defer to your knowledge relative to mine, but I do wonder why we work on switching pieces of transport protocols around and changing the tiny things when we could just move to something entirely different. I recall reading IPv6 has a host of new mechanisms built directly into the protocol which address these types of concerns. IPv6 is far more than just NAT avoidance and long IP addresses - with its built in packet priority values and other bells and whistles I think IPv6 could help solve this type of p
      • Comment removed based on user account deletion
      • IPv6 is a new version of IP, not a new version of TCP/UDP (though it forces those protocols to change, because the IP addresses are longer.) Yes, there are priority bits, but there are also priority bits in IPv4, and some ISPs support them for traffic within that ISP, but very few support them between ISPs. The important change in IPv6 is of course longer addresses, plus a lot of boundless optimism about "if we're changing IP anyway, we can fix all the problems it has", some of which is warranted but most

    • by snaz555 ( 903274 )

      TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here, and this router does nothing to fix that. The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer.

      Just remove the excess forwarding buffers; there's no point buffering more than what's required for the internal forwarding jitter, which should really be no more than a few datagrams at most. TCP is based on a model where congestion = loss, not congestion = pileup. Other UDP based protocols - DNS, etc, all have their own retransmission mechanisms, also based on the same model of congestion = loss. What happens when routers have ridiculous quantities of buffer - several seconds' worth - is that entire
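The cost of "several seconds' worth" of buffer is simple arithmetic: worst-case queuing delay is buffer size divided by drain rate. With illustrative numbers (not from the comment):

```python
# The "ridiculous quantities of buffer" complaint is simple arithmetic:
# worst-case queuing delay is buffer size divided by the link's drain rate.
# Illustrative numbers: a 128 MB buffer in front of a 100 Mbit/s link.

def max_queue_delay_s(buffer_bytes: int, link_bps: float) -> float:
    return buffer_bytes * 8 / link_bps

delay = max_queue_delay_s(128 * 2**20, 100e6)
print(f"{delay:.1f} s of added latency when the buffer fills")  # ~10.7 s
```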

  • so... (Score:2, Funny)

    by Anonymous Coward

    a router tampon?

  • Ah yes, Larry Roberts. He seems to poke his head up every once in a while. From Caspian Networks, and now Anagran. He certainly likes to push flow routing, although it's been shown not to scale in practice.

    • Re: (Score:3, Insightful)

      Can you explain a little more? I just RTFA and I'm not convinced this is revolutionary either, but it's hard to say because this seems more like marketing than actual research. However, I'm hesitant to say he's full of shit without hearing a bit more of the debate around his ideas.

      One question I was hoping would be answered is what this flow routing buys you that something like SCTP wouldn't?
      • Re:This isn't new (Score:5, Informative)

        by Spazmania ( 174582 ) on Friday July 10, 2009 @03:35PM (#28654641) Homepage

        I'm hesitant to say he's full of shit without hearing a bit more of the debate around his ideas.

        There really isn't a debate around his ideas, at least not any more.

        The hitch is management overhead. Managing a flow requires remembering the flow. That means data structures and stateful processing. It's expensive and no one has demonstrated hardware accelerators that do a good job of it. On the other hand, devices like a TCAM can accelerate stateless packet switching a couple orders of magnitude past what's possible with a generic PC.

        At low data rates where DRAM latency is not an issue (presently around the 500mbps range), flows can work and accomplish much of what he claims. At higher data rates (like the 10-100gbps links on the backbone) we simply can't build hardware capable of managing flows for any kind of reasonable price.

        Beyond that, Larry has really missed the boat. The next routing challenge isn't raw bits per second. That's pretty much in hand. Rather the next challenge is the number of routes in the system. If you want two ISPs for reliability (instead of one), you currently have to announce a route into the backbone that is processed by every single router in the backbone even if it never sees your packets. That currently costs about $8k per route per year, the cost is falling a lot more slowly than the route count is climbing and the lack of filtering and accounting systems mean that each one of those $8k's is an overhead cost to the backbone networks rather than a cost directly recoverable from the user who announced the route.

        Flow based routing doesn't help us solve that challenge in the least. If anything, it makes it worse.

        If you're interested in routing theory and research, I recommend the Internet Research Task Force Routing Research Group (IRTF RRG). They're chartered by the IETF to perform basic research into Internet routing architectures and anyone interested can participate.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Definitely not new.

      "The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals."

      Where have I heard this before...oh hay...

      http://en.wikipedia.org/wiki/Cisco_Express_Forwarding [wikipedia.org]

      • Yeah, CEF has done that for a while, but it's pretty dumb as well as fast. This article sounds like what he's doing also provides somewhat fancy queue management to take advantage of the flow information during periods of congestion. It's possible that he's also doing a fancier version of CEF, which can get into trouble if there are too many traffic flows so the router runs out of TCAM, but there's not really enough detail in the article to tell.

    • by cgori ( 11130 )

      Thank you, he is the very same one whose ideas evaporated 200M+ in VC money on Caspian, right? They were across the highway from me for years when I was in the valley. plus ça change...

  • by Anonymous Coward

    Why can't we just put a pretty girl on top of it and make the packets go faster.

    Seems to work with car advertising and on animals.

  • It manages flow of traffic, recognizing when one packet belongs with the others. This sounds wonderful, at least for people trying to inject packets.

    I hope these things recognize the evil bit [faqs.org].

  • Puffery by a startup (Score:5, Informative)

    by Ungrounded Lightning ( 62228 ) on Friday July 10, 2009 @02:54PM (#28654129) Journal

    The main players in the routing industry have been working on flow-aware routing for years.

    (I'm in the hardware side of our company so I'm not sure how many and which of the features built on the flow-based architecture are already in the field. But I'm willing to bet a significant chunk of change that the full bore will be deployed on more than one name-brand company's product line and be the dominant paradigm in routing long before these guys can convince the telecoms and ISPs to adopt their product. No matter how many big names they have on staff - or how good their box is. Breaking into networking is HARD.)

    • Ya I'm failing to see what is special here. Now the article was kind of light on the details, so maybe there's more to it, but to me it sounds like what the Cisco 6000s and such already do. When you start a flow the first packet hits the router and it decides where it is going, if it is allowed and all that jazz. After that, the subsequent packets are switched which makes it much faster. Routing is essentially done on flows, not packets.

      Maybe this is somehow way more amazing, but it doesn't look like it.

    • Especially trying to break into a market by telling everyone about your awesome super cool new way of doing things ... that everyone else has been doing for 10 years already.

  • But seriously, flow management/queuing may be useful at the very edge of the network, like a BRAS. But most provider edge products (Juniper, Ericsson/Redback, ...) already have similar capabilities. Flow management past the edge of a network is pointless, especially for TCP/IP traffic.
  • I read the article and can't exactly distinguish this from IntServ [wikipedia.org]. What's the difference?
    • IntServ assumed that flows are signalled (with RSVP). The Anagran box (and the Caspian box before it) detects the first packet in a flow (by a miss in the flow cache), and then creates a flow cache entry.

      The utility of this feature is very questionable, especially since routers have been able to IP forward and apply ACLs at line rate for years.
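The first-packet-miss behavior described above can be sketched in a few lines of Python. Everything here (the dict-based cache, the `route_lookup` callback, the packet fields) is illustrative only, not any vendor's actual data structure:

```python
# Sketch of a flow cache: the first packet of a flow misses the cache and
# triggers a full route lookup; subsequent packets of the same flow hit
# the cached entry and skip the expensive path.

def flow_key(pkt):
    """The 5-tuple that identifies a flow."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
            pkt["dst_port"], pkt["proto"])

class FlowCache:
    def __init__(self, route_lookup):
        self.route_lookup = route_lookup  # full FIB/ACL path (expensive)
        self.cache = {}                   # flow 5-tuple -> next hop
        self.misses = 0

    def forward(self, pkt):
        key = flow_key(pkt)
        next_hop = self.cache.get(key)
        if next_hop is None:              # first packet of the flow
            self.misses += 1
            next_hop = self.route_lookup(pkt["dst_ip"])
            self.cache[key] = next_hop    # create the flow cache entry
        return next_hop

# Usage: two packets of the same flow cause only one full lookup.
fc = FlowCache(route_lookup=lambda dst: "eth1" if dst.startswith("10.") else "eth0")
pkt = {"src_ip": "192.0.2.1", "dst_ip": "10.0.0.5",
       "src_port": 40000, "dst_port": 80, "proto": 6}
fc.forward(pkt)
fc.forward(pkt)
```

The second packet skips the expensive lookup entirely, which is the whole pitch — and also why the cache itself (its memory and eviction behavior) becomes the scaling question raised elsewhere in this thread.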

  • Some thoughts (Score:4, Interesting)

    by intx13 ( 808988 ) on Friday July 10, 2009 @03:06PM (#28654263) Homepage

    First, routers discard packets somewhat randomly, causing some transmissions to stall.

    While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process, it is unfair to call it "random". Based on a model of the queue within the router and an estimate of the input parameters, the probability of a packet being discarded can be calculated. In fact, that's how they design routers. You pick a bunch of different situations and decide how often you can afford to drop packets, then design a queueing system to meet those requirements. Queueing theory is a well-established field (the de facto standard textbook was written in 1970!) and networking is one of its biggest applications.
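As a concrete instance of computing drop probability from a queue model, here is the textbook blocking probability of an M/M/1/K queue (Poisson arrivals, exponential service, room for K packets). Real router traffic is burstier than this model assumes, so treat it as a first approximation only:

```python
# Blocking (drop) probability of an M/M/1/K queue: an arriving packet is
# dropped when the buffer already holds K packets.

def mm1k_drop_probability(rho, K):
    """P(arriving packet is dropped) for an M/M/1/K queue.
    rho = arrival rate / service rate; K = system capacity in packets."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

# A nearly saturated link (rho = 0.95) with room for 10 packets:
p = mm1k_drop_probability(0.95, 10)
```

Larger buffers lower the drop probability in this model, but (as the delay discussion above suggests) at the price of longer queueing delays.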

    Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays

    You wouldn't expect uniform delays. A queueing system with a uniform distribution on expected number of customers in the queue is a very strange system indeed. Those sorts of systems are usually related to renewal processes and don't often show up in networking applications. That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.

    "Substantial" is the key word here. Effectively the concept of managing "flows" just means that the router is caching destinations based on fields like source port, source IP address, etc. By using the cache rather than recomputing the destination the latencies can be reduced, thus reducing the number of times you need to use the queue. In queueing theory terms you are decreasing mean service time to increase total service rate. Note however that this can backfire: if you increase the variance in the service time distribution too much (some delays will be much higher when you eventually do need to use the queue) you will actually decrease performance. Of course assumedly they've done all of this work. In essence "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.

    Personally, I'm not sure how much of a benefit this can provide. Does it work with NAT? How often do you drop packets based on incorrect routing as compared to those you would have dropped if you had put them in the queue? If this was a truly novel queueing theory application I would have expected to see it in a IEEE journal, not Spectrum.

    And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical. Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gunna cut it in my book.

  • by Anonymous Coward

    This sounds fancy, but the only real improvement is the hash-table lookup; everything else is already implemented in current-generation routers.

    and it starts at $30000 a model, ROFLMAO. Thanks, umm , but NO thanks!

    • and it starts at $30000 a model, ROFLMAO. Thanks, umm , but NO thanks!

      He claims 80gb/s route processing. Cisco's 20gb/s ASR1000 starts at $50,000, and it would cost another $45,000 to get it up to 80gb/s. I'm not talking about throughput, I'm talking about the ability to process routes.

      Cisco's ASR9000 does terabit level processing, which is what the Caspian project was aimed at, but that one starts at $450,000.

  • by elbuddha ( 148737 ) on Friday July 10, 2009 @03:09PM (#28654315)

    Yippee.

    Cisco (and probably several others) have done this by default for many, many moons now. By way of practical demonstration, notice that equal-weight routes load balance per flow, not per packet. What it allows is subsequent routing decisions to be offloaded from a route processor down to the ASICs at the card level. And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.
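The per-flow (rather than per-packet) load balancing mentioned above can be illustrated with a simple hash over the 5-tuple: every packet of a flow hashes to the same link, so packets are never reordered across equal-cost paths. This is a sketch of the idea, not Cisco's actual CEF hashing algorithm:

```python
import zlib

def pick_link(pkt, n_links):
    """Map a flow's 5-tuple onto one of n_links equal-cost links.
    Deterministic, so all packets of a flow take the same link."""
    key = "|".join(str(pkt[f]) for f in
                   ("src_ip", "dst_ip", "src_port", "dst_port", "proto"))
    return zlib.crc32(key.encode()) % n_links

pkt = {"src_ip": "192.0.2.1", "dst_ip": "203.0.113.9",
       "src_port": 40000, "dst_port": 443, "proto": 6}
# Every packet of this flow lands on the same one of 4 equal-cost links:
link = pick_link(pkt, 4)
```

Because the link choice is a pure function of the flow key, no per-flow state needs to be stored at all — which is exactly what distinguishes this from the stateful flow management being debated here.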

  • by Anonymous Coward

    Among the innovations:

    No RAM for buffering flows to cope with temporary overcommitment. Instead it does this:

    "Even more significant, the FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down."

    Um, discarding a random packet in the middle of my session will indeed slow the flow down, much in the same way as if you s

  • Revolutionary? The concept isn't new. On software-based routers, we cache route information after the first lookup from the routing table for a certain period of time, based on parameters like destination IP address, next hop, and interface. So instead of looking up the routing table again, we just look up the cached route. It's called IP Flows and it's way old.
  • by Anonymous Coward

    Someone put the spades back. flow routing is SUCH old news... How the heck did this make slashdot?

    • by swmike ( 139450 ) *

      I agree. Flow routing with 4M flows didn't work in devices engineered in the late 1990s, and it won't work now. Any script kiddie can create new "flows" by sending random port/dst IP packets through the device, and it'll fall over and die just like the devices 10 years back did.

      We stopped doing flow routing for a reason, it didn't work. Routers need to care about IP addresses and perhaps take into account port numbers to do load sharing between equal cost links, but nothing else. Looking into flows does NO

      • Well, by that logic, packet-based routers don't work either. Any script kiddie can create new 'packets' at random, flood them at the router, and the router'll fall over and die.

        • by flux ( 5274 )

          Nope. Those 'packets' don't need to be remembered by the router. Flows, which are created upon receiving packets, do.

  • by neelsheyal ( 1595659 ) on Friday July 10, 2009 @03:15PM (#28654399)
    Routing/switching based on flows is highly flawed. The article claims that the benefit comes from reduced table lookups on individual packet contents: if the 5-tuple is hashed to a flowid, then the presence of the flowid indicates that the flow is already active and will be treated preferentially during congestion. First of all, if the number of flowids is large, there is no way to store all the different flowids in a scalable and cost-effective manner. That means you need an eviction policy, which adds complexity and can hurt you even more. Secondly, there is the concept of hardware route caching, which works better than hashing flowids. Finally, the classes of flow that are really important can be protected with class-based queuing.
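The eviction objection above can be made concrete with a toy LRU-evicting flow table (illustrative only, not any router's real structure): a flood of short flows churns the table and evicts legitimate long-lived flows, forcing them back through the slow path.

```python
from collections import OrderedDict

class FlowTable:
    """Bounded flow table with least-recently-used eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()   # flowid -> per-flow state
        self.evictions = 0

    def lookup_or_create(self, flowid):
        if flowid in self.table:
            self.table.move_to_end(flowid)      # mark as recently used
            return self.table[flowid]
        if len(self.table) >= self.capacity:    # full: evict the LRU flow
            self.table.popitem(last=False)
            self.evictions += 1
        self.table[flowid] = {"packets": 0}
        return self.table[flowid]

# A burst of single-packet flows (the script-kiddie attack mentioned in
# this thread) forces an eviction for every new flow once the table fills:
ft = FlowTable(capacity=3)
for i in range(10):
    ft.lookup_or_create(("attacker", i))
```

With a capacity of 3, ten distinct flows cause seven evictions — and on a real box each eviction means some other flow's next packet pays the full first-packet cost again.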
  • by BigBlueOx ( 1201587 ) on Friday July 10, 2009 @03:16PM (#28654429)
    Why?
    It would be bad.
    I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?
    Try to imagine all the packets on your network stopping instantaneously and every router on the Internet exploding at the speed of light.
    Total TCP reversal!!
    Right, that's bad. Important safety tip. Thanks, Egon.
  • This capability is especially convenient for managing network overload due to P2P traffic. Conventionally, P2P is filtered out using a technique called deep packet inspection, or DPI, which looks at the data portion of all packets. With flow management, you can detect P2P because it relies on many long-duration flows per user. Then, without peeking into the packets' data, you can limit their transmission to rates you deem fair.

    If routers started doing this, wouldn't torrent clients just start randomizing

    • by Gerald ( 9696 )

      If you have dozens or hundreds of long-duration, active flows (BitTorrent) and your neighbor has a few intermittent, short-duration flows (Firefox), it's pretty obvious who to throttle. The port numbers in use are irrelevant in this case.

  • Wrong (Score:3, Informative)

    by slashnik ( 181800 ) on Friday July 10, 2009 @03:19PM (#28654461)

    "TCP throughput is inversely proportional to delay"

    Absolutely wrong: 2 Mb/s at 1 ms delay gives the same throughput as 2 Mb/s at 10 ms delay, as long as the window is large enough.

    • You do run into throughput limitations per flow though based on the speed of light.

      http://fasterdata.es.net/ [es.net]

    • "TCP throughput is inversely proportional to delay"

      Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay As long as the window is large enough

      Actually it is correct. The throughput is equal to TCP's congestion window divided by the round trip time (end to end delay), or TP = CWND/RTT. What he means is that assuming the window size is the same, the throughput is inversely proportional to delay.
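A quick numerical check of TP = CWND/RTT with illustrative figures:

```python
# Throughput of a window-limited TCP flow: the sender can have at most
# CWND bytes in flight per round trip, so TP = CWND / RTT.

def tcp_throughput_bps(cwnd_bytes, rtt_seconds):
    return 8 * cwnd_bytes / rtt_seconds   # bits per second

fast = tcp_throughput_bps(64 * 1024, 0.010)   # 64 KiB window, 10 ms RTT
slow = tcp_throughput_bps(64 * 1024, 0.100)   # same window, 100 ms RTT
```

With the window held fixed, the 10 ms path gets exactly ten times the throughput of the 100 ms path — the inverse proportionality both posters are arguing about, which only holds when the window (not the link) is the limit.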

    • by mevets ( 322601 )

      Wouldn't it be cool to have a protocol where throughput was not inversely proportional to delay? The more {comcast,bell,...} throttled your pirate^Wtorrent, the faster it got. If you could combine this with a protocol where latency was inversely proportional to bandwidth, just by unplugging you could have an infinite throughput, zero latency network. How cool is that!

    • by n6mod ( 17734 )

      Grandparent is absolutely right in any real network. I've done this for a living, twice.

      In any real network, packet loss is not zero. Packet loss happens for any number of reasons, including, as has been pointed out, congestion. In fact, the guys doing TFRC determined that the throughput of tcp is approximately:

      Throughput = 1.3 * MTU / (RTT * sqrt(Loss))

      Note well that window size is not a term in that equation. Bandwidth of the link isn't either, though obviously that's an upper limit.
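Plugging illustrative numbers into that approximation (a form of the Mathis equation) shows the RTT and loss dependence directly:

```python
import math

# Throughput ~ 1.3 * MTU / (RTT * sqrt(loss)) -- the loss-limited TCP
# throughput approximation quoted above. Result is in bytes per second
# when MTU is in bytes and RTT in seconds.

def tcp_throughput_bytes_per_s(mtu, rtt, loss):
    return 1.3 * mtu / (rtt * math.sqrt(loss))

# 1500-byte MTU, 50 ms RTT, 0.1% loss:
tp = tcp_throughput_bytes_per_s(1500, 0.050, 0.001)
# Same path with double the RTT:
tp_far = tcp_throughput_bytes_per_s(1500, 0.100, 0.001)
```

Doubling the RTT halves the estimated throughput, and quadrupling the loss rate halves it again — and, as the comment notes, window size never appears.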

      It's true that bandwid

  • by Anonymous Coward

    He has re-invented the layer 3 switch... now with less jitter and latency because:

    The FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down. And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input. If there's bandwidth available, the equi

  • by JakiChan ( 141719 ) on Friday July 10, 2009 @04:02PM (#28654929)

    He's doing it *without* custom ASICs and without TCAM. TCAM is very expensive. I'm not sure this is faster than CEF or the like, but it may very well be cheaper.

  • Fun watching people say this "doesn't work" -- back when I was at Caspian, the real-world runs were working quite well at gigabit speed and, if memory serves, they had a 10-gigabit line card (this was 2006). The cost was that they had to design ASICs to do this, and they were trying to get the same performance out of commodity hardware. It looks like this is now the case, which means the cost of the equipment has dropped significantly.

    Where it improves over current routing and QoS is that it does it on the

  • Isn't this something that you can accomplish with OpenBSD packet filter?

    • by Shatrat ( 855151 )
      Software routing/switching isn't going to touch a hardware solution for speed and reliability.
      Unix routers have their place, but they can't do big pipes.
    • Netcraft confirms - OpenBSD is... oh, sorry, wrong thread...

      Both of them are identifying flows, but the similarity pretty much stops there.
      OpenBSD packet filters are a firewall function, typically used to protect your endpoint from evil traffic. This is a router, deciding which interface to send a packet out when it arrives on a different interface at an ISP backbone location. Packet filters are running at the speed of your computer's interfaces and application processing, typically well under a gigabit/

  • "managing flows" unburied a memory of a tampon commercial.

    During your period, your flow level can change from one day to the next. That's why Tampax developed the Compak Multipax. You get 3 tampon absorbencies to meet your changing needs, in one convenient package.

  • Cool, a transfer protocol that adapts what's sent when according to traffic flow. It needs a catchy name.

    I suggest Zmodem.

  • And it appears most of the responders didn't. Understandable given its length. One choice quote I'd like to point out though is:

    We designed the equipment to operate at the edge of networks, the point where an Internet service provider aggregates traffic from its broadband subscribers or where a corporate network connects to the outside world. Virtually all network overload occurs at the edge.

    This isn't to replace routers, this is supposed to sit between end-users and the rest of the infrastructure so things get throttled before they get into the main router/backbone/wherever it's going.

The "cutting edge" is getting rather dull. -- Andy Purshottam
