Network Monitoring Appliance Looks Below 1 Microsecond

eweekhickins writes "Corvil has unveiled a new tool to help network managers cope with increasing pressure to improve performance. The appliance, from the Dublin-based company (with backing from Cisco), passively monitors traffic across network segments at resolutions below 1 microsecond, correlates its monitoring data with that of remote appliances, and gives a complete picture of latency, jitter, packet loss and other phenomena that affect network and application performance. Corvil CEO Donal Byrne noted that 'If you can drop a millisecond [of latency] off, you're a hero.'"
This discussion has been archived. No new comments can be posted.

  • Drop a millisecond (Score:1, Interesting)

    by Anonymous Coward
    "If you can drop a millisecond [of latency] off, you're a hero."

    This is the kind of attitude that breeds the Scotty types (you know who you are). If you can cut 2 ms, then only cut 1 ms now and save the other for when you really need it. And when the company is going to spend thousands for analysis, then suddenly cut the last 1 ms.
    • Re: (Score:3, Interesting)

      by BSAtHome ( 455370 )
      However, it might be more effective to make your application more tolerant to latency (and fix your TCP window first).
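        The TCP window point is concrete: the receive window must cover the path's bandwidth-delay product (BDP), or throughput suffers no matter how low the raw latency is. A minimal sketch, using assumed link figures (the OS may clamp or adjust the requested buffer size):

        ```python
        import socket

        # Sketch (assumed figures): the TCP receive window must cover the
        # bandwidth-delay product, or the sender stalls waiting for ACKs.
        bandwidth_bps = 100_000_000   # assumed 100 Mbit/s link
        rtt_s = 0.040                 # assumed 40 ms round-trip time
        bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # bytes that can be "in flight"
        print(bdp_bytes)  # 500000 -> the window should be at least this large

        # Request a larger receive buffer; the kernel may clamp or double this value.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
        print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
        s.close()
        ```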
      • by molo ( 94384 ) on Monday October 22, 2007 @06:10PM (#21078171) Journal
        Some applications are natively sensitive to latency and jitter. Consider VOIP or teleconferencing, or algorithmic stock trading.

        -molo
        • by khasim ( 1285 )

          Some applications are natively sensitive to latency and jitter. Consider VOIP or teleconferencing, or algorithmic stock trading.

          I guess that would depend upon where both points are. One has to be on your network. The other ... ?

          Now, with Ethernet, one machine can hog the switch (I'll guess that they aren't using hubs). What use is shaving a millisecond off the app if you're still vulnerable to someone else hogging the network at the moment that you're trying to complete your transaction?

          • Re: (Score:2, Interesting)

            Now, with Ethernet, one machine can hog the switch (I'll guess that they aren't using hubs). What use is shaving a millisecond off the app if you're still vulnerable to someone else hogging the network at the moment that you're trying to complete your transaction?

            That's what proper network segmenting is for. The guy that hogs the bandwidth usually has some business need to do so (but not always ;). Anyway, say the CAD guys do large file transfers multiple times a day. Well, you segment them off. That wa

            • Or perhaps implement QoS and throw in VLANs for good measure?

            • Actually, that is what QOS is for. Segmenting off everyone that wants to do some transfer makes for a fragmented unsummarized network with stretch VLANS all over god and country. Segmenting in little networks is fine...but a disaster in large ones.
              • Re: (Score:2, Interesting)

                by smellotron ( 1039250 )

                Actually, that is what QOS is for. Segmenting off everyone that wants to do some transfer makes for a fragmented unsummarized network with stretch VLANS all over god and country. Segmenting in little networks is fine...but a disaster in large ones.

                You're missing where one of the parents commented about cases where speed matters. If you're doing algorithmic trading and you're using software QoS, and your competitor is using physical hardware segmentation, your competitor wins (all other things being equ

              • Re: (Score:2, Informative)

                by Anonymous Coward
                >Actually, that is what QOS is for. Segmenting off everyone that wants to do some transfer makes for a fragmented unsummarized network with stretch VLANS all over god and country. Segmenting in little networks is fine...but a disaster in large ones.

                All of which just basically proves how shitty collision-based networking is, especially as network size and speeds increase: You have to throw more and more hardware at it, to preserve performance.

                The only thing that switching and VLANs do, is attempt to recti
                • by mikkelm ( 1000451 )
                  >Because, when you get right down to it - it's still Ethernet, and so, is still basically CSMA/CA, though the switches, VLANs, etc., hide it for the most part.

                  When did wired Ethernet become CSMA/CA, and what decade are you in? Collision-based networking? The CD in CSMA/CD has been irrelevant after almost a decade of full-duplex microsegmentation, effectively rendering the MA point-to-point, rather than "multiple access", and throwing out the CS in favour of "empty your buffers as fast as you can".

                  If you
                  • When did wired Ethernet become CSMA/CA, and what decade are you in? Collision-based networking? The CD in CSMA/CD has been irrelevant after almost a decade of full-duplex microsegmentation, effectively rendering the MA point-to-point, rather than "multiple access", and throwing out the CS in favour of "empty your buffers as fast as you can".

                    Even your 'high school senior fresh out of Net+' knows that a broadcast storm will very effectively return your switched, full-duplex, microsegmented VLAN to a CSMA/CD network -- quickly. We have all kinds of fancy tools these days to hack Ethernet to do what we want, such as switches and bridges and routers and spanning tree protocol, and fast routing, etc., but in the end, it's still Ethernet. With a few simple hacks, I can force all of the ports on even the best Cisco switch equipment on a given VLAN

• Broadcast storms can be avoided with even a modicum of proper planning. Of course you can hack it, but you can hack anything if you have the patience and skill. That by no means constitutes any fundamental failure of Ethernet.
                    • Broadcast storms can be avoided with even a modicum of proper planning

                      All it takes is one badly behaving Windows client to create a broadcast storm. :)

That by no means constitutes any fundamental failure of Ethernet.

                      I'm not saying that Ethernet isn't the best technology available given the options. Its ubiquitousness is part of what makes it the best strategy, though. Realistically, Ethernet is an old technology and a fresh approach, given what we know today, could do much better technically. It has its flaws, and some of them are very fundamental. You're right in that you can hide these flaws through good network engineering and adm

                    • That's not what I'm saying at all.

                      You can effectively stop broadcast storms at layer 2 with the right implementation, so what I'm saying is akin to saying "Well, sure, you can hack Windows with a few simple scripts, but that doesn't constitute any fundamental failure in the concept of operating systems."

                      There are of course better things than vanilla Ethernet for regular IP networks, but Ethernet is still a very viable strategy if implemented correctly.
        • by Alarash ( 746254 )

          Some applications are natively sensitive to latency and jitter. Consider VOIP or teleconferencing, or algorithmic stock trading.

Most VoIP codecs can work with a maximum of 30ms jitter. You can't drop below 1ms because of the latency introduced by the network equipment (just going through the hardware takes a few milliseconds, not to mention stateful equipment such as firewalls or load balancers, etc.)

          Also, I wonder how they can passively measure latency or jitter - accurately, that is. Network Testin

          • Re: (Score:3, Funny)

            by amorsen ( 7485 )
You can't drop below 1ms because of the latency introduced by the network equipment (just going through the hardware takes a few milliseconds, not to mention stateful equipment such as firewalls or load balancers, etc.)

            Here's a nickel, kid, go buy yourself a real firewall.
            • by Alarash ( 746254 )
I read my post again and I wasn't clear. I didn't mean that a single firewall introduces several milliseconds of latency (although this can be true when you reach some critical load). But every device adds some latency, lower than 1ms, and since packets go through a whole lot of these devices, in the end you can't really drop below a certain amount of latency.
    • "If you can drop a millisecond [of latency] off, you're a Hero."

      Does this mean that a next step in human evolution is being able to measure time with microsecond accuracy?
• I wouldn't believe it if I hadn't seen it for myself, but the users at my previous company could detect a 0.3 millisecond additional delay on a fibre line less than 100m long (they were whiny sons of bitches anyway). We took the old Ethernet line out for some reason or another, and when we switched them over to this link there were instant complaints. They didn't even know we'd made the switch, but still they cried.

        So I guess we've already evolved that far... the next step would have to be inbuilt dart
        • by Mr Z ( 6791 )

          Note: the delay appears to have been the switches at either end not working nicely with the new link medium

          Was there any sort of increased packet loss? Also, was it merely an average increase of 0.3ms? If there were any sort of peaks in the latency, i.e. increased jitter, that could be much more noticeable than an average latency increase might suggest.

          If your signals travel the speed of light, the propagation delay from pt. A to pt. B (100m in your case) should be around .33us (microseconds). Propaga
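            The propagation arithmetic above is easy to check; a quick sketch (the 0.66 velocity factor is an assumed typical value for cable, not from the post):

            ```python
            # Quick check of the propagation-delay figure quoted above.
            C = 299_792_458          # speed of light in vacuum, m/s

            def prop_delay_us(distance_m, velocity_factor=1.0):
                """One-way propagation delay in microseconds."""
                return distance_m / (C * velocity_factor) * 1e6

            print(round(prop_delay_us(100), 2))        # ~0.33 us for 100 m in vacuum
            print(round(prop_delay_us(100, 0.66), 2))  # ~0.51 us in a typical cable
            ```

            Either way, raw propagation over 100 m is three orders of magnitude below the 0.3 ms the users noticed, so the extra delay had to come from the equipment, not the medium.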

          • Unfortunately (or thankfully as the case may be), my information is second hand, coming from the system administrator at the time rather than my own checks.

            There wasn't any packet loss that I'm aware of; some pretty intensive (proprietary) UDP applications were operating across the link (there were no TFTP-style checks on this) and dropping a packet would have been noticed. Average latency was, (for example) 0.7ms and increased to 1.0ms... some fairly standard diagnostics such as a continuous ping showed no maj
  • Can anyone explain to me what the advantages of this actually is?
    sorry if I sound stupid. It seems like greak to me. I'm just used wireshark etc
    • by Kazrath ( 822492 )
      I am definitely not a subject matter expert... however, using Wireshark to trace packets from one specific box to another with the intention of diagnosing and fixing a network issue is much different from actively monitoring and storing all traffic going through your switches. Wireshark is "on-demand" while what they are talking about is "real-time".

      The breakthrough appears to be that it is the fastest device of this type available.

    • Re: (Score:3, Funny)

      by evil agent ( 918566 )

      sorry if I sound stupid. It seems like greak to me.

      That just about says it all...

      • Ah, sorry about the spelling in that post everyone!
        I managed to miss out more words, and make more spelling mistakes in that single post than I usually do in a week.
        I guess I need some more Coffee.
    • by DigitalCH ( 582593 ) on Monday October 22, 2007 @06:50PM (#21078609)
      The benefit depends on the person using it. Take an investment bank and an algorithmic trading system. Most of your money is made on volume, the faster you reply the more deals you get, the more volume you have, the more money you make. I've seen a lot of presentations at investment banks where every 5 milliseconds they shave off is $50+ million/year more money they make. Keep in mind that most of these companies have gotten to the point where they can do round trip for the whole trade transaction in 5 milliseconds or less. So each millisecond is like a 20% improvement.
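      The arithmetic behind that "20% improvement" remark, using only the figures from the post above (illustrative, not audited numbers):

      ```python
      # Figures quoted in the post above: 5 ms round trip, "$50+ million/year"
      # per 5 ms shaved off.
      round_trip_ms = 5
      savings_per_5ms = 50_000_000

      print(1 / round_trip_ms)     # 0.2 -> each millisecond is ~20% of the round trip
      print(savings_per_5ms // 5)  # implied ~$10,000,000/year per millisecond saved
      ```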
    • Re: (Score:3, Informative)

      It seems quite simple. I took the following from the article:

      They timestamp the packet at some point in the network and when it arrives at the other side they timestamp it again to work out the trip time. Not really rocket science, but they seem to have come up with ways of measuring time pretty accurately at two different places and keeping the clocks in sync or working around clock drift in their measurements.

      The other part of their system is some algorithmic work that correlates packets and tries to wo
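      A toy sketch of that scheme (illustrative only, not Corvil's actual implementation): each capture point computes a content signature for every packet, and matching signatures across the two points yields per-packet one-way trip times, assuming the clocks are already synchronized:

      ```python
      # Sketch: pair packets seen at two capture points by a content hash and
      # subtract timestamps to get one-way latency. Assumes synchronized clocks;
      # real systems must also model clock offset and drift.
      import hashlib

      def signature(packet_bytes):
          # In practice only invariant header fields plus payload would be
          # hashed; hashing the whole packet keeps the sketch simple.
          return hashlib.sha256(packet_bytes).hexdigest()

      def one_way_latencies(captures_a, captures_b):
          """captures_*: lists of (timestamp_seconds, packet_bytes)."""
          seen = {signature(p): t for t, p in captures_a}
          return [t_b - seen[signature(p)]
                  for t_b, p in captures_b if signature(p) in seen]

      a = [(0.000000, b"pkt1"), (0.000010, b"pkt2")]
      b = [(0.000120, b"pkt1"), (0.000135, b"pkt2")]
      print(one_way_latencies(a, b))  # per-packet trip times in seconds
      ```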
  • No offense guys, but unless you can make something that cuts the ping time in half, we won't be having any good FPS games against the Americans without increasing the ping from 60ms to 250ms or higher. 249ms won't cut it. It just won't.
    • Damn you Americans and your low latency connections, I'm lucky to get under 300ms to most game servers (500+ for World of Warcraft) and I am on adsl with no interleaving.
    • by NSash ( 711724 ) on Monday October 22, 2007 @07:19PM (#21078929) Journal
      "There is an old network saying: Bandwidth problems can be cured with money. Latency problems are harder because the speed of light is fixed - you can't bribe God."

      A beam of light takes roughly 1/7 of a second to travel around the world. That means that if you're playing on a server on the other side of the world, your ping will always be at least 143 ms. That's a hard physical limit: the only way to decrease that time would be to drill a hole through the Earth, or move closer.
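      Checking the figure (assuming an equatorial circumference of about 40,075 km):

      ```python
      # Back-of-the-envelope check of the speed-of-light latency floor above.
      C = 299_792_458            # speed of light in vacuum, m/s
      EARTH_CIRCUM_M = 40_075_000  # assumed equatorial circumference

      around_ms = EARTH_CIRCUM_M / C * 1000
      print(round(around_ms, 1))      # ~133.7 ms once around, in vacuum
      print(round(around_ms / 2, 1))  # ~66.8 ms one way to the antipode
      # A ping to the far side of the world is a round trip, so the floor is
      # the full circumnavigation time; in fiber (velocity factor ~0.68) the
      # same trip takes roughly 1.5x longer.
      ```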
      • by Anonymous Coward
Electrical impulses propagate much, _much_ slower than the speed of light when run through copper (and perhaps fiber optics, since the light beam has to bounce around so much that the path is many times longer?), so your hard limit may be ~143 ms, but only if your signal went through vacuum all the way to the server and back.

        The real latency should actually be much higher due to switching and forwarding overhead and any monitoring that the NSA does. 300ms at least.
        • by Agripa ( 139780 )

          Electrical impulses propagate much, _much_ slower than the speed of light when run through copper (and perhaps fiber optics, since the light beam has to bounce around so much that the path is many times longer?)

          "Much much slower" is a pretty big exaggeration. Propagation velocity using a solid polyethylene dielectric is 66% and that is about as slow as it gets for electrical signals. Some glasses are as low as 50%. Store and forward ethernet switches at 100 Mbits/s have to add at least 150 microseconds pe

      • That's why I only attend LAN parties in the vacuum of outer space.
      • A beam of light takes roughly 1/7 of a second to travel around the world.
        So the hard limit would be 1/14 to get a packet halfway around the planet (approx 71 ms) which actually is pretty fast!
        • and another 1/14 to get it back again making a total ping time of 1/7 of a second.

          and light in fiber moves slower than light in free space.

          Of course UK to USA isn't as much as halfway round the world.

      • by Anonymous Coward
        ..you just make a Pre-Cabled Domestic Wormhole..

        To make your own wormhole:

        1. Put your network cables inside garden hoses first as wormholes can be moist environments
        2. Take two horizontal clothes washing machines and put them back to back (you can use tumble dryers but it's a bit more dangerous - not wet enough)
        3. Open the door on each washing machine and take out any socks or coins
        4. Drill a hole through each drum and thread the garden hoses through
        5. Place a teaspoon of uranium and a tablespoon of marzip
      • No, it would be more like 71.5

        Going half way around the world only takes 1/2 of 1/7, or 1/14th.
    • by dodobh ( 65811 )
      Just host your own server and make the Americans play against you.
  • Argh, the buzzwording! It burns us! Make it stop!!

    More seriously though, the article might benefit from a little more context. As mentioned above, taking 1ms off network latency is meaningless across long connections, where you expect 40ms latency just from the routers, speed of light, etc. But when microseconds count, when your latency is 5ms inside a system where automated transactions and big money are involved, the situation is different.

    The readers do not recognize the requirem
  • by jjgm ( 663044 ) on Monday October 22, 2007 @06:25PM (#21078337)

    The RIPE NCC [ripe.net]'s Special Projects [ripe.net] group have been offering sub-microsecond latency/jitter/analytical services to ISPs for years. Their data is invaluable and unique, since it measures latency, jitter and packetloss in a single direction (unlike ICMP Ping, which is a round-trip measurement over an asymmetric path) and goes back at least to 2000. The paper claims accuracy to 0.0006 ms, which was good for the time when the product was designed.

    Read about the project here [ripe.net] and the paper on TTM [ripe.net] [pdf] that was presented at the PAM2001 conference [ripe.net].

    (This isn't what Corvil do.)

  • "For every packet [that the appliance records], we compute a signature and a time stamp"

    "When it exits on the other side of the WAN, we time-stamp it, and we can correlate the data across the whole network"

    Well, doh ... how about NMAP and Wireshark ...

    'Byrne said Corvil's customer base is "more than 10 but less than 100."'

    Whatever happened to that perpetual motion outfit ..
