Networking The Internet Hardware

Cisco Routers to Blame for Japan Net Outtage 78

An anonymous reader passed us a link to a Network World article filling in the details behind the massive internet outage Japanese web users experienced earlier this week. According to the site, faulty Cisco routers were to blame for the lapse, which left millions of customers without service from late Tuesday evening until early Wednesday morning. "NTT East and NTT West, both group companies of Japanese telecom giant Nippon Telegraph and Telephone (NTT), are in the process of finalizing their decisions on a core router upgrade, according to the report. The routing table rewrite overflowed the routing tables and caused the routers' forwarding process to fail, the CIBC report states."
This discussion has been archived. No new comments can be posted.


  • Dijkstra (Score:5, Funny)

    by pwrtool 45 ( 792547 ) on Saturday May 19, 2007 @06:41AM (#19189707)
    Japanese police have put out an APB for some guy named "Dijkstra."
  • Eggs in one basket (Score:4, Insightful)

    by slashthedot ( 991354 ) on Saturday May 19, 2007 @06:46AM (#19189723) Homepage

    "Clearly, this failure doesn't reflect well on (Cisco) and at the very least highlights the need for two vendors," states CIBC analyst Ittai Kidron in the report.
    Yeah, don't keep all your routers in Cisco basket.
    • by sarathmenon ( 751376 ) <{moc.nonemhtaras} {ta} {mrs}> on Saturday May 19, 2007 @08:10AM (#19189995) Homepage Journal

      Yeah, don't keep all your routers in Cisco basket.


      I don't agree that the blame lies with Cisco, not until I see more evidence. Cisco has some of the most stable operating systems. The command-line interface can sometimes suck, but their stability is remarkable. My guess is the fault lies with the ISP for not planning network redundancy and not scaling their networks in time. Cisco might look bad in this article, but their track record of creating an OS with fewer bugs is much better than Microsoft's, Sun's and the others'.
      • ... fewer bugs is much better than Microsoft's, Sun's and the others'.

        Microsoft, sure, but Solaris is pretty reliable.
      • We are talking about 2-4 thousand routers. I think the redundancy was there. I am, however, willing to blame a routing protocol before I blame the OS. The article unfortunately doesn't say which routing protocol they were using. My guess is they were probably using IS-IS.
        • NM, it's more likely BGP4.
          • by rekoil ( 168689 )
            Actually, you use both. BGP is an Exterior Gateway Protocol, which gives each router an "exit point" to a given prefix - that is, how to get the packet out of NTT's network to get it where it needs to go (e.g. "send it to Google's peering point at the Tokyo exchange point"). IS-IS, OSPF, EIGRP, etc. are Interior Gateway Protocols, which map out the NTT network so that a given router knows which neighboring router is closer to that exchange point.

            Effectively, it's a two stage lookup - BGP will tell you that your grandmother lives in Chicago, but you need IS-IS to tell you which highway to get on.
            • by sirket ( 60694 )
              Effectively, it's a two stage lookup - BGP will tell you that your grandmother lives in Chicago, but you need IS-IS to tell you which highway to get on.

              This is a terrible analogy. It isn't a two stage lookup - it's a single routing table lookup. BGP populates the routing table with routes it learns from external autonomous systems, and an interior routing protocol like OSPF populates the routing table with routes learned from within the autonomous system itself. Where both protocols know of the same route th
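
              For what it's worth, the "single lookup" point can be shown with a small, purely hypothetical Python sketch (the prefixes, next hops and distance values below are invented, and this is nothing like a real hardware FIB): routes learned from BGP and from an IGP such as OSPF land in one table, and forwarding does a single longest-prefix-match lookup, with administrative distance only breaking ties between identical prefixes.

                import ipaddress

                # (prefix, next_hop, protocol, admin_distance) - all values invented
                ROUTES = [
                    (ipaddress.ip_network("0.0.0.0/0"),      "peer-A",    "BGP",  20),
                    (ipaddress.ip_network("203.0.113.0/24"), "peer-B",    "BGP",  20),
                    (ipaddress.ip_network("203.0.113.0/24"), "core-rtr2", "OSPF", 110),
                    (ipaddress.ip_network("10.20.0.0/16"),   "core-rtr3", "OSPF", 110),
                ]

                def lookup(dst):
                    # One lookup: longest prefix wins; admin distance breaks exact ties.
                    addr = ipaddress.ip_address(dst)
                    candidates = [r for r in ROUTES if addr in r[0]]
                    if not candidates:
                        return None
                    return max(candidates, key=lambda r: (r[0].prefixlen, -r[3]))

                print(lookup("203.0.113.7"))   # the /24 beats the default route
                print(lookup("198.51.100.1"))  # nothing more specific; the BGP default wins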
      • by frost22 ( 115958 )
        I'm afraid you are a little off here.

        As I interpret TFA, it was a BGP problem - possibly a failover situation not handled correctly. Or they (NTT) did something seriously weird with their BGP design.
        • by rekoil ( 168689 )
          Might have been an eBGP-to-IGP redistribution event - the BGP table carries close to 217,000 routes today, as it's designed to do, but IGPs are only designed to carry at most tens of thousands of routes, as those routes need far more detailed information on them than BGP routes. Occasionally, due to either a config error or a software bug, the BGP routes will get injected into the IGP (OSPF or IS-IS), and each router's IGP process chokes on the routes, but not before passing them on to the next router, and
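
          A toy model of that failure mode, with every capacity and count invented for illustration (real limits vary by platform and IGP): the full BGP table is an order of magnitude larger than what an IGP is built to carry, and flooding means every router receives the bogus routes before any single one falls over.

            BGP_TABLE_SIZE = 217_000   # roughly the figure cited above
            IGP_CAPACITY = 30_000      # "tens of thousands" of routes, made concrete

            class Router:
                def __init__(self, name):
                    self.name, self.igp_routes, self.alive = name, 5_000, True

                def receive_flood(self, extra):
                    self.igp_routes += extra
                    if self.igp_routes > IGP_CAPACITY:
                        self.alive = False   # the IGP process chokes

            routers = [Router(f"rtr{i}") for i in range(1, 6)]
            for r in routers:
                # A config error or bug redistributes the full BGP table into the IGP.
                r.receive_flood(BGP_TABLE_SIZE)

            print([(r.name, r.alive) for r in routers])   # every router ends up over capacity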
      • by Anonymous Coward
        You're doing an apples-and-oranges comparison. Cisco's IOS is far more dedicated to a specific set of tasks than the other notable OSes. So yes, one would expect far fewer bugs to be visible. That doesn't mean they aren't there; just that they haven't been discovered.

        Having worked at many of the companies which supply OSes, I'd say Cisco is, IMHO, the worst. They go for lots of cheap talent; the common theme is to hire lots of low-paid people rather than focusing on getting the best and the brightest. And it shows.
      • Re: (Score:3, Insightful)

        by sargon ( 14799 )

        Cisco has some of the most stable operating systems.

        You must be using some Cisco OS I don't know about. I am in the process of upgrading 120 Cisco boxes thanks to that "stable operating system."

        Junipers are a different matter. MUCH more stable.

        Cisco might look bad in this article, but their track record of creating an OS with fewer bugs is much better than Microsoft's, Sun's and the others'.

        Riiiiight. Apparently you have never had to deal with Cisco's inability to produce an IOS which doesn't have a BGP bug in it. Or MPLS bug. Or... Well, the list is long.

      • by medea ( 38161 ) *
        > Cisco has some of the most stable operating systems.

        Ah, yeah. I am not sure, but it seems you never really worked with Cisco gear in a service-provider world...

        I just have one word for you: CEF-Bug
      • by sloth jr ( 88200 )

        I don't agree that the blame lies with Cisco, not until I see more evidence. Cisco has some of the most stable operating systems. The command-line interface can sometimes suck, but their stability is remarkable. My guess is the fault lies with the ISP for not planning network redundancy and not scaling their networks in time. Cisco might look bad in this article, but their track record of creating an OS with fewer bugs is much better than Microsoft's, Sun's and the others'.

        I think you need more time with Cisc

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Excuse me for speaking bluntly but:
      PEBCAK
    • Re: (Score:3, Funny)

      Yeah, I guess now they'll be supplementing with some Belkin and D-Link routers as well.
  • by DrYak ( 748999 ) on Saturday May 19, 2007 @06:46AM (#19189727) Homepage
    ...They will no longer be the dot in .jp
  • JunOS (Score:5, Funny)

    by Anonymous Coward on Saturday May 19, 2007 @06:49AM (#19189735)
    For those that have used JunOS before, I'm sure you're all saying:

    "A Juniper router is like my girlfriend.. It will never go down on me."

    • Yeesh - sorry man! :)
    • For those that have used JunOS before, I'm sure you're all saying: "A Juniper router is like my girlfriend.. It will never go down on me."
      Are you saying that /. readers have girlfriends here?
  • CEF and the routers. (Score:5, Informative)

    by wickedsun ( 690875 ) on Saturday May 19, 2007 @06:57AM (#19189765)
    I think it's funny. Usually, when you open a Cisco TAC case about a "faulty" router that's no longer forwarding traffic, Cisco will tell you it's your config's fault if it's not working properly.

    Usually what happens is that the router doesn't have enough memory to store all the CEF (Cisco Express Forwarding) info, causing the router not to forward packets for certain subnets. I've seen it happen often enough to know. While Cisco is right that the problem is caused by a lack of memory for the config, I think it shouldn't stop forwarding the packets altogether (as in, it should stop using CEF if the table gets out of hand).

    While I think Cisco is not completely to blame (badly scaled networks, not upgrading routers in time), it sucks that this will hit them. There are better solutions out there, but I have to say that Cisco's support is quite good and they're pretty fast. I work in an all-Cisco environment (for the routers) and they've been fast whenever we needed a router analyzed.
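
    To make the CEF point above concrete, here is a toy sketch (the capacity, prefixes and fallback behaviour are all invented; real CEF is nothing this simple): when the hardware forwarding table cannot hold every prefix, traffic for the prefixes that did not fit is effectively blackholed rather than forwarded some slower way.

        FIB_CAPACITY = 3   # pretend the hardware table holds only 3 prefixes

        routing_table = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "203.0.113.0/24"]
        fib = routing_table[:FIB_CAPACITY]        # programmed into hardware
        missing = routing_table[FIB_CAPACITY:]    # did not fit

        def forward(prefix):
            if prefix in fib:
                return "forwarded in hardware"
            if prefix in missing:
                return "dropped: prefix not programmed"   # the failure mode described above
            return "no route"

        for p in routing_table:
            print(p, "->", forward(p))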
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      If that were the only breakage in CEF...
      I have a Cisco with a complex config with tunnels and there really is no way it will work reliably with CEF enabled.
    • Sounds to me like, unless you're dealing with a massive network, you are not summarizing your subnets well.
  • Underspec routers (Score:5, Interesting)

    by ReidMaynard ( 161608 ) * on Saturday May 19, 2007 @06:59AM (#19189767) Homepage
    Phrases like

    The routing table rewrite overflowed the routing tables
    and

    router capacity was partly responsible for the failure
    lead me to think this was a problem which was probably reported numerous times to middle management and perpetually postponed.
    • by thogard ( 43403 )
      lead me to think this was a problem which was probably reported numerous times to middle management and perpetually postponed
      Who's middle management? Cisco's?

      Their routers have been perpetually running out of memory for reasonable routing tables since at least 1992.
  • by Anonymous Coward on Saturday May 19, 2007 @07:04AM (#19189779)
    "The routing table rewrite overflowed the routing tables and caused the routers' forwarding process to fail, the CIBC report states"

    Ok.. That says to me that their routing tables got really big, the routers ran out of memory... Or.. they had a prefix limit set, and it kept dropping the BGP session(s)...

    If either of the above is true, properly designed filtering of the prefixes they send to and receive from their BGP neighbors would have resolved this outage... It sounds like someone may have been incompetent, and they are trying to pawn off the "ownership" of this outage on Cisco.

    Either that, or it's a major IOS bug, and the article's author just sucks and didn't mention that..
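
    As a rough sketch of the difference between the two behaviours mentioned above, with all numbers and prefix names invented: a maximum-prefix limit tears the whole session down once a neighbor sends too much, while an inbound filter simply discards what you did not ask for and keeps the session up.

        MAX_PREFIX = 100_000   # invented per-neighbor limit

        def with_prefix_limit(received):
            if len(received) > MAX_PREFIX:
                return "BGP session torn down: maximum-prefix exceeded"
            return f"session up, {len(received)} prefixes accepted"

        def with_inbound_filter(received, allowed):
            kept = [p for p in received if p in allowed]
            return f"session up, {len(kept)} of {len(received)} prefixes accepted"

        full_feed = [f"prefix-{i}" for i in range(230_000)]   # a full-table feed
        expected = {f"prefix-{i}" for i in range(500)}        # the routes we actually expect

        print(with_prefix_limit(full_feed))               # the session flaps
        print(with_inbound_filter(full_feed, expected))   # stays up, table stays small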
  • by Anonymous Coward on Saturday May 19, 2007 @07:15AM (#19189809)
    Being a current CCIE, and having extensive experience with both vendors' boxes, I wouldn't use anything other than a Juniper for core infrastructure, and I'm never going back to Cisco kit..

    To be fair, Cisco is untouchable in the enterprise class with their CPEs..
    • by Anonymous Coward on Saturday May 19, 2007 @11:18AM (#19190791)
      We're a Cisco shop but are seriously looking into Juniper due to some negative service-impacting experiences. Juniper, especially the M series, looks like it was designed very intelligently from the ground up: superior hardware architecture with separation of routing engine, packet forwarding and control plane; a much more powerful CLI with config error checking and timed rollback; wire-rate granular filtering; one train of code to follow; and so on. Unlike the Cisco 6500/7600 Sup720-3B/3BXL, with hardware limitations of 256K and 512K IPv4 routes respectively, even Juniper's older M20 platform has been tested with upwards of 1 million routes. As for stability, Juniper is found in the core of most service providers, government, academia and research (the Internet2 high-speed network, http://www.abilene.iu.edu./ [abilene.iu.edu]). I see Juniper as the Unix of routers and Cisco as the Windows of routers. If you desire stability, security, performance and flexibility, go Juniper. Cisco still has a place, such as in enterprises that still run legacy IPX.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        I run a 10Gig network with 0 Cisco products in it. That said, Cisco needs a little defending based on your statements:

        one train of code to follow and so on.

        Juniper has different code loads for M/T series vs. J-series vs. E-series. The nice thing is that there's only 1 load per series.

        Unlike Cisco 6500/7600 Sup720-3B/3BXL with hardware limitation of 256K and 512K IPv4 routes respectively

        The Cisco 6500 is a layer3 switch, not a router. The 7600 was designed by Cisco to be an edge aggregation router, not
      • IIRC, JunOS is actually a Unix (FreeBSD variant).
  • On the basis of the information in the NW article, I can't make out what the general nature of the alleged fault is on the "faulty" routers. I get that some routing table size limit was exceeded. But what was the nature of the problem?
    • Did a manual change exceed a design limit? If so, why wasn't the manual change rejected? (If it was rejected, that's not a fault, it's user error)
    • Did an automatic change (like fail-over) applied to a valid configuration produce an invalid one? If so, did the routers report this, generate some kind of trap or alarm? If so, I guess the problem is a bit nebulous; maybe a monitoring failure, but maybe the system could have issued warnings that certain kinds of possible failover could exceed implementation limits. Hard to know without more detailed information.
    • Did an automatic change silently produce the wrong result (like forwarding some traffic and not other traffic) *without* generating a trap or alert? If so, I would certainly call this a fault (bug). But the article doesn't contain enough information to point conclusively in this direction.
    The event is big news, so I guess NW felt they had to say *something*. But while I'm no big fan of Cisco gear, it looks to me as though the explanation is as likely to be human error as equipment faults or bugs. One potential cause of problems in big routers is that the high-level software's view of the state of the routing engine gets out of sync with the actual state of the ASICs. I wonder if that happened here. My guess is that once more details of the incident emerge it will turn into a not-news story.
    • by Anonymous Coward
      I have a friend who works for Cisco, setting up and configuring routers remotely via telepresence, all over the world. I know that lately they have been working on some routers in Japan.

      *IF* (and I don't know for sure that this is the case, so it's a big IF) the project this person has been working on relates to the problem in this story, then I would say that your guess of 'human error' is likely a very large part of what happened.

      I say this because my friend has filled me in on some of the stories relatin
  • by ctime ( 755868 ) on Saturday May 19, 2007 @07:28AM (#19189849)
    I think it's great when a company is blamed for having poor products when it's really the fault of the company using them (in this case, NTT). The way the article is presented seems unfairly biased. The problem isn't with Cisco products here but with the lack of knowledge about scaling them properly. The headline is similar to saying something like "Ford Motor Company cars involved in most car accidents historically". A properly designed network with just about any vendor, especially Cisco, would have avoided this issue.
  • Also.. (Score:4, Insightful)

    by niceone ( 992278 ) * on Saturday May 19, 2007 @07:36AM (#19189877) Journal
    Cisco routers to blame for most of the rest of the internet's non-outage.
  • TCAM exhaustion (Score:5, Informative)

    by anticypher ( 48312 ) <[moc.liamg] [ta] [rehpycitna]> on Saturday May 19, 2007 @07:46AM (#19189907) Homepage
    This was certainly a problem with slightly older Cisco kit, such as 6500s with Sup720a cards. Their TCAM memory (which holds prefix+destination tuples in a form of cache) overflows as the internet approaches 245,000 routes. Once there is no more space in TCAM, many Cisco architectures fall back to processor routing. That means that when traffic that was switched in hardware starts hitting the CPU, the box falls over whimpering for mercy.

    If NTT had been following Cisco mailing lists, or keeping up to date on what their salesmen had been telling them for several years, they would have seen this problem looming and changed their routing structure or at least upgraded the processors for something with slightly more TCAM. The size of the internet is not going to stop growing because many companies chose to go with underpowered Cisco kit. The internet will continue to grow by 12,000 to 17,000 routes per month, accelerating over the next few years as IPv4 space becomes exhausted and de-aggregation becomes the norm.

    This is one of my long-standing grudges about Cisco design. They are always designing their core routers to be just slightly ahead of the size of the internet, forcing people to upgrade within a few years. Designed obsolescence is the term. Even their new CRS-1 platform will fail over to CPU near 512,000 routes (0x80000), or sometime around the end of 2008 to mid-2009. By then, they'll probably have an expensive upgrade path for customers that will hold for just another year or two.

    It's not just Cisco kit that is going to have problems over the next few months. By the end of June the internet will be at 256,000 routes (really 262,144 or 0x40000), which will be a problem for some other manufacturers. Some are starting to fail at 0x3C000 (245,000) routes, some already failed at 0x30000 last year.

    On the plus side, the OpenBGPd crowd doesn't suffer from this, since their code is all CPU switched (but using very clever and efficiently coded routing tables) so their routing table is limited only by memory. But an OpenBGPd machine will never have the raw efficiency of a VLSI based hardware solution.

    A quick look at my local looking glass shows 233,979 routes on the internet this morning.

    the AC
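
    Putting the figures from this comment together (a back-of-the-envelope projection that uses only the numbers quoted above, nothing else): starting from 233,979 routes and adding 12,000 to 17,000 per month, the 0x3C000 and 0x40000 marks are only a month or three away, and 0x80000 lands somewhere around late 2008 to mid-2009.

        START = 233_979
        RATES = (12_000, 17_000)   # routes added per month, per the comment above
        THRESHOLDS = {"0x3C000": 0x3C000, "0x40000": 0x40000, "0x80000": 0x80000}

        for label, limit in THRESHOLDS.items():
            soonest = (limit - START) / max(RATES)   # fast growth, fewer months
            latest = (limit - START) / min(RATES)    # slow growth, more months
            print(f"{label} ({limit:,} routes): {soonest:.1f} to {latest:.1f} months away")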
    • by swmike ( 139450 )
      The CRS-1 is tested with at least 2M IPv4 routes. It should be enough until 2015 or beyond.
      • Re:TCAM exhaustion (Score:5, Informative)

        by anticypher ( 48312 ) <[moc.liamg] [ta] [rehpycitna]> on Saturday May 19, 2007 @08:27AM (#19190071) Homepage
        The CRS-1 is tested with at least 2M IPv4 routes.

        It appears to be four separate instances of 512K routes; the total is for MPLS customers shoving full BGP tables into their mesh. With more than 8 MPLS customers doing screwy things today, the box starts hitting its CPUs. I haven't received a denial from the CRS-1 guys, just some hand-waving and a promise to look into it. Implications that a better config would help haven't actually produced an example of what to do, and the XR code is just different enough to hide underlying architecture deficiencies. The other problem is that every CRS-1 seems to be put into production before engineering has time to play with them and learn their tricks. Given time, all kinds of clever designs for XR code will spread around, just as there are tricks of the trade that the most experienced IOS-based engineers grok.

        It should be enough until 2015 or beyond.

        And 640k should be enough for everyone. Seriously, I keep running across 2500s still doing their thing, but not as core BGP routers. So the CRS-1 platforms may well be running tucked into edges in 2015. Bean counters love kit that has amortised many times over.

        the AC
        • Re: (Score:2, Informative)

          by swmike ( 139450 )
          If you're running four full-BGP VPN customers in your core routers that can handle 2M routes, you've made a major design mistake, and that's not the fault of the hardware manufacturer.

          Regarding routing table growth, hopefully IPv6 might stifle that a bit as we're going to be running out of IPv4 space in the next 3-5 years and IPv6 space is allocated in much larger blocks requiring fewer routes.
          • Re:TCAM exhaustion (Score:4, Interesting)

            by anticypher ( 48312 ) <[moc.liamg] [ta] [rehpycitna]> on Saturday May 19, 2007 @03:12PM (#19192479) Homepage
            you've made a major design mistake

            Not one of MY designs, but you are right about the mistake part. I know of a carrier with CRS-1s struggling with a poor design coupled with an out of control sales force that will not ever say "NO!" to a customer doing bad things to their MPLS service. That's the origin of the idea of a maximum of four instances of 512K routes in 4 separate TCAMs per chassis (or per line card, or per virtual machine, or something). Not really my job any more, so I learn this over beers next to the data centre and extend my sympathies to those stuck in the Cisco world.

            hopefully IPv6 might stifle that a bit

            Well, the IPv6 table is ~850 routes right now, growing by 10 to 20 new routes per month. Just like the early days of the internet as BGP rolled out. Now I can toss out the obligatory "You kids get off my LAN".

            Problems are already starting to be seen by the RIRs, where speculative companies have started grabbing IPv4 allocations with no intention of using them, betting on a market for buying and selling prefixes and forcing the RIRs out of business. Exactly what happened to the DNS market when it became apparent that second level domains could be rented for yearly fees for a large profit.

            If companies start buying and selling prefixes in an unregulated free-market frenzy, aggregation will become a fond memory, and expect every router to need several gigabytes to hold the 2 million+ routes on the old IPv4 internet. At RIPE meetings, there is a hope that this is a worst-case scenario, but it seems to be a business plan for some less altruistic people at ICANN.

            the AC
    • Re: (Score:3, Insightful)

      by jacksonj04 ( 800021 )
      So basically what you're saying is there was insufficient routing capacity on the network causing it to fail? Well, I'm shocked!

      Seriously though - would you try to run a datacentre on a home router from NetGear? If I did and the network fell over in a fiery mass of routing tables, I wouldn't say NetGear was to blame for building a bad router. I'd blame the network architect who thought they could shove hundreds of servers through a 5-port-with-wifi device.
      • The problem was that the internet had grown beyond the capacity of their core routers, hence the core router upgrade that was "in progress". The headline should actually read:

        OLD Cisco Routers to Blame for Japan Net Outage (with only one 't' in "Outage", just as in TFA!)

        Hey folks, don't stop CIDR'ing routes just because there seems to be enough routing table space "right now"!

        • Since the routers would have continued to do fine if they had been used within their design specifications, surely it should actually be:

          Failure to Invest to Blame for Japan Net Outage

          Or, more to the point:

          Management Idiocy To Blame for Japan Net Outage

          Evidently, Japan has its share of PHBs too.
    • Re: (Score:3, Funny)

      The size of the internet is not going to stop growing because many companies chose to go with underpowered Cisco kit. The internet will continue to grow by 12,000 to 17,000 routes per month, accelerating over the next few years as IPv4 space becomes exhausted and de-aggregation becomes the norm.

      It sounds like what we need is legislation to enforce some hard limits on the growth of Internet routing tables in order to avoid these kinds of DoS attacks in the future. If we lobby Congress now we can hopefully a

      • Re: (Score:3, Insightful)

        by thogard ( 43403 )
        Yep, limit it to 16,777,216 /24 routes. That will fix it. If your router has 16 interfaces, you can do this with 8MB of cache RAM to make the quick decisions, plus whatever else you need to process the routes.
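
        Working through that arithmetic (the 4-bits-per-entry assumption, enough to pick one of 16 interfaces, is what makes the 8MB figure fall out):

          routes = 2 ** 24            # 16,777,216 possible /24 prefixes
          bits_per_entry = 4          # log2(16 interfaces)
          print(routes * bits_per_entry // 8)   # 8388608 bytes, i.e. 8MB of cache RAM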
    • by Anonymous Coward
      250K is quite lame. I just tested a bit over 1 million installed routes on a Juniper in my lab ... a 5-year-old M-series at that.

      This is no shock: Juniper's first innovation was the use of high-speed RAM rather than TCAM for route lookups, so route table scaling has never been a problem for them...

      Marketing on the other hand... geesh those cartoons still give me nightmares.
      • Re: (Score:3, Informative)

        by frost22 ( 115958 )
        IIRC we tested amounts like that even in well-equipped 75ers.

        As others already noted, the 6500/7600 is a switch with limited routing capabilities. You use it as a core router at your own risk (and peril).
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      TCAM (ternary CAM) exhaustion sounds plausible. We were looking into the 6500/7600 to upgrade our 7200 platforms and were quoted the Sup720-3B supervisor/routing engine. Being doubtful of Cisco these days, I did some double-checking of my own, since they seem to have put their focus on being a marketing gorilla instead of a technological leader. It turns out the Sup720-3B has limited TCAM that supports only 256K IPv4 routes and even fewer IPv6 routes. The current BGP routing table is just shy of that mark.
      • Everyone using older (as in two or three years old) Cisco kit has been dumping their Sup720-A or 3B cards on the used market. The price of those cards has completely collapsed. There is the -3BX card, which can handle 393,000 routes in TCAM, but they'll be obsolete by the end of 2008.

        If you have a design where a 6500 or 7600 isn't doing core routing, somewhere out on the edge, just buy the chassis and line cards from Cisco, and pick up one of the TCAM-poor routing engines for less than 5% of GPL.

        Juniper has
    • by sjames ( 1099 )

      The thing is, 4GB would be enough to handle routing even if each single IP address was assigned randomly (that is, if everyone was allocated nothing but /32s and it was done such that no aggregation at all was possible). 4GB is not THAT huge these days. There's not really a great reason (other than planned obsolescence) not to be fully future-proof.
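
      One way to arrive at that 4GB figure (the one-byte next-hop index is an assumption made here for illustration; the comment above does not specify it): a flat table with one entry for every possible /32.

        addresses = 2 ** 32        # every possible IPv4 /32
        bytes_per_entry = 1        # enough to index up to 256 next hops
        print(addresses * bytes_per_entry)   # 4294967296 bytes = 4GB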

  • I read this as human error and lack of planning that some admins have successfully blamed on hardware.
  • ...to ficks mye badd speling. Cant seam two finde "outtage" (orr "expirey") inn mye dikshunairy...
  • 'routers went down .. after a switchover to backup routes triggered the routers to rewrite routing tables'

    "At this time, Cisco and NTT have not determined the specific cause of the problem"
  • Why would it be necessary to run full BGP routes in all of your routers anyway? Couldn't you run partial routes in the edge routers and save your full routes for your core and peering points? 2-4 thousand routers is a lot to go down at once.
  • by Allnighterking ( 74212 ) on Saturday May 19, 2007 @02:39PM (#19192187) Homepage
    I'm sorry, but I've got a total of 3 data centers and I can tell you when, where and which router/PIX/switch has a problem. I do it by not only monitoring the item itself (which is what everyone should do) but also by monitoring "through" the item in question. I monitor a specific point on the other side (or in some cases a set of points), not to find out if that point is good, but to find out if the path is good. Three things have to be monitored (a rough sketch follows the list):

    1. Local status (Am I alive)

    2. Path (can I get from me to you, what is the quality of the path?)

    3. End point (are you there?)
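
    A rough sketch of those three checks, using nothing more than the system ping command (the hostnames are placeholders, and a real deployment would use a proper monitoring system rather than a script like this):

        import re
        import subprocess

        def ping(host):
            # Return the round-trip time in ms for one ping, or None on failure.
            proc = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                  capture_output=True, text=True)
            if proc.returncode != 0:
                return None
            m = re.search(r"time=([\d.]+)", proc.stdout)
            return float(m.group(1)) if m else None

        def check(device, point_behind_it):
            rtt = ping(point_behind_it)
            return {
                "local status": ping(device) is not None,   # 1. is the device itself alive?
                "path": rtt,                                # 2. can I get through it, and how well?
                "endpoint": rtt is not None,                # 3. is the far point there?
            }

        print(check("core-router.example.net", "host-behind-it.example.net"))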

    If at any time you let the number of paths and interconnects overwhelm you, get a new job. You've lost control. Draw pictures of the network. When you have an outage, start looking immediately at what you have connectivity with and what you don't. Large data centers can get complex in their interconnects. Divide it up into "blocks", verify a block and move on.

    The biggest problem in a situation like this is that I'm willing to bet the techs were wasting their time trying to figure out why the network went down. Who cares why? You need to quickly assess what is down, what you can do, and what you can't do. You need to know what is normal and what is not. If you don't, a situation like this can happen.

    The worst thing that can happen is if the network is divided into "territories." Usually in a case like this, people spend more time trying to blame the other guy than they do finding the cause of the problem. Finally, design. Somewhere along the line some pencil pusher decided that a single point of failure was economically feasible. The techs were willing to sheep right along, and the senior admin played politics and didn't rock the boat.

    In the end, the techs blew it. The after-action report and follow-up will tell the final tale.

