Gmail, Slack, Amazon, Spotify, Twitch, Hulu, Google Are Suffering Outages for Some Users

A wide range of services, including Gmail, Google, business collaboration service Slack, Amazon, Twitch, Hulu, and Spotify, are suffering outages, several users and readers have reported. The reports started coming in half an hour ago, but the cause of the disruption has yet to be identified.

Update: Verizon says there is a fiber cut in Brooklyn. Further reading: Verizon Fios is experiencing outages on the East Coast.
  • by nwf ( 25607 )

    Related to the Verizon fiber cut in Brooklyn?

  • by drkshadow ( 6277460 ) on Tuesday January 26, 2021 @12:38PM (#60993656)

    The cause of the disruption is a cut fiber in Brooklyn:
    https://twitter.com/VerizonSup... [twitter.com]

    • by gweihir ( 88907 )

      Somebody did not do their due diligence when planning and verifying their redundant connections...

      • by cusco ( 717999 )

        Over a decade ago there was a fiber cut in the Midwest (farmer with a backhoe) that took out about 1/3 of the cross-country Internet capacity. It turned out that several providers had run their primary and redundant connections in separate conduits that went through the same trench.

        • by jhecht ( 143058 )
          A 2001 tunnel fire in Baltimore https://en.wikipedia.org/wiki/... [wikipedia.org] knocked out multiple long-distance routes. Some carriers had been told their services were going over different fibers, but multiple fibers went through the tunnel. They got more careful afterwards, but I guess the engineers who went through that have all retired now.
          • by jythie ( 914043 )
            I don't know... given that so much of the outage has shown up as erratic, slow connections, I could see it being a case of the redundant systems working but all the rerouted traffic saturating their available bandwidth.
        • by gweihir ( 88907 )

          A classic mistake. Same trench, same bridge, same tunnel, relay stations on same power circuits, etc.
          Done right, you find these and work around them. Done wrong, you notice when it hits the fan, and sooner or later it will.
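
          As a concrete illustration, here is a minimal Python sketch of that kind of audit; the inventory model and asset names are purely hypothetical, not any carrier's real data:

              # Hypothetical sketch: flag physical assets that two "redundant"
              # paths share -- same trench, same tunnel, same power feed.

              def shared_risk(path_a, path_b):
                  """Assets both paths depend on; each is a single point of failure."""
                  return set(path_a) & set(path_b)

              primary = {"conduit-7", "river-tunnel", "substation-3"}
              backup = {"conduit-12", "river-tunnel", "substation-9"}

              overlap = shared_risk(primary, backup)
              if overlap:
                  print("NOT redundant -- shared risk:", overlap)
                  # -> NOT redundant -- shared risk: {'river-tunnel'}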

      • by jon3k ( 691256 )
        In Nashville, after that guy blew up that AT&T location downtown, AT&T cell service was down for several days, all the way up into Kentucky, down into Alabama, and hours outside of Nashville. Basically every store's POS was down as well. It was hard to believe a single building could cause such widespread issues.
        • by Pascoea ( 968200 )
          Yes, but did he disrupt AT&T's audit of the Dominion Voting Machines, or shut down the Kraken Supercomputer whose cooling system was run out of that building? /s
      • That costs money.
        Which is really sad: a major part of our infrastructure can get hit because a company is too cheap to plan for redundancy and rerouting. It is cheaper for them to take an outage and fix the problem when it happens than to maintain a robust environment where outages don't happen.

      • by mydots ( 1598073 )

        Apparently you can plan and verify and still get burned. The datacenter where we had some servers became unreachable, and the company that owned it told us the contractor that ran the fiber had actually laid the redundant lines along the same path instead of different paths, contrary to what the contract stated when the datacenter was built. So all the fiber to the datacenter was cut at the same time.

        • by gweihir ( 88907 )

          Well, that is what I mean by "verify". Sure, it is expensive to have somebody physically verify where the fibers actually are. I have done things like this on occasion, including looking at power ingress. But there was always somebody in management who really wanted to know.

  • I do not think that this is a problem with hosting providers, but a problem with ISPs. There must be a trunk down somewhere; I can't connect reliably to a number of non-commercially hosted services.

  • People keep saying it is this Brooklyn fiber cut, but why would one cut in Brooklyn take out service in Washington DC?
  • by thereddaikon ( 5795246 ) on Tuesday January 26, 2021 @12:52PM (#60993728)

    The old telco networks were supposed to be resilient enough to withstand severe damage and disruption in wartime. Old Bell built a tough network. But recent issues have shown that modern telecommunications infrastructure is incredibly fragile. It's a bit of a joke, especially when the protocols the internet is built on are designed to be self-healing. How do we have single points of failure anywhere in CONUS? We knew the ISPs were pissing away the taxpayer dollars they've been given, but I don't think we understood just how much.
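
    For what it's worth, single points of failure are easy to find mechanically if you have an accurate topology map. Here is a rough sketch (the toy city graph is invented for illustration) that flags every node whose loss would partition the network:

        # Hypothetical sketch: find cut vertices (single points of failure)
        # in a toy network graph via a connectivity check per removed node.

        TOPOLOGY = {
            "nyc": {"bos", "phl"},
            "bos": {"nyc"},
            "phl": {"nyc", "dc"},
            "dc": {"phl"},
        }

        def connected_without(graph, removed):
            """True if the graph stays connected after removing one node."""
            nodes = [n for n in graph if n != removed]
            seen, stack = {nodes[0]}, [nodes[0]]
            while stack:
                for nxt in graph[stack.pop()]:
                    if nxt != removed and nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return len(seen) == len(nodes)

        cut_vertices = [n for n in TOPOLOGY if not connected_without(TOPOLOGY, n)]
        print("single points of failure:", cut_vertices)
        # -> single points of failure: ['nyc', 'phl']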

    • Probably due to CDNs routing you, based on location, to the nearest source of data. If there is a fiber cut, there might not be logic to route you to another CDN server for the data.
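
      A rough sketch of that failure mode, assuming a naive nearest-POP picker (the POP names, coordinates, and reachability flags are all made up):

          # Hypothetical sketch: a CDN that always picks the geographically
          # nearest POP has no fallback when the path to that POP is cut.

          import math

          POPS = {"nyc": (40.7, -74.0), "chi": (41.9, -87.6), "atl": (33.7, -84.4)}
          REACHABLE = {"nyc": False, "chi": True, "atl": True}  # nyc path is cut

          def pick_pop(client, failover):
              nearest_first = sorted(POPS, key=lambda p: math.dist(client, POPS[p]))
              for pop in nearest_first:
                  if not failover or REACHABLE[pop]:
                      return pop  # naive mode returns nyc even though it's unreachable

          dc_client = (38.9, -77.0)  # roughly Washington DC
          print(pick_pop(dc_client, failover=False))  # -> nyc (dead end)
          print(pick_pop(dc_client, failover=True))   # -> atl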

    • by drkshadow ( 6277460 ) on Tuesday January 26, 2021 @01:23PM (#60993836)

      To be "fair" the network did survive and is alive. They're experiencing latency, dropped packets, and as a result a loss of connectivity -- but the hosts _are_ reachable.

      What this really shows is that their capacity planning is insufficient for a fault in the network. They're being hounded by Congress with the argument that they have plenty of bandwidth: 'See, look, everyone's using it, and it works, even with the virus, even with usage spikes -- you've got plenty of bandwidth, you don't need to restrict users.' The comeback is that they clearly don't have plenty of bandwidth: lose one fiber, and they've lost the ability to serve demand. The compromise would be to limit users when a fault is detected. (Maybe that's not as easy -- but it is doable.)

      Capacity planning is at fault. They need at least double the required capacity, at _least_ for major population centers (or datacenter-population centers), in order to serve demand in case of a fault. When we say "double", we mean coming in on two different cables, not two fibers in the same cable. All of that will increase costs further, and you can expect those costs to be passed on (along with a "slight" percentage increase to profits). Did Verizon end up routing traffic through Level3, who over-advertised their capacity? Or did Verizon under-purchase capacity to these data centers?
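
      One way to make the "double capacity" requirement concrete is a simple N-1 check: after any single cut, can the surviving links still carry peak demand? A sketch with invented link names and figures:

          # Hypothetical sketch of an N-1 capacity check: which single cuts
          # leave too little surviving capacity for peak demand?

          LINKS_GBPS = {"fiber-a": 400, "fiber-b": 400, "fiber-c": 200}
          PEAK_DEMAND_GBPS = 700

          def risky_cuts(links, demand):
              """Links whose loss leaves the survivors short of peak demand."""
              total = sum(links.values())
              return [name for name, cap in links.items() if total - cap < demand]

          print("cuts that cause congestion:", risky_cuts(LINKS_GBPS, PEAK_DEMAND_GBPS))
          # -> cuts that cause congestion: ['fiber-a', 'fiber-b']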

    • by jythie ( 914043 )
      Oddly enough, I saw this outage as a good example of resilience. Connections became unstable and slow, but services did not completely go down. There is not enough bandwidth for seamless rerouting, but rerouting did happen.
  • And can't keep their shit from going down.
  • Strikes again!

    Or maybe it's finally The Storm!(TM)

  • Car 54, where are you!?
  • Bezos, etc.: time to put something back into the infrastructure that made you ungodly wealthy.
  • Not saying that it was the aliens but it was the aliens.
