The Internet

BT Internet Outage Was Our Fault, Says Equinix (theregister.co.uk) 68

Kat Hall, reporting for The Register: Telecity's owner, Equinix, has 'fessed up to a "brief outage" which knocked 10 per cent of BT internet users offline this morning, as well as a number of other providers. A spokesman from the group, which slurped up Telecity for €2.3bn last year, confirmed that the outage occurred at its LD8 site in the Docklands. The company has nine London sites which service more than 600 businesses. The outage was due to a power failure and lasted for around 75 minutes. (Update: Some readers note that the outage lasted for as long as three hours.) BT wasn't the only ISP that suffered an outage earlier this morning. All services have been restored, according to Ars Technica. Update: 07/20 14:57 GMT: It was apparently a faulty UPS that caused the outage.
Comments Filter:
  • by account_deleted ( 4530225 ) on Wednesday July 20, 2016 @10:26AM (#52547173)
    Comment removed based on user account deletion
    • Wow. Here in Seattle we are still using 9600 baud modems.
    • I have a hot spot on my iPhone. If my internet went out and I absolutely needed to VPN into work, I'd use my phone and expense it later. Or I'd call the people who control our company phones and ask them to enable the hot spot feature on my employer-owned Samsung Galaxy. Not a big deal, and not worth a second monthly bill just to have a circuit I'll rarely use.
  • The outage lasted a LOT longer than 75 minutes. I tried repeatedly to get into BT webmail all morning - it was at least 3 hours after the outage before I succeeded. And during all that time, the BT DNS service was not working, so I couldn't do any other work.

    #RANT# The BT-supplied router, the fornicating, clunky, useless and slow Home Hub 5, does not allow you to put in your own DNS servers. So while it is proof against subscriber morons, it is totally vulnerable to central morons. #/RANT#
    • Comment removed based on user account deletion
    • by tomxor ( 2379126 )

      ...And during all that time, the BT DNS service was not working, so I couldn't do any other work. #RANT# The BT-supplied router, the fornicating clunky useless and slow Home Hub 5, does not allow you to put in your own DNS servers...

      1. Never rely on routers supplied by ISPs (especially BT); they reliably suck giant fucking donkey balls.

      2. Your OS doesn't have to (and arguably shouldn't) use the DNS server address handed out by DHCP (i.e. by the router).

      Arguably you should specify the DNS servers on your OS for security purposes anyway (e.g. a compromised router sending all your DNS requests to a malicious server and sending you off to some amazon.com impersonation... that's what you risk every time your computer connects to a public AP). A sketch of querying a resolver of your own choosing follows below.

      Another reason not
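      As a rough illustration of point 2, here is a minimal sketch of querying a resolver you choose directly, rather than whatever the router hands out over DHCP. It assumes the third-party dnspython package (pip install dnspython); 1.1.1.1 and 8.8.8.8 are just well-known public resolvers used as examples, not a recommendation.

      # Minimal sketch: query DNS servers you choose, ignoring whatever
      # the router handed out via DHCP. Assumes dnspython is installed;
      # the resolver addresses are example public resolvers.
      import dns.resolver

      resolver = dns.resolver.Resolver(configure=False)  # skip the system resolver config
      resolver.nameservers = ["1.1.1.1", "8.8.8.8"]      # resolvers of your choosing

      answer = resolver.resolve("example.com", "A")
      for record in answer:
          print(record.address)

      At the OS level the same idea simply means hard-coding resolver addresses in the network settings instead of accepting the DHCP-supplied ones.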

  • by Anonymous Coward

    Must be somewhere in The London.

    Heh.

  • Graph of outage. [ukinternetreport.co.uk]

    It was pretty funny; downdetector.co.uk showed the problem very clearly, affecting large swathes of the country for about 3 hours. And on the same page, there was BT Care suggesting that people reset their routers and reboot their PCs :)

    When it went down, a quick traceroute showed the problem to be at BT@Telehouse. Luckily, we retained connectivity to our hosted server (even though most of the rest of the net was unreachable), so a combination of 'ssh -D 1080' and twiddling proxy settings worked.
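    For anyone who hasn't used it, 'ssh -D 1080 user@hosted-server' opens a local SOCKS5 proxy that tunnels traffic through the remote box. A minimal sketch of the "twiddling proxy settings" part, assuming Python with the third-party requests library installed with SOCKS support (pip install "requests[socks]"); the URL and port are placeholders:

    # Route HTTP(S) requests through the local SOCKS proxy opened by
    # `ssh -D 1080 user@hosted-server`. The socks5h scheme makes the
    # remote end resolve host names, so a dead local DNS service
    # (as in this outage) doesn't matter.
    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:1080",
        "https": "socks5h://127.0.0.1:1080",
    }

    resp = requests.get("https://example.com/", proxies=proxies, timeout=10)
    print(resp.status_code)

    The socks5h scheme (rather than plain socks5) matters here because it pushes the DNS lookups to the remote end, which is exactly what you want when the local resolver is the thing that's broken.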

  • I own an ISP. We consider a 'brief outage' something under 2 minutes, not 75+
    • Now you know better. Next time you can finish that cup of coffee and the BLT before going to sort out the problem. It will still be well within the new BT-defined brief outage and nobody can complain at you.

  • As someone who lives in a captive Windstream area, I can tell you that 75 minutes of outage would be GREAT! We regularly have outages that last for over a day!! Of course, here in Conservativia-land, any discussion of using the Gummint to force Windstream to allow competing ISPs to use the existing copper plant won't even get started, despite the suffering that local businesses go through over the outages.
    • As someone who lives in a captive Windstream area, I can tell you that 75 minutes of outage would be GREAT! We regularly have outages that last for over a day!! Of course, here in Conservativia-land, any discussion of using the Gummint to force Windstream to allow competing ISPs to use the existing copper plant won't even get started, despite the suffering that local businesses go through over the outages.

      I live in a blue city and have the same problem. You are naive if you think that just because the Democrats patronize you, they aren't in bed with the ISPs just as much - and a lot of the time more - than the conservatives are.

  • If it's a UPS that's not U, doesn't that just make it a PS? Perhaps an IPS, or even a PoS?

  • by Not-a-Neg ( 743469 ) on Wednesday July 20, 2016 @01:03PM (#52548005)

    Reminds me of a company I know that built a brand new data center, put in an over-sized UPS system, over-sized electric generator, state of the art power monitoring/transfer system, and tested the generator bi-annually. Only there were three problems when the area finally suffered a blackout:

    1.) They never tested fail-over to the UPS; they had only tested starting up the generator and then manually switching off mainline power once the generator was fully operational to see if it worked.

    2.) The UPS installer never bothered to connect the batteries to the power control unit, so when power did fail, everything immediately lost power (these were racks of large batteries wired together, not the plug-and-play consumer stuff). When the generator kicked in, everything tried to turn back on at the same time and tripped the breaker. LOL.

    3.) The air conditioner was not connected to the fancy power transfer system and after a little over an hour, servers started throttling and eventually shut down from the heat. This all happened in early August on one of the hottest days of that year.

  • Back when I was a system administrator for a government department, looking after the computers for the web sites, losing a UPS in the data centre would have been no big deal. Each chassis holding our blade servers held four power supplies. Two power supplies were connected to one power distribution unit (PDU) and the other two were connected to a different PDU. The PDUs were cabinet models and each PDU was connected to a separate UPS. The whole data centre had a diesel generator for a backup
