Communications Google Technology

GMail Experiences Serious Outage 408

JacobSteelsmith was one of many readers to note an ongoing problem with Gmail: "As I type this, GMail is experiencing a major outage. The application status page says there is a problem with GMail affecting a majority of its users. It states a resolution is expected within the next 1.2 hours (no, not a typo on my part). However, email can still be accessed via POP or IMAP, but not, it appears, through an Android device such as the G1." It's also affecting corporate users: Reader David Lechnyr writes "We run a hosted Google Apps system and have been receiving 502 Server Error responses for the past hour. The unusual thing about this is that our Google phone support rep (which paid accounts get) indicated that this outage is also affecting Google employees as well, making it difficult to coordinate."
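
For anyone who wants to verify the IMAP route for themselves, here is a minimal Python sketch (the address and password below are placeholders, IMAP has to be enabled in the account's settings, and Gmail's IMAP endpoint is imap.gmail.com over SSL):

    import imaplib

    # Placeholder credentials; Gmail's documented IMAP endpoint is imap.gmail.com, port 993 (SSL).
    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.login("user@example.com", "password-goes-here")
    status, counts = imap.select("INBOX", readonly=True)  # open the mailbox without marking anything read
    print("INBOX reachable, %s messages" % counts[0].decode())
    imap.logout()

If that prints a message count while the web interface is still returning errors, the outage is confined to the front end.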
This discussion has been archived. No new comments can be posted.

  • Anti-Slashdot Effect (Score:5, Interesting)

    by ink ( 4325 ) * on Tuesday September 01, 2009 @05:24PM (#29278415) Homepage

    Seems to be fine at the moment. Is this the first anti-slashdot-effect?

  • Re:Indeed (Score:5, Interesting)

    by HangingChad ( 677530 ) on Tuesday September 01, 2009 @05:51PM (#29278769) Homepage

    So much for handing your email over to Google

    We handed our mail over and it's the first time I've ever had a problem with them as a corporate mail provider. Almost two years. There may have been one other short outage, but I don't remember it being during business hours.

    I doubt you could run a mail server more reliably. And, for the difference in cost, I'd stay with Gmail.

  • my domain via gmail (Score:5, Interesting)

    by The Yuckinator ( 898499 ) on Tuesday September 01, 2009 @06:00PM (#29278857)
    I drank the Google kool-aid about six months ago and moved my personal domain's mail over to the free gmail service. I've been extremely happy with it ever since.

    I think it's interesting that I couldn't access my personal domain gmail during this outage, but my @gmail.com account worked without issue.
  • by jesdynf ( 42915 ) on Tuesday September 01, 2009 @06:05PM (#29278901) Homepage

    ever serviced a discovery litigation from google?

    No. Have you?

  • Re:Wow (Score:3, Interesting)

    by JumpDrive ( 1437895 ) on Tuesday September 01, 2009 @06:16PM (#29279039)
    I doubt it. Once you get out of high school and work in the real world, you'll find that just because something happens, people don't always get fired.
    Why? Because usually you'd be firing one of your best employees, a 20-percenter, one of the ones who actually does the work and knows what is going on. And even if it weren't a 20-percenter, you don't want to send out the message that if you do something and it causes a problem, you're going to get fired.

    I can hear it now, "Remember Bob, he was like us, then one day he went out and did something, something went wrong, so they shit canned him. That was five years ago and we haven't done anything since."
  • by justleavealonemmmkay ( 1207142 ) on Tuesday September 01, 2009 @06:28PM (#29279145)

    Working for a mobile phone company.

    I believe our operations engineers all have spare SIM cards, not from the competition, but from foreign operators.

  • by ryanvm ( 247662 ) on Tuesday September 01, 2009 @06:30PM (#29279169)
    Bullshit.

    I have been the full time sysadmin responsible for the mail server. I have had the job of keeping the mail service up. It's not cheap. You need redundant networking, redundant servers, redundant storage, redundant staff, and the glue to make sure it all works. For anyone spending less than a couple hundred thousand a year on IT, it's damn near impossible to beat Google's uptime for hosted mail.

    As for your other concern about getting the data out of Gmail - you use the same protocols the rest of the Internet uses - IMAP/POP and SMTP. Not rocket science.
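
    As a rough illustration of what "getting the data out" looks like in practice (a sketch, not a turnkey backup tool: the credentials and output filename are placeholders, and IMAP must be enabled on the account):

        import imaplib
        import mailbox

        # Placeholders: swap in real credentials before running.
        imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        imap.login("user@example.com", "password-goes-here")
        imap.select("INBOX", readonly=True)

        local = mailbox.mbox("gmail-backup.mbox")      # a plain mbox file any mail client can import
        typ, data = imap.search(None, "ALL")
        for num in data[0].split():
            typ, msg_data = imap.fetch(num, "(RFC822)")
            local.add(msg_data[0][1])                  # raw RFC 822 message bytes straight into the mbox
        local.flush()
        imap.logout()

    Point it at each label/folder you care about and you have a local copy in a completely standard format.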

  • by Atario ( 673917 ) on Tuesday September 01, 2009 @06:41PM (#29279253) Homepage

    I've never really understood why so many Slashdotters have this attitude about hosted services. Perhaps they are local IT folks for smaller companies, and fear for their jobs?

    It's the same reason Slashdot has:

    • such a large component of libertarians
    • every tech/science story hit with a slew of +5ed comments questioning the basic underlying premise of the research and/or machinery
    • every story about a study tagged with "correlationisnotcausation"
    • etc.

    ...and that reason is that code-hackers, having succeeded in something most people find impossible, go on to generalize that they must simply be hypercompetent, and therefore anything done by others must be questionable by comparison. Thus, hosted services, being run by mere mortals, can't be as good as something set up by one's own brilliant self.

  • Re:Indeed (Score:3, Interesting)

    by paulius_g ( 808556 ) on Tuesday September 01, 2009 @07:36PM (#29279777) Homepage

    Why is the parent modded funny? I think it's an honest comment. I've been using Gmail for 5 years now (precisely since September 2004) and this is only the second outage I've experienced that prevented me from logging in.

    The only thing that bugs me is the Gmail user interface. Sometimes it doesn't record my actions (such as reading messages) and has an indefinite "Loading..." message which forces me to reload the whole page. But, this could also be something related to Safari.

  • by internic ( 453511 ) on Tuesday September 01, 2009 @07:57PM (#29279965)

    I think it's just the psychological impact of the lack of control. It's the same reason that people fear flying more than driving (one of the reasons, anyway) or that it's much scarier when you're the passenger during a dangerous maneuver than if you are driving the car and doing the same thing yourself.

  • Re:Indeed (Score:4, Interesting)

    by im_thatoneguy ( 819432 ) on Tuesday September 01, 2009 @08:11PM (#29280083)

    There's the trouble of overlap however:

    We've had longer outages locally... but we're a small company, so when Exchange went out it took everything out with it: Exchange, the domain and, by extension of the domain, the file servers.

    While we may have had 3-4 hours or so of domain-related outages this year, they were times when we couldn't do anything anyway. We've never had JUST our Exchange go out, since it's on the same system as our domain.

    If Gmail goes out for 2 hours and we have 4 hours of general downtime per year, then Gmail (despite being more reliable) actually increases our email downtime by 50% over hosting locally.
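
    Spelling that arithmetic out with the parent's own hypothetical numbers:

        # 4 h/year of local outages take email down either way (no network, no Gmail).
        general_outage_hours = 4
        gmail_only_hours     = 2      # Gmail down while the local network is fine

        local_email_downtime  = general_outage_hours                      # 4 h
        hosted_email_downtime = general_outage_hours + gmail_only_hours   # 6 h

        increase = (hosted_email_downtime / local_email_downtime - 1) * 100
        print(f"{increase:.0f}% more email downtime")                     # prints: 50% more email downtime

    The Gmail-only hours are pure addition on top of the downtime you were already eating, which is the whole point.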

  • by Daniel Boisvert ( 143499 ) on Tuesday September 01, 2009 @09:37PM (#29280671)

    Umm, an hour of downtime doesn't mean your data is gone. I'll also echo earlier comments -- locally hosted email generally has more problems, as no company but the largest enterprise has the same magnitude of IT equipment and experience as Google.

    I've never really understood why so many Slashdotters have this attitude about hosted services. Perhaps they are local IT folks for smaller companies, and fear for their jobs?

    It's more than that. There are more moving and breakable parts between you and a hosted provider than between you and an internal service, which changes the math a bit.

    Some of the single points of failure are shared between both approaches too, so they're a wash for a small implementation. If you're a small company and your non-redundant core switch fails, your email is down either way, because you can't get to your email server or to your hosted provider, no matter how redundant your provider is. There are various components for which this is true, which helps to mitigate the benefit of a hosted service where your mail server is replaced by a massively redundant cluster.
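
    To put rough numbers on the shared-single-point-of-failure argument (the availability figures below are made up purely for illustration; components in series multiply):

        # Made-up availabilities, just to show that the shared switch dominates both paths.
        core_switch  = 0.999      # shared by both options
        local_server = 0.999
        isp_link     = 0.999      # only in the path for the hosted option
        hosted_mail  = 0.9999

        local_path  = core_switch * local_server
        hosted_path = core_switch * isp_link * hosted_mail

        for name, avail in (("local", local_path), ("hosted", hosted_path)):
            print(f"{name}: {avail:.4%} available, ~{(1 - avail) * 8760:.1f} h/year down")

    With numbers like these the two options land within about an hour a year of each other, because the non-redundant switch in front of everything sets the ceiling.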

    You also have additional dependencies. If you're a small business with a single T1 to the internet, let's say, and the telecom bunker outside your building catches fire and you lose internet access, you've got problems. With a local email service, internal mail works, but you can't send email to or receive email from external users (let's pretend you don't have an offsite secondary MX or an outbound mail spool where this stuff queues, mostly invisibly to users). For organizations that are hugely dependent on internal email, that's quite a bit better than having no access to your (hosted) email at all.
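
    For what it's worth, the "queues, mostly invisibly to users" part is just a retry loop over a spool. A toy sketch (pure Python with delivery stubbed out; not a real MTA):

        import random
        import time
        from collections import deque

        spool = deque()                    # outbound messages waiting for the link to come back

        def try_deliver(msg):
            """Stand-in for a real SMTP handoff; pretend the link is down on ~70% of attempts."""
            return random.random() > 0.7

        def submit(msg):
            """Users just hit send; the spool hides any outage from them."""
            spool.append(msg)

        def flush_spool():
            """One queue run: deliver what we can, requeue the rest for the next pass."""
            for _ in range(len(spool)):
                msg = spool.popleft()
                if not try_deliver(msg):
                    spool.append(msg)      # still down, keep it queued

        for i in range(3):
            submit(f"message {i}")
        while spool:
            flush_spool()
            time.sleep(1)                  # a real MTA backs off for minutes, not seconds
        print("spool drained; everything delivered")

    Real MTAs (sendmail, Postfix, and friends) already do exactly this, which is why a short upstream outage usually just delays outbound mail rather than bouncing it.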

    Additionally, you get concerns about "If we outsource this today and we have problems in 2 years, will we still have somebody here who can design/build/find a better solution, or will it cost us a fortune in consultants if we let the in-house expertise lapse?".

    You also have support issues. Google specifically is well-known for only doing things that can be automated (and doing them well, mind you). Support isn't always one of those things, and small companies are well-acquainted with getting the shaft from vendors because your business isn't worth enough for them to care (check out the quality differences between the enterprise and SMB versions of various products for examples). Given the importance of email to most organizations today, folks are a bit reluctant to hand it over to an outsider with minimal financial incentive to devote resources to their specific problems.

    If you're a 5-person business, outsourcing email is likely a good idea, but once you start getting into the teens and twenties or so, it's probably worth a look at your particular circumstances before continuing that assumption.

    Full disclosure: I'm currently a local IT guy for a smaller company, with enough on my to-do list that if I thought outsourcing email would work well for my users and save us time & money, I'd be all over it.

  • by petrus4 ( 213815 ) on Tuesday September 01, 2009 @10:18PM (#29280933) Homepage Journal

    ...and as someone else wrote, we're now seeing the reason why.

    Cloud computing is exactly the kind of buzzword-laden, idiotic fad that tends to be loved both by corporate marketing droids and technophobic Baby Boomers, both of whom have roughly equivalent levels of intelligence.

    All it is going to take is a single major, successful DDoS attack against Google or some other cloud provider, and the cloud will go to the memetic rubbish bin where it belongs.

    If you're one of the intellectual cripples who has difficulty understanding why cloud computing is a bad idea, ask yourself the question of whether or not you're going to be able to access your email if Google goes down, or if web access outside your ISP's own subnet does.

    Yes, I have a Gmail account, but it is a convenience linked to my WoW blog, and a spam trap at best. It isn't something which I rely on for anything truly important, because I'm old enough to remember decentralised email, and to have more fucking sense.

    Darn fool kids; they never learn. We keep seeing the same old mistakes being made, over and over and over again. I'm reminded of the old Frantics [youtube.com] song, here.

    Dumb terminal/"cloud" computing? Boot to the head. Creating a single, centralised point of failure which is just waiting for a DDoS attack. Genius.

    XML/binary format RPC in GUIs? Boot to the head. Opaque, undiscoverable, uneditable, and totally unnecessary, except in the minds of marketing suits, or post-pubescent CS grads who've been fed corporate Kool-Aid. Use sockets, morons.

    Binary subpackaging of libraries? Boot to the head. Given how cheap bandwidth and disk space are these days, any claim that it saves space is totally bogus, and the only thing it does do is add needless complexity and reduce reliability. Put the whole thing in a single package, and stop thinking you're smart for doing otherwise. You're not.

    Writing opaque package management in C, with a dep list a mile long, when a system written in shell, awk, and using the graph/dep management ability of Make itself would probably work more effectively? Boot to the head. Although sorry; I keep forgetting that Awk isn't considered a "real" programming language. You might want to let the guys using it for AI research know that, though; they could forget otherwise.

    Being a snot-nosed, latte-sipping, yuppie CS graduate who thinks they know how to code, and then spawning atrocities like Dbus? Boot to the head. The kernel hardware notification system and udev work perfectly well by themselves. Adding more daemons when you don't need to simply adds unnecessary complexity, which again potentially reduces robustness.

    Writing opaque, non-standard, dynamic GUI "automounter" garbage for Crapbuntu instead of teaching users how to edit /etc/fstab? Boot to the head. Use things which are easily locatable, and written in text which can likewise be edited easily. Then again, I guess I can't expect the Stallmanite 14 year olds who code Linux's userland these days to know about real UNIX philosophy, now can I?

    Causing GRUB to default to "quiet splash" in Crapbuntu, so that when the boot process inevitably fails due to the distro shipping BitTorrent servers by default, the user can't see the daemon that is causing the boot to fail, and is thus left with a totally opaque, unfixable black screen they can't recover from? Boot to the fucking head, x100.

  • by Fencepost ( 107992 ) on Tuesday September 01, 2009 @10:47PM (#29281109) Journal
    Most of the long distance in the country dropped that day, triggered by 4ESS switches hitting a bug, detecting it, and going offline (with their load shifted to other switches). The increased load made the bug more likely to be hit, so those switches would in turn drop and shift load away (sometimes back to the originator). The result: 9 hours of basically no long-distance service.
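
    A toy model of that kind of cascade (purely illustrative; this is not the actual 4ESS logic, just a sketch in which the chance of tripping grows with load and a failed switch dumps its load on the survivors):

        import random

        random.seed(1990)
        switches = {i: 1.0 for i in range(10)}   # ten switches, each carrying one unit of load

        def trip_probability(load):
            """Made-up curve: the busier a switch, the likelier it is to hit the bug."""
            return min(0.9, 0.05 * load)

        for step in range(20):
            failed = [i for i, load in switches.items() if random.random() < trip_probability(load)]
            for i in failed:
                orphaned = switches.pop(i)
                if switches:                      # shed the failed switch's load onto the survivors
                    share = orphaned / len(switches)
                    for j in switches:
                        switches[j] += share
            print(f"step {step}: {len(switches)} switches up")
            if not switches:
                break

    Each failure raises the load, and therefore the failure odds, on everything left standing, which is why the real event fed on itself for hours.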

    And just think, it was a year and a half before Berners-Lee announced the "World Wide Web" and Linus announced that he was working on this "Linux" thing.
  • by Nefarious Wheel ( 628136 ) on Tuesday September 01, 2009 @11:49PM (#29281435) Journal

    I would look at it this way. There is absolutely no excuse for 24 Hour Fitness to have a single hour where they do not have functioning treadmills.

    I would have to agree. Although how they use the treadmills in fault tolerant arrangements is important. Do they simply route people to a working treadmill, for example, when one fails? Or do they operate in an active-active clustering arrangement, where a person uses two treadmills simultaneously and fails over to a single treadmill when one stops? I imagine co-location of the treadmills would be a key success criterion in the latter configuration.
