
Researcher: Interdependencies Could Lead To Cloud 'Meltdowns'

alphadogg writes "As the use of cloud computing becomes more and more mainstream, serious operational 'meltdowns' could arise as end-users and vendors mix, match and bundle services to various ends, a researcher argues in a new paper set for discussion next week at the USENIX HotCloud '12 conference in Boston. 'As diverse, independently developed cloud services share ever more fluidly and aggressively multiplexed hardware resource pools, unpredictable interactions between load-balancing and other reactive mechanisms could lead to dynamic instabilities or "meltdowns,"' Yale University researcher and assistant professor of computer science Bryan Ford wrote in the paper. Ford compared this scenario to the intertwining, complex relationships and structures that helped contribute to the global financial crisis."
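
To give a rough sense of the kind of instability the summary describes, here is a toy model (nothing in it comes from Ford's paper; the tenant counts, capacities, and thresholds are all invented): many tenants share two providers, and each applies the same naive reactive rule of "switch if my provider looks slow." Because every tenant reacts identically and simultaneously, the population flips back and forth in lockstep instead of settling.

```python
# Toy model of reactive load-balancing instability. All numbers are made up.

TENANTS = 100       # identical tenants, each placed on provider "A" or "B"
CAPACITY = 60       # requests a provider absorbs before latency degrades
THRESHOLD = 1.5     # tenants bail out when perceived latency exceeds this

def latency(load):
    """Latency is flat up to capacity, then grows with the overload."""
    return 1.0 + max(0, load - CAPACITY) / 10.0

placement = ["A"] * TENANTS              # everyone starts on provider A
for step in range(8):
    load = {"A": placement.count("A"), "B": placement.count("B")}
    lat = {p: round(latency(n), 2) for p, n in load.items()}
    print(f"step {step}: load={load} latency={lat}")
    # Every tenant applies the same reactive rule at the same instant.
    placement = [
        ("B" if p == "A" else "A") if lat[p] > THRESHOLD else p
        for p in placement
    ]
```

Adding jitter, hysteresis, or per-tenant thresholds damps the oscillation; that is roughly the class of coordination problem between independently built reactive mechanisms that the paper is pointing at.
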
Comments:
  • by houstonbofh ( 602064 ) on Saturday June 09, 2012 @11:27PM (#40272013)
    If you have a critical service, have it at more than one host... That way when AWS has a bad hair day, you are still up.

    Or, have your entire business totally dependent on someone else. (Sounds kinda scary that way, don't it?)
  • by stephanruby ( 542433 ) on Saturday June 09, 2012 @11:34PM (#40272045)

    The analogy the author uses doesn't work.

    A better analogy would be the airline industry. The airline industry likes to overbook seats it may not have because it's always trying to optimize its profit margin.

    The same will happen with cloud services. Cloud services will always try to optimize their own profit margins, at the risk of triggering significant outages.

    And I don't see what this has to do with the financial crisis at all.

  • by c0lo ( 1497653 ) on Saturday June 09, 2012 @11:57PM (#40272119)

    We live in an age where information is distributed, even if only statistically. (Hell, I made a fake Facebook account and somehow they found my mom, and she is nowhere close to me.) A meltdown of information can't happen unless there is a worldwide meltdown of power. We have backups, but also ways of statistically restoring those backups.

    Redundancy helps, but it is not bullet-proof. A good chunk of it is the "topology" in which that redundancy is engaged when failures occur (e.g. we had cascading blackouts in the past even though the energy network had enough total capacity to serve all consumers).

    Have a look at cascading failures [wikipedia.org].

  • by pitchpipe ( 708843 ) on Sunday June 10, 2012 @12:02AM (#40272137)

    A better analogy would be the airline industry.

    I think a better analogy is the power grid. The system hits a peak, one line goes down, others try to compensate and become overloaded, another can't handle the load and goes down, and behold: cascading failures.
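
To make that grid analogy concrete, here is a minimal sketch with invented line capacities and loads: one line trips at peak, its load spreads over the survivors, and each newly overloaded line trips in turn.

```python
# Hypothetical capacities and loads, purely illustrative, to show the shape
# of a cascading failure: when a line trips, its load is shed onto the
# survivors, which can push the next-weakest line past its own limit.

capacity = {"line1": 100, "line2": 90, "line3": 80, "line4": 70}
load     = {"line1":  85, "line2": 70, "line3": 60, "line4": 50}

def trip(name):
    """Take a line out and spread its load evenly over the survivors."""
    shed = load.pop(name)
    capacity.pop(name)
    for survivor in load:
        load[survivor] += shed / len(load)

trip("line1")                      # the initial failure at peak demand
while True:
    overloaded = [n for n in load if load[n] > capacity[n]]
    if not overloaded:
        break
    worst = overloaded[0]
    print(f"{worst} carrying {load[worst]:.0f} against a limit of {capacity[worst]} -- trips")
    trip(worst)
print("lines still in service:", ", ".join(load) or "none (blackout)")
```
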

  • by Dan667 ( 564390 ) on Sunday June 10, 2012 @12:05AM (#40272147)
    I think it is funny that lessons learned years ago with mainframes are being presented as new by just changing the word mainframe to cloud.
  • In other words (Score:2, Insightful)

    by Anonymous Coward on Sunday June 10, 2012 @12:24AM (#40272197)

    Unmanaged systems are hard to manage.

  • by plover ( 150551 ) * on Sunday June 10, 2012 @12:39AM (#40272227) Homepage Journal

    I think by "financial crisis" he meant "a minor market crash due to autotrading algorithms", and not the real crisis being caused by thieves running trillion dollar banking, mortgage, and insurance scams.

    The point is "if you use similar automated response strategies as a large set of other similar entities, you could all suffer the same fate from a common cause."

    Supposedly a market crash was triggered by autotrading algorithms that all tended to do exactly the same thing in the same situations. So when the price of oil shot up (or whatever the trigger was), all those algorithms said "sell". As all the sell orders came in, the market average dropped, and the next set of algorithms said "sell moar". So there was a cascade, because so many systems had identical responses to the same negative stimulus. Think of those automated trades as being akin to a "failover" IT system: if host X is failing, automatically shift my service load this way.

    So that's the analogy the author is trying to make with respect to systems that depend on automated recovery machinery like load balancers: if response time is too high at hosting vendor X, my automated strategy is to fail over to hosting vendor Y. And perhaps 500 large sites all have a similar strategy. Now let's say that vendor X suffers a DDoS attack because they host some site that pissed off Anonymous. So now all these customer load balancers see the traffic slowing down at X, and they simultaneously reroute all app traffic to vendor Y in response. Vendor Y then gets hammered by the new load, and the load balancers shift the traffic elsewhere. Now two main hosting providers are down while they try to clean up the messes, and several smaller providers are seeing much bigger customers than usual using them as tertiary providers, and they start straining under the load as well, causing their other clients to automatically shift.

    And if that isn't exactly what plays out next year, might not something similar happen with payment gateways, or edge content delivery systems, or advertising providers?

    It's a cascade of failures due to automated responses that's remarkably similar to the electrical grid overloads that caused the Northeast blackout of 2003. The author's point is "we don't know precisely what bad thing might happen within this particular ecosystem, but there is significant risk because we've seen complex interdependent systems have similar failures before."
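
A toy run of the scenario in that comment (the provider names, capacities, and SLA threshold are all invented): every customer uses the same "fail over to the next host when response time breaches the SLA" rule, so a DDoS on one provider marches the entire customer base through the remaining providers in lockstep.

```python
# Invented providers and numbers; the point is only that identical failover
# policies applied simultaneously turn one provider's bad day into everyone's.

providers = {"X": 500, "Y": 300, "small1": 200, "small2": 200}  # capacity
order = list(providers)                  # everyone's preference list is identical
customers = {i: "X" for i in range(500)} # 500 large sites, all starting on X
ddos_load = {"X": 400}                   # the attack that kicks things off

def response_ms(provider, load):
    cap = providers[provider]
    return 100 if load <= cap else 100 * load / cap   # degrades past capacity

for tick in range(5):
    load = dict(ddos_load)
    for p in providers:
        load.setdefault(p, 0)
    for site in customers.values():
        load[site] += 1
    rt = {p: round(response_ms(p, load[p])) for p in providers}
    print(f"tick {tick}: load={load} response_ms={rt}")
    # The identical failover rule, applied by every customer at once.
    for site_id, p in customers.items():
        if rt[p] > 150:                               # SLA breach
            customers[site_id] = order[(order.index(p) + 1) % len(order)]
```

The detail that matters is not the numbers but the correlation: hundreds of independent operators making the same locally sensible decision at the same moment.
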

  • by martin-boundary ( 547041 ) on Sunday June 10, 2012 @12:42AM (#40272243)
    There's a limited number of cloud hardware providers on the internet, and the rest are middlemen. It's useless to diversify yourself across the middlemen; they will all be affected when the common underlying hardware provider has an issue. Thus there's a limit to the reliability that can be achieved, irrespective of how much mixing and matching is performed at the "business end".

    Diversification only "works" when the alternatives are provably independent. That's not true in a highly interconnected and interdependent world, which is TFA's point, I believe.
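
A back-of-envelope illustration of that independence point, using made-up failure probabilities: replicating across two "different" front ends buys almost nothing if both resell capacity from the same underlying provider.

```python
# Made-up probabilities, only to show how a shared dependency dominates.

p_front_end_down = 0.01    # chance a given front-end vendor fails on its own
p_backbone_down  = 0.005   # chance the shared underlying provider fails

# Truly independent alternatives: both front ends must fail together.
independent = p_front_end_down ** 2

# Two middlemen on the same backbone: the shared backbone can take both
# out at once, on top of the (rare) case where both fail independently.
shared = p_backbone_down + (1 - p_backbone_down) * p_front_end_down ** 2

print(f"P(total outage), independent providers: {independent:.4%}")  # ~0.01%
print(f"P(total outage), shared backbone:       {shared:.4%}")       # ~0.51%
```
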

  • by dbIII ( 701233 ) on Sunday June 10, 2012 @12:47AM (#40272259)
    It's a leap year, February 28, and all over the world, completely out of the blue (or azure if you prefer) cloud clusters crash as the local clocks swing around to midnight, then stay down all day.
    Still, it's three nines of uptime when it's spread out over a few years :)

    A highly interdependent system is only as reliable as the QC on the weakest link. Who would have thought that somebody from a company that had a lot of embarrassing press about a leap year stuffup would make such a stupid and obvious mistake four years later? That's the cloud, where even the biggest names still don't care anywhere near as much as you would about your own systems, and so don't pay enough attention to detail.
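
For anyone who missed the reference, the generic shape of the leap-day bug being alluded to looks like this (an illustrative sketch, not any vendor's actual code): computing "one year from now" by bumping the year field works every day except February 29.

```python
# A generic sketch of the classic leap-day trap: deriving an expiry date by
# bumping the year works 1,460 days out of 1,461 and then throws on Feb 29.

from datetime import date

def naive_expiry(issued):
    """'Valid for one year' computed the tempting-but-wrong way."""
    return issued.replace(year=issued.year + 1)

print(naive_expiry(date(2012, 2, 28)))       # 2013-02-28, fine
try:
    print(naive_expiry(date(2012, 2, 29)))   # there is no Feb 29 in 2013...
except ValueError as err:
    print("leap-day issuance fails:", err)   # day is out of range for month
```
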
