Researchers Critique Today's Cloud Computing

Red Leader. writes "MAYA Design just released an excerpt from one of their forthcoming books as a white paper. The paper offers a different perspective on cloud computing. Their view is that cloud computing, as currently described, is not that far off from the sort of thinking that drove the economic downturn. In effect, both situations allowed radical experiments to be performed by gigantic, non-redundant entities (PDF). This is dangerous, and the paper argues that we should insist on decentralized, massively-parallel venues until we understand a domain very, very well. In the information economy, this means net equality, information liquidity, and radically distributed services (and that's pretty much the opposite of 'cloud computing' as described today). While there is still hope for computing in the cloud, it's hard not to wonder if short-term profits, a lack of architectural thinking about security and resilience, and long-term myopia aren't leading us in the wrong direction."
  • can someone tag "goodluckwiththat" please?
  • by Anonymous Coward
    "...decentralized, massively-parallel venues until..." until it's possible to slice, dice and securitize the cloud as investment opportunities. Oh wait...
    • Closer to the truth than you think. The entity that ends up either controlling or corrupting such a "cloud" wins, hands down. Unlimited access to everybody's plans, secrets, and even dirty little secrets - especially the dirty little secrets...I can think of a number of governments and corporations that must be positively drooling at the prospect.
  • by RichardDeVries ( 961583 ) on Saturday April 25, 2009 @04:44AM (#27711035) Journal
    The article really is about how the term 'cloud computing' as it is used now differs from how the researchers used it in the nineties.

    As you've seen, today's cloud computing is not at all the same thing as our vision of the P2P cloud

    I don't use the term at all. Putting things like network storage, Amazon's EC2, Google Docs and del.icio.us together is nonsensical. Yes, 'cloud computing' is a buzzword. I knew that. Maya's vision for a true P2P information network is nice, though, albeit somewhat too idealistic.

    • Re: (Score:3, Insightful)

      by Fuzzums ( 250400 )

      I totally agree. If you host your servers in one datacenter (cloud) you choose that one and not another one. And if that datacenter breaks down, you have a problem. Same with clouds.

      The article says a cloud works the same as a normal client-server application. On the outside that is true, but one big difference is that it should (if the application is designed well) scale rather nicely.

      Example: There is a website that compares different ideas of different parties and with a couple of questions it tells you that

      • by Decado ( 207907 ) on Saturday April 25, 2009 @06:15AM (#27711275)

        What puzzles me though is that the article tries to argue that on one hand the cloud concept is no different from client-server as it stands but on the other that the problem is the lack of interoperability.

        A random Microsoft server can no more interoperate with a random Oracle or Apple server than a cloud service can, so exactly how is it worse?

        I also think the term cloud computing is just a bit jumbled. I think of it as the Amazon model: you basically design your server as a VM and then multiple copies of that get instanced as needed. The strange thing is that this model is far more vendor-neutral than anything currently on the market. In theory there is no reason why any company with the hardware resources can't fire up 1000 copies of that VM if you choose to change vendor. In effect, cloud computing by that definition (which I believe is the most common) is no different from leasing servers from a hosting service at present; it just scales a lot more easily if you need it to.
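        The "design once, instance N copies on demand" model described above can be sketched in a few lines. This is a toy simulation, not any vendor's actual API; the `VMImage`/`Provider` names and `scale_to` method are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VMImage:
    """A machine image the customer prepares once (hypothetical model)."""
    name: str

@dataclass
class Provider:
    """Any vendor with enough hardware can instance copies of the same image."""
    name: str
    instances: list = field(default_factory=list)

    def scale_to(self, image: VMImage, count: int) -> int:
        """Instance (or tear down) copies of `image` until `count` are running."""
        self.instances = [f"{image.name}-{i}" for i in range(count)]
        return len(self.instances)

# The same image can be fired up on any vendor with spare hardware --
# in this model the lock-in, if any, is contractual rather than technical.
web = VMImage("webserver")
amazon = Provider("Amazon")
other = Provider("SomeOtherHost")
print(amazon.scale_to(web, 1000))  # 1000
print(other.scale_to(web, 1000))   # 1000
```

        The point of the sketch is that nothing in the image itself ties it to one provider, which is why this flavor of "cloud" resembles easier-scaling server leasing.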

        The economic comparison is equally false. If for example Amazon were to oversell their hardware by 10% then all that happens is the sites they host end up running a bit slowly and people move off the service. The whole company doesn't end up in negative equity and going broke because of that. That metaphor just seems so wrong in this situation that it pretty much makes no sense.

        If we were seeing a situation where web hosting and data center companies were merging wholesale while pursuing shaky business models then you could argue that there was a comparison but we are not. Cloud computing is a technical development, and until we see huge companies hosting the entire internet there is no real risk.

      • by smallfries ( 601545 ) on Saturday April 25, 2009 @06:22AM (#27711307) Homepage

        No, it is not the same with clouds, and that was the researchers' point. The term "Cloud" should not be a marketing buzzword for scalable. There should not be multiple "Clouds". The entire point of the Cloud as they refer to it was that it was amorphous and ubiquitous. One of their nicest points is that if you slap a brand name on it, then it is no longer the Cloud.

        The current crop of so-called "Cloud" services (like Amazon) is a decent enough attempt to provide this type of platform, but providing a highly redundant platform at a single point of failure is not the same thing as removing the single point of failure. Amazon's platform can still fall over (and did for several hours earlier this year). In real Cloud computing that would not be an issue, because rather than being tied to Amazon's servers / datacenter you could execute your code anywhere that provided the service.

        One of the problems with the real Cloud of the 90s was that it became too successful. So now we don't see it anymore. The architecture of the lower levels of the internet is now firmly entrenched: tcp/ip, dns etc. That is the real Cloud, a platform that we can write code for that really is ubiquitous. Services like Amazon are a logical progression of that platform, but they are not the logical endpoint. When there is a standardised API for code that will run on any of the major providers, accessing storage from any of the major providers, and able to replicate across them at will (chasing the cheapest prices) then the Cloud will really have arrived.
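        The "standardised API" idea above can be sketched as a minimal abstraction layer. All class and method names here are invented for illustration; no such cross-provider standard exists today, which is exactly the parent's point:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Hypothetical provider-neutral interface an application codes against."""

    @abstractmethod
    def price_per_hour(self) -> float: ...

    @abstractmethod
    def run(self, workload: str) -> str: ...

class ExampleProvider(CloudProvider):
    """Stand-in for any vendor implementing the shared interface."""
    def __init__(self, name: str, price: float):
        self.name, self.price = name, price

    def price_per_hour(self) -> float:
        return self.price

    def run(self, workload: str) -> str:
        return f"{workload} running on {self.name}"

def deploy_cheapest(providers: list[CloudProvider], workload: str) -> str:
    # "Chasing the cheapest prices": place the workload with the lowest bidder,
    # which only works because every provider speaks the same interface.
    return min(providers, key=lambda p: p.price_per_hour()).run(workload)

providers = [ExampleProvider("A", 0.12), ExampleProvider("B", 0.08)]
print(deploy_cheapest(providers, "myapp"))  # myapp running on B
```

        With an interface like this, replicating across providers at will becomes a scheduling decision instead of a rewrite.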

        Finally, just to answer your question in a more concrete and exact way:

        Coming back to your point (and that of the paper): you pick one cloud to host your application and then why would you want to be able to communicate with different clouds???

        It's important to drill through the marketing buzzwords that the paper is railing against. By definition there are no multiple clouds. So really what you are asking is: why should I want to talk to different providers within the same Cloud?

        Resilience. Partly to avoid downtime, partly to avoid vendor lock-in. If I develop my application for EC2 and then Amazon decides to get out of the scalable computing business, I'm screwed. It's a similar situation to DRM-locked media where the license servers shut down after the company stopped selling it.

        Another reason - competition. If I can move my application between Google and Amazon at will then I will pay the cheapest price for my cycles. If I have to recode my application to redeploy it then it will require a huge price differential before I do that.

  • by Jack9 ( 11421 ) on Saturday April 25, 2009 @04:48AM (#27711045)

    In our opinion cloud computing, as currently described, is not that far off from the sort of thinking that drove the economic downturn. In effect both situations sound the same... we allowed radical experiments to be performed by gigantic, non-redundant entities.

    This makes no sense. Even the deduction makes no sense, in context. TSK TSK those idiots who invented the mouse were engaging in risky behavior?! Let's demonstrate insight by mentioning an economic trend that has nothing to do with technical innovation? Why would radical experiments be conducted by redundant entities? I am scared to download the PDF, for fear it's got more insight that will frustrate and elicit vitriol from me.

    • by PolygamousRanchKid ( 1290638 ) on Saturday April 25, 2009 @05:16AM (#27711139)

      we allowed radical experiments to be performed by gigantic, non-redundant entities.

      The Japanese call it "Hentai."

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • Their concept of cloud computing, while compelling, is ultimately unworkable, due to nothing more complicated than market forces. If there is no money to be made in it, no corporate entity will become associated with it.

        So what? You've never heard of a co-operative before? Never heard of publicly funded works for the common good? Corporations aren't the only structure humans can use to collaborate. And looking at how things turned out, they don't seem to be a particularly good structure, either
        • by jdgeorge ( 18767 )

          Their concept of cloud computing, while compelling, is ultimately unworkable, due to nothing more complicated than market forces. If there is no money to be made in it, no corporate entity will become associated with it.

          So what? You've never heard of a co-operative before? Never heard of publicly funded works for the common good? Corporations aren't the only structure humans can use to collaborate. And looking at how things turned out, they don't seem to be a particularly good structure, either. They keep running things on the edge of collapse, calling it efficiency, and putting the extra resources up someone's nose or wherever the hell it disappears to.

          The concept of "cloud computing" described in the paper is, as far as I can discern, purely hypothetical. The model of theoretically superior cloud computing never existed, if I understand the paper correctly.

          This paper contains lots of hand-waving and hand-wringing, but no concrete comparisons of anything, and no data that would enable anything remotely resembling an objective analysis.

          I think there's a seed of an interesting discussion in there, but nothing more. Give us a proposal, a real analysis of what

    • What?

      If everyone builds applications that store their shit on Amazon, then Amazon becomes "too big to fail".

      That's the point.

    • by monk ( 1958 )

      I'll agree it's a stretch to compare EC2 to Citigroup, but the concern is valid. I think a better comparison would be to mainframe timeshares. There were several reasons buying time on a mainframe was less desirable than distributed computing. Open APIs are better than vendor lock-in for the consumer, end of story.

      But one comparison to the banks is worth making: imagine for a moment that Amazon or Google did take a financial hit or made a bad decision that led to them shutting down their grid offerings. It

      • by Jack9 ( 11421 )

        I believe rightscale does this. It can't possibly be the only service to abstract a bunch of cloud services, but it's the only one I've seen to be decent (in theory).

  • by OeLeWaPpErKe ( 412765 ) on Saturday April 25, 2009 @05:19AM (#27711145) Homepage

    allowed radical experiments to be performed by gigantic, non-redundant entities (PDF). This is dangerous,

    So we should break up the large banks, and replace them with an untold number of smaller, local banks that each follow their own strategy ?

    Letting them go bankrupt should have exactly this effect. Destroy the whole, sell off the pieces one by one to the highest bidder.

    And the alternative is propping up banks, running the extremely enormous risk that we've misidentified the cause of the current crisis... (too much regulation? too little? Obama? Bush? Clinton? CRA? The devil? Oil price? Energy prices? GW (not GW itself obviously, but the policies "to prevent it" are affecting the economy)? I'm not arguing for any one of them, I'm just saying there are probably good arguments for a lot of these factors)

    If we misidentify the cause of the current failure, or fail to act on it, even slightly, then we'll have an even bigger disaster on our hands in a few months/a few years ...

    So we should force the banks to follow a much more capitalist course, versus Obama's communist "fix" ... well one would have to admit that's a given.

    (I'm using the capitalist/communist distinction in its original sense: centrally (government) directed versus distributed decision making. With these (long-used) definitions, Obama's actions are squarely in the communist camp.)

    • It's not really a good choice. In an ideal world all banks would be small enough to fail. The shareholders would get wiped out and the depositors would get compensated. But we are not in an ideal world. The government doesn't have a choice between letting small banks fail or propping them up.

      Right now we have a system with a few huge banks that are all under-capitalised, because the majority of their real assets just dropped 30% in value and the majority of their unreal (leveraged) assets fell off of a cliff

    • While slightly off topic, it's one of the more even-handed and thought-provoking "armchair" analyses that I've seen on the subject.
      • by maxume ( 22995 )

        Except it is quite clear that excess leverage and dishonesty were huge contributors to this particular meltdown. There isn't really anything pernicious about regulating the leverage ratio of a deposit-backed institution (they are only ever technically solvent anyway), and the meltdown itself eliminated much of the credibility that made dishonesty so profitable.

        So given that much of the government action is focused on decreasing the leverage at the big financial institutions, that particular aspect of the p

    • > So we should break up the large banks, and replace them with an untold number of smaller,
      > local banks that each follow their own strategy ?

      I didn't view the recommendation that way. I took it as meaning that radical experiments (like the experiments in home mortgage valuation that triggered the financial meltdown), should be performed by smaller, redundant entities. That would prevent experiments like this from taking down key "too big to fail" parts of the structure.

      It would be okay for larger ent

    • So we should break up the large banks, and replace them with an untold number of smaller, local banks that each follow their own strategy ?

      Letting them go bankrupt should have exactly this effect.

      and throw out the baby with the bathwater... e.g. the entire economy, the middle-class way of life, and arguably the US as a stable nation-state.

      The issue of systemic risk can be handled through intelligent regulation. Set a maximum net worth for a financially and managerially independent subsidiary; this will make large corps more modular and less likely to suffer catastrophic failure, without impacting ownership or imposing wealth caps.

      So we should force the banks to follow a much more capitalist course, versus Obama's communist "fix" ... well one would have to admit that's a given.

      Because free market capitalism worked so well right? *cough 1929 cough* *

    • So we should break up the large banks, and replace them with an untold number of smaller, local banks that each follow their own strategy ?

      Letting them go bankrupt should have exactly this effect. Destroy the whole, sell off the pieces one by one to the highest bidder.

      The problem is that you can't do it that way, precisely because they are currently "too big to fail." Try this analogy: we're currently holding up the roof of a building with one big support beam. Since if that support beam fails, the buildin

    • The article recognizes there is room for large entities. The point is about a large-scale experiment that may fail and drag many other businesses down with it.
  • [...] it's hard not to wonder if short-term profits, a lack of architectural thinking about security and resilience, and long-term myopia aren't leading us in the wrong direction.

    This coming from a generation that still thinks web apps are cool.

  • by Anonymous Coward

    With power outlets at nearly every seat on newer planes going from New York to Asia, cloud computing is great. The only problem is when the guy in front of you tilts his seat back too far. Of course, there is no redundancy; one laptop is expensive and heavy enough, thank you.

  • If you are a business trusting mission critical data or computing to the cloud, then you need to verify how well the cloud handles these issues. Just because they have a big data center doesn't mean that they have redundant services spread across the data center and multiple data centers in case one is hit by a disaster. http://www.datacenterknowledge.com/archives/2009/03/23/carbonite-lawsuit-reveals-data-loss/ [datacenterknowledge.com]
  • Missing the point (Score:4, Insightful)

    by chabotc ( 22496 ) <(moc.liamg) (ta) (ctobahc)> on Saturday April 25, 2009 @06:25AM (#27711315) Homepage

    The article is missing the point that many of the organizations that offer a 'cloud solution' (Amazon, Google, Joyent, etc) have already been experimenting with cloud computing for a long friggin' time, and the massive parallel experimentation phase was "who can grow without breaking". Now they're offering what they learned from that as a service.

  • by Anonymous Coward

    A measure of the likely price impact of executing my information order?

    Bullshit! [bullshitbingo.net]

  • <del>cloud computing, as currently described,</del> <ins>Windows,</ins> is not that far off from the sort of thinking that drove the economic downturn
  • This article is classic self promotion in the vein of:
    1. choose concept
    2. label evil
    3. ???
    4. profit!

    At the risk of being constructive, if Maya actually spent time looking at what larger enterprises are planning over the next few years they would see the obvious architectural foundations for creating compute/storage/application pools emerging within enterprise data centres.

    Extending these across physically or geographically dispersed platforms (eg DR) will be well within our technical capabilities in the ti

  • by davide marney ( 231845 ) * on Saturday April 25, 2009 @07:42AM (#27711623) Journal

    OK, sure, the "cloud" buzzword is annoying and not very useful. That happens a lot in our wonderful business. But saying that EC2, GoogleApps, and Azure are all dead ends because they're the products of large corporations is a lot of fuss over nothing.

    No doubt the definition of "cloud computing" will evolve. For today, it primarily means not having to know any details about specific servers anymore, or worrying about how to connect to them. That's not a terribly original notion, but it is a big step forward.

    To those of us who remember life before ubiquitous networking, ubiquitous data protocols, and ubiquitous storage: we are hugely grateful for what little bit of cloud computing we've got.

  • So they are saying that the modern definition of "cloud computing" involves putting all your data in one place, so that it can be managed at a single point by dedicated IT professionals? Isn't that exactly what they used to call "mainframe computing"? Will we all be using 3270 terminals to access the cloud?
  • This is copied from my humble blog: http://thefortifiedhill.blogspot.com/2009/04/rejecting-cloud.html [blogspot.com]

    To understand the reason the Cloud is a bad idea, we need to look at the short history of the web since the late 90's. The best example to look at is e-mail, but the same arguments apply to most Cloud applications. In those days, you got email access through POP and later IMAP. The service you were paying for was just a reliable email server and an account on it. Some free sites generated ad revenue by inje

    • Third, the browser is a bad platform for these kinds of applications. The browser was never designed to be a host for dynamic applications of this complexity. There are numerous development and usability issues in web development. Almost all web work involves hacks and workarounds to accommodate situations where browsers don't adhere to the web standards. The browser has been contorted to fill a role that your computer environment should have filled all along.

      This can't be said often enough. 'Everything in the browser' has been a user interface disaster of the first order. I've posted this before on Slashdot, and argued the point before. Anybody who wants to read the full exchange can find it in my posting history, but in a nutshell, I say web-based 'apps' are crippleware. They sacrifice 20 years of API development that made local applications what they are, and some substantial chunk of what web-based is now is an unstandard uncommon bastardized retrofit.

  • I've just been reading overviews of ZFS, and this white paper uses a lot of the same buzzwords with what appear to be the same meanings. ZFS is all about reliability, scalability and freedom from lock-in to any device, just like the cloud. I understand cloud computing to have four components:

    1. a single ZFS pool which consists of every storage device in the world, using open-format data that are accessible to anyone, anywhere with proper access, and not available to anyone, anywhere who doesn't.

    2. applic

  • Google's Gmail interface is one side of a P2P exchange.

    Cloud metaphors in place of FUD, blue smoke and mirrors for purposes which remain, well, clouded.

  • While there is still hope for computing in the cloud, it's hard not to wonder if short-term profits, a lack of architectural thinking about security and resilience, and long-term myopia aren't leading us in the wrong direction.

    What? Of course those aren't leading us in the wrong direction --- they aren't LEADING anywhere! Science still LEADS technology development, and Slashdot has even run multiple stories about the Open Science Grid [opensciencegrid.org]. OSG, of course, is just ONE huge example of a set of massively dist

  • Comment removed based on user account deletion
  • I had my first true cloud experiment a good week ago. It saved the sites of dozens of customers hosted on that same server.

    Without clouds, bandwidth and servers would have had to be upgraded within 5 hours to withstand the load. With clouds, all the load was handled perfectly by putting the heaviest load (images and JavaScript) on the cloud and leaving the pure web serving to the server. It brought the load from unworkable to fully bearable. I've written about this [gowildchild.com] at my blog.
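    The offloading described above amounts to rewriting static-asset URLs to point at a cloud host while dynamic pages stay on the origin. Here's a minimal sketch; the hostname and URL pattern are illustrative assumptions, not any specific vendor's setup:

```python
import re

CLOUD_HOST = "https://cdn.example.net"  # hypothetical asset host

def offload_assets(html: str) -> str:
    """Rewrite relative src attributes for images and scripts to the cloud host,
    leaving all other links (the dynamic pages) untouched."""
    return re.sub(
        r'src="/(static/[^"]+\.(?:png|jpg|gif|js))"',
        rf'src="{CLOUD_HOST}/\1"',
        html,
    )

page = '<img src="/static/logo.png"> <script src="/static/app.js"></script>'
print(offload_assets(page))
```

    Because browsers then fetch the heavy assets directly from the cloud host, the origin server only handles the comparatively light dynamic requests.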

    This new form of clouding could for sure hurt

  • we should insist on decentralized, massively-parallel venues

    We did. It has worked astoundingly well. The centralized, massively-serial venue was called the mainframe. The decentralized, massively-parallel venue is called the desktop PC connected to others via LANs and the Internet. Isn't the paper preaching 'do more of the same'?

    I would argue that the 'until we understand it very very well' part hasn't come to pass yet... We haven't finished understanding network effects, or Youtube wouldn't be losing money. If Google, king of the leveraged network effect, c
