Why Auto-Scaling In the Cloud Is a Bad Idea

George Reese writes "It seems a lot of people are confusing the very valuable benefit that cloud computing enables — dynamically scaling your infrastructure — with the potentially dangerous ability to scale your infrastructure automatically in real-time based on actual demand. An O'Reilly blog entry discusses why auto-scaling is not as cool a feature as you might think."
  • by Yvan256 ( 722131 ) on Saturday December 06, 2008 @05:49PM (#26015665) Homepage Journal

    I think auto-scaling the clouds based on actual demand is a really great idea. I think farmers would really like that feature, in fact.

    Wait, what clouds?!

    • Re: (Score:3, Funny)

      Wait, what clouds?!

      Cumulo-mumbo-jumbo-nimbus clouds maybe?

    • by u38cg ( 607297 )
      I just find it funny that an article on how to deal with slashdotting is on Slashdot. Sadly, it has failed to fall over, denying us what would have been a fine piece of irony.
  • Like cellphones (Score:5, Insightful)

    by Tablizer ( 95088 ) on Saturday December 06, 2008 @05:50PM (#26015671) Journal

    Without a hard limit, some people run up big cell-phone bills. If you are forced to stop, plan, and budget when you exceed resources, then you have better control over them. Cloud companies will likely not make metering very easy or cheap because they *want* you to get carried away.

    • Of course cloud companies want you to need to scale.

      Also, most successful companies want to scale too. Don't both the customer and the provider win in that scenario? As long as growth = profit for the customer, I mean.

      • Re:Like cellphones (Score:5, Interesting)

        by lysergic.acid ( 845423 ) on Saturday December 06, 2008 @09:33PM (#26016981) Homepage

        i think the author's point is that dynamic scaling should always be planned; partly because it results in better understanding of traffic patterns, and thus better long-term capacity planning, and partly because you need to be able to distinguish between valid traffic and DDoS attacks. still, i think the author is overstating it a bit. one of the main draws of cloud computing to smaller businesses is the ability to pool resources more efficiently through multitenancy, part of which is precisely due to auto-scaling. without the cloud being able to dynamically allocate resources to different applications as needed in real-time (i.e. without human intervention), there isn't much of an advantage to sharing a cloud infrastructure over leasing dedicated servers.

        for instance, let's say there are 10 different startups with similar hosting needs, and they can each afford to lease 10 application servers on their own. so using traditional hosting models they would each lease 10 servers and balance the load between them. but after a few months they realize that 75% of the time they only really need 5 servers, and 20% of the time they need all 10, but an occasional 5% of the time they need more than 10 servers to adequately handle their user traffic. this means that in their current arrangement, they're wasting money on more computing resources than they actually need most of the time, and yet they still have service availability issues during peak loads 5% of the time (that's over 2.5 weeks a year).

        all 10 of these startups share a common problem--they each have variable/fluctuating traffic loads severely reducing server utilization & efficiency. luckily, cloud computing allows them to pool their resources together. since the majority of the time each startup needs only 5 servers, the minimum number of virtual servers their cloud infrastructure needs is 50. and since each startup needs double that 20% of the time, 10 extra virtual servers are needed (shared through auto-scaling). but since each startup needs more than 10 servers for about 2.5 weeks each year, we'll add another 15 extra virtual servers. so all in total, the 10 startups are now sharing the equivalent of 75 servers in their cloud.

        by hosting their applications together on a cloud network, each startup not only has their hosting needs better met, but they also stand to save a lot of money because of better server utilization. and each startup now has access to up to 30 virtual servers when their application requires it. this kind of efficiency would not be possible without a cloud infrastructure and auto-scaling.
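
        To make the arithmetic above concrete, here is a rough Python sketch of the same pooling calculation. The duty cycles and server counts are the hypothetical figures from this example, not measurements:

          # Pooled-capacity arithmetic from the example above (hypothetical numbers).
          startups = 10
          baseline = 5            # servers each startup needs ~75% of the time
          shared_peak_extra = 10  # shared servers covering the 20%-of-time doubling
          burst_extra = 15        # shared headroom for the rare >10-server spikes

          dedicated_total = startups * 10   # old model: each startup leases 10 servers
          pooled_total = startups * baseline + shared_peak_extra + burst_extra

          print(f"dedicated: {dedicated_total} servers, pooled: {pooled_total} servers")
          print(f"peak servers available to any one startup: "
                f"{baseline + shared_peak_extra + burst_extra}")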

        • Along the same lines, the whole cloud computing thing really got started because companies like Amazon went out and bought all the servers they needed to hit their peaks, which is a lot of overkill 50% of the time. So they started renting the excess out to the little people.

          The thing the article's author misses is that regular small-medium businesses don't want to be web experts and they don't have hundreds of thousands of dollars to pay for web server farms or analysis. They want to get some usage numbers for

    • Re: (Score:3, Insightful)

      by Cylix ( 55374 )

      Actually, metering is cheap and easy, simply because they *need* to meter your traffic. Providers whose customers have real infrastructure requirements, rather than a great many clueless users, will generally have to be honest to keep your business.

      Loyalty is based on performance and meeting customer expectations.

      Phone companies get away with this crap because they are either a monopoly or engage in lengthy customer lock-in. It also doesn't help that nickel-and-diming the customer is pretty much the norm.

      Ec2 and other retail outle

    • Re: (Score:2, Interesting)

      by enovikoff ( 1425983 )
      As a cloud computing provider, I actually have no interest in having my customers suddenly run up huge bills. The reason is that, as the article said, something is most likely wrong somewhere, which means that as their services provider, I'll also be responsible for figuring it out :) I can't speak for Amazon, which has a more hands-off model, but my success is invested in the success of my customers, so I won't sit idly by while they waste their money. However, looking at my company's balance sheet, we
    • Cloud companies will likely not make metering very easy or cheap because they *want* you to get carried away.

      I've only used Amazon EC2, but I can tell you for a fact that they make it very easy for you to know where you stand. And yes, they also have hard limits.

      With EC2, you are limited to 20 concurrent instances unless you request more. The cost of running 20 of their highest-priced servers is $18.00/hr. So as long as your auto-scaling system pings you when your resources go over your comfort threshold, you should be able to get yourself to a computer, cellphone, whatever, and override what your auto-scaler d

  • by Anonymous Coward on Saturday December 06, 2008 @05:54PM (#26015691)

    The author states that one reason he doesn't like autoscaling is that it can take a while to take effect. That's bad technology, waiting for someone to come along and improve it.

    He also says he doesn't like autoscaling even with limiters. Autoscaling with limiters makes sense to me, especially if the limits are things along the lines of 'don't spend more than XXX over time Y'.

    Finally, not using autoscaling because you might get DDoS'd is just stupid. You lose business/visitors. That's worse than paying more to avoid being taken down, because your reputation gets hurt AS WELL AS your business.
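
    As a sketch of what such a limiter could look like (the price, budget and window below are hypothetical placeholders, not anyone's real policy):

      # Sketch of a "don't spend more than $X over time Y" governor for an
      # auto-scaler.  The hourly rate, budget and window are placeholders.
      import time

      HOURLY_RATE = 0.10      # $ per instance-hour
      BUDGET = 50.00          # max $ the auto-scaler may commit per window
      WINDOW = 24 * 3600      # the time Y, in seconds

      launches = []           # timestamps of auto-scaler launches

      def may_launch(now=None):
          """True if one more instance keeps projected spend under the cap.

          Conservatively assumes every recent launch runs for the full window.
          """
          now = now or time.time()
          recent = [t for t in launches if now - t < WINDOW]
          projected = (len(recent) + 1) * HOURLY_RATE * (WINDOW / 3600)
          return projected <= BUDGET

      def record_launch():
          launches.append(time.time())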

    • There are better ways to deal with too much traffic than auto-scaling.

      One way is to use caching intelligently. This will allow you to use much less in the way of disk I/O resources, so your bottleneck will be one of {CPU, RAM, bandwidth}. CPU and RAM are very cheap for the amount you need to meet any reasonable demand, compared to I/O throughput. Bandwidth in a cloud (specifically EC2/S3) is virtually unlimited, though you'll pay for it. S3 has a CDN-like feature now too, so you can save money if you pu
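
      For example, a minimal in-process TTL cache in front of an expensive page render might look like this (the render function and the TTL are placeholders):

        # Serve repeated hits from RAM for a short TTL so they never touch
        # the disk or database.  render_page() stands in for the real work.
        import time

        CACHE_TTL = 60              # seconds to keep serving a cached copy
        _cache = {}                 # url -> (expires_at, html)

        def render_page(url):
            return "<html>...</html>"   # placeholder for the expensive part

        def get_page(url):
            now = time.time()
            hit = _cache.get(url)
            if hit and hit[0] > now:
                return hit[1]                       # cache hit: no I/O at all
            html = render_page(url)
            _cache[url] = (now + CACHE_TTL, html)   # miss: render once, reuse
            return html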

      • There are better ways to deal with too much traffic than auto-scaling.
        One way is to use caching intelligently....

        Yes. You could also rewrite your app in C, etc... Point is, sooner or later, you're going to run into a problem which requires you to scale.

        And it would be pretty cool if, on being Slashdotted, you could have your auto-scaling tool kick in and have your site actually be live while you look for things to tweak (caching, etc.) -- not with the purpose of "getting the site back up", but rather of "saving some money".

        I suppose it depends what kind of business you're in -- whether you can afford to take that dow

    • Re: (Score:3, Insightful)

      by narcberry ( 1328009 )

      He complains that 10 minutes for a computer to scale is too slow, then states

      Auto-scaling cannot differentiate between valid traffic and nonsense. You can. If your environment is experiencing a sudden, unexpected spike in activity, the appropriate approach is to have minimal auto-scaling with governors in place, receive a notification from your cloud infrastructure management tools, then determine what the best way to respond is going forward.

      It's 4pm on a Saturday, and your site is getting hit hard. Rally the troops, call a meeting, decide the proper action, call Fedex to ship you more infrastructure, deploy new hardware, profit from your new customers, all the while laughing at the fools who waited 10 minutes for their cloud to auto-scale.

      • It's 4pm on a Saturday and chances are that your site is being hit hard either because you were being an idiot or because someone is engaged in an attack on you.

        If you plan properly, there are no sudden 4pm on Saturday spikes in traffic.

        • Re: (Score:3, Funny)

          by narcberry ( 1328009 )

          Oh right, Al Gore internet rule number 1. Internet closes on weekends. Only hackers can visit sites, and only with malicious intent.

        • It's 4pm on a Saturday, and your site is getting hit hard. Rally the troops, call a meeting, decide the proper action, call Fedex to ship you more infrastructure, deploy new hardware, profit from your new customers, all the while laughing at the fools who waited 10 minutes for their cloud to auto-scale.

          RTFA. The author specifically makes the case for dynamic scaling, just not auto-scaling.

          That is, you rally the troops, call a meeting, decide the proper action, and have someone do an 'ec2-run-instances' command.

          It's 4pm on a Saturday and chances are that your site is being hit hard either because you were being an idiot or because someone is engaged in an attack on you.

          Or you got Slashdotted.

          If you plan properly, there are no sudden 4pm on Saturday spikes in traffic.

          If you plan properly, you are prepared for the typical 4pm-on-Saturday spikes, if those are typical for you.

          Which does nothing if you then get Slashdotted at 7 AM on a Sunday. Or whenever.

          As to which is better, the question you have to ask is, what is the cost of not respondin

      • by julesh ( 229690 )

        It's 4pm on a Saturday, and your site is getting hit hard. Rally the troops, call a meeting, decide the proper action, call Fedex to ship you more infrastructure, deploy new hardware, profit from your new customers, all the while laughing at the fools who waited 10 minutes for their cloud to auto-scale.

        Or, receive an SMS on your phone telling you that capacity is nearly exhausted. Think about it for thirty seconds; call somebody who's likely to be sitting in front of a computer (surely your company's IT st

    • Re: (Score:3, Interesting)

      by Cylix ( 55374 )

      His complaint with auto-scaling was that if the org is doing their proverbial homework and planning for additional capacity, then they should not need it.

      There are times when traffic boosts come as a bit of a surprise. However, depending on size and free capacity, some bumps should smooth out on their own.

      Another trick is to have the means to scale some functionality down to allow for additional traffic. Slashdot, for instance, used to flip to a static front page when traffic was insane.

      Personally, a very li
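
      A minimal sketch of that kind of degrade-to-static switch, assuming a load-average trigger; the threshold and file paths are hypothetical:

        # Write a "static mode" flag when the 1-minute load average gets high;
        # the front-page code checks the flag and serves a prebuilt static copy.
        import os

        THRESHOLD = 8.0
        FLAG = "/var/www/static_mode.flag"

        def check_load():
            load1, _, _ = os.getloadavg()
            if load1 > THRESHOLD and not os.path.exists(FLAG):
                open(FLAG, "w").close()        # degrade to the static front page
            elif load1 <= THRESHOLD * 0.5 and os.path.exists(FLAG):
                os.remove(FLAG)                # load has dropped: back to dynamic

        def render_dynamic_front_page():
            return "<html>fresh dynamic page</html>"   # placeholder

        def front_page():
            if os.path.exists(FLAG):
                return open("/var/www/index_static.html").read()
            return render_dynamic_front_page()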

    • by mcrbids ( 148650 )

      I have. My company lives (or dies) by the !@# SLA.

      Our agreements require no less than 99.9% uptime, about 8 hours of downtime per year. We've never gotten close to that - our worst year was about 2.5 hours of downtime because of a power failure at our "fully redundant" hosting facility. [blorge.com]

      In this world, where I have up to 8 hours per year, a 10-minute response would be a godsend. We've just spent *a lot* of money revamping our primary cluster so that we now operate with 100% full redundancy on everything. Redunda

      • by julesh ( 229690 )

        I have. My company lives (or dies) by the !@# SLA.

        Our agreements require no less than 99.9% uptime, about 8 hours of downtime per year. We've never gotten close to that - our worst year was about 2.5 hours of downtime because of a power failure at our "fully redundant" hosting facility.

        Hmm. When my last hosting provider had a power failure at their hosting facility, it took them more like 2.5 days to get all their clients back up and running. Turns out some of the machines hadn't been rebooted for years, an

    • by julesh ( 229690 )

      Finally, not using autoscaling because you might get DDoS'd is just stupid. You lose business/visitors. That's worse than paying more to avoid being taken down, because your reputation gets hurt AS WELL AS your business.

      The solution to a DDoS is not to keep scaling up your hardware and hoping that the next server you add will be able to withstand it. It won't.

      The solution to a DDoS is to get on the phone to your ISP and ask them to step up the filtering on your server. They can limit the rate at whic

      • Re: (Score:3, Interesting)

        Of course, you can only do this if you know you're under attack, and if your infrastructure is set to autoscale, you probably won't know. Until you receive the bill.

        Yes, because if you happen to use some sort of auto-scaling system, be it at the cloud level or your own management system, it's very likely that you never thought to put in the same monitoring and alerting systems that you already had on your non-cloud, non-autoscaling systems, thus ensuring that you will be blindsided by the scenario you just laid out.

        Or, you have more than two brain cells to rub together and you already had all of that in place and just pointed it to the auto-scaling cloud system enabli

  • by Daimanta ( 1140543 ) on Saturday December 06, 2008 @06:00PM (#26015721) Journal

    The blogosphere has disagreed with the use of web2.0 in the cloud. Sure, we all know that data is king and that's why we use software as a service nowadays with the web as a platform using AJAX and RSS extensively. This has helped to solve the challenge of findability since lightweight companies helps to connect user needs. The fact is that the long tail is part of the paradigm of user as co-developers in server wiki-like sites. Unfortunately this brings up the problem of ownership of user generated content. But I think that perpetual betas help the architecture of participation to stimulate web2.0. Interaction does make the experience good.

  • by chill ( 34294 ) on Saturday December 06, 2008 @06:10PM (#26015767) Journal

    Someone get this guy a cane to shake at the whipper-snappers. "In my day, you learned proper capacity planning or you didn't enter the data center!"

    It can take up to 10 minutes for your EC2 instances to launch. That's 10 minutes between when your cloud infrastructure management tool detects the need for extra capacity and the time when that capacity is actually available. That's 10 minutes of impaired performance for your customers (or perhaps even 10 minutes of downtime).

    Like, you could do it so much faster than 10 minutes without auto-scaling. Bah! If you'd read The Art of Capacity Planning you would've mailed in the coupon for the free crystal ball and seen this coming!

    Properly used, automation is a good thing. Blindly relying on it will get you burned, but to totally dismiss it out of hand is foolish.

    • by VoidEngineer ( 633446 ) on Saturday December 06, 2008 @07:05PM (#26016083)
      Properly used, automation is a good thing. Blindly relying on it will get you burned, but to totally dismiss it out of hand is foolish.

      First Rule of Automation: Automation applied to an efficient task increases its efficiency; likewise, automation applied to an inefficient task will simply increase the problem until it's an all-out clusterfuck.

      Second Rule of Automation: Automation applied to an effective task will be effective; likewise, automation applied to an ineffective task will still be a pointless waste of time.

      Or something like that. My eloquence appears to be -1 today.
      • Re: (Score:3, Interesting)

        by TubeSteak ( 669689 )

        First Rule of Automation: Automation applied to an efficient task increases its efficiency; likewise, automation applied to an inefficient task will simply increase the problem until it's an all-out clusterfuck.

        Last time I checked, most sites that get slashdotted are either some shiatty shared hosting or a dynamic page.

        Static pages & CoralCDN would keep a lot of websites from getting hammered off the internet.

        • by julesh ( 229690 )

          Static pages & CoralCDN would keep a lot of websites from getting hammered off the internet.

          Unfortunately, most interesting sites need dynamic pages. If you're producing different page content for each user (based on preferences, past browsing history, geographic location, or anything else you can think of), you can't really do static pages. Unless you generate a lot of them. Coral can't help with this either, because it would result in each user seeing pages that were intended for another user.

      • by initialE ( 758110 ) on Sunday December 07, 2008 @04:25AM (#26018669)

        The second rule of automation is you do not talk about automation.

    • Re: (Score:3, Insightful)

      by Aladrin ( 926209 )

      And in addition, if that capacity is needed on my current servers (which aren't all cloud-y), how long does it take to scale up? I have to order a new server, install an OS, configure it, install all the software I need, test it, carefully roll it out.

      Can I do that in 10 minutes? Not a chance! If I did that in 10 hours it would be a miracle. 10 days is a lot closer to reality, for a true rush job.

    • Re: (Score:3, Interesting)

      by nine-times ( 778537 )

      Yeah, it seems like his argument really comes down to a couple of points:

      • Auto-scaling isn't fast enough- Apparently EC2 doesn't react quickly enough. To me, this seems to be a technical question as to whether auto-scaling can be designed to be reactive enough to be practical, and not necessarily an insurmountable problem with the concept of auto-scaling.
      • Auto-scaling might incur unexpected costs- The basic idea here is that, if you're paying a certain amount per measurement of capacity and it scales automa
      • by Cylix ( 55374 )

        The scaling logic is in your software. The cloud service shouldn't know best. In theory, a management and monitoring agent would dispatch an additional node and add that node to the pool.

        Since images can be templated, it's really a matter of automating the deployment.

        EC2 systems take time to transfer the image and initiate an instance. I do wonder about the 10-minute figure, though, since the last time I spun up a virt it was ready in about 5.

        I suspect this is his concept of instantiate a virt, deploy packag
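
        One way the "management and monitoring agent" piece could look, sketched around the stock ec2-run-instances tool; the AMI id, the pool object's interface and the paging hook are hypothetical stand-ins for your own monitoring/load-balancer glue:

          # When the pool looks overloaded, launch one node from a templated
          # image, register it, and page a human to sanity-check the traffic.
          import subprocess

          AMI = "ami-12345678"    # hypothetical templated image
          MAX_NODES = 20          # governor so a runaway loop can't spend forever

          def page_oncall(message):
              print("PAGE:", message)        # placeholder: wire up SMS/email here

          def scale_up_if_needed(pool):
              # pool is assumed to expose average_load(), size() and add(id).
              if pool.average_load() < 0.8 or pool.size() >= MAX_NODES:
                  return
              out = subprocess.run(["ec2-run-instances", AMI, "-t", "m1.small"],
                                   capture_output=True, text=True, check=True).stdout
              # The API tools print an INSTANCE line; its second field is the id.
              instance_id = next(line.split()[1] for line in out.splitlines()
                                 if line.startswith("INSTANCE"))
              pool.add(instance_id)
              page_oncall(f"Launched {instance_id}; verify the traffic is legitimate.")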

        • The scaling logic is in your software. The cloud service shouldn't know best.

          In my mind that would depend to some degree-- whichever was a better solution given your needs. If the scaling logic of the cloud was much better than I could come up with without significant investment, and I weren't in a position to make a significant investment on that logic... well...

      • So if someone offered a service where auto-scaling was fast, and there was some kind of limits on what you could be charged under what sorts of situations, would he still have a problem with auto-scaling?

        Probably. At the very least, he does mention the possibility of limits, and claims it doesn't address the core issue -- which is that if it's an unexpected spike, a human should look at that traffic to see if it's legitimate before spending money on it.

        I'd say, for most sites, it's probably worth it to auto-scale first, and then page the human. If it's not legitimate traffic, you can override it. If it was legitimate after all, 10 minutes to boot an EC2 instance is much faster than 10 minutes plus the time

        • I'd say, for most sites, it's probably worth it to auto-scale first, and then page the human.

          That sounds reasonable enough to me. Sometimes you just have to analyze, "Given the risk of [event A] happening and the money I stand to lose if it does, and given the cost of doing what it takes to prevent [event A] from happening, is it worth investing in a system to prevent [event A] from happening?" And often you can't outright prevent Event A from happening, but you're just trying to make it more unlikely, or reduce the costs associated with that risk.

          So I think the question is, how much is "proper"

          • Re: (Score:3, Interesting)

            there would be various triggers of "if capacity exceeds A in time frame B, someone gets emailed/paged and is given the opportunity to override."

            Point is, the overriding should probably happen after the system has attempted to auto-scale.

            For instance, if I got Slashdotted, I'd probably want to scale to handle the load. If I have to be called in to make a decision before any scaling happens, I've probably missed an opportunity. On the other hand, if I've set reasonable limits, I then have the choice to relax some of those limits, or to decide I can't afford surviving Slashdot this time (or maybe realize it's a DDOS and not Slashdot), and pulling the

      • The biggest flaw I see in autoscaling isn't that it isn't fast enough or that it might cost too much (in both cases it beats the current "scramble out a new server" or "continuous overcapacity" solutions). The biggest flaw is that it doesn't go far enough. I see it as only the first baby step towards Transcontinental Demand Load Balancing [sun.com]
    • > George Reese is the founder of [...] enStratus Networks LLC (maker of high-end cloud infrastructure management tools)

      So it's just that the guy doesn't want to be pushed out of business. Of course auto-scaling is good, unless you are an infrastructure management tools vendor...

      • Huh? enStratus and just about every other infrastructure management tool performs auto-scaling. It's a baseline feature, and you need tools like enStratus to do auto-scaling for you since Amazon does not (currently) support auto-scaling.

      1. It does not take 10 minutes to launch a server in EC2, even if you don't know what the hell you're doing. My servers launch in about 90 seconds, but I've taken the time to make my own custom images that are optimized to boot quickly.

        A web head that just uses one of the stock EC2 images, and then uses the distro's package manager to install apache, etc., is going to take, at most, 5 minutes to come online. Yes, that's right, you can specify a boot-up script for your images, and the boot-up script can call
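
        A minimal sketch of such a boot-up script, written here in Python and assuming a Debian-style image (the package name would differ on other distros):

          #!/usr/bin/env python
          # Run at first boot: install Apache with the distro package manager
          # and make sure it is serving before the node joins the pool.
          import subprocess

          def sh(*cmd):
              subprocess.run(cmd, check=True)

          sh("apt-get", "update")
          sh("apt-get", "install", "-y", "apache2")
          sh("/etc/init.d/apache2", "start")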

  • by Gothmolly ( 148874 ) on Saturday December 06, 2008 @06:15PM (#26015783)

    So I hand over my business logic and data to a third party, who may or may not meet a promised SLA, and whose security I cannot verify? Does this mean I can be rooted and lose my customer data faster, and at a rate proportional to the hack attempts? Cool!

    • That's a very poorly thought out view of a cloud infrastructure.

        With Amazon in particular, you do have SLAs and you can easily design an infrastructure that will be very secure for most organizational needs and exceed the SLAs offered by Amazon.

    • Re:Auto-rooting? (Score:4, Interesting)

      by Eskarel ( 565631 ) on Saturday December 06, 2008 @10:36PM (#26017343)
      Well yes, you could also look at it from the point of view of. "I have a really clever idea, which will probably take off, and which if it does take off will require a lot of resources. I don't have a lot of money, but I can scrape together the cash for a small cloud investment and if my idea takes off I can afford as many servers as I want. I could buy a couple of regular servers and be unable to meet demand for several weeks while I order new equipment and possibly lose my start because people got sick of my site not being up, I could sell my idea to some venture capital people who, if they invest at all will take half my profits, or I can use the cloud, expand in ten minutes, and maybe make a lot of money without having to give it all to someone else".

      That's the strength of the cloud my friend, being able to start an idea without having to promise 90% of it to someone else to get funding.
    • because we all know that self-hosted servers never get hacked or suffer down-time. and i'm sure a small business can afford better network & server management/equipment than Amazon, Google, or Microsoft.

      do you also keep all of your savings (which is no doubt in gold bullion) in a safe at home that you stand guard over yourself with a 12-gauge shotgun?

    • by davecb ( 6526 ) *

      This reminds me of a large company which outsourced enthusiastically, until at one point they discovered they'd outsourced decisions about maintenance... causing the outsourcer to have control over the maintenance budget.

      As you might expect, after it ballooned, they started in-sourcing!

      Giving others control over financial decisions is almost always unwise, even if doing so is the newest, coolest idea of the week.

      --dave

  • by HangingChad ( 677530 ) on Saturday December 06, 2008 @06:20PM (#26015817) Homepage

    While a content site might run the risk of getting slashdotted or Dugg, that isn't necessarily a big risk for applications. And your platform choice makes a big difference. We do our business applications on a LAMP stack. If we need capacity, we can stand it up for the cost of hardware. Nice thing about LAMP is at least the AMP part is OS portable, so we can rent capacity wherever it's cheap. So far we haven't needed to do that, but it's nice to have the ability.

    To date we haven't run into any problems. If we're expecting a surge of new customers, we have a pretty good idea of expected traffic per customer. We can stand up the capacity well in advance. Hardware is cheap and can be repurposed if we end up not needing all the extra capacity.

    Our platform choice gives us a tremendous amount of flexibility. You don't get that with Windows. Any increase in capacity has a significant price tag in license fees associated with it. Once you build the capacity there are fairly significant ongoing expenses to maintain it. You can take it offline if you need to scale down but you don't get your money back on the licenses. There's a whole new set of problems outsourcing your hosting.

    I like our setup. The flexibility, the scalability, the peace of mind of not struggling with capacity issues, not negotiating license agreements with MS or one of their solution providers, and not being limited to their development environment. We can build out a lot of excess capacity and just leave it sit in the rack. If we need more just push a button and light it up. I'm not sure an Amazon or anyone else could do it cheap enough to justify moving it. And I really like having the extra cash. Cash is good. Peace of mind and extra money...what's not to like? Keep your cloud.

    • ...Nice thing about LAMP is at least the AMP part is OS portable...

      Careful with that - some nuances will turn up that will bite you on the ass. I found out last year that Apache's MD5 module creates different hashes(!) on Windows than it does on UNIX.

      I finally convinced my employer to use Subversion to provide version control on our Pro/E CAD files by bringing in my BSD server and doing a demo for the bosses. It was a beautiful setup, including ViewVC so the gals in Customer Service have access to drawings and visibility into what has been completed. Our IT guy is a

      • by Wonko ( 15033 ) <thehead@patshead.com> on Saturday December 06, 2008 @07:55PM (#26016341) Homepage Journal

        Careful with that - some nuances will turn up that will bite you on the ass. I found out last year that Apache's MD5 module creates different hashes(!) on Windows than it does on UNIX.

        If that is true then at least one of them isn't actually generating an MD5 hash.

        I'm just guessing, but I bet you were also encoding the line ending characters. That would be encoded differently on Windows and UNIX, so you'd actually be hashing two strings that differed by at least one byte.
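
        A quick way to confirm that suspicion: hash the same text with UNIX and Windows line endings and watch the digests diverge.

          import hashlib

          unix_text = "some file contents\n"
          windows_text = "some file contents\r\n"

          print(hashlib.md5(unix_text.encode()).hexdigest())
          print(hashlib.md5(windows_text.encode()).hexdigest())
          # Different digests: the inputs differ by one byte (the \r), so both
          # MD5 implementations are behaving correctly.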

        • That's brilliant! I didn't even think of that stupid '^M' business... That is the exact kind of nuance I was referring to in my previous post.

          At the time, I found myself with a totally unexpected problem in the process of rolling out the system. The pressure that you feel when there's a problem, after telling your boss that it will run on Windows with no problem, just sucks! I was trying to spare somebody that pain.
    • by julesh ( 229690 )

      Nice thing about LAMP is at least the AMP part is OS portable, so we can rent capacity wherever it's cheap. So far we haven't needed to do that, but it's nice to have the ability.

      Only if you're very careful. I'm going to assume that by LAMP you mean Linux/Apache/MySQL/PHP, as that's the most common meaning these days. Some of what I say also applies to Perl and/or Python, which also sometimes end up in the same acronym.

      The first thing to be aware of is that you're likely using Apache with the prefork MPM

    • We can build out a lot of excess capacity and just leave it sit in the rack. If we need more just push a button and light it up. I'm not sure an Amazon or anyone else could do it cheap enough to justify moving it.

      With EC2, I can have a server fully configured and operational in 90 seconds at the cost of $0.10. How quickly can you get a server up and running, and at what cost?

      That being said, EC2 is not for everyone, and it may not be for you. The whole point of the Elastic Compute Cloud is that you bring up and shut down instances as needed. If your computing needs are static, and it sounds like yours are, then EC2 starts to get expensive. Their smallest server costs $72/mo+bandwidth if you leave it running 24
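
      For what it's worth, the $72/mo figure is just the quoted $0.10 hourly rate left running around the clock:

        hourly = 0.10                 # smallest instance, $/hour, as quoted above
        print(hourly * 24 * 30)       # 72.0 dollars for a 30-day month, before bandwidth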

  • When in doubt, use a ladder. Elevators cannot be trusted.
  • by tpwch ( 748980 ) <slashdot@tpwch.com> on Saturday December 06, 2008 @06:46PM (#26015947) Homepage
    He seems to be assuming that you only want to run a website on this service. I don't think hosting websites on this kind of service is a good idea at all. There are many other types of applications you can run on cloud computing infrastructure which make much more sense, and which negate almost all of his claims.

    Consider for example a rendering farm. One day you may have two items to render. Another day 10 items. The next day 5 items. Should you really scale up and down manually each day, when you could just as easily start the number of servers you need based on how many jobs have been submitted for that day, and how large the jobs are?

    There are many other examples. Websites are not the only thing you run on these services.
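
    A sketch of that kind of job-driven sizing; the per-node throughput and the node cap are hypothetical numbers you would measure for your own renderer:

      import math

      FRAMES_PER_NODE_PER_DAY = 500   # measured throughput of one render node
      MAX_NODES = 40                  # governor: never launch more than this

      def nodes_needed(jobs):
          """jobs: list of frame counts submitted today."""
          total_frames = sum(jobs)
          return min(MAX_NODES, math.ceil(total_frames / FRAMES_PER_NODE_PER_DAY))

      print(nodes_needed([1200, 300]))   # two items today -> 3 nodes
      print(nodes_needed([800] * 10))    # ten items       -> 16 nodes
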
    • Re: (Score:3, Interesting)

      by Cylix ( 55374 )

      What if someone posts a bad batch or accidentally malforms some package in such a way as to chew through 10x the resources?

      I think there are many great uses for cloud environments, but people have to be careful when it is pay for play.

      It's a bit different than tying up all the resources on the web server. Sure, there is a cost in time, but rarely does anyone get billed for those man-hours.

    • Re: (Score:3, Informative)

      by Animats ( 122034 )

      Consider for example a rendering farm.

      Such as ResPower. [respower.com] They've been around for a while, from before the "grid" era (remember the "grid" era?). This is a good example of a service which successfully scales up the number of machines applied to your job based on available resources and load. Unlike a web service, though, ResPower normally runs fully loaded, and charges a daily rate with variable turnaround, rather than charging for each render. (They do offer a metered service, but it's not that popular.)

      I

  • This guy makes a good case against "dumb" auto-scaling; that is, doing a simple "more traffic = scale up" calculation. However, it should be trivial to create more sophisticated algorithms that eliminate or at least reduce the problems he gives. For example, a module that can "recognize" DoS attacks versus slashdotting in most cases and either block or scale based on the results shouldn't be hard.
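
    One crude version of such a module might treat "many distinct IPs making a few requests each" as a flash crowd and "a handful of IPs making most of the requests" as an attack. The thresholds here are hypothetical and would need tuning against real logs:

      from collections import Counter

      def classify(recent_ips):
          """recent_ips: client IP for each request seen in the last minute."""
          if not recent_ips:
              return "idle", []
          per_ip = Counter(recent_ips)
          top10_share = sum(n for _, n in per_ip.most_common(10)) / len(recent_ips)
          if top10_share > 0.8:                 # traffic dominated by a few sources
              return "block", [ip for ip, _ in per_ip.most_common(10)]
          return "scale", []                    # broad traffic: treat as real demand

      print(classify(["10.0.0.1"] * 900 + ["10.0.0.2"] * 100))
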
    • by Cylix ( 55374 )

      Not all DoS attacks are simple ping floods. Those are in fact the weakest of the breed and easy to clean out at the upstream provider.

      An attack designed to chew up your instances would perform valid page requests. Thus, your application would believe that more hits equals more traffic and it should accommodate.

      I'm all for a temporary buffer with severe limits and the big red light going off. The cost associated with a temporary reprieve in order to react to a situation would be well worth it.

      It's like pu

  • by Skal Tura ( 595728 ) on Saturday December 06, 2008 @07:11PM (#26016111) Homepage

    Yep, that's right. With over 7 years of solid hosting industry experience, it's very easy to see.

    At least Amazon's service is WAY overpriced for long-term use. Sure, if you only ever need it for a few hours it's all good, but for 24/7 hosting it ain't, and neither are the others.

    It's cheaper to get regular servers, even from a very high-quality provider, than to use Amazon's services.

    Best of all: you can still use their service to auto-scale up if you prepare right, and yet keep a low baseline cost.

    If it's only a file-hosting service you need, the bandwidth prices Amazon offers are outrageous. Take a bunch of cheap shared accounts and you'll get way better ROI, and still, for the most part, not sacrifice any reliability at all. Cost: greater setup time, depending on several contingency factors.

    Case examples: you can get plenty of HDD & bandwidth from Bluehost, Dreamhost, etc. for a few $ a month. Don't even try to run any regular website on it, they'll cut you off (CPU & RAM usage), but for file hosting, it's great bang for the buck :)

    Scared of reliability? Automatically edit the DNS zone according to each location's availability and use a low(ish) TTL. Every added location increases reliability.
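
    A rough sketch of that zone-editing idea, assuming BIND, hypothetical mirror addresses and paths, and a zone file that $INCLUDEs the generated fragment (a real setup would also bump the SOA serial):

      import subprocess
      import urllib.request

      MIRRORS = {"203.0.113.10": "http://203.0.113.10/health",
                 "198.51.100.20": "http://198.51.100.20/health"}
      FRAGMENT = "/etc/bind/db.files.example.com.generated"
      TTL = 300                                  # low-ish TTL so failover is quick

      def alive(url):
          try:
              return urllib.request.urlopen(url, timeout=5).status == 200
          except OSError:
              return False

      healthy = [ip for ip, url in MIRRORS.items() if alive(url)]
      if healthy:                                # never publish an empty record set
          with open(FRAGMENT, "w") as f:
              for ip in healthy:
                  f.write(f"files {TTL} IN A {ip}\n")
          subprocess.run(["rndc", "reload", "files.example.com"], check=True)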

    • by Cylix ( 55374 )

      Those are horrible examples.

      Cheaper environments can be shared resources, have poor SLAs and not provide service guarantees. Sure, you can run cheap and it won't cost you until it breaks.

      DNS isn't exactly a real time solution when those entries are cached. I have encountered a large number of providers who flat out ignored cache time out settings.

      Again, a business can run on the cheap, but the idea is that the servers are generally generating revenue when they are in use. Some places don't like downtime.

      • When you have, say, 10 different locations set up, one of them being down for 10 minutes still leaves you with 90% of your capacity for that period.

        However, that being said, there are mission-critical applications, and I never said this is the perfect solution for everyone.

        Also, there are other means to load balance: say you want to host a single file, i.e. your application, on each of these; on your main website you have a download page which chooses the mirror according to availability.

    • by chrb ( 1083577 )

      you can get plenty of HDD & bandwidth from Bluehost, Dreamhost, etc. for a few $ a month. Don't even try to run any regular website on it, they'll cut you off (CPU & RAM usage)

      Not having used the providers in question, I have to ask, why shouldn't I try to run a regular website on them? Isn't that exactly what they do - web hosting, of regular web sites? There's no reason why a regular web site should use excessive amounts of CPU or RAM.

      but for file hosting, it's great bang for the buck :)

      I just skimmed the DreamHost TOS [dreamhost.com] an

      • well, they actually provide an online storage service with at least some of their web hosting packages. you just can't use it for public data storage.

        so if you yourself want to backup a few hundred gigs of personal files that only you will have access to, you can (as long as it's not pirated material). though if you create a dreamhost account just so you can dump your company's 200 TB data warehouse onto their servers and exploit their "unlimited" storage offer, then you'll probably run into some trouble.

      • I use Bluehost and feel like I get my money's worth. They put tons of sites on fairly beefy Linux servers, which is fine most of the time. The problem is that every now and then someone else's site or script runs out of control or the whole box gets DOSed (seriously, they attack the box's CPanel IP instead of a specific domain). There are also CPU limits (idk about RAM). I've only run into the CPU limits when batch resizing images with a photo gallery. Your account goes offline for 5 mins when that happens.

        • by Cylix ( 55374 )

          That's fairly awful when you put it that way.

          Modern virtualization allows for limits per instance.

          The reason it's cheap is you are only getting an Apache vhost. I don't think it matters what address they are attacking.

          It's not a fair comparison to say this shared host provider is cheaper than X cloud provider. Perhaps looking at the cost of leasing a virt would be a better comparison.

          In the end, you get what you pay for and that is a very inexpensive setup.

      • Because with any degree of higher traffic (think 100k visitors a week) you get suspended; that's why running a regular website on it sucks, unless you have a very low-traffic website. Never mind that their CPUs & RAM are quite damn busy anyway -> slow page views.

        File hosting: i.e. the installation file of your application is not included in that; while technically distribution, it's not distribution in the sense of the TOS, which interpreted means sites where you have the latest game demos for download.

    • Take a bunch of cheap shared accounts and you'll get way better ROI, and still, for the most part, not sacrifice any reliability at all. Cost: greater setup time, depending on several contingency factors.

      Are you seriously proposing this as a way to run a business? That strikes me as seriously retarded. I know a lot of people who run a lot of sites, and depending on their bandwidth draw and other needs, they'll rent servers, they'll rent a cabinet and buy bandwidth, or they'll use one of the reasonably priced CDNs. But I've never heard of anybody doing this unless they're running something semi-legal and want to dodge MAFIAA [mafiaa.org] threat letters.

      Swapping your shit around between a bunch of cheap hosting accounts s

      • No jiggery-pokery is needed, i.e. swapping hosts etc.

        Done right, there are no problems at all. Just because something is CHEAP doesn't mean one can't utilize it ;)

        Everything has its own place and time; what you are saying is like saying Mini-ITX setups should be banned and never used because they are so cheap and don't offer performance.

        Set up once, then forget. You get to run at a cost of, say, $40 a month with 4 locations, versus $100-250 a month with 1 location and a tenth of the practical usable bandwidth.

        besides, ones

        • by dubl-u ( 51156 ) *

          So you're just talking static sites? Using cheap hosting plans as a dodgy CDN? If so, I've got no issue with that. But $60 a month pays for little sysadmin time, and not much more monkey time.

          If people are having that kind of traffic, it's worth starting to think about how to make their project sustainable. Things that are pure cost tend to disappear. Figuring out how to match revenues with costs means the project is much more likely to last.

          People should also be a little afraid of hosting companies when do

  • I posted this as a comment on the blog post, but I'm copying it here as well:

    http://blogs.smugmug.com/don/2008/06/03/skynet-lives-aka-ec2-smugmug/ [smugmug.com]

    Outside of one instance where it launched 250 XL nodes, it seems to be performing pretty well. Their software takes into account a large number of data points (30-50) when deciding to scale up or down. It also takes into account the average launch time of instances, so it can be ahead of the curve, while at the same time not launching more than it needs.
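
    A stripped-down sketch of that "ahead of the curve" idea: project the request rate forward by the average launch time and size the fleet for the projected load, not the current one. The per-instance capacity and launch time here are hypothetical:

      import math

      REQS_PER_INSTANCE = 200     # req/s one instance handles comfortably
      LAUNCH_TIME = 300           # average seconds until a new instance is ready

      def target_instances(samples, interval):
          """samples: request-rate readings, oldest first, one every `interval` s."""
          slope = (samples[-1] - samples[0]) / ((len(samples) - 1) * interval)
          projected = samples[-1] + slope * LAUNCH_TIME
          return max(1, math.ceil(projected / REQS_PER_INSTANCE))

      # Rate climbing from 100 to 400 req/s over five minutes, sampled every 60 s:
      print(target_instances([100, 175, 250, 325, 400], 60))   # -> 4 instances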

  • by mattbee ( 17533 ) <matthew@bytemark.co.uk> on Saturday December 06, 2008 @08:34PM (#26016619) Homepage

    I did some rough cost comparisons for a high-traffic web site in my similarly cynical article a few weeks ago [bytemark.co.uk] (disclaimer: I run a hosting company flogging unfashionable servers, and am not a cloud fan yet :) ).

    • Cloud is a nice, nebulous term. I would count the people who really know what it means to be less than one hundred in all the world.

      I'm guessing from your post and its related link that you're not one of those.

      The cloud is a symbolic abstraction that separates the served from the servers in the same way that client/server architecture does, except that it adds an additional layer of abstraction for "server" that allows for servers to be hosted anywhere redundantly and transparently. It assumes a number

      • by mattbee ( 17533 )

        Cloud is a nice, nebulous term. I would count the people who really know what it means to be less than one hundred in all the world.

        I'm guessing from your post and its related link that you're not one of those.

        Thank you for your insight, professor :) There are a lot of disparate hosting offerings out there marketed with the word 'cloud'. I understand the abstraction, but the commercial reality is the interesting debate: how it develops, and how useful it is.

    • > I run a hosting company flogging unfashionable servers

      And you provide a RubyForge mirror [rubyforge.org] - many thanks for that!

    • by julesh ( 229690 )

      Good points. I think you're right: cloud services have a long way to come in terms of cost, and I'm not sure that'll happen in the near future. And that scalability isn't relevant to most people, anyway. The number of sites that can't be managed by a single commodity server is small, and that can be scaled right down to a virtual host on a machine shared with 30 other similar sites at the low end. Virtual machine software (e.g. Xen) makes it easy to migrate to a host with more capacity as and when it

    • You are saying that static compute requirements are better met by static computing platforms. Well... duh.

      The whole point of the Elastic Compute Cloud is that it is for elastic computing use cases (usage spikes, nightly/monthly/periodic heavy processing, cold spare, etc.) It's not supposed to be cheaper than a dedicated server.

      Let me tell you one way that I use EC2, and you tell me if you can give me what I want for cheaper. I own several apartment buildings, and I run my business website on a lousy, ine

      • by mattbee ( 17533 )

        We don't do any hosting for $1.50/yr :-)

        However, I'm not sure the use case you talk about is in any way a typical hosting task, probably because it gives you less overall uptime rather than more. If you have any clients on big hosts that assume 1-day or 1-week DNS, and they pick up the Amazon IP which is valid for maybe a couple of hours, your site will be down for far longer than the actual Dreamhost outage (though if Dreamhost are down for days at a time it might be a win).

        • If you have any clients on big hosts that assume 1-day or 1-week DNS,

          I've used two behemoth ISPs, and neither of them cache DNS entries for an entire day. It's more like an hour.

          Again, this website is not mission-critical. No applicant is going to care if my website is down for a few minutes or even an hour. Most of them aren't that great with computers, anyway (or don't even own one). I know this, because I do rent-to-own computers, furniture, appliances, etc. for them as a side business.

          My only point in responding to you was that there is plenty of room in the hosting

  • An odd argument (Score:4, Insightful)

    by chrb ( 1083577 ) on Saturday December 06, 2008 @08:55PM (#26016735)

    His argument basically boils down to "Auto-scaling is a bad idea because you might implement it badly and then it will do the wrong thing". Isn't that true of everything? The flip side is that if you implement it well, then auto-scaling would be a great idea!

    It's like saying that dynamically sized logical partitions are a bad idea, because you should just anticipate your needs in advance and use statically sized partitions. Or dynamically changing CPU clock frequencies are a bad idea, because you should just anticipate your CPU needs and set your clock frequency in advance. Or dynamically changing process counts that adapt to different multi-core/CPU availability factors are a bad idea... you get the picture.

    The idea that some computational factor can be automatically dynamically adjusted isn't necessarily a bad idea, it's just the implementation that might be.

    • No, the argument is that auto-scaling, upon close examination, has very few benefits that are not actually better realized through other mechanisms.

  • Another risk, at least in theory, is a kind of very short term "Tragedy of the Commons" [wikipedia.org].

    In the long term (a function of the Amazon accounting timeframe - maybe minutes, hours, or days) it may not be a problem because rational customers whose systems work correctly will voluntarily limit their usage in a predictable manner.

    However, a very fast DDoS of several autoscaling systems, for example, could cause a system-wide failure before the Amazon accounting system and customer strategies kicked in.

    Because the clou

  • "The dynamic scaling to plan can also be automated" massive retardation here.
  • Stupid (Score:3, Insightful)

    by Free the Cowards ( 1280296 ) on Saturday December 06, 2008 @09:47PM (#26017087)

    I can summarize this article in one sentence:

    "X is only useful for those who are too lazy to do Y."

    It's been said about assembly language, high-level languages, garbage collection, plug-n-play, and practically any other technology you can name. It is not actually a valid criticism.

    • You lost me at "X arguments were true, therefore X arguments are not true." Could you please start over again with more steps?

  • On the surface auto-scaling is obviously a great thing. But it doesn't take much thought to start punching holes in it.

    Let's first look at the data center that provides such a glorious capability.
    1. It is in their own best interest for you to scale up. Scale up processing, disk, bandwidth or whatever, for the simple reason that it's more money. Since you signed the contract, you will probably be scaled well and truly before you know it. Usually you only find out when the bill comes in.
    2. The data center has ver

    • by Cylix ( 55374 )

      The vendor should never be responsible for resource scaling. There is no better judge of resource allocation than your own organization. The good news (or bad news) is that if the entity is incapable of self-governing then it will not be an issue in the long term. Infrastructure will eventually topple under its own inability to sustain itself.

  • by PornMaster ( 749461 ) on Saturday December 06, 2008 @11:09PM (#26017483) Homepage
    When your revenues scale with the services rendered, it *does* make business sense to auto-scale. Auto-scaling is a technical solution, not a business one. Being Slashdotted isn't typically associated with more commercial activity, it's associated with "hit-and-run" visitors. The same with social networks. Does Twitter even have a business model? But wherever there's a business model where margins are relatively stable but activity rises and falls, auto-scaling makes you money rather than costing you severely. Like many things, it's a tool which should be used wisely, where not paying attention can leave you missing fingers.
    • by Cylix ( 55374 )

      While he mentioned "slashdot" very rhythmically, there are other instances which can chew through resources quite aggressively:

      Denial of service attacks
      Malformed software (via bad code push or bug)
      References or page views which do not translate to customers.
      Poor design choices. (intentional, but bad)

      There are several permutations of the core issues regarding resource utilization, but the end result is the possibility of auto-scaling to compensate. Unlike traditional home owned infrastructure there will be mo

  • +1 Irony to the author of TFA, if the article becomes slashdotted....

"Being against torture ought to be sort of a multipartisan thing." -- Karl Lehenbauer, as amended by Jeff Daiell, a Libertarian

Working...