The Internet

The Ideal, Non-Proprietary Cloud 93

jg21 writes "As previously discussed on Slashdot, the new tendency to speak of 'The Cloud' or 'Cloud Computing' often seems to generate more heat than light, but one familiar industry fault line is becoming clear — those who believe clouds can be proprietary vs. those who believe they should be free. One CEO who sides with open clouds, so that companies can pick and choose among vendors depending on precisely what they need, has written a detailed article in which he outlines how, in his opinion, Platform-as-a-Service should work. He identifies nine features of 'an ideal PaaS cloud,' including the requirement that 'Developers should be able to interact with the cloud computer, to do business with it, without having to get on the phone with a sales person, or submit a help ticket.' [From the article: 'I think this means that cloud computing companies will, just like banks, begin more and more to "loan" each other infrastructure to handle our own peaks and valleys. But in order for this to happen we'd need the next requirement.']"
This discussion has been archived. No new comments can be posted.

The Ideal, Non-Proprietary Cloud

  • Security? (Score:4, Insightful)

    by llamalad ( 12917 ) on Monday July 21, 2008 @08:17AM (#24272645)

    Am I missing something, or does the article make no mention of security?

    • by Swizec ( 978239 )
      You're missing that this is slashdot and you weren't supposed to RTFA.
      • by llamalad ( 12917 )

        I didn't. I clicked on it and hit "control-f".

        All the same, just from the summary, as soon as I got to the notion that boxes would be loaned back and forth between companies my spider-sense got all unpleasant and tingly.

        I don't think I care whether my cloud is open or proprietary, as long as security is designed in from the start and not an afterthought.

        • by mosel-saar-ruwer ( 732341 ) on Monday July 21, 2008 @09:12AM (#24273429)

          In this day and age - when hardware is essentially worthless [today, for under $200, you can get what would have been a $10 million supercomputer ten years ago], and when even RDBs are essentially worthless [MySQL & PostgreSQL being free downloads], the only things which add value are:

          1) Your schema [or your customizations of the vendor's standard template of the schema for your industry], and

          2) Your business logic for manipulating the schema [or your customizations of the vendor's standard template of the business logic for your industry], and

          3) The actual data in your database, and

          4) Your algorithms for analyzing the data in your database [or your customizations of the vendor's standard template of the analysis algorithms for your industry].

          Of those, at least 1), 3), and 4) are going to have to be uploaded to "The Cloud" [and 2) might have to at least interact with "The Cloud"], and unless "The Cloud" encrypts everything - both data & logic [and how do you really "encrypt" something if ultimately the registers in the CPU have to see unencrypted data, and especially unencrypted logic & algorithms?] - then you've just uploaded the crown jewels of your entire enterprise for all the world to see.
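An aside on the point above about protecting the "crown jewels": encrypting data client-side before it ever leaves your network protects it at rest in the cloud, though, as the parent notes, it does nothing for logic the provider's CPUs must actually execute. A minimal sketch, assuming the third-party Python cryptography package (the file name is a placeholder):

```python
# Minimal sketch: encrypt locally so the provider only ever stores ciphertext.
# This protects data at rest; anything the cloud must compute on still has to
# be decrypted somewhere. Assumes the third-party 'cryptography' package.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> bytes:
    """Read a local file and return its ciphertext."""
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

def decrypt_blob(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the plaintext after downloading the blob back."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()                    # stays on your side, never uploaded
    blob = encrypt_file("crown_jewels.db", key)    # placeholder file name
    # upload_to_cloud(blob)                        # hypothetical upload step
    assert decrypt_blob(blob, key) == open("crown_jewels.db", "rb").read()
```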

          • by mosel-saar-ruwer ( 732341 ) on Monday July 21, 2008 @09:45AM (#24273993)

            And in this day and age, when even medium-sized businesses can be sitting on literally terabytes of data, how are you going to upload all of that data to "The Cloud" so that "The Cloud" can analyze it for you?

            Maintaining a constant 10Mbps WAN connection to "The Cloud" would be monstrously expensive, and yet, at 10Mbps = (10 / 8)MBps = 1.25MBps, that means you would need

            1 terabyte / 1.25MBps
            = (1000 * 1000 * 1000 * 1000 bytes) / (1.25 * 1000 * 1000 bytes per second)
            = [(1000 * 1000) / 1.25] seconds
            = 800,000 seconds
            = [800,000 / (60 * 60 * 24)] days
            = 9.259 days

            just to upload a terabyte of data at WAN speeds of 10Mbps.

            So "The Cloud" isn't going to have realtime interactions with your corporate database - "The Cloud" is going to BE your corporate database.

    • Re:Security? (Score:5, Insightful)

      by thatskinnyguy ( 1129515 ) on Monday July 21, 2008 @08:24AM (#24272739)

      Am I missing something, or does the article make no mention of security?

      Or some sort of business model where someone makes money to run all of this.

      • Re: (Score:3, Interesting)

        Or indeed, mention of anyone, anywhere actually using "cloud computing".

      • Re: (Score:2, Informative)

        by Jick ( 29139 )

There are already great examples of businesses using the cloud to support their infrastructure (Amazon's poster child being SmugMug).

One of the major reasons people will migrate is efficiency. In this green age we're now in, companies are looking to reduce their individual power requirements while increasing scale. Who can provide cheaper power or more efficient cooling for a datacenter? Your on-site NOC or ACME colo? ACME colo, or sunpowered-ocean-cooled-datacenter.com? By making this leap, compa

  • by mike_c999 ( 513531 ) on Monday July 21, 2008 @08:18AM (#24272667)

    ... That cloud computing silver lining has started to tarnish already?

  • Huh? (Score:3, Insightful)

    by sarathmenon ( 751376 ) <<moc.nonemhtaras> <ta> <mrs>> on Monday July 21, 2008 @08:19AM (#24272679) Homepage Journal

    What makes him so sure that interoperability will be even on the provider's list? I don't see any easy way to use EC2 with some third party solution for storage. Plus, it would be lame if I had to go via internet for every request that should ideally be local.

    • Re:Huh? (Score:5, Funny)

      by bsDaemon ( 87307 ) on Monday July 21, 2008 @08:21AM (#24272703)

      No, you just don't get how awesome it'll be to get all your Web 2.5rc1 content via Internet2 through the cloud, man... it'll totally shift your paradigm.

    • I don't see any easy way to use EC2 with some third party solution for storage.

      It's really no "harder", technologically, than using EC2 with S3. It's just that S3 is probably cheaper, especially when bandwidth between the two is free.

      More specifically: An EC2 instance is just a Xen virtual machine. Amazon places no restrictions on what you run inside -- and as far as I know, they haven't even released their own S3 bindings, in any language. It's up to you to connect to S3, and it's exactly the same process, whether you're connecting from EC2 or not.

      it would be lame if I had to go via internet for every request that should ideally be local.

      That's exactly what EC2+S3 is. It's
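To illustrate the point that talking to S3 is the same process from inside EC2 or from your own machine, here is a minimal sketch assuming the third-party boto library; the bucket and key names are made up:

```python
# S3 is just an HTTP API with signed requests, so this code is identical
# whether it runs on an EC2 instance or on a box under your desk.
# Assumes the third-party 'boto' library; bucket/key names are made up.
from boto.s3.connection import S3Connection

conn = S3Connection("YOUR_ACCESS_KEY_ID", "YOUR_SECRET_ACCESS_KEY")
bucket = conn.get_bucket("example-corp-data")          # hypothetical bucket
key = bucket.new_key("backups/2008-07-21.tar.gz")      # hypothetical object
key.set_contents_from_filename("/tmp/backup.tar.gz")   # upload
key.get_contents_to_filename("/tmp/restored.tar.gz")   # download is symmetric
```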

  • Sounds like he has his head in the clouds.

    • When a concept is so new that people can't even define it, now is not the time to be trying to develop an "open standard". Now is the time to be rapidly prototyping different ideas.

      When we have several, stable clouds, then it might be time to talk about interoperability, or at least a compatibility layer.

  • by Anonymous Coward on Monday July 21, 2008 @08:23AM (#24272731)

The guys at Red Hat have released the first version of a project called Genome, genome.et.redhat.com [redhat.com]. This looks to be an open source project that builds Fedora, Red Hat Enterprise Linux, and CentOS clouds using Xen, KVM, and commodity hardware.

  • by rs232 ( 849320 ) on Monday July 21, 2008 @08:27AM (#24272765)
Relying on third party technology is never going to provide the reliability or uptime required. The more straightforward solution is to hire some rackspace and host your own solution. 'Cloud Computing' is just the latest marketing promotion designed to move us to renting software.
    • by larry bagina ( 561269 ) on Monday July 21, 2008 @08:30AM (#24272799) Journal
Hiring rackspace is relying on third-party technology.
    • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Monday July 21, 2008 @08:35AM (#24272861) Homepage

      'Cloud Computing' is just the latest marketing promotion designed to move us to renting software.

      For some software that makes sense. Some apps cost an enormous amount to buy a copy of (no, MS Office isn't one of these!) and many smaller businesses don't need a copy continually. For example, a small engineering firm probably doesn't need a Computational Fluid Dynamics package the whole time, but when they're designing a product it's useful to rent some use of one.

      Does this mean that everyone will be hiring everything? I really doubt it. I reckon that the end result will be a mixed economy with some purchases and some hiring. Which will be the dominant mode at any time? Well, that'll probably change from year to year. Guess what? That's true for other parts of the economy too. IT's not that special...

      • Re: (Score:3, Insightful)

        by sm62704 ( 957197 )

        For example, a small engineering firm probably doesn't need a Computational Fluid Dynamics package the whole time, but when they're designing a product it's useful to rent some use of one.

        Except that the training required to learn this software is more expensive than the software. It would be cheaper to hire an engineer who had his own tools.

        It's like when your car breaks - it's cheaper to hire a mechanic than to rent diagnostic computers and other tools the mechanic has and learn about internal combustion

        • Re: (Score:3, Interesting)

          by dkf ( 304284 )

          Except that the training required to learn this software is more expensive than the software. It would be cheaper to hire an engineer who had his own tools.

          Not really. The top-end CFD codes are really very expensive indeed, and have "interesting" restrictions on use too. (I know of at least one that is considered to be a munition, being greatly useful for designing missile systems.)

          It's like when your car breaks - it's cheaper to hire a mechanic than to rent diagnostic computers and other tools the mechanic has and learn about internal combustion engines and how to use the tools you rented.

          Except that the focus is on renting to businesses, not consumers. While cloud computing can be made to work with consumers, you typically won't sell it to them "raw", but rather as packaged services that might be paid for directly or through advertising. This whole area of cloud-dri

          • by sm62704 ( 957197 )

            I know of at least one that is considered to be a munition

IINM, back in the 1980s, it was illegal to export dBase for pretty much the same reason.

            As to "cloud computing", I think it's a terrible name. It's akin to back when the clueless called DOS "doss" without even knowing what an operating system was or what DOS stood for. Database admins didn't coin the term, their pointy haired bosses did.

    • Re: (Score:3, Insightful)

      by samkass ( 174571 )

      You're making a lot of assumptions about needs, uptime, costs, and levels of in-house expertise when you make those blanket statements. There's always a balance between "relying on third parties" and "not invented here syndrome". In the latter case, you'll have people attempting things way outside their area of expertise and reliability or uptime will be significantly worse than if they'd let the experts do their job and paid a fair price.

    • by querist ( 97166 ) on Monday July 21, 2008 @08:43AM (#24272951) Homepage

I believe that you are partly correct in your assertion that cloud computing is, essentially, marketing hype intended to move us toward renting software.

      One advantage that cloud computing has over your proposed solution is that you are not paying for the idle time where your rack of computers is not doing anything. You only pay for what you use (within limits - I suspect a cellphone-like billing plan will emerge). This and the rapid scalability would be wonderful for smaller businesses.

Imagine that you have minimal needs during most of the year - word processing, billing, etc. - but on a quarterly basis you need to do your taxes (US businesses normally must file tax reports on a quarterly basis) and on an annual basis you need to do a large amount of computing - employee tax records, inventory, other annual processing. With cloud computing, if you are willing to accept having your data somewhere else that is not in your physical control, you simply ramp up the computing you need in December and then you're done. You finish on time and have a larger "bill" at the end of the month. This is very much like electricity - in cooler months you don't run your AC in the house, but when a heat wave comes along you run the AC more and you just pay a higher bill. You don't maintain your own power generation capacity, you simply use more of the available supply when you need it.

One of the nice ideas behind "cloud" computing is that computational power is treated as a consumable resource, much like electricity. Cloud computing, in that way at least, is similar to "grid computing". The differences are important, however.

      "Grid" computing is related to raw computing power being distributed for a large problem. Cloud computing, on the other hand, is not so much about one user being able to access huge amounts of processing power at once as it is about making computing resources available on demand and from anywhere.

Imagine it like this for a moment: every device that plugs into a wall outlet has its own "power meter" like the one that the electric company uses to determine how much to bill you each month. (Let's not go into a discussion about estimates, how often they really read the meters, etc., please. This is only an analogy.) You can take your devices anywhere, and when you plug one into the wall the little meter records how much electricity you use.

      So, when you are at a hotel, a friend's house, or the public library, you are still being billed personally for the electricity that your laptop computer is using. You can do what you like with the electricity as long as you don't violate any laws of physics and as long as you stay within the limits of your connection or access. (In other words, don't try to draw 40 amps from a 20 amp outlet - you'll trip the breaker.)

      But, instead of electricity, you are accessing computational services in the form of data storage and software as well as data transfer. The nice thing is that you can access it from anywhere (such as Google Apps) with little dependence on operating system or platform.

      If (and this is a big "if") they can work out the security concerns, this could be very useful for large businesses.

      • by Z34107 ( 925136 ) on Monday July 21, 2008 @09:48AM (#24274037)

        "Cloud computing" sounds exactly like how (I'd imagine, beinga young'un) mainframe time was rented back in the Bad Old Days. Except that one mainframe has been replaced with one "cloud."

However they billed for a batch job back in the '50s is how I'd expect them to bill for their cloud. Just replace dumb terminals or an operator with the interwebs, and you're good to go.

        • by nurb432 ( 527695 )

An entire building of mainframe hardware would be even closer to 'the cloud', as it could share processing across the CPUs once you got out of batch and into something interactive like TSO.

      • Re: (Score:2, Insightful)

Imagine it like this for a moment: every device that plugs into a wall outlet has its own "power meter" like the one that the electric company uses to determine how much to bill you each month...

Well, true, Cloud computing could provide that. But you are missing the point of the name 'Grid Computing' - the original idea was to model compute-time provisioning after a power *grid*: you plug your laptop into an outlet, and, voila, ...

        So, your wall outlet idea was already promised by Grid Computing -- what Cloud computing seems to add, IMHO, is support for (a) very simple interfaces to use the provided resources, and (b) support for specific usage modes. Grids are more all-purpose infrastructure, wh

      • by pr0nbot ( 313417 )

        If (and this is a big "if") they can work out the security concerns, this could be very useful for large businesses.

        Perhaps we will see 'cloud computing' at the LAN level rather than the WAN level.

      • "You can take your devices anywhere, and when you plug it into the wall the little meter records how much electricity you use."

        So you're saying cloud computing is just a computer network with distributed apps. Genius.

        Nice explanation, but I see the corporate consultants strike again.

Cloud computing is the renting of software. It has many advantages.

        For certain applications (e.g., Human Capital Management, otherwise known as HR) the benefits are substantial:

        A) Global companies can access the application from almost anywhere.

        B) Self-service functionality for employees.

        C) Software maintenance is done once for all renters.

        D) No requirement for software support staff.

        E) Backups and restores are the vendor's responsibility.

        F) Eliminates that huge front-end licensing charge for at-home applications.

    • There are times when it makes sense to rent - if you're catering to a fashion driven market where you're "in" one day and "out" the next, you want to be able to reach as much of the world as possible when you're "in" and not have to carry the infrastructure costs while you're "out." In this case, it might make sense to pay 10x the ownership costs while you're "in", since you'll still be making huge profits then, as long as you can drop your costs to 0 when you've got nothing going on. With these kinds of

    • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Monday July 21, 2008 @09:17AM (#24273529) Homepage Journal

      Relying on third party technology is never going to provide the reliability or uptime required.

      Even if the third party has way more experience and better hardware than you do?

      • "Even if the third party has way more experience and better hardware than you do?"

I've worked for some of the 'premier' ISPs and major multinationals, one being a consultancy to the business sector. I've seen better IT infrastructure in the average tech college. As for the expertise of the consultancy, as far as I could make out it consisted of a VB macro to create unique file names for the reports, written as PPT files. Oh yeah, the only other 'innovation' was splitting the research department up into tea
    • You can't get the same scaling from a physical server as you can get from "the cloud" for anywhere near the same price.

      • "You can't get the same scaling from a physical server as you can get from "the cloud" for anywhere near the same price"

Most people don't need such scaling, and I can get more for the price from a box hosted in a server farm. The reason "the cloud" would be cheaper is that they build and staff it at the lowest possible cost. Things happen like forgetting to test the emergency generators [theregister.co.uk], or what probably really happened, skimping on routine maintenance.
I think having all your servers with one provider (even if you are that provider) is usually a bad idea. People make mistakes. Hardware dies. Natural disasters happen.
    • When hiring your own rackspace, there are several things you must manage. How do you provide redundancy if a server goes down? Or a switch goes down, or the power supply to the whole building? There are answers, but they are expensive and complex. Furthermore, how much storage and bandwidth do you buy? Can you predict spikes and sudden growth?

      We've not yet arrived with cloud computing, but the potential seems obvious to me. Simply tell the system "host this domain, run this database, serve up these pages, h

The more straightforward solution is to hire some rackspace and host your own solution.

      Which might be very bad (server in a closet behind the women's bathroom -- it actually has happened). Or it might be very good. If you get good enough at running a datacenter, you might start renting out your spare capacity -- thus, Cloud Computing.

      Unless you were talking about renting some rackspace in a datacenter owned and managed by someone else. In which case, what's your point? The only difference here is the pricing model -- you'll be paying for all that rackspace 24/7, even if you only use it for fi

  • "Proprietary"? (Score:3, Interesting)

    by samkass ( 174571 ) on Monday July 21, 2008 @08:39AM (#24272903) Homepage Journal

    The word "proprietary" is a very vague term that's usually used to connote some sort of "them", where the "us" are the good guys.

    The bottom line is that wherever there is value, someone will find a way to charge for it. If this "cloud computing" really has no model under which anyone finds it valuable enough to commercialize it, then it's probably not going to be very popular anyway.

    • by sm62704 ( 957197 )

      I thought "us" was the good guys? Isn't "us" always the good guys? And isn't "them" always the bad guys?

      And after all, we're only ordinary men. Me and you, God only knows it's not what we would choose to do. -Pink Floyd

    • Re:"Proprietary"? (Score:4, Interesting)

      by Chandon Seldon ( 43083 ) on Monday July 21, 2008 @10:28AM (#24274847) Homepage

      The word "proprietary" is a very vague term [...] The bottom line is that wherever there is value, someone will find a way to charge for it.

      Proprietary implies lock-in and monopoly. The opposite is an "open standard" where there can be a competitive market.

      Think proprietary = monopoly, open = free market.

      • by samkass ( 174571 )

That seems like a somewhat shallow definition. I've heard commercial software called a "de-facto standard" while a competing open-source project that doesn't give CVS/repository access to anyone gets called "proprietary". Is PDF proprietary or "open"? Has it changed state? I've heard APIs called proprietary even when they're freely published and unencumbered by patents, if they're not attached to a standards body...

        Basically, the word means little more than "bad" these days.

        • Basically, the word means little more than "bad" these days.

          The fact that some people are confused about the meaning of a term doesn't mean that it has somehow lost its meaning.

          That seems like a somewhat shallow definition.

          Yes, a bit. The literal definition of "proprietary" is simply "having to do with property". As jargon in the software field, it means "controlled by a single entity".

  • Proprietary buzzwords, what will they think of next? What will the next dynamic paradigm shift be?

    In seriousness, with all of the glaring security issues and discomforts that people have with sharing their private information over a network, how will this idea ever seriously take off? Will the average home user ever consider such an idea? Personally, though it may be "inconvenient", I feel more secure having my data stored locally than working with it over a cloud.
  • by conspirator57 ( 1123519 ) on Monday July 21, 2008 @08:42AM (#24272939)

    so we'll end up with a sub-prime computing crisis?

    how can you bail out companies that fail to keep sufficient computing reserves in hand to cover their potential obligations?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      how can you bail out companies that fail to keep sufficient computing reserves in hand to cover their potential obligations?

Simple: The computing provider uses a standard contract that doesn't offer any particular service-level guarantee or compensation for downtime and calls it 'industry standard'.

      Then if they don't have enough reserves to cover their obligations they laugh in their customers' faces.

    • by dkf ( 304284 )

      so we'll end up with a sub-prime computing crisis?

      how can you bail out companies that fail to keep sufficient computing reserves in hand to cover their potential obligations?

Well, on one level, that degree of commoditization represents a rather large success, so I'd be happy enough.

      What you will see will be a market economy in computing. Some providers will be cheap-and-cheerful bit-shifters, others will provide stronger guarantees and/or fancier service but cost more. The customers will vote with their money according to how much they value things. What is needed though is a better way to express contracts in electronic form so that customers properly know what they're getting a

      • while you see roses, I look at other commodity industries that require infrastructure, like power. California in particular had a great deal of fun with regulation and subsequent gamed "deregulation". The "power" companies saw there was no good reason to own production capacity and so divested themselves of power generation stations only to be bitten by that decision later. I guarantee you that some business/marketing idiot will push the market that way and that much market turbulence followed by even mo

  • Pretty much any technology leads to both open implementations and proprietary implementations. The central question is whether the STANDARDS for interacting with those implementations are open or proprietary. Maybe you deploy Java to a proprietary WebLogic server, or an open JBoss server... but you're dropping basically the same EAR or JAR file in either case. THAT'S one of the key factors determining whether a technology will catch on.

    Before you can start developing the proprietary or open implementa
    • I humbly disagree. A "cluster" is something that every entity (academic or business-like) in need of moderate computing power considers using nowadays. The grid is used by international collaborations, mainly scientific ones like the LHC ones, LIGO, and the protein-folding guys. The cloud is simply a rehashing of the grid, perhaps more business oriented. And I can assure you that people DO care.
  • I believe clouds should be free, and I'd like to do business with clouds.. not Storm clouds, however.
  • by sm62704 ( 957197 ) on Monday July 21, 2008 @08:47AM (#24273015) Journal

Today's forecast: cloudy. This afternoon, continued cloudy with occasional periods of distributed computing.

    Tonight: Dark, with periods of light toward morning.

Tomorrow: Ignorant, with occasional words coined by the ignorant used by the knowledgeable. May be occasional clouds in the afternoon. In case of tornado, stay in your basement.

This looks interesting, but since this is just a list of requirements, nothing like it is actually available yet. I need it now.

    Some background:
For my employer, I've built an email forwarding service using Perl and Postfix. The part the user interacts with mostly is a database server on a web cluster, but the SMTP side is handled by (at the moment) 8 machines.

    This wouldn't be so bad, but they're getting a little flooded. If I could run the software in the cloud, it could grow and shrink dynamically, whi

    • You could use Amazon EC2 to do this today, with the caveat that some are blacklisting EC2 as mail forwarding servers due to spam.
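A rough sketch of the "grow and shrink dynamically" part, assuming the third-party boto library; the AMI ID and the queue-depth thresholds are invented for illustration:

```python
# Rough sketch of growing/shrinking a pool of EC2 mail-relay instances based
# on mail queue depth. Assumes the third-party 'boto' library; the AMI ID and
# the thresholds below are invented for illustration.
import boto

conn = boto.connect_ec2("YOUR_ACCESS_KEY_ID", "YOUR_SECRET_ACCESS_KEY")

def scale(queue_depth: int, running_ids: list) -> None:
    """Add a relay when the queue is deep, drop one when it drains."""
    if queue_depth > 10_000:
        reservation = conn.run_instances("ami-12345678",   # hypothetical AMI
                                         instance_type="m1.small")
        running_ids.extend(i.id for i in reservation.instances)
    elif queue_depth < 1_000 and len(running_ids) > 1:
        conn.terminate_instances(instance_ids=[running_ids.pop()])
```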

  • by istartedi ( 132515 ) on Monday July 21, 2008 @09:35AM (#24273831) Journal

Every buzzword-soaked trade publication on the planet has Cloud on the cover now. When looking for a job, I'm going to put my name and contact info on my resume. Then, in place of the usual job history and qualifications I will put, in the largest font that fits, one word: CLOUD. My pay will go up 25%. Then, in 6 months, people will be saying "remember cloud computing?".

    • by foxylad ( 950520 )

I disagree - 2008 will be remembered as the birthday of cloud computing, the same way 1981 is remembered as the birthday of the PC. The PC was easy to use, so all of us could have a computer on our desks. Good clouds will be easy enough to use that web applications will become mainstream.

      I think Google's "Run your app here" approach is better than "Here's a copy of your OS". Suddenly a developer can launch a serious web application, without having to worry about scaling, redundancy, or all the work that

  • The whole issue is a little cloudy to me.

  • buzzword (Score:3, Funny)

    by owlnation ( 858981 ) on Monday July 21, 2008 @10:16AM (#24274605)
    Ok, I hate buzzwords as much as the next person not wearing a pale blue shirt...

    But I'd like to suggest "cloudware" as a potential interchangeable word for "vapourware".

    For obvious reasons...
  • by paimin ( 656338 )
    I'll wait for Cloud 2.0
  • abandoned after ec2 came out. EC2 does most of what it sounds like you want... except for the redundancy in providers. Personally, I'm working on building a better provisioning system for my own VPS services at http://prgmr.com/xen [prgmr.com] but the idea is that it's not that hard, even with the way ec2 is now, to take your ec2 image and run it on another xen host, or take a xen image and run it on ec2. (now getting a 'public' image off amazon ec2 and downloading it, that's hard. but if it's your image, downlo
Skimming through the comments here, they seem to break down into several categories:

    • Don't need it because I can do it myself.
    • It is just like some other technology.
    • Beware of vendor lock in.

    I don't see any posts talking about

    here is what we used cloud computing for and here are the problems with the current platforms.

This tells me that whatever this technology is, it is still early and people are still testing the water. If we want some kind of standard or open implementation of clouds, we are going to need many more people using it to explore what is good and bad about the model.

    The beginning of

  • #1... one key feature of the dedicated model for web applications is a stable, static IP address.

    No it isn't. The key feature is a stable, reliable way to connect to your apps, wherever they are -- when I type example.com, I should be routed to the right place.

    This means a built-in hardware load balancer, dynamic DNS, or anything in between.

    Amazon's Elastic IP, for example, can take 15 minutes to switch between instances -- something like 10-12 minutes during which requests are sent to the old instance, then 2-3 minutes during which all traffic is dropped on the floor and no instance is reachable, and

    • Amazon's Elastic IP, for example, can take 15 minutes to switch between instances -- something like 10-12 minutes during which requests are sent to the old instance, then 2-3 minutes during which all traffic is dropped on the floor and no instance is reachable, and then, finally, traffic is routed properly to the new instance.

      Have you experimented at all with round-robin DNS records pointing to two different elastic IPs in different availability zones? If one availability zone goes tits-up, does round-robin yield acceptable performance, routing clients to the good zone?

      • Have you experimented at all with round-robin DNS records pointing to two different elastic IPs in different availability zones?

        Nope.

        Of course, the long-term plan is to have at least one on hot standby (or acting in another capacity) in a third zone -- the elastic IPs are not bound to availability zones, so however much this degrades performance, that'll only last 15 minutes.

        If one availability zone goes tits-up, does round-robin yield acceptable performance, routing clients to the good zone?

        I guess that really depends on the client properly detecting that one of the servers is dead, and that it should use the next one.

        Keep in mind, that's not the only way to do it -- dynamic DNS is still feasible, so that's another way to narrow the window during w

        • Keep in mind, that's not the only way to do it -- dynamic DNS is still feasible, so that's another way to narrow the window during which clients have to figure the situation out themselves.

Doesn't dynamic DNS take more than 15 minutes to propagate through the Internet? I think most DNS servers disregard TTL values of under 60 minutes, no?

          • I think most DNS servers disregard TTL values of under 60 minutes, no?

            Actually, I don't know. If you're right, then sure, Elastic IP is a much better solution.

            We are using dynamic DNS for internal things (where's the DB server now?), and neither Amazon's internal nameservers, nor my ISP's, nor anything in between, seems to be slowing it down. I've got a TTL of 100 on that.
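On the earlier question of whether clients cope when one round-robin address is dead: the behaviour being relied on is roughly "resolve every A record, then try each with a short timeout". A sketch of that client-side logic (hostname and port are placeholders; real browsers and MTAs vary in how well they do this):

```python
# Client-side failover that round-robin DNS relies on: resolve all addresses
# for the name and try each with a short timeout until one answers.
# The hostname and port below are placeholders.
import socket

def connect_any(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_INET, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)   # dead addresses fail after `timeout`
            return sock
        except OSError as err:
            last_error = err
    raise last_error or OSError("no addresses resolved")

# connect_any("example.com", 80)   # tries each round-robin address in turn
```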

  • I always thought that all those Xbox 360s and PS3s would make a great cloud when they are not being used. One could for example trade idle time for Xbox Live points to make it worthwhile. I would think Sony could cut a similar deal and render their movies on all of the idle PS3s.

  • Disclaimer, I'm the chief architect of a cloud vendor.

    I'd say the cloud buzz started when Google's Eric Schmidt started saying that they were in the "cloud computing business" circa 2006, and with the release of Nick Carr's "The Big Switch" book in January 2008.

    Here's the question: "Why is my enterprise IT so expensive, not innovative, and hard to use and my online services so affordable, innovative, and easy to use?"

    The answer could be one of three things:
    1. maybe the online vendors do things differently

    • Are online services cheaper and easier to use than the enterprise? In some cases, certainly. Some internal IT departments require a $10k+ tax on top of server purchases to cover IT installation and provisioning costs, and then take 2 weeks to 2 months to bring the server online.

      This, I think, is the Big Deal, here.

      My current client, in the mid '90s, embarked on a huge digitization project. They hired an external vendor to scan hundreds of thousands of documents for them, and the deliverable was approximately 100GB of TIFF images that needed to be OCR-ed into PDF. To accomplish this, they bought 5 servers, 5 Windows licenses, and 5 copies of Adobe Acrobat Capture (or whatever it's called) + data center costs. I think it took a month or two to process all of those TIFFs.

      If this c

The NY Times converted [nytimes.com] 4 terabytes / 11 million TIFF-based images and articles from their archives in 24 hours using 100 EC2 instances. And they continue to do it to this day. Cost? A couple hundred dollars.

        • Thanks for that link. It's great to have my curiosity settled regarding whether or not they could have processed all of those TIFFs in 24 hours on EC2.

          Do you happen to know which OCR software they used? It wasn't clear to me from the article.
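For readers curious what spreading a job like that TIFF conversion over many instances looks like mechanically, the partitioning itself is trivial; a sketch (file names and counts are stand-ins, and the OCR step itself is out of scope):

```python
# Trivial work partitioning for a batch job like the TIFF conversion above:
# deal the input list out to N workers round-robin. File names and counts are
# stand-ins; the OCR step itself is out of scope.
def partition(items, n_workers):
    buckets = [[] for _ in range(n_workers)]
    for i, item in enumerate(items):
        buckets[i % n_workers].append(item)
    return buckets

tiffs = [f"page_{i:07d}.tif" for i in range(1_000)]   # stand-in for 11 million
work = partition(tiffs, 100)
print(len(work[0]))   # 10 here; with the real 11M images, ~110,000 each
```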
