Intel Considering Portable Data Centers

miller60 writes "Intel has become the latest major tech company to express interest in using portable data centers to transform IT infrastructure. Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper than the current cost of building a data center. "The difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past," an Intel exec writes. Sun and Rackable have introduced portable data centers, while Google has a patent for one and Microsoft has explored the concept. But for all the enthusiasm for data centers in shipping containers, there are few real-world deployments, which raises the question: are portable data centers just fun to speculate about, or can they be a practical solution for the current data center expansion challenges?"
  • by djl4570 ( 801529 ) on Wednesday November 21, 2007 @07:56PM (#21442495) Journal
    I'm sure RBN would love "Datacenter in a Box." As soon as the authorities begin sniffing around, the datacenter can be trucked somewhere else. How long before someone steals one and sells it on eBay?
  • by llZENll ( 545605 ) on Wednesday November 21, 2007 @07:58PM (#21442521)
    Rule #1 in technology: anything portable is more expensive than if it were not portable. If it's so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead, bypassing the crate and all of the work and design involved with shoving and fitting the stuff into the crate?
    • AC for Computer Room (Score:5, Informative)

      by raftpeople ( 844215 ) on Wednesday November 21, 2007 @08:15PM (#21442679)

      Rule #1 in technology: anything portable is more expensive than if it were not portable


      Have you ever signed the bill for having AC installed for your computer room in an existing building? While that is just 1 expense of many, it makes me think rule #1 is not accurate.

      If it's so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead


      This is a good idea that I've seen used in certain situations. There are downsides, of course, but for a company on a budget or in flux w.r.t. facilities, this can be a good solution.
      • Raftpeople, I've googled "flux w.r.t." but I'm getting too much noise. Can you (or someone) explain it to me?
        • by caluml ( 551744 )
          "In flux" means changing (in a state of flux); "w.r.t." means "with regard to".
          HTH HAND.
          • Thanks, caluml. I was thinking it was a term of art with which I was not familiar. I'm also not used to seeing acronyms spelt using lower-case and periods.
            • I'm also not used to seeing acronyms spelt using lower-case and periods


              I think I picked it up from my math or physics profs; someone always did it that way (w.r.t.) and I guess they passed it along to me.
      • by Jartan ( 219704 )

        Have you ever signed the bill for having AC installed for your computer room in an existing building? While that is just 1 expense of many, it makes me think rule #1 is not accurate.

        That's not really an argument for portability. That's an argument for picking a better place to put your data center. If "outside in a box" is a cheaper answer, that still doesn't really have anything to do with it being portable. It's a given that if you put a data center in the middle of your climate-controlled building that

      • Have you ever signed the bill for having AC installed for your computer room in an existing building?

        Yes, I have. It can be significant. As are the generator and battery backup.

        Did you know that the current portable data centers have neither of these..... just places to hook them up externally? So don't be so sure about rule #1 being wrong.
    • By that rule, making the iPod so big it's not portable would make it cheaper to manufacture. Obviously that's not true.

      Rule one is actually:
      "When it comes to stating rules, Ignore IIZENII"
    • by mikael ( 484 ) on Wednesday November 21, 2007 @10:17PM (#21443383)
      Because the location is remote and there is no time to build a normal facility. The main purpose for these data centers is to handle expansion in limited areas, or while an existing data center is being upgraded.

      There are other applications for keeping everything on a truck:

      Valerie Walters Muscle Truck [valeriewaters.com] - a fitness centre that comes to you.

      Office trailers [google.com]

      Mobile kitchen trailers [aluminumtrailer.com]

      Hospital trailers [hankstruckpictures.com]

      Mobile retail and dwelling units [lot-ek.com] (Or shops and homes in containers).
    • by Kadin2048 ( 468275 ) * <slashdot...kadin@@@xoxy...net> on Wednesday November 21, 2007 @10:31PM (#21443463) Homepage Journal

      Rule #1 in technology: anything portable is more expensive than if it were not portable. If it's so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead, bypassing the crate and all of the work and design involved with shoving and fitting the stuff into the crate?
      Not really applicable here. The equipment is the same either way. It's not like buying a laptop versus a desktop, where one is carefully (and expensively) optimized and the other one isn't. The same pizza boxes/blades are going in the racks either way, whether it's in a traditional datacenter or in a cargo container.

      The advantage is more on the installation and infrastructure end. Think of it more as "mobile homes" versus "traditional houses." With a regular house, you have to get the plumber, electrician, HVAC guy, carpenters, etc. to your site. For a mobile home or trailer, you keep all those people in one place, and they build houses over and over and over, on an assembly line. And as a result, "manufactured homes" are a lot cheaper than regular ones.

      I think that's the model that you want to apply to datacenters: get rid of all the on-site installation and configuration, all the raised flooring and cabling; just have a team of people in a factory somewhere, installing and wiring all the servers into the containers, over and over. Then you just haul the container to the customer's site and plug it in. (In fact, since it's in a shipping container already, there's no reason to do this in a place where labor is expensive; you might as well assemble them in some third-world country somewhere; it would almost assuredly be worth the small cost for sea freight -- most of a container's transportation costs are in the last few hundred miles anyway.)

      The problem is mainly a chicken-and-egg one; in order to make "datacenters in a box" cheaper than traditional ones, you need to get an economy of scale going. You need to have an assembly line churning them out. If you don't have that, you're just taking the expense of a traditional data center and then adding a bunch of containerization and transportation costs to it.

      It might take a very long time to catch on, because there's such an investment in traditional datacenters right now, but if I worked doing datacenter server installations, it's probably something I'd be a little concerned about. Unlike with 'manufactured homes' and regular houses, there isn't much social stigma over having your web site served from a trailer.
      • No, the equipment is not identical. With the limited space and resources of a portable data center, and the lack of maneuvering room for operations like relocating racks or having a bank of projector screens to monitor large arrays of systems, you have to be very careful in what you install and why. And cooling has to be managed very carefully, along with power consumption: you can't simply put in another fan to route the cool air where you want, and you don't have floor space to disassemble equipment on si
        • lack of maneuvering room for operations like relocating racks or having a bank of projector screens to monitor large arrays of systems

          You do realize that in real data centers, the operations center where people sit is not in the same room as the equipment? No... looks like you don't realize that.

          And cooling has to be managed very carefully, along with power consumption: you can't simply put in another fan to route the cool air where you want, and you don't have floor space to disassemble equipment on
          • Oh, I've done plenty of work in operations centers. This includes co-location sites where getting the operations center staff to competently unrack, swap the component, and put the system back in the rack is very awkward indeed, and I've had to go do it myself in spite of the explicit contract I've faxed them spelling out their responsibilities in black and white.

            Hauling a machine all the way back to the "operations center" to do the work is often awkward to say the least, so you do the maintenance on the f
            • by lucifuge31337 ( 529072 ) <.ten.tcepsortni. .ta. .lyrad.> on Thursday November 22, 2007 @08:39PM (#21450265) Homepage
              Understand that my point is to stop the ghettoization you've obviously seen: again, real, proper data centers don't operate that way. Ever been to 365 Main in SF? Horrible. 60 Hudson? It's a travesty. This is what happens when you colo: morons put whatever they want in whatever rack you lease them and plug it into anything they can get an extension cord to. This is not a real data center.

              With containerized units being used as commodity infrastructure (which is increasingly easy to do with things like VMWare), this all goes away. No, it won't cover every possibility. You're still going to need somewhere to put those machines with weird cards, be they satellite connectivity, PSTN, etc. But the pure processing power portions of the DC can be kept "clean" and to spec with a few simple rules: the machines are what they are. If they break, an identical unit will be swapped back in.

              Yes, it takes a different approach to server utilization, but it's one that's becoming increasingly common in both large and small traditional data centers.

              I'm tired of spaghetti. I'm tired of some idiot plugging both inputs of PDUs into two whips on the same generator. I'm tired of morons putting server labels over the only cooling vents on the front/back of the machine (if they even bother to label them). I'm tired of waiting for some kid at the colo facility to find a crash cart to tell me what some customer's server that has gone unreachable says on the console. I'm tired of idiots not racking machines with rails, and simply stacking a few on top of each other.

              And let's face it - the guy putting his hands on the equipment in a noisy DC is usually not the best trained or most experienced. And that's not going to change any time soon. It's simple economics.

              These portable DCs are my OCD dream.
              • Re: (Score:3, Insightful)

                Ahh. This makes much more sense. I'd mod you up if I had the points available. I did resent the implication that I wasn't familiar with "real data centers". I've been involved in plenty of work in centers that are very "real" indeed, thank you, and in what you refer to as "ghetto" operations.

                I am also sick of spaghetti. The avoidance of spaghetti, alone, is a reason to pick consistent hardware manufacturers and spend the extra $500/server to get good Dell or HP systems instead of pizza boxes, and be able to
      • by Tim C ( 15259 )

        And as a result, "manufactured homes" are a lot cheaper than regular ones.
        There's also the fact that they're generally a damn sight smaller, of course.
      • by VENONA ( 902751 )
        "Not really applicable here. The equipment is the same either way. It's not like buying a laptop versus a desktop, where one is carefully (and expensively) optimized and the other one isn't. The same pizza boxes/blades are going in the racks either way, whether it's in a traditional data center or in a cargo container."

        People, and corporations, do different things with computers. I use a very carefully selected desktop. OTOH, I regard a laptop as an inherently unergonomic, flimsy, slow, POS. It's only advan
      • Now that is an interesting analogy, and I would argue it actually makes the argument for the GP. If mobile homes were that much better, why are they not more common? The answer is value... and I think that is the point of the GP: mobile data centers don't offer as much value as building your own data center.

        Here are the issues I could see with a containerized data center:

        1) Heating and cooling are more extreme than in a building.
        2) Space, since containers are fixed sizes and since this requires extreme management of temp
        • I think you're missing the point - which takes longer to build: 1) the equipment facilities area of a datacenter (power, cooling, etc.) plus the fit-out portion with the raised floor, racks, etc., or 2) the equipment facilities area of a datacenter plus open warehouse space to stack sea containers in (portable data centers)?

          They don't have to be left outside.
    • Not true; the reason a laptop is more expensive than a normal PC is the heating issues, the battery and the space limitations. If you remove these factors, the mass-production advantage of laptops makes them equal in price.

      Here is a rule, not just for IT, but for anything to do with production:

      If you can produce a complete standalone product from the factory, and just ship it, with minimal need for end-user setup, it's always cheaper in the long run.
      • If you remove the "heating issues, the battery and the space limitations" from laptops, don't you then have a desktop/"normal PC"?

        Re GP: Isn't the benefit to portable data centers, like many cheap laptops, that you can deploy them where you want them, quickly, and then, due to their inexpensive (and cheap/low value) nature, junk them in a year or two? It's not like this is a long term deployment solution where they are "just as good as" a normal Data Center, right? Think satellite laptop instead of deskt

  • by Z80xxc! ( 1111479 ) on Wednesday November 21, 2007 @07:59PM (#21442529)
    It seems to me that there would be too many hassles for this to ever work. The equipment in a data center is expensive, and that equipment doesn't usually like being jostled around in a truck, let alone bouncing around at sea for a while. Although in theory it's a great idea, I just don't see it ever really working out. Also, what about security? Data centers need good security. If it's so easily portable, then it wouldn't be that hard for someone to just take off with one, whereas you can't exactly stick a real data center on your getaway car. TFA suggests a warehouse to store the things in to address security and such, but doesn't that sort of defeat the purpose of having them be mobile?
    • by Feyr ( 449684 ) on Wednesday November 21, 2007 @08:07PM (#21442591) Journal
      Good points, and there's also the maintenance and upgrades to consider, unless you're Google and you just replace the rack when more than a certain % is defective. For the majority of places, clustering is the exception, not the norm, and you just can't leave 70% of your rack full of defective or outdated crap.

      Consider minor faults too. Do you replace the whole rack because a network cable went bad? I don't think so, and I don't want to be the one crawling around that shipping container stringing cat5.
      • by dokebi ( 624663 ) on Wednesday November 21, 2007 @10:19PM (#21443393)
        Google isn't doing that just because they have lots of money. No, it's actually cheaper to run things that way. And now with VMs running on clusters, the health of individual machines really doesn't matter anymore.

        So, when do you think a Redundant Array of Inexpensive Datacenters will become a reality? Psst. It'll be sooner than you think.
        • by Feyr ( 449684 )
          There's a matter of scale involved. When you have 20,000 racks, having 10% defective at any one time probably won't impact you. If you run 5-20 racks, I'm pretty sure it will, and your space is probably expensive as hell as well.
          • by dokebi ( 624663 )
            Really? Let's say you have *one* rack full of drives, say holding 244 drives at 168 TB [sun.com]. Now, as 10% of the drives fail, would users notice the 10% drop in capacity? Really? Do you run your disk array at 90% capacity without expanding?

            The fact is, even small clusters run at 50%-80% capacity, and if a whole datacenter is running at 80% capacity, they'll have to expand pretty soon. With these datacenters-in-a-box - snap, and it's done.
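
            A rough back-of-the-envelope check of that point (the 244-drive / 168 TB figure is from the parent's Sun link; the failure rate and utilization below are assumptions picked purely for illustration):

            ```python
            # Illustrative only: does normal free headroom absorb a 10% "fail
            # in place" loss? The numbers are assumptions, not vendor specs.
            raw_tb = 168.0          # raw capacity of the rack (244 drives)
            failed_fraction = 0.10  # drives written off over the service life
            utilization = 0.80      # how full the cluster typically runs

            usable_tb = raw_tb * (1 - failed_fraction)  # what survives the failures
            working_set_tb = raw_tb * utilization       # what the workload occupies

            print(f"usable after failures: {usable_tb:.1f} TB")      # 151.2 TB
            print(f"typical working set:   {working_set_tb:.1f} TB") # 134.4 TB
            print("fits" if working_set_tb <= usable_tb else "must expand now")
            ```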
      • And don't forget having to run outside every hour, 24 hours a day, to put more coins into the parking meter!
    • by drix ( 4602 ) on Wednesday November 21, 2007 @08:12PM (#21442649) Homepage
      Dig a little deeper -- you really think that large companies such as IBM, Sun, Google et al. would spend tens of millions of dollars developing these products and not give thought to the basic issues you have raised? I know, I know, this is Slashdot and this sort of armchair quarterbacking is de rigueur, but still... every one of these issues has been addressed on Jonathan Schwartz's blog, to say nothing of the myriad technical and marketing literature which I'm sure covers it in exhaustive detail. Here's a Blackbox getting hit with a 6.7 quake [youtube.com]; here's [sun.com] where he talks about shipping it, and security as well (it comes equipped with tamper, motion and GPS sensors, to say nothing of simply hiring a night watchman to call the cops if somebody comes prowling); and the answer to your last question is no, no it does not.
      • You still fail to address the problem of working inside one of those. A shipping container can only be so big. As Feyr said, what do you do about upgrading or replacing stuff? There's limited room to move around. You need to be able to access all the equipment, not to mention getting wiring and all set up. Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of mobile datacenters on the sea floor is
        • Re: (Score:1, Funny)

          by Anonymous Coward
          You still fail to address the problem of working inside one of those. A shipping container can only be so big. As Feyr said, what do you do about upgrading or replacing stuff? There's limited room to move around. You need to be able to access all the equipment, not to mention getting wiring and all set up.

          You could pretend you are in the ISS.

          Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of m
          • you mean of all the shit ass cabling jobs he has, he's also gotta spend time on some f*ing boat in the middle of the ocean with some arsehole that always wants to "pretend we're on the International Space Station", jesus christ man, you are NOT selling this idea
        • by TheLink ( 130905 ) on Wednesday November 21, 2007 @09:38PM (#21443169) Journal
          "You need to be able to access all the equipment"

          Why? If you're something like Google, I bet you could just RMA the containers with faulty stuff back and get new/refurbished ones already configured to your specs - all you need is to net-boot them for automated install. AFAIK Google doesn't fix servers once they fail or even take them out of the rack; they just have someone go around once in a while to take 'em out (like "garbage collecting" instead of "malloc/free").

          So for the big guys it'll be a bit like buying a prebuilt PC, only it's the size of a container.
        • by Kadin2048 ( 468275 ) * <slashdot...kadin@@@xoxy...net> on Wednesday November 21, 2007 @10:54PM (#21443591) Homepage Journal
          I think the short answer is that you don't. I've seen the photos of Sun's boxes, and while the racks do pull out to let you get to the equipment if you need to, I think you basically just view each server in the rack as a small part of a bigger assembly (the box itself), and if something goes faulty in a single server, you move its workload to another machine and just turn it off and leave it there, essentially entombed in the rack. Maybe there'll be some way of easily swapping out machines, or maybe it'll just be easier to leave them there until the entire container's worth of machines is obsolete, and then just dispose of the whole thing and get a new box hauled in. (Or send it back to somewhere for refurbishment, where they can strip it down completely, pull out all the machines, repair and replace, and then bring in a new one.)

          We think of rack space as being precious because of the way traditional datacenters are built and designed; I'm not sure that would still be true if you had a warehouse or parking lot full of crates (especially if they're stacked 3 or 4 high) instead. If you never unseal the box, rack space isn't a concern. Heck, if you have a football field of stacked containers, you might not even want to mess around with getting a dead one out of a stack if it died completely. Just leave it there until you have some major maintenance scheduled and it's convenient to remove it.

          This is getting into business models rather than the technology itself, but I could imagine a company selling or leasing boxes with a certain number of actual processing nodes and a number of hot spares, and a contract to replace the container if more than x number of nodes failed during the box's service life (5 years or so). Companies could buy them, plug them in, and basically forget about them, like the old stories about IBM mainframes. If enough units in the box failed so that it was close to running out of hot spares, then it could phone home for a replacement. As long as enough hot spares were provided so that you didn't need to do this often, it might be fairly economical.
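
          A minimal sketch of what that "phone home when the hot spares run low" check could look like; the node counts, threshold, and both helper functions are hypothetical, not taken from any actual Sun, Rackable, or Intel product:

          ```python
          import random
          import time

          HOT_SPARES = 40          # assumed spares shipped powered-off in the box
          REPLACE_THRESHOLD = 10   # phone home when usable spares drop this low

          def count_failed_nodes() -> int:
              """Stand-in for a real health sweep (ping/IPMI); randomized here."""
              return random.randint(0, 50)

          def request_replacement_container(spares_left: int) -> None:
              """Stand-in for the 'phone home' call to the vendor."""
              print(f"Only {spares_left} spares left; requesting a swap-out.")

          def monitor(poll_seconds: float = 24 * 3600) -> None:
              """Nodes are never repaired in place; just watch the spare pool shrink."""
              while True:
                  spares_left = HOT_SPARES - count_failed_nodes()
                  if spares_left <= REPLACE_THRESHOLD:
                      request_replacement_container(spares_left)
                      break  # this box is done; wait for the replacement
                  time.sleep(poll_seconds)
          ```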
        • You don't put them at sea. You put them somewhere where you have network and power, probably attached to an existing data center. You plug it in, and run e.g. 100Gig network to the container. This isn't rocket surgery. If one of the blades goes bad, you walk in, pull it out, and slide in a new blade. Not exactly difficult.
        • You need to be able to access all the equipment, not to mention getting wiring and all set up.

          Why? I'd think that the wiring and everything would be pre-built into the container itself with standardized fasteners, so that replacing machines inside the crate would be as simple as pulling out the old box/blade and dropping in a new one. In fact, because of the standardized layout, I'd think that replacing equipment would be considerably easier. Think Lego bricks vs. jigsaw puzzles. Which are easier to put together?

        • by goodtim ( 458647 )

          Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of mobile datacenters on the sea floor isn't going to do you much good, now is it?

          At $100/barrel, a supertanker with 2 million barrels of oil is probably worth a lot more than a bunch of computers. Especially if you wreck the thing and it costs you another $100m to clean up.
      • by jo42 ( 227475 )

        every one of these issues has been addressed on Jonathan Schwartz's blog
        Yes, but we still maintain that it is a solution looking for a problem.
      • Because, with virtual server architectures on the rise, a new data centre can mean one or two large and very generic servers and simplified connections. This means the configurations can be highly standardised. The real difficulty would be ensuring your network of backed-up virtual server files was configured in a portable fashion and properly documented, as in a config management database. You wouldn't need to worry about the builds so much, just the right config of virtual drives. Get it right an
        • Because, with virtual server architectures being on the rise, a new data centre can mean one or two large and very generic servers and simplified connections.

          That's it, plus you get the whole thing built and assembled in the factory at factory labor rates rather than on site at consultancy rates.

          If you have a scheme that requires a large deployment of like equipment, it could well be attractive. The key would be to build enough redundancy into the basic box that the hardware never needs to be touched

          • The idea of data center plus power plus cooling in a package is definitely attractive for many applications. Rig the thing in Mountain View, send it off to Niagara Falls or some other place with real cheap power to operate it.

            Rig it in Mountain View? Try Hong Kong. Or maybe Wuhan (where Foxconn has its megafactories). The cost to ship a container from California to New York is a substantial fraction of what it costs to ship it from China to NY; most of the cost is in the "last 500 miles" -- the leg of the trip by truck from the nearest big intermodal facility. Plus, most of the servers, cabling, and other stuff going into the container is made in the Far East anyway, so it would make sense to assemble the thing there, rather th

        • Re: (Score:3, Informative)

          by kent_eh ( 543303 )
          About 14 years ago, I was at Ericsson in Richardson, TX for some training. They had a cell switch installed in a set of semi-trailers that was specifically for disaster recovery (though they did use it as a test bed when it wasn't required for DR).
          If a customer lost a switch location due to fire, earthquake, or whatever, they could deploy this unit anywhere in North America within drive time plus 3-5 days for configuration.
          The customer would be scrambling to get leased lines and microwave re-routed to the t
    • A shipping container isn't something that just any local four-fingered grunt can drive off with. I just don't see a whole lot of people being able to drive off with a shipping container without anyone noticing. There aren't many people with a crane, so narrowing down who took it is probably not that tough, and I don't think cranes are cheap to rent. Some people could pull it off, but that group isn't much larger than the group that can sneak into a fixed data center and steal stuff.
  • Wouldn't want to be the trucker driving that box around... that's for sure. And if Google didn't go through with it, why would anyone else? :P But why does Intel really need multiple datacentres anyways? I mean, they have to host their website and drivers and such, but what else really...
    • It's nothing new for that much money's worth of equipment to be in a single truck. I know that quite often trucks full of a datacenter's worth of racks drive to the destination..

      That said, I wonder if the 'portable' or 'modular' aspect of it is really useful/cost-saving. "Because it's a small, contained environment, cooling costs are far less than for traditional data centers" - but why is it the case that an on-site constructed datacenter *must* be larger? I look at the pictures and it seems more like the 8' wi
      • Re: (Score:3, Informative)

        by Thumper_SVX ( 239525 )
        It's a little bit of a conceptual shift from datacenters of old... and it's not for everyone. Having said that, this is exactly the sort of thing we've been talking about for a while where I work ever since Sun talked about their product.

        Data center processing capabilities have increased dramatically over the years, but generally the problem I have seen in most datacenters these days is simply that they are not designed for the heat and power load per square foot that blades and high-density systems require
  • If there were to be a Woodstock today, the centerpiece would be a portable data center with high-power wireless antennas mounted on the roof.

    People would be paying $100 for juice, but not because they're thirsty, rather because their laptop battery is almost dead.
    • by weighn ( 578357 )

      If there were to be a Woodstock today...
      ... there would be 100k kids on e and crystal meth bouncing around to mostly shit music punctuated by about 1 or 2 acts that are worthy of seeing.

      What you are describing is cool, but more like a LUG meet on steroids :)

      • Re: (Score:2, Funny)

        by Anonymous Coward

        If there were to be a Woodstock today...
        ... there would be 100k kids on e and crystal meth bouncing around to mostly shit music punctuated by about 1 or 2 acts that are worthy of seeing.
        There has already been a Woodstock like that. It was in 1969.
  • Yawn (Score:1, Informative)

    by Anonymous Coward
    Sun beat them to it with Project Blackbox http://www.sun.com/emrkt/blackbox/index.jsp [sun.com] Next!
  • I don't get it. How portable could a data center be if it's dependent on hard-wired infrastructure? Adequate power, network/WAN (fiber?) connectivity, etc.

    This stuff takes time to set up....

    How cost effective would it be to have a 'portable' DC when you'd have to pay for at least 1 additional set of network and power connections?

    Might actually be more efficient to just have 2 separate DCs. Like a primary/COB kind of setup....
    • Re: (Score:3, Informative)

      How cost effective would it be to have a 'portable' DC when you'd have to pay for at least 1 additional set of network and power connections?

      (1) Microwave link or mobile repeater. Costly and needs preplanning, but no external cables. (2) "Portable" can mean "nice quiet diesel or LPG powered generator in the back". Theoretically you could have it up and running while it's being delivered, without waiting for it to reach its destination. I think the target word is "hurry", not "cheap". Fast setup, as in

  • by Anonymous Coward on Wednesday November 21, 2007 @08:10PM (#21442629)
    > Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper.

    Steps:

    1. Get a box.

    2. Put your junk in the box.

    3. Make her access the box.

    and watch the love, baby...
  • by Synthaxx ( 1138473 ) on Wednesday November 21, 2007 @08:27PM (#21442765)
    This isn't about the datacenter; this is a stroke of genius.

    You see, by closing the door, the actual data contained within is either there or not there.

    What they've done is run a network cable to that same box to check this, thereby solving one of the most fundamental questions of the universe!

    Like I said, absolute genius!

  • by timothy ( 36799 ) on Wednesday November 21, 2007 @08:32PM (#21442809) Journal
    If you have a business which can be housed in a portable structure of any kind, it makes it more likely you can move it across a border (state or national) when that makes sense, or just seem inclined to do so if the local powermongers decide they want more (of your) pie.

    Coal mines? Hard to do it.

    Hospitals? Difficult.

    Big factories? Tough.

    Data centers? If built into containers or container-friendly, you can start packing now ;)

    (On the other hand, it also means that data-centric companies can angle for that famous and annoying "corporate welfare" by flirting with various states and municipalities seeking better goodies like tax abatements, "free" infrastructure additions, etc.)

    timothy
    • Re: (Score:3, Insightful)

      While this is probably one of many possibilities introduced, I think what most people are missing isn't that this is a 'mobile' data center... but that it's 'modular'.

      In the case of Sun's Black Box project it's literally a data center in a standard shipping container. You can do almost anything with that.

      Here's one scenario.

      Imagine a web hosting company start-up. Their goal is to grow as large as a big server provider like The Planet, but they don't have several million to invest, and even if they did, they won'
    • Data centers? If built into containers or container-friendly, you can start packing now
      However, you have to plug your box into two grids, the electrical and the data grid. Game playing with states most often has to do with labour costs, which aren't on the table here.
  • I really think portable is the wrong approach. The advantages that they are seeing come from having a compact modular unit that can be plugged in. So what they need to do is develop a building with slots that these modules can plug into. Then I think it would be more attractive, and the whole wariness about it being in a storage container can go away.

    I couldn't imagine any hosting provider touting the fact that they have portable data centers built out of shipping containers.
    • "So what they need to do is develop a building with slots that these modules can plugin to."

      Simple warehouse space with cabling and backup power would do, and a hardened data center would be especially easy to build. Military containerization by vendors like Sea Box means that there are many different styles of container to choose from.

      Upgrades could be easy too. Just truck in new modules and install. Container handling equipment by companies like Tandemloc (good online catalog w. drawings) allows precise p
  • I saw one of these at a trade show recently. It was an intermodal container (like an 18-wheeler hauls around). There was a HUGE power connector, an input and output pipe for cooling water, and a network interface. I don't know about the economics one way or another, but it was cool to see. From the outside, you can't help but think that someday we'll have the same thing with a normal power cord, and no cooling water, in something the size of a shoebox. Perhaps because the network connector was no bigger than the one on m
  • Military (Score:4, Interesting)

    by SirKron ( 112214 ) on Wednesday November 21, 2007 @09:05PM (#21442977)
    The military already uses these. The Marines use them to bring their network onto a ship during transit and then into a tent when deployed.
    • by dave562 ( 969951 )
      This is what I was thinking. Perhaps the real audience for this technology isn't even in the United States, or the developed world for that matter. Maybe they are planning on selling these things to people like the UN, the United States military, and other similar organizations that need to quickly establish a presence in parts of the world that are not conducive to the kind of long-term investments required to build a traditional data center in a more stable part of the world. I'm sure that with
    • The Australian military uses them as well. Heat dissipation is the biggest challenge. The ADF usually leaves them mounted on the back of a truck.
    • Doom Box! But seriously, getting a permit to install one of these in my town (Thousand Oaks, Calif) would be more difficult than poking butter up a wildcat's ass with a hot awl. ...Lorenzo
    • by Kjella ( 173770 )
      No doubt. The military also spends a lot of money on expensive things because they *have to*. So it doesn't necessarily follow that they're a good example to follow.
  • As population density increases and the raw materials required to generate power become more difficult to obtain in the face of increased demand for them, the likelihood of brownouts and rolling blackouts becomes more and more of a reality every year. Do you think that the ability to move a data center from one location to another might have anything to do with that? Data centers suck up a lot of power. Just because a data center might be in a place where it has a favorable spot on the rolling brownout sch
  • Like Prefab Houses (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday November 21, 2007 @09:24PM (#21443085) Homepage Journal
    Prefab houses are an increasingly popular method for home construction. They're not really "portable", except when they're delivered from the factory to the "installation site". They're not interesting because of their containers, but because of the economics and other efficiencies in delivering and installing them.

    Instead of the house builders building each house as a completely custom job, in an unfamiliar site, in all kinds of weather, with only the tools and materials they bring to some residential area, they've got full control at the factory. They don't have to ship in all the excess materials that they then used to have to haul back out as garbage. They can keep a pipeline filled with houses they're building, and deliver them very shortly after they're ordered, quicker than it takes to actually build one. And since so much is standardized, they can mass-produce them and otherwise get scale economies that reduce costs. Since they aren't inventing a new, complex device from a new, arbitrary blueprint with every home, they are skilled not just in their tools and materials but in producing that exact house, and the already-solved problems mean higher-quality homes, delivered quicker.

    All that is also true of datacenters. Weather isn't as much of an avoided problem, because a datacenter is usually installed inside an existing building anyway. But all the rest of the efficiencies are in effect. So datacenters can be cheaper, better, and deployed quicker. This trend makes a lot of sense.
  • Security? (Score:3, Insightful)

    by Billly Gates ( 198444 ) on Wednesday November 21, 2007 @10:15PM (#21443371) Journal
    To have tens of millions of dollars just sitting in a nice convenient portable container that can be hauled by anyone with a truck seems all too tempting.

    Now, if some of the data in there included credit card numbers and maybe Social Security numbers of employees as well, then you could make money by identity theft too.

    I suppose only a minimum-wage security guard is guarding it too, so anyone with a truck, a fake uniform, and a nametag with a bogus company name can just drive in, convince the guard, and drive off with it.

    Seems risky.
    • Why? You think someone can just easily pull up a diesel tractor and haul it away? They have security on these things; nobody's going to steal it. As for data, it's probably attractive to make these containers diskless. Disks fail fairly often and need more cooling. If you network-boot and use e.g. iSCSI, you can run the trailer hotter and not have as many hardware failures.
    • To have tens of millions of dollars just sitting in a nice convenient portable container that can be hauled by anyone with a truck seems all too tempting.

      It's a sea container. It doesn't have wheels.

      So you'd really need "anyone with a 60 ton or larger crane" as well as their friend "with a truck and sea container trailer or low bed trailer". As well as "someone who can get into the secure warehouse these things will most likely be stored/installed in" and your scenario seems highly unlikely.

      Oh....an
  • by SeaFox ( 739806 ) on Wednesday November 21, 2007 @11:01PM (#21443617)
    Large corporations will love this. Every time the property tax abatement runs out on their current data center location, they can just lay off all the employees and truck the data center to another city.

    Coming soon: Portable Oil Refineries.
  • I'll pass.

    I like my data centers to be bunker-esque. Not some flimsy trailer parked in the back lot that any schmo with a pair of cable cutters can take offline, or reduce to component-level bits and pieces with a stick or two of dynamite.
  • Wow! Is it just me or did the brainpower meter at Slashdot rise a few more degrees? Finally we're not begging the question.
  • So far, from what little I've seen, Sun has this one pretty well covered. I'll admit that I haven't checked out the competition, but Sun has been promoting the BlackBox for a while now (check out this video of it in 6.7 magnitude earthquake conditions: http://sunfeedroom.sun.com/?skin=twoclip&fr_story=FEEDROOM198997&rf=ev&autoplay=true [sun.com] Project Blackbox Test)

    With everything else they are doing, I think they are cornering this market. Intel getting into it is just solidifying that it's a desirabl

  • I visualise the data centers being like the ones in this YouTube video.

    Just like this, but with servers inside.

    mobile server system [youtube.com].
  • by Anonymous Coward
    2007: government worker loses unencrypted laptop
    2017: government worker loses unencrypted portable data center
  • If you think about it, data centers, the way they are built and run, are secure, redundant, and very pricey. Most data centers rent out space to many companies. What happens if something happens to the data center (fire, flood, earthquake, hurricane, or other disaster)? The companies who are paying good money for the data center's service would be out of service and losing money. If you sold portable data centers to companies, they would buy what they need and have it shipped to their locations. They keep
  • This kind of reminds me of the "Blue Boxes" in The Pretender series... they had a whole lot of portable storage devices spread across the USA that were all linked together and synchronized every (IIRC) Friday with The Center's mainframe.
  • Army (Score:1, Insightful)

    by Anonymous Coward
    I work IT in the Army. Portable is a bad idea because I wouldn't know what to do with my free time if I weren't constantly tearing down and setting up. Starting over every 3 months keeps me on my toes.
  • They could also drive off with your mobile data center.
    What about physical security of such outfits?
  • I strongly dispute the statement "there are few real-world deployments." From what I hear, Sun's Blackbox is flying off the shelf (figuratively speaking of course, I'd love to see the "shelf" that can hold a few of those...)

    When Blackbox was first introduced, I tried to convince a friend of mine in a position of managerial influence at Sun to lend one to my employer; we were having data centre space issues and were willing to be a poster child for this new product.

    His reply was a simple, no-can-do, they're al
  • An article with a little history of the idea, discussion of cooling (you need lots of water and/or a chiller outside the box to get rid of heat) and such:
    http://www.sciam.com/article.cfm?id=B1027B68-E7F2-99DF-352186A04761EB7F&page=1 [sciam.com]
  • The reason a "data center in a box" sounds so attractive is that the amortization schedules are different for IT equipment and buildings. If building infrastructure can last its advertised 25-30 year life, then a tilt-up or factory-assembled type of building structure is more cost-effective, architecturally, than containerized data centers.

    Where I was going to try and make my billions (one can dream) is by building the box as an IT unit or member of a larger virtual grid. Provide significantly extra capacity
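
    To illustrate the amortization point, here is a toy annualized-cost comparison; every dollar figure and lifetime below is invented purely to show the shape of the trade-off, not drawn from the article:

    ```python
    def annualized(capital_cost: float, life_years: float) -> float:
        """Straight-line amortization, ignoring cost of capital and opex."""
        return capital_cost / life_years

    # Invented numbers: a brick-and-mortar shell at its advertised 25-30 year
    # life versus a container fleet refreshed along with the IT gear inside.
    building_per_year = annualized(20_000_000, 27.5)   # ~$727,000/yr
    containers_per_year = annualized(4_000_000, 5)     # $800,000/yr

    print(f"building shell: ${building_per_year:,.0f}/yr")
    print(f"containers:     ${containers_per_year:,.0f}/yr")
    # If the shell really delivers its full advertised life, it wins; if IT
    # refresh cycles force earlier rebuilds, the containers close the gap.
    ```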
  • I think these centers should be offered by the vendor as a kind of "insurance" against fire etc., or, when you order a datacenter that will take some time to build, you get a portable one in the meantime. So the vendors could have a pool of these.
