
Intel Considering Portable Data Centers

miller60 writes "Intel has become the latest major tech company to express interest in using portable data centers to transform IT infrastructure. Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper than the current cost of building a data center. "The difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past," an Intel exec writes. Sun and Rackable have introduced portable data centers, while Google has a patent for one and Microsoft has explored the concept. But for all the enthusiasm for data centers in shipping containers, there are few real-world deployments, which raises the question: are portable data centers just fun to speculate about, or can they be a practical solution for the current data center expansion challenges?"
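Intel's 30-50% claim can be made concrete with a quick sketch. The dollar figures below are hypothetical placeholders for illustration, not Intel's actual numbers; only the savings range comes from the article.

```python
# Illustrative cost comparison for a containerized vs. brick-and-mortar
# build-out. The $/kW figure is an assumed placeholder; the 30-50%
# savings range is the claim quoted in the article.
BRICK_AND_MORTAR_COST_PER_KW = 20_000   # assumed $/kW of IT load
SAVINGS_LOW, SAVINGS_HIGH = 0.30, 0.50  # Intel's claimed savings range

def container_cost_range(it_load_kw: float) -> tuple[float, float]:
    """Return (low, high) estimated cost of a containerized build,
    applying the claimed 30-50% savings to the brick-and-mortar cost."""
    base = it_load_kw * BRICK_AND_MORTAR_COST_PER_KW
    return base * (1 - SAVINGS_HIGH), base * (1 - SAVINGS_LOW)

low, high = container_cost_range(500)  # a hypothetical 500 kW deployment
print(f"container: ${low:,.0f}-${high:,.0f} "
      f"vs brick-and-mortar: ${500 * BRICK_AND_MORTAR_COST_PER_KW:,.0f}")
```

On these assumed numbers, a 500 kW containerized deployment would land between $5M and $7M against a $10M conventional build, which is the scale of difference the Intel exec is pointing at.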
This discussion has been archived. No new comments can be posted.


  • Yawn (Score:1, Informative)

    by Anonymous Coward on Wednesday November 21, 2007 @09:06PM (#21442577)
    Sun beat them to it with Project Blackbox http://www.sun.com/emrkt/blackbox/index.jsp [sun.com] Next!
  • by drix ( 4602 ) on Wednesday November 21, 2007 @09:12PM (#21442649) Homepage
    Dig a little deeper--you really think that large companies such as IBM, Sun, Google et al. would spend tens of millions of dollars developing these products and not give thought to the basic issues you have raised? I know, I know, this is Slashdot and this sort of armchair quarterbacking is de rigueur, but still... every one of these issues has been addressed on Jonathan Schwartz's blog, to say nothing of the myriad technical and marketing literature which I'm sure covers it in exhaustive detail. Here's a Blackbox getting hit with a 6.7 quake [youtube.com]; here's [sun.com] where he talks about shipping it, and security as well (it comes equipped with tamper, motion and GPS sensors, to say nothing of simply hiring a night watchman to call the cops if somebody comes prowling); and the answer to your last question is no, no it does not.
  • AC for Computer Room (Score:5, Informative)

    by raftpeople ( 844215 ) on Wednesday November 21, 2007 @09:15PM (#21442679)

    Rule #1 in technology: anything portable is more expensive than if it were not portable


    Have you ever signed the bill for having AC installed for your computer room in an existing building? While that is just one expense of many, it makes me think rule #1 is not accurate.

    If it's so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead?


    This is a good idea that I've seen used in certain situations. There are downsides, of course, but for a company on a budget or in flux w.r.t. facilities this can be a good solution.
  • Re:Connectivity? (Score:3, Informative)

    by Nefarious Wheel ( 628136 ) * on Wednesday November 21, 2007 @10:18PM (#21443061) Journal
    How cost effective would it be to have a 'portable' DC when you'd have to pay for at least 1 additional set of network and power connections?

    (1) Microwave link or mobile repeater. Costly and needs preplanning, but no external cables. (2) "Portable" can mean "nice quiet diesel or LPG powered generator in the back". Theoretically you could have it up and running while it's being delivered, without waiting for it to reach its destination. I think the target word is "hurry", not "cheap". Fast setup, as in rapid market capture or disaster recovery, is the point. And I know there are better ways to do DR, but not all of your customers think ahead like that, do they? Only the ones who probably don't need you in the first place.

    Remember, if all of your customers had perfectly-run data centres, you'd probably be out of a job.

  • by mikael ( 484 ) on Wednesday November 21, 2007 @11:17PM (#21443383)
    Because the location is remote and there is no time to build a normal facility. The main purpose of these data centers is to handle expansion in limited areas, or to fill in while a new data center is being upgraded.

    There are other applications for keeping everything on a truck:

    Valerie Walters Muscle Truck [valeriewaters.com] - a fitness centre that comes to you.

    Office trailers [google.com]

    Mobile kitchen trailers [aluminumtrailer.com]

    Hospital trailers [hankstruckpictures.com]

    Mobile retail and dwelling units [lot-ek.com] (Or shops and homes in containers).
  • by kent_eh ( 543303 ) on Thursday November 22, 2007 @12:46AM (#21443871)
    About 14 years ago, I was at Ericsson in Richardson, TX for some training. They had a cell switch installed in a set of semi trailers specifically for disaster recovery (though they did use it as a test bed when it wasn't required for DR).
    If a customer lost a switch location due to fire, earthquake, or whatever, they could deploy this unit anywhere in North America within drive time plus 3-5 days for configuration.
    The customer would be scrambling to get leased lines and microwave re-routed to the temporary location, but they could probably have some service restored to their customers within a week or two. Especially if they had a few COWs (cells on wheels) to use.
    A lot better than the 10-12 weeks it took to install a new switch from scratch.
  • Re:The Trucker... (Score:3, Informative)

    by Thumper_SVX ( 239525 ) on Thursday November 22, 2007 @12:33PM (#21446653) Homepage
    It's a little bit of a conceptual shift from datacenters of old... and it's not for everyone. Having said that, this is exactly the sort of thing we've been talking about for a while where I work ever since Sun talked about their product.

    Data center processing capabilities have increased dramatically over the years, but the problem I have seen in most datacenters these days is simply that they are not designed for the heat and power load per square foot that blades and high-density systems require. Most datacenters in use today were designed and/or built in the 80s and 90s, when they had very specific requirements as regards power and heat load per square foot... and that was reasonable at the time. Higher-density systems such as blades are a great idea and provide much more processing capability per square foot than traditional racked servers; however, it has become tough to keep up with their heat output and power requirements on a per-rack basis. The datacenter where I work was built in 1995 and has been retrofitted no fewer than four times in the last few years to increase cooling capacity, and we're rapidly reaching the limits of what we can do with the physically constrained space we have. At the moment, if we add a new power feed or AC unit, we will actually need to remove racks to put it in. Given that our racks are already running at an average of 85% physical capacity, you can see where we have a problem.
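The density mismatch the commenter describes is easy to quantify. The figures below are assumed round numbers for illustration (a mid-90s design load and a densely loaded blade rack), not measurements from any particular facility:

```python
# Back-of-envelope check of why a 1990s-era room struggles with blades.
# All figures are assumed round numbers for illustration.
LEGACY_DESIGN_W_PER_SQFT = 50   # typical mid-90s design load, W/sq ft
RACK_FOOTPRINT_SQFT = 20        # rack plus its share of aisle/clearance
BLADE_RACK_KW = 20              # a densely loaded modern blade rack

# Actual load density a blade rack imposes on the floor it occupies
blade_w_per_sqft = BLADE_RACK_KW * 1000 / RACK_FOOTPRINT_SQFT

# How far past the original design envelope that is
overload_factor = blade_w_per_sqft / LEGACY_DESIGN_W_PER_SQFT
print(f"{blade_w_per_sqft:.0f} W/sq ft, {overload_factor:.0f}x the design load")
```

On these assumptions a single blade rack demands roughly 20x the cooling the room was engineered for, which is why retrofits keep eating floor space.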

    These sorts of portable datacenters, though, are only for those who design their systems correctly. Most applications these days can leverage "fat" back-end systems (databases and so forth) with "thin" front-end application servers. My proposal that's going through the mill right now is to invest in one of these containers and migrate all of the front-end systems into it, leaving only the data and storage (SAN) sitting in the existing datacenter. That way, we can eliminate approximately 60% of our servers, which themselves make up about 40% of the heat and power load in our datacenter today. We can then continue to expand the storage (which is desperately needed; we just have no more floor space for SAN) and leverage either powerful blade servers or powerful standard rack servers as consolidated database clusters and possibly virtual machine space. Where we need application-server capacity, we can put a server out in the "trailer" and connect it across a fat link into the existing datacenter (bonded gigabit), thereby providing incredible flexibility.
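The arithmetic behind that proposal can be sketched directly from the two percentages the commenter gives (60% of servers, carrying 40% of the load); the total-load figure below is a hypothetical placeholder:

```python
# Sketch of the proposed front-end/back-end split, using the commenter's
# own percentages. The 300 kW total is an assumed example figure.
servers_moved_frac = 0.60  # fraction of servers moved to the container
load_moved_frac = 0.40     # their share of heat/power load

def remaining_load(total_kw: float) -> float:
    """Heat/power load left in the old datacenter after the migration."""
    return total_kw * (1 - load_moved_frac)

print(remaining_load(300.0))  # a 300 kW room drops to 180 kW
```

Freeing 40% of the room's thermal budget is what makes the continued SAN expansion possible without touching the building.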

    The cost may seem prohibitive, but what are our other options? Right now, the only alternative is to build a new dedicated datacenter building, which is vastly more expensive, and we've been playing catch-up for a long time trying to meet demand from a rapidly growing user base while seriously constrained on space. One of these trailers is actually an incredible bargain compared to the cost of properly designing, architecting, engineering and constructing a new building to house our ever-growing application requirements.

    So what about server failures? Personally, I feel that the best way to proceed is to run the trailer up to about 85% utilization, leaving plenty of idle servers in place. Network boot and similar tools ought to provide rapid provisioning within the trailered data center, so in the event of a failure you just use network boot to bring up another node and call for service. Hey, we already have all of our servers under maintenance with the manufacturer anyway, and most of the time this is exactly what we do. Plus, what if we grow again? Add another trailer. Simple, cost-effective and efficient.
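The "run at 85%, keep idle spares" policy amounts to a simple headroom calculation. This is a sketch of that policy with an assumed 200-node trailer; the node count and the 85% target are illustrative, not from any real deployment:

```python
# Sketch of the "85% utilized, idle spares" failover policy described
# above. Node counts are assumed example figures.
def spares(total_nodes: int, target_util: float = 0.85) -> int:
    """Number of nodes kept idle as hot spares under the utilization target."""
    return total_nodes - int(total_nodes * target_util)

def can_absorb(total_nodes: int, failures: int) -> bool:
    """True if failed nodes can be re-provisioned (e.g. via network boot)
    onto the idle spares without losing capacity."""
    return failures <= spares(total_nodes)

print(spares(200))            # idle spares in a 200-node trailer
print(can_absorb(200, 25))    # a 25-node failure burst is absorbed
```

When `can_absorb` goes false, that's the signal in this model to "add another trailer" rather than over-subscribe the existing one.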

    The security aspect? Leverage your already existing datacenter. Use that as your data source, leave as little actual customer data on the trailered servers as you can. If you start getting constrained on space, start moving your database servers out to the trailers as well, but connect them back to your SAN in the old DC. By doing s
