Intel Considering Portable Data Centers
miller60 writes "Intel has become the latest major tech company to express interest in using portable data centers to transform IT infrastructure. Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper than the current cost of building a data center. "The difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past," an Intel exec writes. Sun and Rackable have introduced portable data centers, while Google has a patent for one and Microsoft has explored the concept. But for all the enthusiasm for data centers in shipping containers, there are few real-world deployments, which raises the question: are portable data centers just fun to speculate about, or can they be a practical solution for the current data center expansion challenges?"
Re:Why it probably won't work (Score:5, Informative)
AC for Computer Room (Score:5, Informative)
Have you ever signed the bill for having AC installed for your computer room in an existing building? That is just one expense of many, but it makes me think rule #1 is not accurate.
This is a good idea that I've seen used in certain situations. There are downsides, of course, but for a company on a budget, or one in flux with respect to facilities, this can be a good solution.
Re:Connectivity? (Score:3, Informative)
(1) Microwave link or mobile repeater. Costly and needs preplanning, but no external cables. (2) "Portable" can mean "nice quiet diesel- or LPG-powered generator in the back". In theory you could have it up and running while it's being delivered, without waiting for it to reach its destination. The key word here is "fast", not "cheap": fast setup, as in fast market capture or disaster recovery. And I know there are better ways to do DR, but not all of your customers think ahead like that, do they? Only the ones who probably don't need you in the first place.
Remember, if all of your customers had perfectly-run data centres, you'd probably be out of a job.
Re:It has to be more expensive (Score:4, Informative)
There are other applications for keeping everything on a truck:
Valerie Walters Muscle Truck [valeriewaters.com] - a fitness centre that comes to you.
Office trailers [google.com]
Mobile kitchen trailers [aluminumtrailer.com]
Hospital trailers [hankstruckpictures.com]
Mobile retail and dwelling units [lot-ek.com] (Or shops and homes in containers).
Re:Why it probably will work (Score:3, Informative)
If a customer lost a switch location to fire, earthquake, or whatever, they could deploy this unit anywhere in North America within drive time plus 3-5 days for configuration.
The customer would be scrambling to get leased lines and microwave links re-routed to the temporary location, but they could probably restore some service to their customers within a week or two, especially if they had a few COWs (cells on wheels) to use.
A lot better than the 10-12 weeks it took to install a new switch from scratch.
Re:The Trucker... (Score:3, Informative)
Data center processing capabilities have increased dramatically over the years, but the problem I've seen in most datacenters these days is simply that they were not designed for the heat and power load per square foot that blades and other high-density systems require. Most of today's datacenters were designed and built in the 80s and 90s to very specific requirements for power and heat load per square foot, and those were reasonable at the time. Higher-density systems such as blades are a great idea and provide far more processing capability per square foot than traditional racked servers; however, it has become tough to keep up with their heat output and power requirements on a per-rack basis. The datacenter where I work, built in 1995, has been retrofitted no fewer than four times in the last few years to increase cooling capacity, and we're rapidly reaching the limits of what we can do with the physically constrained space we have. At this point, if we add a new power feed or AC unit, we will actually need to remove racks to fit it in. Given that our racks are already running at an average of 85% physical capacity, you can see where we have a problem.
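To see why a legacy raised-floor design point gets overwhelmed, here is a back-of-the-envelope sketch. Every figure below is an illustrative assumption, not a measurement from the datacenter described above:

```python
# Rough comparison of a 1990s design load vs. a modern blade rack.
# All numbers are hypothetical, chosen only to show the order of magnitude.

LEGACY_DESIGN_W_PER_SQFT = 50   # common 1990s raised-floor design point (assumed)
RACK_FOOTPRINT_SQFT = 25        # rack plus aisle/service space allocation (assumed)
BLADE_CHASSIS_W = 4500          # one loaded blade chassis (assumed)
CHASSIS_PER_RACK = 4            # chassis that fit in one rack (assumed)

rack_load_w = BLADE_CHASSIS_W * CHASSIS_PER_RACK
rack_density = rack_load_w / RACK_FOOTPRINT_SQFT          # watts per sq ft
overload_factor = rack_density / LEGACY_DESIGN_W_PER_SQFT

print(f"Blade rack draws {rack_load_w} W, i.e. {rack_density:.0f} W/sq ft")
print(f"That is {overload_factor:.1f}x the legacy design point")
```

With these assumed figures a single blade rack lands at more than ten times the power (and therefore heat) per square foot the room was designed for, which is why cooling retrofits keep being needed.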
This sort of portable datacenter, though, is only for those who design their systems correctly. Most applications these days can pair "fat" back-end systems (databases and so forth) with "thin" front-end application servers. The proposal I have going through the mill right now is to invest in one of these containers and migrate all of the front-end systems into it, leaving only the data and storage (SAN) in the existing datacenter. That would eliminate approximately 60% of our servers, which themselves account for about 40% of the heat and power load in our datacenter today. We could then continue to expand the storage (which is desperately needed; we just have no more floor space for SAN) and use either powerful blade servers or powerful standard rack servers as consolidated database clusters and possibly virtual machine space. Where we need application-server space, we put a server out in the "trailer" and connect it across a fat link (bonded gigabit) into the existing datacenter, which provides incredible flexibility.
The cost may seem prohibitive, but what are our other options? Right now, the only alternative is to build a new dedicated datacenter building. That cost truly is prohibitive, and we've been playing catch-up for a long time trying to meet demand from a rapidly growing user base while seriously constrained on space. The cost of one of these trailers is actually an incredible bargain compared to the cost of properly designing, architecting, engineering, and constructing a new building to house our ever-growing application requirements.
So what about server failures? Personally, I feel the best way to proceed is to run the trailer up to about 85% utilization, leaving plenty of idle servers in place. Network boot and the like ought to provide rapid provisioning within the trailered data center, so in the event of a failure you just network-boot another node and call for service. We already have all of our servers under maintenance with the manufacturer anyway, and most of the time this is exactly what we do. And what if we grow again? Add another trailer. Simple, cost-effective, and efficient.
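The "85% utilized, netboot a spare on failure" scheme can be sketched as a toy pool manager. The node names and the in-memory "netboot" are hypothetical stand-ins; a real deployment would drive PXE/DHCP infrastructure rather than flip a flag:

```python
# Toy sketch of a trailer-sized node pool run at ~85% utilization,
# with idle spares promoted on failure. Everything here is illustrative.

class TrailerPool:
    def __init__(self, nodes, target_utilization=0.85):
        n_active = int(len(nodes) * target_utilization)
        self.active = set(nodes[:n_active])
        self.spares = list(nodes[n_active:])   # idle, powered, ready to netboot

    def handle_failure(self, dead_node):
        """Retire the failed node and bring up a spare in its place."""
        self.active.discard(dead_node)
        if not self.spares:
            raise RuntimeError("no spares left; call for service, add a trailer")
        replacement = self.spares.pop(0)
        # here a real system would network-boot the spare into the dead
        # node's role (PXE image, config, DNS); this sketch just tracks it
        self.active.add(replacement)
        return replacement

pool = TrailerPool([f"node{i:02d}" for i in range(20)])
print(len(pool.active), "active,", len(pool.spares), "spare")   # 17 active, 3 spare
replacement = pool.handle_failure("node03")
print("failed node03 replaced by", replacement)
```

Capacity stays constant across a failure (one node out, one spare in), which is the whole argument for deliberately leaving ~15% of the trailer idle.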
The security aspect? Leverage your existing datacenter: use it as your data source and leave as little actual customer data on the trailered servers as you can. If you start getting constrained on space, start moving your database servers out to the trailers as well, but connect them back to the SAN in the old DC. By doing s