Architects Design a 65-Story Data Center (computerworld.com) 138

Reader dcblogs writes: Two Italian architects have designed a data center that challenges how the structures are built. Instead of constructing a flat, sprawling complex, they are proposing a 65-story data center. From a visual perspective, the circular, futuristic-looking 'Data Tower,' as Marco Merletti and Valeria Mercuri call it, almost seems like something out of Star Trek. But it incorporates sustainable technology for efficiently cooling hundreds of thousands of servers while increasing reliance on automation. The building has a modular, cylindrical design that uses a series of pods to house servers, which are available for service in much the same way automated parking garages move cars. Like a radiator, the data tower is designed to have the maximum contact surface with the outside air. The pods are hooked onto the circular structure of the tower to form a series of vertical blades.
This discussion has been archived. No new comments can be posted.

  • by jonnythan ( 79727 ) on Tuesday April 12, 2016 @02:15PM (#51894101)

    You'd better build the power plant next door. Imagine the energy that thing would consume.

    The Empire State Building uses about 9-10 megawatts peak, and that's filled mostly with people and offices, not high-density servers.

    • You'd better build the power plant next door. Imagine the energy that thing would consume.

      The Empire State Building uses about 9-10 megawatts peak, and that's filled mostly with people and offices, not high-density servers.

      How is that different from current datacenters? Current datacenters already have tons of servers, but for some reason they are flat, with a lot less outside surface area. Since the outside surface area increases more slowly than the inside area, it makes sense to have tall, narrow datacenters if you want to cool them passively. There is no logical reason to have single-story datacenters.

      • by Junta ( 36770 )

        Same reason you don't see skyscrapers except in the middle of very urban centers. It costs much, much more to build up than out. Many facets of maintenance are much more complicated with a tall facility: dealing with elevators, and needing a large amount of freight elevator capacity, is very tricky. So on and so forth.

        • by dbIII ( 701233 )
          Yes, but I suggest looking at industrial structures such as cooling towers or petrochemical plants as a comparison instead of office buildings.
      • There is no logical reason to have single-story datacenters.

        You've obviously never driven a forklift onto a freight elevator.

        • Comment removed based on user account deletion
          • by Junta ( 36770 )

            Because if a general purpose freight elevator is too expensive, hard to maintain, etc, a highly customized application specific freight elevator dedicated to a column of racks would surely be the cure.

            This is a rather stylish looking concept that might be at home in some sci-fi media, but it is a horribly impractical concept to actually implement in real life.

            • This is a rather stylish looking concept that might be at home in some sci-fi media, but it is a horribly impractical concept to actually implement in real life.

              Even if the construction cost was double, if it could cut the lifetime cooling cost significantly, it would still be worth it.

              • by Junta ( 36770 )

                It won't cut the cooling cost at all compared to alternative, practical designs. If you are fixated on free air cooling there are a lot of alternative strategies to do that without resorting to a crazy complicated tower (or even a pretty mundane high-rise).

                If you care about cooling, water cooling would be a lot more bang for the buck. You can do chiller-free designs, and waste heat recovery. Sure there are going to be impellers in play, but the draw is not as significant. The general blocking point is t

            • by dbIII ( 701233 )
              Horribly impractical? It's a cylinder with what is now comparatively cheap pretty bits stuck on the outside. It only looks impractical if you focus on the pretty bits, which are really just cantilevered balconies that don't go out an enormous distance (just like on a lot of other buildings but arranged in a different way).
              • by Junta ( 36770 )

                Each column has an elevator. Said elevator means only one pod may be serviced in a column at a time. If said elevator breaks, you have no alternative access to that column. The electrical infrastructure needed to make that work for the massive volume of electricity to a pod is unreasonable (and therefore at high risk of knocking out an entire pod on servicing a component). This all adds up to a great deal of expense and limited serviceability for..... no benefit really except looking exotic and having l

                • by dbIII ( 701233 )

                  Thermodynamics are simply not on the side of this being a particularly better cooling design

                  It's hilarious when coders write like that.
                  It could be better, but the concept is not as much of a disaster as suggested - read the first three chapters of a first-year thermodynamics text and you'll see that thermodynamics is roughly on the same side :)

                  • by Junta ( 36770 )

                    I would think the delta T in a server cooling scenario can't be high enough to get significant benefit from building taller. You can't really let the air inside a typical server get above 45C and still have any decent hope of removing enough heat to keep component temperatures acceptable, even with a *hefty* airflow, and even that is considered way outside any recommendation (typically you want to be under 35C, ASHRAE says it should be 27C or less). Stack effect relies on signific

                    • by dbIII ( 701233 )
                      It appears the point is to have a huge empty space inside: the server fans apparently draw cold air from the outside and dump exhaust into that huge empty space, where the hot air will slowly rise without heating up the core area much at all. That's why the concept is not as much of a disaster as suggested.
                      I can see a lot of potential problems with the input air (weather, etc.), but your comment above relies on assumptions that do not fit the concept images at all - look at all that empty space!
                    • by Junta ( 36770 )

                      Then you could achieve the same end by having a sprawling field of 'cargo container' style datacenter organization (which was a fad that actually was being deployed, but feels like it has largely passed as well). My point is that building up is complex and presents a large amount of cost and suboptimal results for maintenance, presuming you do find a sane strategy for the data connections at all. Building up 'looks cool', but there isn't really a shortage of land for data centers to drive a need, and if y

                    • by dbIII ( 701233 )

                      Then you could achieve the same end by having a sprawling field of 'cargo container' style datacenter organization

                      Yes.
                      Now you are getting the idea of how that big empty space, used as a slow chimney, can be modelled. The temperature difference is going to be of the order of 10C between the back of the servers and ambient - the air is going to rise slowly without heating up much, and with all that empty space it may as well be outside.

                      So not a disaster - but I never said it was a better idea than just sticking
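
A rough sanity check on the stack-effect argument above (a sketch only, using assumed numbers: roughly 200 m of usable stack height and the ~10C indoor/outdoor temperature difference mentioned in the thread; nothing here comes from the architects' proposal):

# Stack-effect estimate with assumed inputs (~200 m stack, ~10C delta-T,
# dry air at sea-level pressure). None of these figures come from TFA.

G = 9.81          # gravitational acceleration, m/s^2
R_AIR = 287.05    # specific gas constant for dry air, J/(kg*K)
P0 = 101_325.0    # ambient pressure, Pa

def air_density(temp_c, pressure_pa=P0):
    """Ideal-gas density of dry air, kg/m^3."""
    return pressure_pa / (R_AIR * (temp_c + 273.15))

def stack_draft_pa(height_m, t_out_c, t_in_c):
    """Driving pressure of the chimney flow: dP = g * h * (rho_out - rho_in)."""
    return G * height_m * (air_density(t_out_c) - air_density(t_in_c))

print(f"Draft over 200 m with a 10C delta-T: {stack_draft_pa(200.0, 10.0, 20.0):.0f} Pa")
# Prints roughly 80 Pa of free draft - a modest assist rather than a
# replacement for forced airflow, which is more or less what both sides
# of the exchange above are arguing about.
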

          • You clearly didn't RTFA. Paraphrasing a good bit, but: in Iceland, forklift is elevator

            My point was that single story datacenters don't need freight elevators.

      • I don't see how it makes sense to have tall, narrow datacentres when you could have long narrow single storey ones instead. Much cheaper to build, same surface area.

        • by dbIII ( 701233 )
          In zero-g maybe, but here convection means a chimney is better than a horizontal duct.
          • In zero-g maybe, but here convection means a chimney is better than a horizontal duct.

            True, but then you could just tack a chimney onto the building anyway. Then you wouldn't need a tall load-bearing structure.

            • by Junta ( 36770 )

              Precisely. People are saying 'it's tall so it can *be* a chimney', or alternatively some have said 'oh, you can suck in air from the top' (which isn't right, but for the sake of argument...). Whichever way you believe, if you want a chimney so badly, build a chimney.

              • by dbIII ( 701233 )
                I think they just want a really tall tower. The air inside isn't going to move very fast but there is a LOT of it from those concept images. The servers may as well be outside.
      • by Lumpy ( 12016 )

        Yes there is... fires burn going UP faster than going out.

      • Logical reasons do include floor weight load, and you need all that space for generators, diesel tanks and AC units.
      • Back in the day, most data centers were 3 stories, often with one expansion floor above and below. It maximized the utility of the mainframe bus-and-tag system, ensuring that as much equipment as possible could be directly connected to each other.

        The next generation was largely two story facilities, stacking MEP infrastructure and the raised floor area. This better accommodated high(er) density solutions, as you had larger but shorter chilled water pipes, shorter electrical feeders, etc.

        Then, things moved

    • You'd better build the power plant next door. Imagine the energy that thing would consume.

      Iceland's power comes mostly from hydro and geothermal. So, if it is indeed designed "with Iceland in mind," I think the tower's location should be chosen for proximity to a geothermal zone or a hydro generator.

      • Comment removed based on user account deletion
        • by Junta ( 36770 )

          There's no way that solar could drive this. To even get close, you'd need a sea of reflectors. At which point, you might as well build a sprawling datacenter because that would be cheaper anyway.

          • by dbIII ( 701233 )
            Heat, not photovoltaics.
            If solar gets the air and heat moving that's a major win even if the power to actually drive the servers comes from somewhere else.
            • If solar gets the air and heat moving that's a major win even if the power to actually drive the servers comes from somewhere else.

              The point is that the building doesn't need anything but the servers themselves to cool it. The entire building is designed like a chimney: hot air from the server "pods" rises, creating a convection current that draws in air from the outside. The building funnels the cool air through the "pods" - rinse and repeat. It minimises (or eliminates) the substantial cost of air-conditioning a data center; you don't need anything else to power the air-conditioners, the building itself is the air-conditione

        • I'm currently in cloudy Tucson, where solar could easily be used to power this thing.

          Sure, but then you'd have to choose how to utilize the space - install a gazillion voltaic panels or a parking lot for the employees? And let's not forget the amount of gas you'd have to burn at night in order to keep the power supply constant. It's not like there's a concept of "peak load" for data centers (which have to be powered 24/7).

          • Sure, but then you'd have to choose how to utilize the space - install a gazillion voltaic panels or a parking lot for the employees?

            That's not an either/or situation. Just raise the "gazillion voltaic panels" 7 feet or so off the ground, and voila! Parking-lot sized carport. You get the solar power benefits, the employees get shaded parking that protects their vehicles from rain and sun. Win-win all around.

            There are a few malls (or maybe just Wal-Marts?) around the US that are doing something similar,

    • by burne ( 686114 )

      The Empire State Building uses about 9-10 megawatts peak

      A French TGV uses 9.6MW peak during acceleration. That is a single train, not a 102 storey skyscraper.

      If you tried to impress me, you failed.

    • by PCM2 ( 4486 )

      You'd better build the power plant next door. Imagine the energy that thing would consume.

      The actual contest entry [evolo.us] specified that the Data Tower would be built in Iceland, where it would be powered by 100 percent geothermal energy. (Whether that's actually possible, I couldn't tell you.)

      • by dbIII ( 701233 )

        powered by 100 percent geothermal energy. (Whether that's actually possible

        They have access to enormous amounts of geothermal heat there, so possibility is not the issue; cost, lead time and practicality are. You need heat sinks as well as heat sources, so having a lake or something near the heat source makes it easier.

        • The point is the building is designed to be a very efficient chimney: it doesn't need any air-conditioners, and it has a much smaller footprint, so you don't need as much land as a flat data center. Whether the data center gets built will largely depend on whether the extra cost of building up is significantly less than the reduced land and air-conditioning costs. The building can (and probably will) be powered from the grid; geothermal plants can be located efficiently elsewhere on the island, eg: the N
          • by dbIII ( 701233 )

            eg: the North Atlantic ocean makes a great heat sink

            Yes, but salt-water cooling plus fighting with marine life is a huge pain in comparison with a small body of fresh water - hence cost and practicality: it's more practical to site it in some places than others. Powering with 100% geothermal is definitely possible, and while it may take time and have scaling issues, it's definitely a good direction to head in for any place that has a lot of geothermal heat, if only to get less dependence on imported resou
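
For scale, a quick sketch of the electrical load the thread above is arguing about. The server count, per-server wattage and PUE below are assumptions (the summary only says "hundreds of thousands of servers"), so treat the output as an order-of-magnitude figure, not a design number:

# Order-of-magnitude power estimate with assumed inputs; only the
# "hundreds of thousands of servers" figure comes from the summary.

SERVERS = 300_000                # assumed server count
AVG_WATTS_PER_SERVER = 300.0     # assumed average draw per server, W
PUE = 1.2                        # assumed power usage effectiveness for a free-air-cooled site

it_load_mw = SERVERS * AVG_WATTS_PER_SERVER / 1e6
total_mw = it_load_mw * PUE

print(f"IT load ~{it_load_mw:.0f} MW, facility total ~{total_mw:.0f} MW")
# ~90 MW of IT load and ~108 MW overall - roughly ten times the 9-10 MW
# peak quoted above for the Empire State Building, which is the point of
# the 'build the power plant next door' comment.
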

  • by elwinc ( 663074 ) on Tuesday April 12, 2016 @02:17PM (#51894117)
    These architects don't seem to know enough about the physics of optimizing convective flow. It should be shaped more like a nuclear plant cooling tower, with a broader base, bigger cross-section air inlets at the bottom, a bit of taper, and a much larger diameter-to-height ratio.
    • by Overzeetop ( 214511 ) on Tuesday April 12, 2016 @02:34PM (#51894257) Journal

      When architects say "designed like a radiator," what they really mean is that they were artistically inspired by some sort of cooling-device enclosure they took a fancy to. They have almost zero grasp of all but the most basic physics and have never, ever set foot in a thermodynamics class or done any substantial heat-transfer coursework. Heck, most would be lucky to be able to read a psychrometric chart.

      • Zero grasp indeed. The last thing you want is uncontrolled server inlet temperatures, whether hot or cold. You want a stable inlet temperature so the boards don't expand, contract and break solder joints. They've also completely overlooked humidity control, which should remain between 40% and 55% if you want your servers to keep serving.

        And where are the wires? Wires go every which way in a data center and there are many tons of them. A data center without wires is a decoration.

      • by jwdb ( 526327 )

        Maybe that's true in the US, but I'd expect that Italy's more like Belgium, where Architecture is an engineering discipline. In Belgium the architects were in the same chem, physics, thermo, calc, and other engineering classes as the rest of us, so they've got the same base as any EE, ME, etc.

    • by guruevi ( 827432 )

      But then it wouldn't be "pretty" anymore. Typically the way these things are built, they are photo-op first, and only later do the technical challenges get 'fixed'.

      • by Junta ( 36770 )

        Typically for these contest-winning architect concepts, they are render-op first and... well, stay that way, because no one will ever build them: the architect is limited only by his imagination and artistic sensibilities, and is only interested in throwing practical-sounding buzzwords vaguely at it.

        Just FYI, I think these are cool things to get rendered up; I just find it odd when people take them seriously. E.g. http://www.discovery.com/dscov... [discovery.com].

    • Fecking computerworld keeps eating my comment....

      I've seen this design somewhere: http://1.bp.blogspot.com/_Cdoc... [blogspot.com]

      A silo-shaped datacenter does make more sense for natural ventilation, but is much more expensive to build. They build them like warehouses instead, because it's the best way to save on upfront costs.

      A silo full of detachable pods, and elevators capable of moving them, is impractical. That's a lot of weight and infrastructure for so little utility. These servers don't need to be replaced often.

    • by dbIII ( 701233 )
      Cooling towers are sited where land use is not an issue, so they can afford to have a broad base. There are probably other constraints here, such as land cost, that are not instantly obvious.
    • by Megane ( 129182 )

      They also don't seem to understand the concept of humidity. Open air cooling by vertical convective flow may be great for humans, but is it a good idea for electronics? Even if you don't get condensation, if you are near an ocean (the architects are Italian, you work it out) you will have salty humidity in the air. ("The hot air inside the tower goes up and sucks the cold air from the outside. The outside cold air, to enter, is obliged to pass through the pods, and in this way cools the servers.") It also i

      • by Megane ( 129182 )

        I missed the part of TFA where it said they intended for this to be built in Iceland, completely in the middle of fucking nowhere! The only good part is that Iceland probably has some decent bandwidth from transatlantic cables landing there. How you connect it (reliably!) to the middle of a big ice field is another matter. There are probably also earthquake issues to consider in Iceland, yet another reason not to build a high-rise.

        Of course the place to build a high-rise is in the middle of nowhere where land

    • by Agripa ( 139780 )

      A hyperboloid is used for large cooling towers because the inherent strength of the shape requires a minimum of material for a given size. This data center building is puny in comparison so there is no reason to use a hyperboloid shape.

  • by __aaclcg7560 ( 824291 ) on Tuesday April 12, 2016 @02:17PM (#51894125)
    The local property taxes are based on the footprint of the building. Stacking the data center upward would reduce the overall property taxes.
    • by Junta ( 36770 )

      And for a large chunk of applications, a cheaper approach would be to build it 15-30 miles away from the expensive area.

      However, architects regularly post pie-in-the-sky, not-fully-fleshed-out ideas to catch the eyes of companies that will build real, more down-to-earth projects inspired by the wacky concept art of an exotic thing.

    • by Anonymous Coward

      You're only thinking about the land portion of the property taxes. There are still property taxes applied to improvements on the land, meaning buildings. So if this 65-story building ends up having 1,265,000 sq ft, it's going to be taxed higher than a building with 1,265 sq ft on the exact same lot. Property taxes are not a "flat rate". For instance, 1 acre in a large city could be assessed at over $40,000,000, whereas 1,000 acres in a rural area could be assessed at $400,000

      Property taxes are almost alwa

    • Comment removed based on user account deletion
      • This being in a remote part of Iceland. What property taxes are you talking about?

        President Trump will annex Iceland and impose property taxes to pay for the Mexico wall.

    • Tall buildings cost a lot more money per sq foot than one/two story buildings that can have their walls poured on site and tipped up. They don't build data centers tall because land is cheaper than tall buildings.

      • They don't build data centers tall because land is cheaper than tall buildings.

        Depends on the land. Silicon Valley has very little open space. If you want a bigger building, you need to tear down the shorter building and build a taller building.

        • Why would anyone want a general purpose datacenter in SV? Build it in the middle of nowhere, somewhere along a long fiber link between major hubs.

          • Why would anyone want a general purpose datacenter in SV?

            I'm aware of four or five clusters of data centers in Silicon Valley from job interviews over the years. Most are located there because headquarters is around the corner or belong to the telecoms. The MAE-West Internet node is several miles away in downtown San Jose.

            https://en.wikipedia.org/wiki/MAE-West [wikipedia.org]

    • There are very few places where they base property taxes on the unimproved value rather than the improved value. I can't actually name any.

    • Those would have to be some fucking monumentally huge property tax rates to justify building up over out, and the cost of floor space is exponentially higher when you build up.
  • by account_deleted ( 4530225 ) on Tuesday April 12, 2016 @02:23PM (#51894163)
    Comment removed based on user account deletion
    • by Megane ( 129182 )
      I just want to know how Simon and the PFY fit into this story. Then it would be perfect.
  • by SeaFox ( 739806 ) on Tuesday April 12, 2016 @02:29PM (#51894211)

    how many Libraries of Congress it can hold?

    Oooo, not sure if I mean information storage or physical space now, huh?

  • Are the pods live while you're working on them?

    So you can swap servers / disks / etc. in and out without taking the full pod down?

    Will this idea fit into other building layouts, so it can be put into places that have better latency and bandwidth to your users?

    • It is a masturbatory exercise for an architecture competition. It will never get built. It is just a vanity design by young architects trying to make a name for themselves. Nothing wrong with that - but it isn't ever going to be built. There is a reason they build data centers like they do now.
      • Comment removed based on user account deletion
        • Actually it is called "concrete block one story buildings are cheap when built on rural land".
          • Actually it is called "concrete block one story buildings are cheap when built on rural land".

            Perhaps they get it built in a place where close proximity to an urban environment is a must for a lot of servers. Perhaps in NYC, right next to the stock exchange.

    • by Megane ( 129182 )
      How are you going to keep the pod powered while you lower it 65 stories to the ground level? A really long extension cord? A third rail? (Mind the gap!) Also, wired Ethernet is limited to 100m, right? So you would have to use fiber, and be sure not to violate the minimum bend radius - wouldn't want to break it!
  • by turkeydance ( 1266624 ) on Tuesday April 12, 2016 @02:35PM (#51894267)
    makes it something like 50 years out of date.
    • by Junta ( 36770 )

      It doesn't remind me of Star Trek; it reminds me more of stuff built by the Combine in Half-Life 2.

  • I love how the artist envisioned this datacenter being installed in some extremely remote moonscape that you can only access by hiking in on foot. The complete lack of a plan for cooling this monster is another nice touch.
  • Two people who have never stepped inside a data centre in their life!

    So in order to work on a computer you have to bring a whole pod down, which would disconnect everything in that pod. F*cking brilliant. Or do all the cables follow it down while you work on whatever computers are in that pod?

    Maybe they also designed the first generation of HP blades where you had to bring the whole rack down in order to work on the power supplies. God I hated those things after working on the IBM blades where if you had to

    • by gaudior ( 113467 )

      I think this is just an exercise for publicity, but the connection issues to the pods are not that difficult with current technology. If the pods must remain powered up and connected, then a flexible cable race can be built into the channel the pod rides on. If not, then it's even easier. The connections are just a scaled-up version of the mechanisms used for robotic disk and tape systems in play already.

      • by Junta ( 36770 )

        The flexible cable race would be a hellish thing to service. It would also add ungodly lengths to your cables. Also, your cabling becomes complex since you have multiple cable races in the same column to wrangle...

        A tape library makes it work because it's relatively small. The design does not scale up to a 65-story building.

  • "In the cloud" a more factual statement


  • I cannot wait for them to get this completed.

    Just imagine the panic and mayhem when there's an HVAC outage and the lifts aren't working! DC techs are notoriously fast over short distances, especially if there's pizza involved, but 65 floors? SIXTY FIVE?? - I guess that will be on the Bear Grylls "Born Survivor" futuristic show.

    You know it in your heart that it has to be done for posterity; if only to study what happens when 500,000 servers' overheat alarms go off to answer the question "Is half a million
    • by dbIII ( 701233 )

      Just imagine the panic and mayham when there's an HVAC outage

      The thing is a chimney. You've hit on the only practical thing about the design, and about the only problem it's not going to have :)

  • So this is a 65 fable building? That's pretty fabulous...
  • The design is a nice picture, but the reality of a tall datacentre is much closer to 33 Thomas Street [wikipedia.org].

    Also, who wants their servers moved around just so someone can go near them? Only people working at Google scale, where you take a whole pod offline every so often and fix all the broken bits, can do that.

    • Ha! I came here to post the other NYC giant datacenter: 375 Pearl St [wikipedia.org]

      "Only" 32 stories, but that's because each floor is 16-17 ft high to accommodate 1970s-era telephone switching equipment.

  • The Towering Arduino

  • The "lab" in our upper floor Bangalore office is maxed out, not because of space, power, or cooling, but because the floor won't carry the weight.

    Our labs and DCs in our US facilities are all on the ground floor or in the basement for that reason.

    I honestly wonder if those architects really considered the fact that a rack of 20 2U servers weighs in at over half a tonne (1100 lbs) per square meter.

    One human in a cube probably weighs in at less than 1/50th of that (75kg human and another 75kg of desk and com
    • My guess is that they probably did notice this fundamental issue of live load, which is common to every building built, ever.

      Residential live floor loads vary from 30 to 60 psf (pounds per square foot) depending on the function of the floor. The 20 2U server load you cite is 110 psf, about twice the maximum seen in residential construction, but hardly unusual for industrial construction. The floor of 33 Thomas in New York (an AT&T switching/data facility) is designed for 200-300 psf.

      • Actually their design likely accommodates live load pretty well, since it has a bunch of columns relatively close together, and the pods have their own structural integrity. My guess is that the pods would be pretty close to 20T each. They could even have the columns taper outward, effectively giving a larger base footprint. If my math is anywhere close to correct, they need less than 1m^2 at the base for the dead loads. They would also need about another square meter at each column for power conduits at 600V

      • by Junta ( 36770 )

        Though typically office buildings are not designed for high live loads, because it's expensive. So yes, you can make tall buildings that support massive loads, but the cost of doing so makes many facilities just stick things in the basement if they do have a high-rise, or build in a slightly more convenient location so they can sprawl out.
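
A quick unit check on the floor-load figures in this sub-thread. The per-server and rack weights below are assumptions; only the 20-servers-per-rack count and the psf comparisons come from the comments, so this is a sketch rather than a structural calculation:

# Floor-load unit check with assumed weights (~25 kg per loaded 2U server,
# ~100 kg of rack) over an assumed 1 m^2 rack footprint.

KG_PER_SERVER = 25.0
SERVERS_PER_RACK = 20
RACK_KG = 100.0
FOOTPRINT_M2 = 1.0

KG_TO_LB = 2.20462
M2_TO_FT2 = 10.7639

load_kg_per_m2 = (KG_PER_SERVER * SERVERS_PER_RACK + RACK_KG) / FOOTPRINT_M2
load_psf = load_kg_per_m2 * KG_TO_LB / M2_TO_FT2

print(f"~{load_kg_per_m2:.0f} kg/m^2, or ~{load_psf:.0f} psf")
# ~600 kg/m^2 (~120 psf) - consistent with the 'half a tonne / 110 psf'
# figures above, and comfortably under the 200-300 psf design load cited
# for 33 Thomas Street.
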

  • by Nethead ( 1563 )

    I worked a decade in a 34 story telco hotel in Seattle starting in the mid 90s. I watched elevators disappear to become power conduits; parking spaces in the 6 level garage become homes for generators and cooling. We even hosted (gratis) images.slashdot.org there because Malda had maxed out his T1.

    Come to think of it, it kind of looks like a radiator. [westinbldg.com]
