Posted by kdawson from the stretching-the-fabric dept.
BobB-nw writes "University of California at San Diego researchers are presenting a paper (PDF) on Tuesday describing software that they say could make data center networks massively scalable. The researchers say their PortLand software will enable Layer 2 data center network fabrics that scale to 100,000 ports and beyond; they have a prototype running in the school's Department of Computer Science and Engineering, part of the Jacobs School of Engineering. 'With PortLand, we came up with a set of algorithms and protocols that combine the best of layer 2 and layer 3 network fabrics,' said Amin Vahdat, a computer science professor at UC San Diego. 'Today, the largest data centers contain over 100,000 servers. Ideally, we would like to have the flexibility to run any application on any server while minimizing the amount of required network configuration and state... We are working toward a network that administrators can think of as one massive 100,000-port switch seamlessly serving over one million virtual endpoints.'"
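For the curious: the PortLand paper's central trick for scaling flat Layer 2 addressing is to assign each host a hierarchical Pseudo MAC (PMAC) that encodes its position in the fat-tree topology, so switch forwarding tables stay tiny. A rough sketch of that encoding, assuming the field layout described in the paper (48 bits split as 16-bit pod, 8-bit position, 8-bit port, 16-bit vmid; the function names here are illustrative, not from the paper):

```python
def encode_pmac(pod: int, position: int, port: int, vmid: int) -> int:
    """Pack topology coordinates into a 48-bit pseudo MAC:
    pod (16 bits) | position (8) | port (8) | vmid (16)."""
    assert 0 <= pod < (1 << 16) and 0 <= position < (1 << 8)
    assert 0 <= port < (1 << 8) and 0 <= vmid < (1 << 16)
    return (pod << 32) | (position << 24) | (port << 16) | vmid

def decode_pmac(pmac: int) -> tuple[int, int, int, int]:
    """Recover (pod, position, port, vmid) from a 48-bit PMAC."""
    return (pmac >> 32, (pmac >> 24) & 0xFF, (pmac >> 16) & 0xFF, pmac & 0xFFFF)

def format_mac(pmac: int) -> str:
    """Render the 48-bit value in conventional colon-separated MAC notation."""
    return ":".join(f"{(pmac >> s) & 0xFF:02x}" for s in range(40, -1, -8))
```

Because the address itself names the destination's pod and position, a core switch can forward on a short prefix of the PMAC rather than keeping per-host state, which is what lets the fabric behave like one giant switch.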
"If you want to eat hippopatomus, you've got to pay the freight."
-- attributed to an IBM guy, about why IBM software uses so much memory