MIT May Have Just Solved All Your Data Center Network Lag Issues 83

alphadogg (971356) writes: A group of MIT researchers say they've invented a new technology that should all but eliminate queue length in data center networking. The technology will be fully described in a paper presented at the annual conference of the ACM Special Interest Group on Data Communication. According to MIT, the paper will detail a system — dubbed Fastpass — that uses a centralized arbiter to analyze network traffic holistically and make routing decisions based on that analysis, in contrast to the more decentralized protocols common today. Experimentation done in Facebook data centers shows that a Fastpass arbiter with just eight cores can be used to manage a network transmitting 2.2 terabits of data per second, according to the researchers.
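The core idea — a central arbiter that assigns each transmission a timeslot in which both endpoints are otherwise idle, so queues never build — can be sketched roughly as follows. This is a minimal illustration of the concept, not Fastpass's actual algorithm or data structures; all names here are invented for the example.

```python
# Illustrative sketch of centralized timeslot arbitration (NOT the real Fastpass
# implementation): each (src, dst) request is placed in the earliest timeslot
# where the source isn't already sending and the destination isn't already
# receiving, so no queue forms at either endpoint.

from collections import defaultdict

def allocate_timeslots(requests):
    """requests: list of (src, dst) pairs; returns {(src, dst): timeslot}."""
    busy_src = defaultdict(set)   # timeslot -> sources already transmitting
    busy_dst = defaultdict(set)   # timeslot -> destinations already receiving
    schedule = {}
    for src, dst in requests:
        t = 0
        # Scan forward for the first timeslot where both endpoints are free.
        while src in busy_src[t] or dst in busy_dst[t]:
            t += 1
        busy_src[t].add(src)
        busy_dst[t].add(dst)
        schedule[(src, dst)] = t
    return schedule

# Four requests among hosts A, C, D, B pack into just two timeslots, because
# the arbiter pairs up non-conflicting transmissions.
demo = allocate_timeslots([("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")])
```

The greedy first-fit scan stands in for the matching computation the real arbiter performs; the point is that with global knowledge, conflicting transmissions are serialized by the scheduler rather than by switch queues.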
  • by mysidia ( 191772 ) on Thursday July 17, 2014 @06:15PM (#47478583)

    Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

    Case in point: ATM To the Desktop.

    In a modern datacenter, "2.2 terabits" is not impressive. 300 10-gigabit ports (or about 50 servers) is 3 terabits. And there is no reason to believe you can just add more cores and continue to scale the bitrate linearly. Furthermore, how will Fastpass perform during attempted DoS attacks or other stormy conditions with floods of small packets, which are particularly stressful for any centralized controller?

    Furthermore, "zero queuing" does not solve any real problem facing datacenter networks. If limited bandwidth is the problem, the solution is to add more bandwidth -- shorter queues do not eliminate bandwidth bottlenecks in the network; you can't schedule your way into using more capacity than a link supports.
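The arithmetic in the parent comment checks out as a back-of-envelope calculation (the six-ports-per-server split is inferred from "300 ports ... about 50 servers", not stated explicitly):

```python
# Sanity check on the parent comment's numbers (illustrative only).
ports = 300
gbps_per_port = 10
ports_per_server = 6                            # inferred: 300 ports / ~50 servers

servers = ports // ports_per_server             # 50 servers
aggregate_tbps = ports * gbps_per_port / 1000   # 3.0 Tbps of port capacity,
                                                # above the 2.2 Tbps the 8-core
                                                # arbiter reportedly handled
```

So a single modest rack footprint already exceeds what the published arbiter managed, which is the basis for the scaling skepticism above.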

  • by Archangel Michael ( 180766 ) on Thursday July 17, 2014 @06:34PM (#47478693) Journal

    Your 300 x 10Gb ports on 50 servers is ... not efficient. Additionally, you're not likely saturating 60Gb off a single server, and you're running those six 10Gb connections per server to try to eliminate other issues you have, without understanding them. Your speed issues are elsewhere (likely SAN or database, or both), and not in the 50 servers. In fact, you might be exacerbating the problem.

    BTW, our data center core is running twin 40Gb connections for 80Gb of total network capacity, but we're not really seeing anything use 10Gb off a single node yet, except the SAN. Our Metro Area Network links are being upgraded to 10Gb as we speak. "The network is slow" is not really an option.
