
MIT Uses Machine Learning Algorithm To Make TCP Twice As Fast

An anonymous reader writes "MIT is claiming they can make the Internet faster if we let computers redesign TCP/IP instead of coding it by hand. They used machine learning to design a version of TCP that's twice the speed and causes half the delay, even with modern bufferbloated networks. They also claim it's more 'fair.' The researchers have put up a lengthy FAQ and source code where they admit they don't know why the system works, only that it goes faster than normal TCP."
  • by cold fjord ( 826450 ) on Saturday July 20, 2013 @12:57AM (#44335151)
  • by girlintraining ( 1395911 ) on Saturday July 20, 2013 @01:07AM (#44335181)

    This isn't a redesign of TCP. The network is still just as stupid as it was before; it's just that the local router has had its QoS tweaked to be more intelligent, and by a considerable margin too. Reviewing the material, it looks to me like it's using genetic algorithms and the like to predict what's coming down the pipe next and then pre-allocate buffer space; rather like a predictive cache. Current QoS methods do not do this kind of predictive analysis -- they simply bulk traffic into queues based on header data, not payload.

    It comes as no surprise to me that predictive/adaptive caching beats sequential/rule-based caching. They've been doing it with CPUs and compilers since, uhh... the 80386 processor. TCP/IP was designed before much thought had been put into pipelining, caching, parallelization, etc. Using modern algorithms and the better understanding of information system design that's come from 30 years of study results in a noticeable improvement to performance? Shocking...
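    The predictive-versus-rule-based distinction the comment draws can be sketched in a few lines. This is a toy illustration only (it is not the MIT system, and the class names and the EWMA predictor are invented for the example): a rule-based queue gets a fixed allocation up front, while an adaptive queue sizes itself from a prediction of recent arrival rates.

    ```python
    from collections import deque

    class RuleBasedQueue:
        """Fixed allocation: buffer size is decided once, by rule."""
        def __init__(self, size):
            self.buf = deque(maxlen=size)

        def enqueue(self, pkt):
            if len(self.buf) < self.buf.maxlen:
                self.buf.append(pkt)
                return True
            return False  # tail drop

    class AdaptiveQueue:
        """Predictive allocation: buffer size tracks an EWMA of arrivals."""
        def __init__(self, alpha=0.3):
            self.rate_est = 0.0   # EWMA of packets per tick
            self.alpha = alpha
            self.buf = []

        def observe_tick(self, arrivals):
            # Update the estimate of what's coming down the pipe next.
            self.rate_est = (1 - self.alpha) * self.rate_est + self.alpha * arrivals

        def capacity(self):
            # Pre-allocate room for roughly twice the predicted burst.
            return max(4, int(2 * self.rate_est))

        def enqueue(self, pkt):
            if len(self.buf) < self.capacity():
                self.buf.append(pkt)
                return True
            return False
    ```

    After a burst of heavy ticks, the adaptive queue's capacity grows to absorb the predicted traffic, while the rule-based queue drops whatever exceeds its fixed allocation.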

  • by Anonymous Coward on Saturday July 20, 2013 @02:59AM (#44335493)

    > I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding.

    ML often works like that.

    You put the inputs into a function... it spits out a model. The model can be thought of as an optimal orthonormal basis for the space it was encoding, but it's REALLY REALLY hard to understand the dimensions that basis lies in. Sometimes you can take an image-categorization model and see "ah, this is the blue shirt dimension. It seems that people wearing blue shirts are far along this axis"... but most of the time you have NO IDEA what the model is capturing.
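    The "learned orthonormal basis with unlabeled axes" idea can be demonstrated with plain PCA via SVD (a stand-in for a learned model, chosen for this sketch; the "blueness" feature is fabricated toy data). One axis happens to line up with an engineered feature, but nothing in the math tells you that.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake data: column 0 is "shirt blueness" (the dominant factor),
    # columns 1-3 are independent noise.
    blueness = rng.normal(0, 5, size=(200, 1))
    noise = rng.normal(0, 1, size=(200, 3))
    X = np.hstack([blueness, noise])
    X -= X.mean(axis=0)

    # SVD yields an orthonormal basis for the data (the rows of Vt).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    # The first axis happens to align with the engineered "blueness" feature...
    print(abs(Vt[0, 0]))          # close to 1.0
    # ...but the remaining axes mix the noise columns with no obvious meaning.
    print(np.round(abs(Vt[1]), 2))
    ```

    The basis is provably orthonormal and optimal in the least-squares sense, yet only the dominant axis has a human-readable interpretation, and only because we planted it there.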

  • by jkflying ( 2190798 ) on Saturday July 20, 2013 @03:01AM (#44335501)

    It's not that we don't understand *why* something like a genetic-algorithm designed antenna works so well. We can evaluate its performance using Maxwell's equations and say, "Yes, it works well." without ever having to build the thing. What we don't have is a set of guidelines or 'rules of thumb' that can result in an antenna design that works just as well.

    The difference is that the computer evaluates a billion antennas for us, doing some sort of high-dimensional genetic optimisation on the various lengths of the antenna components. It doesn't 'understand' why it gets the results it does. We do, because we understand Maxwell's equations and we understand how genetic optimisation works. But Maxwell's equations only work for evaluating a design, not for giving a tweak which will improve it. And we're dealing with too many variables that are unknown to have a closed-form solution.

    As for this algorithm, they basically did the same thing. They defined a fitness function and then repeatedly varied the algorithm they were using to find the best-performing variant. However, unlike the antenna analogy, they evaluated the fitness function on actual equipment, not just a model. That means they don't have an accurate model of the system, so your complaint that we don't know why this works is entirely valid, and the antenna analogy is not =)
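    The evaluate-vary-select loop described above is the core of any genetic search. Here is a minimal sketch on a toy problem (matching a target bit string, not the TCP or antenna problem): the fitness function can score any candidate, but it offers no recipe for constructing a good one.

    ```python
    import random

    random.seed(42)
    TARGET = [1] * 20

    def fitness(genome):
        # We can always *evaluate* a design against the target...
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # ...but improvement comes from blind variation plus selection.
        return [1 - g if random.random() < rate else g for g in genome]

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(100):
        pop.sort(key=fitness, reverse=True)
        best = pop[0]
        if fitness(best) == len(TARGET):
            break
        # Keep the top half, refill with mutated copies of survivors.
        survivors = pop[:15]
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

    print(generation, fitness(best))
    ```

    Swap the bit-string fitness for "measured throughput and delay" and the genome for congestion-control parameters, and you have the shape of the search the researchers describe, without any of the resulting design being explainable from the fitness function itself.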

  • by paithuk ( 766069 ) on Saturday July 20, 2013 @05:34AM (#44335887) Homepage

    > The blurb says it "redesigns TCP/IP", and the article itself specifically says "congestion control". Which is NOT part of TCP/IP design. Congestion control is a routing feature.

    Seriously, it's both incredible how wrong you are with that statement and that somebody rated it as informative. I suggest you read up on the subject: http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm [wikipedia.org] http://en.wikipedia.org/wiki/Congestion_window [wikipedia.org] http://tools.ietf.org/html/rfc5681 [ietf.org]
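    The linked references describe congestion control as sender-side window logic. A minimal sketch of the classic AIMD scheme (slow start, additive increase, multiplicative decrease, in the spirit of RFC 5681; the function name and simplified loss handling are this example's own) shows it living entirely in the endpoint's TCP stack, not in routers:

    ```python
    def aimd_step(cwnd, ssthresh, loss_detected):
        """Return (cwnd, ssthresh) after one round trip, in MSS units."""
        if loss_detected:
            # Multiplicative decrease: halve the window on congestion.
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth
        else:
            cwnd += 1          # congestion avoidance: additive increase
        return cwnd, ssthresh

    cwnd, ssthresh = 1, 16
    trace = []
    for rtt in range(12):
        loss = (rtt == 8)      # pretend a loss is detected on RTT 8
        cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss)
        trace.append(cwnd)
    print(trace)               # exponential ramp, linear growth, halving on loss
    ```

    The learned schemes in the article replace exactly these increase/decrease rules with machine-generated ones; the routers in the path are unchanged.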

  • by Rockoon ( 1252108 ) on Saturday July 20, 2013 @06:14AM (#44335987)
    You should go talk to Intel or AMD about your opinions on the matter, because I assure you that the specific layout of their chips is based on machine learning algorithms. No human can realistically optimize circuits containing a billion transistors.

    As a matter of fact, I recall genetic algorithms being thrown at rather small circuit design problems and producing solutions that were better than any human had come up with. Ah yes, here it is: Sorting networks and the END search algorithm [brandeis.edu].

    -- "Even a 25-year old result for the upper bound on the number of comparators for the 13-input case has been improved by one comparator"
  • by Nemyst ( 1383049 ) on Saturday July 20, 2013 @10:39AM (#44336685) Homepage
    Um, what? It most certainly is machine learning. Your "simulation" is an offline machine learning algorithm which, given input parameters, finds the best algorithm in the situation provided. Machine learning isn't strictly online algorithms, and it most certainly isn't "systems that learn from and adapt to real world networks of networks", which I'm having a hard time even parsing.
