Topics: AI, Networking, The Internet

MIT Uses Machine Learning Algorithm To Make TCP Twice As Fast (250 comments)

An anonymous reader writes "MIT is claiming they can make the Internet faster if we let computers redesign TCP/IP instead of coding it by hand. They used machine learning to design a version of TCP that's twice the speed and causes half the delay, even with modern bufferbloated networks. They also claim it's more 'fair.' The researchers have put up a lengthy FAQ and source code where they admit they don't know why the system works, only that it goes faster than normal TCP."
This discussion has been archived. No new comments can be posted.

  • by Intropy ( 2009018 ) on Saturday July 20, 2013 @01:12AM (#44335201)
    We're already in that boat. One of the reasons it's so hard to make changes is that nobody really knows why the Internet works. We know how and why individual networks work. We can understand and model RIP and OSPF just fine, and we know how BGP operates too. But the large-scale structure is a mess. It's unstable. The techniques we use could easily create disconnected or even weakly connected networks, but they don't, except for the occasional autonomous system falling off. We've built ourselves a nice big Gordian knot already. We know what it's made of, and we know how it operates, but good luck actually describing the thing.
  • by afidel ( 530433 ) on Saturday July 20, 2013 @01:13AM (#44335203)

    Meh, it's like the AI-designed antenna: we don't have to know WHY it works better, just that it does and how to build a working copy.

  • by Clarious ( 1177725 ) on Saturday July 20, 2013 @01:22AM (#44335239)

    Think of it as solving a multiobjective optimization problem with heuristic algorithms/machine learning. You can't solve the congestion problem exactly because it's computationally infeasible, so they use machine learning to find a (supposedly near-) optimal solution. Read TFA, it is quite interesting. I wonder if we can apply this to the Linux writeback algorithm to avoid the current latency problems (try copying 8 GB of data onto a slow storage medium such as an SD card or USB flash drive and prepare for 15+ second stalls!); the underlying problem is the same anyway.
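
    To make that framing concrete, here is a minimal sketch of the kind of objective such an offline optimizer might score candidate algorithms against: reward throughput, penalize delay, summed over flows. The log terms and the delta weight are illustrative assumptions on my part, not necessarily the exact objective Remy uses.

    import math

    def objective(flows, delta=1.0):
        """flows: list of (throughput_bps, avg_delay_s) tuples, one per sender."""
        total = 0.0
        for throughput, delay in flows:
            # Diminishing returns on throughput plus a delay penalty is one
            # common way to encode a throughput/latency/fairness trade-off.
            total += math.log(throughput) - delta * math.log(delay)
        return total

    # Example: two flows. The optimizer's job would be to find a rule table
    # that raises this score across many simulated runs of the target network.
    print(objective([(5e6, 0.040), (4e6, 0.055)]))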

  • by Lord_Naikon ( 1837226 ) on Saturday July 20, 2013 @01:44AM (#44335301)

    Huh? Did you read the same article as I did? As far as I can tell, the article is about a TCP congestion control algorithm, which runs on both endpoints of the connection and has nothing to do with QoS on intermediate routers. The design tool generates a set of rules keyed on three observed parameters, each mapping to an action such as adjusting the congestion window or pacing the transmit rate (roughly the shape sketched below). The result is vastly improved total network throughput (and lower latency) without changing the network itself.

    I fail to see the relevance of predictive/adaptive caching. It isn't even mentioned in the article.
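
    A rough sketch of what a rule-table endpoint controller of that shape could look like; the bins, rules, and variable names here are invented for illustration and are not taken from the Remy source.

    # The sender quantizes a few observed signals into a key, looks up an
    # action, and applies it to its congestion window and pacing. The real
    # RemyCC tables are machine-generated and much larger.

    def quantize(state, bins=8):
        """Map each continuous signal in [0, 1) into a coarse bin index."""
        return tuple(min(int(x * bins), bins - 1) for x in state)

    # action = (window_increment, window_multiplier, min_send_interval_ms)
    RULES = {
        (0, 0, 0): (2, 1.0, 0.0),   # idle network: grow quickly
        (3, 2, 1): (1, 0.9, 2.0),   # mild queueing: back off a little
        (7, 7, 7): (0, 0.5, 10.0),  # heavy congestion: halve and pace
    }
    DEFAULT_ACTION = (1, 1.0, 0.0)

    def on_ack(cwnd, state):
        """state: normalized (ack_spacing, send_spacing, rtt_ratio) signals."""
        inc, mult, interval_ms = RULES.get(quantize(state), DEFAULT_ACTION)
        new_cwnd = max(1, int(cwnd * mult) + inc)
        return new_cwnd, interval_ms

    print(on_ack(cwnd=10, state=(0.05, 0.02, 0.01)))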

  • by Anonymous Coward on Saturday July 20, 2013 @01:52AM (#44335319)

    Everything everyone ever says is wrong on the Internet and especially on Slashdot. Some folks just can't wait to start typing so they can tell everyone how wrong they are about everything without even knowing what the fuck they are talking about. I find it is best to ignore them as their lives are typically so sad that it rouses my considerable empathy and I just wind up feeling sorry for them rather than doing something useful.

  • by Ichijo ( 607641 ) on Saturday July 20, 2013 @02:18AM (#44335387) Journal
    So we built a computer that figured out the answer [wikia.com]. Now we just need to build an even bigger computer to figure out the question!
  • by Animats ( 122034 ) on Saturday July 20, 2013 @02:21AM (#44335393) Homepage

    One of the reasons it's so hard to make changes is that nobody really knows why the internet works.

    We still don't know how to deal with congestion in the middle of a pure datagram network. The Internet works because last-mile congestion is worse than backbone congestion. If you have a backbone switch today with more traffic coming in than the output links can handle, the switch is unlikely to do anything intelligent about which packets to drop. Fortunately, fiber optic links are cheap enough that the backbone can be over-provisioned.

    The problem with this is video over the Internet. Netflix is a third of peak Internet traffic. Netflix plus YouTube is about half of Internet traffic during prime time. This is creating demand for more and more bandwidth to home links. Fortunately the backbone companies are keeping up. Where there's been backbone trouble, it's been more political than technical. It also helps that there are so few big sources. Those sources are handled as special cases. Most of the bandwidth used today is one-to-many. That can be handled. If everybody were making HDTV video calls, we'd have a real problem.

    (I was involved with Internet congestion control from 1983-1986, and the big worry was congestion in the middle of the network. The ARPANET backbone links were 56Kb/s. Leased lines typically maxed out at 9600 baud. Backbone congestion was a big deal back then. This is partly why TCP was designed to avoid it at all costs.)

  • by The Mighty Buzzard ( 878441 ) on Saturday July 20, 2013 @03:01AM (#44335499)

    Nobody at MIT is going to be picking which algorithm gets used on any live device outside of MIT, their pockets, or their house, so I was obviously not talking about them.

    Any sys/network admin putting this on, or in the path of, critical live devices should be fired no matter how it performs, though. No admin worth having would push this live, for the same reason they wouldn't overclock the database servers: performance is always a distant second to reliability.

  • Re:Come on now (Score:5, Interesting)

    by Daniel Dvorkin ( 106857 ) on Saturday July 20, 2013 @03:17AM (#44335537) Homepage Journal

    As complex systems go, there are far worse. Go ask an engineer or a scientist.

    I am a scientist--specifically, a bioinformaticist, which means I try to build mathematical and computational models of processes in living organisms, which are kind of the canonical example of complex systems. And I will cheerfully admit that the internet, taken as a whole, is at least as complex as anything I deal with.

  • by seandiggity ( 992657 ) on Saturday July 20, 2013 @03:23AM (#44335561) Homepage
    We should keep investigating why it works, but, to be fair, the history of communications is one of implementing tech before we understand it (e.g. the first transatlantic cable, implemented before we understood wave-particle duality, which is why we couldn't troubleshoot it well when it broke).

    Let's not forget this important quote: "I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."

    ...that's Isaac Newton telling us, "I can explain the effects of gravity but I have no clue WTF it is."
  • by Daniel Dvorkin ( 106857 ) on Saturday July 20, 2013 @03:27AM (#44335563) Homepage Journal

    I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding.

    If you insist that we know why something works before we make use of it, you're discarding a large portion of engineering. We're still nowhere near a complete understanding of the laws of physics, and yet we make machines that operate quite nicely according to the laws we do know (or at least, of which we have reasonable approximations). The same goes for the relationship between medicine and basic biology, and probably for lots of other stuff as well.

    If we don't understand the why then we're missing something very important that could lead to breakthroughs in many other areas. Do not let go of the curiosity that got us here to begin with.

    I don't think anyone's talking about letting go of the curiosity. They're not saying, "It works, let's just accept that and move on," but rather, "It works, and we might as well make use of it while we're trying to understand it." Or, from TFA: "Remy's algorithms have more than 150 rules, and will need to be reverse-engineered to figure out how and why they work. We suspect that there is considerable benefit to being able to combine window-based congestion control with pacing where appropriate, but we'll have to trace through these things as they execute to really understand why they work."

  • OSPF (Score:3, Interesting)

    by globaljustin ( 574257 ) on Saturday July 20, 2013 @04:02AM (#44335649) Journal

    It's basically a more complex version of Open Shortest Path First.

    Depending on how you understand the term 'autonomous system' [wikipedia.org] you can have a lot of fun with the idea. It doesn't *explain* everything about how this works, but it puts it into context, in my mind.

    FTA: To approximate the solution tractably, Remy cuts back on the state that the algorithm has to keep track of. Instead of the full history of all acknowledgments received and outgoing packets sent, a Remy-designed congestion-control algorithm (RemyCC) tracks state variables...

    So basically it has, in the minds of these researchers, a really, really well mapped lookup table it can consult faster than regular TCP can react (a rough sketch of the state it tracks is below).

    It's a network control algorithm. It optimizes network flow based on user-identified parameters, which result in measurable outputs that can give the user feedback.

    Network control algorithm.
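
    For what it's worth, here is a sketch of the kind of lightweight per-connection state the quoted FTA passage seems to be describing: a couple of smoothed timing averages plus an RTT ratio, updated on each ACK. The variable names and smoothing factor are my own guesses for illustration; the actual list is in the paper and source.

    class SenderState:
        def __init__(self, alpha=0.125):
            self.alpha = alpha           # EWMA smoothing factor (illustrative)
            self.ack_ewma = 0.0          # smoothed time between ACK arrivals
            self.send_ewma = 0.0         # smoothed time between packet sends
            self.min_rtt = float("inf")
            self.rtt_ratio = 1.0         # current RTT / minimum RTT seen

        def on_ack(self, ack_gap, send_gap, rtt):
            a = self.alpha
            self.ack_ewma = (1 - a) * self.ack_ewma + a * ack_gap
            self.send_ewma = (1 - a) * self.send_ewma + a * send_gap
            self.min_rtt = min(self.min_rtt, rtt)
            self.rtt_ratio = rtt / self.min_rtt
            # These few numbers are the "state" the rule table is keyed on.
            return (self.ack_ewma, self.send_ewma, self.rtt_ratio)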

  • by dltaylor ( 7510 ) on Saturday July 20, 2013 @04:55AM (#44335791)

    Yet Another Misleading Headline

    The paper states quite clearly that once the simulation has produced an algorithm, it is static in implementation.

    The authors give a set of goals and an instance of a static network configuration, then run a simulation that produces a send/don't-send algorithm FOR THAT NETWORK, in which all senders agree to use the same algorithm (the rough shape of that offline loop is sketched below).

    While this looks like very interesting and useful research, it has nothing to do with systems that learn from and adapt to real world networks of networks.
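
    As a sketch of that offline-design point only: candidates are scored against a fixed network model and objective, the best one is kept, and the winner is then frozen. The greedy random search below is a stand-in for Remy's actual (much more sophisticated) optimizer.

    import random

    def design_offline(initial, perturb, score, iterations=1000):
        """Keep a candidate only if it scores better in simulation."""
        best, best_score = initial, score(initial)
        for _ in range(iterations):
            candidate = perturb(best)
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        return best  # a static algorithm; it does not keep learning online

    # Toy usage: "design" a single send-probability against a made-up score.
    random.seed(0)
    result = design_offline(
        initial=0.5,
        perturb=lambda p: min(1.0, max(0.0, p + random.uniform(-0.1, 0.1))),
        score=lambda p: -(p - 0.7) ** 2,  # pretend 0.7 is optimal for this network
    )
    print(round(result, 3))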

  • by Anonymous Coward on Saturday July 20, 2013 @08:04AM (#44336199)

    This.

    I've done some work in machine learning and was wondering if they'd done something novel. Then I read through, and it turns out it is very good on its given training set, but performance drops off rapidly the further the network strays from that training set.

    Well hell. I've had machine learning algorithms that were 100% accurate on their training sets. That's not impressive. The real test is how well it works when presented with data outside the training set, and they state in the FAQ that it doesn't do particularly well there.
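
    To illustrate the general point (nothing specific to Remy here): a model that just memorizes its training data can score perfectly on it and still be useless off it, so only held-out performance means anything. A deliberately silly nearest-neighbour example:

    def nearest_neighbour_predict(train, x):
        """train: list of (feature, label) pairs; return label of closest feature."""
        return min(train, key=lambda pair: abs(pair[0] - x))[1]

    def accuracy(train, points):
        correct = sum(nearest_neighbour_predict(train, x) == y for x, y in points)
        return correct / len(points)

    train = [(0.1, "slow"), (0.2, "slow"), (0.8, "fast"), (0.9, "fast")]
    held_out = [(0.45, "slow"), (0.60, "slow"), (0.95, "slow")]

    print(accuracy(train, train))     # 1.0: the model just memorized these points
    print(accuracy(train, held_out))  # ~0.33: the number that actually matters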

  • by AK Marc ( 707885 ) on Saturday July 20, 2013 @08:37AM (#44336281)
    It's a mystery because, in practice, tracking thousands of sessions is too hard to handle deterministically in a simple static manner, so we use WRED instead. This is essentially WDED: weighted deterministic early detection. What we don't understand is how it does so much better than random drops, mainly because the math is hard. Someone could probably take this and write a mathematics thesis on it. Determining how to drop packets to keep a minimum queue size with the lowest impact on performance is something that has been worked on for years. This isn't unknowable, or even really that hard. It's just different and complicated, within a small area of interest: less than 1% of the population knows what WRED is, let alone how this is essentially an improvement on it (at least as far as I could tell from TFA, as I haven't had time to read the source, let alone understand it).
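
    For anyone who hasn't run into it, here is roughly the WRED idea in miniature; the thresholds and weights are made-up example values, not anything vendor-specific.

    import random

    def wred_should_drop(avg_queue, min_th=20, max_th=60, max_p=0.1, weight=1.0):
        """As average queue depth climbs between min_th and max_th, packets are
        dropped with increasing probability, nudging TCP senders to back off
        before the queue overflows. Return True to drop this packet."""
        if avg_queue < min_th:
            return False
        if avg_queue >= max_th:
            return True
        drop_p = max_p * weight * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < drop_p

    # Lower-priority traffic can be given a higher weight, so it is dropped
    # earlier and more often than higher-priority traffic at the same depth.
    print(wred_should_drop(avg_queue=45, weight=2.0))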
  • by Immerman ( 2627577 ) on Saturday July 20, 2013 @10:30AM (#44336643)

    The canonical example is that we have no idea why we're capable of logical thought, yet that doesn't in any way impair us from using it.

    In fact when it comes to most complex systems (economy, ecology, etc, etc, etc) we don't *really* understand how they work, but we muddle through as best we can. Generally speaking when faced with a complex system we take one of two routes:
    * Oversimplify our model of the system so that we think we understand it and work from there (the "professional" approach, which often ends catastrophically when those oversimplifications come home to roost)
    * Trial and error (the historically far more successful approach, and the only one used by evolution)

    Something like the bent-wire antenna with incredible omnidirectional properties is a great example of this: it's not that there's some magical feature we haven't discovered about radio; the thing was designed by a genetic algorithm within a computer simulation that was 100% limited by our existing models of antenna behavior. But a 10-wire antenna allows for phenomenally complex interactions between those behaviors, and the trial-and-error approach allowed the algorithm to home in on an extremely effective configuration within a problem space far too complex to reason our way through.

    An even better example would be the nozzles used by some companies to create powdered laundry detergent: they spent a bunch of money on engineers to design a nozzle that would spray a liquid detergent solution in a manner that created tiny, quick-drying droplets. Despite the simulations all saying it should work great, it failed miserably. Then they just built a bunch of random nozzles, tried them out, and used genetic algorithms to home in on an effective design. The difference from the antenna process was that they actually made physical versions of the nozzles to test, because the best available simulations were clearly incompatible with reality.
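
    The skeleton of that trial-and-error approach fits in a few lines. This is a generic toy genetic algorithm, not the antenna or nozzle code; a real run would swap the stand-in fitness function for an antenna simulation or a physical spray test, which is the expensive part.

    import random

    def evolve(fitness, genes=10, pop_size=30, generations=50, mut=0.1):
        """Random designs, keep the best scorers, mutate them, repeat."""
        population = [[random.uniform(-1, 1) for _ in range(genes)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            children = [[g + random.gauss(0, mut) for g in parent]
                        for parent in survivors]
            population = survivors + children
        return max(population, key=fitness)

    # Toy fitness: prefer designs whose components sum close to zero.
    best = evolve(lambda design: -abs(sum(design)))
    print(round(sum(best), 4))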

  • by Immerman ( 2627577 ) on Saturday July 20, 2013 @11:55AM (#44336983)

    I have heard claims along that line - something like one of the protective layers was effectively thermite? There seem to be as many theories as there are people making them, but it's hard to argue that the hydrogen wasn't at least an added accelerant.

    Personally I blame the television crews for the real disaster. Without them it would've just been a newspaper story about a German airship burning up and killing some people. With the dramatic visuals, though, it was the death knell of the airship industry, for no good reason. The sinking of the Titanic was at least as big a disaster and had a negligible effect on the ocean liner industry.
