QUIC: Google's New Secure UDP-Based Protocol

New submitter jshurst1 writes "Google has announced QUIC, a stream multiplexing protocol running over a new variation of TLS, as well as UDP. The new protocol offers connectivity with a reduced number of round trips, strong security, and pluggable congestion control. QUIC is in experiment now for Chrome dev and canary users connecting to Google websites."
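The core idea (many logical streams sharing one UDP socket, with each datagram tagged by a stream ID) can be sketched in a few lines of Python. The frame layout below is invented purely for illustration; it is not QUIC's actual wire format:

```python
import socket
import struct

# Hypothetical frame header: 4-byte stream ID + 4-byte offset, then payload.
# (Illustrative only -- not the real QUIC wire format.)
HEADER = struct.Struct("!II")

def pack_frame(stream_id, offset, payload):
    """Prefix a payload with its stream ID and byte offset."""
    return HEADER.pack(stream_id, offset) + payload

def unpack_frame(datagram):
    """Split a datagram back into (stream_id, offset, payload)."""
    stream_id, offset = HEADER.unpack(datagram[:HEADER.size])
    return stream_id, offset, datagram[HEADER.size:]

# Two logical streams multiplexed over a single UDP socket:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for sid, data in [(1, b"GET /index"), (2, b"GET /style.css")]:
    sock.sendto(pack_frame(sid, 0, data), ("127.0.0.1", 4433))
```

The real protocol layers encryption, retransmission, and flow control on top of framing like this, all in userspace.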
  • Re:Don't trust 'em (Score:5, Informative)

    by K. S. Kyosuke ( 729550 ) on Friday June 28, 2013 @03:31PM (#44136211)

    I have serious doubts in ANY new network tech not having backdoors of some sort.

    Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.

  • Re:Don't trust 'em (Score:5, Informative)

    by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Friday June 28, 2013 @03:37PM (#44136287)

    Maybe if you would RTFA instead of pontificating, you would have found that the reference QUIC implementation is already open source, the specification is open, the wire specification is open, the whole thing is open. If you don't trust Google's implementation then roll your own.

  • Re:Don't trust 'em (Score:5, Informative)

    by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Friday June 28, 2013 @03:46PM (#44136377)

    They are not doing their own crypto.... they are using TLS. Again, please read the actual documents.

  • Re:Don't trust 'em (Score:2, Informative)

    by jdogalt ( 961241 ) on Friday June 28, 2013 @04:00PM (#44136593) Journal

    I have serious doubts in ANY new network tech not having backdoors of some sort.

    Oh, come on. This is a network protocol. Sure, protocols *can* have flaws, but it's a very long stretch from being forced to run an unknown binary. Just implement it on your own if you're paranoid enough.

    The problem with trying to implement a new protocol over tcp/ip (the internet), like Tim Berners-Lee did with the web, is that the mythical 'Open Internet' has been degraded. QUIC and WebRTC reek to me of some Orwellian attempt to make sense of the lies about home servers being less worthy of net-neutrality protections than Skype's servers; i.e. by allowing 'client to client' file transfers and video chats. All this is because of the conspiracy to deprive residential internet users of the power to serve. Now, don't get me wrong, the things WebRTC, and I'm guessing QUIC, work around (legacy NAT traversal when no simple 'open internet' directly routable path is available) are useful. But it is disingenuous to look at QUIC without also looking at the fact that when Google entered the residential ISP business, they actually pushed the server persecution further, with blanket 'prohibited from hosting any kind of server' terms-of-service language.

    Earlier this week, the FCC finally, after 9 months, 'served' my Net Neutrality (2000F) complaint against Google, along with the longer 53-page manifesto that has now reached Google via the FCC via the Kansas Attorney General('s office). Yesterday, after pinging schneier@schneier.com for any insight to prepare for Google's July 29th compelled response, he (or someone spoofing him) replied: "Thanks.\n\nGood Luck."

    http://cloudsession.com/dawg/downloads/misc/mcclendon_notice_of_informal_complaint.pdf [cloudsession.com]
    http://cloudsession.com/dawg/downloads/misc/kag-draft-2k121024.pdf [cloudsession.com]
    http://slashdot.org/comments.pl?sid=3643919&cid=43438341 [slashdot.org] (score 5 comment about the situation, with further links)

  • by Bengie ( 1121981 ) on Friday June 28, 2013 @05:03PM (#44137359)
    TCP has some major issues with congestion control that don't play well with bufferbloat. The Internet is bursty in nature, and TCP takes too long to ramp up. It is actually easier on infrastructure to burst 10MB over one second than to stream it over 10 seconds.

    There are a lot of write-ups on issues with TCP, but one of the big ones, which is becoming a real problem as speeds increase while latency stays fixed, is congestion control. Because TCP starts off slow and ramps up, it tends not to make use of available bandwidth, and unused bandwidth is bad. The other issue is that current TCP uses packet loss to decide when to back off. The problem this creates is that packet loss tends to affect a lot of connections at the same time. You get this synchronization where lots of computers experience packet loss at the same time, so they all back off at the same time. Suddenly the route is under-utilized. All of the connections start building up again until the route is over-utilized, then they all back off at the same time.

    This issue alone could possibly cause large portions of the Internet to fail. It has happened in the past, and the variables are getting to be similar again. Essentially you're left with a large portion of Internet routes in a constant violent swing between over-utilized and under-utilized.

    You get this issue where the average utilization is low, but packet loss and jitter are high. Relatively speaking.
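    The synchronized back-off described above is easy to reproduce in a toy simulation (numbers and constants here are made up; this models idealized loss-based AIMD flows, not any real TCP stack):

```python
# Toy model of "global synchronization": N loss-based AIMD flows share a
# link; when the link overflows, every flow sees loss in the same round
# and halves its rate at once, so utilization swings violently.
N_FLOWS = 10
CAPACITY = 1000           # link capacity, arbitrary units per round
rates = [10.0] * N_FLOWS  # every flow starts small

utilization = []
for _ in range(200):
    total = sum(rates)
    utilization.append(min(total, CAPACITY) / CAPACITY)
    if total > CAPACITY:
        # Shared loss event: all flows back off together (multiplicative decrease)
        rates = [r / 2 for r in rates]
    else:
        # Additive increase while there is headroom
        rates = [r + 5 for r in rates]

print("min utilization after warm-up: %.2f" % min(utilization[50:]))
print("max utilization after warm-up: %.2f" % max(utilization[50:]))
```

    Even in steady state the link oscillates between saturated and roughly half-idle, which is the low-average-utilization, high-jitter pattern described above.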

    There is a lot of theory on how to fight these issues, but the only real way to figure it out is to actually test these theories at large scale. A protocol that rides on top of UDP and runs in the application is the perfect place to test this: if something goes wrong, you just disable it. You can't do that with most OSes' TCP stacks.
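    That "pluggable" property is easy to picture: in a userspace transport, the congestion controller is just an object you can swap out (or switch off) per application, with no kernel changes. A minimal sketch, with invented class names:

```python
# Sketch of pluggable congestion control in a userspace transport.
# Class names are invented for illustration.
class LossBasedAIMD:
    """Classic TCP-style control: halve on loss, creep up on acks."""
    def __init__(self):
        self.cwnd = 10.0
    def on_ack(self):
        self.cwnd += 1.0 / self.cwnd   # congestion-avoidance growth
    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2)

class FixedRate:
    """Experimental stand-in: ignore loss entirely."""
    def __init__(self, cwnd=20.0):
        self.cwnd = cwnd
    def on_ack(self):
        pass
    def on_loss(self):
        pass

def run(controller, events):
    """Feed a trace of 'ack'/'loss' events to whichever controller is plugged in."""
    for ev in events:
        (controller.on_loss if ev == "loss" else controller.on_ack)()
    return controller.cwnd

aimd = run(LossBasedAIMD(), ["ack"] * 5 + ["loss"])
fixed = run(FixedRate(), ["ack"] * 5 + ["loss"])
```

    Rolling out a new controller is then an application update, not an OS upgrade, which is exactly why a UDP-based userspace protocol is a good testbed.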

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...