PHK: HTTP 2.0 Should Be Scrapped
Via the HTTP working group list comes a post from Poul-Henning Kamp proposing that HTTP 2.0 (as it exists now) never be released, after the plan of adopting Google's SPDY protocol with minor changes revealed flaws that SPDY/HTTP 2.0 will not address. Quoting: "The WG took the prototype SPDY was, before even completing its previous assignment, and wasted a lot of time and effort trying to goldplate over the warts and mistakes in it. And rather than 'ohh, we get HTTP/2.0 almost for free', we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them. ... Wouldn't we get a better result from taking a much deeper look at the current cryptographic and privacy situation, rather than publish a protocol with a cryptographic band-aid which doesn't solve the problems and gets in the way in many applications? ... Isn't publishing HTTP/2.0 as a 'place-holder' just a waste of everybody's time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains?"
Encryption (Score:5, Insightful)
I hope that whatever HTTP2.0 ends up being enforces encryption by default.
Re:Encryption (Score:5, Interesting)
No, you really don't. Encryption is good for Facebook, but enforcing it for your Internet-of-Everything lightbulb or temperature probe in the basement gains nothing other than more complex bugs and lower battery life.
Re:Encryption (Score:5, Funny)
Nice try NSA.
Re:Encryption (Score:4, Informative)
Nonsense. Enforcing encryption does not make things more secure, unless that encryption and the authentication going with it is flawless. That is very unlikely to be the case against an attacker like the NSA.
Re:Encryption (Score:5, Insightful)
It doesn't need to be perfect. If cracking it still takes some time, it drains their resources. And it can still be unbreakable for attackers with fewer resources at their disposal.
Re:Encryption (Score:5, Insightful)
Unfortunately, breaking the crypto directly is _not_ what they are going to do. Protocol flaws usually allow very low cost attacks, it just takes some brain-power to figure them out. The NSA has a lot of that available.
Re:Encryption (Score:4, Insightful)
Re: (Score:2)
Except that you might not want either end to know you're listening in.
Re: (Score:2)
In that case, break in physically and "bug" the end-systems. Or burn a higher-value zero day exploit for a remote compromise.
Re: (Score:3)
Indeed. Crypto is only ever going to help against mass-surveillance, not against targeted attacks against a small number of people. But that is the idea here.
Re: (Score:2)
http://xkcd.com/538/ [xkcd.com]
Re: (Score:2)
At least subpoenaing has some judicial oversight and is visible to the target. I know, the USA got rid of all that with National Security Letters and similar bullshit, but in principle forcing judicial oversight at the protocol level is a good thing and gives us something to work with.
Re:Encryption (Score:4, Insightful)
I think the only thing we know is that they would like to be able to break modern crypto directly. There is no indication that they can. Of course, they can brute-force DES or 512 bit RSA keys, but that is not going to help them a lot, and that capability does not scale to, say, 128 bit symmetric or 2048 bit asymmetric keys. There are also indications that they _may_ be able to break some non-compromised ECC crypto and they likely know of ways to compromise elliptic curves in a way that allows them to cheaply (!) attack some ECC crypto. All of which is not a problem when algorithm selection is done carefully.
Note that sabotaging crypto is not the same as breaking it. Breaking crypto means successfully attacking non-sabotaged crypto. (Crypto lingo deviates a bit from common use here.)
Re: (Score:2)
Run your own CA (Score:2)
Re: (Score:3, Interesting)
Reasonable idea, but I suspect GE, Samsung, Whirlpool, and all the other manufacturers of Internet connected widgets will force you to buy a certificate from their app store. Hacking your light bulb to install your own certificate will be a federal crime, punishable by PMITA prison or worse.
Re: (Score:2)
Re: (Score:2)
You've confused encryption with authentication. It doesn't need to be authenticated; the idea is to stop drive-by Starbucks script kiddies and mass surveillance. Targeted attacks will always be an issue, even with strong, well-authenticated encryption.
Re: (Score:2)
Re: (Score:2)
On the minus side, your mood lighting may be a lot more expensive if it suddenly needs a significant CPU power upgrade and far more complex software.
Re: (Score:2)
Re:Encryption (Score:5, Insightful)
Nothing is NSA-proof, therefore we should just scrap TLS and transmit everything in plaintext, right? The whole point here is not to make the system undefeatable, just to increase the cost of breaking it, just like your door lock isn't perfect, but still useful. If HTTP was always encrypted, even with no authentication, it would require the NSA to man-in-the-middle every single connection if it wants to keep its pervasive monitoring. This would not only make the cost skyrocket, but also make it trivial to detect.
Re: (Score:2, Insightful)
You are confused. The modern crypto we have _is_ NSA-proof (the NSA made sure of that). The protocols using it are a very different matter. These have the unfortunate property that they are either secure or cheap to attack (protocols do not have a lot of state and hence cannot put up a lot of resistance to brute-forcing). Hence getting the protocols right, and more importantly, designing them so that they have several effective layers of security and can be fixed if something is wrong, is critical. U
Re: (Score:2, Interesting)
Nothing is NSA-proof,
NSA proof is possible unless NSA includes goons armed with $5 wrenches.
The whole point here is not to make the system undefeatable, just to increase the cost of breaking it, just like your door lock isn't perfect, but still useful.
If you can't view the traffic, then the traffic is safe from you and there is no need to encrypt it.
If you can view the traffic, then you already have everything necessary to own it... a TCP initial sequence number and a fast pipe is all you need... nobody is doing the filtering necessary to prevent source address spoofing, so these attacks are trivial.
If your data is going through a "great firewall", CGN (everyone using a cellular ne
Re: (Score:2)
How do you explain to the user that their data might be encrypted yet still not protected, since the other end is not trusted?
I'm talking about http here, not https. The idea is that even with http -- where you don't pretend that anything is secure -- you still encrypt everything. It's far from perfect, but it beats plaintext because the attacker can't hide anymore -- it has to be an active attack. I don't pretend to know all about the pros and cons of http 2, but plaintext has to die.
Re: (Score:2)
> Nothing is NSA-proof, therefore we should just scrap TLS and transmit everything in plaintext, right?
I'm afraid that this is a common approach. I've seen numerous system designers refuse to enable encryption on their internal websites or internal software on the basis that "if they're inside our networks, we've got much bigger problems". It's one of the mantras of bad engineers, much like "there's no point to documentation, just read the code".
Re: (Score:2)
Re: (Score:2)
It makes man-in-the-middle attacks trivially possible and thus renders the encryption useless. The problem with channel encryption is that you need to share some form of keying material, and if you don't authenticate you can't trust the keying material. The only cases where encryption without authentication sort of works is when the keying material is previously shared and you can reasonably assume that it was not compromised. But in the case of the web, this is basically impossible.
Re: (Score:3)
DNSSEC gives you assurance that the server you're connecting to is the one the domain's owner says is hosting that domain. So you have no MitM attacks even with un-authenticated encryption.
So splitting auth from encryption is a good thing, and can be done properly. You can have both, but at the moment you can only have both or none. None is obviously not good.
Re: (Score:2)
To quote myself:
The only cases where encryption without authentication sort of works is when the keying material is previously shared and you can reasonably assume that it was not compromised
Reading comprehension much?
Re: (Score:3)
With encryption without authentication, many people will assume they gain some security when they are not
They do gain some security. They gain security against passive adversaries. They don't gain security against adversaries who are able to intercept and modify their traffic. The question is whether the first threat model is a sensible one to care about. Given that we now know that there is at least one global passive adversary (and likely others who didn't have a Snowden), it seems sensible to me.
Also, it has higher overhead and higher implementation complexity, increasing cost both for the implementation and the platform it runs on (think small embedded devices, e.g., that can do (very basic) http even on small 8 bit controllers, but forget about fitting any crypto on the
Most IoT platforms are looking at using the ARM Cortex-M0 (or M0+) as the client devices. At the data rates r
Re: (Score:2)
Not at all. It would appear to the user like any non-TLS site does today - standard address bar, no padlock, nothing. What goes on in the background doesn't matter as far as the user is concerned. In fact, I'd be surprised if many users have even considered that their data is being sent plaintext on the majority of sites. Changing the background to be encrypted would be a good way to block a lot of pass
Re: (Score:2)
If all traffic is encrypted, but trivial to break (as a protocol or other flaw can easily make it), then the decryption effort is not an issue. It can be done by dedicated hardware. Just to give you an idea, a current CPU does maybe 100MB/s AES per core in software but something like 2GB/s in its hardware AES engine. For a modern multi-core CPU, the AES performance is in the order of the main memory bandwidth, and the AES engine is just a tiny addition to the chip. Dedicated hardware is a few orders of magn
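For a ballpark on your own machine, here is a minimal throughput check; it assumes the third-party 'cryptography' package (which calls into OpenSSL and uses the CPU's AES instructions where available), and the buffer size is arbitrary:

    # Rough single-core AES-CTR throughput check; numbers vary by CPU and by
    # whether OpenSSL can use the hardware AES engine.
    import os, time
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)
    buf = os.urandom(64 * 1024 * 1024)  # 64 MiB of test data (arbitrary size)

    enc = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).encryptor()
    t0 = time.perf_counter()
    enc.update(buf)
    enc.finalize()
    dt = time.perf_counter() - t0
    print(f"AES-256-CTR: {len(buf) / dt / 1e6:.0f} MB/s on this core")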
Re: (Score:2)
Re: (Score:2)
Makes no difference. If I stuck Wireshark on your network, I'd see all the packets being sent and could read them quite happily.
If you haven't encrypted them, I'd be able to read them without any problem whatsoever. The difference between HTTP and a custom protocol in this case: none whatsoever.
All HTTP gives you is a standard set of knowledge, routing, software and devices that know how to handle it. That's pretty useful, given that a custom protocol makes you no more secure. So you might as well use HTTP.
Re: (Score:2, Troll)
So Genius ... how's that phone app transmit commands to the system controlling your home? Oh yeah. Through the internet. It's as if security still matters there huh?
Not through HTTP. Oh, never mind. You are too dumb to know the difference between HTTP and The Internet.
Re: (Score:2)
Re: (Score:2)
If you're behind a firewall that shunts all traffic to a proxy that allows only outgoing HTTP connections
I'm not. Are you?
At one time, I was. I imagine many still are, especially in workplaces and on IPv4 address-poor continents.
Re: (Score:2)
Interestingly, in the above case encryption is almost irrelevant. What are you doing with the information that he turned on light bulb 23 and switched off his AC? The important bit here is authentication, and that can be trivially done without channel encryption. You still need an "encryption" scheme like an HMAC for the authentication though.
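To make that concrete, here is a minimal sketch in Python; the pre-shared key, the counter scheme and the command format are all made up for illustration:

    # Authenticate a light-bulb command without encrypting the channel:
    # a pre-shared key, a message counter against replays, and an HMAC.
    import hmac, hashlib

    PSK = b"pre-shared-key-provisioned-at-setup"

    def sign(counter: int, command: str) -> bytes:
        msg = f"{counter}:{command}".encode()
        return msg + b"|" + hmac.new(PSK, msg, hashlib.sha256).hexdigest().encode()

    def verify(packet: bytes, last_counter: int) -> bool:
        msg, _, tag = packet.partition(b"|")
        counter = int(msg.split(b":", 1)[0])
        good = hmac.compare_digest(tag.decode(),
                                   hmac.new(PSK, msg, hashlib.sha256).hexdigest())
        return good and counter > last_counter  # reject replays of old packets

    packet = sign(42, "bulb23:on")
    print(verify(packet, last_counter=41))  # True
    print(verify(packet, last_counter=42))  # False: replayed packet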
Re: (Score:2)
Aren't these the exact situations where encryption is necessary? I don't want someone sniffing the plaintext credentials of my fridge as they fly through the air and then turning the temperature up to 40...
Re: (Score:2)
No, you really don't. Encryption is good for Facebook, but enforcing it for your Internet-of-Everything lightbulb or temperature probe in the basement gains nothing other than more complex bugs and lower battery life.
By default is not the same as mandatory. I don't see anything wrong in principle in having the endpoints explicitly agree to use an unencrypted connection. In practice I'd only want it if it did not add a significant overhead or make it more difficult to set up a server without certificates.
Re: (Score:2)
No, you really don't. Encryption is good for Facebook, but enforcing it for your Internet-of-Everything lightbulb or temperature probe in the basement gains nothing other than more complex bugs and lower battery life.
While you tried to make a good point here, your example was horrible.
Being concerned about Facebook encryption is like gift-wrapping an elephant with a piece of string.
Your password is about the only damn thing not shared or sold on that site, so I'm not sure what the hell anyone is trying to hide or secure.
Re: (Score:2)
There are already low power wifi chips that support WPA2. Crypto is slowly becoming a standard peripheral on low cost microcontrollers and radios (e.g. a €1 Atmel XMEGA supports AES in hardware). I have implemented protocol level encryption with user selectable keys over an 868MHz ultra low power (5+ years life from two AA cells) network for my job, so it definitely is possible.
Personally I wouldn't buy an internet connected device that didn't support encryption. Luckily even the cheapest, crappiest AR
Re: (Score:2)
Re:Encryption (Score:5, Informative)
Last I heard, it still supports unencrypted, but only if both the client and server ask for it. If either one asks for encryption, then the connection is encrypted, even if there's no authentication (i.e. certificate). With no certificate it's still possible to pull an active (MitM) attack, but that is much harder to pull off at a large scale without anyone noticing (unlike passive listening, where you can just collect all the data you see).
Re:Encryption (Score:5, Informative)
Last I heard, it still supports unencrypted, but only if both the client and server ask for it. If either one asks for encryption, then the connection is encrypted, even if there's no authentication (i.e. certificate). With no certificate it's still possible to pull an active (MitM) attack, but that is much harder to pull off at a large scale without anyone noticing (unlike passive listening, where you can just collect all the data you see).
A server cannot ask for encryption.
Unless the client establishes a secure connection in the first place, the server has no way of knowing if the client is actually who they claim to be. If the client attempts to establish a secure connection and the server responds with "I can't give you a secure connection" then the client needs to assume there is a man in the middle attack going on and refuse to communicate with the server.
There is no way around it, security needs to be initiated on the client and the server cannot be allowed to refuse a secure connection.
HSTS is a partial solution for this problem (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security)
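For reference, HSTS amounts to a single response header sent over HTTPS; a toy sketch with Python's standard http.server follows (the max-age value and port are arbitrary, and a real deployment would sit behind TLS rather than plain HTTPServer):

    # Toy handler showing what HSTS amounts to: one response header, sent over
    # HTTPS, telling the browser to refuse plain HTTP to this host for a while.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HSTSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # The browser remembers this for max-age seconds (value is arbitrary here)
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello over TLS\n")

    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()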
Re:Encryption (Score:5, Informative)
A server cannot ask for encryption.
AFAIK, HTTP2 allows the server to encrypt even if the client didn't want to.
Unless the client establishes a secure connection in the first place, the server has no way of knowing if the client is actually who they claim to be. If the client attempts to establish a secure connection and the server responds with "I can't give you a secure connection" then the client needs to assume there is a man in the middle attack going on and refuse to communicate with the server.
If you're able to modify packets in transit (i.e. Man in the Middle), then you can also just decrypt with your key and re-encrypt with the client key. Without authentication, there's just nothing that's going to prevent a MitM attack. Despite that, being vulnerable to MitM is much better than being vulnerable to any sort of passive listening.
Re: (Score:2)
Despite that, being vulnerable to MitM is much better than being vulnerable to any sort of passive listening.
This was sort of my sentiment too, but it basically is false. Passive listening on a WiFi connection may be foiled, to a certain degree. But the moment someone has access to the wires, the game is over no matter how you play it. Passive listening that requires penetrating a device or inserting hardware is basically MitM, and as a result going from passive listening to an active MitM attack is basically just upgrading the software.
Re: (Score:2)
You're confusing encryption with authentication.
Re: (Score:2)
A server cannot ask for encryption.
So a request to HTTP://www.example.com can't be redirected to HTTPS://www.example.com? Because I would consider that the server asking for encryption.
There is no way around it, security needs to be initiated on the client and the server cannot be allowed to refuse a secure connection.
Great, so when we go to all secure, we don't need any more 500 series error messages, as the servers aren't "allowed" to refuse connections.
Re: (Score:3)
Except that HTTP 500 is a proper and valid response when it comes to HTTP. Following your oddball logic a 404 would also be "refused", what nonsense. The server accepted the connection, the request handler for that resource crapped its pants and the server is dutifully informing you of that fact. What is "not accepting the connection", where? The fact that you got a 500 means the server accepted your connection. Even a 401 Unauthorized is technically an accepted connection, since the authorization is f
Re: (Score:2)
RFC 2817 defines how a server (or a client) can require TLS. It is widely deployed for printers but less so on the Internet due to proxies not supporting it.
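Roughly, the client side of RFC 2817 looks like the sketch below: send an Upgrade header on the existing connection and, if the server answers 101 Switching Protocols, wrap the same socket in TLS. The host name here is a placeholder, and most public servers will simply decline:

    # Rough sketch of the RFC 2817 client side: ask the server to switch the
    # existing connection to TLS via an Upgrade header, then wrap the socket.
    import socket, ssl

    host = "printer.local"  # placeholder host
    sock = socket.create_connection((host, 80))
    sock.sendall(
        b"OPTIONS * HTTP/1.1\r\n"
        b"Host: " + host.encode() + b"\r\n"
        b"Upgrade: TLS/1.0\r\n"
        b"Connection: Upgrade\r\n\r\n"
    )
    reply = sock.recv(4096)
    status_line = reply.split(b"\r\n", 1)[0]
    if b"101" in status_line:  # 101 Switching Protocols
        tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
        # ... continue speaking HTTP over 'tls' ...
    else:
        print("server declined the upgrade:", status_line)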
Re: (Score:3)
I fear that would train users to mass click through certificate warnings, or even to install shady "helpful" software that will "manage" the problem for them.
Re: (Score:2)
Given the (lack of) alternatives, it's hard to blame them for doing that rather than being abandoned by users; but it's pretty much the state of play right now.
Re:Encryption (Score:4, Insightful)
That is stupid. First, encryption essentially belongs on a different layer, which means combining it with HTTP is always going to be a compromise that will not work out well in quite a number of situations. Hence at the very least you should be able to leave it out, and either do without or use a different mechanism on a different layer. SSL (well, actually TLS) would have worked if it had solved the trust-in-certificates problem, which it spectacularly failed at due to naive trust models, that I now suspect were actively encouraged by various Three Letter Agencies at that time. In fact, if you control the certificates on both ends, TLS works pretty well and does not actually need a replacement.
That said, putting in security for specific, limited (but widely used) scenarios can be beneficial, but always remember that it makes the protocol less flexible as it needs to do more things right. And there has to be a supporting infrastructure that actually works in establishing identity and trust. In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.
Re:Encryption (Score:5, Insightful)
In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.
SPDY's security component is TLS. SPDY is essentially just some minor restrictions (not changes) in the TLS negotiation protocol, plus a sophisticated HTTP acceleration protocol tunneled inside. So this really isn't a "first attempt", by any means. Not to mention the fact that Google has been using SPDY extensively for years now and has a great deal of real-world experience with it. Your argument might hold water when applied to QUIC, but not so much to SPDY.
It really helps to read the thread and get a sense of what the actual dispute is about. In a nutshell, Kamp is bothered less that HTTP/2 is complex than that it doesn't achieve enough. In particular, it doesn't address problems that HTTP/1.1 has with being used as a large file (multi-GB) transfer protocol, and it doesn't eliminate cookies. Not many committee members seem to agree that these are important problems for HTTP/2, though most do agree that it would be nice some day to address those issues, in some standard.
What many do agree on is that there is some dangerous complexity in one part of the proposal, a header compression algorithm called HPACK. The reason for using HPACK is the CRIME attack, which exploits Gzip compression of headers to deduce cookies and other sensitive header values. It does this even though the compressed data is encrypted. HPACK is designed to be resistant to this sort of attack, but it's complex. Several committee members are arguing that it would be reasonable to proceed without header compression at all, thus greatly simplifying the proposal. Others are arguing that they can specify HPACK, but make header compression negotiable and allow clients and servers to choose to use nothing rather than HPACK, if they prefer (or something better when it comes along).
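For those wondering why Gzip-style header compression is dangerous in the first place, the core of CRIME fits in a few lines: if attacker-chosen text is compressed in the same stream as a secret header, a correct guess compresses better, and that length difference survives encryption. A toy illustration (the cookie value is made up):

    # The CRIME observation in miniature.
    import zlib

    secret_headers = b"Cookie: session=hunter2abc\r\n"

    def compressed_len(guess: bytes) -> int:
        # Attacker-controlled request data compressed together with the secret header
        return len(zlib.compress(b"GET /?q=" + guess + b" HTTP/1.1\r\n" + secret_headers))

    for guess in (b"Cookie: session=h", b"Cookie: session=x"):
        print(guess, compressed_len(guess))
    # The guess matching the real cookie prefix generally compresses to fewer
    # bytes, letting an attacker recover the value one character at a time.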
Bottom line: What we have here is one committee member who has been annoyed that his wishes to deeply rethink HTTP have been ignored. He is therefore latching onto a real issue that the rest of the committee is grappling with and using it to argue that they should just throw the whole thing out and get to work on what he wanted to do. And he made his arguments with enough flair and eloquence to get attention beyond the committee. All in all, just normal standards committee politics which has (abnormally) escaped the committee mailing list and made it to the front page of /.
Re: (Score:2)
His previous comments are much better (Score:4, Interesting)
Why HTTP/2.0 does not seem interesting [varnish-cache.org]
Good point, except... (Score:3, Interesting)
...the entire idea is to cripple security and the ability to provide for privacy. In the end, national security agencies take the view that digital networks are a primary source of intelligence. Thus, being able to bug and break into systems is a national security priority. The group is dominated by companies that rely on government contracts, so they do their bidding and weaken the specs.
Ultimately, you live in an Oligarchy, not a democracy, so no one cares about your opinion or that of anyone else, unless you happen to have lots of cash. If you did have lots of cash, then you too would be trying to undermine security and privacy to ensure no one takes it from you.
Deal with it.
Re: (Score:3)
Was tempted to post as an Anonymous Coward for effect.
Re:Good point, except... (Score:4, Interesting)
It is a technical discussion. Unless you are prepared to provide feedback on how to make a more private/anonymous protocol which can serve as a drop-in replacement for HTTP 1.1, "normal users" will just serve as background noise.
PHK's biggest issue IMHO is that HTTP/2 will break his software (Varnish), by requiring things his internal architecture can't really deal with (TLS).
Re:Good point, except... (Score:5, Informative)
PHK's biggest issue IMHO is that HTTP/2 will break his software (Varnish), by requiring things his internal architecture can't really deal with (TLS).
Varnish was never intended to support TLS, nor do the majority of Varnish users (myself included) want it to. The core issues being discussed have little to do with Varnish, aside from the fact that PHK has an excellent understanding of HTTP and high performance content delivery. Having written an HTTP proxy of my own to perform certain other tasks, I understand and largely agree with his sentiments.
That said, it should be noted that many people who need to support TLS connections already use separate software in front of Varnish for cases where high performance intermediate HTTP caching is desirable. This is really a separate topic from discussion of HTTP/2 and/or SPDY, but implementation of a SPDY to HTTP proxy could handle cases where an administrator wishes to run software that only speaks HTTP, albeit with the drawback that SPDY-specific features would be unavailable.
For many use cases, the ability to support 30,000 concurrent HTTP connections with a single VM outweighs the value proposition of encrypting the content in transit, especially for cases where the content in transit isn't remotely sensitive in nature. While "encryption doesn't add much overhead, Google said so" is a commonly parroted idea these days, if you take the opportunity to test various deployment scenarios you'll quickly find that assertion is false for many of those use cases.
Mixing insensitive and sensitive content (Score:3)
For many use cases, the ability to support 30,000 concurrent HTTP connections with a single VM outweighs the value proposition of encrypting the content in transit, especially for cases where the content in transit isn't remotely sensitive in nature.
It isn't necessarily that the content you're trying to serve is "remotely sensitive in nature". It's that other parts of the same page may be "sensitive in nature", and browsers throw up pop-up windows about "mixed content" when a secure document transcludes an insecure resource. For example, the logged-in user's session cookie is "sensitive in nature" because an attacker can view it and replay it to impersonate the user. But because ad networks have a history of not supporting HTTPS, many sites have had
Re: (Score:2)
I'm keenly aware of the many issues surrounding mixed content. I'm not referencing any use cases where that would be an issue; far from it, I'm referencing cases where a single entity controls the serving of non-sensitive content, and I'm certainly not suggesting serving session cookies over plaintext under any circumstances. You might be interested to learn that I spend a considerable amount of time every week educating people on issues far more complex than the limited fundamentals you've referenced here,
Re: (Score:2)
Re: (Score:2)
I'm explicitly not referring to Slashdot. As much as I may disagree with said practices (and Slashdot has a seemingly ever-increasing pile of bad practices following the last buyout), poor practices on the part of a particular site operator bear no relation to responsible use cases.
Re: (Score:2)
To add to the heap of bad practices in relation to the current conversation, I'll repeat my earlier comment that Slashdot has still not renewed their certificate [cubeupload.com], and thus even subscribers are in a bit of a pickle with regard to TLS unless they happen to have previously noted the key fingerprint of the now-expired certificate and place rather high trust in the internal integrity of Slashdot operations. Given the circumstances, I could not in good professional conscience recommend such trust.
Re: (Score:2)
I'll add one more bit of fuel to the fire of Slashdot irresponsibility. It appears slashdot.org uses Apache 2.2.3 on CentOS to serve its content, and while this could be due to an obfuscated host response header, the issuance date for their wildcard SSL certificate (which really shouldn't be used in this case anyhow) was April 20, 2013. Unless Slashdot went to the trouble of avoiding OpenSSL all this time, this means the private key for their wildcard certificate was vulnerable to the Heartbleed vulnerabili
Re: (Score:2)
On an entirely separate note, it seems Slashdot still has not renewed their own certificate [cubeupload.com].
Re: (Score:2)
That privacy and anonymity are at least equal to encryption, if not more important.
I'm really interested in your plan to enforce privacy and anonymity in the HTTP protocol. Because I don't see how you can do that...
Re: (Score:2)
Convincing (Score:5, Interesting)
There is also the other thing that there is no urgent need to replace HTTP/1.1, despite what people claim. Sure, it has problems, but the applications it does not support so well are things that there is no urgent need for, hence there is no urgent need for a protocol replacement. It would be far better to carefully consider what to put into the successor and what not. And KISS should be the overriding concern; anything else causes a lot more problems and wastes a lot more resources than having the successor a few years later.
Re: (Score:2)
Indeed. And getting a bit more experience with it to be able to judge its merits and problems (unfortunately Google has been both short-sighted and too specific for their own needs in the past) would be a good thing too, as it provides the opportunity to learn some lessons before setting anything in stone.
Moving goal posts (Score:5, Insightful)
I don't think HTTP has any problems with security. All the real world problems with HTTP security are caused by:
* dismally slow roll out of dnssec. It should have been finished years ago, but it has barely even started.
* the high price of acquiring an SSL certificate (it's just bits!).
* slow rollout of IPv6 (SSL certificates generally require a unique IP and we don't have enough to give every domain name a unique IP).
* arguments in the industry about how to revoke a compromised SSL certificate, which have led to revocation being almost useless.
* SSL doesn't really work when there are thousands of certificate authorities, so some changes are needed to cope with the current situation (e.g. DNSSEC could be used to prevent two certificate authorities from signing the same domain name; see the sketch below)
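A rough sketch of that last point, using the existing DANE/TLSA record type rather than anything new; it assumes the third-party dnspython package, and the domain is just an example (most domains don't publish TLSA records yet):

    import dns.resolver  # pip install dnspython

    # TLSA records live under _port._proto.name and are protected by DNSSEC,
    # so only the domain owner can say which certificate (or key) is valid.
    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
    for rr in answers:
        # usage / selector / matching type / association data, per RFC 6698
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())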
Re: (Score:3)
You've got identification, which you only really want in a subset of cases (your bank, say); but which is actually slightly expensive to do properly and then you've got encryption, which you want in basically all cases (would you ever not at least want it?) which is cheap; but requires t
MITM from day one breaks key continuity management (Score:3)
However, much of the time your main concern is that the certificate isn't an MiTM, and that you are talking to the same person or entity you were talking to previously.
That's called the "key continuity management" paradigm. But KCM breaks down if the first time you talk to someone happens to be through a man in the middle. If your Internet connection is through a MITM proxy, as seen in bug 460374 [mozilla.org] and in many corporate networks, then "the same person or entity you were talking to previously" would be the MITM. For this reason, even though SSH is most often used in KCM mode, the "Please answer yes or no" prompt urges the user to confirm the server's key fingerprint out of b
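For what it's worth, the whole KCM/trust-on-first-use idea fits in a dozen lines; the host and pin file below are placeholders, and, as said above, the scheme is only as good as that very first connection:

    # Trust-on-first-use sketch: pin the server certificate's SHA-256 fingerprint
    # on first contact, and warn if it ever changes.
    import hashlib, pathlib, ssl

    host, port = "example.com", 443          # placeholder host
    pin_file = pathlib.Path("example.com.pin")

    pem = ssl.get_server_certificate((host, port))
    fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    if not pin_file.exists():
        pin_file.write_text(fp)   # first contact: this is where a MITM could slip in
        print("pinned", fp)
    elif pin_file.read_text() != fp:
        print("WARNING: certificate changed since last visit!")
    else:
        print("same key as before")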
Re: (Score:2)
Re: (Score:2)
Well, you can always do it the TOR way: basically the onion address is a fingerprint of the public key. You'd still need DNS to tell you that "435143a1b5fc8bb70a3aa9b10f6673a8.pubkey" can be found at IP 1.2.3.4 (or IPv6 abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd), so you could always suffer denial of service, though the "pubkey" DNS server should refuse requests to redirect that aren't signed by that public key, and nobody else would have the correct key for a MITM. The obvious downsides:
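Whatever the downsides, the name-derivation part is straightforward; a sketch using the third-party 'cryptography' package, with ".pubkey" as the hypothetical TLD from the comment above (onion addresses use a different hash and encoding, this only shows the shape of the idea):

    # Derive a self-certifying name from a key: the DNS label is just a hash of
    # the public key, so only the key holder can legitimately answer for it.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    key = ed25519.Ed25519PrivateKey.generate()
    pub = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    label = hashlib.sha256(pub).hexdigest()[:32]
    print(f"{label}.pubkey")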
Re: (Score:2)
DNSSEC should give you confidence that the person who currently "owns" the domain name is the same person who "owns" the server you're talking to. That should be enough for most casual connections. But it also puts all of your security in one basket. Take over the domain entry and you control everything.
So the next obvious step is to get multiple independent authorities to verify your identity, sign your key and provide that information via DNS and / or at connection establishment time. Then we should rais
cost of SSL certificates (Score:2)
Re: (Score:3)
The cost of SSL certificates is not in the bits.
Back in the day you actually had to pick up the phone, speak with someone and provide corporate documentation. Now you purchase certs from a computer in a 100% automated process. Completely worthless "just bits".
It's in the security of the private key, some validation in extended verification certs
Extended verification is a foolish scam to enrich CAs. Users hardly understand what the padlock icon in the URL bar means after being intentionally inundated with fake padlock gifs and "we're secure, believe what we say" assertions littering every online commerce and banking site on the planet.
Re: (Score:2)
I don't think HTTP has any problems with security.
I disagree. We live in a world where phishing attacks are common, and the PKI system is fragile. Fragile as in when Iran compromised DigiNotar and people most likely died as a result.
The root cause of both problems is that the current implementation of the web insists we use the PKI infrastructure every time we visit the bank, store or whatever. It's a fundamental flaw. You should never rely on Trent (the trusted third party, the CAs in this case) when you don't have to. Any security implementation does the
Death by Committee (Score:5, Insightful)
HTTP/1.1 is roughly seventeen years old now - technically HTTP/1.0 came out seven years before that, but in terms of mass adoption, NSFNet fizzled in '94 and then people really started to pay attention to the web - I had my first webpage about six months before that (at College) and there were maybe a dozen in the whole school who had heard of it previously. Argue for seven years if you'd like, but I'll say that HTTP/1.0 got seriously revised after three years of significant broad usage. SSLv3, still considered almost usable today, was released the year before. TLSv1.2, considered good, has been a standard for over five years and still it's poorly supported though now critically necessary for some security surfaces.
After this burst of innovation, somebody dreamt up the W3C and we got various levels of baroque standards, all while everybody else solved the same problems over and over again. IETF used to be pretty efficient, but it seems like they're at the same point now.
I won't argue for SPDY becoming HTTP/2.0 but I will admire it as an effort to freaking do something. Some guys at Google said, "look, screw you guys, we're going to try to fix this mess," and they did something. While imperfect, they still did enough that the HTTP/2.0 committee looked at it and said (paraphrasing), "hrm, since we haven't done anything useful for 15 years, let's take SPDY and tweak it and call it a day's work well done."
The part Google got most right was the "screw you guys" part: central-planning the web is not working. I'm not positive what the right organization structure looks like, but it's not W3C and IETF. We need to figure out what went right back in the mid 90's and do that again, but now with more experience under our belts. This talk of "one protocol to rule them all for 20 years" is undeniably a toxic approach. HTTP/1.1 should have been deprecated by 2005 and we should be on to the third iteration beyond it by now. Yeah, more core stuff for the devs to do; it used to be we had people who could start a company and write a whole new web browser in a year, half the time it takes to change the color of tabs these days.
And don't start with this "but this old browser on ... " crap either - we rapidly iterated before and can do it again. Are there people who fear change? Sure - and nobody is going to stop HTTP/1.1 from working 50 years from now, but by golly nobody should want to use it by then either.
Too much. (Score:3)
It's better they chuck some of it and stick with a few good bits. The encryption can be trashed as far as I care; that can be another group's problem. We need proxy caching and you can't do it with encryption and be secure.
The reason we can't move like before isn't the committee, it's that we now have a global system built around it and a great deal of investment in it. In the 90s it was all new; low risk, low impact. Today, there is a vast territory claimed and set; when you make new things you can't des
Re: (Score:3)
Trouble is, if you just wander off and do your own
Re:Death by Committee (Score:4, Insightful)
Google has derailed so much of the web's evolution in an attempt to control it that neither they nor any Google lover has the right to suggest they get to take the web's standards away from committees. From the "development" trees in Chrome, to WebRT and WebM, they have splintered the internet numerous times with no advantage to the greater good.
The committee was strong armed into considering SPDY simply because they knew Google could force it down everyone's throats with their monopoly powers across numerous industries (search, advertising, email, hosting, android, etc.). HTTP/1.1 has worked well for the web. The internet has not had any issues in the last 22 years except when assholes like Google and Microsoft decided to deviate from a centralized standard.
There is no way we should let Google set ANY standard after the numerous abuses they have done over the last 8 years, nor should any shills like you be allowed to suggest they should be the one calling the shots.
So, kindly go to hell.
How does WebM splinter the Internet? (Score:3)
From the "development" trees in Chrome, to WebRT and WebM, they have splintered the internet numerous times with no advantage to the greater good.
VP8 is a royalty-free video codec whose rate/distortion performance is in the same league as the royalty-bearing MPEG-4 AVC. WebM is VP8 video and Vorbis audio in a Matroska container. Did Xiph likewise "splinter[] the Internet" by introducing Vorbis as a royalty-free competitor to the royalty-bearing MP3 and AAC audio codecs? If so, how? If not, then how did Google's On2 division "splinter[] the Internet" by introducing WebM as a competitor to MPEG-4?
So was it a bad idea to introduce PNG beside GIF? (Score:2)
The very fact that they introduced another video codec, when they knew h264 was the dominant one, is proof enough.
"The very fact that Google introduced another touch-screen smartphone operating system, when they knew iOS was the dominant one, is proof enough."
After all, they said "ah, but we'll stop using h264 in a while and remove it from Chrome" to placate those concerns, only to renege on that idea and leave TWO codecs in Chrome, forcing others to scramble and implement h264 support.
You have one codec for people who also plan to serve to iOS and Flash Player (namely H.264) and one codec for people who plan to avoid paying royalties to MPEG-LA (namely VP8). It's the same reason that browsers added PNG alongside GIF: to allow webmasters to avoid paying royalties to Unisys.
Re: (Score:2)
I completely agree. W3C seems to be always behind reality, trying to describe it, but not define it. IETF did a lot of very useful work, but they have been branching out into rather obscure protocols recently. Where is HTTP/1.2? Surely HTTP/1.1 is not perfect?
And Google did what Google does: they threw together a prototype and checked how it would work. And it seems it is working very well for them, but maybe not so much for others.
I would also advocate to separate some of the concerns. Transmitting huge am
Re: (Score:2)
We rapidly iterated before when the web was niche. When we had, comparatively speaking, very few users. Before there was a mass adoption in business.
Back then the disruption of rapid iteration and accompanying obsolescence was not a big problem. Now it's a massive problem.
Sure, one can argue that institutions stuck using IE6 (or even IE8) should get with the times and update, but the reality is that is a very costly exercise. One can't simply blindly update a few thousand machines in a company when the
Arguing about other peoples arguments (Score:4, Insightful)
I think the following demonstrates the reality that participants in standards organizations are constrained by the market, and while they do wield some power, it must be exercised with extreme care and creativity to have any effect past L7.
As much as many people would like to get rid of Cookies -- something
you've proposed many times -- doing it in this effort would be counter-productive.
Counter-productive for *who* Mark?
Counter-productive for FaceBook, Google, Microsoft, NSA and the other mastodons who use cookies and other mistakes in HTTP
(ie: user-agent) to deconstruct our personal identities, across the entire web?
Even with "SSL/TLS everywhere", all those small blue 'f' icons will still tell FaceBook all about what websites you have visited.
The "don't track" fiasco has shown conclusively, that there is never going to be a good-faith attempt by these mastodons to improve personal privacy: It's against their business model.
And because this WG is 100% beholden to the privacy abusers and gives not a single shit for the privacy abused, fixing the problems would be "counter-productive".
If we cared about human rights, and privacy, being "counter-productive" for the privacy-abusing mastodons would be one of our primary goals.
It is impossible for me to disagree with this. I have several dozen tracking/market intelligence/stat gathering firms blackholed in DNS, where creative use of DNS to implement tracking cookies does not work. I count on the fact that they are all much too lazy to care about a few people screwing with DNS or operating browser privacy plugins.
I'm personally creeped out by hordes of stalkers following me everywhere I go... yet I see the same mistakes play out again and again... people looking to solve problems without consideration of the second order effects of their solutions.
You could technically do something about that army of stalker creeps... yet this may just force them underground, pulling the same data through backchannels established directly with the site. Rather than a cut-and-paste javascript job it would likely turn into a module loaded into the backend stack, with no visibility to the end user and no ability to control it.
While this would certainly work wonders for site performance and bandwidth usage... those limited feedback channels we did have for the stalked to watch the stalker are denied. On the flip side of the ledger, not collecting direct proof of access could disrupt some stalker creeps' business models.
I think an emotional, half-assed reaction to an NSA with an established ability to "QUANTUM INSERT" ultimately encourages a locally optimal solution that affords no actual safety or privacy to anyone.
Not only does opportunistic encryption provide a false sense of security to the vast majority of people, who simply do not understand the relationship between encryption and trust; such deceptions effectively relieve pressure on the need for a real solution... which I assume looks more like DANE and the associated implosion of the SSL CA market.
My own opinion: HTTP 2.0 is only a marginal improvement with no particular pressing need... I think they should think hard and add something cool to it... make me want to care... as is, I'm not impressed.
SPDY doesn't solve the real issues (Score:3, Insightful)
The biggest problem with SPDY is that it's a protocol by Google, for Google. Unless you are doing the same as Google, you won't benefit from it. In my free time, I'm writing an open source webserver [hiawatha-webserver.org] and by doing so, I've encountered several bad things in the HTTP and CGI standards. Things can be made much easier and thus faster if we, for example, agree to let go of this ridiculous pathinfo, agree that requests within the same connection are always for the same host, and make the CGI headers work better with HTTP.
You want things to be faster? Start by making things simpler. Just take a look at all the modules for Apache. The amount of crap many web developers want to put into their website can't be fixed by a new HTTP protocol.
We don't need HTTP/2.0. HTTP/1.3, with some things removed, fixed or at least some vague things specified more clearly, would be more than enough for 95% of all websites.
Tunnelling (Score:5, Insightful)
Maybe if we weren't trying to tunnel every god damned protocol and transport known to mankind through HTTP it wouldn't be such a massive problem to re-engineer and fix.
Seriously: The idea of TCP was to have multiple protocol ports, not to tunnel everything over :80.
Re:Tunnelling (Score:4, Insightful)
Re: (Score:3, Insightful)
Dang, I'm sad Linus Torvalds, John Carmack, et al. are "too self important" because someone else made a Wikipedia page about them. Or maybe programming, especially concerning the next standard that most of the internet would ideally run on, is too important for fucking hipsters to get involved.
Re:If you are a programmer and have a Wikipedia pa (Score:4, Insightful)
And why shouldn't we have a moratorium and review, ESPECIALLY in regards to what has come to light about how fucked the internet is in just the last year?
Why proceed blindly with a protocol that comes from Google, who gladly works hand in hand with the NSA and is a corporation whose core focus is to track and monitor every single person and thing online?
What? Just proceed with something that addresses NONE of the present mass surveillance issues, and possibly could make us less secure than we are now just because we don't have a fall back lined up?
Or how about we take this time to step back and reevaluate what HTTP2.0 needs to be -- such as changing to a focus on security and privacy.
Re: (Score:2)
Re: (Score:3)
Google, who gladly works hand in hand with the NSA
I have to call bullshit on this rather common but unsupported and unsupportable claim.
There is no evidence that Google had cooperated with the NSA in any way other than actually required by law, and there they claim to be sticklers in demanding that the government dot the i's and cross the t's, including refusing any requests that are overly broad. We can't see what they actually do, of course, because the law makes it illegal for them to say... however Google was the company that first started publishing
Re: (Score:2)
Globally unique addresses are really unintuitive. What is this, the phone system? /s