How Not To Design a Protocol
An anonymous reader writes "Google security researcher Michal Zalewski posted a cautionary tale for software engineers: an amusing historical overview of all the security problems with HTTP cookies, including an impressive collection of issues we won't be able to fix. It's pretty amazing that modern web commerce relies on a mechanism so hacky that it does not even have a proper specification."
The main thing is ... (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Delicious strawberry flavored death!
Re: (Score:2)
Aww shoot... (Score:2)
Darn...and here I thought this was going to be an article on the OSI Network model...
http://en.wikipedia.org/wiki/OSI_model [wikipedia.org]
Re:Aww shoot... (Score:5, Insightful)
Re: (Score:2)
It has been implemented in IS-IS [wikipedia.org], used in some service provider networks.
Re: (Score:2)
That's because it's just a description of network structure, not a protocol in itself. It's only a specification in the sense that it accurately describes how networks must be laid out. It is in fact implemented everywhere. It has to be, or a network connection does not exist. The specific protocols don't matter; the OSI model doesn't care about them beyond describing which layer they fall into.
Layer 1 is your physical connection - any medium over which data is transmitted (coax, microwave, fiber,
Re: (Score:2)
Re: (Score:2)
Re:Aww shoot... (Score:4, Informative)
Ah, the OSI model [sic, recte suite], [...] having never been implemented!
Saying that the full OSI suite has never been implemented is like saying that nobody implements the full set of standards-track RFCs -- which is true, since some standards-track RFCs are mis-designed or even contradict other standards-track RFCs.
Large parts of the OSI suite have been implemented, and some are still running today. For example, IS-IS [wikipedia.org] over CLNP [wikipedia.org] is commonly used for routing IP and IPv6 traffic on operators' backbones. (I was about to mention LDAP and X.509 before I realised they are not necessarily the best-designed parts of OSI.)
Where you are right, though, is that large parts of OSI are morasses of complexity that have only been implemented due to government mandate and have since been rightly abandoned.
More restrictive spec could have averted this (Score:5, Interesting)
I still think allowing cookies to span more than one distinct domain was a mistake. If we had avoided that from the beginning, cookie scope implementations would be dead simple and not much functionality would be lost on the server side. Also, JavaScript cookie manipulation is something we could easily lose, to the benefit of every user, web developer and server admin. I postulate there are very few legitimate uses for document.cookie.
Re: (Score:3, Interesting)
It was created to allow a site to dispatch some functionality within a session to dedicated computers, let's say a catalog server, a shopping cart server and a cashier server.
Re: (Score:2)
This functionality could be achieved with a very simple rule: for a given hostname, the cookie can be accessed by any hostname that is LONGER than (i.e. ends with) the hostname it was set for. So if "example.co.uk" sets a cookie, "foobar.example.co.uk" can access it. A website can make use of this simply by directing people to the core web site. Note that even this can be abused: a registrar might set up "co.uk" and set a cookie that every domain in "co.uk" can access.
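The rule described above amounts to a suffix check. Here's a minimal sketch (simplified: real browsers additionally consult a public-suffix list precisely because of the registrar abuse noted above):

```python
def can_access_cookie(requesting_host: str, cookie_domain: str) -> bool:
    """A host may read the cookie if it equals the cookie's domain,
    or is a subdomain of it (a longer, dot-separated suffix match)."""
    return (requesting_host == cookie_domain
            or requesting_host.endswith("." + cookie_domain))

# foobar.example.co.uk can read a cookie set for example.co.uk:
print(can_access_cookie("foobar.example.co.uk", "example.co.uk"))  # True
# ...but with this rule alone, a cookie set for "co.uk" leaks everywhere:
print(can_access_cookie("victim.co.uk", "co.uk"))  # True
```

The second call shows why the bare rule isn't enough on its own: nothing in the suffix match distinguishes a registrable domain from a public registry suffix.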
Re: (Score:2)
Then describe those "other means".
Re:More restrictive spec could have averted this (Score:4, Insightful)
First, this happens only rarely in practice. Most of the time these kinds of ID handovers are done by huge commercial sites such as eBay, and even they have cleaned up their URL mess considerably in recent years. Nowadays, big sites tend to have multiple transparent front-end servers that handle incoming connections to a single domain. Using subdomains as a means of differentiating separate machines is not all that common anymore, especially when they exchange lots of data.
But if you really need this functionality, you can just as easily pass a one-time auth token by URL and create another cookie on the second server. There is really no trickery involved here. And if you need to make it very very secure, you can use OAuth, but that would be overkill for the scenarios we're talking about here.
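The one-time-token handoff described above can be sketched roughly like this (all names here are illustrative, not a real API; in practice the token store would be shared between the two hosts, e.g. a database):

```python
import secrets
import time

# Hypothetical shared token store; in real life this lives somewhere
# both servers can reach, not in one process's memory.
_handoff_tokens = {}

def issue_handoff_token(session_id: str, ttl: float = 60.0) -> str:
    """Server A: mint a single-use token tied to the session, to be
    passed to server B in a redirect URL."""
    token = secrets.token_urlsafe(32)
    _handoff_tokens[token] = (session_id, time.time() + ttl)
    return token

def redeem_handoff_token(token: str):
    """Server B: redeem the token exactly once (popping it defeats
    naive replay), then set its own cookie for the session."""
    entry = _handoff_tokens.pop(token, None)
    if entry is None:
        return None  # unknown or already used
    session_id, expires = entry
    return session_id if time.time() < expires else None
```

The pop-on-redeem and short TTL are the whole trick: a replayed or stale URL yields nothing.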
Re: (Score:2)
This won't work if the user's connection breaks or times out mid-transfer and he tries to restart the transaction. And it is susceptible to replay attacks.
Re: (Score:2)
With that restriction, you'd have had to log in to tech.slashdot.org, linux.slashdot.org, slashdot.org, and so on all separately. As it is, you have to log into slashdot.org and {some subdomain}.slashdot.org separately.
A better solution might be to put cookie policies in either a well-known location on the web server (as with robots.txt) or in DNS records (as with SPF). That way, domains like slashdot.org could say 'cookies are shared between all subdomains' while domains like .com would have no entry.
Re: (Score:2)
What the world needs is more people who actually do things instead of sniping cheap shots from the sidelines
And, if I may add, "How do you know that software won't form the base for an open standard some day?".
Documents take time and cost money. Free reference implementations are priceless.
Re: (Score:2)
That's an excellent point, thank you! Apart from the fact that I don't really have the influence to publish some lofty new protocol standard (and make people care about it at the same time), I absolutely agree with you that things should be tested in real life first. There are many examples where I believe the committee-designed version was horrible compared to something already in practical use - such as XML Schema versus Relax NG.
Case in point, I was totally proud of the protocol I used for the prototype
Re: (Score:2)
Not planned (Score:3, Insightful)
I'd be happy if we could declare valid cookies (Score:2)
On a domain.
Like the crossdomain.xml or robots.txt files. "Cookies on this site must follow this pattern." Or somesuch.
Most of the rest, I can cope with. Cookie pollution from various forms of injection, not so much.
Re: (Score:2)
You could actually implement that in your server. Throw away any cookies you are not interested in.
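The server-side filtering suggested above can be sketched as a tiny bit of middleware (a hedged sketch: the whitelist and environ shape are illustrative, here in WSGI terms):

```python
from http.cookies import SimpleCookie

ALLOWED = {"session_id", "csrf_token"}  # illustrative whitelist

def filter_cookies(environ: dict) -> dict:
    """Drop any cookie whose name is not whitelisted, before the
    application ever sees it. Injected or polluted cookies simply vanish."""
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    kept = [f"{name}={morsel.value}"
            for name, morsel in cookie.items() if name in ALLOWED]
    environ["HTTP_COOKIE"] = "; ".join(kept)
    return environ

env = {"HTTP_COOKIE": "session_id=abc; injected=evil"}
print(filter_cookies(env)["HTTP_COOKIE"])  # session_id=abc
```

This doesn't stop the cookies from being sent over the wire, of course; it only stops your application from trusting them.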
Why the hate.... (Score:5, Informative)
Why go hatin' on this particular protocol?
Most of them are just nuckin futs:
* FTP: needs two connections. Commands and responses and data are not synced in any way. No way to get a reliable list of files. No standard file listing format. No way to tell what files need ASCII and which need BIN mode. And probably more fubarskis.
* Telnet: The original handshake protocol is basically foobar-- the handshakes can go on forever. Several RFC patches did not help much. Basically the clients have to kinda cut off negotiations at some point and just guess what the other end can and will do.
* SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?
Re:Why the hate.... (Score:4, Insightful)
Telnet dates to 1969. FTP dates to 1971. SMTP dates to 1982. HTTP dates to 1991, with the current state of affairs mostly dictated during the late 1990s.
It's excusable that Telnet, FTP and even SMTP have their issues. They were among the very first attempts ever at implementing networking protocols. Of course mistakes were going to be made. That's expected when doing highly complex stuff that has absolutely never been done before.
HTTP has no such excuse. It was initially developed two to three decades after Telnet and FTP. That's 20 to 30 years of mistakes, accumulated knowledge and research that its designers and implementors could have learned from.
And it did learn... (Score:3, Interesting)
It didn't make mistakes that closely resemble those in Telnet, tftp, ftp, smtp, it made what may be considered completely distinct 'mistakes' in retrospect.
However, if you confine the scope of HTTP use to what it was intended for, it holds up pretty well. It was intended to serve up material that would ultimately manifest on an endpoint as a static document. Considerations for some server-side programmatic content tweaking based on client-given cues were baked in to give better coordination between client and s
Re: (Score:3, Insightful)
HTTP works perfectly fine for the purpose for which it was made: downloading a text file from a server. How were the developers supposed to know that someone was going to run a shop over it?
HTTP and the Web grew organically. That evolution has given it its own version of wisdom teeth.
Re: (Score:2)
Take a look at Session Initiation Protocol (SIP) RFC 3261 if you really want to see crazy.
Re: (Score:2)
Ah, but take a look at RFC 2543. As long as the net-heads had the reins, SIP was still sane. Once the telco actors got in the game, SIP went to hell faster than you could compress the word "idiocy" in your SIGCOMP VM with the counterpart-provided bytecode decomp implementation.
Re:Why the hate.... (Score:4, Informative)
These protocols were designed for a different world:
1) They were experiments with new technology. They had lots of options because no one was sure what would be useful. Newer protocols are simpler because we now know what turned out to be the most useful combination. And the ssh startup isn't that much better than telnet. Do a verbose connection sometime.
2) In those days the world was pretty evenly split between 7-bit ASCII, 8-bit ASCII and EBCDIC, with some even odder stuff thrown in. They naturally wanted to exchange data. These days protocols can assume that the world is all ASCII (or Unicode embedded in ASCII, more or less) and full duplex. It's up to the system to convert if it has to. They also didn't have to worry about NAT or firewalls. Everyone sane believed that security was the responsibility of end systems and that firewalls provide only the illusion of security (something that is still true), and that address space issues would be fixed by revving the underlying protocol to have larger addresses (which should have been finished 10 years ago).
3) A combination of patents and US export controls prevented using encryption and encryption-based signing right at the point where the key protocols were being designed. The US has ultimately paid a very high price for its patent and export control policies. When you're designing an international network, you can't use protocols that depend upon technologies with the restrictions we had on encryption at that time. It's not like protocol designers didn't realize the problem. There were requirements that all protocols had to implement encryption. But none of them actually did, because no one could come up with approaches that would work in the open-source, international environment of the Internet design process. So the base protocols don't include any authentication. That is bolted on at the application layer, and to this day the only really interoperable approach is passwords in the clear. The one major exception is SSL, and the SSL certificate process is broken*. Fortunately, these days passwords in the clear are normally on top of either SSL or SSH. We're only now starting to secure DNS, and we haven't even started SMTP.
---------------
*How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of the cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.
Re: (Score:2)
*How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of the cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.
I would just like to add: regardless, you are placing your trust in a central authority. That authority can be subverted with ease, when the will to do so emerges.
Re: (Score:2)
Don't forget the horrible hacks on SMTP for lines that consist of just a period "."
Also, if you want to see a brand new bad protocol, look at XMPP.
I think the all time worst protocol I've seen is SyncML. vCards wrapped in XML [sun.com], with embedded plaintext passwords.
Re: (Score:2)
>Also, if you want to see a brand new bad protocol, look at XMPP.
Maybe, but still better than SIP/SIMPLE
Re: (Score:3, Insightful)
* SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?
There's nothing in the SMTP protocol stopping you from using 'From ' at the start of a line. The flaw lies in the mbox storage format, in improper implementations[*], and in mail clients that compensate for that without even giving the user a choice. Blaming it on SMTP is plain wrong.
[*]: RFC4155 gives some advice on this, and calls the culprits "overly liberal parsers".
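The mbox quirk being blamed on SMTP here is easy to demonstrate: classic "mboxo" writers quote any body line starting with "From " so it can't be mistaken for a new message separator. A simplified sketch of that convention:

```python
def mboxo_escape(body: str) -> str:
    """Prefix '>' to body lines that would look like an mbox
    'From ' message separator when the message is stored."""
    return "\n".join(
        ">" + line if line.startswith("From ") else line
        for line in body.split("\n")
    )

print(mboxo_escape("From the start, this was a hack."))
# >From the start, this was a hack.
```

The irreversibility is the real sin: a line that legitimately began with ">From " is indistinguishable from an escaped one after storage, which is why stray ">From" lines still show up in people's mail decades later.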
Re: (Score:2)
FTP: needs two connections.
Which makes a lot of sense if you want to be able to send commands while a file transfer is going on.
SMTP: You can't send a line with the word "From" as the first word?
Yes you can. It's only the Berkeley implementation of SMTP that cannot.
Re: (Score:3, Informative)
> * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?
Huh? The first blank line tells the SMTP parser to stop treating lines as headers because the body has begun. Far from perfect, but hey. Anyway, I just sent myself an email with "From: foo@foo.org" as the first line of the body. Needless to say, it worked.
Re: (Score:3, Interesting)
Interestingly, "mbox" format is another one of those standards without a standard, just like cookies.
It started basically as a storage convention for the mail command. Then, other programs started using it. Some of those programs were written to depend on certain information appearing on the line after the "From " and others didn't.
When I contributed to KMail 2 back in the day, one of my patches was to change what KMail put into the "From " lines of mailbox files because mutt or pine users (forget which) wer
The way the web works in general is bizarre (Score:5, Insightful)
Let's see:
1. IP is a stateless protocol, that's inconvenient for some things, so
2. We build TCP on it to make it stateful and bidirectional.
3. On top of TCP, we build HTTP, which is stateless and unidirectional.
4. But whoops, that's inconvenient. We graft state back into it with cookies. Still unidirectional though.
5. The unidirectional part sucks, so various hacks are added to make it sorta bidirectional like autorefresh, culminating with AJAX.
Who knows what else we'll end up adding to this pile.
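The state graft in step 4 is as crude as it sounds: the server hands the client an opaque token, and the client echoes it back on every request, keying into server-side state. A minimal sketch (the session-store and cookie names are made up for illustration):

```python
import secrets

_sessions = {}  # illustrative in-memory session store

def handle_request(cookie_header: str):
    """Each HTTP request stands alone; the only continuity is the 'sid'
    cookie echoed back by the client, which keys server-side state."""
    session_id = None
    for part in cookie_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "sid":
            session_id = value
    if session_id not in _sessions:
        # No (valid) cookie: mint a new session, as if via Set-Cookie.
        session_id = secrets.token_hex(16)
        _sessions[session_id] = {"hits": 0}
    _sessions[session_id]["hits"] += 1
    return session_id, _sessions[session_id]

sid, state = handle_request("")              # first visit: new session
sid2, state2 = handle_request(f"sid={sid}")  # client echoes cookie back
print(state2["hits"])  # 2
```

Note that nothing ties the token to a connection, an IP, or anything else: whoever presents the cookie *is* the session, which is the root of a whole family of cookie-theft attacks.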
Not completely nonsensical... (Score:5, Informative)
1. Sure
2. stateful, stream-oriented, *and* reliable
3. HTTP was designed around a stateless datagram model but needed reliability, so TCP got chosen for lack of a better option. SCTP, had it existed, might have been a better fit, but at the time the stateful stream aspect of TCP was forgiven since it could largely be ignored, while reliability over UDP was not so trivial.
4. More critically, the cookie mechanism strives to add stateful aspects that cross connections. This is something infeasible with TCP. Simplest example, HTTP 'state' strives to survive events like client IP changes, server failover, client sleeping for a few hours, or just generally allowing the client to disconnect and reduce server load. TCP state can survive none of those.
5. Indeed, at least AJAX enables somewhat sane masking of this, but the one-response-per-request character of the protocol means a lot of things cannot be done efficiently. If HTTP had allowed arbitrary server-initiated responses for the duration of a persistent HTTP connection, that would have greatly alleviated the inefficiencies that AJAX methods strive to mask.
Re: (Score:2)
6. Hence WebSockets?
Re: (Score:2)
I can grasp the point of server-side events (though I'm not sure it's a whole lot better than just having a vanilla HTTP request 'pending' from the client at all times to afford the server a way in if needed).
I really don't get what WebSockets buys me over any generic TCP stream. The biggest thing touted commonly is 'hey, it gets through overzealous firewalls that only allow port 80', which I think is as stupid as when SOAP advocates made the point. If any of these become sufficiently pervasive, then you'l
Re: (Score:2)
Re: (Score:2)
Isn't it great that there are people trying to create an app platform out of this shit?
Re: (Score:2)
Thanks for sharing! (Score:2)
alternatives ? (Score:2)
Most of the crap we surround ourselves with (cookies, MIME, Windows and Office, etc.) are still there because they are there and the alternatives aren't.
What is the alternative to using cookies, really? Almost every framework for web-based development has session support that largely relies on cookies. Give me something more secure that works as easily and I will be using it right away.
You think HTTP is bad? (Score:2)
SNMP is a nightmare. There was a doc out there that used SNMP as an exemplar of "how not to write a protocol."
It's easy to forget, but these protocols were designed back in the day when there wasn't a lot of ram, bandwidth, or CPU.
Most of the problems with everything have been well-discussed. You can dig into the past to see, but interoperability with existing implementations is always the blocking factor.
Heck, everyone knew the problems with ActiveX when it was announced...but that didn't stop MS. Same wit
Re: (Score:3, Informative)
SNMP is in serious need of retirement! Even XML is better (and that's saying a LOT!) The constraints that made it seem like a good idea simply don't exist anymore anywhere.
See also BEEP, and syslog over BEEP (which has never, to my knowledge, EVER been supported by anything). A protocol that didn't realize we already HAVE multiplexing built in to the communications channel, and so re-implemented it in the most baroque way possible.
ActiveX rises to new heights of bogosity. It's not just poor implementat
Re: (Score:3, Insightful)
RTFA. That's exactly what happened with HTTP. "It works". In the world of 1990. And then they started to "fix" it to keep up.
Re:Does it work ? (Score:5, Insightful)
Re: (Score:2, Insightful)
Web developers would have the time to think about important things like that if they weren't spending all of their time trying to prevent data loss caused by MySQL or the NoSQL database du jour, horrible server-side performance due to PHP, and horrible client-side performance due to JavaScript, all while trying to avoid the numerous browser incompatibilities.
Although the tools and technologies they're using are complete shit, it sure doesn't help that they generally don't understand even basic software developm
Re:Does it work ? (Score:4, Informative)
It would help if you qualified or explained a single one of these blanket assertions you've made.
What data loss is caused by MySQL? And while perhaps a NoSQL database "du jour" causes data loss, are you suggesting that the major ones like Couch, Cassandra, Mongo, etc all have serious data loss issues?
If so, specifics or it didn't happen. File a bug report, at the very least.
I don't have much good to say about PHP, but didn't someone recently roll out a compiler for it? I can't imagine PHP performance is a significant bottleneck, especially as people run successful websites written in everything from Java to Ruby. And what would you suggest in its place, C++? Gee, thanks, now we can spend all our time focusing on memory leaks and buffer overflows instead.
It's possible it's the wrong language for the job, but if you want to make that case, you've got to suggest an alternative.
Similarly, for JavaScript -- say what? Chrome compiles JavaScript to native code, and Firefox just got faster than Chrome. Both of them are now more than competitive with languages typically used for server-side development, where you'd expect performance to be a much bigger bottleneck. Indeed, there's at least one modern server-side JavaScript framework, written for V8, Chrome's JavaScript engine.
And again, is a potential alternative actually better for a given problem? Again, specific examples. There are applications which actually have performance needs which suggest they should be native apps, and people generally don't try those as web apps. Then there's a very, very thin border where a web app makes sense on the Web, but would be faster native -- but often, it's the design that's shite, not the technologies themselves.
If you ignore IE, browser compatibilities aren't so bad. Even if you include IE, are they significantly worse than OS incompatibilities if you decided to go native?
Finally, MVC. Exactly how is this "bastardized"? How would you do it differently, if you were writing a web framework? At least that's a specific example -- but you mentioned "software development and programming theories," plural, and you've only mentioned one.
It's possible you've got some good points, but you haven't backed them up at all.
Re: (Score:2)
Yes, Facebook runs PHP compiled to C++ using HipHop [github.com].
Re:Does it work ? (Score:5, Insightful)
By the time you add real garbage collection to C++, you're rapidly approaching a point where you may as well use Java. Anything short of that, like auto_ptr, is just a band-aid -- you still have plenty of ways to leak memory, and plenty of potential for buffer overflows. Contrast this to a sane, modern language, where these problems cannot exist.
Again, what would you suggest? If you're going to continue dismissing things I propose as crap without offering anything useful in its place, it's really not worth talking to you. If C++ is actually what you're suggesting, say so, and defend it.
Re: (Score:3, Insightful)
Your way of thinking is nice, but it is exactly this attitude that gets developers fired (or their bosses broke if they share that attitude and don't fire you, in which case an inferior, insecure competing product will dominate) for thinking too much instead of getting the product out. That's why we are up to our necks in inferior goods, protocols being just one example. Not even the death penalty (e.g. for melamine in Chinese milk) seems to stop this.
Re:Does it work ? (Score:5, Insightful)
So in other words, you never bring anything into production status.
Look, its really quite simple.
HTTP was a presentation mechanism, designed to deliver content, dependent on non-persistent connections, where the initial and each subsequent request had to supply all information necessary to fulfill said request. Even if you "log in" to your account, every request stands alone.
There is no persistent connection. There is no reliable persistent knowledge on the server side that can be positively attributed to any given client. Clients are like motorists at the drive-up window of a burger stand, not well-known patrons at a restaurant.
Given that scenario, it was inevitable that cookies would be developed, and employed.
So unless you were willing to hold off deployment of e-commerce until you had totally rewritten HTTP into a persistent-connection-based protocol and totally replaced the browser as the client-side tool, any grandstanding about how carefully and methodically you work is just grandiose bravado.
The only tools at hand were HTTP, web servers and browsers. It's still largely the same today. There was no other way besides cookies of some sort. You may argue about their structure, their content or whatever, but cookies are all that is on the menu.
Re: (Score:3, Insightful)
I can't imagine what the problem might be... maybe we need a few more layers to make it perfect!
Re: (Score:3, Funny)
Thank you, Captain Hindsight! What a complete failure the designers of HTTP were. They should've done it so much different! :-)
Re: (Score:3, Insightful)
It is this type of thinking that separates a carpenter from an engineer.
Re: (Score:2)
"Working" is different from "working well". (Score:5, Insightful)
"Working" is measured over a very wide spectrum. On one hand, we have "broken", and on the other we have "working perfectly". The web is far, far closer to the "broken" side of the spectrum than it ever has been to the "working perfectly" side.
Put simply, almost everything about the web is one filthy hack upon another. It's a huge stack of shitty "extensions" that were often made with little thought, so it's no wonder web development is so horrible today.
HTTP has been repurposed far more than it should have been. Its lack of statefulness has resulted in horrible hacks like cookies and AJAX. HTTP makes caching far harder than it should be. SSL and TLS are mighty awful hacks. And those are just a few of its problems!
HTML is a mess, and HTML5 is just going to make the situation worse. Even after 20 years, layout is still a huge hassle. CSS tries to bring in concepts from the publishing world, but they're not at all what we need for web layout, and thus everyone is unhappy.
A lot of people will claim otherwise, and they're wrong, but JavaScript is a fucking horrible scripting language. It's even worse for writing anything significant. And no, it's absolutely nothing like Scheme (some JavaScript advocate always makes this stupid claim whenever the topic of JavaScript's horrid nature comes up).
PHP is one of the few popular languages that can rival JavaScript in terms of being absolutely shitty. Then there are other server-side shenanigans like the NoSQL movement, which arose solely because there are a lot of web "developers" who don't know how to use relational databases properly. I've seriously dealt with such "developers" and many of them didn't even know what indexes are!
Most web browsers themselves are quite shitty. It has gotten better recently, but they still use huge amounts of RAM for the relatively simple services they provide.
The only people involved with some sort of web-related software development who aren't absolute fuck-ups are those working on HTTP servers like Apache HTTPd, nginx, and lighttpd. But now we're seeing crap like Mongrel and Mongrel2 arising in this area, so maybe it's only a matter of time before the sensible developers here move on.
So just because the web is "sort of broken", rather than "completely fucking broken", it doesn't mean that it's "working".
Re:"Working" is different from "working well". (Score:5, Insightful)
HTML is a mess
Unquestionably, yes. And yet it has nevertheless become the most pervasive, flexible, universal communication medium in the history of the world, so it's a glorious mess. It is questionable whether a better-specified system would have succeeded in this, because it would have been too locked down into its designer's original intent. It is precisely the hackability of HTML/http that makes it both fucking awful and fucking brilliant.
Re:"Working" is different from "working well". (Score:4, Insightful)
I've been noticing technology trending towards biological models, either intentionally or otherwise. Genetic algorithms. Adaptable AIs. Computer viruses, even.
The rise of the internet and the web models this, too. Much like our own DNA, there's a lot of redundancy, legacy functionality that borders on harmful, and amazing features that are the result of (tech/biological) hacks upon hacks, but they survived not because they were necessarily the best, but because they allowed earlier iterations (ancestors/early web) to be more flexible and adaptable, so it flourished.
Re: (Score:2)
PHP isn't as shitty as people want to make it out to be.
It's certainly an inconsistent language, but arguments being in weird orders and some functions having _ and some not doesn't really make a language 'shitty'. Especially now that it's a real OOP language; if you actually use that part, it's pretty consistent.
And thanks to HTTP's shittiness and web servers being bitches, it often results in PHP not being stateful either, but that's not really PHP's fault. None of the 'cgi' languages are stateful, and eve
Re: (Score:2)
Re: (Score:2)
He basically said "everyone apart from me sucks".
Re: (Score:3, Insightful)
Yes, and actually I agree with jabberw0k. There's simply no call for that kind of language; it added nothing to the points being made, and in fact distracted the poster from what had been a reasonably cogent argument up until that point.
If you reread the AC post, he/she makes several good points with some substance in the first four paragraphs - and then just lets rip with the profanity in the fifth paragraph, which, coincidentally, is where the entire post dissolves into a bunch of assertions with little t
Re: (Score:2)
That's the "if it isn't broke, don't fix it" mindset for you. Nothing ever improves.
Re: (Score:2)
I don't know about Australia, but in the US, that's simply illegal (presuming that running with the loss of a fuel cap is a "safety" item and it's on a certified aircraft flown by a certified pilot). When you have to break the law to do stupid, it's a special breed of stupid. And a pilot should know such things...
Re:Analogy (Score:5, Funny)
> HTTP is like a manual lawn mower.
No it isn't. A manual lawnmower is well-designed. The Web is like a lawnmower built by Rube Goldberg out of dozens of pairs of scissors, lots of string, some boards and a child's wagon, propelled by a large dog and powered by the wagging of his tail (the cookies are to get him to wag it). It's now had a clippings bag and a fertilizer cart added following the same design principles. An automatic dandelion remover, a dethatcher, and an aerator are coming soon (and several more dogs).
Re:Analogy (Score:4, Funny)
Re:Analogy (Score:5, Funny)
am I the only one who now wants to see that built/build it myself?
Re: (Score:2)
God I hope so!
Re: (Score:2)
I don't think I'd mow my lawn with it though
That's the beauty of it.. you don't have to, because the dog does everything! I want one!
Re: (Score:3, Insightful)
Rube Goldberg? Quite the opposite. The HTTP protocol is very simple, eminently debuggable, plus extensible both ways.
It's the implementations in browsers and servers that suck.
Now *SOAP*, layered on top of HTTP, is truly a Rube Goldberg invention with no redeeming qualities whatsoever.
Re: (Score:2)
The HTTP protocol is very simple, eminently debuggable, plus extensible both ways.
Simple, yes, but I'd say it's a little too simple for its own good. For example, I find it rather ridiculous that in 2010 I still can't reliably continue an interrupted download: without any form of checksum, the browser might just append new data to a file containing garbage and not even know it.
Re: (Score:2)
You have both a timestamp and a byte range, so downloads can reliably be continued as long as you can trust the server.
If the file contains garbage, that's surely the fault of the client and not the protocol?
Nothing stops the client from generating and saving a checksum for every kB received.
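A minimal sketch of what that looks like (names are illustrative): the client resumes with a Range header and verifies the reassembled file against a digest obtained out of band, since the protocol itself carries no payload checksum.

```python
import hashlib

def range_header(resume_from: int) -> dict:
    """Request header asking the server for everything from byte
    `resume_from` onward; a 206 response means it was honoured."""
    return {"Range": f"bytes={resume_from}-"}

def verify_append(existing: bytes, appended: bytes, expected_sha256: str) -> bool:
    """HTTP gives no end-to-end checksum, so the client must check the
    reassembled file itself against an out-of-band digest."""
    return hashlib.sha256(existing + appended).hexdigest() == expected_sha256

full = b"hello world"
digest = hashlib.sha256(full).hexdigest()
print(range_header(6))                              # {'Range': 'bytes=6-'}
print(verify_append(b"hello ", b"world", digest))   # True
print(verify_append(b"hello ", b"w0rld", digest))   # False
```

Note the caveat in the parent comment still stands: if the partial file on disk is already garbage, the Range mechanism happily extends the garbage, and only the final checksum comparison catches it.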
Re: (Score:2)
Now *SOAP*, layered on top of HTTP, is truly a Rube Goldberg invention with no redeeming qualities whatsoever.
Yet a lot of the time it's the only thing that makes sense from a business perspective: more elegant solutions often require a lot more work, while the majority of your systems can somewhat easily be made to work with SOAP. Not trying to defend it; it's still pretty ugly, but connecting different systems using SOAP is often faster than using something elegant, and the boss doesn't care about "elegant" (I'm sure there are exceptions and I'd love to work for someone like that, but most don't).
Re:Analogy (Score:5, Insightful)
The only reason the implementations in browsers suck is because HTTP is such a hack-job of a protocol (it wasn't originally, but then it was not originally designed to do what it does today). The browsers are left dealing with issues which the HTTP "specification" (which isn't even fully documented, btw) either completely ignores or recommends practices that are completely unrealistic.
One example from the article: the HTTP spec recommends that servers accept a minimum of 80 kB of request headers (20 cookies per user, 4 kB per cookie). However, most web servers limit request headers to 8 kB (Apache) or 16 kB (IIS) in order to prevent denial-of-service attacks. It is very important that they limit the headers - not doing so leaves them wide open to attack. The HTTP recommendations are completely unreasonable in this regard and fly in the face of good security practice. They are also completely ignored in this and many other cases, because they are so unreasonable.
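For the Apache side of that mismatch, the cap comes from httpd's stock core directives; these are the compiled-in defaults, shown here explicitly for illustration:

```apache
# Apache httpd defaults for request-header limits (core module).
# 8190 bytes per header field is roughly a tenth of the ~80 kB
# the cookie spec implies servers should accept (20 cookies x 4 kB).
LimitRequestFieldSize 8190
LimitRequestFields    100
LimitRequestLine      8190
```

Raising these to spec-recommended levels is possible, but as the parent notes, doing so mostly just enlarges the attack surface.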
If the protocol were simple, clear, well designed, and well defined then the browser implementations wouldn't have to suck. It's HTTP that has caused this problem, not the other way around.
It was a very limited protocol that became way too popular, and now we're stuck with a bunch of hacks to get it to work with modern web technology.
Re: (Score:3, Interesting)
Part of the problem is historical. Tim B-L wanted to make a WYSIWYG viewer system. Back in the day when it was invented, it was dangerous. Dangerous because it was an independent, open API set that worked wherever a browser worked. That flew in the face of tons of proprietary software. It was a transport-irrelevant protocol set that took the best of different coding schemes and made it work. Like most things invented by a single (or very few) person(s), it was a work of art. But it was state of the art near
Re:Analogy (Score:5, Funny)
It would appear that you do not know what a manual lawnmower is.
Re: (Score:2)
LOL. Got me.
Re: (Score:2)
I think the GGP means a "real mower".
By contrast though, an "automatic" lawnmower to me is like a deadly roomba. So what is a riding lawnmower, a self propelled lawnmower, a powered lawnmower, and do we make distinctions based on fuel/power type?
Re: (Score:2)
True, but neither did the poster originally making the analogy....
Re: (Score:2)
I would have said it more like a baby stroller which later had to do duty as a lawnmower and a vacuum cleaner while still maintaining full backwards compatibility and increasing capacity up to 200 babies.
Re: (Score:2)
But then you run into problems if sessions have to move between different servers, because no single computer answers your requests; instead it's a large server farm, maybe geographically distributed worldwide.
Re: (Score:3, Insightful)
But these servers need to communicate anyway to maintain a "session" in any meaningful sense, so they may as well send the associated crypto key along with the rest of the session information.
Re: (Score:2)
Not true. The best systems are idempotent on the server side, storing any state associated with a single "session" in the cookie itself. This is precisely so that these different servers only have to communicate permanent state, if that.
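The "state lives in the cookie" approach above is usually done by signing the cookie payload, so every server in the farm can trust it while sharing nothing but a secret key. A minimal sketch, with made-up helper names (real deployments would use a framework's signed-cookie support rather than rolling their own):

```python
import base64
import hashlib
import hmac

def sign_cookie(payload, key):
    """Append an HMAC tag to `payload`; any server holding `key`
    can later verify the cookie without per-session server state."""
    tag = hmac.new(key, payload.encode(), hashlib.sha256).digest()
    return payload + "." + base64.urlsafe_b64encode(tag).decode()

def verify_cookie(cookie, key):
    """Return the payload if the tag checks out, else None."""
    payload, _, _tag = cookie.rpartition(".")
    expected = sign_cookie(payload, key)
    return payload if hmac.compare_digest(cookie, expected) else None

key = b"shared-farm-secret"
cookie = sign_cookie("user=alice&cart=3", key)
assert verify_cookie(cookie, key) == "user=alice&cart=3"
assert verify_cookie(cookie + "x", key) is None  # tampering detected
```

Note the signature only prevents tampering; the payload itself is readable by the client, so anything secret still has to be encrypted or kept server-side.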
Re: (Score:2)
Re: (Score:2)
You need a limitation on space.
Which localStorage has, as I understand the spec [w3.org]. As it stands now, user agents SHOULD allocate 5 MB per origin until the user raises or lowers it.
You need very carefully defined sharing, so sites can federate.
A cross-document message [w3.org] broker running as a script in one document can very carefully define how other documents can interact with an origin's store.
You probably need enforcement of https.
Right now HTTPS requires a static IP and a certificate. More widespread HTTPS would need more widespread deployment of DNSSEC and either IPv6 or client operating systems supporting SNI.
Re: (Score:2)
And let's replace IPv4 while we're at it!
Re: (Score:2)
Huh? The article is talking about HTTP, not HTML. Those two are not related in any way, Flash is also sent via HTTP.
Re: (Score:2)
Let's not confuse html with http. This is already messy territory as it is
Re: (Score:2)
We're talking about HTTP, not HTML. Just because they are often used together doesn't mean they are the same thing. In fact, they couldn't be more different; one is a communications protocol, the other is a markup language - I hope to god you can figure out which is which from that much.
But HTML is a terrible mess of kludges that doesn't work very well, too. It's just that most people on Slashdot consider it to be superior to Flash, even though it lacks a lot of Flash's basic functionality, and lacks all
Re: (Score:2)
What does that tell you about how bad Flash is that HTML5 is such a massive improvement over it?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And it stores your passwords in cleartext.. or at least with reversible encryption.
Re: (Score:2)