How Not To Design a Protocol

An anonymous reader writes "Google security researcher Michal Zalewski posted a cautionary tale for software engineers: an amusing historical overview of all the security problems with HTTP cookies, including an impressive collection of issues we won't be able to fix. Pretty amazing that modern web commerce uses a mechanism so hacky that it does not even have a proper specification."
  • by Anonymous Coward on Saturday October 30, 2010 @08:49AM (#34072270)

    The whole cookie system should be replaced by a system based on public key cryptography. Replace domain scope by associating sessions with the public keys of the client and the server. Authenticate each chunk of exchanged data by signing a hash value. Browsers could offer throwaway key pairs for temporary sessions and persistent key pairs for preferences and permanent logins.
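    A minimal sketch of the idea: authenticate each chunk of exchanged data with a signature over its hash. Python's standard library has no public-key signing, so an HMAC over a shared throwaway session key stands in for the per-peer key pairs here; a real design would use something like Ed25519. All names and data are illustrative.

    ```python
    # Stand-in for the proposed scheme: a throwaway session key and a
    # keyed hash authenticating every chunk. (HMAC replaces the public-key
    # signature, which stdlib Python cannot produce.)
    import hashlib
    import hmac
    import secrets

    session_key = secrets.token_bytes(32)  # throwaway key for a temporary session

    def sign_chunk(key: bytes, chunk: bytes) -> bytes:
        """Authenticate one chunk of exchanged data by signing its hash."""
        digest = hashlib.sha256(chunk).digest()
        return hmac.new(key, digest, hashlib.sha256).digest()

    def verify_chunk(key: bytes, chunk: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign_chunk(key, chunk), tag)

    tag = sign_chunk(session_key, b"GET /cart")
    print(verify_chunk(session_key, b"GET /cart", tag))   # True
    print(verify_chunk(session_key, b"GET /admin", tag))  # False: tampered chunk
    ```

    Persistent logins would simply use long-lived keys in place of the throwaway one.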

  • by thasmudyan ( 460603 ) <<moc.ufnepo> <ta> <naydumsaht>> on Saturday October 30, 2010 @08:53AM (#34072284)

I still think allowing cookies to span more than one distinct domain was a mistake. If we had avoided that from the beginning, cookie scope implementations would be dead simple, and not much functionality would be lost on the server side. Also, JavaScript cookie manipulation is something we could easily lose, to the benefit of every user, web developer, and server admin. I postulate there are very few legitimate uses for document.cookie.
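    For what it's worth, the HttpOnly cookie attribute gives servers a per-cookie version of exactly this: a cookie the browser withholds from document.cookie. A sketch of emitting such a header with Python's standard library (cookie name, value, and domain are made up):

    ```python
    # Emit a Set-Cookie header for a cookie that script cannot read.
    from http.cookies import SimpleCookie

    c = SimpleCookie()
    c["session"] = "abc123"                 # hypothetical session id
    c["session"]["httponly"] = True         # invisible to document.cookie
    c["session"]["domain"] = "example.com"  # scope stays within one domain

    header = c.output(header="Set-Cookie:")
    print(header)  # prints: Set-Cookie: session=abc123; Domain=example.com; HttpOnly
    ```

    Of course this only opts out one cookie at a time; the commenter's point is that the script-facing API arguably shouldn't exist at all.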

  • by Sique ( 173459 ) on Saturday October 30, 2010 @09:55AM (#34072490) Homepage

    It was created to allow a site to dispatch some functionality within a session to dedicated computers, let's say a catalog server, a shopping cart server and a cashier server.

  • Re:Why the hate.... (Score:3, Interesting)

    by panda ( 10044 ) on Saturday October 30, 2010 @10:50AM (#34072728) Homepage Journal

    Interestingly, "mbox" format is another one of those standards without a standard, just like cookies.

    It started basically as a storage convention for the mail command. Then other programs started using it. Some of those programs were written to depend on certain information appearing after the "From " on that line, and others weren't.

    When I contributed to KMail 2 back in the day, one of my patches changed what KMail put into the "From " lines of mailbox files, because mutt or pine users (I forget which) were complaining that KMail was broken: it wrote "From aaa@aaa" followed by the date with the hour set to midnight. This broke one of the other readers, which expected the sender's email address and an actual timestamp.

    Anyway, long story short, mbox format is plagued by problems similar to, though less serious than, those of cookies. The biggest is that it is not actually a standard, but a convention.
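    For reference, the separator convention the other readers expected looks like this: the literal "From ", the envelope sender, and an asctime-style timestamp, all on one line. A small sketch (the address and date are made up):

    ```python
    # Build a conventional mbox "From " separator line:
    # "From " + envelope sender + asctime-style timestamp.
    import time

    def from_line(sender: str, epoch: float) -> str:
        return "From {} {}".format(sender, time.asctime(time.gmtime(epoch)))

    print(from_line("aaa@aaa", 1288429200.0))
    # prints: From aaa@aaa Sat Oct 30 09:00:00 2010
    ```

    KMail's bug, per the comment, was keeping the "From aaa@aaa" part but zeroing the time, which broke readers that parsed the timestamp.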

  • And it did learn... (Score:3, Interesting)

    by Junta ( 36770 ) on Saturday October 30, 2010 @11:28AM (#34072922)

    It didn't make mistakes that closely resemble those in Telnet, TFTP, FTP, or SMTP; it made what may be considered, in retrospect, completely distinct 'mistakes'.

    However, if you confine the scope of HTTP to what it was intended for, it holds up pretty well. It was intended to serve up material that would ultimately manifest on an endpoint as a static document. Consideration for some server-side programmatic content tweaking based on client-given cues was baked in, to give better coordination between client and server and some other flexibility, but HTTP was not intended to be the engine behind highly interactive applications 'rendered' by the server. HTTP was founded at a time when the internet at large wasn't particularly shy about developing new protocols running over TCP or UDP, and I'm sure the architects of HTTP would have presumed such a usage model would induce a new protocol rather than a mutation of HTTP over time.

    Part of the whole 'REST' philosophy is to get back to the vision that HTTP targets. Strictly speaking, a RESTful implementation is supposed to eschew cookies and server-maintained user sessions entirely. Every currently applicable embodiment of data is supposed to have its own *U*RL, and authentication, when required, is HTTP auth. Thanks to JavaScript, a web application can still avoid popping up the inadequate browser-provided login dialog, and can assemble disparate data on the client side rather than the server side. It doesn't work everywhere, and even when it does it's often kind of mind-warping to get used to, but it does try to use HTTP more in the manner it was architected to be used.
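    On the wire, "authentication is HTTP auth" means the client resends credentials with every request instead of referencing a server-side session. A sketch of building the Basic variant of that header (the user, password, and resource are made up):

    ```python
    # Build an HTTP Basic Authorization header: base64 of "user:password".
    import base64

    def basic_auth_header(user: str, password: str) -> str:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        return f"Authorization: Basic {token}"

    print(basic_auth_header("alice", "s3cret"))
    # Every request to e.g. /orders/42 carries this header; the server keeps
    # no session state, which is what a strictly RESTful design asks for.
    ```

    The well-known downside, and one reason cookies won, is that the browser's own prompt for these credentials is the "inadequate login dialog" the comment mentions.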

  • Re:Analogy (Score:3, Interesting)

    by postbigbang ( 761081 ) on Saturday October 30, 2010 @11:47AM (#34073054)

    Part of the problem is historical. Tim B-L wanted to make a WYSIWYG viewer system. Back in the day when it was invented, it was dangerous. Dangerous because it was an independent, open API set that worked wherever a browser worked. That flew in the face of tons of proprietary software. It was a transport-irrelevant protocol set that took the best of different coding schemes and made it work. Like most things invented by a single person (or very few people), it was a work of art. But it was state of the art nearly two decades ago, and we've come a lonnnnnnng way.

    When HTTP and the W3C were hatching, there were still battles among ARCNET, Token Ring, Ethernet, and something called ATM. Now most of the world uses Ethernet and Ethernet-like communications running TCP/IP, which back then was barely running across the aforementioned networking protocols.

    Lawn mowers, by contrast, were a 2-stroke, then 4-stroke engine with a blade and housing. The need, whacking grass, hasn't changed. Browsers, meanwhile, now do all sorts of things never envisioned in the early 1990s, and we're planning stuff not really imagined in 2000. In 2020, browsers may be gone, or they may be *completely* different tools than they are now. Lawnmowers will still only whack grass.

  • by Anonymous Coward on Saturday October 30, 2010 @07:05PM (#34075914)

    HTTP is a text transfer protocol? Are you serious? With only the Content-Type header it's already leaps and bounds beyond FTP at identifying what the actual content of a file is.

    FTP: Hmm, do I need to set ASCII or BINARY when I want to transfer this file? There's no extension, how do I tell? There's no manifest in the directory, how do I tell? I might have to transfer it a second time if I get it wrong.

    HTTP: HEAD /SomefilewithoutExtension. Oh, it's a text/plain, text/rtf or image/gif file. GET /SomefilewithoutExtension.
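    That exchange can be demonstrated in-process with Python's standard library: a toy server that answers HEAD with a Content-Type, and a client that learns the type without guessing. This is an illustrative sketch, not part of the thread.

    ```python
    # A throwaway HTTP server serving one extensionless resource, plus a
    # HEAD request that discovers its type from the Content-Type header alone.
    import http.client
    import http.server
    import threading

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_HEAD(self):
            # The server declares the type; the client never has to guess
            # ASCII vs. BINARY the way an FTP client would.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()

        def log_message(self, *args):  # silence per-request logging
            pass

    server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    conn.request("HEAD", "/SomefilewithoutExtension")
    resp = conn.getresponse()
    print(resp.getheader("Content-Type"))  # prints: text/plain
    server.shutdown()
    ```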
