How Not To Design a Protocol

An anonymous reader writes "Google security researcher Michal Zalewski posted a cautionary tale for software engineers: an amusing historical overview of all the security problems with HTTP cookies, including an impressive collection of issues we won't be able to fix. Pretty amazing that modern web commerce uses a mechanism so hacky that it does not even have a proper specification."
  • by thomst ( 1640045 ) on Saturday October 30, 2010 @08:40AM (#34072246) Homepage
    ... cookies are delicious!
  • Darn...and here I thought this was going to be an article on the OSI Network model...

    http://en.wikipedia.org/wiki/OSI_model [wikipedia.org]

    • Re:Aww shoot... (Score:5, Insightful)

      by timeOday ( 582209 ) on Saturday October 30, 2010 @10:02AM (#34072516)
      Ah, the OSI model (circa 1978), the polar opposite of Cookies - a spec so glorious, it's still commonly cited - yet so useless it's a 30 year old virgin, having never been implemented!
      • It has been implemented in IS-IS [wikipedia.org], used in some service provider networks.

      • That's because it's just a description of the network structure, not a protocol in itself. It's only a specification in the sense that it accurately describes how networks must be laid out. It is in fact implemented everywhere. It has to be, or a network connection does not exist. The specific protocols don't matter; the OSI model doesn't care about them beyond describing which layer they fall into.

        Layer 1 is your physical connection - any medium over which data is transmitted (coax, microwave, fiber,

      • The OSI model has been implemented, if you can call it that. It's more of a descriptive model of how networking works than anything else. Now, OSI protocols, that's another story. IS-IS has been deployed in ISPs, but stuff like CLNP has never been widely used. I believe there was some talk about moving to it though.
      • Re:Aww shoot... (Score:4, Informative)

        by klapaucjusz ( 1167407 ) on Saturday October 30, 2010 @04:18PM (#34075020) Homepage

        Ah, the OSI model [sic, recte suite], [...] having never been implemented!

        Saying that the full OSI suite has never been implemented is like saying that nobody implements the full set of standard track RFCs -- which is true, since some standard track RFCs are mis-designed or even contradict other standard-track RFCs.

        Large parts of the OSI suite have been implemented, and some are still running today. For example, IS-IS [wikipedia.org] over CLNP [wikipedia.org] is commonly used for routing IP and IPv6 traffic on operators' backbones. (I was about to mention LDAP and X.509 before I realised they are not necessarily the best-designed parts of OSI.)

        Where you are right, though, is that large parts of OSI are morasses of complexity that have only been implemented due to government mandate and have since been rightly abandoned.

  • by thasmudyan ( 460603 ) <thasmudyan@openfu. c o m> on Saturday October 30, 2010 @08:53AM (#34072284)

    I still think allowing cookies to span more than one distinct domain was a mistake. If we had avoided that in the beginning, cookie scope implementations would be dead simple and not much functionality would be lost on the server side. Also, JavaScript cookie manipulation is something we could easily lose for the benefit of every user, web developer, and server admin. I postulate there are very few legitimate uses for document.cookie.
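
    For what it's worth, the HttpOnly attribute already lets a server keep individual cookies out of document.cookie's reach. A minimal standard-library sketch (the port and session value are made up for illustration):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SessionHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # HttpOnly hides the cookie from client-side script;
            # Secure restricts it to HTTPS.
            self.send_header("Set-Cookie",
                             "session=abc123; Path=/; HttpOnly; Secure")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"cookie set\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), SessionHandler).serve_forever()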

    • Re: (Score:3, Interesting)

      by Sique ( 173459 )

      It was created to allow a site to dispatch some functionality within a session to dedicated computers, let's say a catalog server, a shopping cart server and a cashier server.

      • by Skapare ( 16644 )

        This functionality would be achieved with a very simple rule: for a given hostname, the cookie can be accessed by any hostname that ENDS WITH (i.e., is a subdomain of) the hostname it was set for. So if "example.co.uk" sets a cookie, "foobar.example.co.uk" can access it. A website can simply make use of this by directing people to the core web site. Note that even this can be abused. A registrar might set up "co.uk" and set a cookie that every domain in "co.uk" can access.
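
        A rough sketch of that rule in Python (a hypothetical helper, not how any browser actually scopes cookies), including the registrar abuse case:

        def can_access(cookie_domain: str, requesting_host: str) -> bool:
            # The suffix rule described above: a host may read a cookie only
            # if it equals the cookie's domain or is a subdomain of it.
            cookie_domain = cookie_domain.lower().lstrip(".")
            requesting_host = requesting_host.lower()
            return (requesting_host == cookie_domain
                    or requesting_host.endswith("." + cookie_domain))

        assert can_access("example.co.uk", "foobar.example.co.uk")  # allowed
        assert not can_access("example.co.uk", "evil.co.uk")        # denied
        assert can_access("co.uk", "example.co.uk")  # the "co.uk" abuse noted above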

    • With that restriction, you'd have had to log in to tech.slashdot.org, linux.slashdot.org, slashdot.org, and so on all separately. As it is, you have to log into slashdot.org and {some subdomain}.slashdot.org separately.

      A better solution might be to put cookie policies in either a well-known location on the web server (as with robots.txt) or in DNS records (as with SPF). That way, domains like slashdot.org could say 'cookies are shared between all subdomains' while domains like .com would have no entry.
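
      A hypothetical sketch of that idea (the path "/.well-known/cookie-policy" and the "share-subdomains" key are made up for illustration; nothing like this is actually deployed):

      import urllib.request

      def subdomain_sharing_allowed(domain: str) -> bool:
          # A browser could fetch a policy file before honoring a
          # parent-domain cookie; no entry means no sharing, which is the
          # safe default for registry-like domains such as ".com".
          url = f"https://{domain}/.well-known/cookie-policy"
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  policy = resp.read().decode("utf-8", "replace")
          except OSError:
              return False
          return "share-subdomains: yes" in policy.lower()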

    • I think you don't even have to go that far; you just have to make sure that the browser request passes along the path/url/etc of the cookie along with the value. Most of these problems with cookies being clobbered have to do with the application not being able to tell that it's not reading the cookie for its domain, but is instead reading the one for the top-level domain (or the non-secure one, or the non-http-only one, etc). If the application had all the applicable cookies and knew which was which, then
  • Not planned (Score:3, Insightful)

    by Thyamine ( 531612 ) <.thyamine. .at. .ofdragons.com.> on Saturday October 30, 2010 @09:10AM (#34072330) Homepage Journal
    I think it can be hard to plan for this far into the future. Look how much the web has changed, and the things we do now with even just HTML and CSS that people back in the beginning probably would never have even considered doing. You build something for your needs and if it works then you are good. Sometimes you don't want to spend time planning it out for the next 5, 10, 20 years because you assume (usually correctly) that what you are writing will be updated long before then and replaced with something else.
  • On a domain.

    Like the crossdomain.xml or robots.txt files. "Cookies on this site must follow this pattern." Or somesuch.

    Most of the rest, I can cope with. Cookie pollution from various forms of injection, not so much.

    • by Sique ( 173459 )

      You could actually implement that in your server. Throw away any cookies you are not interested in.

  • Why the hate.... (Score:5, Informative)

    by Ancient_Hacker ( 751168 ) on Saturday October 30, 2010 @09:51AM (#34072476)

    Why go hatin' on this particular protocol?

    Most of them are just nuckin futs:

    * FTP: needs two connections. Commands and responses and data are not synced in any way. No way to get a reliable list of files. No standard file listing format. No way to tell what files need ASCII and which need BIN mode. And probably more fubarskis.

    * Telnet: The original handshake protocol is basically foobar-- the handshakes can go on forever. Several RFC patches did not help much. Basically the clients have to kinda cut off negotiations at some point and just guess what the other end can and will do.

    * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

     

    • by Anonymous Coward on Saturday October 30, 2010 @10:14AM (#34072564)

      Telnet dates to 1969. FTP dates to 1971. SMTP dates to 1982. HTTP dates to 1991, with the current state of affairs mostly dictated during the late 1990s.

      It's excusable that Telnet, FTP and even SMTP have their issues. They were among the very first attempts ever at implementing networking protocols. Of course mistakes were going to be made. That's expected when doing highly complex stuff that has absolutely never been done before.

      HTTP has no such excuse. It was initially developed two to three decades after Telnet and FTP. That's 20 to 30 years of mistakes, accumulated knowledge and research that its designers and implementors could have learned from.

      • And it did learn... (Score:3, Interesting)

        by Junta ( 36770 )

        It didn't make mistakes that closely resemble those in Telnet, TFTP, FTP, or SMTP; it made what may be considered completely distinct 'mistakes' in retrospect.

        However, if you confine the scope of HTTP use to what it was intended for, it holds up pretty well. It was intended to serve up material that would ultimately manifest on an endpoint as a static document. Considerations for some server-side programmatic content tweaking based on client-given cues were baked in to give better coordination between client and s

      • Re: (Score:3, Insightful)

        by ultranova ( 717540 )

        HTTP has no such excuse. It was initially developed two to three decades after Telnet and FTP. That's 20 to 30 years of mistakes, accumulated knowledge and research that its designers and implementors could have learned from.

        HTTP works perfectly fine for the purpose for which it was made: downloading a text file from a server. How were the developers supposed to know that someone was going to run a shop over it?

        HTTP and the Web grew organically. That evolution has given it its own version of wisdom teeth.

    • by Bookwyrm ( 3535 )

      Take a look at Session Initiation Protocol (SIP) RFC 3261 if you really want to see crazy.

      • Ah, but take a look at RFC 2543. As long as the net-heads had the reins, SIP was still sane. Once the telco actors got in the game, SIP went to hell faster than you could compress the word "idiocy" in your SIGCOMP VM with the counterpart-provided bytecode decomp implementation.

    • Re:Why the hate.... (Score:4, Informative)

      by hedrick ( 701605 ) on Saturday October 30, 2010 @10:50AM (#34072732)

      These protocols were designed for a different world:

      1) They were experiments with new technology. They had lots of options because no one was sure what would be useful. Newer protocols are simpler because we now know what turned out to be the most useful combination. And the ssh startup isn't that much better than telnet. Do a verbose connection sometime.

      2) In those days the world was pretty evenly split between 7-bit ASCII, 8-bit ASCII and EBCDIC, with some even odder stuff thrown in. They naturally wanted to exchange data. These days protocols can assume that the world is all ASCII (or Unicode embedded in ASCII, more or less) full duplex. It's up to the system to convert if it has to. They also didn't have to worry about NAT or firewalls. Everyone sane believed that security was the responsibility of end systems, that firewalls provide only the illusion of security (something that is still true), and that address space issues would be fixed by revving the underlying protocol to have large addresses (which should have been finished 10 years ago).

      3) A combination of patents and US export controls prevented using encryption and encryption-based signing right at the point where the key protocols were being designed. The US has ultimately paid a very high price for its patent and export control policies. When you're designing an international network, you can't use protocols that depend upon technologies with the restrictions we had on encryption at that time. It's not like protocol designers didn't realize the problem. There were requirements that all protocols had to implement encryption. But none of them actually did, because no one could come up with approaches that would work in the open-source, international environment of the Internet design process. So the base protocols don't include any authentication. That is bolted on at the application layer, and to this day the only really interoperable approach is passwords in the clear. The one major exception is SSL, and the SSL certificate process is broken*. Fortunately, these days passwords in the clear are normally on top of either SSL or SSH. We're only now starting to secure DNS, and we haven't even started SMTP.

      ---------------

      *How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.

      • *How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.

        I would just like to add: regardless, you are placing your trust in a central authority. That authority can be subverted with ease, when the will to do so emerges.

    • Don't forget the horrible hacks on SMTP for lines that consist of just a period "." (sketched below).

      Also, if you want to see a brand new bad protocol, look at XMPP.

      I think the all time worst protocol I've seen is SyncML. vCards wrapped in XML [sun.com], with embedded plaintext passwords.
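
      The dot hack mentioned above is "dot-stuffing" (RFC 5321, section 4.5.2): because a line consisting of a single "." ends the DATA phase, senders prepend an extra dot to any body line that starts with one, and receivers strip it back off. A rough sketch:

      def dot_stuff(body_lines):
          # Sender side: protect lines that begin with ".".
          return ["." + line if line.startswith(".") else line
                  for line in body_lines]

      def dot_unstuff(wire_lines):
          # Receiver side: a lone "." would have ended DATA already, so any
          # remaining leading dot was stuffed and gets removed.
          return [line[1:] if line.startswith(".") else line
                  for line in wire_lines]

      assert dot_stuff(["hello", "."]) == ["hello", ".."]
      assert dot_unstuff(dot_stuff(["hello", "."])) == ["hello", "."]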

    • Re: (Score:3, Insightful)

      by arth1 ( 260657 )

      * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

      There's nothing in the SMTP protocol stopping you from using 'From ' at the start of a line. The flaw is with the mbox storage format, in improper implementations[*], and mail clients who compensate for that without even giving the user a choice. Blaming that on SMTP is plain wrong.

      [*]: RFC4155 gives some advice on this, and calls the culprits "overly liberal parsers".
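
      A sketch of the mbox-side workaround arth1 is alluding to: since a line beginning "From " marks a new message in mbox storage, delivery agents traditionally quote such body lines as ">From " (the "mboxrd" variant also quotes already-quoted lines):

      import re

      def mboxrd_escape(body_lines):
          # Quote "From " lines (and already-quoted ">From " lines) so an
          # mbox reader won't mistake them for message delimiters.
          return [">" + line if re.match(r">*From ", line) else line
                  for line in body_lines]

      assert mboxrd_escape(["From here on, nothing changes"]) == \
             [">From here on, nothing changes"]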

    • FTP: needs two connections.

      Which makes a lot of sense if you want to be able to send commands while a file transfer is going on.

      SMTP: You can't send a line with the word "From" as the first word?

      Yes you can. It's only the Berkeley implementation of SMTP that cannot.

    • Re: (Score:3, Informative)

      by RichiH ( 749257 )

      > * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

      Huh? The first blank line tells SMTP to stop parsing stuff as the body has begun. Far from perfect, but hey. Anyway, I just sent myself an email with "From: foo@foo.org" in the first line of the body. Needless to say, it worked.

  • by vadim_t ( 324782 ) on Saturday October 30, 2010 @10:19AM (#34072586) Homepage

    Let's see:

    1. IP is a stateless protocol, that's inconvenient for some things, so
    2. We build TCP on it to make it stateful and bidirectional.
    3. On top of TCP, we build HTTP, which is stateless and unidirectional.
    4. But whoops, that's inconvenient. We graft state back into it with cookies. Still unidirectional though.
    5. The unidirectional part sucks, so various hacks are added to make it sorta bidirectional like autorefresh, culminating with AJAX.

    Who knows what else we'll end up adding to this pile.

    • by Junta ( 36770 ) on Saturday October 30, 2010 @11:04AM (#34072818)

      1. Sure
      2. stateful, stream-oriented, *and* reliable
      3. HTTP was designed as a stateless datagram model, but wanted reliability, so TCP got chosen for lack of a better option. SCTP, if it had existed, might have been a better model, but at the time the stateful stream aspect of TCP was forgiven since it could largely be ignored, while reliability over UDP was not so trivial.
      4. More critically, the cookie mechanism strives to add stateful aspects that cross connections. This is something infeasible with TCP. Simplest example, HTTP 'state' strives to survive events like client IP changes, server failover, client sleeping for a few hours, or just generally allowing the client to disconnect and reduce server load. TCP state can survive none of those.
      5. Indeed, at least AJAX enables somewhat sane masking of this, but the one-response-per-request character of the protocol means a lot of things cannot be done efficiently. If HTTP had allowed arbitrary server-initiated responses for the duration of a persistent HTTP connection, that would have greatly alleviated the inefficiencies that AJAX methods strive to mask.
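
      To illustrate point 4: the cookie survives because the client simply replays a header on whatever connection it opens next; no TCP state is shared between the two requests. (The hostname and cookie below are illustrative only, so this sketch won't run against a real service.)

      import http.client

      conn1 = http.client.HTTPSConnection("shop.example")
      conn1.request("GET", "/login")
      cookie = conn1.getresponse().getheader("Set-Cookie")  # e.g. "cart=42; Path=/"
      conn1.close()  # all TCP state for this exchange is gone

      # Hours later, possibly from a different IP, over a brand-new connection:
      conn2 = http.client.HTTPSConnection("shop.example")
      conn2.request("GET", "/cart",
                    headers={"Cookie": cookie.split(";")[0] if cookie else ""})
      print(conn2.getresponse().status)  # the server still recognizes the session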

      • 6. Hence WebSockets?

        • by Junta ( 36770 )

          I can grasp the point of server-sent events (though I'm not sure it's a whole lot better than just having a vanilla HTTP request 'pending' from the client at all times to afford the server a way in if needed).

          I really don't get what WebSockets buys me over any generic TCP stream. The biggest thing touted commonly is 'hey, it gets through overzealous firewalls that only allow port 80', which I think is as stupid as when SOAP advocates made the point. If any of these become sufficiently pervasive, then you'l

    • And that's why I don't do web development. Almost everybody's got a back end, and that's where I stay.
    • by bonch ( 38532 )

      Isn't it great that there are people trying to create an app platform out of this shit?

    • The web stack needs a rewrite. Maybe a protocol and 'markup' language that aren't designed for documents. Furthermore, they keep trying to address developers' needs by adding new communications features, but it would really be nice to just have UDP for christ's sake. Perhaps they need to create a couple of transport-level protocols that address the security concerns keeping them from giving scripts access to UDP/TCP.
  • A pretty interesting write up :)
  • Most of the crap we surround ourselves with (cookies, MIME, Windows and Office, etc.) is still there because it is there and the alternatives aren't.

    What is the alternative to using cookies, really? Almost every framework for web-based development has session support that largely relies on cookies. Give me something more secure that works as easily and I will be using it right away.
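
    The usual answer is exactly that framework session support: the server keeps the data and the cookie carries only an unguessable, signed identifier. A minimal sketch with made-up names (SECRET, make_session_cookie), not any particular framework's API:

    import hashlib, hmac, secrets

    SECRET = b"server-side key; never sent to the client"

    def make_session_cookie():
        # The browser only ever sees an opaque id plus a signature.
        sid = secrets.token_urlsafe(16)
        sig = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        return f"session={sid}.{sig}; Path=/; HttpOnly; Secure"

    def verify_session(cookie_value):
        # cookie_value is the part after "session=", i.e. "<sid>.<sig>".
        sid, _, sig = cookie_value.partition(".")
        good = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        return sid if hmac.compare_digest(sig, good) else None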

  • SNMP is a nightmare. There was a doc out there that used SNMP as an exemplar of "how not to write a protocol."

    It's easy to forget, but these protocols were designed back in the day when there wasn't a lot of ram, bandwidth, or CPU.

    Most of the problems with everything have been well-discussed. You can dig into the past to see, but interoperability with existing implementations is always the blocking factor.

    Heck, everyone knew the problems with ActiveX when it was announced...but that didn't stop MS. Same wit

    • Re: (Score:3, Informative)

      by sjames ( 1099 )

      SNMP is in serious need of retirement! Even XML is better (and that's saying a LOT!) The constraints that made it seem like a good idea simply don't exist anymore anywhere.

      See also BEEP and syslog over BEEP (which has never, to my knowledge, EVER been supported by anything). A protocol that didn't realize that we already HAVE multiplexing built in to the communications channel, and so re-implemented it in the most baroque way possible.

      ActiveX rises to new heights of bogosity. It's not just poor implementat
