
How Not To Design a Protocol

An anonymous reader writes "Google security researcher Michal Zalewski posted a cautionary tale for software engineers: an amusing historical overview of all the security problems with HTTP cookies, including an impressive collection of issues we won't be able to fix. Pretty amazing that modern web commerce relies on a mechanism so hacky that it does not even have a proper specification."
This discussion has been archived. No new comments can be posted.


  • Re:Does it work ? (Score:1, Informative)

    by Anonymous Coward on Saturday October 30, 2010 @09:11AM (#34072334)

    I wonder how many code snippets of yours have appeared on The Daily WTF. Just because something works doesn't mean it's good.

    I knew a pilot who flew with duct tape holding down the fuel cap on his wing. That worked too, but it's hardly ideal, is it?

    Here in Australia a few years back, a major power substation was "working" only because someone rigged up a hose to constantly drip water on an overheating thingomajig. Sure it works and props to the hardhack, but it's a piece of shit that can easily stop working.

    You see, some of us prefer things not to be a piece of shit.

  • Why the hate.... (Score:5, Informative)

    by Ancient_Hacker ( 751168 ) on Saturday October 30, 2010 @09:51AM (#34072476)

    Why go hatin' on this particular protocol?

    Most of them are just nuckin futs:

    * FTP: needs two connections. Commands and responses and data are not synced in any way. No way to get a reliable list of files. No standard file listing format. No way to tell what files need ASCII and which need BIN mode. And probably more fubarskis.

    * Telnet: The original handshake protocol is basically foobar-- the handshakes can go on forever. Several RFC patches did not help much. Basically the clients have to kinda cut off negotiations at some point and just guess what the other end can and will do.

    * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?
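    To make the FTP LIST complaint concrete: RFC 959 never specifies the listing format, so a portable client can only guess at per-server dialects (RFC 3659 later added the machine-readable MLSD command precisely because of this). A minimal sketch, with a hypothetical heuristic parser and two made-up but representative reply lines:

```python
# RFC 959 leaves the LIST reply format unspecified; these are two
# common dialects a client might receive for the same file.
unix_style = "-rw-r--r--   1 ftp   ftp   4096 Oct 30  2010 readme.txt"
dos_style = "10-30-10  09:11AM                 4096 readme.txt"

def guess_filename(line: str) -> str:
    """Hypothetical heuristic: assume the filename is the last
    whitespace-separated field.  This happens to work for both
    dialects above, but breaks on filenames containing spaces --
    and there is no general fix within plain LIST."""
    return line.split()[-1]
```

Real clients carry dozens of such dialect guesses, which is exactly the "no standard file listing format" fubarski above.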

     

  • by Anonymous Coward on Saturday October 30, 2010 @10:44AM (#34072710)

    culminating with AJAX.

    Oh no, not at all. There's WebSockets and Server-Sent Events in the pipeline now.

  • Re:Why the hate.... (Score:4, Informative)

    by hedrick ( 701605 ) on Saturday October 30, 2010 @10:50AM (#34072732)

    These protocols were designed for a different world:

    1) They were experiments with new technology. They had lots of options because no one was sure what would be useful. Newer protocols are simpler because we now know what turned out to be the most useful combination. And the ssh startup isn't that much better than telnet. Do a verbose connection sometime.

    2) In those days the world was pretty evenly split between 7-bit ASCII, 8-bit ASCII and EBCDIC, with some even odder stuff thrown in. They naturally wanted to exchange data. These days protocols can assume that the world is all ASCII (or Unicode embedded in ASCII, more or less) full duplex. It's up to the system to convert if it has to. They also didn't have to worry about NAT or firewalls. Everyone sane believed that security was the responsibility of end systems, that firewalls provide only the illusion of security (something that is still true), and that address space issues would be fixed by revving the underlying protocol to have large addresses (which should have been finished 10 years ago).

    3) A combination of patents and US export controls prevented using encryption and encryption-based signing right at the point where the key protocols were being designed. The US has ultimately paid a very high price for its patent and export control policies. When you're designing an international network, you can't use protocols that depend upon technologies with the restrictions we had on encryption at that time. It's not like protocol designers didn't realize the problem. There were requirements that all protocols had to implement encryption. But none of them actually did, because no one could come up with approaches that would work in the open-source, international environment of the Internet design process. So the base protocols don't include any authentication. That is bolted on at the application layer, and to this day the only really interoperable approach is passwords in the clear. The one major exception is SSL, and the SSL certificate process is broken*. Fortunately, these days passwords in the clear are normally on top of either SSL or SSH. We're only now starting to secure DNS, and we haven't even started SMTP.

    ---------------

    *How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.
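    The click-through problem has a precise technical meaning: discarding the CA-trust checks wholesale. As an illustration (using Python's ssl module, not anything from the comment), a default client context enforces exactly the checks a browser warning represents, and "proceeding anyway" corresponds to switching both off:

```python
import ssl

# A default client-side context enforces the CA-trust model being
# criticised here: the certificate must chain to a trusted CA, and
# the hostname must match.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Clicking through a self-signed-cert warning is morally equivalent
# to this: all cryptographic identity checking is discarded, leaving
# only encryption against passive eavesdroppers.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Note the order matters: check_hostname must be disabled before verify_mode can be set to CERT_NONE.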

  • by Junta ( 36770 ) on Saturday October 30, 2010 @11:04AM (#34072818)

    1. Sure
    2. stateful, stream-oriented, *and* reliable
    3. HTTP was designed around a stateless, datagram-style model but wanted reliability, so TCP was chosen for lack of a better option. SCTP, had it existed, might have been a better fit; at the time, the stateful stream aspect of TCP was forgiven since it could largely be ignored, while getting reliability over UDP was not so trivial.
    4. More critically, the cookie mechanism strives to add stateful aspects that cross connections, something TCP cannot provide. Simplest example: HTTP "state" strives to survive events like client IP changes, server failover, the client sleeping for a few hours, or just generally allowing the client to disconnect and reduce server load. TCP state can survive none of those.
    5. Indeed, at least AJAX enables somewhat sane masking of this, but the strictly one-response-per-request character of the protocol means a lot of things cannot be done efficiently. If HTTP had allowed the server to send arbitrary responses for the duration of a persistent connection, that would have greatly alleviated the inefficiencies that AJAX methods strive to mask.
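    The cross-connection point is the whole cookie trick in miniature. As an illustrative sketch using Python's stdlib http.cookies (names and values are made up): the server serialises its only state handle into a header, and an entirely new TCP connection later replays it -- that one header line is all the "session" there is:

```python
from http.cookies import SimpleCookie

# Connection 1: the server's response carries its only state handle
# in a Set-Cookie header.
jar = SimpleCookie()
jar.load("session=abc123; Path=/; HttpOnly")

# Connection 2: hours later, possibly from a different client IP, the
# client opens a fresh TCP connection and replays the stored value.
# Nothing at the TCP layer links the two connections; this header does.
replay = jar.output(attrs=[], header="Cookie:", sep="; ")
```

Here `replay` is the literal request header line, "Cookie: session=abc123".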

  • Re:Does it work ? (Score:4, Informative)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday October 30, 2010 @01:53PM (#34074040) Journal

    It would help if you qualified or explained a single one of these blanket assertions you've made.

    What data loss is caused by MySQL? And while perhaps a NoSQL database "du jour" causes data loss, are you suggesting that the major ones like Couch, Cassandra, Mongo, etc all have serious data loss issues?

    If so, specifics or it didn't happen. File a bug report, at the very least.

    I don't have much good to say about PHP, but didn't someone recently roll out a compiler for it? I can't imagine PHP performance is a significant bottleneck, especially as people run successful websites written in everything from Java to Ruby. And what would you suggest in its place, C++? Gee, thanks, now we can spend all our time focusing on memory leaks and buffer overflows instead.

    It's possible it's the wrong language for the job, but if you want to make that case, you've got to suggest an alternative.

    Similarly, for JavaScript -- say what? Chrome compiles JavaScript to native code, and Firefox just got faster than Chrome. Both of them are now more than competitive with languages typically used for server-side development, where you'd expect performance to be a much bigger bottleneck. Indeed, there's at least one modern server-side JavaScript framework, written for V8, Chrome's JavaScript engine.

    And again, is a potential alternative actually better for a given problem? Again, specific examples. There are applications which actually have performance needs which suggest they should be native apps, and people generally don't try those as web apps. Then there's a very, very thin border where a web app makes sense on the Web, but would be faster native -- but often, it's the design that's shite, not the technologies themselves.

    If you ignore IE, browser incompatibilities aren't so bad. Even if you include IE, are they significantly worse than the OS incompatibilities you'd face if you decided to go native?

    Finally, MVC. Exactly how is this "bastardized"? How would you do it differently, if you were writing a web framework? At least that's a specific example -- but you mentioned "software development and programming theories," plural, and you've only mentioned one.

    It's possible you've got some good points, but you haven't backed them up at all.

  • by sjames ( 1099 ) on Saturday October 30, 2010 @03:00PM (#34074476) Homepage Journal

    SNMP is in serious need of retirement! Even XML is better (and that's saying a LOT!) The constraints that made it seem like a good idea simply don't exist anymore anywhere.

    See also BEEP, and syslog over BEEP (which has never, to my knowledge, EVER been supported by anything): a protocol that didn't realize we already HAVE multiplexing built in to the communications channel, and so re-implemented it in the most baroque way possible.

    ActiveX rises to new heights of bogosity. It's not just poor implementation or even poor design. The very concept of letting random websites execute arbitrary code outside of a sandbox is brain dead.

  • Re:Aww shoot... (Score:4, Informative)

    by klapaucjusz ( 1167407 ) on Saturday October 30, 2010 @04:18PM (#34075020) Homepage

    Ah, the OSI model [sic, recte suite], [...] having never been implemented!

    Saying that the full OSI suite has never been implemented is like saying that nobody implements the full set of standards-track RFCs -- which is true, since some standards-track RFCs are mis-designed or even contradict other standards-track RFCs.

    Large parts of the OSI suite have been implemented, and some are still running today. For example, IS-IS [wikipedia.org] over CLNP [wikipedia.org] is commonly used for routing IP and IPv6 traffic on operators' backbones. (I was about to mention LDAP and X.509 before I realised they are not necessarily the best-designed parts of OSI.)

    Where you are right, though, is that large parts of OSI are morasses of complexity that have only been implemented due to government mandate and have since been rightly abandoned.

  • Re:Why the hate.... (Score:3, Informative)

    by RichiH ( 749257 ) on Sunday October 31, 2010 @10:35AM (#34079292) Homepage

    > * SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

    Huh? The first blank line tells SMTP to stop parsing headers, as the body has begun. Far from perfect, but hey. Anyway, I just sent myself an email with "From: foo@foo.org" as the first line of the body. Needless to say, it worked.
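    For the record, the mangling the grandparent half-remembers comes from the mbox on-disk storage format, not from SMTP: an mbox file delimits messages with lines beginning "From " (with a space), so storers quote such body lines. A minimal sketch of the classic "mboxo" quoting, with a hypothetical helper name:

```python
def mbox_escape(body: str) -> str:
    """Classic "mboxo" quoting: in an mbox file, a line beginning with
    "From " marks the start of a new message, so a body line that
    happens to start that way gets ">" prepended when stored."""
    return "\n".join(
        ">" + line if line.startswith("From ") else line
        for line in body.split("\n")
    )
```

Note that "From: foo@foo.org" (with a colon) never matched the delimiter in the first place, which is why the experiment above succeeded.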
