
OAuth 2.0 Standard Editor Quits, Takes Name Off Spec 101

New submitter tramp writes "The Register reports, 'Eran Hammer, who helped create the OAuth 1.0 spec, has been editing the evolving 2.0 spec for the last three years. He resigned from his role in June but only went public with his reasons in a blog post on Thursday. "At the end, I reached the conclusion that OAuth 2.0 is a bad protocol," Hammer writes. "WS-* bad. It is bad enough that I no longer want to be associated with it."' At the end of his post, he says, 'I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.'"
  • WordStar? (Score:4, Funny)

    by jabberw0k ( 62554 ) on Saturday July 28, 2012 @11:51AM (#40802147) Homepage Journal
    What's WS-* supposed to mean... WordStar? I almost thought, some geek reference to a VMS error message... (%WS-X-XYZZY) but surely not?
    • Re:WordStar? (Score:5, Informative)

      by Anonymous Coward on Saturday July 28, 2012 @11:54AM (#40802173)

It references the plethora of crappy standards created during the SOAP era. (WS-Security, WS-Routing, WS-Addressing, WS-YourMom)

    • Re:WordStar? (Score:5, Informative)

      by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Saturday July 28, 2012 @12:02PM (#40802249) Homepage

      What's WS-* supposed to mean...

      It refers to the plethora of web-services specifications, most of which take a fairly complicated protocol (XML over HTTP) and add huge new layers of mind-boggling complexity.

You don't ever need WS-*, except when you find you do because you're dealing with the situations that the WS-* protocol stack was designed to deal with. When that happens, you'll reinvent it all. Badly. JSON isn't better than XML, nor is YAML; what they gain in succinctness and support for syntactic types, they lose at the semantic level. REST isn't better than SOAP, it's just different, and security specifications in the REST world are usually hilariously lame. Then there's the state of service description, where WSDL is the only spec that has ever gained really wide traction. WS-* depresses me; I believe we should be able to do better, but the evidence of what happens in practice doesn't support that hunch.

      • Re:WordStar? (Score:4, Insightful)

        by Anonymous Coward on Saturday July 28, 2012 @12:47PM (#40802533)

        REST is better than soap because it uses the features of the transport instead of ignoring and duplicating them in an opaque fashion. SOAP is like having every function in your program take a single argument consisting of a mapping of arguments. Or a relational database schema with only three tables: objects, attributes, and values. In other words, SOAP is an implementation of the Inner Platform antipattern.
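        To make the "inner platform" point concrete, here is a minimal sketch (in Python, with hypothetical names not taken from the thread) of the shape the comment describes: one generic entry point with a bag of arguments, re-implementing what the language's own functions already provide.

```python
# The "inner platform" shape: a single generic dispatcher whose one
# argument is a mapping of arguments, mirroring SOAP-style envelopes.
def invoke(operation: str, arguments: dict):
    if operation == "add":
        return arguments["a"] + arguments["b"]
    raise ValueError(f"unknown operation: {operation}")

# What the host platform already gives you: a plain, typed function.
def add(a: int, b: int) -> int:
    return a + b

assert invoke("add", {"a": 2, "b": 3}) == add(2, 3) == 5
```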

      • Re:WordStar? (Score:5, Insightful)

        by Anonymous Coward on Saturday July 28, 2012 @01:02PM (#40802643)

        As a regretful author of several WS-* specs, after I got sucked into the vortex of IBM and MS when they passed too close to our academic lab, I felt exactly as Eran Hammer stated in his blog. He wrote, "There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, ... It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career." I have used so many of those same phrases in reflecting on my experience with other veterans of that period!

        And I'll tell you, XML and SOAP have no semantics either. They simply have a baroque shell game where well intentioned people confuse themselves with elaborate syntax. XML types and type derivation are syntactic shorthands for what amounts to regular expressions embedded in a recursive punctuation tree. There is absolutely no more meaning there than when someone does duck typing on a JSON object tree, particularly after the WS-* style "open extensibility" trick is added everywhere, allowing any combination of additional attributes or child elements to be composed into the trees via deployment-time and/or run-time decisions.

        As a result, I am rather enjoying the current acceptance of REST and dynamically typed/duck typed development models. It is much more honest about the late-binding, wild west nature of the semantics involved in our everyday web services.

      • Re: (Score:1, Interesting)

Ignore all concerns but scalability, and REST becomes far preferable to SOAP. The overhead of XML -- usually an order of magnitude in data size -- can have a huge, undesirable impact. That said, there's one aspect of SOAP that popular REST specs are missing: a definition language. With the help of WSDL, SOAP gained cross-platform client generation and type safety. REST protocols would do well to leverage this concept, at least for invocation parameter definitions. In most cases, REST results are undocumented. But even then, having a documented result schema would be a huge improvement.
        • by SuperKendall ( 25149 ) on Saturday July 28, 2012 @02:37PM (#40803271)

          Ignore all concerns but scalability, and REST becomes far preferable to SOAP.

          You don't have to ignore any concerns. SOAP was always a bad idea, as there is nothing to be gained from it you cannot work out by the combination of the HTTP protocol with REST style access.

          This was obvious even in the very earliest days of SOAP, when people were already noting that REST was so much more practical. I had to use SOAP off and on with various internal IT projects, but it was always a bad deal, and the work just about always eventually got moved to a REST-style service so people could get work done.

          That said, there's one aspect of SOAP that popular REST specs are missing: a definition language.

          As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

          But even then, having a documented result schema would be a huge improvement

          No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

          • by durdur ( 252098 )

            SOAP is quite widely deployed and yes, it is more complex for the client, but a lot of people have made it work for them. There is not one right way to build a web interface.

            • SOAP got popular because Java and especially .NET promoted it as the way to write web services. So, like XML, it's another case of an overengineered design-by-committee solution becoming popular simply because using it was the path of least resistance due to it being in the standard library. Most people using it that way don't actually have a clue about how it works, and they certainly didn't pick it because of the way it's designed.

              • XML is overengineered?

                • Very much so. Starting from simple things like the uncertain difference between attributes and child elements, and down to the unholy mess of DTD. Don't even get me started on some of the associated tech like XML Schema.

            Yes, I know SOAP is quite widespread. This is due to Java and C# making valiant efforts to build enough tooling around it to reduce the pain, or at least to build a system where you have even odds of making a client that can communicate with a server...

              But that does not change the fact that underneath it is a nightmare, that things can still go wrong, and that everyone's life becomes SO much easier when you go REST with JSON.

              The real death of SOAP was the rise of mobile clients, which do NOT have the processing power to burn on parsing bloated XML.

          • No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

            Couldn't disagree more. Frameworks and protocols are meant to make life easier. What I see with many implementations based on REST are frameworks that, through the lack of a published schema, encourage half-baked, undocumented APIs.
            • Frameworks and protocols are meant to make life easier.

              I agree. In this regard, SOAP is a dismal failure.

              What I see with many implementations based on REST are frameworks that, through the lack of a published schema, encourage half-baked, undocumented APIs

              To some extent, yes.

              Is there the possibility of something that holds a little more definition than the very loose combo of JSON over REST? I won't deny that it's possible, but SOAP is way, way too far off the edge.

              As it stands, simply well-documented APIs get you most of the way there.

          • As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

            FFS! JSON IS NOT A DATA DEFINITION LANGUAGE!!!

            Just get a fucking clue. JSON is a syntax, nothing less, nothing more. It is up to the client to inspect the packet, and it has NO WAY to validate that the contents of the packet are indeed correct. Contrast this with an XSD that would outline which elements could exist, which attributes they had, where they could appear, what they could contain, and even limit exactly how many could exist.

            JSON provides none of that. Also, JavaScript, which is what JSON is, is a dynamically typed language.
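            For what it's worth, the gap the two sides are arguing over can be shown in a few lines. This is a hedged sketch using the third-party jsonschema package (not something either commenter mentions); the schema and payload are hypothetical.

```python
import jsonschema  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {
        "user_id": {"type": "integer"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["user_id"],
    "additionalProperties": False,
}

payload = {"user_id": 42, "tags": ["vacation", "beach"]}

# Duck typing: reach in and hope the shape is right.
print(payload["user_id"])

# Declared schema: structure and types checked up front, which is
# roughly the role an XSD plays for an XML document.
jsonschema.validate(instance=payload, schema=schema)  # raises on mismatch
```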

            • JSON IS NOT A DATA DEFINITION LANGUAGE!!!

              Of course not, but it IS a loosely typed means of transferring data.

              What I was arguing against is NEEDING a data definition language. That has ALWAYS been needless overhead for any web service I have ever seen, and in fact you are limiting clients by mandating a single possible data type for a field when a client might want to treat something differently.

              And having a Schema is NOT WASTEFUL -- it's a condom to prevent asswipes like you

              In my experience with over a de

        • Comment removed based on user account deletion
        • Re:WordStar? (Score:4, Informative)

          by shutdown -p now ( 807394 ) on Saturday July 28, 2012 @07:46PM (#40804723) Journal

          The problem with SOAP and WS-* stuff isn't XML. It's rather that it takes, IIRC, five levels of nesting of said XML to call a simple web service that takes an integer and returns another one. In other words, it's ridiculously overengineered for the simple and common cases, while supposedly covering some very complicated scenarios better - a claim that I cannot really verify, since I've never in my life seen a system architecture, even in the "enterprise", where that complexity was actually useful.
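          For a sense of the nesting being described, here is roughly what calling a trivial increment(41) service looks like in each style; the service name, namespace, and URLs are hypothetical.

```python
# SOAP: envelope -> body -> operation -> parameter, before any data moves.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:increment xmlns:m="http://example.com/math">
      <m:value>41</m:value>
    </m:increment>
  </soap:Body>
</soap:Envelope>"""

# The REST-style equivalent of the same call:
#   GET http://example.com/math/increment?value=41   ->   42
```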

        • by toriver ( 11308 )

          WADL [wikipedia.org] is the REST equivalent of SOAP's WSDL, though apparently REST services can be described using WSDL 2.0 as well.

      • I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!
        SOAP sucks big monkeyballs and REST doesn't, period.
        • by dkf ( 304284 )

          I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!

          You're right about that, they're not the same thing. They're fundamentally different ways of viewing an application on the web (one describes things beforehand, the other at runtime; one factors verbs first, the other nouns first). But in the big picture, they're really not that different.

          SOAP sucks big monkeyballs and REST doesn't, period.

          That's what it seems like to you, but when you're working with applications that you're building on top of these webapps, SOAP works better. The tooling is better. The separatio

  • by An Ominous Coward ( 13324 ) on Saturday July 28, 2012 @12:02PM (#40802243)

    The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn't actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

    Sounds familiar. For anyone following the Smart Grid work, this is exactly why Smart Energy 2.0 is a fiasco. All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants -- parasites that feed first on the drawn-out work within the standards organization that results in a "flexible" specification (meaning that it's not a specification at all), then feed on any group that tries to implement the standard because they'll need the "expert" insight in order to make the "flexible" damn thing work at all.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      To be fair, it's a hard problem. Let's take the analogous example of a word processor. Surely, we can come up with something less bloated than Microsoft Word? Let's just get rid of all the arcane features that only 1 percent of the user base wants. That sounds good, until you find that entire industries (such as legal) run their business on Word and depend on those arcane features. Another user base (such as sci pubs) might need an entirely different subset of arcane features. Then there are those glo

      • by Nurgled ( 63197 )
        Option 4: Focus on a specific use case and let others focus on other use cases, rather than trying to make one product that is a jack of all trades and a master of none. There's no rule that says all problems must be solved with one piece of software.
        • There's no rule that says all problems must be solved with one piece of software.

          There is such a rule. It's called monopoly capitalism.

    • All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants

      Sad but true. About a decade ago I was part of an IETF standards effort that was turning into crap fast. When someone finally decided to run an interop test on the implementations, the conclusion was "this protocol does not work". The working group chair's comment on this was "we'll push it through as a standard anyway and then someone will have to figure out how to make it work". My (private) reaction to this was "The IETF has now become the ISO / OSI". In other words, it had become the very thing it was originally a reaction against.

  • a few excerpts (Score:4, Interesting)

    by anarcat ( 306985 ) on Saturday July 28, 2012 @12:11PM (#40802305) Homepage

    Good article; it's quite interesting to see the problems a community faces when going through standards processes.

    Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete. Bringing OAuth to the IETF was a huge mistake.

    That is a worrisome situation. With the internet's openness resting so heavily on open standards, the idea that the corporate world is taking over standards bodies and sabotaging them to serve its own selfish interests is quite problematic, to say the least.

    As for the actual concerns he is raising about OAuth 2.0, this one is particularly striking:

    Bearer tokens - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and as the current proposals demonstrate, the group is solely focused on enterprise use cases.

    I don't know much about OAuth, but this sounds like a stupid move.

    • by Trepidity ( 597 )

      The enterprise-use-cases problem is partly for structural reasons. The IETF process makes it most natural to participate if you're a representative of a company, because it is very long, requires many meetings (some of them in-person), and therefore is most feasible to participate in if someone is paying your salary and travel to spend 3 years standardizing a protocol. Sometimes academics participate as well, if it's a proposed standard that is very close to their interests, enough so that it makes sense to

    • by naasking ( 94116 )

      I don't know much about OAuth, but this sounds like a stupid move.

      No, it's how it should have been to begin with. Bearer tokens are now pure capabilities supporting arbitrary delegation patterns. This is exactly what you want for a standard authorization protocol.

      Tying crypto to the authorization protocol is entirely redundant. For one thing, it immediately eliminates web browsers from being first-class participants in OAuth transactions. The bearer tokens + TLS makes browsers first-class, and is a pattern
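      A minimal sketch of the bearer-token pattern being described, using the Python requests package; the endpoint and token value are hypothetical. The token is a pure capability: whoever holds it can use it, and TLS supplies the confidentiality and server authentication.

```python
import requests

token = "2YotnFZFEjr1zCsicMWpAA"  # an opaque capability; nothing to sign

resp = requests.get(
    "https://api.example.com/photos",
    headers={"Authorization": f"Bearer {token}"},  # bearer scheme header
    timeout=10,
)
resp.raise_for_status()
```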

  • OAuth (Score:3, Interesting)

    by bbroerman ( 715822 ) on Saturday July 28, 2012 @12:18PM (#40802339) Homepage
    Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.
    • Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

      1.0 had some issues when you moved beyond web apps (JavaScript or mobile apps), but I am much more confident of its security.

    • Re:OAuth (Score:4, Interesting)

      by icebraining ( 1313345 ) on Saturday July 28, 2012 @03:48PM (#40803567) Homepage

      There's nothing wrong with SSL/TLS for this. Software doesn't fall for SSL stripping and you can even copy the service's certificate over and validate against that, bypassing CA issues.
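      One way to do what this comment suggests with the Python requests package; the URL and certificate path are hypothetical. This works when the pinned file can act as the trust anchor, e.g. a self-signed service certificate or the private CA that issued it.

```python
import requests

# Trust only the certificate shipped with the client, not the system CAs.
resp = requests.get(
    "https://api.example.com/token",
    verify="certs/api.example.com.pem",
    timeout=10,
)
```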

    • by chrb ( 1083577 )

      Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS

      Hammer has been saying similar things for years now: OAuth 2.0 (without Signatures) is Bad for the Web [hueniverse.com]

    • by Jonner ( 189691 )

      Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

      TLS is not broken at all. Using it properly can be difficult. This, as well as the lack of redundant security mechanisms, is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.

      • by dkf ( 304284 )

        TLS is not broken at all. Using it properly can be difficult. This, as well as the lack of redundant security mechanisms, is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.

        To be more exact, the key to using TLS well is controlling the code that decides whether a particular chain of certificates (the ones authorizing a connection) is actually trusted. HTTPS does this one particular way (a fairly large group of root CAs that can delegate to others, coupled with checking that the certificate actually matches the hostname that was requested), but it isn't the only way; having a list of X.509 certificates that you trust and denying all others is far more secure.
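        A sketch of that "trust exactly this list and deny all others" approach using Python's standard-library ssl module; the hostname and file path are hypothetical.

```python
import socket
import ssl

# Trust only the certificates in this file; the system CA list is ignored.
context = ssl.create_default_context(cafile="certs/trusted.pem")
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True  # still tie the cert to the name we dialed

with socket.create_connection(("service.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="service.example.com") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: service.example.com\r\n\r\n")
        print(tls.recv(4096))
```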

  • by wonkey_monkey ( 2592601 ) on Saturday July 28, 2012 @12:52PM (#40802575) Homepage
    Yeah yeah, I know, if you don't already know and can't be bothered to go looking, you must therefore be a dribbling buffoon who should not dare to even use the internet let alone visit the hallowed and sacred Slashdot, but:

    OAuth is an open standard for authorization. It allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their credentials, typically supplying username and password tokens instead. Each token grants access to a specific site (e.g., a video editing site) for specific resources (e.g., just videos from a specific album) and for a defined duration (e.g., the next 2 hours). This allows a user to grant a third party site access to their information stored with another service provider, without sharing their access permissions or the full extent of their data.
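    In code, the flow described above compresses to three steps. This is a hedged sketch of the OAuth 2.0 authorization-code grant in Python with the requests package; every URL, identifier, and secret here is hypothetical.

```python
import requests

# 1. Send the user's browser to the provider to approve access:
#    https://provider.example.com/authorize?response_type=code
#        &client_id=CLIENT_ID&redirect_uri=https://app.example.com/cb
#        &scope=photos.read

# 2. The provider redirects back with ?code=...; exchange it for a token.
resp = requests.post(
    "https://provider.example.com/token",
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",
        "redirect_uri": "https://app.example.com/cb",
        "client_id": "CLIENT_ID",
        "client_secret": "CLIENT_SECRET",
    },
    timeout=10,
)
access_token = resp.json()["access_token"]

# 3. Use the token in place of the user's credentials.
requests.get(
    "https://provider.example.com/photos",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
```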

  • by CockMonster ( 886033 ) on Saturday July 28, 2012 @01:05PM (#40802663)
    I tried to implement OAuth v1 on a mobile device. What a pain in the hole. And it all fell down once you had to get the user to fire up the browser to accept the request. There was no way (that I could figure out) to handle the callback, so instead it seems to have been implemented via a corporate server, thereby defeating the whole purpose of it. The easiest to work with was Dropbox. I never got what extra level of security sorting the parameters provided; the signature would show up any tampering regardless, so it just means you gobble up memory unnecessarily.
    • by Anonymous Coward

      I never got what extra level of security sorting the parameters provided; the signature would show up any tampering regardless, so it just means you gobble up memory unnecessarily.

      Well it's good that someone else understood it and forced you to do it, then.

      But in actual response to your comment: it allows the request signature to be calculated by the server you're sending the request to, so that it can verify the parameters have not been tampered with. The sorting gives client and server a canonical parameter order, so both sides compute the signature over the same string.

    • by Mark Atwood ( 19301 ) on Saturday July 28, 2012 @03:41PM (#40803531)

      I was there, I helped write v1.

      The reason you had to sort the parameters etc. etc. was because OAuth 1.0 was designed to be implementable by a PHP script running under Apache on Dreamhost. Which meant you didn't get access to the HTTP Authorization header, and you didn't get access to the complete URL that was accessed. So we had to work out a way to canonicalize the URL to be signed from what we could guarantee you'd have: your hostname, your base URL path, and an unsorted bag of URL parameters. Believe me, we *wished* for a straightforward URL canonicalization standard we could reference. None existed. So we cussed a lot, bit the bullet, and wrote one that was as fast and simple as possible: sort the parameters and concatenate them.

      Go yell at the implementors of Apache and of PHP. If we could have guaranteed that you'd have access to an unmangled Authorization: HTTP header, the OAuth 1.0 spec would have been 50% shorter and a hell of a lot easier to implement.
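      In outline, the canonicalization Mark describes looks like this. A sketch following the OAuth 1.0 HMAC-SHA1 scheme; the parameter values and secrets are made up for illustration.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s: str) -> str:
    # OAuth's percent-encoding: encode everything except unreserved chars.
    return quote(s, safe="")

def base_string(method: str, url: str, params: dict) -> str:
    # Sort the encoded pairs so client and server build the same string.
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), pct(url), pct(param_str)])

def sign(method: str, url: str, params: dict,
         consumer_secret: str, token_secret: str = "") -> str:
    key = f"{pct(consumer_secret)}&{pct(token_secret)}".encode()
    msg = base_string(method, url, params).encode()
    return base64.b64encode(hmac.new(key, msg, hashlib.sha1).digest()).decode()

print(sign("GET", "http://example.com/photos",
           {"oauth_nonce": "abc123", "size": "original"},
           "kd94hf93k423kf44"))
```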

      • Hi Mark, thanks for replying. Do you not think it was a flaw to target a spec towards a specific language/architecture? Another thing that really pissed me off was the complete lack of help testing my implementation. I'd have given up far sooner if it hadn't been for this site: http://term.ie/oauth/example/client.php [term.ie]
        • by equex ( 747231 )
          Well, half the world runs on Apache + PHP, but you are right in asking why.
        • by dkf ( 304284 )

          Do you not think it was a flaw to target a spec towards a specific language/architecture?

          From the perspective of someone on the outside of the process, it was both a mistake and not a mistake. It was a mistake in that it caused too many compromises to be made. It was not a mistake in that it allowed a great many deployments to be made very rapidly. IMO, they should have compromised a bit less and pushed back at the Apache devs a bit harder to get them to support making the necessary information available.

          But I wasn't there, so I've very little room to criticize.

      • Speaking of sorting parameters, there is at least one issue I still see in a lot of libraries. The spec says encode things, then sort them. Many of the libs I've seen do it the other way around. Sorting first is the most obvious way to do it, but I guess the spec was trying to avoid issues with locale-specific collations by forcing everything to ASCII first. Most sites use plain alphanumeric parameter names, so people get away with doing it either way.

        Still, it goes to show how developers can completely fail to read the spec.
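        The ordering disagreement above is easy to demonstrate: the two orders diverge as soon as a character sorts differently before and after percent-encoding. The parameter names below are contrived for illustration.

```python
from urllib.parse import quote

keys = ["a-b", "a=b"]  # '-' stays as-is, '=' becomes %3D when encoded

sort_then_encode = [quote(k, safe="") for k in sorted(keys)]
encode_then_sort = sorted(quote(k, safe="") for k in keys)

print(sort_then_encode)  # ['a-b', 'a%3Db']
print(encode_then_sort)  # ['a%3Db', 'a-b']  <- encode-first flips the order
```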

        • IIRC, you have to encode the key, encode the parameter, append them with '&' and encode again, and then sort them, generate the signature, and encode the signature key and the signature itself. Or something. Oh, and the encoding routine is urlencode plus some extra characters, so that has to be written from scratch too.
      • Go yell at the implementors of Apache and of PHP

        Then why didn't you? Last time I checked, Apache was open source, so you could have submitted your required changes. I'm not quite so sure about PHP, but maybe there is a way to add an extension to it that grabs the unmangled header from your newly customised Apache.

        • The problem is, AIUI, that the goal was to make things work on shitty webhosts. So working on an up-to-date Apache/PHP with the right settings is not enough; you have to work with whatever old version of Apache/PHP and whatever crummy config the webhost offers.

          • Sure, but if you don't fix things, they'll never get fixed. The OP seemed to just be too whiny about how things were difficult, boohoo.

            In a year or two all those old Apache webhosts would be upgraded - or TBH, if he'd made the patch and added it, they would pretty much all get upgraded in the next update release. And those that didn't would be really insecure anyway, due to other unpatched vulnerabilities. I think webhosts tend to update their servers reasonably regularly.

  • I’ve worked on related standards and I can identify with much of Eran’s frustration. Eran’s a smart, dedicated, passionate person who has worked very hard to make OAuth work for everyone - not just those looking to profit from it. And OAuth is currently the best open standard option for securing REST-based web services today. I hope that when he thinks about OAuth, he thinks primarily about the huge contribution he has made, and not with regret. The standardization process ultimately
