OAuth 2.0 Standard Editor Quits, Takes Name Off Spec
New submitter tramp writes
"The Register reports, 'Eran Hammer, who helped create the OAuth 1.0 spec, has been editing the evolving 2.0 spec for the last three years. He resigned from his role in June but only went public with his reasons in a blog post on Thursday. "At the end, I reached the conclusion that OAuth 2.0 is a bad protocol," Hammer writes. "WS-* bad. It is bad enough that I no longer want to be associated with it."' At the end of his post, he says, 'I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.'"
Re: (Score:1, Insightful)
Re: (Score:2)
I rarely laugh at these offtopic comments. The original MyCleanPC ones were kinda funny but the later ones just got too ridiculous. But this one was just short and sweet. Got a good laugh out of me.
WordStar? (Score:4, Funny)
Re: (Score:2)
Re:WordStar? (Score:5, Informative)
I have never seen "ws-*" before... reference please?
Ask and ye shall receive.
http://en.wikipedia.org/wiki/WS-* [wikipedia.org]
http://lmgtfy.com/?q=ws-* [lmgtfy.com]
Courtesy of wikipedia and google.
Re: (Score:1)
http://lmgtfy.com/?q=ws-* [lmgtfy.com]
Google gives me a bunch of .ws domain web sites with that search, but nothing about WS-*.[1] Including -inurl:.ws helps, but only very little.
A search for WS-* oauth returns more relevant results.
Bing (which I don't use as Google usually gives me more useful results) on a search of "ws-*" has "List of web service specifications - Wikipedia, the free encyclopedia" as the fourth result.
Both Bing and Google give useful suggestions in the dropdown when typing "ws-*" into the search box.
[1] Google resul
Re: (Score:2)
Oh please, you arrogant twats. This web services sector is such a huge over-engineered mess of enterprisey consultant circle-jerking,
Talk about going off on a fucking tangent. Who the hell says I or anyone else is proud of the WS-* shit? Do you have to love a stupid acronym to know how to google it? It's not about whether WS-* is good or bad. It's about posters on a site whose motto is 'News for Nerds' who need 3rd parties to google acronyms for them.
I'm actually *proud* that I don't have any relationship with it.
In practice, it's one of the dumbest things out there.
Preaching to the choir, buddy. You ain't the first one who found out the flaws of it. Though don't let that get in the way of feeling intelligent by repeating what most people already know
Re: (Score:2)
Did you actually look at the fucking results from what you googled? Or were you just in such a hurry to be an arrogant twat that you couldn't bother?
Yes, and the results right on top contain, among other things... tada... web services. Shit, let's forget about google. What about wikipedia, that oh so not new and wonderful site that lists almost all types of shit, including... tada... an entry for WS-*.
So what's your gripe anyways, that people think WS-* is a good thing (in which case, you are building a strawman because no one is making that claim here, certainly not me), or that the google results didn't spoon feed you the precise answer of your liking?
Re: (Score:2)
Mods, lay off the crack pipe. The parent answered the question (indirectly.)
Re:WordStar? (Score:5, Informative)
It references the plethora of crappy standards created during the SOAP era. (WS-Security, WS-Routing, WS-Addressing, WS-YourMom)
Re:WordStar? (Score:5, Informative)
What's WS-* supposed to mean...
It refers to the plethora of web-services specifications, most of which take a fairly complicated protocol (XML over HTTP) and add huge new layers of mind-boggling complexity.
You don't ever need WS-*, except when you find you do because you're dealing with the situations that the WS-* protocol stack was designed to deal with. When that happens, you'll reinvent it all. Badly. JSON isn't better than XML, nor is YAML; what they gain in succinctness and support for syntactic types, they lose at the semantic level. REST isn't better than SOAP, it's just different, and security specifications in the REST world are usually hilariously lame. Then there's the state of service description, where WSDL is the only spec that's ever gained really wide traction. WS-* depresses me; I believe we should be able to do better, but the evidence of what happens in practice doesn't support that hunch.
Re: (Score:1)
The irony here is that your sentence is punctuationally deficient.
Re: (Score:1)
One does not simply walk into Mordor.
Re:WordStar? (Score:4, Insightful)
REST is better than soap because it uses the features of the transport instead of ignoring and duplicating them in an opaque fashion. SOAP is like having every function in your program take a single argument consisting of a mapping of arguments. Or a relational database schema with only three tables: objects, attributes, and values. In other words, SOAP is an implementation of the Inner Platform antipattern.
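The single-mapping-argument analogy can be sketched in a few lines. This is an illustration only; the envelope layout and the `invoke` dispatcher below are invented, not taken from any SOAP toolkit:

```python
# A direct call: arguments are first-class and the language checks them.
def transfer(amount: int, src: str, dst: str) -> str:
    return f"moved {amount} from {src} to {dst}"

# The "inner platform" style the comment describes: every call takes one
# opaque mapping, so none of those checks happen until runtime, if ever.
# The envelope layout here is invented for illustration, not real SOAP.
def invoke(envelope: dict) -> dict:
    body = envelope["Body"]
    args = body["arguments"]
    if body["operation"] == "transfer":
        return {"result": transfer(args["amount"], args["src"], args["dst"])}
    raise KeyError(f"unknown operation {body['operation']!r}")

direct = transfer(100, "alice", "bob")
wrapped = invoke({"Body": {"operation": "transfer",
                           "arguments": {"amount": 100, "src": "alice", "dst": "bob"}}})
```

Both calls do the same work, but the second reimplements argument passing on top of a language that already has it, which is the antipattern in a nutshell.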
Re:WordStar? (Score:5, Insightful)
As a regretful author of several WS-* specs, after I got sucked into the vortex of IBM and MS when they passed too close to our academic lab, I felt exactly as Eran Hammer stated in his blog. He wrote, "There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, ... It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career." I have used so many of those same phrases in reflecting on my experience with other veterans of that period!
And I'll tell you, XML and SOAP have no semantics either. They simply have a baroque shell game where well intentioned people confuse themselves with elaborate syntax. XML types and type derivation are syntactic shorthands for what amounts to regular expressions embedded in a recursive punctuation tree. There is absolutely no more meaning there than when someone does duck typing on a JSON object tree, particularly after the WS-* style "open extensibility" trick is added everywhere, allowing any combination of additional attributes or child elements to be composed into the trees via deployment-time and/or run-time decisions.
As a result, I am rather enjoying the current acceptance of REST and dynamically typed/duck typed development models. It is much more honest about the late-binding, wild west nature of the semantics involved in our everyday web services.
Re: (Score:1, Interesting)
Ignore nothing, SOAP is awful (Score:5, Insightful)
Ignore all concerns but scalability, and REST becomes far more preferable than SOAP.
You don't have to ignore any concerns. SOAP was always a bad idea, as there is nothing to be gained from it you cannot work out by the combination of the HTTP protocol with REST style access.
This was obvious even in the very earliest days of SOAP, when people were already noting that REST was so much more practical. I had to use it off and on with various internal IT projects, but it was always a bad deal, and just about always it was eventually moved to a REST-style service so people could get work done.
That said, there's one aspect of SOAP that popular REST specs are missing: a definition language.
As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.
But even then, having a documented result schema would be a huge improvement
No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.
Re: (Score:2)
SOAP is quite widely deployed and yes, it is more complex for the client, but a lot of people have made it work for them. There is not one right way to build a web interface.
Re: (Score:2)
SOAP got popular because Java and especially .NET promoted it as the way to write web services. So, like XML, it's another case of an overengineered design-by-committee solution becoming popular simply because using it was the path of least resistance due to it being in the standard library. Most people using it that way don't actually have a clue about how it works, and they certainly didn't pick it because of the way it's designed.
Re: (Score:2)
XML is overengineered?
Re: (Score:2)
Very much so. Starting from simple things like the uncertain difference between attributes and child elements, and down to the unholy mess of DTD. Don't even get me started on some of the associated tech like XML Schema.
Re: (Score:2)
Yes, I know SOAP is quite widespread. This is due to Java and C# making valiant efforts to build enough tooling around it to reduce the pain, or at least building a system where you have even odds of making a client that can communicate with a server...
But that does not change the fact that underneath it is a nightmare, things can still go wrong, and that everyone's life becomes SO much easier when you go REST with JSON.
The real death of SOAP was the rise of mobile clients, which do NOT have the processing
Re: (Score:1)
Couldn't disagree more. Frameworks and protocols are meant to make life easier. What I see with many implementations based on REST are frameworks that, through the lack of a published
Re: (Score:2)
Frameworks and protocols are meant to make life easier.
I agree. In this regard, SOAP is a dismal failure.
What I see with many implementations based on REST are frameworks that, through the lack of a published schema, encourage half-baked, undocumented APIs
To some extent, yes.
Is there the possibility of something that might hold a little more definition than the very loose combo of JSON over REST? I will not deny that is possible, but SOAP is way, way too far off the edge.
As it stands, simply well document
Re: (Score:2)
As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.
FFS! JSON IS NOT A DATA DEFINITION LANGUAGE!!!
Just get a fucking clue. JSON is a syntax, nothing less, nothing more. It is up to the client to inspect the packet, and it has NO WAY to validate that the contents of the packet are indeed correct. Contrast this with an XSD that would outline which elements could exist, which attributes they had, where they could exist, what they could contain, and even limit exactly how many could exist.
JSON provides none of that. Also, JavaScript, which is what JSON is, is a dynamic
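The contrast being drawn here can be sketched concretely. Without a schema, every structural guarantee an XSD would declare has to be re-checked by hand on the client; the "order" payload below is hypothetical:

```python
# Without a schema, every structural guarantee an XSD would declare has to
# be re-checked by hand on the client. The "order" payload is hypothetical.
def validate_order(doc):
    errors = []
    if not isinstance(doc.get("id"), int):
        errors.append("id must be an integer")
    items = doc.get("items")
    if not isinstance(items, list) or not (1 <= len(items) <= 10):
        errors.append("items must be a list of 1..10 entries")
    else:
        for i, item in enumerate(items):
            if not isinstance(item.get("sku"), str):
                errors.append(f"items[{i}].sku must be a string")
    return errors

good = {"id": 7, "items": [{"sku": "A-1"}]}
bad = {"id": "7", "items": []}
```

An XSD expresses the element names, types, and occurrence limits declaratively; here each one is an ad hoc `if`, maintained separately by every consumer.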
Re: (Score:2)
JSON IS NOT A DATA DEFINITION LANGUAGE!!!
Of course not, but it IS a loosely typed means of transferring data.
What I was arguing against is NEEDING a data definition language. That has ALWAYS been needless overhead for any web service I have ever seen, and in fact you are limiting clients by mandating a single possible data type for a field when a client might want to treat something differently.
And having a Schema is NOT WASTEFUL -- it's a condom to prevent asswipes like you
In my experience with over a de
Re: (Score:2)
Re:WordStar? (Score:4, Informative)
The problem with SOAP and WS-* stuff isn't XML. It's rather that it takes, IIRC, five levels of nesting of said XML to call a simple web service that takes an integer and returns another one. In other words, it's ridiculously overengineered for the simple and common cases, while supposedly covering some very complicated scenarios better - a claim that I cannot really verify since I've never in my life seen system architecture, even in the "enterprise", where that complexity was actually useful.
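As a rough illustration of the nesting complaint (the envelope below is schematic, not copied from any real WSDL), compare an integer-in, integer-out SOAP call against its REST equivalent:

```python
import xml.etree.ElementTree as ET

# Schematic SOAP request (illustrative, not from a real WSDL) for a service
# that increments an integer. The payload sits four elements deep.
soap_request = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Increment xmlns="http://example.com/calc">
      <value>41</value>
    </Increment>
  </soap:Body>
</soap:Envelope>"""

# The REST-style equivalent of the same call is a single request line.
rest_request = "GET /calc/increment?value=41 HTTP/1.1"

def depth(elem, d=1):
    """Deepest element nesting level in the parsed tree."""
    return max([depth(child, d + 1) for child in elem] or [d])

envelope_depth = depth(ET.fromstring(soap_request))
```

Real toolkits often add headers, security tokens, and addressing blocks inside the envelope, which is where the "five levels" figure in the comment comes from.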
Re: (Score:2)
WADL [wikipedia.org] is the REST equivalent of the WSDL of SOAP, though apparently REST services can be described using WSDL 2.0 as well.
Re: (Score:1)
SOAP sucks big monkeyballs and REST doesn't, period.
Re: (Score:2)
I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!
You're right about that, they're not the same thing. They're fundamentally different ways of viewing an application on the web (one is about describing things beforehand, the other at runtime; one is about factoring verbs first, the other is nouns first). But from the perspective of the big picture, they're really not that different.
SOAP sucks big monkeyballs and REST doesn't, period.
That's what it seems like to you, but when you're working with applications that you're building on top of these webapps, SOAP works better. The tooling is better. The separatio
Re: (Score:2)
It doesn't have to be perfect - only "good enough". Look at all the technologies we're currently using: The X Server, HTTP, and so on. None of it is perfect, but "good enough".
So instead of moaning, do something to improve it!
Improvement can only take place when things can be salvaged at a reasonable cost. When the architecture of things is bad enough to cross a certain point, it is best to start over. The software industry has plenty of live examples of this, accumulated over the last 30-40 years.
Re: (Score:2)
Eran Hammer seems to be saying that OAuth 1 is "good enough" and few will benefit from OAuth 2.
Re: (Score:3)
Nobody uses X Servers for what they were designed for (though I don't dislike the concept), and the only problem with HTTP is that people are abusing it for things it shouldn't be used for. By design, HTTP is a stateless pull protocol, and people are abusing it by forcing state, streaming, and pushing for no good reason.
Lack of perfection is not the problem; the problem is high-level idiots with influence reinventing high-level wheels full of compromises because they don't know better and should have never b
Re: (Score:2)
Don't say "nobody," I use them for what they were designed for at least a few times a year.
Re: (Score:2)
Once a spec has spent too long trying to get from good enough to perfect, often by gluing on so many options, exceptions, and extensions that nearly anything can be said to comply but nothing can be said to implement it comprehensibly, there can be no good enough any more. The closest you can get is to carve a bunch of it away and call a cleaned up subset of it good enough.
Sounds familiar (Score:3)
The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn't actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.
Sounds familiar. For anyone following the Smart Grid work, this is exactly why Smart Energy 2.0 is a fiasco. All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants -- parasites that feed first on the drawn-out work within the standards organization that results in a "flexible" specification (meaning that it's not a specification at all), then feed on any group that tries to implement the standard because they'll need the "expert" insight in order to make the "flexible" damn thing work at all.
Re: (Score:2)
SIP is nearly as bad.
SIP is not only nearly as bad; I would say that SIP is an abomination and that the well-thought-out, well-designed H.323 should have won the soft-phone protocols war. But as usual the Worse is Better [wikipedia.org] approach won...
Re: (Score:2, Insightful)
To be fair, it's a hard problem. Let's take the analogous example of a word processor. Surely, we can come up with something less bloated that Microsoft Word? Let's just get rid of all the arcane features that only 1 percent of the user base wants. That sounds good, until you find that entire industries (such as legal) run their business on Word and depend on those arcane features. Another user base (such as sci pubs) might need an entirely different subset of arcane features. Then there are those glo
Re: (Score:2)
Re: (Score:2)
There's no rule that says all problems must be solved with one piece of software.
There is such a rule. It's called monopoly capitalism.
Re: (Score:2)
All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants
Sad but true. About a decade ago I was part of an IETF standards effort that was turning into crap fast, when someone finally decided to run an interop test on implementations the conclusion was "this protocol does not work". The working group chair's comment on this was "we'll push it through as a standard anyway and then someone will have to figure out how to make it work". My (private) reaction to this was "The IETF has now become the ISO / OSI". In other words it had become the very thing that it was
Re: (Score:2)
Re: (Score:2)
Wrong kind of framework. They're talking about a framework of concepts and ideas, not a software framework.
a few excerpts (Score:4, Interesting)
Good article; quite interesting to see the problems a community faces when going through standards processes.
That is a worrisome situation. With the openness of the internet resting so heavily on open standards, the idea that the corporate world is taking over standards bodies and sabotaging them to serve their own selfish interests is quite problematic, to say the least.
As for the actual concerns he is raising about OAuth 2.0, this one is particularly striking:
I don't know much about oauth, but this sounds like a stupid move.
Re: (Score:3)
The enterprise-use-cases problem is partly for structural reasons. The IETF process makes it most natural to participate if you're a representative of a company, because it is very long, requires many meetings (some of them in-person), and therefore is most feasible to participate in if someone is paying your salary and travel to spend 3 years standardizing a protocol. Sometimes academics participate as well, if it's a proposed standard that is very close to their interests, enough so that it makes sense to
Re: (Score:2)
No, it's how it should have been to begin with. Bearer tokens are now pure capabilities supporting arbitrary delegation patterns. This is exactly what you want for a standard authorization protocol.
Tying crypto to the authorization protocol is entirely redundant. For one thing, it immediately eliminates web browsers from being first-class participants in OAuth transactions. The bearer tokens + TLS makes browsers first-class, and is a pattern
I'm glad someone sees things clearly (Score:1)
OAuth (Score:3, Interesting)
Re: (Score:2)
Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.
1.0 had some issues when you moved beyond web apps (JavaScript or mobile apps), but I am much more confident of its security.
Re:OAuth (Score:4, Interesting)
There's nothing wrong with SSL/TLS for this. Software doesn't fall for SSL stripping and you can even copy the service's certificate over and validate against that, bypassing CA issues.
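A sketch of the certificate-pinning idea mentioned above. The certificate bytes here are placeholders; real code would obtain the peer's DER encoding via `ssl.SSLSocket.getpeercert(binary_form=True)`:

```python
import hashlib
import hmac

# Sketch of the pinning idea: trust exactly one certificate instead of any
# CA. The bytes below are placeholders; real code would obtain the DER via
# ssl.SSLSocket.getpeercert(binary_form=True).
def fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha256(der_bytes).hexdigest()

def is_pinned(presented_der: bytes, pinned_hex: str) -> bool:
    # compare_digest avoids leaking how much of the hash matched via timing.
    return hmac.compare_digest(fingerprint(presented_der), pinned_hex)

shipped_cert = b"placeholder DER bytes copied from the service"
PINNED_SHA256 = fingerprint(shipped_cert)
```

Because the client checks against the one fingerprint it shipped with, a rogue or compromised CA issuing a certificate for the same hostname gains nothing.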
Re: (Score:2)
Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS
Hammer has been saying similar things for years now: OAuth 2.0 (without Signatures) is Bad for the Web [hueniverse.com]
Re: (Score:2)
Having implemented OAuth 1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in OAuth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.
TLS is not broken at all. Using it properly can be difficult. This, as well as the lack of redundant security mechanisms, is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies
Re: (Score:2)
TLS is not broken at all. Using it properly can be difficult. This, as well as the lack of redundant security mechanisms, is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.
To be more exact, the key to using TLS well is controlling the code that determines whether a particular chain of certificates (the ones authorizing a connection) are actually trusted. HTTPS does this one particular way (a fairly large group of root CAs that can delegate to others, coupled with checking that a host is actually claiming to be able to act for the hostname that was actually requested) but it isn't the only way; having a list of X.509 certificates that you trust and denying all others is far mo
Some information (Score:3)
OAuth is an open standard for authorization. It allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their credentials, typically supplying username and password tokens instead. Each token grants access to a specific site (e.g., a video editing site) for specific resources (e.g., just videos from a specific album) and for a defined duration (e.g., the next 2 hours). This allows a user to grant a third party site access to their information stored with another service provider, without sharing their access permissions or the full extent of their data.
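A toy model of the grant described above: a token tied to a scope and an expiry, handed to a third party instead of a password. The class, method names, and scope syntax are invented for illustration; OAuth specifies the wire protocol between sites, not this server-side storage:

```python
import secrets
import time

# Toy model of a scoped, time-limited token grant. Names and scope syntax
# are invented for illustration; OAuth specifies the wire protocol between
# sites, not this storage.
class TokenStore:
    def __init__(self):
        self._tokens = {}

    def grant(self, user: str, scope: str, ttl_seconds: int) -> str:
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (user, scope, time.time() + ttl_seconds)
        return token

    def check(self, token: str, wanted_scope: str) -> bool:
        entry = self._tokens.get(token)
        if entry is None:
            return False
        _user, scope, expires = entry
        return scope == wanted_scope and time.time() < expires

store = TokenStore()
token = store.grant("alice", "videos:album42:read", ttl_seconds=7200)
```

The third-party site only ever holds `token`, never alice's password, and the token is useless for anything outside its scope or after its two-hour window.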
v1 was bullshit too (Score:3)
Re: (Score:1)
I never got what extra level of security sorting the parameters provided; the signature would show up any tampering anyway, it just means you gobble up memory unnecessarily.
Well it's good that someone else understood it and forced you to do it, then.
But in actual response to your answer: it allows the request signature to be calculated by the server you're sending the request to so that it can ensure that the parameters have not been tampered with.
Re:v1 was bullshit too (Score:5, Informative)
I was there, I helped write v1.
The reason you had to sort the parameters etc etc was because OAuth 1.0 was designed to be implementable by a PHP script running under Apache on Dreamhost. Which meant you didn't get access to the HTTP Authentication header, and you didn't get access to the complete URL that was accessed. So we had to work out a way to canonicalize the URL to be signed from what we could guarantee you'd have: your hostname, your base URL path, and an unsorted bag of URL parameters. Believe me, we *wished* for a straightforward URL canonicalization standard we could reference. None existed. So we cussed a lot, bit the bullet, and wrote one that was as fast and simple as possible: sort the parameters and concatenate them.
Go yell at the implementors of Apache and of PHP. If we could have guaranteed that you'd have access to an unmangled Authentication: HTTP header, the OAuth 1.0 spec would have been 50% shorter and a hell of a lot easier to implement.
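The canonicalize-then-sign shape described above looks roughly like this. It is a simplified sketch; a real OAuth 1.0 signature base string includes more required pieces (nonce, timestamp, token, consumer key) and uses base64, not hex:

```python
import hashlib
import hmac
import urllib.parse

# Simplified sketch of the canonicalization described above: percent-encode
# each parameter, sort the encoded pairs, concatenate, then HMAC the result.
# Real OAuth 1.0 base strings include more (nonce, timestamp, token, etc.).
def base_string(method: str, host: str, path: str, params: dict) -> str:
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    pairs = sorted((enc(k), enc(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), enc(f"https://{host}{path}"), enc(param_str)])

def sign(secret: str, text: str) -> str:
    return hmac.new(secret.encode(), text.encode(), hashlib.sha1).hexdigest()

# Sorting makes the signature independent of parameter order in the request.
a = base_string("GET", "api.example.com", "/photos", {"b": "2", "a": "1"})
b = base_string("GET", "api.example.com", "/photos", {"a": "1", "b": "2"})
```

The sort is what lets the server rebuild the identical base string from the unordered bag of parameters it receives, which is exactly the constraint described above.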
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Do you not think it was a flaw to target a spec towards a specific language/architecture?
From the perspective of someone on the outside of the process, it was both a mistake and not a mistake. It was a mistake in that it causes too many compromises to be made. It was not a mistake in that it allowed a great many deployments to be made very rapidly. IMO, they should have compromised a bit less and pushed back at the Apache devs a bit harder to get them to support making the necessary information available.
But I wasn't there, so I've very little room to criticize.
Re: (Score:2)
Speaking of sorting parameters, there is at least one issue I still see in a lot of libraries. The spec says encode things, then sort them. Many of the libs I've seen do it the other way around. Sorting first is the most obvious way to do it, but I guess the spec was trying to avoid issues with locale-specific collations by forcing everything to ASCII first. Most sites use plain alphanumeric parameter names, so people get away with doing it either way.
Still, it goes to show how developers can completely fai
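A quick way to see the discrepancy, with hypothetical parameter names chosen so the two orderings actually differ:

```python
import urllib.parse

def enc(s: str) -> str:
    # Percent-encode everything except RFC 3986 unreserved characters.
    return urllib.parse.quote(s, safe="")

# Hypothetical parameter names chosen so the two orderings actually differ:
# ':' encodes to '%3A', and '%' sorts before '-' in ASCII.
keys = ["name-1", "name:1"]

encode_then_sort = sorted(enc(k) for k in keys)    # what the spec says
sort_then_encode = [enc(k) for k in sorted(keys)]  # the common library bug
```

With plain alphanumeric names encoding is a no-op and the two agree, which is why the bug survives in so many libraries until someone sends a parameter that needs escaping.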
Re: (Score:2)
Re: (Score:2)
Go yell at the implementors of Apache and of PHP
then why didn't you? Last time I checked Apache was open source so you could have submitted your required changes. I'm not quite so sure of PHP, but maybe there is a way to add an extension to it that grabs the unmangled header from your newly customised Apache.
Re: (Score:2)
The problem is, AIUI, that the goal was to make things work on shitty webhosts. So working on an up-to-date apache/php with the right settings is not enough; you have to work on whatever old version of apache/php and whatever crummy config the webhost offers.
Re: (Score:2)
sure, but if you don't fix things, they'll never get fixed. The OP seemed to just be too whiny about how things were difficult, boohoo.
In a year or two all those old Apache webhosts would be upgraded - or TBH, if he'd made the patch and added it they would pretty much all get upgraded in the next update release. And those that didn't, would be really insecure anyway due to other unpatched vulnerabilities. I think webhosts tend to update their servers reasonably regularly.
Thanks Eran! (Score:1)