Why the Semantic Web Will Fail
Jack Action writes "A researcher at Canada's National Research Council has a provocative post on his personal blog predicting that the Semantic Web will fail. The researcher notes the rising problems with Web 2.0 — MySpace blocking outside widgets, Yahoo ending Flickr identities, rumors Google will turn off its search API — and predicts these will also cripple Web 3.0." From the post: "The Semantic Web will never work because it depends on businesses working together, on them cooperating. There is no way they: (1) would agree on web standards (hah!) (2) would adopt a common vocabulary (you don't say) (3) would reliably expose their APIs so anyone could use them (as if)."
So let me get this straight ... (Score:4, Insightful)
It will fail for other reasons too (Score:5, Insightful)
The semantic web will fail because it is too complex, and no one outside the academic community working on it really understands it. The ad-hoc tagging systems and microformats Web 2.0 has brought are good enough for most people, and much simpler for the casual web developer to understand.
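The "good enough" tagging approach is simple enough to sketch in a few lines: a tag index is just a map from tag to pages, and a query is a set intersection. No ontology, no schema. All page names and tags below are invented for illustration.

```python
# A minimal sketch of ad-hoc tagging: no ontology, no schema,
# just a map from each free-form tag to the set of pages carrying it.
# Page names and tags are made up for illustration.
from collections import defaultdict

index = defaultdict(set)

def tag(page, *tags):
    """Record that a page carries the given free-form tags."""
    for t in tags:
        index[t.lower()].add(page)

def query(*tags):
    """Return pages carrying ALL the given tags (set intersection)."""
    sets = [index[t.lower()] for t in tags]
    return set.intersection(*sets) if sets else set()

tag("post-101", "iphone", "review", "apple")
tag("post-102", "iphone", "rumor")
tag("post-103", "review", "camera")

print(query("iphone", "review"))  # only post-101 carries both tags
```

That is the whole mental model a casual web developer needs, which is the point being made: the Semantic Web asks for far more machinery to get a first useful result.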
Web services (Score:4, Insightful)
Reason #1 the Semantic Web will fail (Score:5, Insightful)
I don't need you to mark "This page is a REVIEW of a CELL PHONE that has the NAME iPhone" anymore. All I need to do is Google "iPhone review" or hop on over to Amazon. Problem pretty freaking solved from my perspective.
Corporate Self Centeredness (Score:3, Insightful)
Problem is, when you get so big so fast, there are almost necessarily major flaws in the designs.
Problem is, you never get rid of them again.
Google (Score:2, Insightful)
Why the future tense, anyway? (Score:1, Insightful)
Why it will fail (Score:5, Insightful)
It might fail for the reasons given (no, I've not read the full article yet - naturally), but personally I think it will fail simply because it's too much work for the amount of payback. It would be great if one day, magically, overnight, all our data were semantically marked up, but that's not going to happen. The reality is that we will have to mark up the majority of content by hand. Even then, inter-ontology mappings are so difficult that I'm not sure the system would be much use.
Perhaps worse than that, though, is the prospect of semantic spamming. It would be impossible to trust the semantic markup in a document unless you could actually process the document and understand it. What would be the point of the markup in that case?
What is it anyway? (Score:4, Insightful)
Sure, we're all seeing community sites, blogs, tagging, etc. But each of those sites is an individual site, and their only connections seem to be plain HTML links. Community sites don't really allow collaboration, blogs are standardized personal web pages, and who here uses tags to actually find information? All these things might warrant a "Web 1.0 patch 3283" label, but is it really a new type of web? Is it the type and magnitude of paradigm shift that the first web was? It seems people are just becoming more aware of the possibilities of the same web we had 10 years ago.
Re:Far out! (Score:1, Insightful)
Yes, some degree of abstraction and layering is necessary when developing software. But it must be kept to a sane level. These layers must be developed, ideally, by the very same people or project group, to help ensure a certain level of quality. And sometimes when our layering tower gets too tall, we need to kick out the middle.
Take a typical Web site today. It involves HTML and JavaScript, which is displayed by a web browser. If you're using a browser like Firefox, your web page is in turn displayed via XUL. XUL runs on top of the Gecko rendering engine. Gecko runs on top of a toolkit like GTK+. GTK+ runs on top of GDK and GLib. GDK runs on top of Xlib. Xlib runs on top of C standard library and the underlying operating system. Only now do we get anywhere near the actual hardware.
So we end up with an extremely tall serial tower, where failure occurs if even just one component runs into trouble. The Semantic Web is partially susceptible to this sort of problem, too. Look at its technology stack [wikimedia.org]. If any one of the libraries implementing a layer of that stack fails, your whole Semantic Web stack fails.
Re:What is it anyway? (Score:3, Insightful)
What's really cool is the beginning of desktop to web applications. There are a lot of innovative applications floating around -- mostly niche (being able to create a
One word: SPAM (Score:5, Insightful)
But in the Real World, any online system that is used by a large enough number of people will eventually become attractive for spammers and scammers to defile and twist to their own purposes. So you'll get a deluge of pages that appear to be useful reviews of digital cameras (and are marked up as such) but in fact simply go to a useless "search" page that has lots of link farm references.
And if you say "Ok, so we don't trust the author of the page, we have someone else do it"... then who? Who's going to do all the work? Answer: Nobody. AI is nowhere near being smart enough for this. Keyword searching is, unfortunately, here to stay. If you trust the author to do the markup, then the spammers have a field day. If you say "Only trusted authors" then the system will still fail, due to laziness on most people's part - if a system isn't trivial to implement and involves some kind of "authentication" or "authorization" then nobody will use it, period. The Web succeeded in the first place because anybody anywhere could just stick up a Web server and publish pages, and it was immediately visible to the whole world.
The Semantic Web will fail for the same reason that the "meta" tag failed in HTML: Any system that can be abused by spammers, will be abused.
So, the Semantic Web, which is all about helping people find stuff, will fail. Not because of any technological shortcomings (it's all very nice in theory), but simply because we as people won't work together to make it work. Well, a small number of people could work together, but as that number grows toward the point of being useful, it will automatically reach the tipping point where it becomes worthwhile for the spammers to jump in and foul it all up.
Obvious (Score:3, Insightful)
No matter how cool your RDF/OWL ontologies are, the real world is perfectly happy with plain XML/CSV. If there isn't an obvious benefit, people won't switch.
Re:Far out! (Score:1, Insightful)
It may not be rocket science, but it still is pure logic.
And casual people and logic don't mix very well.
Re:It will fail for other reasons too (Score:5, Insightful)
Re:Reason #1 the Semantic Web will fail (Score:5, Insightful)
Re:It will fail for other reasons too (Score:2, Insightful)
Fast? CSS has allowed us to do that since 1996 [w3.org]!
People used similar arguments for HTML standards when the (first) browser wars were on. The purpose of a standard is (supposed to be) that sites can interact with each other (as is an ideal of Web 2.0) without specific coding for each site. Hence RSS aggregators can grab any RSS feed and display it. If "Web 2.0" is going to be as useful as the web itself, then it needs to avoid splitting into different APIs and formats.
And, for me, this is part of the problem.
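The RSS point is easy to demonstrate with nothing but the standard library: because the format is a shared standard, one generic parser handles any feed with no per-site code. A minimal sketch (the feed content below is invented):

```python
# Because RSS 2.0 is a shared standard, one generic parser can read
# any conforming feed without per-site code. Feed content is invented.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Extract (title, link) pairs from any RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(headlines(SAMPLE_FEED))
```

This is exactly the interoperability the parent describes: the aggregator never needs to know which site produced the feed.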
Re:Reason #1 the Semantic Web will fail (Score:5, Insightful)
No it doesn't. The genius of Google was that it relies on people linking to pages talking about keywords, and uses various tools to identify and promote good linkers.
But the most important reason is that it would be much cooler to have a web where you could say "give me a list of all the goals scored by Romario" and have it list them for me.
That's a curious thing to ask for, since the first Google result is a story about how there is a good bit of controversy surrounding Romario's "1,000" goals. The problem is your request is too vague and doesn't define all the words within itself (i.e., does a goal scored as a teenager in a different league count?).
This goal is quite a bit higher than many realize, as you could get 10 people (5 of them experts) in a room and they wouldn't necessarily be able to agree on the "right" answer.
To ask, or even demand, that computers do the same task as a background function is ludicrous, IMHO (at least when applied to a universal context).
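The link-based ranking idea mentioned above (a page matters if well-regarded pages link to it) can be sketched as a simplified power-iteration PageRank. This is the textbook form of the technique, not Google's actual production algorithm, and the toy link graph is invented.

```python
# Simplified PageRank: a page is important if important pages link to it.
# Textbook power-iteration sketch over a toy link graph; this is an
# illustration of the general technique, not Google's real algorithm.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:          # spread rank along outlinks
                    new[q] += share
            else:                       # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["b"], "d": ["b", "c"]}
rank = pagerank(links)
print(max(rank, key=rank.get))  # "b": everything links into it
```

Note what the sketch does not need: no markup from page authors at all. The ranking signal comes from the link structure other people created, which is the grandparent's point.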
Re:You keep using that word... (Score:5, Insightful)
Lewis Carroll had it right [sabian.org], and George Orwell agreed with him [wikipedia.org]: "Which is to be master" is the question that matters.
In free societies, everyone is master, and our language is conditioned only by the minimal need to communicate approximately with others. Beyond that, we are free to impose whatever semantics we want, and we do this to a far greater extent than most people realize. As a friend who works in GIS once said, "If I send out a bunch of geologists to map a site and collate their data at the end of the day, I can tell you who mapped where, but not what anyone mapped." Individual meanings of terms as simple as "granite" or "schist" are sufficiently variable that even extremely concrete tasks are very difficult.
Imposing uniform ontologies on any but the most narrowly defined fields is impossible, and even within those fields nominally standard vocabularies will be used differently by rapidly-dividing "cultural" subgroups within the workers in the field.
The semantic web is doomed to fail because language is far more highly personalized than anyone wants to believe. I think this is a good thing, because the only way to impose standardized meanings on terms would be to impose standardized thinking on people, and if that were possible someone would have done it by now. Whereas we know, despite millennia of attempts, no such standardization is possible, except in very small groups over a very specialized range of concepts.
Re:It will fail for other reasons too (Score:5, Insightful)
You're missing the point. It's not that the current "presentation" of the Semantic Web is too complex; the problem is that actually creating the Semantic Web is too complex a task for most Web content creators to be interested in.
Essentially, the Semantic Web asks users to explicitly state relations between concepts and ideas to make up for our current lack of an AI capable of discerning such things for itself from natural human language. But let's face it, the average Joe writing his weblog or LiveJournal entries - or even a more technical user such as myself - would generally not be interested in performing this time-consuming task, even with the aid of a fancy WordPress plugin or other automated process. This is what the parent meant by saying it's just "too complicated".
The way to realize the Semantic Web is to advance AI technology to the point where it becomes an automated process. Anything less would require too much manual labor to take off.
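The explicit statements the Semantic Web asks for can be sketched as subject-predicate-object triples. Even a trivial blog post needs several hand-written statements, which is the labor problem described above. The vocabulary terms below are modeled loosely on RDF/FOAF/Dublin Core conventions but are invented for illustration.

```python
# What "explicitly stating relations" means in practice: even one short
# blog post demands a pile of hand-written subject-predicate-object
# statements. Terms are illustrative, not a real ontology.
post = "http://example.org/blog/42"
author = "http://example.org/people/joe"

triples = {
    (post, "rdf:type", "ex:BlogPost"),
    (post, "dc:title", "My trip to the lake"),
    (post, "dc:creator", author),
    (post, "dc:date", "2007-03-21"),
    (post, "ex:mentions", "ex:Lake_Ontario"),
    (author, "rdf:type", "foaf:Person"),
    (author, "foaf:name", "Joe Average"),
}

def objects_of(subject, predicate):
    """All objects asserted for a given subject and predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects_of(post, "dc:creator"))
```

Seven statements for one post, each chosen and typed by a human. Multiply that by every page on the web and the "too much manual labor" objection becomes concrete.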
Re:One word: SPAM (Score:3, Insightful)
I believe all your arguments have been used to explain why Wikipedia will fail. Well, it hasn't failed yet.
It's relative (Score:3, Insightful)
Most likely the semantic web will fail to achieve all of its objectives but achieve some of them, and may eventually rise again after it's failed. This is the nature of progress. Good ideas that fail are usually resurrected later. However, the blogger is probably right: as long as the semantic web is going to be "handed" to us by a group of established corporations, it will most likely never succeed; there's too much incentive for backstabbing in that top-down implementation. For it to succeed, it needs to be so obvious that there's more money and power available by playing nice that all but the most black-hearted capitalists will play nice. We have to be aware that people like spammers exist, though, and anything that could potentially be used to generate advantage will be abused to death.
RDF promotes interoperability and extensibility (Score:5, Insightful)
One of the features of the W3C's model [w3.org] (based on RDF) is that it doesn't push the idea that everyone should adopt the same vocabulary (or ontology) for a topic or domain. Instead it offers a way to publish vocabularies with some semantics, including how terms in one vocabulary relate to terms in another. In addition, the framework makes it trivial to publish data in which you mix vocabularies, making statements about a person, for example, using terms drawn from FOAF [xmlns.com], Dublin Core [dublincore.org] and others.
The RDF approach was designed with interoperability and extensibility in mind, unlike many other approaches. RDF is showing increasing adoption, showing up in products by Oracle [oracle.com], Adobe [adobe.com] and Microsoft [dannyayers.com], for example.
If this approach doesn't continue to flourish and help realize the envisioned "web of data", and it might not after all, it will have left some key concepts, tested and explored, on the table for the next push. IMHO, the 'semantic web' vision -- a web of data for machines and their users -- is inevitable.
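The vocabulary mixing described above works because prefixed terms expand to full URIs, so terms from FOAF and Dublin Core can describe the same resource in one graph without colliding. A stdlib-only sketch of the mechanism (real tools such as rdflib do this with proper parsers and stores; the resource URIs are invented, though the foaf: and dc: namespace URIs are the real ones):

```python
# Vocabulary mixing as RDF does it: prefixes expand to full URIs, so
# FOAF and Dublin Core terms coexist in one graph without collision.
# Mechanism sketch only; resource URIs are invented.
PREFIXES = {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "dc":   "http://purl.org/dc/elements/1.1/",
}

def expand(term):
    """Turn a 'prefix:local' term into a full URI."""
    prefix, local = term.split(":", 1)
    return PREFIXES[prefix] + local

doc = "http://example.org/report"
graph = {
    (doc, expand("dc:title"),   "Annual Report"),
    (doc, expand("dc:creator"), "http://example.org/people/alice"),
    ("http://example.org/people/alice", expand("foaf:name"), "Alice"),
}

# One query mechanism works across both vocabularies:
titles = {o for s, p, o in graph if p == expand("dc:title")}
names  = {o for s, p, o in graph if p == expand("foaf:name")}
print(titles, names)
```

Because every term is globally named, nobody had to agree on a single vocabulary in advance, which is exactly the design point being made.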
Misconceptions and risk-aversion (Score:3, Insightful)
His second point is one of the common misconceptions addressed in the W3C's Semantic Web FAQs [w3.org]. The Semantic Web doesn't require that everyone adopt a common vocabulary.
I have just accepted a position with a consultancy that does a fair amount of work for those cut-throat businesses. And they are interested, very interested, in fact. Which is also why Oracle, IBM, HP, even Microsoft are interested.
A typical use case for them is: you bought your competitor, and each of the companies sits on a big, valuable database, and the two are incompatible. You have a huge data integration problem that needs solving fast. So, throw in an RDF model, which is actually a pretty simple model. Use the SPARQL query language. Now all employees have access to the data they need. Problem solved. Lots of money saved. Good.
But this is not part of the open web, you say? Indeed, you're right. So, Semantic Web technologies have already succeeded, but not on the open web. And since I'm such an idealist, I want it on the open web. So, the blog still has a valid point.
We need to make compelling arguments for why they should put (some) data on the open web. It isn't easy, but then, let TimBL tell you it wasn't easy to get them on the web in the first place. It is not very different, actually. The main approach to this is to capitalise on network effects. There is a lot of public information, and we need to start with that.
So, partly, that's what I'll do. We have emergent use cases, and that's the evil part of cut-throat business. You don't talk about those before they happen. So, sorry about that. I think it will be very compelling, but it'll take a few years. If you're the risk-averse kinda developer who first and foremost has a family to feed, then I understand that you don't want to risk anything, and you can probably jump on the bandwagon a couple of years from now, having lost relatively little.
But if you, like me, like to live on the edge, and don't mind taking risks doing things that of course might fail, then I think semweb is one of the most interesting things right now.
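The data-integration use case above can be sketched without any real triple store: two "incompatible" databases are exposed as triples under their own vocabularies, a couple of mapping statements bridge the vocabularies, and one pattern query (a toy stand-in for SPARQL) spans both. All names are invented; a real deployment would use an RDF store and actual SPARQL.

```python
# Data-integration sketch: a company and its acquisition each expose
# customer records as triples under their own vocabulary; predicate
# mappings bridge them, and one query spans both. Toy stand-in for an
# RDF store plus SPARQL; all names invented.
company_a = {
    ("a:cust1", "a:fullName", "Dana Smith"),
    ("a:cust1", "a:city", "Ottawa"),
}
company_b = {
    ("b:kunde7", "b:name", "Lee Wong"),
    ("b:kunde7", "b:ort", "Toronto"),
}
# Vocabulary mapping: declare which predicates mean the same thing.
same_predicate = {"a:fullName": "m:name", "b:name": "m:name",
                  "a:city": "m:city", "b:ort": "m:city"}

merged = {(s, same_predicate.get(p, p), o)
          for s, p, o in company_a | company_b}

def match(pattern):
    """Match one (s, p, o) pattern; None acts like a SPARQL variable."""
    ps, pp, po = pattern
    return [(s, p, o) for s, p, o in merged
            if (ps is None or s == ps)
            and (pp is None or p == pp)
            and (po is None or o == po)]

# Like "SELECT ?who WHERE { ?who m:name ?n }": customers from either system.
print(sorted(s for s, p, o in match((None, "m:name", None))))
```

The key move is that neither database had to be rebuilt; only the predicate mappings were added. That is why the model sells inside companies even while the open-web story lags.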
Re:It will fail for other reasons too (Score:4, Insightful)
The problem here is trust. All of the previous features of the web, whether it is javascript or metadata or something else, have invariably been abused by those seeking to game the system for profit. The semantic web is asking the marketplace to state relations in an unbiased fashion when there are powerful economic incentives to do otherwise (i.e. everything on the semantic web will end up being related to pron whether it actually is or not). Indeed there are entire businesses devoted to "optimizing" search engine results, targeting ads, spamming people to death, and other abuses. The problem was that the people that designed and built the initial web protocols and technologies did not account for the use of their network by the general public and thus did not take steps to technologically limit abuses (their network of distinguished academic colleagues was always collegial after all so there would be no widespread abuses). The semantic web will fail precisely because human nature is deceptive, not because the technology is somehow lacking.
In fact, this whole discussion is reminiscent of the conversation that Neo has with the Architect in The Matrix Reloaded. The Architect, as you may recall, explains why a system (the Matrix), which was originally designed to be a harmony of mathematical precision, ultimately failed to function, in that form, because the imperfections and flaws inherent in humanity continuously undermined its ability to function as it was intended. The same general principle is at work with the Semantic Web, the perfect system could work in a perfect world, but not in our world because humans are not perfect.
Re:It will fail for other reasons too (Score:3, Insightful)
I agree on this much.
I don't think it really is. It's certainly asking people to make claims about resources, but those claims are themselves resources that may be the subject of metadata making claims about those claims. How people (or automated systems) treat particular claims on the Semantic Web can certainly depend on claims made about those claims by particular other sources of metadata. Trust is an issue, sure, but the Semantic Web itself also provides the framework on which to build a distributed system for resolving issues of trust.
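The "claims about claims" idea can be sketched by giving each statement an identifier, so other statements can refer to it; filtering by trust then falls out naturally. This is a heavy simplification of RDF reification, and all identifiers and terms are invented.

```python
# Claims about claims: each statement gets an id, so further statements
# can assert things about it (e.g. who made it). A much-simplified take
# on RDF reification; identifiers and terms are invented.
claims = {
    "c1": ("ex:page9", "ex:isReviewOf", "ex:CameraX"),
    "c2": ("ex:page9", "ex:isReviewOf", "ex:LinkFarm"),  # spammy claim
    "m1": ("c1", "ex:assertedBy", "ex:trusted-reviewer"),
    "m2": ("c2", "ex:assertedBy", "ex:spam-farm"),
}
trusted_sources = {"ex:trusted-reviewer"}

def trusted_claims():
    """Keep only claims endorsed by a source we trust."""
    endorsed = {s for _, (s, p, o) in claims.items()
                if p == "ex:assertedBy" and o in trusted_sources}
    return {cid: t for cid, t in claims.items() if cid in endorsed}

print(trusted_claims())  # the spammy claim c2 is filtered out
```

Nothing here decides who deserves trust; it only shows that once claims are first-class resources, trust policies can be layered on top rather than baked into the data.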
Muddled and complex are different problems (Score:3, Insightful)
At least with things like TCP/IP, relational database theory, information theory, and the like, the concepts are well defined, not some mishmash of marketing buzzwordspeak and sloppy definition. Of course, TCP/IP as it is now often taught (via OSI) is just as muddled even though the model is (and ought to be) clear as daylight. If people are going to cover OSI and TCP/IP they ought to cover the entire protocol ideas, design criteria, etc. That way people will *understand* why OSI protocols (like H.323) are so awkward when run on TCP/IP. [/rant]
The big thing is, instead of having a vague marketing buzzword for something, it is helpful if we divide things into usable and practical concepts: social networking, web services, service-oriented architectures, semantic markup, etc., rather than lumping it all together into a vague term that doesn't really mean anything.