Hypertext Creator: Structure of the Web 'Completely Wrong' 357
angry tapir writes "The creator of hypertext has criticized the design of the World Wide Web, saying that Tim Berners-Lee's creation is 'completely wrong,' and that Windows, Macintosh and Linux have 'exactly the same' approach to computing. Ted Nelson, founder of the first hypertext project, Project Xanadu, went on to say, 'It is a strange, distorted, peculiar and difficult limited system... the browser is built around invisible links — you can see something to click on but you've got nowhere else to go.'"
Smokin' (Score:2, Insightful)
I'll have some of whatever he's been smoking.
WTF? (Score:4, Insightful)
“[My approach] would be entirely different from today's documents where you look at one page at a time and you can see a ribbon or beam connecting documents together,” he said. “Having to refer to a paragraph and a sentence in an e-mail is just so barbaric when you could just strike it out and make the connection between sentences.”
Is it just me, or is this just completely incoherent? What the hell is he talking about?
opening a URL is like going to the store (Score:4, Insightful)
Yawn (Score:4, Insightful)
Yet another visionary wanting to do something different just for the sake of being different. It's become popular lately to claim that particular industries or areas are doing it "all wrong," because naturally, if their whole process is "wrong" and you know the "right" way, then you're a genius, right?
In reality, some things haven't changed in a long time because we've figured out something that works well. Every time I hear one of these "revolutionary" interface ideas, it works well for the couple of examples its creators can cite, but typically falls flat when you try to adapt it to the entire world of computing.
Oh Really? (Score:2, Insightful)
Put Your Money Where Your Mouth Is.
Confucius say: (Score:4, Insightful)
man who says impossible shouldn't interrupt man who does
So Teddy boy comes up with a concept, theorizes around it for 20 years while accomplishing (near) zilch, building his ivory towers out of clouds, and now he's complaining about the 50 million bazillion websites people have made, some of them actually useful? Jeeze, at least pretend to be relevant by helping pound a stake through the heart of Flash.
Re:Smokin' (Score:5, Insightful)
I think what he's suggesting is this:
Many documents are composed of parts of other documents. If I write an essay I might quote from source texts, scientific papers, other people's work on the subject, interviews I've conducted, etc., and I'll add my own ideas around this. At the moment, I duplicate (retype) any source material and provide a link to it. The material I've linked to doesn't automatically link back. Under his system, I could instead make a link that pulls in the text from the version of the document I'm looking at, and that provides a two-way link.
It's a nice idea, but unless you can make it easy to create documents with all these links (and ensure they don't need any maintenance) I don't see how it would catch on.
Wikipedia's software is close in some respects -- you can include pages (but not, as far as I'm aware, selected bits of pages) in other pages. There aren't two-way links in the UI, but it would be trivial to add them.
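The transclusion-plus-backlink idea from the parent can be sketched in a few lines. This is a hypothetical toy, not Xanadu's actual design (which addresses versioning, micropayments, and much more); all names here are made up for illustration.

```python
# Toy sketch of two-way transclusion: quoting a span of one document
# into another automatically records the reverse link on the source.

class Store:
    def __init__(self):
        self.docs = {}        # doc_id -> text
        self.backlinks = {}   # doc_id -> set of doc_ids that quote it

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        self.backlinks.setdefault(doc_id, set())

    def transclude(self, src, start, end, dest):
        """Quote docs[src][start:end] into dest, recording the backlink."""
        snippet = self.docs[src][start:end]
        self.docs[dest] = self.docs.get(dest, "") + snippet
        self.backlinks.setdefault(src, set()).add(dest)
        return snippet

store = Store()
store.add("paper", "Hypertext should support two-way links.")
store.add("essay", "As Nelson argued: ")
store.transclude("paper", 0, 9, "essay")

print(store.docs["essay"])       # "As Nelson argued: Hypertext"
print(store.backlinks["paper"])  # {'essay'}
```

The point of the sketch is the last line: because quoting goes through the store rather than copy-paste, the source always knows who quotes it, which is exactly what plain web links don't give you.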
Re:I know a couple of the Xanadudes... (Score:4, Insightful)
One thing they've mentioned on many occasions is that 404 errors bug the shit out of them. In the Xanadu system, all links were two-way, and you couldn't end up with a broken reference like that.
How would it be possible to not have 404s unless every document took control of (i.e., kept a copy of) every document to which it linked (and subsequently would have to do the same for everything linked in those linked pages, ad infinitum)?
That seems to be the obvious flaw in everything this guy has talked about for 50 years. XanaduSpace is really no different from a web browser with regular links; all it does is load all linked pages simultaneously and display the linked documents in the background of some 3D view. Real browsers don't do this because they have to deal with the reality that linked pages are hosted remotely, and therefore have latency and bandwidth costs that need to be balanced against the likelihood of a user actually wanting to follow that link.
XanaduSpace's entire concept seems to be predicated on the assumption that all linked content is immediately available and immutable. This obviously cannot work on non-trivial amounts of data. Either it would mean having the entire Internet on your local computer or, slightly more realistically (but altogether more scary), having some kind of central Internet server/database/authority that maintained control of all published documents. Short of an international fascist uprising I don't see that happening.
Re:WTF? (Score:4, Insightful)
It's a nice concept, but where it falls down is metadata. You need good metadata on every document when it's stored to make this sort of thing work. The computer does not know Romania, the country, from some girl who happens to be named Romania. The trouble is there are really only two solutions:
A) Make end users actually tag things correctly and completely, or
B) Use mind-boggling amounts of computing power to do the sort of deep statistical analysis, like IBM's Watson, needed to categorize things.
B will likely work in the near future; A has been tried a thousand times and there is no sense in going down that path anymore.
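The ambiguity problem in option A can be shown in a few lines. This is a deliberately naive sketch with made-up documents: without a tag, a string match cannot separate the two senses of "Romania"; with a tag, it can.

```python
# Two documents mentioning "Romania": the country vs. a person's name.
# The "tags" field is the hand-applied metadata from option A.
docs = [
    {"text": "Romania joined the EU in 2007.", "tags": {"Romania": "country"}},
    {"text": "Romania posted new photos.",     "tags": {"Romania": "person"}},
]

def find(term, kind=None):
    """Return texts mentioning `term`, optionally filtered by tag kind."""
    hits = []
    for d in docs:
        if term in d["text"] and (kind is None or d["tags"].get(term) == kind):
            hits.append(d["text"])
    return hits

print(find("Romania"))             # both documents match
print(find("Romania", "country"))  # only the country-tagged document
```

The untagged query returns both documents; only the metadata disambiguates. That tagging burden, applied to every term in every document, is why option A keeps failing.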
Re:Smokin' (Score:5, Insightful)
The way the web works today doesn't allow this. Sure, you could fetch some text fragment from a remote server somewhere, but what if that site goes down? Or what if your document contains 100 snippets from 100 servers? Just imagine the load times.
At least now, when presented with a hyperlink, the user has an expectation that it might be broken, but even then the locally stored text remains accessible.
And we haven't even mentioned copyright...
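The parent's point about locally stored text staying accessible amounts to a cache-with-fallback strategy, sketched below. `fetch_remote` is a stand-in for a real HTTP call; all names are hypothetical.

```python
# Hedged sketch: resolving transcluded snippets with a local fallback copy,
# so a dead server degrades to cached text instead of a broken document.

def fetch_remote(url):
    """Stand-in for an HTTP fetch; here it simulates a server being down."""
    raise ConnectionError("server unreachable")

# Local copies saved from earlier successful fetches.
cache = {"http://example.org/q1": "the cached quotation"}

def resolve_snippet(url):
    try:
        text = fetch_remote(url)
        cache[url] = text  # refresh the local copy on success
        return text
    except ConnectionError:
        # Server is down: fall back to the local copy, if any.
        return cache.get(url, "[snippet unavailable]")

print(resolve_snippet("http://example.org/q1"))  # "the cached quotation"
```

This is essentially what today's copy-and-link convention already gives you for free: the quoted text is the cache, and the link is the (possibly dead) remote reference.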
Re:Smokin' (Score:2, Insightful)
It's basically a document version of DLL hell.
What's interesting to me is that many technology types remain so enamored with this sharing of a single resource. It comes around every few years in different forms with different buzzwords, but it's always the same principle: trust your data to someone else. Invariably people fall for the latest buzzwords, currently "the cloud", and they get bitten in the ass when something happens because they weren't in control of their own data. People want to own CDs and DVDs because they don't trust that they will be available forever. People want their own software because they want to control when and if they upgrade. People want to store their own pictures and documents because they don't trust other people not to do evil things with them. The list goes on forever. What's interesting is this constant push to get away from independent management of one's own data. NEVER GIVE UP CONTROL OF YOUR DATA.
Re:It's all about DRM (Score:5, Insightful)
No. DRM does suck. Definitively and conclusively sucks.
There are two reasons why it sucks:
1 - There is no way it could work. And by that I don't mean any practical, legal or social factor. It simply can't work; the workings of our universe don't permit DRM to work.
2 - Every human activity must be held hostage to it for us to pretend that it works. The content industry can go to hell; most people think it is far more important to be able to afford real things.