Second Coming of Technology

BgJonson79 writes "A Yale computer scientist has published his views on what the next 15 years of computing will bring. He says the last breakthrough operating system was for the Mac and that Linux is obsolete. He also says the present file system is obsolete, as are some filenames." Many good points in this one. Great discussion fodder: it talks about how we just sort of accept flaws in the systems we use. It also talks about how in the future the net will be less about computers and more about the net (e.g. astronomy isn't about telescopes). Definitely worth a read.
  • by Chris Parrinello ( 1505 ) on Friday July 07, 2000 @12:16PM (#950528)
    Well I did a quick search:

    6,006,227 Document stream operating system [164.195.100.11]

    Just FYI

  • While the author makes a few interesting points, this article is what you would expect from a comp-sci student who's been detached from the reality of computing for years. While his 'life-streams' might potentially sound good on paper, how would you ever model this? He makes the analogy of how the brain accesses information. However, the brain takes one word like 'Fifth Avenue' and then takes into account a large amount of subconscious knowledge to find exactly the result you're searching for. On a computer, on the other hand, you would need to figure out all this subconscious information (date, time, relationships of information) to get what you're looking for. It's just like any other find command. Of course the obvious solution to this would be neural implants, but by the time we have something of a quality sufficient to do that, all of his ideas will be obsolete!
  • "The important challenge in computing today is to spend computing power, not horde it."

    I love it when people use the wrong homonym to make a point that's obvious to the point of nonsense.
  • In fact, the net is more about pornography, gambling and copyright violations than there is actual pornography, gambling and copyright violation on the net.

    Ever get the impression that your life would make a good sitcom?
    Ever follow this to its logical conclusion: that your life is a sitcom?
  • I don't know about you, but astronomy IS often about the telescope. A lot of astronomers really love their field, and feel just as excited about a cool telescope as computer nerds feel about a cool computer. In any field, there will be a bond between man and tool. A carpenter takes care of his saws and chisels, a race car driver has a bond with his car, an athlete cherishes certain pieces of equipment. Computing is just like any other field in that respect. Sure, there are carpenters who couldn't care less about the tools and the profession and simply do it as a job, but in all fields, a great many people actually like what they do, and liking the equipment that they use is often a part of it. So yes, a lot of computing may become more about the net than the computer itself, but a large number of computer users will still care about the elegance of the environment they're using, and the niftiness of the latest hardware or software. These people aren't necessarily only the nerds. In all computing areas there are people who simply like an OS or a system for its elegance. NeXT users are smitten by its elegance, MacOS users love its creative aspect, and BeOS users tend to be quite obsessive about their OS. Even Windows users can get attached to the raw power of DirectX.
  • Come on now fellas. You can't be serious here. BeOS is cool, but it's not in the same class as something like Mac OS X. At least use something before you attempt to belittle it.
  • ooohhh, a whole -six- pages, huh? honestly...
  • So don't store the information in one large (mail, whatever) file. Store individual items individually, and come up with a good search mechanism for them.

    This has been done. When I used OS/2, my mail program (MR/2) stored each message as its own file. Before I deleted the partition, I backed up all of my existing mail files. This came in EXTREMELY handy because I could simply use grep to find something like a long-lost order number or serial number (purchasing downloadable software is not without its pitfalls).

    The main downside: even with the most current file systems, you waste some space by creating often thousands of files. Not to mention that trying to open the directory in any sort of graphical file browser proved a rather tedious exercise in futility.
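
    For the record, that kind of search is only a few lines of Python; a minimal sketch (the mail directory and the order-number string below are made up):

        import os

        def search_mail(mail_dir, needle):
            """Scan every per-message file under mail_dir for a substring,
            roughly what a recursive grep does over a one-file-per-message store."""
            hits = []
            for root, _dirs, files in os.walk(mail_dir):
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        with open(path, "r", errors="replace") as fh:
                            if needle in fh.read():
                                hits.append(path)
                    except OSError:
                        pass  # unreadable file; skip it
            return hits

        # Hypothetical usage: dig up that long-lost order number.
        print(search_mail("/backup/mr2-mail", "ORDER-12345"))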
  • Yeah, I've heard of David Gelernter, too, and realize that he's an author and professor at a highly regarded university. But the essay still stinks. Fame and credentials are more important than content? You must design web sites. Sorry for the flame.

    WWJD -- What Would Jimi Do?

  • It's our world, friends .. let's use it the right way

    I am not your friend. I am your worst enemy, you fundamentalist freak.
  • Wow... What a nut... Course, that probably means he's not far off.
  • >Might a horizontal stack of "book spines" onscreen be more useful than a clutter of icons?

    How about a vertical one -- let's call it "List Mode" (or whatever your OS calls it)!

  • Don't listen to him! He's an agent! He only wants to keep you in the Matrix. . .
  • by TheDullBlade ( 28998 ) on Friday July 07, 2000 @12:00PM (#950541)
    I consider something "pointless fluff" when everything in it provokes one of two responses: "of course, that's obvious" and "of course not, that's absurd". This article neither provokes thought nor predicts with any accuracy, just annoys and confuses.

    There will be no real revolutions in interface because we've already found the only two possible interfaces:
    -language (CLI)
    -(usu. simulated) physical device manipulation (GUI)

    That's it. Those are your only choices: tell the computer to do it, or work the buttons and levers on the machine that does it.

    Yes, these will evolve. We will teach computers to speak with us more naturally (natural language capability), and we will make more intuitive and usable device simulations (virtual reality). These are not revolutionary changes, just natural and predictable evolutionary changes.

    To call either obsolete is foolish, absurd. With the limited capacities of current computers, the language interface is more efficient (for most tasks) for those who work with computers enough to make learning the language worthwhile, and the direct manipulation of simulated machines is easier for those who don't use computers as much (and for those tasks which involve manipulations of simulated physical objects: like drawing and 3D modelling).

    Okay, there's one more: direct subconscious control. The computer reads your mind, knows what you want, and gets it for you without you having to consciously communicate with it or even consciously understand what you want. This will happen, but not soon, and likely it will be a failure. People like conscious control, and don't trust computers. At any rate, this is more properly considered a form of mind-enhancement than communication.

    Similarly, the relationship between computers and networks isn't going to undergo any dramatic revolutionary change. We'll continue to tweak what should be served and what should be processed by the client based on costs and capacities of processing, storing, and transmitting data. The baseline of things that can't be trusted to the network will also be preserved.

    And, of course, we'll continue to improve methods of data storage and retrieval. Hierarchical data storage won't go away; it's the natural system for us programmers to work in, it's very useful to have things like file paths and URLs. There'll just be more databases on file contents and better search features.

    Timestreams, on the other hand, are just a bad guess. Linear organization? It has its place, but I don't see it gaining any special prominence in the future. Computer use is more timeless than focused on time.

    Revolutions will take place in task-areas which do not yet exist, to solve problems we don't yet recognize as problems.
  • And then he goes and asks for tactile feedback in a mouse...

    Logitech manufactures one of these. With an appropriately rigged GUI, it will give force feedback for things like buttons, scroll bars, etc., that makes it easier to be sure you're in the right place. I've never tried it, though.
  • You wouldn't have to control every cell in the body. Once your new (I mean GNU) Linux body was installed it would run unattended almost indefinitely. On the other hand if you did want to make changes you would have the complete source code and could make modifications to your heart's (and other body parts) content. You would have nifty scripting tools like Cellular Perl or NanoPython to help keep things running smoothly. And tripwire could be used to control things like cancerous growth, viral infections, and probably even dangerous mood swings.

    Best of all, however, would be that I would finally get that third arm I have always wanted. Not only would it improve my ski-boxing, but it would make it possible to use the mouse without taking my hand off of home row on the keyboard! Woohoo, that would be sweet!

  • I believe that is precisely the point the author was trying to make. It is very much the status quo on slashdot to immediately flame any and all posts that begin to suggest that we have a higher moral authority than human desire.

    Truly this is not the reaction of someone with an open mind, which you self-proclaimed technological elite seem to think you possess.

  • >Moreover, is it someone like our esteemed Mr
    >Katz who wishes to help shape the lexicon for
    >the new age because they realize they lack the
    >ability to actually work with the technology
    >itself.

    Ok, so I had the same difficulty stomaching this article too, but do your research before you start with the ad hominem attacks. David Gelernter is a respected author and programmer (yes, well, how else do you get a CS position at Yale?) with more than a little research under his belt. He was also seriously injured by the Unabomber, if you recall.

    Just so you know.

    spreer

  • by double_h ( 21284 ) on Friday July 07, 2000 @11:19AM (#950546) Homepage

    Some good cyberpoints, but I can't help but cyberreact dubiously to a cyberbody so cyberattached to cyber-buzzwords in the cyberyear 2000.

  • by Golias ( 176380 ) on Friday July 07, 2000 @11:20AM (#950549)
    ...the net will be less about computers and more about the net.

    That's no prediction. The net is already all about pornography, gambling, and copyright violations.

  • Agreed, if you focus on the tools instead of the product or uses, the point gets lost. Even now we have to remember that the point of computer technology is the use, not the technology itself.

    However, that does not mean we should give up trying to innovate at the "lower levels." What if car manufacturers quit making better engines and only focused on making more comfortable seats and installing better stereos?

    There is always room for innovation and improvement. Linux is proving that against Micro$oft right now. Let's not believe there will be an "end" to change - technology does not encourage entropy.

  • I disagree with you even on that. MacOS X is merely an incremental improvement. True, it combines the ease of use of the previous MacOS and NeXT with the power of the underlying BSD, *but* it doesn't improve on any of the others significantly. Aqua is beautiful, yes, but by no means revolutionary. The concept of providing for old applications via a wrapper is not new either, c.f. Mac's old 68k emulation layer, or Wine. It's a great application of technology, yes, but nothing new in and of itself.
  • 23. The computer mouse was a brilliant invention, but we can see today that it is a bad design. Like any device that must be moved and placed precisely, it ought to provide tactile feedback; it doesn't.

    Hello...ever hear of force-feedback?

    This guy postulates and pontificates on a subject that he seems to know little about. Even his lexicon sucks..."cyberbodies"...gimme a break.

    He bitches about all the flaws in today's computing environment, but doesn't seem to give us any solutions. I even went back and forced myself to read the article again (yech!) to make sure I wasn't wrong.

    This guy is so full of BS it's coming out of his ears...what is this:

    32. You shouldn't have to put files in directories. The directories should reach out and take them. If a file belongs in six directories, all six should reach out and grab it automatically, simultaneously.

    So he's describing a semi-complicated linking system. BFD.

    David Gelernter gives us nothing new here with his "Manifesto". BTW, I think anyone who publishes anything called a manifesto needs to have their head examined.

  • 20. If a million people use a Web site simultaneously, doesn't that mean that we must have a heavy-duty remote server to keep them all happy? No; we could move the site onto a million desktops and use the internet for coordination.

    What is described here is Usenet. An article starts in one location, and spreads out over a network of Usenet servers, and anyone can hook up to their LOCAL server to retrieve it.

    It could also be used to describe IRC. At this abstraction layer, he is discussing methodologies that are already in place for certain jobs. You know... Where applicable. But he takes these methodologies and applies them to everything, as if there is only one correct way of doing something, no matter what it is you're doing.

    Since he brought up cars himself, this would be analogous to Nissan predicting the end of the 'net, because in the future, we will all DRIVE to the web sites, instead of sitting at home hogging our DSL connections. (If your ISP actually gets around to installing your DSL connection, but that's beside the point.)

    Sure, if I drive to my web site, I would definitely know where I was, and I could give my URLs by longitude and latitude, but it is hardly efficient. I like my file system, thank you very much; it is a really neat way of organizing my files and then pulling them out directly by name.

    Every method, every architecture has its specific use; it cannot be applied everywhere. It is quite obvious that the author of the article is a visionary who only sees things from the end user's point of view and doesn't understand the architecture behind it.

    He presents nothing more revolutionary than Microsoft's implementation of "Favourites", that someone so brilliantly pulled out as an example of why I should start using Internet Explorer instead of Netscape.

    "See, Microsoft invented this new feature called 'favourites', where I can store my favourite web sites, so that I can go back to them later, without writing down the web site. And Netscape doesn't have favourites." - ever hear of bookmarks, buddy?

    Except, if I take this story and put it in the hands of this author, he'd be presenting it as "in the future, there will be no files, only bookmarks. We will have no documents, only links."

    Anyway... enough ranting...

  • by PD ( 9577 )
    Yes, that's true. His manifesto was a rant against technology.
  • by slipping it your calling card

    Maybe I'm just some kind of perv, but, uh, I'm going to go "slip it" to my notebook with a pcmcia card...

  • come on... computers do not work wonderfully at all...

    And they won't, until we put together an additional abstraction layer on top of the current filesystem (one that hides the ickiness of files from the end user).

    Seriously, users don't care about file systems or file system structure; they just want their work to be where they put it when they last worked on it.

    We, as programmers and tech-people, have grown accustomed to the ways that computers act and expect us to act. Currently computers force humans to act less like people and more like machines.

    And computers are rude.

    I'm sure that you have filled out an application for a video store. When you are done with it, does the piece of paper jump up and scream out "You need to put a credit card number here, or I won't let you give me to the clerk"?

    Nor does it ask: "What would you like to name me?"

    That's the lunacy of our current "filesystem": we've got to name EVERYTHING. We don't name everything in real life.

    If I am drinking a can of soda, and I set it down, I don't need to save it as "mysoda7-7-00.pop" and then neatly place it in a file cabinet, or risk it disappearing and all traces of me drinking it being removed from my body - that's the contradiction that faces users and current file systems. They must save all the time, because the computer is not like the real world; the computer will forget what you have done.

    I set the damn thing down, and it stays there, nicely maintaining its state (ok, it may get warm and flat, but suspend disbelief with me here).

    But the important thing here is, it didn't ask for a name. The can will be where I put it (ignore outside forces) and will stay there, in its current state, until I drink more of it, throw it out, or spill it on the ground.

    Soda cans don't need names. Neither do term papers (I am writing a paper on gravitational mechanics, I shall call it "Newton").

    The point here is that computer systems force people to think in ways that are completely unnatural and illogical. The words "AND" and "OR" mean exactly the opposite in real life and on computers.

    The excuses that we as developers, designers and implementers use to perpetuate these anomalies are poor.
    "The user isnt computer literate"
    "That a training issue"
    "You arent supposed to do it like THAT"

    Computer literacy is a joke; normal people don't want to be "computer literate", any more than we want to be "Accounting Literate".

    Yes, it may interest some people, but most people only care that the accounting department gets their paychecks right. They don't demand that everyone be "Accounting Literate" in order to get a correct paycheck.

    Likewise, we should not force users to become "computer literate" if all they want to do is produce a term paper. Writing a term paper should not involve fiddling with operating systems, playing with the file system, launching applications, or saving and storing copies of the "document".

    These are all things that should be transparent to the user. All they want to do is do their term paper, and not muck about in computer hell for hours.

  • Many filenames are already obsolete:

    command.com
    win.exe
    c:\

    and the list goes on :)
  • by oGMo ( 379 ) on Friday July 07, 2000 @01:12PM (#950576)

    This article may or may not be purposefully controversial; likely it is. Things that make people think and re-examine current ideas usually are quite controversial.

    However, your proposition that a computer is a "device that runs programs" is incorrect, perhaps subtly, but the mistake is very important, and it is the reason the rest of your reasoning is also incorrect. Let's look at this a bit more closely.

    First, your statement. At the core, a computer does not run "programs". It executes code, which in turn manipulates data in some fashion (whether moving it to the screen or to a disk, it's data). What's the difference, you ask?

    Saying that a computer runs programs presupposes an organizational model, a split between data and functionality. However, a computer doesn't presuppose this at all, as we can see if we examine it at a lower level. "Modern" operating systems, of course, make this supposition, and their underlying model is what "forces" higher levels of activity (developers and users) to also operate under this supposition. In fact, we've been using operating systems that function in this manner for so long that for the most part we all merely assume that the computer itself imposes this model on us. This isn't so! Why is this important?

    When someone suggests another model, our assumption that the computer forces us to our current model blinds us to the possibilities, and even the very nature, of the newly-suggested system. As developers we no longer even take the time to consider a better way of doing things. If something needs doing, we write a program, because it's programs that get things done! Or so we're led to believe. So we continue to believe, as we wander further and further down an increasingly difficult path of programs, applications, and files. What's wrong with this path, though?

    After all, we've been using programs to do things for us for the most part since the first computer was built. Old doesn't imply that this is bad, so what's the problem? Computers are a tool. They are meant to be used to aid in whatever we're trying to accomplish, like any other tool, whether this is to write a book, or simply relax with a nice game of Quake 3. Just like any other tool, though, the closer they fit the problem, the better off they are. You want a hammer for a nail, after all. Programs, however, come in two varieties: applications, and utilities. Applications simply don't scale. They grow into larger and larger monstrosities of code, adding whatever new "feature of the week" seems cool to the developer or marketing department, and only interact through kludgery and hacks (such as OLE and related mechanisms). Applications aim "shotgun-style" to hit a broad segment of functionality, and if one doesn't do what you want, you're out of luck. Unix-style utilities on the other hand do a single thing very well, and are typically built to be useful in a chain of interaction (such as a pipe or other redirection). These have problems too, but different ones. Typically they require knowledge of how to get from what you've got to where you're going, that is, which commands will get your data, modify it in the way that you wish, and then output it to where you want. Often they require "data frobbing" along the way to make the next utility understand what the last one said, since the stream of data has no meaning outside what each utility gives it. So what would be better?

    An object-oriented (OO) system, as many people have suggested, is really the next step (no pun intended) above a utility-centric system. As I'm sure you know, an OO model reorganizes ("encapsulates") code and data into units ("objects") that, because of the encapsulation, have the implicit meaning which our streams lacked. (They also add a lot of other things I won't go into, because I'm sure you know, and it's not necessary here.) This is not fundamentally different from how a computer works. It is merely a more useful reorganization of what a computer already does, namely, executing code (object methods) and manipulating data (object properties). How is this more useful?

    Objects can interact without the problems encountered with utilities, but retain all the advantages. There is no need to worry about how data is formatted so that something else may parse it, but rather each property can be queried directly upon request, when necessary, if necessary. Objects remain independent, focused only on what they do, without growing features out of control to handle every case under the sun. We can use these objects as building blocks to form what we're after: they model every situation we can come across with a great degree of accuracy. How can I make such a broad statement? Look around where you're sitting right now. Your desk, your monitor, keyboard, pens, paper, books, etc. They're all objects. Everything we use and work with is an object already: it's natural. This is another major advantage (and also for some, a disadvantage): we're used to objects. People already know how to work with them. Unfortunately (and here's the disadvantage), we've all been conditioned to working with programs so long, that using objects on the computer seems unnatural. We can't think of any other way of doing things than opening up an application and saving that file. Here's the test though: what happens when you sit someone down who has never used a computer before?

    Usually, they want to start typing, or start drawing, or immediately attack their problem at hand. Instead, they have to go through a set of unnatural and unusual steps, like starting up the word processor or drawing program, making a new file (of what type?), and any other number of things, before they can start writing or drawing. And once they've done that, they're not done! They have to save it, which is quite a foreign concept to most (after writing on that piece of paper, do you have to tell it to remember what you just did? I don't think so.) So, how would these common actions work in a totally object-oriented environment?

    Instead of starting a program, and making a new document, you might simply grab a piece of paper, and start typing onto it. Or, pick up your tablet (or grab your mouse), and start drawing onto it. The code to decide what tools to present is trivial - if you don't believe me, look at GIMP or any other device-context-aware program. Pick up a tablet stylus and it'll switch properly to the last tool you were using. Same with the mouse, etc. Once you're done? You're done. You wrote that chapter or drew that picture, just put it away. Objects are persistent, and this is natural. But how to restore that change you didn't want to make? Or revert? Undo, and version control. Already been proven to work. But how would "other" things, like Quake, work, in such an environment?

    Same way. Each quake level is still composed of objects that do things, have certain methods. Quake levels themselves are "containers" for these objects. Internal representation is up to the implementation of the object. Fast 3D display is for an interface object (a "View") which determines how to display these. (A level editor would be much the same, use the same objects, but present a different view.) Simple, see?

    This is important. Hopefully this will cause those of you who chance to read this to stop for a moment and think of the possibilities, and a better way of doing things, and maybe someday we'll even see a system like this.

  • If I am drinking a can of soda, and I set it down, I don't need to save it as "mysoda7-7-00.pop"

    On my desk, right now, I have a dozen "Untitled Soda"s

    The excuses that we as developers, designers and implementers use to perpetuate these anomalies are poor.
    "The user isnt computer literate"
    "That a training issue"
    "You arent supposed to do it like THAT"

    15 yr olds aren't "driving literate". 5 yr olds aren't "money literate" (the nickel is bigger, but worth less?). We live in an integration of disparate systems. We need to learn how to use (or be used by!) said systems. Cars, money, public transport... computers are no different. By and large, for Joe End User they are only marginally more difficult than driving.

    normal people don't want to be "computer literate", any more than we want to be "Accounting Literate".

    Programmers, sysadmins etc. are the "Accounting Literate" people. End users only need to know how to recognize various denominations, write a cheque and use their bank card.

    All they want to do is do their term paper, and not muck about in computer hell for hours.

    May I suggest a pen? To write a term paper you need to know several systems.
    1. language. Can you speak it? Understand it? Learning time: 4 years.
    2. writing. "Joined-up" writing optional. A big system with lots of tricks. 'Its' versus 'It's'. 'i before e except after c' (except for science and heinous and... 40 others). Most /. posters don't grok this system yet :) Estimated learning time: 3 yrs basics. University for advanced.
    3. typing. Almost mandatory since the 30's. Learning time: 2 yrs

    Figuring out that the icon of the document that says "document" on it is your (wait for it...) document can't be that tough after all that.... can it?

  • About the only thing that humans do mentally in their environment is name things.

    Not only do we name objects, the more sophisticated can name concepts. I call that a chair, that thing over there a desk, and that a cardboard box.

    The more important things we give more and more unique names. "Hand me the pliers, no, the red pliers". We've just named it "Red Pliers".

    Really important things we name "Bob", or "Carrie". We even make up our own unique names like "Sweetie", or "Crunchy Love Biscuit".

    And we all strive to maintain identity through our own names, either by attempting to be unique (note the intentional misspelling of my slashdot account), or by referring to something (imagine the book you could write chronicling the references in every slashdot account login).

    Names can also be the strongest, most powerful connection to a persons mental awareness. If you doubt that, walk up to someone and call them a nigger. Or a kike. Or any of a hundred labels that we have created to do damage to a person.

    The act of naming something is very inherent in being a human. Vaguely referring to something is, as a result, very annoying. "Which pliers? I have two pliers here. Damn it! Which one do you want?!?". "Uh, the last I worked with?".

    Filenames are the natural method. We name a million things in our mind throughout the day.

    --
    Evan

  • by istartedi ( 132515 ) on Friday July 07, 2000 @12:05PM (#950581) Journal

    talks about how in the future the net will be less about computers and more about the net (e.g. astronomy isn't about telescopes)

    This analogy is flawed. The network is to computers what empty space is to astronomy. Light is to a telescope as information is to a computer. Information traverses the network just as light traverses empty space (although not quite so efficiently).

    The statement is also flawed. Who really cares about the network, other than those who maintain it? The only time I ever think about the network is when it goes down. What I care about is the stuff that's on the network. I want to get stuff from the network that makes the PC better.

    This is not to say that the network is not important, only that it is a means to an end, not the end itself.

  • The truth is, there hasn't been a real 'revolution' in computers since 1947. All we've done with them since then is make them smaller, faster, and paint them in prettier colors.

    There have been some remarkable advances in computing technology over the past 50 years. For Instance:
    • Neural Networks
    • Transputers
    • Quantum Computing
    • Multi-processor clusters (Beowulf, etc.)
    • Networked CPUs
    • Timeshare systems


    Admittedly, not all of these are equally significant, but they do represent some attempts to go beyond the Von Neumann architecture which computing has followed since the introduction of ENIAC [upenn.edu].

    One of the things that has been lost in this process is the ability to make decent analog computers. In the first half of the 20th Century, almost all of the routine calculations were done using these instruments, epitomised by perhaps the most useful of them: the slide rule. Although for the most part they lack the precision needed for some of today's applications, you can get some calculations performed with a well-designed analog computer much faster than you can with a 1 GHz CPU. It just takes a good mechanical engineer and machinist rather than an electrical engineer.

    Although not really a part of this list, the introduction of a personal computer allowed computing technology to be spread to the point that an average person could sit down with the technology and try to use it.

    Probably the number one idea to keep in mind about computers was missed by the author of this article: electronic computers are general-purpose machines

    You can take a CPU (putting this broadly... I'm talking right now about a "black box" that holds a processor, disks, network interface cards, I/O ports, etc., and you don't care where it sits) and by using the very same box you can have it perform all kinds of various functions. For example:

    • operate the engine of your car
    • monitor usage of gasoline at a gas station, and run a cash register
    • build an automobile from the ground up, controlling robots in a factory
    • act as an interactive toy for toddlers
    • act as a sports medium to play games for older folks (like most /. readers)
    • respond as a really beefed up typewriter that even helps proofread your content
    • monitor your house for security/fire protection


    At this point I know I'm preaching to the choir, but it seems like he totally misses this point. Only as computing technology changes, through evolutionary rather than revolutionary processes, will there be any of the changes that he is discussing. The revolution, if any, will be with some of the alternative computing techniques that I mentioned at the beginning of this post, where you will be able to develop machines that can do something that nobody even expected.

    Of course coming up with a new computing model is about as difficult as coming up with a new universal physical theory, like Celestial Mechanics (Newton) or Relativity (Einstein). I seriously doubt the author is in the same caliber of creativity as either of these two, or even Babbage, Turing, or Von Neumann. (Add your own short list of computing pioneers if you like.)

    Whenever I read an article like David Gelernter's piece I also want to shout "Show me the interface!" Having done programming for almost two full decades now, I have plenty of experience trying to come up with a program, just to have an "end user" play with it for a few minutes and get frustrated because it isn't doing exactly what "they" want. Usually I just scribble down a few notes and try to refine the interface, but sometimes I have to throw in the towel and try to start over with a different approach.
  • Just curious, what makes up the Holy Trinity of Cyberpunk? I would imagine that Neuromancer and Snow Crash are the first two... or maybe not. Could you elaborate?
  • 33. A file should be allowed to have no name, one name or many names. Many files should be allowed to share one name. A file should be allowed to be in no directory, one directory, or many directories. Many files should be allowed to share one directory. Of these eight possibilities, only three are legal and the other five are banned -- for no good reason.

    I am not a computer scientist. But I play one in meetings at work. Of the eight possibilities he mentions I can do all but three, not five.

    A file should be allowed to have no name

    Can't do (unless you want to count the ones I file in /dev/null).

    one name

    Default.

    or many names.

    Linking.

    Many files should be allowed to share one name.

    Can't do.

    A file should be allowed to be in no directory,

    Can't do. Even / could be considered a directory. That's like asking to store a file on no device.

    one directory,

    Default.

    or many directories.

    Linking again.

    Many files should be allowed to share one directory.

    Default again.

    His use of analogy is sometimes very good and at other times leaves me scratching my head. I can't for the life of me figure out what good a file without a name would be. On the one hand he uses the analogy of a book spine giving far more information than the average filename, but wants the ability to have nameless files (spineless books?).

    All in all he had some good things to say, but I'm afraid most of that will get lost in the abducted by aliens feel of some of the other comments.
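
    To make the "linking" answers concrete, here is a small Python sketch, assuming a Unix filesystem with hard links (all the paths are invented): one file ends up with several names in several directories, all sharing one inode.

        import os, tempfile

        base = tempfile.mkdtemp()
        os.mkdir(os.path.join(base, "projects"))
        os.mkdir(os.path.join(base, "archive"))

        # One file...
        original = os.path.join(base, "projects", "report.txt")
        with open(original, "w") as fh:
            fh.write("quarterly numbers\n")

        # ...with many names, in many directories, via hard links.
        os.link(original, os.path.join(base, "projects", "q3.txt"))
        os.link(original, os.path.join(base, "archive", "report.txt"))

        st = os.stat(original)
        print(st.st_nlink)   # 3: three directory entries...
        print(st.st_ino)     # ...all pointing at the same inode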

  • Exactly! My god, the mix of the very occasional concrete ideas (approx. all of them good, IMHO) and the flurry of half-baked, fuzzy, let's-wave-my-hands-around-and-hope-nobody-calls-me-on-it...can't even think of a word strong enough...tapioca fog (yeah, I know, I haven't had all my coffee yet) that can't even be described as thoughts...good lord, it's enough for me to become an Objectivist.

    Why do you need to describe a software company as a "lifestream"? Why in god's name is it stupid to name 10,000 files when you may at any moment have to call it up -- and when it's simpler to say "Give me 3rdQuarterProjection.ps" rather than "Um, the one I was just working on..you know the one...it's got the thing...about how we're gonna do really well next quarter...it's in postscript..." And what the fsck does "tangible time" mean?

    And then he goes and asks for tactile feedback in a mouse...a brilliant suggestion emerging from a sorry mass of dryer lint! (--not sarcasm) I can't...make...any...sense...of it...[bangs head against monitor]

  • by dsplat ( 73054 ) on Friday July 07, 2000 @12:08PM (#950611)
    Back when "automatic programming" was invented, it was assumed that programmers would become obsolete. Computing would be forever changed. The Users would be able to program for themselves. There was one problem with that. Automatic programming was a term for compiling code written in a higher level language. It didn't eliminate programming. All it did was redefine the skill set required by inroducing a level of abstraction. The processor is still there and so is the object code.

    A couple of decades later, fourth generation languages were once again going to make programmers obsolete. Once again, The User would be able to program for himself. It didn't quite work out that way. It seems that the Users still enjoy the leverage of having specialists make their tools for them.

    As for not mattering what operating system you are running, well if all the interfaces are the same, no it doesn't. That's the benefit of RFCs, POSIX, etc. But Gelernter neatly inverted the bits versus paper dichotomy that is well dealt with in The Unix Philosophy. Data shouldn't be printed to be used. It is printed as a fixed record of its state at a point in time. Paper data is dead. The power of the Unix model is the power of treating all of your files as streams of bytes and having a set of powerful tools for manipulating those bytes.

    As for his point about files having no name, one name, or many names, being in no directory, one, or many, and a directory having one or many files: he said that three of these were currently legal and the other five not. That shows a clear lack of knowledge of the Unix separation of inodes and directory entries.
  • Elements stored in a mind do not have names and are not organized into folders; are retrieved not by name or folder but by contents.


    Yes, but I often forget where I put my memories. I don't want the computer forgetting how to get to slashdot, but then again, it's nice to know that computers aren't shy about asking others for directions!!!

  • You all remember Ted Nelson of Computer Lib fame... right? In case you forgot, a refresher is here [uconn.edu]

    From the text
    The windows-menus-mouse "desktop" interface, invented by Xerox and Apple and now universal, was a brilliant invention and is now obsolete. It wastes screen-space on meaningless images,

    Meaningless? Now, if he was talking about OpenWindows where every damn icon says "/bin/tcsh" on it regardless of what that shell is doing, I'd agree... but the whole point of icons was to distill meaning into a simple, easy to identify image that would take the place of a dozen words. Gee, that icon looks like a hard drive... I suppose it's my hard drive. Meaning transmitted. Mission accomplished. Yale guy wrong.

    fails to provide adequate clues to what is inside the files represented by those blurry little images, forces users to choose icons for the desktop when the system could choose them better itself,

    If it's blurry, you need a better monitor. My millions-of-colours icons (MacOS 8.5+) are very crisp. If Mr. Yale (oops, Dr. Yale) has a problem with getting appropriate info out of pictographs I weep at the thought of him navigating an airport. Pictographs are far more intuitive and easily understood than written linguistics. All of the original writing systems were based on pictographs and only evolved into alphabetic systems when the need to communicate non-concrete objects became a burden (give me a pictograph for "obligation"... sheesh). However, in the OS environment, just about everything is a noun. Nouns lend well to pictographs. If an OS needs to transmit info that doesn't lend itself well to pictographs, there's still text. I don't get what he means by "when the system could choose them better itself". When the system chooses stuff for me, I usually get upset ("no, no not Internet Explorer you stupid system!")

    and keeps users jockeying windows (like parking attendants rearranging cars in a pint-sized Manhattan lot) in a losing battle for an unimpeded view of the workspace -- which is, ultimately, unattainable. No such unimpeded view exists.

    Well, if no such unimpeded view exists, then why dis the desktop metaphor? By his own admission his goal is unattainable. Hm. Yes, window clutter is a serious problem; however, it is far from paralyzing because:
    1. Screens are big now. My work screen is a whopping 21". The first Mac had a nine inch screen... and that idea managed to catch on.
    2. Window management techniques are pretty good. The reduce-to-icon idea (a la CDE or, for a bad example, OpenWindows), the Dock (NeXT and, soon, OS X) and the Window Shade (Mac OS 7.1+ and KDE if you turn it on) all allow for multiple windows to be prioritized and deprioritized nicely.
    3. There can only be one focus item at a time. Period. His issue seems to be that finding the item you want to focus, focusing it and keeping it in focus is too tough. Well, as data grows and the number of potential tasks increases, this problem will only increase... typing cd /usr/export/home/foo/ is no easier than playing window-hunt.

    As a side note, the single-window mode on OS X (a la DP4) seems a fine way to ameliorate this problem. For a guy who speaks so highly of the Mac OS, it seems like he hasn't really looked at it.

    Now go look at Ted Nelson's stuff....

  • Let's go about this in order:

    But how do I distinguish between a text editor and a word processor? How do I distinguish between typing text into the text widget of the gimp and typing text into my text editor and typing it into my word processor?

    This immediately assumes an application model already. I know it is quite difficult for you to do otherwise; it is for everyone I talk to, until they switch their thinking around.

    Of course, that doesn't answer your question. And before I start, I'd like to say that there were a lot of things I left out in the original. I'll try to cover some more detail and depth this time around. And now for the answer: why would you regularly be opening your word processor or text editor?

    To write a document or edit code (likely), of course. In an OO system, you might take a Page object and make a FormattedText frame in it (or likely you'd have a template of this). Or then again, you might just start typing text, and the system would bring up a "text buffer" of sorts you could type in. You could later take this bit of text and put it where you wanted (format it, whatever). That's another way of doing things. There are many possibilities: the important thing, the extremely important thing that you see here, is that you're not just running a program. It may seem similar to you, but the difference is incredibly important. How is it different?

    Let's take the first model. You have a Page object. This object has properties such as size, and all it does is contain subclasses of DocumentObject. Each DocumentObject has properties such as size and dpi, and methods such as render(). A Document is nothing more than an ordered set of Pages. There is nothing saying a Document must be text; it might be a drawing, or a musical score, or a combination of the above. Each different thing, such as a MusicStave or Image is just a subclass of DocumentObject, and thus Page wouldn't care. "But, but," I hear you exclaim, "how is this different from OLE or other embedding?"

    It is both subtly and enormously different. Subtle because you have similar ends: one sort of medium in another. Enormously because almost everything else is different. There are no applications here; you're not embedding Word in Excel in Explorer in Word in MyPhotoEditor2000. Ask yourself: once you start embedding one application in another, and one changes into the other, why are you bothering with applications at all? Why do they need to be there at all? Of course, they don't. Programmers don't have to go to painful lengths and write horrible hacks to make their programs interact with each other via OLE or CORBA or COM or whatever the latest fad is; they simply write objects, with the simple functionality they contain, and by the nature of the system these objects work together.
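
    To picture it, here is a minimal Python sketch of the Page/DocumentObject arrangement described above; the render() method and the class names come from that description, everything else is an assumption for illustration only:

        class DocumentObject:
            """Anything that can sit on a Page: text, an image, a music stave..."""
            def __init__(self, size, dpi=72):
                self.size = size
                self.dpi = dpi

            def render(self):
                raise NotImplementedError

        class FormattedText(DocumentObject):
            def __init__(self, text, size=(400, 200)):
                super().__init__(size)
                self.text = text

            def render(self):
                return "[text %sx%s] %s" % (self.size[0], self.size[1], self.text)

        class Image(DocumentObject):
            def __init__(self, pixels, size=(640, 480)):
                super().__init__(size)
                self.pixels = pixels

            def render(self):
                return "[image %sx%s, %d bytes]" % (self.size[0], self.size[1], len(self.pixels))

        class Page:
            """A Page only holds DocumentObjects; it never asks which kind they are."""
            def __init__(self, size=(612, 792)):
                self.size = size
                self.contents = []

            def add(self, obj):
                self.contents.append(obj)

            def render(self):
                return [obj.render() for obj in self.contents]

        # A Document is nothing more than an ordered set of Pages.
        page = Page()
        page.add(FormattedText("The Second Coming, annotated"))
        page.add(Image(b"\x00" * 1024))
        document = [page]
        for p in document:
            print(p.render())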

    Next:

    The only way that I can see, short of omniscient computers, is to do something which is the equivalent of launching a program. Call it changing modes, shifting paradigms, praying to the archangel Gabriel, I don't care; but somehow, unless this computer is psychic, I have to tell it what I want.

    I hope I have in some small way shown why this is not equivalent to launching a program. Telling the computer what you want does not imply programs and applications. Having the computer respond to what you're doing automatically does not imply you won't have to ever tell it what you want. But the difference is extremely important. Oversimplifying is shortsighted, and will continue to lead down the wrong development path until it is corrected.

    At that point, you're right back to ./dns2reverse.pl and frankly I don't really believe that you're going to come up with a much faster interface than the command line.

    I'm not sure what dns2reverse.pl is, but I have not said or implied that textual input is in any way dead, obsolete, unnecessary, outmoded, outdated, or even less important. The opposite is true. With a proper object-relational system, you can bring textual commands to a new height. A "natural language"-like frontend beside a GUI might even be the best and most useful thing for both experienced users and new users alike. What you say next relates to this:

    So if we take a step back from input and just focus on file storage, doesn't everyone realize that a filesystem is nothing more than a relational object database?

    I don't know if you're oversimplifying or you just don't know what a general relational system is, but a filesystem is merely a very limited form of the relation, that is, containment. The world is not hierarchical. With a true relational system, you could specify that an object is "next to", "over", "under", "inside", "part of", or any other number of relations. Hierarchy is easy to do with this... objects are merely related to containers as being "in" them. However, a more general relation is quite a bit more useful. A document might be "part of" a project, and also "in" something else, perhaps "related" to a different document. A conventional filesystem is not optimized for handling general relations like that.

    Back to the textual input interface, though. Once we have established a relational layout, we can easily say things like "show me everything related to Project XYZ." Or, "show me all Graphic objects in reports from last year over 1 megabyte" and get a "query window" that would be similar to your typical icon view of a directory today. All the data is now there and accessible. The possibilities are endless, and the flexibility...

    And UNIX even has the object-stream method, more or less (more less than more, in practice). Make every data file you have executable ...

    You don't understand what an object is. Go get a good book (there are any number of good books on the subject), and read what encapsulation, polymorphism, inheritance, and abstraction are. They aren't fancy buzzwords; they define what it means to be object-oriented. To back this up, your example lacks encapsulation (the data and functionality aren't bound), inheritance (doesn't even make sense here without encapsulation), polymorphism (no methods or inheritance and nothing to be polymorphic), and abstraction (no context - the data itself is meaningless bits and bytes).

    I've got a friend who rants about the everything-is-an-object model once every month or two. The problem with it is that it isn't how life works.

    I've looked around, and I can't find anything in "life" that isn't an object. If you can find something that isn't an object (it can't have any properties, for instance), then I'd like to hear about it. Of course, "nothing" might not be an object, but then, if nothing isn't an object, then everything is an object. ;)

    But please don't ever make a system which requires you to describe the data that you want by the data that you want rather than some external artificial description. I'd much, much rather use logo7.jpg than "The logo in the burgundy shade of red which is bump-mapped a little more than logo6.jpg and I airbrushed ..."

    There's nothing requiring you to describe anything. However, "logo7.jpg" is hardly descriptive of anything; I'd sure not want you organizing my graphics collection. Still, you'd likely want a "title" property (short) and perhaps a "description" or "comment" property where you could expound a little more on the thing, so you could make searching easier...

    Oh, and computers do make a difference between code and data.

    I did not say that there was no difference. I don't know how you got the impression that I did. (Or is this a straw man? I said they don't force you into a program model; rather they execute code which manipulates data.)

    If it was that revolutionary it would have revolutionized by now.

    I take it you believe every concept possible has been considered and either implemented or shown useless, and that nothing new and exciting can be done. What a dull world!

    New things can be done. New things might even be getting done. We just have to avoid this geezerly "it's been done this way since my grandpa's grandpa and aw say there ain't no other way to do it!" attitude, step way back, and evaluate whether there isn't a more useful way of doing things. For some things, the old way might be the best. For this, programs and applications must be replaced.

  • Since we aren't yet in the brave new world without directories I probably should have counted this as another doable by having the file dumb.ideas in many directories.
  • Michelangelo DID STUFF.

    Show me where Gelernter has executed on any of this handwaving in a way that's anywhere near as wonderful as Michelangelo's work.

  • ...would either be a refinement of the language or tool-manipulation interface. You'd either think "Computer: move that file to there" or you'd imagine that file moving there.

    Besides, who says you've only got conscious control over your body? Haven't you ever started home and just let your mind wander, then realized you were home without remembering how you got there?
  • by jhk ( 203538 ) on Friday July 07, 2000 @11:23AM (#950629)
    Go read Age of Spiritual Machines by Ray Kurzweil and hang out at asktog.com. Both are good resources for trying to figure out where things are going. Kurzweil extrapolates to 2099 when the biggest argument of the day is "what is a human being" since we may be machines by then.

    JHK

    Why ask why? Just go try. [homepage.com]

    • a file with no name - I'm not sure about this.


    There was a technique that was rather widely publicized in the wake of the famous Internet Worm. Since it was written up in Communications of the ACM, I think one can say that there is a credible source. However, I can agree with the professor on this one because it isn't general; it only works for executables. You run it, and unlink the directory entry while it is running. The inode continues to exist for the life of the process, but the file has no name. Of course, when the process exits, the inode will go away as well.
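
    For the curious, the unlink-while-open trick is easy to demonstrate from user space; a small Python sketch (the path is arbitrary):

        import os

        # Create a file, keep it open, then unlink it: the directory entry is
        # gone, but the inode and its data survive as long as a descriptor is open.
        path = "/tmp/nameless-demo"
        with open(path, "w") as out:
            out.write("still here\n")

        fh = open(path)
        os.unlink(path)                 # the file now has no name anywhere
        print(os.path.exists(path))     # False
        print(fh.read())                # "still here": the inode is still alive
        fh.close()                      # ...and now the kernel reclaims it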
  • This guy teaches? Computer science? At Yale? You have to be kidding. He sounds like the bastard son of Oprah and Carl Sagan. Your cyberbody will float down to a computer like a bluebird settling on a branch, will it? Give me a damn break. This is poetry. I think we are all very aware of the rapid change of technology. My grandfather and 50 generations before him may have been carpenters, but none could be computer programmers. Technology is not only moving forward, but accelerating. We know this. My grandchildren will do things for a living that I can't imagine. However, they will not speak in terms of cyberbodies and they will not be free of files or directories. My great great grandfather drove a horse and buggy. I drive a '97 Nissan Altima. Very different, but they both have wheels, a seat, a power source, and a steering system. Our computers, software, networks, peripherals, etc will all improve. This is for sure. But computers will not become mystical.

    And while I'm ranting, the operating system does not fasten the user to the computer; it fastens the user software to the hardware. I wonder if this guy's a real CS prof or an escapee from a creative writing class.
  • Really, the unix model has its advantages, but if unix-heads reinvented the human body, they would probably find some advantage in allowing (and hence, requiring) someone to control each individual cell in the body. Everything is a cell! would be the advertising slogan.

  • does the piece of paper jump up and scream out "You need to put a credit card number here, or I won't let you give me to the clerk"

    No... but the clerk does. Ever try to get credit without supplying your SSN? You literally need to fight with the clerk/management about why you value your privacy and why they don't need that info.

    If I am drinking a can of soda, and I set it down, I don't need to save it as "mysoda7-7-00.pop"

    Ever have a concussion with short-term memory loss? I have. It was more than a month before I could remember where I set down a can of pop a few minutes before. I frequently had to write notes to remind myself to do certain things or remind me where I set stuff. Ever set a can of pop down and have someone set an identical can down next to it and you were perplexed as to which was which? Ever label a styrofoam cup so you knew it was your drink?

    Soda cans don't need names.

    Then why do they have brand names and flavors on them? For identification obviously. Isn't that the point of naming a file rather than just storing it as a raw block on the drive?

    Neither do term papers

    They do (as do magazines, journals, articles, etc) if you plan on sharing them with people. We have a hard enough time evaluating data based on what it claims to be; can you imagine if we had to absorb every bit of random information given to us and then figure out what it was that we wanted without any identification?

    The excuses that we as developers, designers and implementers use to perpetuate these anomalies are poor. "The user isn't computer literate" "That's a training issue" "You aren't supposed to do it like THAT"

    Why is it that we expect people to learn how to address an envelope, drive a car, or use a stove but we can't expect them to learn how to use a computer? Why is only the computer expected to know exactly what you want to do? "Envelope, send yourself to Bob." "Car, go to the store." "Stove, cook me some soup." Wait, cars, envelopes, stoves, food, stores, etc don't need names either.

    Writing a term paper should not involve fiddling with operating systems, playing with the file system, launching applications, saving and storing copies of the "document".

    Neither then should it involve finding the properly ruled paper, an appropriate pen, putting it in a folder where you can find it to turn it in, putting your name on it so the grader knows who it belongs to, etc.

  • You separate the attributes from the filenames. Right now, you probably sort your documents using elements in the path name. For example, /home/myacct/doc/{eng123|lit234|cs345}, that is, you're storing the attributes of the file that you're using for sorting them in the pathname. This works fine for a given way of sorting the documents, but what if you need a different ordering for some purpose?

    Why not store that in the file itself? No AI involved, rather, you would have a set of attributes associated with each piece of data that could form keys for a query. Some of these, such as "file type", "author", "owner" and "date", could be rather standardized. Then you could add other fields such as "subject", "keywords", "project", etc. Use your imagination. To make it further extensible, you could allow each file to have up to N user-specifiable attributes, to let the user dream up their own categories.

    Now, directories start to become irrelevant, since the information that used to be part of the pathname to the file is stored in the database record for the file itself.

    For instance, if you want all of your files that have to do with CS-305, you could do a query on "class equals 'cs-305'", and voila! you're done. If you want all emails from the Linux Kernel archive about USB, you could do "list equals 'linux-kernel' AND (subject contains 'USB' OR body contains 'USB')". As more and more content is created in this meta-data aware context, even emails stand a chance of being auto-sortable, since the author will have already provided relevant keywords in the "keywords" field.

    I think this scales far better than a filesystem, and requires less, rather than more, user intervention. You could still use procmail-style techniques for setting attributes on emails, and you could have your word processor fill in most of the details for you. The rest are details you already have to take care of when you select a folder to save your file in.
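    One way to picture it, as a minimal sketch only (the table layout, column names and helper functions below are my own invention for illustration, not anything the parent or the article specifies), is a metadata table sitting beside the ordinary file store, with "directories" replaced by queries:

        import sqlite3

        conn = sqlite3.connect("metadata.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS files (
                            path     TEXT PRIMARY KEY,   -- where the bytes actually live
                            type     TEXT,
                            author   TEXT,
                            created  TEXT,
                            class    TEXT,               -- user-defined attribute, e.g. 'cs-305'
                            keywords TEXT)""")

        def add_file(path, attrs):
            # Record a file plus whatever attributes the user or application supplies.
            cols = ["path"] + list(attrs)
            vals = [path] + list(attrs.values())
            conn.execute("INSERT OR REPLACE INTO files (%s) VALUES (%s)"
                         % (", ".join(cols), ", ".join("?" * len(vals))), vals)
            conn.commit()

        def query(where, params=()):
            # A "directory" becomes nothing more than the result set of a query.
            return [row[0] for row in
                    conn.execute("SELECT path FROM files WHERE " + where, params)]

        add_file("/home/myacct/doc/hw3.tex",
                 {"type": "tex", "author": "me", "class": "cs-305"})
        print(query("class = ?", ("cs-305",)))    # everything filed under CS-305, wherever it lives

    Adding a new way of sorting then means adding an attribute or writing a new query, not shuffling files between pathnames.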

    --Joe
    --
  • There are already too many responses to this article for this to ever get read, but....

    What we really need is Auto-Make in our operating system.

    What I mean is, I should never have to generate a postscript file. Or a DVI file. If I want to print out or view the document I just edited, the OS should know that it needs to look for the TeX file and generate the DVI, then generate the postscript. It should cache them and delete them if I need the space.

    Also, the operating system should generate binaries automatically from source. It should profile the binaries using spare CPU cycles and when the system is idle re-compile to try to optimize the parts of the binaries that are slowest. Again, these should be heavily cached so that you don't have to compile many times. Maybe it should grab the sources directly from the net, too... For instance, if you want to run ``grip'' it will automatically go to the net, get the grip source tree (and maybe a pre-compiled binary, too) and download it, and then start compiling/optimizing.
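    Nothing in this needs new kernel support, either; here is a toy user-space sketch of the idea, assuming the familiar tex -> dvi -> ps chain and that latex and dvips are on the PATH (everything else, including any real caching policy, is hand-waved):

        import os, subprocess

        # Each derived format knows which format it comes from and what command produces it.
        RULES = {
            ".dvi": (".tex", ["latex"]),
            ".ps":  (".dvi", ["dvips"]),
        }

        def ensure(target):
            """Regenerate target from its source if the source is newer, recursively."""
            base, ext = os.path.splitext(target)
            if ext not in RULES:
                return target                       # a primary file the user actually edits
            src_ext, cmd = RULES[ext]
            source = ensure(base + src_ext)         # freshen the prerequisite first
            if (not os.path.exists(target)
                    or os.path.getmtime(source) > os.path.getmtime(target)):
                subprocess.run(cmd + [source], check=True)
            return target

        ensure("paper.ps")    # "view or print the document": the tex -> dvi -> ps chain rebuilds itself as needed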

    That's just my opinion, anyway.

  • When he started mentioning things like directories that automatically pull the files into where they need to be, I knew he was nuts. He just bitches and moans and doesn't try to fix anything. We already have legislative bodies to do that, we do not need computer scientists to be doing it.
  • Ummm, a lifestream was what he termed his replacement for files--nothing to do with software companies.

    FWIW, I thought that much of what he described sounded like the Net of yesteryear, when clients for a multitude of servers proliferated and platforms did not matter nearly so much. And several ideas were interesting, although I don't know how workable they might be (e.g. lifestreams). He is completely correct in one thing: the standard file structure we use is woefully inadequate to the millions-of-documents world in which we find ourselves. Obviously any system in the near future will be built on top of files, but we need to get beyond them for day-to-day work.

    And yes, I'd rather say `I need the report I was working on yesterday' to my computer and get it.

  • He never said that Linux is obsolete; he said that it is old. There is obviously a great deal of benefit to be derived from this (we know where the bugs are, we have perfected it &c.), but the state of the art does need to advance.

    He's not afraid of anything. He is creating. He's trying to see beyond the current, trying to forecast the future. He may very well be wrong. But many of his points sound like the way things used to be on the net before it was commercialised; I like to think that some of these will come true.

    Try to read the article next time.

  • And here I was thinking the net is about information. Here it's really just a self-referential structure. Odd. I suppose that should say "computing will be less about computers and more about the net appliances" or something like that.
  • by Shaheen ( 313 )
    Hey Slashdot, I've got a really long-ass essay that I think everyone should read, just to make sure that the future can be a utopia shaped by me and my wonderful views on computing. Won't you please post a link to my site?

    [Settle down people, there are probably some good points, but the thing is six pages long! Might as well send some mail bombs to people and then threaten to send more if the NYTimes doesn't publish it...]
  • by Anonymous Coward on Friday July 07, 2000 @11:27AM (#950674)
    Most people who wish to point out the future of computers seem to imply that we need some sort of major revolution- why? Computers work wonderfully as is.

    People could say the same thing about the telephone. It could have changed drastically over the years, but it works very well, so why change it? Sure, everything is cordless, yadda yadda blah blah, but all in all, you still pick up the phone and punch in numbers the way you've been doing for a really long time. There could be technology to pick up the phone and say "Call Bob", and there is, but people don't use it, because the telephone is fine as is.

    In my opinion, the future of computing will at some point move away from the computer; it's headed towards PDAs and cell phones etc. But the computer works fine as is, with its filenames, windows, and basic directory structure similar to the way it was years ago. Sure, computers look different (PCs, server racks, laptops, etc.), but all in all it's the same idea, same with the phone. And while things will move forward (since when do they stand still?), I don't think things will evolve much for computers. I think they will be a persistent piece of technology that stays alongside tomorrow's technology. People are for the most part comfortable with them and their interfaces (or are at least getting there; my grandfather knows how to use a mouse, and that says a lot), so while they can be changed, why would you want to?
  • by Anonymous Coward
  • Eros does a similar sort of thing with the concept of a persistent filesystem. Of course, you still have the obvious problem--how do you "call up" something by name?

    I suppose I'm a moron, but I just can't see how you get away from this sort of thing in real life. His example "you'll name your three dogs, but you'll not name your 10,000 cows" is fundamentally lame... at some level and for some duration of time, each cow must be treated as individually as your dog. Likewise, the equivalent of Lisp's gensym (simply simulated using $$ in a Bourne shell script) is a quite common programming idiom.
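    For what it's worth, the idiom carries over to any language; a quick sketch of naming things nobody wants to name (Python here, with tempfile standing in for the $$ trick):

        import os, tempfile

        # The shell's  tmpfile=/tmp/work.$$  idiom: uniqueness via the process id.
        scratch = "/tmp/work.%d" % os.getpid()

        # Or let the library pick the unique, throwaway name for you.
        with tempfile.NamedTemporaryFile(prefix="work.", delete=False) as f:
            f.write(b"nobody will ever type this filename by hand\n")
            print(f.name)     # e.g. /tmp/work.x7q2k1: a name, just not one a human chose

    Either way the cow still gets an ear tag; it just isn't a name anyone has to remember.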

    fwiw, i suppose a declarative paradigm "fits" the model he's presented.

    my advice: skip this and read Rob Pike's article instead.
  • The real topic in computing is the Cybersphere and the cyberstructures in it, not the computers we use as telescopes and tuners.

    I think it's very nice that someone has the time to deliver this sort of speech with a straight face, but, well, fuck off. I am vastly more interested in how and why my computer works than in interacting with the banal masses involved with yon cyber-whatever. All that shit about an online community still involves humans, and the vast bulk of humans will always be less interesting than a nicely done for loop.
  • Okay, now that I have taken a moment to actually read it (read as in "read Jon Katz", not read as in "read Douglas Adams"... in other words, I got bored halfway through and skimmed it), I can't help but notice the whole "I get it and you don't" tone of the article.

    He's basically saying that he sees the big picture, and the bones tell him that something else is coming, but he can't say what. Why sweat the details, right?

    The self-importance reminded me more than a little of the Cluetrain Manifesto [cluetrain.org]. Did anybody else get that impression?

  • by 11223 ( 201561 ) on Friday July 07, 2000 @11:28AM (#950679)
    This article is definitely flamebait. The suggestions don't look like they're meant to be serious so much as controversial.

    But I'll bite. That's an interesting concept of interacting with information - but a horrible concept of interacting with a computer. Why? A computer is not an object-centered idea. Remember the big push a while ago for "OLE" and "Objectification"-style ideas, where everything turned into an object? Note where they are today - nowhere, and here's why:

    A computer is a device that runs programs.

    That's an important axiom. It doesn't modify documents, etc. It runs programs. And there's currently no way to create such a document-centered beast out of modern computer technology, because the computer is (at its core) still a device that runs programs. And folders, names, alphabetical orderings, etc. are a perfect metaphor for programming. He states that the current idea is great for programmers - well, that's because the programs they write need to interact with a system - and through our carefully-designed structure, they do it.

    It's impossible to throw out the current organizational concept of computers because any new concept of computing will still function in the same way. It may have abstraction layers hiding the functionality, but it will still function in the same way. What you end up with if you try to create a document-centered model is a system that falls apart, programmatically. It doesn't work, because it leaves no room to run the programs. Where do I put Quake III in the document-centered model? It's fine for one application, but what about when I've got fifty, all with their own data files?

    When Be, Inc. first started to design their system, they had a flat filesystem, almost exactly as described. No folders. Instead, everything lived in the database. They abandoned that approach because it's almost impossible to build a large-scale device that way. (My Palm Pilot, however, works fine). Instead, they came up with a database system for the filesystem that doesn't throw out the approaches that have been carefully designed to deal with running programs. I can build a query and store it on the desktop that lists all of my BeOS-related bookmarks, and another that lists my Linux-related bookmarks. But they still exist on the filesystem as an item.

  • Yes, that's part of what I was saying. What he is proposing fits in a filesystem whether the user knows the file names or not.

    That fellow should write for Wired. Not for a technical journal. He reminds me of the Nanotechnology conference, where the theme is something like the future will be great, just as soon as we figure out how to do it.

    Bruce

  • Programmers are not the target audience for a user interface. However, you overstate the case. A spoken programming language (GREAT THESIS TOPIC ALERT) would need words for block structure but not for carriage returns and perhaps not even for statement termination. The "C beautifier" program seems to be able to synthesize line breaks on its own given a syntactically correct program.

    There's a phone number in my perens.com host record, and you can use that to verify my Slashdot ID if you really want to be sure :-)

    Bruce

  • Linux is the fulfillment of the GNU system, the bootstrapping event of Free Software. It is not meant to be an end-point in the development of computer operating systems or even the development of free software.

    Of course it's obsolete and based on a 30-year-old system. The point is that the innovations in that 30-year-old system were largely being bypassed by the industry and we needed to fix that problem first.

    The GUI itself is not an end-point of our work, and I believe that the verbal user interface will become the dominant way that people deal with computers in the future, at least until and unless there are really science-fictional things like direct neural interface.

    Verbal user interface computing will use kernels and filesystems, but the user won't care about that. The paradigm is the computer as your invisible friend. The user will ask the computer for things like "Find me the hotel in Indiana that Joe emailed me about", and will be told about matches or asked questions that refine the query. The GUI will become almost output-only, with pointing done with the finger or eyes and the word "that" replacing the mouse-click. For example, the user points at something on the screen and says "magnify that".
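    A toy sketch of the refinement loop, with the hard parts (speech recognition, language understanding, a real mail store) faked outright; every name below is invented for illustration:

        # The "invisible friend" narrows an ambiguous request by asking instead of failing.
        MAIL = [
            {"from": "joe", "subject": "hotel in Indianapolis"},
            {"from": "joe", "subject": "hotel near Gary, Indiana"},
            {"from": "sue", "subject": "Indiana trip budget"},
        ]

        def find(sender=None, keyword=None):
            hits = [m for m in MAIL
                    if (sender is None or m["from"] == sender)
                    and (keyword is None or keyword.lower() in m["subject"].lower())]
            if len(hits) == 1:
                return hits[0]
            print("I found %d messages about that. Which city did you mean?" % len(hits))
            return hits

        find(sender="joe", keyword="hotel")    # two matches, so the computer asks a narrowing question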

    Bruce

  • I believe that the verbal user interface will become the dominant way that people deal with computers in the future

    Imagine working in a room of 20-odd programmers (I work in one, plus managers, admin staff, sysadmins, sales, and we're still hiring - 9 new people start on Monday...).

    Still want a verbal interface to be the dominant interface to a computer? My office is noisy enough without people having to constantly speak to their PCs too...

    I'm not even going to start on the problems with using such a method in public. :-)

    Cheers,

    Tim

  • I don't know, to me, the atmosphere of the article sounded like the author was "Whining".

    Perhaps this is a remnant of being a Tech-Support Geek and hearing, constantly, "Your Program should do this, and this feature is worthless."

    I am, of course, referring to the criticism of the Windows/Mac GUI and the use of Icons on Pages 3 and 4 of the Web Article.

    It seemed to me that the entire article was critiquing the fact that computers cannot read the minds of the users. (Which they cannot, and SHOULD not, in my opinion, since the USER should be giving the orders, and not letting the computer make the decisions.)

    Or perhaps that is more residual rant from my critical thinking of Windows 95.

    Regardless, it was an interesting article, but the feel of it left me with an annoying headache, which I usually get from people who feel the need to whine, or debate for the sake of debate.

    That is all.
    *Carlos: Exit Stage Right*

    "Geeks, Where would you be without them?"

  • They're on the telephone now. Noise is not going to be a blocking factor for this; workstations, the office environment, and even mass transit will adapt to accommodate it.

    Bruce

  • by sugarman ( 33437 ) on Friday July 07, 2000 @11:30AM (#950697)
    Hmmm. The author makes some interesting predictions for the future uses of computers, but not really interesting to anyone who has read a smattering of the Holy Trinity of Cyberpunk.

    Personally, about halfway through, after witnessing about the 20th noun with the word 'cyber' prefixed to it, I felt the paper had become more interested in and self-involved with the words it was creating ("Cybersphere", etc.) and with finding metaphors for the situations.

    Like a Bruckheimer movie, I started to be able to pick the plot points and figure out what coined term was going to appear next ("Cyberbody?" Cyberbody: Check.)

    So while he does raise some interesting points, is it all that new, or is it merely someone wanting to bullshit their profs by writing a paper that appears "deep" and "Visionary"?

    Moreover, is it someone like our esteemed Mr Katz who wishes to help shape the lexicon for the new age because they realize they lack the ability to actually work with the technology itself?

    Just some 1st impressions...

  • I think we need to contract our focus here: in reading Gelernter's manifesto, think about the UI rather than the underlying machines. What Gelernter is proposing is a new way of interacting with the machines we have; a way of making human-machine interaction more intuitive for the average user. To a user, accessing a document by asking for "that picture I took of my kids last Sunday at the lake" rather than "kidpic.jpg" is as big a leap as asking for "kidpic.jpg" instead of "0:3:0x45FF3F2".

    What Gelernter wants is a computer that's not a computer. He wants a computer that thinks for him, that sorts the files he inputs in a way that's closer to the way his mind sorts them than to the way his mind tells his filing cabinet to sort them. He wants to throw out a hierarchical structure in favor of a distributed structure based on content. When I try to see the type of organization that Gelernter describes, I visualize something like a Visual Thesaurus [plumbdesign.com], with directories being replaced by abstract nodes, e.g. "lake", "kids", "photos", and files being appropriately connected to those nodes through their content.

    Yes, this can be achieved through links and directories, but what Gelernter wants is something to do it for him, something that will organize a filesystem for the average user. So despite all his anti-Linux handwaving, what Gelernter really wants isn't a whole new paradigm, but just a whole new UI. A UI that thinks for the user so that (s)he doesn't have to.
  • 9. The computing future is based on "cyberbodies" -- self-contained, neatly-ordered, beautifully-laid-out collections of information, like immaculate giant gardens.
    10. You will walk up to any "tuner" (a computer at home, work or the supermarket, or a TV, a telephone, any kind of electronic device) and slip in a "calling card," which identifies a cyberbody. The tuner tunes it in. The cyberbody arrives and settles in like a bluebird perching on a branch.
    11. Your whole electronic life will be stored in a cyberbody. You can summon it to any tuner at any time.
    Does this sound like Microsoft's press release [microsoft.com] on .NET to anybody else? Specifically, this paragraph:
    MSN.NET... By combining the leading content and services of MSN with the new .NET platform, MSN.NET will allow consumers to create a single digital personality and use smart services to ensure consistent, seamless and safe access to the information, entertainment and people they care about any time, any place and on any device.
  • As for his point about files having no name, one name, many names, being in no directory, one or many, and a directory having one or many files. He said that three of these were currently legal and the other five not. That shows a clear lack of knowledge of the Unix separation of inodes and directory entries.

    I thought so too - almost all of those are possible (see the sketch at the end of this comment):

    • a file with one name - possible
    • a file with multiple names - possible with links
    • many files sharing one name - possible if they are in separate directories (see below)
    • many files sharing one directory - possible
    • a file can be in one or more directories - again possible with links
    • a file with no name - I'm not sure about this. From my viewpoint, a thing (can of soda or a file) isn't distinct without some sort of name. For a can of soda, the name may not be just "Pepsi", it may be "the fourth Pepsi from the left in the fridge". Likewise with files - they don't need good names, but they do need to be distinguishable from their fellows, even if just by inode number. If you can't tell your 10000 head of cattle apart in some basic way, then how do you know you really have 10000?
    • a file in no directory - see below

    Why is it that every forward-looking guru has it in for hierarchical filesystems? I remember reading something like this a year ago on /. with essentially the same refrain - that hierarchy is somehow more confusing or harder for the user. On the contrary - we use hierarchies all the time in real life to organize important stuff - we divide furniture by room, food by shelf in the fridge, and tools onto different hooks in the garage.

    I wouldn't argue with a system like the one Mac OS X is going to use (from a previous /. article that I'm too lazy to find) - the BSD /usr, /etc, and so forth will be hidden from the user. Reducing unnecessary complexity from the user's view is a good thing, and that will hide all of the files that to the user's view are identical and not of interest. But not providing a way to pigeonhole information as necessary is the short path to madness.

    I did like some of his comments about lifestreams and time - the idea sounded a little disorganized (remember, we're eschewing hierarchical organization here) but workable with a few changes.
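    As a concrete check on the list above, most of the cases fall straight out of the inode/directory-entry split (a quick sketch; the paths are invented and there's no cleanup):

        import os

        with open("/tmp/report.txt", "w") as f:          # one file, one name
            f.write("draft\n")

        os.link("/tmp/report.txt", "/tmp/report-final.txt")   # same inode, a second name
        os.symlink("/tmp/report.txt", "/tmp/latest")          # a third, indirect name

        # A live file with no name at all: unlink it while something still holds it open.
        f = open("/tmp/scratch.txt", "w+")
        os.unlink("/tmp/scratch.txt")    # no directory entry anywhere, yet reads and writes still work
        f.write("anonymous data\n")
        f.close()

    The genuinely missing cases are many files behind one name in a single directory, and retrieval with nothing but content to go on, which is where the attribute-database ideas elsewhere in this discussion come in.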

  • by PD ( 9577 )
    It's a small piece of irony that this scientist who published his manifesto of technology was seriously injured by another man who also published a manifesto.

    The Unabomber.
  • by jms ( 11418 ) on Friday July 07, 2000 @12:50PM (#950710)
    His tired old "Linux is like Unix, which is 30 years old, so it must be obsolete" argument misses the point.

    He fails to understand that what is "new" about Linux isn't the software technology -- it's the development, licensing, and distribution models.

    Linux development is driven by the people who use it, not by the people who market it. That's a huge difference that filters out a lot of the unfortunate crap that winds up in market-driven operating system design.

    Linux licensing places the source code to the operating system in what could best be described as something very close to a "copyright-enforced public domain," guaranteeing you not only the right but also the ability to control your software, because you are guaranteed access to the source code. Traditional licensing keeps the source code to the operating system as far away from the public domain -- and the public -- as copyright and trade secret laws allow. This is another huge difference.

    Linux distribution removes the single point of failure created by the proprietary ownership and distribution model. Traditional software distribution funnels all distribution through a single distributor, who charges monopoly prices, and can remove the product from the market at any time. If RedHat, Caldera, and all the other Linux distributors were to go out of business, or dump the Linux kernel in favor of a new kernel design, Linux would survive. I'd like a new release of the Lisa operating system. When can I expect it? This is a huge difference.

    Linux will evolve, just as the Mac OS and Windows will evolve. The difference is that Linux is picking up features like journalled filesystems, while Windows is picking up features like talking paperclips and desktops that blink with advertisements.

    That's a huge difference.

    The innovation of Linux is that it has created a functional replacement for a public domain in software, something that has never existed due to overly restrictive copyright laws and overly long copyright durations, and that, like the public domain, has the potential to become ubiquitous.

    NOT the fact that it is largely an implementation of a traditional Unix kernel, which, as the article points out, is not a new achievement.

  • I apologise in advance for the inability to recall all of the details, but I remember a thread here on /. about someone who worked in the Macintosh Human Interface division of Apple; they were working on new interface elements for Mac OS. One of these was "piles". People work by keeping their files in piles of paper and not in folders or drawers. You would do the same thing on your Mac OS desktop. A visual pile could be dragged out to reveal all of the items in it, and then when one was selected the pile would collapse. Also you could click on the file on top and it would shuffle through the "papers". I think this idea combined with the Mac OS X dock would be a great addition to Linux.

    Linux users would have piles along with folders, and when you clicked on a pile, a bar like the Mac OS X dock would appear on the bottom of the screen where you could scroll through files like Mac OS X does. Power users would have a keystroke/contextual menu to open the pile in a detail-view window.

    Another one of the interface elements in the article was the use of time-based visual cues. Older files would look aged, larger files would have visual size/depth.

    Apple pioneered the use of icons as nouns (Xerox-PARC only used icons for verbs i.e. cut, copy, paste), I think Linux window managers should take it to the next level.

    It's sad but true that there isn't a lot of cutting-edge work being done on the OS level in Linux, there's a lot of "we can do what Bedowsintosh can do", when what there needs to be is "Bedowsintosh can't do this".
    "I don't want more choice, I just want nicer things!"

  • Michelangelo seemed like quite a blowhard to his contemporaries, I'll bet.

    I am at first shocked at how little enthusiasm I see here for a dramatic vision of technology's future. One would think a bunch of proclaimed geeks would celebrate anyone's attempt at creating a new vision, even if it is still mostly BS. Then again, the captains of sailing vessels were the greatest critics of the early steamships.

    -cwk.

  • I would say the last big advance was Lisa Office...
    Correct. The Mac was the cost-reduced version of the Lisa, which had protected-mode multitasking and a hard drive. Most of the innovation was in the Lisa, but it cost about $10,000 in 1983.

    What computers really need right now is a DWIM (Do What I Mean) interface.
    AI isn't up to the job yet. DWIM was originally a feature in Interlisp, and Gosling claimed that although DWIM sometimes did the wrong thing, it never did anything bad (i.e. non-undoable.) One day I typed "EDIT" when in a mode where EDIT wasn't meaningful, and DWIM spell-corrected it to "EXIT", throwing me out of Interlisp and losing the workspace. That's the trouble with letting a DWIM system actually do anything. Probably today's closest equivalent of DWIM is Ask Jeeves [askjeeves.com], which is notorious for doing the wrong thing, but which operates in a context where doing the wrong thing doesn't cause harm.

    In other words, you should be able to communicate with your computer in some way that makes sense to you, and it should translate your request into something that makes sense for it.
    Not good enough. The computer has to have a sense of the consequences of its actions before it can be allowed to act on its own. This is one of the major problems in computing today. Go think about that problem for a while in the context, say, of system administration.

    3d interface
    Have you ever tried to get anything done inside a gloves-and-goggles VR system? I've tried six of them, starting with Jaron Lanier's original one, and they all suck. It's like trying to build something while wearing mittens. Autodesk played around with VR early on, thinking that it would be the next generation in CAD. It wasn't. An early goal was to get to something comparable to an Erector set in VR, and that's still out of reach. Even high-end 3D animation is almost invariably done with three planar views and one 3D view on-screen. Even though the better animation systems let you draw in the 3D window, few animators do.

    If you think online navigation by moving around in a big 3D world would be a great idea, check out Worlds.com [worlds.com], which has such a world. Works OK, but the experience sucks. Moving your avatar around a big 3D space turns out to be a lousy way to shop, let alone look up information.

  • Linux, in the context of obsolescence, can be thought of as just a UNIX variant. UNIX in general has been called obsolete for about 15 years now. The funny thing is that no one comes up with anything to replace it. Here are the basic reasons:
    • Complexity -- UNIX is baroque, true. However, the alternative would be to create a single, consistent way of addressing problems as varied as job collaboration and control, complex searching, security management, user customizability, and many other features. UNIX addresses each one of these separately, and each solution is tried and tested over the long haul.
    • Flexibility -- Let's take the example of mail. Many people feel that mail should be pretty. Fine, I can agree in as far as that goes. However, every attempt that I've seen to make mail reading pretty has resulted in a fraction of the number of features as "ugly" mail readers such as pine, mutt, vm and mh. Why? Because those other tools take advantage of the rich, but ugly traditions of UNIX (pine and mutt through the shell and external editors, vm through EMACS and mh through being just a command-line set of tools). In order to write a pretty mail reader that does not take advantage of those traditions you must replace gargantuan amounts of UNIX's features within your mail reader, which is not what you wanted. Things like OLE and Bonobo are attempts to bring the world of the modular to the world of the pretty. On the Microsoft side, they're being used wrong though. For example, do Word and Outlook share the same spell-checker through OLE? Nope. Why? mutt, EMACS, and many other UNIX programs share ispell (or aspell for that matter) without breaking a sweat....
    • Text -- Text is the fundamental unit of exchange under UNIX, and try as the rest of the world might, complex, tagged, bagged data-streams will never be able to match the signal/waste ratio of pipes and text.
    Well, I guess I've firmly pegged myself as old-school. You whipper-snappers go ahead and take your best shot.

    Funny thing is that the system that had a shot at replacing UNIX was Windows NT. NT is a solid, well-designed operating system which inherits the best ideas of VMS and Mach. It could have been a real UNIX-killer, but in the end Microsoft's marketing engine saw to it that Win32 was slapped on top of it like a fresh coat of sea-sludge. Given a much more careful and thoughtful layering of UNIX-like, VMS-like, Mach-like and Windows-like subsystems on top of NT's "microkernel", UNIX might actually have had a run for its money. As it stands, the only thing keeping NT afloat is a marketing engine that dwarfs several sections of the United States Federal Government.

  • While I was reading all of his ideas, I was particularly interested in the idea of dynamically generated folders, which reminded me a lot of the VFolder technology being used in the new GNOME PIM (what is it called? Magellan?). Anyway, I was excited about the idea of applying this technology to the common directory structure.

    Then, I visited the "Mirror Worlds" webpage he linked to (I don't see what's so "transparent" about it; does anybody else?). It says they have a patent on this whole thing!! How will this affect GNOME and VFolders?

  • Well, it must be said: I haven't seen such an overuse of buzzwords since my company's latest press release.

    What he talks about is not computing for the next 15 years but for centuries to come (that is, if we can unravel the way the mind works). Artificial intelligences, neural networks, associative memory retrieval: all of them need major work and several revolutions before they can be applied like this.

    One thing really irks me: "Filenames are obsolete." I haven't heard many statements like this, but this one is definitely not very well thought out (especially not if you consider the timeframe this article refers to). Documents need a way to be retrieved; you can do this based on content or on name.
    But whichever way you choose, by giving it a unique name or by attaching labels to it, the fact is that to determine which document to retrieve you need a unique identifier. Whether it's by attaching n labels to a document until it's uniquely identified or by giving it a unique name does NOT matter. Both are ways of naming and identifying a unique document. Of course we can be pedantic and only look at names in the purely strict sense, but that would be limiting ourselves.

    By comparing filenames to memories, the author tries to show that the end of giving names is in sight. But really he's just reinforcing the idea that we need to use unique identifiers.
    For example, you might have quite a few memories of going out skiing, but each and every one will be unique, like "that time I went out skiing and got so drunk on gluhwein that I threw up, then slipped over it and broke my leg". By attaching those labels, we name it. Maybe we don't vocalise it, but essentially that's what we do. Ah well, enough waffling from me. (And this is not a first post!)
    --
    Full Time Idiot and Miserable Sod

  • ...is because of people like this. 90% of all the university "computer scientists" that I have met (and that's quite a few) are doofuses who couldn't implement Towers of Hanoi to save their pathetic lives. It's interesting that one of them recommended that I indeed stop with a master's degree so as not to damage my reputation and marketability. Learn from books, people.
  • by ameoba ( 173803 ) on Friday July 07, 2000 @03:11PM (#950745)
    This works great if you want your folders created by date received, length, or sender, but what happens when you want to sort your messages based on things that are not (or can't be) stored in the DB? How would you automatically create different folders based on subject matter, or importance?

    If you want to do something like that, won't you be back to putting everything into a folder manually? If you carry the idea to files on your system, it scales poorly. Sure, you can sort files by what type they are, who made them, size, etc., but how do I sort my English assignments from my Lit assignments, from CS project proposals? Do we really want/need an AI to handle file-system accesses?
  • What if car manufacturers quit making better engines and only focused on making more comfortable seats and installing better stereos?

    You mean this hasn't already happened?

  • . . . no one will listen to people who predict the future.
  • His article reminds me somewhat of those horror films that gain their effect purely through the use of the word evil. They say that something is evil, pure straight evil. Evil with a capital E. But they never go much further than that. They just repeat the word evil over and over again in the hope that if they use it a sufficient number of times and in the right context, they get some manner of effect on the audience. Never mind explaining WHY such-and-such is evil, but it's EVIL. The author of the aforementioned is doing a very similar thing with the word change. He is advocating change for change's sake. People must realize that sometimes things stay essentially the same for a long time because they work well and there's really nothing to be gained from changing them.
  • by Chris Johnson ( 580 ) on Friday July 07, 2000 @11:41AM (#950789) Homepage Journal
    First: Gelernter is _the_ canonical handwaver of the last bits of the 20th century. If you have ever read his book 'Mirror Worlds' (try your local library) and managed to get all the way through it you'll understand this right down in your bones. He never goes within shouting distance of application or execution- it's all about conjuring very salable visions in the sky. Gelernter is the UBERhandwaver, I've never seen another person even approach him in this.

    Second: The instant you see words like 'lifestreams' be well aware that Gelernter is SELLING this. It was the same for Mirror Worlds- the whole thing was an advertisement for the commercial software project he was selling. I have not seen any evidence that Gelernter understands sharing and the free software approach- I daresay it seems terribly quaint to him.

    As such, it can be interesting to scan over Gelernter-handwaving for practicable ideas, such as long-skinny icons like book spines (hey, how about horizontally so you can read them- hey, what about making them a stack so the most recently used ones go to the top and stay there?). However, I would be very cautious about this because of the risk that Gelernter is busily filing patents on all of it and will attack anyone who tries to make his handwaving practical. Actually, I haven't seen evidence one way or the other, but based on his history of producing handwavey 'white papers' that are actually referring to proprietary technology that he is SELLING, I would be moderately surprised if Gelernter wasn't busily patenting up everything he could patent- which of course translates to 'everything'.

    Can you tell I'm not utterly thrilled with this fellow? ;P If it turns out he's not seizing huge swathes of IP with patents on handwaving-derived general notions, I will be considerably more friendly- but in the final analysis there needs to be more implementation and less imagination for his ideas to go anywhere. There needs to be a lot more gritty detail in how these things are to be actually DONE. One thing you can say for the Linux approach- it's all gritty detail, rarely much in the way of sweeping imagination- but stuff GETS DONE. At the end of the day, Linux stuff got done and an awful lot of grand breathtaking visions remained just grand breathtaking visions...

  • This is an important single line of information
    1. I have read too many William Gibson novels
    2. I have read too many Neal Stephenson novels
    3. Although I am intelligent and know how technology has changed from being the tool to being the end goal, I don't know how to say that without sounding like I have spent my whole life sheltered by science fiction novels


    4. I like lists in case you haven't noticed

    5. I will be redundant
    6. I will be redundant
    7. I enjoy small fonts


    The author is obviously intelligent, and has a very good grasp of how technology develops, but I think he needs to get out of the lab every once in a while. The idea that computers will move towards creating a world like the one in Neal Stephenson's Snow Crash may be possible in the future, but the level of technology and the bandwidth of the net are nowhere near what that sort of future requires.

    The closest thing we have to what he describes is VR, and it is still in its infancy right now; we don't even have popular games that require it. Something is going to have to make the push to create a truly 3D environment for the computer using VR-type devices; until then it is just a pipe dream. Perhaps it is time to take Linux out of 1976 by creating the first VR desktop. Of course, the amount of time and money required to do so is probably impractical for a group of hobbyists like us.
  • by thesparkle ( 174382 ) on Friday July 07, 2000 @11:49AM (#950799) Homepage
    Timeline:

    1991 - Dr. G. publishes first manifesto.

    1992 - Commercialization of the internet allowed.

    1993 - Overnight, 5000 ISP's, webhosting companies and online shopping malls spring up all with the word "Cyber" in their names - cybermall.com, cybernet.net, etc.

    1993 - Dr. Gelernter is injured by bomb. Out of the picture for some time. Cannot use a computer, visit the Internet or read Wired. Instead, he watches "I Love Lucy" and "Good Times" reruns.

    2000 - Gelernter is back. Writes manifesto. Uses the word cyber 400 times in a paragraph, thinking he is cutting edge. Next manifesto subjects:

    * how bad is Windows 3.1.
    * when is the new version of Procomm coming out?
    * "I just read a book called 'Snowcrash'. Here is a review".
    * "10 things I like aboutOS/2"
    * RFC2213666 - Why pay for Internet Access? Get paid to surf cyberspace!

  • about how we just sort accept flaws in the systems we use

    Heck, we justify them. The worst thing about Linux advocacy is hearing raving justifications for something that's been total crap for ten or thirty years. (I'm not saying that Linux is total crap, just that parts of the UNIX culture were heavily criticized all through the 1980s, and rightly so.)
  • No offense, but I think that you misunderstood some of what he meant.

    • a file with one name - possible
    • a file with multiple names - possible with links
    • many files sharing one name - possible if they are in separate directories (see below)
    • many files sharing one directory - possible
    • a file can be in one or more directories - again possible with links
    • a file with no name - I'm not sure about this. From my viewpoint, a thing (can of soda or a file) isn't distinct without some sort of name. For a can of soda, the name may not be just "Pepsi", it may be "the fourth Pepsi from the left in the fridge". Likewise with files - they don't need good names, but they do need to be distinguishable from their fellows, even if just by inode number. If you can't tell your 10000 head of cattle apart in some basic way, then how do you know you really have 10000?
    • a file in no directory - see below

    I believe that by `many files sharing one name' he meant within one directory (or functional group), i.e. one pathname, in much the way, say, that no single beer in the fridge has a name; they're all just `beer' and it doesn't matter which is which. I don't know if this would transfer to a computer well: we'd just make one big beer file.

    While a file can have many names, using hard or soft links, actions on one name do not necessarily affect the others: delete one hard link and the others are still there (a useful function, true); delete a symbolically linked file and all the symlinks go insane (hardly useful).

    A file in many directories is possibly different from a file with multiple names. Can't see it myself, but then I'm no Ph.D.

    Non-named files would be those that are linked to in some other way. Photographs or such might be especially good in this sense. I know that many people have photo collections which give them many views on their libraries; the photos themselves are named in sequential order. Why name them at all? Why not just have a `photo chooser' which displays each image and its comments? Why not make the names serve as comments on the files? Because that's the way we've been doing it for 30 years?

    I do not know how workable his ideas are, but OTOH we can see that the current approach has its shortcomings. Some work needs to be done, anyway.

  • "a cyberbody is a cloud's drifting shadow overing many tiles simultaneously"
    "computers will be as anonymous as molecules of air"
    "The cyberbody arrives and settles in like a bluebird perching on a branch."


    .. and many more...


    33. A file should be allowed to have no name, one name or many names. Many files should be allowed to share one name. A file should be allowed to be in no directory, one directory, or many directories. Many files should be allowed to share one directory. Of these eight possibilities, only three are legal and the other five are banned -- for no good reason.

    Oh, and that's just plain wrong. Hasn't this guy heard of "ln"? They're "banned" in some (not mentioning any names) operating systems... and some of them have good reasons to be "banned".
  • Of course computers will change drastically over the course of the next fifteen years. Look at how far computers have come in the past fifteen years - in 1985 we were dazzled by the Nintendo, using a mixture of IBM compatible machines and Atari / Commodore 64s. Eight bits. Wow.

    But wet dreams about what the future holds aren't relevant to anything except sleeping. Of course computers will continue to evolve. They'll be faster and smaller and easier to use. They'll be pervasive. Literal shelves of books have been written on the subject - with such a dazzling array of opinions, one or two of them are bound to come through in some way. The truth is, there hasn't been a real 'revolution' in computers since 1947. All we've done with them since then is make them smaller, faster, and paint them in prettier colors.

    Why should this guy know what he's talking about? Because he's from Yale? Please. I've got friends in small towns in the midwest who have a more solid grasp on this subject than this guy. The future of computers has less to do with their size, shape, and form than it has to do with humans and human interaction.

    I'm not talking about human / computer interaction, although that, too, will continue to become more and more refined as computers become more powerful. I'm talking about the ability of computers to facilitate the interaction between human beings in a real-world environment.

    The revolution in computers won't have anything to do with computers in the classical sense; it will have much more to do with the humans. We currently view computers as a 'platform' we can use to communicate or calculate. As the revolution begins, we will come to see computers in a new light: not as a platform for, but as a barrier to communication.

    We need to come to terms with the fact that the fundamentals of our computing systems are fatally flawed and in dire need of replacement. Once we've done that, we can begin to truly redefine what we want computers for and restructure them to provide it for us.

    Then the revolution will begin.
  • OK, my little experiment is over. Only two eagle-eyed people figured out that the comment I wrote (#98) is actually a slightly edited letter written by the Unabomber to David Gelernter.

    I was amused to see that someone found it worthy of a positive moderation.
  • Those who can, do.

    Those who can't, teach.

    (Disclaimer: this applies to the university/college level. Anybody teaching K-12 is not who I'm referring to with that little sarcasm...)

    Anyway, the guy's lucky he doesn't have to get out and work in the real world.

    t_t_b
    --
    I think not; therefore I ain't

  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Friday July 07, 2000 @11:47AM (#950834) Homepage Journal
    People with advanced degrees aren't as smart as they think they are. If you'd had any brains you would have realized that there are a lot of people out there who resent bitterly the way techno-nerds like you are changing the world. In the epilog of your book, "Mirror Worlds," you tried to justify your research by claiming that the developments you describe are inevitable, and that any college person can learn enough about computers to compete in a computer-dominated world. Apparently, people without a college degree don't count. In any case, being informed about computers won't enable anyone to prevent invasion of privacy (through computers), genetic engineering (to which computers make an important contribution), environmental degradation through excessive economic growth (computers make an important contribution to economic growth) and so forth.

  • Probably today's closest equivalent of DWIM is Ask Jeeves, which is notorious for doing the wrong thing, but which operates in a context where doing the wrong thing doesn't cause harm.

    Yes. I asked Jeeves just today "How do I report a gross polluter?" and it came back with "Where can I ask science questions of Wendell Worm (yucky stuff)", "Where is the fun Web site Air Sickness Bag Virtual Museum" and "Where can I learn about the yucky worm bearded worms?"

    Obviously some work is needed here.

    Have you ever tried to get anything done inside a gloves-and-goggles VR system? I've tried six of them, starting with Jaron Lanier's original one, and they all suck. It's like trying to build something while wearing mittens.

    That's the problem, they've all sucked, to date. If I had enough money, I'm firmly convinced I could pull it off. I have friends (and countrymen, but no romans, ha ha) who are seriously into math, and some who are very much the superbadass EE. The problem is that you need tactile and force feedback, at a very fine resolution. Gloves with those characteristics are currently very, very expensive. I think that that can be changed.

  • IBM's OS/2 had an object-oriented user interface that was going to do exactly the kinds of things you mention in your post.

    The problem is that it never really worked very well (buggy code, clumsy user experience). They were planning on expanding it, trying to add CORBA support and BENTO functionality, BENTO being == Japanese lunch box == many kinds of things (food, documents, objects) laid out inside a container.

    Too bad all of this got canceled, and the implementation you can find today in OS/2 is limited, buggy and not worth a try.

    On the brighter side of things, GNOME's bonobo is trying to do something that could very well give us something similar to an OOUI. It's still beta and a long way from stable, but it's exactly what IBM, Apple and others tried to do without success in the Taligent project.

    Ciao,
    Rob!
  • What you're missing here is that as a race we're far too fragmented to ever collectively 'sit back and logically think about' technology. I don't believe there will ever be a truly 'global' society, because human social groups larger than the nuclear family tend to splinter under their own weight.

    If we're to use technology responsibly, we need to stop preaching about 'the greater good' and start thinking in a more human--and consequently more selfish--manner. Technology needs to be used responsibly not for the 'greater good' but for the 'good of myself and my family/friends/social group'

    Be selfish--it's more natural.

    Hell, let's look at the Open Source paradigm--how many of us code and release it for free to be altruistic, or because 'code needs to be free'? I don't. I work with Open Source because it betters my own personal reputation, my skill set, and my industry.

    There's nothing wrong with a little selfishness; but too much and you've just become anti-social.

    Think about it, my friend.

  • by Mr Z ( 6791 ) on Friday July 07, 2000 @12:14PM (#950849) Homepage Journal

    Of course, it sounds pretty wonky, although there are some precedents. For instance, that new Gnome mailing agent (can't think of the name offhand) doesn't have actual folders for your emails. Rather, all emails are stored in a single monolithic database, and virtual folders are constructed using queries. Even if you just wanted "Unread mail" and "Already read mail", you could construct that with queries.

    Done properly, this behaves EXACTLY as he was saying -- the "folders" automatically "pull emails" into themselves. At least, that's how it might look to a user. Particularly if the user changes their folder parameters and all of the emails automatically appear where they need to be. The physical layout doesn't require the emails to actually move. Rather, the database merely processes the query when the virtual folder is opened.

    It's a very powerful model, and a data-store structured in this manner would be very useful. I stopped short of saying filesystem, because I still feel a filesystem, at least in the sense we have today, should not be that heavily abstracted. Rather, if the filesystem is not an appropriate level of abstraction for a user's data, then a coherent software layer should construct one above that. Sorta like how your accelerator pedal and shifter on an automatic don't directly move the throttle or shift gears -- they state the intentions of the driver and the car does what's required.
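    A stripped-down illustration of the virtual-folder idea (the message store and folder definitions below are invented for the example; the real client keeps them in a proper on-disk database):

        # Messages live once, in a single store; "folders" are just saved queries over it.
        MESSAGES = [
            {"from": "linux-kernel", "subject": "USB patch", "read": False},
            {"from": "mom",          "subject": "dinner",    "read": True},
        ]

        VFOLDERS = {
            "Unread":       lambda m: not m["read"],
            "Kernel stuff": lambda m: m["from"] == "linux-kernel",
        }

        def open_folder(name):
            # The folder "pulls messages into itself" simply by re-running its query.
            return [m for m in MESSAGES if VFOLDERS[name](m)]

        print(open_folder("Unread"))

    Change a folder's predicate and every message instantly shows up in the right place, because nothing ever physically moved at all.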

    --Joe
    --
  • And would you trust the prediction of the future of computing to a guy who has to break his paper down into six pages? I would have thought that the web would have made pages obsolete...

    Mouse is obsolete, windows (Not Windows, they were obsolete in the 90's) are obsolete, filenames are obsolete, etc. etc. Sorry, not THIS decade. And I wouldn't be so quick to write off those systems designed by programmers for programmers, since we're the ones implementing your grand vision of the future.

    There are a lot of obstacles to forward progress, too. Although current bandwidth limits still make it hard to make on-demand video content available, another couple of cycles of it doubling and it will be technically feasible for you to watch Dharma and Greg off the Internet at 2 in the morning if you want to. None of the content providers will use the ability to actually provide any content though, for fear of losing their precious IP. The legal hurdles will be much harder to surpass than the technical ones.

    Current technology is primitive. The seams between the desktop and the net are very pronounced. But the correct answer to the question is The Net and The Desktop, not The Net or the desktop. And none of this stuff will evolve on its own. If a company doesn't see a lot of money in it, chances are the consumer will never see it.

  • But he's not saying anything that hasn't been said more eloquently before. His metaphors need some work, too. Data doesn't flow through computers like wind through tall grass; it's pumped from tank to tank like a waste treatment system.

    Of course, we would never have seen this article come out of the pipe at the other end, because it would have been separated out with the other sludge.

    Anyway, he does make some good points in the process (Though I didn't complete the process, because I got tired of him about halfway through, which seems to be a common thread here. Maybe someone should talk to him about how to hold reader interest.) He does bring up the point that UNIX is nothing new (which is true) and that the last big advance was MacOS, though I would say the last big advance was Lisa Office -- A simple system in which it was possible to proceed directly to getting work done without having to know how computers accomplish anything.

    I was talking about some of this stuff with a coworker just last week. What computers really need right now is a DWIM (Do What I Mean) interface. In other words, you should be able to communicate with your computer in some way that makes sense to you, and it should translate your request into something that makes sense for it.

    So now, let's discuss what I consider to be the two most important advances upcoming in computerland (No affiliation with the lame mac store) -- 3d interface, and drag-and-drop programming. Neither one is obviously desirable, necessarily, so now I'm going to get all defensive about my position.

    First of all, let's examine the 3d interface deal. We're all familiar with the way computers are portrayed in sci-fi movies, with the big transparent cubes with data flowing on them, resulting in a huge drain on the special effects budget as these animations are rendered at awe-inspiring resolutions. However, that's all bullshit. It's hard to see how an interface in which you flew through a bunch of transparent cubes with meaningless blips of light flying around is going to make you more productive.

    On the other hand, a "virtual reality" interface which allows you to interact more naturally with your data than would otherwise be possible could quite easily bring you to new levels of productivity. Right now, when you load a file, you have to swim through a deep hierarchy of files and folders to locate the file you want. It gets even more confusing when you add the network into the picture. However, a flat filesystem is not the answer either. How much sense would your personal belongings make if you spread them out in a more or less even layer across your floor?

    What would seem to make the most sense is the file cabinet/dresser drawer model. A file cabinet (or Volume) has some number of drawers (directories) which then contain folders (subdirectories). The folders then contain sheets of paper, images, or whatnot. However, this is where the metaphor breaks down, because the things inside your directory structure can be analogous to a normal physical piece of paper (like a resume, or a picture, or what have you) or an application (which is most closely analogous to another computer, in the physical world -- or something like a tape recorder).

    Anyway, let's face it, windows has the 2d market locked down. You can make a better 2d interface than the one windows uses, but it's hard to really add very much more functionality, because you're limited to just two dimensions. Sure, you can add task bars and things like that, but windows has something analogous, if less functionality - The taskbar, with its clock, system tray, and start menu does everything (except for virtual desktops, which is really a function of the window manager) that the gnome taskbar does. It's not as pretty, or as configurable, but it performs all the same functions.

    If you really want to accomplish something new, you're going to have to break into a new world and go 3d. Just think, you could have flat windows and 3d windows and just free-floating 3d primitives, all representing programs. BTW, I'm picturing a "3d window" as a sort of OS-standard "box" inside of which there would be 3d content. The things the content did would not be able to pass through the "glass" on the front of the box. I'll get around to writing up a whitepaper on this sometime soon, I swear.

    Of course, a model like THAT in particular has to have goggles with head positioning, and a dataglove for moving things around, but you get the idea, which is to stop thinking "flat" and start thinking in three dimensions, which should be fairly natural to most of us.

    The other thing I think we need to see (and that we WILL see) is some sort of simple drag and drop procedural programming in which a user never writes a single line of code. This isn't going to lend itself to every kind of program, but the idea is that you can bang out a quick tool to get something done. For instance, if you just need a simple app to view jpegs, you'd create a window, set it resizable, you wouldn't turn off any of the gadgets, you'd drag in a toolbar, create some buttons on it, and tie them to file requesters or whatnot by drawing lines. You'd then drag an image viewing pane into the window, and click "package" or something, and BANG, you'd have an image viewer. I won't even get into such things as datatypes (Now known as the Translation Kit in BeOS, but the idea came from Datatypes on the Amiga.)

    The ideas here are A> to make using a computer more natural, which has already been done to pretty reasonable degrees in 2d by MacOS, then by Windows, and finally by AOL. I hate AOL as much as the next guy, but it really is goddamn easy to use. Also, B> to make it possible for users to "write programs". A user should be able to create a tool just by slapping together precreated components. If you were a "real programmer", IE, were able to create your own objects (In whatever wins... COM+, or CORBA, or what have you) then you could sell those objects for five or ten or five hundred dollars/yen/pounds/whatever so that other people could incorporate them. It certainly makes licensing a lot more complicated, but I think the reward is worth it.

  • Don't be silly. Since the Human Body is composed of DNA, all you really have to do is make modifications to the source code and then do a ./configure, make, make install - SHAZAM! The new DNA would then be transmitted to each cell to upgrade you to the latest version.

    Wow! Just think of this! If we could make a DNA strand like the Linux kernel, just think how resistant we would be to virii!
