Let's Make UNIX Not Suck

The above is the title of the talk that Miguel de Icaza, of GNOME and now Helix Code fame, gave at OLS concerning the look and feel of the UNIXes. From what I've heard from attendees the talk was great - and now you too, in the privacy of your own home/cube/lean-to/car, can read it.
  • by Sloppy ( 14984 ) on Wednesday August 09, 2000 @07:25AM (#867587) Homepage Journal

    Funny, but concepts like pipes, output redirection, background processes, and the command line are integral to my computing experience, and I don't see UNIX sucking.

    But Unix doesn't use those things enough! The philosophy hasn't carried over to the graphical applications, so we have a schizophrenic Unix where the little text tools try to do one thing well, and the GUI applications are monolithic and try to do everything.

    If you like the idea of building a specialized text-processing app by combining some generic tools (e.g. grep, awk, sed, etc), then wouldn't you like to be able to build a specialized GUI app the same way (e.g. HTML renderer, image editor, calendar, etc)? Don't you see a difference between the beauty of grep and the disgusting bloat of Netscape Communicator?
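    (To make the text-side point concrete - file name hypothetical - here is a complete word-frequency "application" built from nothing but generic tools:

    tr -cs 'A-Za-z' '\n' < report.txt |   # split input into one word per line
    tr 'A-Z' 'a-z' |                      # normalize case
    sort | uniq -c | sort -rn |           # count occurrences and rank them
    head -10                              # show the ten most common words

    Nobody had to write a "word-frequency app"; the pieces compose. That composability is exactly what the GUI side is missing.)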


    ---
  • Insufficient Coffee in Operator - System halted.

    Sorry, got the band name wrong. That should read "Great Big Sea" (I always get it wrong), and the song is on their album "Up".

  • by tenchiken ( 22661 ) on Wednesday August 09, 2000 @07:26AM (#867592)
    You are missing the point. CORBA does not implement policy, only a mechanism. This is policy on top of any object model you choose. That is why it is powerful.

    The point is that there needs to be some sort of policy that is loose enough that anyone can use and understand it, but rigid enough to enforce a metaphor that works and is understandable by non-computer-clueful people.
  • Reading some of the comments here, it seems like people haven't really read the article or don't really understand what Bonobo is.

    Check out the full description at:

    http://www.helixcode.com/tech/bonobo.php3 [helixcode.com]

    Just to quote from that page, if you can't be bothered:

    Bonobo is a set of CORBA interfaces that define the interactions required for writing components and compound documents. These CORBA interfaces are not bound to GNOME, the X11 windowing system, or the UNIX system.

    The Bonobo distribution as shipped by the GNOME project includes the Bonobo CORBA interfaces, and a GNOME/Gtk+-based implementation of these interfaces.
  • What about Slackware?
  • by beebware ( 149208 ) on Wednesday August 09, 2000 @05:47AM (#867604) Homepage
    Isn't UNIX still (tm) AT&T? I thought the generic term was Posix...
    Posix systems aren't really aimed at beginners - that's what people keep forgetting. They were designed for use by people who know what they are doing and how _they_ want to do it - not the way a Redmond-based drone wants them to...

    Richy C. [beebware.com]
    --
  • I am very glad that just about everything I need to work with can be manipulated via some text file.

    No disagreement here -- I just don't understand why we have to reinvent the text file layout for every application, and why some of the basic functions (like cut and paste) can't be described in a single text file, like the win.ini file of yore.

    Having a binary registry with everything dumped together is the most colossal mistake of the entire Win32 system these days, but that's not to say some values shouldn't be centrally configurable (and if you had a standard XML or whatever format, you could easily make full-featured config editors!).

    Note that due to the standard file layout of the MS .ini files, the MSconfig utility can edit the .ini files on a Windows system using a tree interface, so you don't actually have to delve into them with a text editor (but of course you can if you want to). This is a perfect example of making things easier for novices (less likely to screw up the text file using the GUI) while sacrificing nothing for experts (the text file is still available for direct edits).
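    (As a sketch of what that standard layout buys you - section and key names hypothetical - the grammar is just [section] headers over key=value pairs:

    [Desktop]
    Wallpaper=clouds.bmp
    ShowIcons=1

    [Fonts]
    MenuFont=MS Sans Serif

    Because every app shares this one trivial grammar, one generic tree editor works for all of them.)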

    I'm an investigator. I followed a trail there.
    Q.Tell me what the trail was.
  • The thing that most user interface people do not seem to understand is that the power of a computer really comes from the fact that a universal Turing machine can simulate another universal Turing machine. Ok, well that explained nothing, so let me be more specific:

    Microsoft does not (and does not want to) sell powerful and flexible computers/interfaces. They want to take advantage of the fact that computer hardware manufacturers sell a universal Turing machine to allow themselves to sell black, inflexible, easy-to-use boxes (like word processors, web servers, web browsers, spreadsheets, etc.).

    This is just like a TV company saying "You can buy a computer with a large monitor cheaper than our TVs, so I'll sell you a computer with a large monitor and software to make it act like a TV" to save money.

    Anywho, the moral of the story is that black boxes and unnecessarily specialized systems are bad. The power of a computer comes from the fact that it is not an unnecessarily specialized system.. and a powerful user interface must inherently be a programming language. Now, it may be an easy-to-learn-and-use programming language (like that scripting language Macs had), and it may even be possible for a really stupid person to use a computer for years without noticing its additional abilities, but it must have the full power of a Turing machine.

    Note: I'm not talking about having a scripting language alongside the user interface. I'm talking about the very underlying user interface being a scripting language. This is necessary to make the transition from user to programmer as painless as possible.

    Anyway, it's only when everyone has some limited programming instincts that people will not waste time doing the same thing over and over again. This should be the goal of user interfaces, and Unix got a nice start on this goal (via shell scripting), but it's necessary to expand this today to include GUI interfaces too.
  • by JonK ( 82641 ) on Wednesday August 09, 2000 @07:03AM (#867608)
    UNIX and UNIX-like Operating Systems were not created to serve as desktop Operating Systems

    Bollocks, pure and simple. The first real-world deployment of Unix (i.e. outside ken's lab) was to the secretaries in the patent department of AT&T to enable them to write up patent applications, hence the emphasis on text-management and typesetting software in the early releases of the Unix system.

    As for Unix being intended as a server OS - quick reality check required. Unix dates from the late 60s/early 70s. The client-server separation was fifteen years in the future - back then there were no clients. Unix was designed to support multiple terminals (tty0 - n) hanging off a central computer (originally a PDP-7, IIRC) which displayed plain text in black and white: termcap files, full-screen editors and the like didn't arrive on the scene until the mid-70s, and prior to that people edited with ed.

    Get yourself a copy of the Unix Programming Environment and read it, then come back and post an informed comment.
    --
    Cheers

  • There are all sorts of things that you can expect to be able to do on any Unix-like OS. You expect X-Windows, NFS, vi, etc. Yet at one time or another most of the Unix "standards" had some sort of competition.

    Miguel is proposing to standardize Linux (and Unix as well) simply by creating Free software that is so cool that it becomes ubiquitous. Instead of writing their own HTML widget, XML parser, Address book, text editor widget, etc. programmers will simply borrow the GNOME widgets. Programmers will do this because GNOME is free, comes with source code, and because GNOME can be installed anywhere.

    GNOME will simply become one more layer in the existing framework that is UNIX. Some people won't use it in much the same way that some Linuxers don't use X-Windows. But most of the new software will happily use these shared components in much the same way that they happily use the existing shared Unix libraries.

  • by Deven ( 13090 ) <deven@ties.org> on Wednesday August 09, 2000 @05:54AM (#867610) Homepage
    Actually, I believe AT&T sold UNIX (including the trademark) to Novell, who later sold it to SCO. As far as I know, SCO still holds the trademark. (It was SCO that gave permission for the Lions book [amazon.com] to be published with V6 UNIX source code in it...)
  • by NMerriam ( 15122 ) <NMerriam@artboy.org> on Wednesday August 09, 2000 @05:55AM (#867611) Homepage
    Posix systems aren't really aimed at beginners - that's what people keep forgetting

    No, people keep forgetting that everyone is a beginner at some point.

    Making a system good for beginners does not mean crippling it for advanced users. I think we should be past the point where complexity for complexity's sake should be attractive...

    I'm an investigator. I followed a trail there.
    Q.Tell me what the trail was.
  • Saying that CORBA's interface versioning solves this problem is like saying that a hammer solves the problem of nails. It's a tool, valuable to the extent that people know how to use it properly by making sure that the interface definition accurately and completely describes the behavior of a component and that the "actual interface" doesn't change without corresponding change to the "official interface" - something that doesn't just happen on its own. Arguably, what you really need to solve this problem (and, even more arguably, what some languages or programming models give you) is a way to ensure that all behavioral dependencies are captured in the interface definition and that nobody can even begin to depend on something that could change.

  • by tenchiken ( 22661 ) on Wednesday August 09, 2000 @07:32AM (#867615)
    Here is my point. It isn't just CORBA. CORBA is a neat way to force applications to talk in a non-pointer-determinate fashion. It's about the policy (that is what Bonobo partially delivers).

    There has to be a way for a system to serialize data, provide access control to serialized objects, and store those objects. This is what Linux is missing in a big way.

    Components are cool, but if we are going to add components, let's add a consistent model or policy, and flexibility. Let's also make it so that there is a common configuration interface, a block or object interface, and some hierarchical or non-hierarchical way of looking at it.
  • Unix DOESN'T suck. (By UNIX, I am referring to both UNIX and its variants.)

    So what's the problem? What DOES suck is the attempt to make Unix into a friendly operating system. Great.....IF you know what you're doing.

    The Microsoft Windows 95 interface is successful because they had Human/Computer Interaction experts working to design the interface. (Don't start with the 'Microsoft copied Apple copied Xerox' crap. Each interface is different enough that SOME design went into it.) The problem here is that we have the Gnome people, and the KDE people, etc., trying to make a graphical interface - and they're trying to be too much like the other interfaces out there, and the rest of the design is just being assumed.

    UNIX doesn't suck. It was designed to be a powerful network operating system - and that's exactly what it is. Sure, 10 years down the road it's going to be different. However, the basic functionality will be the same.

    To say an Operating System sucks because of bad design of an additional interface is ignorant.

    -- Give him Head? Be a Beacon?

  • ``Miguel also should have picked a better title. "Unix sucks" is *not* going to get the mainstream to read the article, they're too used to one-line sound bites''

    ``Unix Sucks'' was not the title! The title was ``Let's make UNIX Not Suck''. I don't think there really could have been a better title.
    --
    Ski-U-Mah!
  • I really can't find much here that I can agree with.

    Code Reuse was the mantra in the eighties, not the Next Big Thing for the year 2000. There are obviously some cases where it makes sense, but it's not a cure-all for derailed software projects. Code reuse is well suited to projects with a short development cycle, a short life cycle, and a small user base. Most internal IT applications fall in this category. But it doesn't make much sense for projects with ongoing, constant development, long life cycles, and millions of users. This is where fine-tuning "custom" code makes sense, despite the current aversion to hand-written code by people who consider themselves to be knowledgeable about such things. Every re-write is an opportunity to improve the code. Software reuse effectively eliminates this opportunity, and in time leads to an obsolete and brittle code base. Code reuse is a part of effective software development, but it shouldn't be the central tenet around which it revolves. That distinction belongs to good design and well-written code.

    The 'no policy' policy of X is a Good Thing, in my opinion. There are some advantages to having a consistent UI, but they are outweighed by the disadvantages. A certain amount of chaos always accompanies innovation, but chaos is better than being subjugated to an inflexible policy. There are always going to be people who want to force their ways of doing things upon others, and UI policies are a great way for them to accomplish this. The best policies are the ones that are voluntarily adopted by large numbers of users over time. If it works for that many people, then there's probably something good about it. But if there isn't consensus in a particular area, trying to enforce policy is not going to meet with much success. MSFT gets away with this because they have a more or less captive user base.

    Miguel has a few good points here, but overall this article could be more appropriately titled "Let's make UNIX suck like Windows".

  • Used to be, anyway. But there's no way I'm installing eighty-eleven libraries the hard way. I'm gonna find RPMs. And RPM -U removes the old lib. Whoops.

    That is a misfeature of RPM then (and of other packaging systems which have the same problem). Dependencies should not be an either/or thing -- different versions of libraries can and do coexist just fine. I agree that -U should not nuke old libraries by default, but an alternative would be --install --force, whereby both library packages are in place. Of course, one must use this with caution, as many packages include other files (headers, config files, etc.) with libraries, which IMHO is also a mistake. Libraries are LIBRARIES; header files and config files are something else. The -devel scheme tries to address this and (mostly) succeeds, but a more explicit approach might be better:

    package-0.0-1.bin.rpm
    package-0.0-1.conf.rpm
    package-0.0-1.lib.rpm
    package-0.0-1.devel.rpm

    Whatever scheme is used, I agree it is a mistake (and disingenuous to some degree) for popular packaging systems to actually undo one of *nix's tremendous strengths: library versioning and peaceful coexistence.
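    (A sketch of the peaceful coexistence I mean, with hypothetical library names and version numbers:

    ls /usr/lib/libfoo.so.*
    # libfoo.so.1  libfoo.so.1.0.4  libfoo.so.2  libfoo.so.2.1.0

    Old binaries keep resolving libfoo.so.1 while newly linked ones pick up libfoo.so.2 - nothing needs to be nuked.)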
  • My guess would be that much of this has to do with code reuse in graphical applications. Netscape takes up a lot of space, but not one byte can be used by other applications. In contrast, MS apps can make use of the code in IE (even the web browser part of Winamp).
    The same could be said for StarOffice, which is a big bloated mess, but can't share its code.

    The real future of Linux is looking towards things like Mozilla and Galeon, where Galeon can reuse the code from Mozilla to create a small browser.
  • I think your point is valid, in part -- there are a lot of programmers who just mindlessly crank out redundant code and tools. However, would you say that Windows developers, as a general rule, are much more innovative and clever in their development style, or that their code is usually better?

    The primary difference, in my eyes, is that with open source, there's a chance of breaking this cycle. If people can be persuaded to support a larger project, or combine efforts when their applications are similar in purpose, then they can learn from each other's mistakes, and come up with a much better product. And, the little projects that fork off from the "core" codebase sometimes evolve into something that's better suited to a particular niche.

    Would a tool like thttpd still exist if it had been written exclusively for Windows, and the source had been closed? Sure, Microsoft might have simply bought the rights to it, if users wanted that functionality, and rolled it into the IIS codebase. But then it would just be another checkbox on the same monolithic monster, instead of a tiny, hella-fast "ace in the hole" for certain applications.

    I would hardly say that Windows programmers don't "recycle shit that's already written," since they largely put together their projects out of Microsoft-supplied code. The Windows platform basically forces you to use the "blessed" libraries. That may help to limit duplication of effort, but it also limits your choices.

  • "Various people like to criticize Microsoft for producing "bloated and monolithic applications". Before we criticize Microsoft, lets take a look at the end user applications that we have on Unix outside of GNOME: Netscape, GhostView, XDVI, Acrobat, Mathematica, Maple, Purify, FrameMaker, Star Office. "

    Your examples of software projects that fail at code reuse are mostly (or all, I'm not sure) proprietary, non-free projects?!?

    I've seen studies on the amount of code reuse among free software projects, and it's quite surprising how much is reused - from libraries you don't mention, like readline and regex, to small bits and pieces here and there.

    "The only common denominator on those applications is libc and Xlib."

    Perhaps you aren't aware of a little program called `ldd`. Try this:

    ldd /usr/bin/* 2>/dev/null |grep "=>"|sort |tee /tmp/ldd.1 |cut -f 1 -d " "| uniq > /tmp/ldd.2; for i in `cat /tmp/ldd.2` ; do echo -n "$i : "; grep -c "$i" /tmp/ldd.1 ; done

    That's what my "computing experience" is about, and it's not something I would ever want to try with a drag and drop interface.

  • The way I read it, the toolkit Miguel is describing does NOTHING to your ability to hack at it w/your text editor. Essentially, it's a wrapper: for those of us who don't WANT to remember the syntax of every fsck'ing config file in /etc, we can run the toolkit, and it will edit the files according to the syntax rules FOR us. If the system goes down hard, and the CORBA interface goes w/it, then you dust off vi and edit by hand. The XML references he makes are the toolkit's backups, not the new state of your files. The config files you'll find exactly where you left them, with the content you wanted, but (here's their whole point) without the syntax errors that currently crop up occasionally when even the best *nix hacker goes to work.
  • The article is really good, but I think that he falls into the same trap as everyone else. What do I mean by that? He is creating another system to do component-level reuse when they should instead be focusing on a unifying policy. There are no fewer than four different object models at this point (Mozilla's XPCOM, GNOME's Bonobo, KDE's whatever-their-new-name-of-the-day-is, and GNUstep's ib).

    Instead they should be looking at a unified policy. The reason that Linux will never rule the desktop the way things are is because Microsoft is simple. You want to change a setting for the system? Go to the control panels. Change a setting for an application? Go to Options->Preferences.

    I am not saying that these are the options to use, but instead imagine the following. Envision something like the /proc filesystem, called /options. If you want to change something, cat a new value into it. If you want to find out what something is set to, read the file. Include interpreters to map sendmail options into the tree (something like /options/sendmail/rulesets/6) and some way to store that, either by a translator or in XML.
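    As a sketch of how that would feel in practice (paths hypothetical, of course), configuration becomes plain file I/O:

    cat /options/sendmail/rulesets/6                          # read a current setting
    echo "relay.example.com" > /options/sendmail/smart_host   # change one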

    Better yet, do it in RDF and let any arbitrary program write to the RDF tree like a filesystem. For example, a translator could map the filesystem into something like /World/Home. Options could be mapped into /World/Preferences/User or /World/Preferences/System. Persistently write objects into some kind of back store by their CORBA id or XPCOM id in /World/Objects/serialize/global or /World/Objects/serialize/local.

    Use translators to set options for Apache etc. Use Mozilla or IExploder to set your configuration. You could probably serialize objects into the RDF tree and bingo, you have poor-man's DCOM and object sharing. Wouldn't it be cool if you could write something like this in Perl:

    #!/usr/local/bin/perl
    use UnifiedModel::RDF;   # hypothetical module from the proposal above

    my $print = new RDFNode("/world/www.ahost.com/objects/default_print");

    $print->isA("Printer") || die "Could not open\n";

    $print->print($text);

    Why should we do this? By having a loose policy implemented via RDF or something, we have a single simple policy for storing information and a powerful way to share components.

    I am seriously considering coding something like this. Anyone else interested?
  • >3. Mail Client.

    >Sorry, but you've got no defense here. Balsa, Mutt, even emacs will read mail. Gnome folks are even building an Outlook clone.

    >4. Editor. Uhh, I use vi and emacs when there is absolutely, positively, nothing else available. Don't get me wrong, I first learned emacs over 8 years ago. But there are some basic functions which I rely upon that don't exist in emacs. Give me something like HomeSite on a linux box and you've got a convert.

    Try Screem.

    That's what I liked about the Red Hat distro. It came with Pico and Pine. 2 great tastes that taste great together. If Corel Linux was made for native Win95 users with its fancy interface, why were Pico and Pine left out? Those to me are the easiest-to-use console mail/editor apps that I have used since 1993.

    Just my 2 cents.
  • No, learning to code is not copying people's work. If you want to learn to code, then use parts. Pieces. Toy with it. But don't make an entire copy of some fucking way-overdone app and post it on freshmeat and consider yourself a badass open source hacker.
  • by Animats ( 122034 ) on Wednesday August 09, 2000 @07:43AM (#867647) Homepage
    That's a good article.

    What he's really pointing out is that UNIX doesn't have a modern middleware layer.

    The history of modern middleware begins with Visual Basic "buttons", which were invented by Alan Cooper. (Microsoft bought this technology; it wasn't developed there.) Visual Basic made it possible to write medium-sized business applications with graphical user interfaces without much pain. Code reuse worked well in that environment, and it was easy to access a database. You could have the good programmers write a "button" and let the lesser programmers drag and drop the button into their app. This was a major driver in moving corporate America off the green-screen IBM mainframe terminals and onto Windows.

    Programmers were pushing the "button" idea way beyond its intended uses. So Microsoft expanded it into COM, DCOM, and ActiveX. It turned into a huge proprietary do-anything object system. But a lot of work gets done with that toolset, even though it's become ugly.

    de Icaza has correctly identified the lack of a comparable middleware system as a serious problem in the UNIX world. Whether he can fix it remains to be seen. It's very hard to get this right. The past is littered with failed middleware environments: OpenDoc and NeXTstep come to mind.

    A big problem is that if you let the object fanatics design the thing, it ends up too abstract and complex. If you let the UI designers design the thing, it ends up not powerful enough. CORBA and Prograph are at opposite ends of the spectrum here. (If you let the hackers design the thing, it ends up like Perl. Perl, remember, started as a tool for reading text logs. It's a special-purpose language pushed way beyond its design basis.) This requires really good engineering judgement.

    For some insight on how to make design decisions here, read Weaving the Web [amazon.com], by Tim Berners-Lee (you know, the guy who invented HTML and the Web), page 182, where he discusses why HTML isn't a programming language like TeX. He says it better than I can, and I'm not going to repeat him here.

    I hope de Icaza can pull it off. From reading his article, he has the basic good sense needed to get it right. Best of luck to that project.

  • by MenTaLguY ( 5483 ) on Wednesday August 09, 2000 @07:43AM (#867649) Homepage

    This is exactly one of the reasons UNIX is in so much trouble -- even something simple like the concept of shared libraries (at least in practice) is a relatively new thing in much of the UNIX world. It's not very well understood, let alone welcomed.

    People bitch and moan about having to deal with getting a whole bunch of libraries because of runtime dependencies. Deal. It's the cost of componentizing software, and it's not like there aren't tools that can manage the complexity for you (e.g. apt-get).

    And, for those of you who don't think you need shared libraries, try replacing all the binaries on your system with statically linked versions and see how much disk space you have left...

    There is also a good deal of management flexibility and efficiency that you gain through using higher-level component systems, as well. I know the Unix pipeline has been offered as a viable model, but it's not really complete enough.

    Unless you extend the role of the filesystem abstraction as in Plan 9, the UNIX file/pipeline metaphor is simply not a viable component model in and of itself -- if it were, every application could be written as a shell script (which you can mostly do in Plan 9, btw, even if it would be a bit slow).

    There's a point where you just have to stop managing all of the complexity yourself, and delegate some of it to software. The untyped data streams of the Unix pipeline (without the additional policy/flexibility imposed by Plan 9) don't sufficiently allow that.

    Failing a Plan 9-esque level of abstraction, you're going to have to settle for a higher-level layer like Bonobo to really function in a modern environment.

  • Well, the "integration" of Linux Mandrake of Helix GNOME is really sad. There is no single day that passes without Jacob receiving mail for the broken way Mandrake distributed Helix GNOME.

    Miguel
  • The ability to evolve and survive is indeed one of the strongest features of any *NIX system, but there comes a time when rigid standards and enforced policies in the foreground are necessary.

    For instance, the Open Group has required CDE as part of a distribution in order for it to be called UNIX. Although I personally do not like CDE, the use of a standardized window manager that lets multiple flavors of UNIX look and feel similar is a necessity for its survival. It allows application developers to hit a target that is not moving; they may have to deal with some library issues, but they don't have to deal with customizing the UI to several different standards too.

    At the same time we do need to keep the flexibility that *NIX enjoys. That is one of the reasons why I switched to Linux. But I have quite a few friends who want to switch to Linux and just want a standard GUI. They don't want to deal with which window manager, which application manager, etc... They have enough problems deciding which distro to pick. They don't want to take sides in the geek religious wars that are being waged; they just want it to work, and consistently.

    Linux needs more standards, especially in the GUI arena. I say keep the religious wars in the background, but keep them accessible to those who want to participate. Decide on standards and then send them to the forefront. This will make companies trying to support Linux happier, it will allow for easier customer support, and it will draw in a larger group of users.
  • The biggest source of flexibility in UNIX is that everything can be manipulated with linguistic tools. If Microsoft ever ships a truly easy to use from-the-command-line scripting system that can easily interact with and manipulate COM objects, they'll have achieved much of the flexibility of UNIX. If they ship Perl/Python, then they'll have a language rich enough to generate its own hashtables and lists of data that can then be analyzed/treated as if it were a text file, and regexp's and the like will be available to work with their full COM suite.

    I've spent almost five years now developing a GUI, object-based sysadmin project for UNIX [utexas.edu], and it has taken almost that much time to convince some die-hard UNIX traditionalists here that the very powerful consistency safeguards, error checking, privilege delegation and support for n-user simultaneous editing that it provides were worth giving up the ability to grep the passwd file.

    I'm with Miguel all the way on this.. something like COM can mean *more* flexibility, as long as we have good scripting tools that make working at the higher level easy, and as long as the COM-style interfaces are designed with a *lot* of thought towards flexibility and security.

    If we get COM-style interfaces that prevent us from doing things that the designer thinks we shouldn't do (such as setting a pre-hashed password for a user account, which NT doesn't provide an API for), then a COM would become a barrier. If the interfaces are designed to be as open as possible while still hiding implementation-specific details and providing safety and error checking, then a COM becomes a tremendous strength.

    I sure wish Miguel would have talked some about security, though.

  • Back when I was a total computer newbie, I was using a DOS computer. I knew nothing about DOS, but I had a program that would dial up to BBSes for me. My roommate set that program up, and I used it. After a few months, I got a shell account on an ISP. I knew nothing about the 'net or UNIX. This ISP had a great menu system, though. It would give me options for commonly used programs and system functions, and while it was doing that, it would print out the exact command that it did. After a while, I started using those commands directly at the system prompt, to save time.

    Soon I was looking up man pages on various commands like vi, as I tired of the limits of pico. I was looking up various things on how to customize my shell and use it better. A little while after that I was installing Linux on my home system (this was a few months before kernel 1.0), and a couple months after I managed to get that working with X and PPP, I wiped out my DOS partition. I've been learning much more since.

    I don't think I would have started out liking UNIX at all, if I didn't have some sort of guide like that character based menu, that told me exactly what was happening behind the scenes in the shell.

    Something of this concept, perhaps in a GUI even, would be very good for beginners, and would do nothing to cripple things for advanced users.
  • It has often been argued that competition is good, and therefore we need both KDE and GNOME. But after reading Miguel's very nice article it seems to me that this is not the case. The thing we really need is common standards. If KDE and GNOME are not willing to agree on a common standard then it might be time for one of them to die.
  • use CORBA.

    (damnit Rob, I have the sentence "use CORBA" and that fucking lameness filter freaks.)
  • I'm all for making things easy to learn. I am NOT in favor of making them just like Microsoft.

    I totally agree that GNOME, KDE, Linux, and Posix-based systems should try to avoid being like Microsoft (with the exception of COM / CORBA type stuff). Actually, I would say that user interface developers should say "Microsoft did it this way, that means it must be bad, unless there is real academic research backing this idea up" (real academic research means research not being performed by the "Microsoft did it all right" morons who sometimes get tenure in CS departments).

    Unfortunately, I do not see how the complexity you describe relates to that (except the "associate files of this type" complaint). I do not think I'm the only one, since all your other replies go to answering the library question without answering the associate-file question or the larger question of copying Microsoft. This means you really need to express yourself more clearly.

    Anyway, I agree with you that there are many stupid things which GNOME (and KDE) get from Microsoft. They should really try to emulate a good user interface like Plan 9's.
  • Right, I agree that WSH is some pretty nice stuff, but so far as I know, Microsoft hasn't completed the picture by shipping out-of-the-box a Perl or Python (or anything even remotely similar) runtime that can be used from the command-line. I know you can write VBA, JavaScript, or VBScript code that interacts with COM, but until that capability is as ubiquitous as the gawdawful .BAT file handling, and doesn't require a GUI application, then all versions of Windows will be really limited.

    I want to be able to ssh into a Windows NT system and run a script to interact with the operating system.. create users, synchronize account groups, and the like. Today I have to install a third party RSH/SSH service along with ActiveState's Perl and the Win32::NetAdmin and Win32::AdminMisc modules. It should be possible to script things without any GUI tools on NT today.
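    Something like this is all I'm asking for (host and script names hypothetical, assuming that third-party SSH service and ActiveState Perl are in place):

    ssh administrator@ntbox perl sync_groups.pl engineering

    No GUI, no console session - just a command and an exit status.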

    And .BAT doesn't even come close to being the same concept as what I'm talking about here, of course. ;-)


  • Some very good points here. When I get moderation I'll remember to come back here and bring this up.

    However, I'm not convinced that XML should be used. I would insist that the configuration file abstraction be one layer up, meaning any configuration format is allowed as long as the common parser can parse it into whatever the tree might look like in memory. Then you're at the same point you would be at if the file were XML. IOW, the memory model is the same but persistent storage is configurable.

    I have looked at XML quite a bit. I haven't written code for it, but think about what a typical config would look like in XML. Try something as trivial as syslog and you'll see how ugly it can get. Not all configs look like they map into XML as well as apache.conf or smb.conf.
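    For example (element names invented purely for illustration), a one-line syslog.conf rule like "mail.* /var/log/maillog" could balloon into:

    <selector>
      <facility>mail</facility>
      <priority>*</priority>
      <action type="file">/var/log/maillog</action>
    </selector>

    The in-memory tree is fine; it's forcing every on-disk config through XML syntax that I'd keep configurable.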

    KidSock

  • See my response to miguel
  • You are one confused person.

    It is based on COM or whatever name they give to COM these days (COM+ :-)

    miguel.
  • Why is it bad to copy someone else's stuff in the software industry?

    I never said it was bad. I said it was bad when it's not done WELL.

    -- Give him Head? Be a Beacon?

  • The GNOME project was started because of the licensing problems in KDE and Qt: the result was not a free system

    Miguel
  • by miguel ( 7116 ) on Wednesday August 09, 2000 @07:15AM (#867683) Homepage
    We have now reduced all the complexity that you described above. To install, set up and configure your whole system:

    lynx -source go-gnome.com | sh

    We take care of the library issues for you, and you can focus on compiling Galeon (which we plan on including in Helix GNOME as well in the near future).

    Miguel.
  • The problem with Microsoft OSs, MacOS, etc. is that they try to keep us from doing certain things. Unix takes a much better approach. You can do anything on Unix, and doing anything won't even take the system down with it. Sure, it might be a little bit easier for a newbie if we could present one interface and say "this is Linux" or "this is FreeBSD", but it's better to say "this is RedHat with KDE", and if you don't like it, change it. Why should I be stuck with gnome or kde or twm when I can switch at will? What we need is easy-to-use newbie configurations that will hide all the choices; distributions like Corel have tried but aren't there yet. I refuse to give up my choices just so a newbie can use Linux, especially when we can accommodate the newbie AND keep our freedom.

    That said, Miguel has a point about code reuse, and my guess is that many people reinvent the wheel because they enjoy the challenge of coding. We certainly are on the way to reuse with things like Bonobo, and I think we'll see more reuse in major projects than we have in the past because it makes architectural sense.

    Miguel also should have picked a better title. "Unix sucks" is *not* going to get the mainstream to read the article, they're too used to one-line sound bites.

  • by freebe ( 174010 ) on Wednesday August 09, 2000 @05:58AM (#867689) Homepage
    Maybe the reason UNIX sucks is that it's a square peg in a round hole? That UNIX is designed with a completely different philosophy - interoperability, modularization, tools, etc.? I'm having a bit of trouble here - the reason he's saying UNIX sucks is that it's not an object-based system? Funny, but concepts like pipes, output redirection, background processes, and the command line are integral to my computing experience, and I don't see UNIX sucking.

    Or maybe I just don't try to use UNIX as a component-based system, and as such don't see it suck. Maybe I'm not fitting a square peg in a round hole (or vice versa). When I want object-ness, I do use BeOS. UNIX!=User-friendly object-mish-mash-component-SOAP-XML-Hype. UNIX is a way of thinking that's different from other paradigms, and because of this UNIX sucks? I hardly think so.

  • Normal people (ie, non-zealots) call it X-Windows. It's a conceit born of the popularity of Windows, but it's true.

    The same reason people rarely say "GNU/Linux" - it's not that we don't love the GNU tools, but give me a break, it's Linux.

    Coca-Cola would have an easier time trying to convince people to stop calling it "Coke". We like shorter, easier ways to say things...

    I'm an investigator. I followed a trail there.
    Q.Tell me what the trail was.
  • No, sorry, that was *not* an insightful comment, but a comment that deliberately missed the point.

    Posix *wasn't* designed for beginners, but surely there is no reason why it should not *now* incorporate GUI aspects that enable users to access it without needing to become propellerheads - and without stopping PHs from doing things how they want to do them.

    There is no real excuse for elitism except for insecurity.

  • I agree 100% with what you wrote. This is something that concerns me a great deal in general. If Microsoft has their way with .NET, there will be no room for Linux. So what are the biggest elements of .NET?

    The ability to mix and match objects.
    Fairly transparent object invocation, and the ability to access your information anywhere.

    What if we had some kind of system that stored everything in a browsable tree? I.e., any object could serialize itself to the tree, and anyone (provided they had access permissions) could access, store, read and run parts of the tree. Even more so, what if everyone could access their tree locally.

    For example, in C++:

    umGeneric = new obj_from_tree("oft://objects.blah.com/World/objects/default_printer");

    assert(umGeneric->isA("printer"));

    umGeneric->print(string);

    Parts of the tree could be simple dictionaries (in Java, or hashes in Perl) that hold configuration files etc.

    Sorry, but wouldn't this be cool?

    Anyone interested?
  • Well, wait a minute. Let's look at something I don't think anyone has mentioned yet, apropos the "Strengths of Unix vs the Weaknesses of Windows" and vice versa. Miguel seems fond of taking specific examples from the Unix world and saying "this is bad" and then mentioning vague generalities from the Windows world and saying "this is good".

    Part of the problem is that there is no "Unix", in a concrete sense. Unix is more of a Platonic ideal than a specific system. Solaris is a kind of Unix, HP-UX is a kind of Unix, Irix is a kind of Unix, Linux is a kind of Unix (OK, maybe not legally, but philosophically), but none of them is Unix. To say "Unix does foo" is misleading; Unix doesn't do anything - specific implementations do things.

    Miguel uses examples like ssh, Samba, and Apache. Are those "Unix"? To my way of thinking, no; they're applications written for implementations of Unix. Bad applications do not make a bad OS - you can write a Windows application that doesn't use any reusable code and breaks every standard, but that doesn't change the strengths or weaknesses of Windows. Now, Miguel is right when he says (or implies, at least) that certain systems make it easy to write component-izable stuff, and Unix isn't one of them. But I think he's wrong in implying that a monolithic component architecture is the answer.

    Unix applications don't use reusable code or talk to each other? Sure they do - and I'm not talking about pipes, either. Does your web browser care what network card you have? No, because it only talks to the layer of the stack directly below it, and each layer of the stack is a black box only providing a fixed set of services.

    Part of the problem with Unix GUIs, I think, is that they've broken with the stack-oriented model to some degree. The application is responsible for too much, which is part of why X apps tend to look very different. Sure, you can link to Motif without worrying about Motif's internals, but that only exemplifies the problem - imagine if your web browser was "linked" to a specific network protocol!

    I guess I'm partly agreeing with Miguel here, in that you should be able to code without worrying about how services are provided. The difference is that Miguel seems to think that centralization is the way to go, whereas I think that decentralization with proper understanding of responsibilities is the way to go. It looks to me like in Miguel's world, you'd be stuck with one model for everything, whereas in a protocol-layer world, you can change any layer without affecting others.

  • I know the person I replied to didn't specifically mention 'interface', but that's what he was talking about. The 'interface' between the user and the system - i.e. how the user interacts with the system - is what we're discussing. This doesn't have much to do with the core of the system itself - for example, the virtual memory subsystem doesn't need to be 'user friendly', since most users will never use it directly.

    And I agree, choice of interface is good. BUT - when dealing with specific apps, it is very difficult with today's technology to create 2 UIs to the same underlying code. Perhaps with Glade and similar apps this will become more convenient in the future, but for the moment programmers usually need to try to satisfy both groups of users (newbies, power users) with 1 interface.

  • Hey, I am not a coder, just an end user, and I learned the hard way from experience to just hate Windows machines; Linux is a godsend to me. I don't think I'll like a more "user friendly" machine if that implies dumbing down Linux! He writes: "Unix is a complex system internally despite its simplicity in its design, but it is not a system ready for end users. And this is what the GNOME project is building on top of the existing Unix foundation." Well, I for one am an end user, and like it the way it is, thank you very much! Let's hope this does not degenerate into another dumb-box type operating system. Just my rant.
  • True. It's no big harm. They just annoy me.

    And my patience is thin. I need to get laid.
  • Except that GNOME is going about this entirely the wrong way. They're writing a lot of useful stuff (the canvas, HTML components, etc.) except they can't figure out why somebody would want to use this stuff outside of GNOME. GTK+ could benefit from the standard inclusion of some of these things, and it's like fighting for a firstborn to move them out of GNOME into GTK+.

    Is it so strange that the GNOME team primarily write GNOME components? GTK+ is a small and fast widget set, and I think huge complex components like the GNOME canvas are the last thing it needs.

    Example: In the previous article about Miguel speaking (sorry, no reference), one poster mentioned how he had gotten flamed for taking the GNOME html component and removing the GNOME dependencies. Clearly, an html component that everybody can use is a good thing. Requiring GNOME to use this html component is not a good thing.

    AFAIK the only "GNOME html component" around is the GtkHTML component (used for example in the Helix Code Installer, Updater and some wizards). I'm pretty sure it works in GTK+ only apps too.
  • by jonabbey ( 2498 )

    What you describe is similar to what the OODS [oods.org] team is trying to put together. Their idea is to have a single API for interacting with any kind of data held in a directory-style structure, with the OODS software providing client access, permissions controls, and back-end data store drivers.

    They are just at the point of planning and discussing everything.. it's not clear that any significant amount of code will be written in the foreseeable future, but if you want to join a discussion with people who are working on this sort of thing, check it out.

  • by miguel ( 7116 ) on Wednesday August 09, 2000 @09:22AM (#867714) Homepage
    * My Background

    I was asked once at Usenix, once at OLS: "How long have you been using Unix?". At OLS someone just assumed I was a newcomer who had used a Mac all his life, that I had no idea what I was talking about, and that I would be better off clicking icons on my Mac.

    I have been using Unix since the early 90s. My first contributions to free software were in 1992.

    I was the main author of the GNU Midnight Commander, a file manager that was a clone of the DOS file manager called the Norton Commander.

    Later, I started working with David Miller on the SPARC port of Linux: I worked on the kernel and on a bunch of device drivers that made the system usable. I also ported three libcs and did significant work on the various dynamic linkers used on the port (libc4, libc5, and partial work on the GNU libc port).

    Afterwards, I worked with Ingo and Gadi on the Linux software RAID-1/4/5 implementation. Ingo later perfected it to the beautiful levels you see now.

    Later I joined the Linux/Indy team in which I worked on various tasks to bring up a complete system to Linux on the Indy. I abandoned the work when I began working on GNOME, three years ago.

    Miguel
  • by rhavyn ( 12490 ) on Wednesday August 09, 2000 @07:56AM (#867715)
    What is your point? If you want a web browser, download one. Mozilla doesn't depend on GNOME or KDE; neither does good old Netscape. If you want to use a GNOME app, you need GNOME. Kinda like how if you want to run a Win32 app, you need the Win32 library. If you want to run a Mac app ... yup, you need whatever libraries MacOS provides. Let's even jump back a step. Why do I have to install pthreads if I want to run mySQL? Some programs have requirements, and some people decided that they are going to write a GNOME app. So, you can either download the libraries (I prefer them to come in several small libraries than in one 100+MB library) or you go off and write your own.

    No one is forcing you to use GNOME or KDE or XFce, and some people genuinely like them.
  • I don't think Unix was ever built to have a pretty interface. After the iMac, it seems everyone wants to abandon functionality for style. Not that Unix is losing functionality by this; it's just taking away time that might have been well spent developing other features.

    Some OSes weren't made to be pretty, and I think Unix is one of them. I'd much rather have its power than any other operating system's GUI any day...
  • by (void *)0x00000000UL ( 212170 ) on Wednesday August 09, 2000 @07:57AM (#867721)
    You hackers don't get it. Normal users have real work to do and a tight schedule. They can't afford to waste time learning apps - apps have to work out of the box and be easy to use. What's wrong with that?

    If you only write stuff for hackers, how do you think Linux is going to succeed in the desktop market? You can't just ignore your users' needs.

    There has to be a standard way of doing things.

  • It really does not matter what you use. If you use something like RDF or SOAP to handle the low level, you can use something very lightweight (XPCOM or Perl namespace munging) to deal with the actual calls.

    CORBA is slow, overweight and has no policy. These are problems with it. DXPCOM would be my first choice, but it does not exist yet.

    If I were to actually code this, I would probably use XPCOM. I have enjoyed using it for the most part.
  • by NMerriam ( 15122 ) <NMerriam@artboy.org> on Wednesday August 09, 2000 @06:02AM (#867724) Homepage
    I didn't see anywhere in his paper that claimed a complete novice should be able to operate the system without assistance. He's calling for a standard interface (or guidelines), code reuse, and less complexity for complexity's sake. We shouldn't have a different control key set for cutting and pasting depending on which application we're in.

    I'm an investigator. I followed a trail there.
    Q.Tell me what the trail was.
  • Miguel

    then uses this argument for justification of his claim that a 'better way' to write Unix applications would be to use a component architecture built on top of CORBA.

    ESR says:

    The only way to write complex software that won't fall on its face is to hold its global complexity down -- to build it out of simple pieces connected by well-defined interfaces, so that most problems are local and you can have some hope of fixing or optimizing a part without breaking the whole.

    These are opposing viewpoints?! It looks to me like Miguel and ESR are on the same page.


    ---
  • Look at Mozilla XPCOM for this. They are not that far away. Replace the usual RPC mechanism with SOAP, and you have something a bit bloated (not nearly as bad as CORBA, though) but completely open and understandable.

    I would love to do something like this.

  • ...there are some basic functions which I rely upon that don't exist in emacs.

    There are things to complain about in Emacs, but lack of features? You must not be looking hard enough. There are always at least 2 or 3 different implementations of any conceivable feature...

  • I was agreeing with Miguel up until the point where he said that the interface of MS Windows 95 was better than what we had in X at the time. I disagreed strongly with that, for one major reason: we had the window manager. That means when a program dies, it doesn't freeze to your screen and eat up real estate. You can still minimize it or move it out of the way. Windows *STILL* doesn't have this feature today.

    I get sick and tired of the way in Windows a slow app that starts up "hourglasses" the cursor, eats up all the screen real estate, and can't be dismissed until it wakes up enough to pay attention to my mouseclicks. Plus, the annoying focus policy of "focused windows must be on top" is really annoying to anyone who's experienced the luxury of typing into a shell window that's only partly exposed, while looking at the window on top of it (perhaps as a reference for what you are typing).

    The MS GUI has some big problems, and it scares me a bit that the maker of GNOME is someone who praises it and (apparently) wants to emulate it. Yes, the UNIX interface is old and crusty. But imitating Windows is not the way to fix it.
  • And then SCO sold it to The Open Group (motto - open wallet, give us money), which then decided that you can't call your system UNIX unless you license CDE, their cash cow.

    OT: Does anybody know the copyright license on the Lions book? If I wanted to make an OS based on the source in there, whose feet do I have to kiss/throw money at?
  • by Anonymous Coward
    He's trying to turn our wonderful Unix-like systems into Windows.

    I, for one, love the way applications are written now. He mentions how things like inetd and ssh have no code reuse. That's because they don't really do anything similar! libc does most of their work: socket(), connect(), select(), etc. There's no reason for these applications to share--and hence depend on--components.

    Stay the fuck away from our Unix.
  • by dublin ( 31215 ) on Wednesday August 09, 2000 @12:51PM (#867744) Homepage
    I'm far from a Microsoft programming expert, but I think I'm correct in stating that although these interfaces have indeed remained static up to now, this is due more to a design flaw than to deliberate design.

    One of the real weaknesses of the MS way is that (as it has been explained to me), there is no way to extend a COM interface - any new functionality requires creating a completely new interface that exists alongside and is (usually) a superset of the old one. Of course, you must still support the old interface for backward compatibility, but this isn't always done. (This really makes some sense, since the alternative is code bloat, but it breaks things, especially if app vendors "update" a MS-supplied DLL.)

    The DLL hell problem is quite serious, and has some significant and largely unknown side effects - here is one big reason why even W2K isn't up to enterprise duty: the DLL problem prevents running test and production versions of the same application simultaneously. Of course, this is something the Unix folks have handled forever simply by starting in another directory and/or tweaking the search path variables for executables and libraries. (For those of you MS folks who think it can be done, I have it on good authority (Microsoft's) that it cannot be. It is possible to tie a particular DLL version to a particular app, but there is no way of ensuring that you will get the right DLL if another version of the same DLL has already been loaded into memory by another application (or another version of the same application.))

    This sort of behaviour *MUST* be avoided at all costs!
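    To be concrete about the Unix side (directories hypothetical): the test build gets its own bin and lib trees via the search path variables, and the production copy never notices:

    PATH=/opt/app-test/bin:$PATH LD_LIBRARY_PATH=/opt/app-test/lib app &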

    As an aside, although I'm starting to be quite impressed with GNOME and its rate of improvement (although it's an inexcusable resource pig), I still wonder how much farther along we might be if this had all been done in Java, leveraging all those other components that are already built. (And yes, I realize the freedom issues of a year or two ago. I also think they're almost totally fixed and/or irrelevant today - there are a lot of alternative implementations out there.)

    It just pains me to see so much effort thrown at reinventing the wheel yet again, but without the benefit of portable binaries and the attendant ability to automatically and dynamically define the client/server(s) split point(s). This ability will eventually make Java or something like it the winner, since you can only pull that trick off with binary code that runs wherever you decide to send it...

    Miguel, if you read this, I'd be interested in your take on this latter point in particular. And keep up the good work, you may convert me yet... ;-)
  • What he is suggesting here strikes me as a call for a major revolution in the Open Source community. Creating a COM-ish (call it "GNU COMMIE", perhaps?) language for *nix and getting people to build apps for it could prove to be a huge organizational challenge.

    One question, though... Are there many *n?x people out there who feel this is something we really need? Or is it just an acute case of code envy on the part of Miguel?

  • by rlk ( 1089 ) on Wednesday August 09, 2000 @06:09AM (#867749)
    UNIX is BUILT on highly modular components with a high level interface to glue them together. It's just that the components are command-line programs, and the glue is lines of text (records) through pipes.

    Just because something's based on a command line with independent processes running in separate address spaces, and isn't object oriented, doesn't mean that it's not a modular, component-based architecture.
  • by EMN13 ( 11493 ) on Wednesday August 09, 2000 @08:07AM (#867754) Homepage
    I'm not exactly an emacs fan, but I find the basic functionality not that hard to use... In case you haven't given up yet, here's a lightning course.
    Basic typing and arrow keys work as expected.
    Then I use search a lot: C-s.
    undo is shift-underscore.
    replace is (OBVIOUSLY...) M-%
    cut can be achieved by C-space at one end and C-w (wipe) at the other. Copy is the same but M-w. Paste (yank) is C-y. Any M-y's after C-y's paste things that used to be cut/copied - very neat, as you never lose any text. C-k cuts the rest of the line.

    And of course quit is C-x C-c

    then there are some long commands I use a lot
    M-x enters the minibuffer for command typing

    line-number-mode shows line numbers
    sgml-mode, cc-mode, c-mode, html-mode etc go into those appropriate modes so you can use:
    font-lock-mode for syntax highlighting.

    Obviously there's much more, but that's 99.9% of what I use. Can be learned in 15 mins, really. Stick it to the side of your screen.

    Also useful is the tutorial C-h t
  • "The first step towards fixing them is admitting that some parts of *nix do suck and should be made not to suck. Here's the relevant quote:"

    All well and good, but pointing to closed source projects and claiming they don't reuse code between themselves detracts from his message.

    There is actually quite a lot of code reuse. Perhaps not as much as he would like to see, but let's start by looking at examples of open source products instead. Projects where code reuse is at least an option.

    There are also no bricks floating in the clouds. Duh.

  • First, the dark green of links is hard to distinguish from the black in some conditions. Then there's the black for the "clicked link", so after I popped it up earlier then closed it before reading it, the link effectively disappeared. Then there's the philosophy of putting the link to the talk/paper/article *anywhere* but the first time it appears as a noun. In this case, I had to scrub the text with my mouse until the end, where the cursor finally changed over "read".

    I swear, reading slashdot is like playing myst some days.
  • Hah, just the sort of West Undershirtian reply I'd expect. You are people too? Not from what I can tell. You sub-human visigoths are what give America a bad name abroad.

    Go back to your single-wide, artificial stucco, outhouse-using life. You pig.

    <SARCASM type=":)">

  • And, if you're interested in finding that Tim Berners-Lee book without violating the Amazon boycott [noamazon.com], you could try Fatbrain: Weaving the Web [fatbrain.com]
  • How do you pronounce Miguel's last name?
  • They're reinventing the wheel. The standard UNIX desktop environment, CDE, already has all this. ToolTalk provides a much more lightweight and nonintrusive distributed heterogeneous systems component architecture. ToolTalk is part of CDE. CDE is an excellent architecture for a complete desktop, and is already the standard. If you look into the technology, you'll see this. So basically, either these guys are completely ignorant of CDE's significance and capabilities, or they know about it and they just want their name on something (NIH).
  • by FascDot Killed My Pr ( 24021 ) on Wednesday August 09, 2000 @06:11AM (#867767)
    This is not a troll, this is an illustration of an opposing viewpoint.

    If I want to use some little GNOME program (say, Galeon), what do I need to do?

    Download the program.
    Figure out which libraries are needed for the GNOME stuff.
    Figure out which libraries are needed BY the GNOME stuff.
    Locate and download all those libraries.
    Find a place to put all those libraries.
    Debug all my existing applications because I just upgraded all my libraries (can you say "DLL hell"?)
    Occasionally: Answer "NO" to a program that wants to "associate files of type ABC with this program"

    I'm all for making things easy to learn. I am NOT in favor of making them just like Microsoft.
    --
  • I heard little about Miguel prior to 3 years ago which leads me to believe that he has never really worked with too many UNIX systems.

    Do me a favor. Go to a shell and try this:

    $ cd /usr/src/linux
    $ find . -name '*.[ch]' | xargs grep Miguel

    Take a long hard look at what you see. Then think about your statement.

    Perhaps you should let Miguel know about some of your concerns. You can easily reach him at miguel@kernel.org. Or miguel@gnu.org. :)

    --
    Ian Peters
  • by styopa ( 58097 ) <<ude.odaroloc> <ta> <rsllih>> on Wednesday August 09, 2000 @06:12AM (#867772) Homepage
    Posix was a term generated by the government in order to get around some restrictions. Basically, when the government is trying to set standards for what it will and will not purchase, it cannot use trademarked names, i.e. it cannot say "We want a system that is UNIX compliant," because UNIX is trademarked. What it can say is "We want a system that follows certain guidelines described in the Posix standard," and then make the Posix standard restrictive enough to limit the scope of what it buys to UNIX.

    Posix is not the generic term for UNIX, because even NT is Posix compliant (barely, but it is) and we all know that NT is not UNIX.

    As someone already mentioned in this thread, the UNIX trademark was sold by AT&T; after the anti-trust ruling, AT&T had some major restrictions on anything not related to long distance communications. AT&T sold it to Novell, who sold it to SCO. From what I have been told, SCO gave that trademark to some non-profit standards organization, or something along those lines.

    UNIX is not just a trademark but a standardization. In order for a product to legitimately be called UNIX, it must follow certain conventions.

    A more generic term is *nix, which refers to UNIX like. It covers UNIX, Linux, Minix, and several others.
  • Er... no.

    You've tried to find a gui for unix equivalent to Windows, which last time I checked doesn't exist, and when you failed you started ranting. If you want to keep using the Windows gui, stick to Windows. Really, lots of people use it for business and leisure; it certainly isn't that bad.

    But let's look closer at your beef with unix, and why in my opinion it is totally irrelevant. Has it ever occurred to you that the majority of people actually using unix on the desktop are happy with it, or they wouldn't be using it, and that they'd probably hate being forced to use Windows, or anything else for that matter, because it would be totally unfamiliar to them - just like unix was to you?

    Has it ever occurred to you that these users couldn't care less if there's a gui & set of apps functionally & visually equivalent to Windows, since they don't *need/want* that functionality?

    And before you step in and say that if more people are to start using unix on the desktop, such a gui must be created, has it ever occurred to you that such people might not give a damn whether more people start using it or not?

    Why should they care what other people use on their home box? I'm using something that I know, like, and am familiar with. If you don't like it, I honestly couldn't care less. I don't care what my users are using on our network either. They should be and are free to shoot their feet any way they please.

    Lately I've been using windowmaker + wterms + gtk (gtkstep rocks!) w/ some gtk apps (gimp, gtksee, xchat...) and netscape etc. During the last year or so I seem to have stabilized on a consistent setup on my laptop, workstation at home & work, etc.; before that I would experiment often, try new things etc. Thing is, it's been almost a year since I last did a dramatic change on the setup, apart from the odd upgrade here and there, the occasional tweak, and the odd background image change =P

    I can say that I finally found the ideal gui for myself - I'm very productive, everything is in the right place, it's readable (contrary to 99% of the themes on theme.org). Granted, this path to nirvana wasn't painless, but hey, there's no way an out-of-the-box gui can please all people. Accepting anything prepackaged is bound to be a compromise, including windows.

    Chances are, you're probably not gonna like my setup. My point is though, that I'm not gonna like your windows setup either. You miss some windows apps on unix, I'd miss some unix apps on windows.

    Oh, and for the curious, here are some shots of my gui nirvana:

    shot 1 [cc.duth.gr]
    shot 2 [cc.duth.gr]

  • by mcelrath ( 8027 ) on Wednesday August 09, 2000 @06:12AM (#867778) Homepage
    At one time I invested a lot of time and thought into an object-based operating system, and encountered many of the same problems that Bonobo (or any component-based architecture) will encounter or has encountered. In particular: How do you do versioning? Miguel speaks fondly of objects embedding themselves in each other, but this is a disaster if one component doesn't do what it's supposed to. As evidence, look at M$ Word, Excel, etc. You can embed them in each other, but you have to have the right versions or everything goes to hell. And if one object misbehaves it can destroy the entire document. Not only that, but in order to look at a simple document (with an embedded spreadsheet), you have to load a massive amount of software to render it. Is this necessarily desirable? This will be a much larger problem for open source than for closed, since the versioning is finer, incremental updates are widely available, and people will try to use software of subtly different version numbers.

    Is the interface definition used to determine "compatibility" of an object for a particular purpose? Can interfaces evolve? Can an object add functionality, but still be used by other, older objects for the older purposes? Must an evolving object conform to several interfaces (adding bloat), or can there be v2.0 of an interface, after the designer realizes there's a Better Way to do it?

    These are hard problems, and ones I was not able to answer to my satisfaction. Evidenced by their software, it seems that M$ has not either. Do you really want to embed an editable spreadsheet in a document, and deal with the bloat and crashes that will occur? Or is there a Better Way?

    Of course, I could probably answer all these questions by digging into the Bonobo and CORBA documentation, but stimulating discussion is good too.

    --Bob

  • by Cat Mara ( 211617 ) on Wednesday August 09, 2000 @06:13AM (#867779) Homepage

    The core of Miguel's argument is that the Unix world is in chaos because the designers of Unix have failed to form and enforce policy down the years. A good point.

    But let's look at the history of Unix here:

    • Invented in the '60s by two researchers as a skunkworks project;
    • Grew to maturity in the '70s in an academic environment;
    • Tinkered with in the '90s by scores of independently-minded hackers scattered all over the world.

    Now, Miguel, could you please tell me precisely how one is going to enforce policy on such a disparate user base, most of whom are going to react with instinctive loathing towards anybody attempting to throw their weight around, to say my thing is The Right Thing damnit, for whatever reason?

    Unix has survived precisely because there is no hallowed policy handed down from above. It evolves. It changes to meet new needs. Those components of Unix that are shared, like glibc, have developed through consensus and bitter experience. If you want to develop in an enforced-policy environment, well, there's Windows NT or VMS or OS/390.

    The Cluetrain has already left the station, Miguel. You on it or under it?

    --
    Cat Mara
    Love me, I'm a liberal!

  • >Sorry, but you've got no defense here. Balsa, Mutt, even emacs will read mail.

    And none of them handle multiple POP accounts. Mutt certainly isn't going to display that scan of the new baby in the message, or show a company logo at the top. I guess no one ever included pictures in snail mail or wrote on letterhead, either. But YOU have no use for these features, so they're useless, right?
  • Yeah, it would be nice if GNOME were built on some foundation other than C. The problem is that if it were, it would die. The bottom line is that programmers follow their short-term interests, and any solution has to be immediately useful in order to thrive. So you have to be able to program C with it.

    As for Java, remember that the GNOME guys are all rabid free software zealots and wouldn't dream of depending on a proprietary language like that :). Nor do they have the marketing team to force people to switch to another superior language (see what happened to ObjC, Eiffel etc). So C compatibility is really the only way to go.

  • SOAP to handle low-level? My god, if you thought cross-process CORBA calls were expensive to marshal, they're nothing compared to SOAP.

    XPCOM is nice, but it's in-process only and any attempt to use component server middleware would be a grotesque hack ala DCOM.
  • Probably, but that's just my point. This holy grail of code reuse amongst languages was already hard when all languages were procedural and relatively similar to each other. Now that we have object-oriented, functional, and dynamic languages, it becomes just downright impossible. People who use a language do so because it gives them what no other language does, so why would they give up all the cool stuff about their language to fit into some least common denominator VM? It just doesn't make sense.

    A Dick and a Bush .. You know somebody's gonna get screwed.

  • First, is anyone from GNOME talking to anyone from GNUstep? While I approve of the Bonobo architecture ideas, it looks to me like the two are doing very similar things. (And WindowMaker rocks my world.) Second - does anyone else think that an object-based OS doesn't really sound like UNIX? -_Quinn
  • Bloat != Flexibility. You wouldn't believe how bloated GNOME and KDE are. From my experience, BeOS has a fairly complete application framework, with a lot of consistency. It is also the lightest major OS available on x86 (with the exception of QNX Neutrino). (Out of *BSD, *NIX, Windows, etc.) The major problem is that GNOME and KDE suffer from huge feature bloat. GNOME implements a file system, uses CORBA (trying to kill a fly with a Buick there) for objects, and in general puts in a lot of stuff 99% of people don't need. Consistency does not have to be accompanied by huge bloat. It just has to be designed with some sense of what would be better left out.
  • Having total freedom tends to lead to anarchy; people generally would do things properly on their own, but not everyone can be trusted. What MacOS and Win32 try to do is what Unix does: basically, do what you want, but don't eat the system. In the end they don't restrict you any more than Unix (in most cases), but they do ASK that you follow guidelines as to your UI, and provide an interface to make that easier (the widgets, at least, are relatively consistent in Win32).

    One advantage for both newbies AND experienced admins is that across ALL applications, the UI is consistent. This is a major complaint I hear about *nix. There is little to no consistency in UI layout between applications. Hell, there isn't even a common clipboard that supports more than just plaintext.

    You aren't restricted to their toolkits, though. If you wanted, you could use alternate widget sets in both OSes (GTK is available for Win32). But then you run into the problem that your UI is somewhat out of place (i.e., it doesn't follow the user-selected color scheme at times), and you're duplicating code unnecessarily. What Win32 and MacOS are famous for, GUI-wise, is the consistency of the UI. Makes life easier for everyone, including end users and developers.

  • by jetson123 ( 13128 ) on Wednesday August 09, 2000 @04:22PM (#867834)
    Yes, UNIX lacks a component system that works well for GUI and end-user apps. Yes, at a very high level, it should give people the capabilities of COM on Windows.

    But I think neither COM nor CORBA are the answer. COM and CORBA are both rather complex systems because they are trying to patch up deficiencies in the underlying languages, C and C++. In an environment that encourages reuse, you should be able to just serialize and send objects to other components without lots of error-prone declarations. Such systems exist, and have existed for decades. But you simply can't build them reliably on top of C/C++.

    Ten years ago, Objective-C was a pragmatic and efficient answer to that problem. Objective-C is simpler than IDL and gives programmers more power. Today, the obvious answer would seem to be Java, although even it is still more complex than it probably ought to be.

    While I appreciate the short term utility of Gnome, I think in the long term, the effect of the Gnome project (and KDE, for that matter) is going to be harmful. It continues to encourage people to develop in and for an environment that is fundamentally not well suited to building software components and getting a lot of code reuse.

    If people want to do something relevant for end users in an industry-standard environment, I think they should contribute to Java-based desktop application efforts. The Gnome programmers are smart and capable: if even a fraction of the Gnome effort went into open source Java implementation (e.g., kaffe [kaffe.org]) and Java desktop apps (e.g., JFA [javalobby.org]), we'd soon have a good environment that would be much easier to extend with new components than a big C/CORBA system.

  • by RovingSlug ( 26517 ) on Wednesday August 09, 2000 @11:11AM (#867854)
    /etc must die.

    Miguel touches on the mess of configuring services. He proposes a solution for working with existing configuration files using a perl backend and GUI frontend. This is an admirable short term solution for a larger, significant problem.

    The inherent problem is that the standard unix /etc and /usr and /lib structure was spawned from the mind of a C programmer, in which global data is deemed an acceptable solution. /etc is a form of global data, which is fragile and inherently carries minimal context. It's fragile in that there's no standard interface for retrieving config properties, so no program other than the parent of the config file can reliably expect to parse it. And, without context, it's unclear which programs depend on a particular config file.

    In the spirit of the changes proposed by Miguel, I propose that applications and otherwise all packages be components even in the way they live in the system. Let every package have an arbitrary, unique directory, and let everything owned by a package live only in that directory. Let there be a common system component that exposes packages and their configuration on request. Let all packages find and expose other packages only through this component. Let the system package component internally record at most where to find other packages. Further configuration is stored in the package's own directory.

    There are a number of advantages to this model:
    1. First order installation becomes trivial. Just dump everything into a directory. The system package component will automatically find it.
    2. Complete uninstall becomes trivial. Just blow away the package's directory.
    3. Exposing a package's configuration is standardized, stable, and protected through the system package component.
    4. "Custom" packages and their configuration is trivially persistent across reinstalling the operating system.

    This is a problem that has been clumsily attacked by both RPMs and the MS Windows registry. Both tried to solve it by making prodigious use of massive amounts of internal data - data that is subject to unnecessary and unwanted management and corruption. With the proposed system package component, the small amount of internal data is easily reconstructed by scanning the file system. If you assert that packages access even their own configuration data through the system package component (much like the interface to a registry), then each package's configuration data can be stored in something standard and sane, like config.xml.
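
    As a toy illustration of that last point - reconstructing the package list by scanning the file system rather than trusting a registry - here is a hypothetical C sketch; the /packages root and everything about it is invented for the example:

    #include <dirent.h>
    #include <stdio.h>

    /* Hypothetical "scan instead of registry" toy: every subdirectory
     * of the packages root counts as one installed package.  None of
     * these paths are real conventions. */
    int main(void)
    {
        const char *root = "/packages";     /* invented location */
        DIR *d = opendir(root);
        struct dirent *e;

        if (d == NULL) { perror(root); return 1; }
        while ((e = readdir(d)) != NULL)
            if (e->d_name[0] != '.')
                printf("package: %s/%s\n", root, e->d_name);
        closedir(d);
        return 0;
    }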

    I code. If you want help, I'll give it.

    Down with global data! Down with /etc!

    - Cory
  • by alispguru ( 72689 ) <bob.bane@ m e . c om> on Wednesday August 09, 2000 @06:24AM (#867868) Journal
    ... storage management via reference count really, REALLY, REALLY sucks. It's a first-class recipe for memory/resource leaks whose badness has been enshrined in the Hacker's Dictionary [tuxedo.org] for Ghod's sake. Its only advantage is that it spreads the management overhead out so evenly over your whole system that you can't see how much memory and time it really takes up.

    Miguel de Icaza seems like an otherwise intelligent guy, so I have to assume that CORBA is forcing the use of reference counting here. If that's so, then CORBA sucks even worse than I thought.
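
    To make the classic objection concrete, here is a minimal C sketch - nothing to do with CORBA's actual machinery - of the reference cycle that pure reference counting can never reclaim:

    #include <stdlib.h>

    /* Two nodes that point at each other keep each other's count at 1
     * forever, so neither is freed even after both external refs drop. */
    typedef struct Node {
        int refs;
        struct Node *peer;
    } Node;

    static Node *node_new(void)      { Node *n = calloc(1, sizeof *n); n->refs = 1; return n; }
    static void  node_ref(Node *n)   { n->refs++; }
    static void  node_unref(Node *n)
    {
        if (--n->refs == 0) {
            if (n->peer) node_unref(n->peer);
            free(n);
        }
    }

    int main(void)
    {
        Node *a = node_new(), *b = node_new();
        a->peer = b; node_ref(b);    /* a -> b */
        b->peer = a; node_ref(a);    /* b -> a: a cycle */
        node_unref(a);               /* drops to 1, not 0 */
        node_unref(b);               /* drops to 1, not 0: both leak */
        return 0;
    }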

  • by MenTaLguY ( 5483 ) on Wednesday August 09, 2000 @06:24AM (#867872) Homepage
    If you spent much time looking at .NET, you'd realize that it's essentially built on top of COM itself.

    You need something like DCOM implemented first before you can even think of implementing something like .NET, let alone transcending it.
  • by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Wednesday August 09, 2000 @06:36AM (#867873)
    CORBA has interface versioning. As you said, you could have prevented this mess by just reading the spec.
  • by sugarescent ( 30924 ) on Wednesday August 09, 2000 @06:37AM (#867874) Homepage

    Miguel's article is spot on. I love everything about Unix except the fact that Component Based programming is so underused. If there is only one thing Microsoft has done right, it is the way they have developed and pushed COM. With COM, I can write a piece of software that performs a task (be it a Widget or piece of middleware) and COMify it.

    Except that GNOME is going about this entirely the wrong way. They're writing a lot of useful stuff (the canvas, html components, etc.), except they can't figure out why somebody would want to use this stuff outside of GNOME. GTK+ could benefit from the standard inclusion of some of these things, and it's like fighting for a firstborn to move them out of GNOME into GTK+.

    Example: In the previous article about Miguel speaking (sorry, no reference), one poster mentioned how he had gotten flamed for taking the GNOME html component and removing the GNOME dependencies. Clearly, an html component that everybody can use is a good thing. Requiring GNOME to use this html component is not a good thing.

    Write the reusable software at the right level; don't GNOMEify everything in the name of "software reuse".

    -Nathan

  • by rho ( 6063 ) on Wednesday August 09, 2000 @06:38AM (#867876) Journal

    Wow, it's always tough when a true Indian wanders off the reservation!

    Various people like to criticize Microsoft for producing "bloated and monolithic applications". Before we criticize Microsoft, let's take a look at the end user applications that we have on Unix outside of GNOME: Netscape, GhostView, XDVI, Acrobat, Mathematica, Maple, Purify, FrameMaker, Star Office.

    The only common denominator on those applications is libc and Xlib. Some share Motif, but that is about the extent that these applications are sharing any code. And of course, the Unix "components" play no role in the equation: they are basically never used (I can only think of the printer spooler daemon being used, and even in this case: it is not even compatible across operating systems).

    Now, let's look at Microsoft "bloated and monolithic applications" again: let's consider "Internet Explorer".

    Internet Explorer is not a single executable as you might think. Internet Explorer is built of a collection of COM components. These components are developed individually, debugged individually, exported individually, and eventually, all of them create the illusion of an integrated application.

    Well, he has a point. Unix should be the first OS to use modularized components with rampant code-reuse, not one of the last. Remember part of the Hacker Ethic: do not re-invent the wheel.

    Imagine! Maybe Microsoft does do some things very well! (I know IE has much better support of CSS than Netscape does -- not to beat a dead horse, but Mozilla isn't looking all that great either on several fronts). Could it be that this modularity (even done as slipshod as it is on Microsoft OSes) is part of what encourages people to write software for Microsoft? Ease of development? (I'm not a True Programmer, so <TAKE type="salt" size="grain">

    I wish the best for Helixcode - but before you get carried away with making it "easy to use", try to get some UI experts in there to help design things. Just because it has a button doesn't mean it's easy to use. Where the button is placed is just as important as having the button.

  • by Carnage4Life ( 106069 ) on Wednesday August 09, 2000 @06:25AM (#867882) Homepage Journal
    That was a very, very good article by Miguel. Unfortunately the first few posts I have read are from posters who obviously didn't read it and instead are making personal attacks at Miguel.

    Miguel's article is spot on. I love everything about Unix except the fact that Component Based programming is so underused. If there is only one thing Microsoft has done right, it is the way they have developed and pushed COM. With COM, I can write a piece of software that performs a task (be it a Widget or piece of middleware) and COMify it.
    Once this is done, anyone can use it regardless of what language it was written in; fast XML parsers can be written in C++ and used from Javascript or VB. This way developers of business apps do not have to choose between a.) putting up with a slow app or b.) writing one themselves with all the attendant bugs therein, especially if they have little C++/C skill; also, they can get on with actually creating their application instead of worrying about whether they malloc()ed enough space for their char*'s.

    Lots of *nix people believe this implies laziness but fail to realize that reinventing the wheel dozens of times over is folly.

    Example I:
    I am currently designing and implementing a project management system on Windows(TM) for a small business with a few of my friends. Two of them are *nix hackers, and they balked at using an XML based protocol to transfer data between the client and server. Now, instead of simply designing our protocol and then using one of the dozens of available parsers [xml.com], they decided that we should invent our own binary protocol and write our own parser for it.

    Our project involves code written in both C++ and Javascript/ASP. We could have used a single COM based parser to consistently interact with the data from both the C++ and the Javascript code, but instead it's been two weeks and counting and our homegrown parser is still being written, tested and debugged. In my opinion this is nothing but a waste of time. When I ask them why not just use XML and an existing parser, their replies boil down to "It just feels wrong." The chance that a bug or two will slip through testing, or that there is a buffer overflow in our parser, is not small, considering that most early versions of parsers written in C++ have a few bugs like this hidden somewhere. In this situation, component based programming would have allowed us to focus on building and designing our actual application instead of spending time and energy on a tangential one.

    Example II:
    At work, an MBA intern asked me if it was possible to create an application that housed a search engine that searched a database of MBA students based on criteria like concentration, work experience, graduation date, etc., and then displayed results with links to their resumes in MSFT Word(TM) or HTML format, which could be stored on a CD to give recruiters at career fairs. Their first attempt had been to use VB and Access, which turned out to be a disaster because of DLL Hell [desaware.com] based issues. My simple solution was for them to store all the students in an XML file and to write a Javascript page that used the COM based XML parser (written in C++) to perform the search. Writing this page took less than 2 hours.

    Now they have this search functionality they can press on a CD and give out at career fairs which any recruiter can view without needing more than MSIE 4.0 or greater.
    Without component based programming, their request would have been impossible to fill in their time frame, and it would also have required the recruiters' machines to fulfill a stricter set of requirements (like having a webserver installed, or they'd have to install an app).


    In conclusion, my question is "Why has it taken so long for a major *nix push towards component based technology?". After all, we've had CORBA for almost a decade [omg.org], but there hasn't been that big a push towards components. Frankly, I am eagerly awaiting MSFT's .NET for one reason only... cross-language inheritance. The thought that my C++ components can be inherited by my Perl, Java or Javascript objects makes me extremely *CENSORED*.
    FOOD FOR THOUGHT
  • by gempabumi ( 181507 ) on Wednesday August 09, 2000 @06:29AM (#867890) Homepage
    Linux on the server: happy happy. Wouldn't choose otherwise.

    Linux on the desktop: does indeed SUCK.

    I've been using Unix in a server environment since 1992. Never had any major problems. On the desktop, I started with Mac, fiddled with NeXT, tried Sun and DEC workstations, and eventually moved to M$ Windows (for gaming, nothing else compares).

    All of those OS's have their strengths and weaknesses. And, in hope of creating a better world, last week I bought an extra hard drive and installed Linux (RedHat 6.2, am told Debian is better but no CD available) on it to play around.

    In general, a less than fulfilling experience. Here are my observations:

    1. I have to choose a desktop environment? GNOME or KDE? I'm supposed to know which has better apps? Great idea - split a limited developer pool between two environments - so instead of getting one set of applications that work well, we get two sets of applications that are in perpetual beta.

    2. Web Browser. At no time while using a PC do I have less than 4 or 5 browser windows open. Trying to work without a functional browser is difficult, if not impossible. I just don't enjoy opening NN and seeing my available memory disappear. (Last week, Mozilla was declared dead - how could this happen when it hasn't even been born yet?)

    3. Mail Client. I spent days looking for a mail client for GNOME which supported multiple POP mailboxes. I found a few, but they ended up in wild-goose chases for libraries to replace those which were outdated, too new, etc. Never actually got anything to compile. Heard there's a good mail client for KDE, which means I made the wrong choice back at #1.

    4. Editor. Uhh, I use vi and emacs when there is absolutely, positively, nothing else available. Don't get me wrong, I first learned emacs over 8 years ago. But there are some basic functions which I rely upon that don't exist in emacs. Give me something like HomeSite on a linux box and you've got a convert.

    5. Word and Excel. Regardless of how much other Microsoft software sucks, these two products are hard to beat. Also, they are practically industry standards. If you work in any office environment, you'll be sure to get these sent to you all the time. Of course, you can read them from your linux box - but if you want to edit them, it's lilo:dos yet again.

    I use my computer to work. It is a tool which I need to function efficiently. I played with my new Linux Desktop for a few days, then when I had real work to do, I rebooted back to DOS. A real disappointment.

    I know, it's open source, help and code it instead of complaining. I do code open source software, but for web applications. I don't code for the desktop. To grow, linux needs the desktop. To win on the desktop, Linux needs the killer apps - at least a browser, a good mail client, and an editor.

    To get there, I'll argue that Linux needs fewer developers rather than more. I'm tired of seeing 2000 new apps which are v.0.0.0.1beta0.0.5-unstable. The paradigm of "release early and often" needs to be rethought. Release when you have a functioning application. If you have an idea for a new app, look around to see if anything else is out there first. If someone is already working on the same application, join them rather than creating a new tarball which will never get out of beta.

    Open Source can and will take over. But it won't do so without the Desktop. And the desktop is all about applications.
  • by AndroSyn ( 89960 ) on Wednesday August 09, 2000 @06:41AM (#867892) Homepage
    It seems like a lot of people are missing Miguel's point. He is not saying that you *must* do it this way, he is just saying that this is just one way of doing it, a way that he feels is better.

    What I see as one of the points here is that a lot of people waste a lot of time writing support code for their applications because they are not reusing code. How this hurts us is that this time could be spent more effectively working on the logic of the application, rather than on rewriting yet another html parser or whatever.

    I know that on a few pieces of software I have written I ended up using glib, because there are just so many nifty functions that programmers are constantly rewriting. And I can see his point after using what is still a fairly low-level interface.
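
    For instance (a minimal sketch using glib's GList - the kind of helper that would otherwise become the thousandth hand-rolled linked list; the list contents are just filler):

    #include <stdio.h>
    #include <glib.h>

    /* glib already provides lists, hash tables, string helpers etc.,
     * so each program doesn't have to grow its own. */
    int main(void)
    {
        GList *langs = NULL;
        GList *l;

        langs = g_list_append(langs, "C");
        langs = g_list_append(langs, "Perl");
        langs = g_list_append(langs, "Python");

        for (l = langs; l != NULL; l = l->next)
            printf("%s\n", (char *) l->data);

        g_list_free(langs);
        return 0;
    }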

    Also, a lot of people are saying "well, we have pipes and that's all we'll ever need", which is just silly. I mean, yes, pipes are neat, but god damn, how do you really expect to write anything complex and have it be relatively fast when you're shoving data through pipes and firing off a bunch of new processes via fork()?

    Modularity is really the key to an extensible OS. Linux to some extent is modular, but not really. Take a look at the HURD, for example: from the design viewpoint, it's a beautiful kernel. Sure, microkernels are a bit slower than a monolithic kernel at this point, but what does a, say, 3% performance hit matter?

    Code sharing and reuse is really what open source programs should be about. There should be common APIs and interfaces. Let's let go of some of the baggage that has accumulated with us over the years and stop trying to be a UNIX workalike and do something innovative. Linux and GNU are really the standards that the rest of the Unix community is trying to live up to now; we should show a bit of leadership here.

  • by miguel ( 7116 ) on Wednesday August 09, 2000 @07:19AM (#867927) Homepage
    There are well known ways of working around the problem you describe. Basically, you want to avoid changing interfaces once they have been published.

    For instance, the published interfaces in Microsoft Windows have not changed since they were published in the first version of OLE 2.0.

    When Microsoft has expanded the functionality, they have created new interfaces or new methods, and they have retained the behaviour and previous interfaces.
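
    (A schematic C illustration of the pattern just described - not actual OLE code: the published v1 vtable is frozen, and new functionality ships as a new, superset interface alongside it.)

    #include <stdio.h>

    /* Schematic sketch, not real OLE: the v1 interface never changes
     * after publication; v2 adds methods in a new vtable whose layout
     * begins with the old one, so v1 callers keep working. */
    typedef struct { void (*frobnicate)(void); } IFooVtbl;            /* v1, frozen */
    typedef struct { void (*frobnicate)(void);
                     void (*frobnicate_ex)(int); } IFoo2Vtbl;         /* v2, superset */

    static void frob(void)        { puts("frobnicate"); }
    static void frob_ex(int flag) { printf("frobnicate_ex(%d)\n", flag); }

    static IFoo2Vtbl foo2 = { frob, frob_ex };

    int main(void)
    {
        IFooVtbl *v1 = (IFooVtbl *) &foo2;   /* old clients see a valid v1 table */
        v1->frobnicate();                    /* old method, unchanged slot */
        foo2.frobnicate_ex(42);              /* new method via the new interface */
        return 0;
    }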

    The DLL problems in Microsoft applications are of a different nature, and cannot be attributed to faults in their component system. It is a separate problem - still a problem for end users, but a separate one.

    Miguel.
  • by graniteMonkey ( 87619 ) on Wednesday August 09, 2000 @06:52AM (#867933)
    Disclaimer: The following is not a whole-hearted endorsement of Linux et al. If you are easily offended by constructive criticism, please disregard this posting. Moderators: Karma exists beyond Slashdot, and it'll come to getcha.

    First of all, to those of you who are criticizing Miguel by saying "Miguel is wrong because being Object Oriented isn't necessary" or "Miguel is wrong because XML isn't necessary", I hope you're keeping this in mind: Miguel's comments can be broken down into two parts ("You know, there are two kinds of people in the world..." :) )

    1) We should be thinking about ways in which the UNIX philosophy is deficient, rather than continually reassuring ourselves that it's all okay. Look at it pragmatically: Who's got the biggest market penetration? Whose system is easier for the beginner to learn to program in, ignoring cost?

    Okay, these are total flamebait questions, so please, please don't respond to these in particular. Use your imagination, and think of some ways in which Windows is better than UNIX, rather than touting all the advantages of your pet operating system. Otherwise, you're just brainwashing yourselves with your own marketing.

    The question here isn't which way we should take things, it's how we should think about them. If you want to respond to this half of the question, address what the community should expect of UNIX, not how it should be done.

    2) UNIX needs standards, reusability, etc. This is a set of recommendations to the community about where things should go specifically. If you agree with Miguel's motivations in the first part, then read on. His argument is based on looking at "the competition", and I can give you a concrete example.

    He mentions IE, and how it's actually made up of a large collection of components rather than being a monolithic application. True. If I want IE's rendering capabilities in my application and I'm using something like Delphi (example because I actually had to do this once), hell, I'll just draw myself a window and drop the browser component into it. You can argue about whether it's bloated code or not, but the end result is that I didn't have to reinvent the wheel to get something pretty momentous done. Further, I can now focus on doing something with this browser component that hasn't been done before.

    For those of you who aren't interested in looking into it, Microsoft is working on something called dotNet. There's a lot of argument about what it all is, and whether it's useful, a product of the devil, etc. The thing that excited me about it is that components from one language can be used in another. And here's where I must admit that I didn't read the details about Bonobo. But my point is that Microsoft is going to have a fully operational Death Star of interoperability between languages pretty danged soon. Miguel rattles off a list of languages:


    C++ objects live within the C++ universe.
    Perl objects live within the Perl universe.
    Java objects live within the Java universe.
    Python objects live in the Python universe.
    Object Pascal objects live in the Object Pascal universe.
    Gtk objects live in the C/Gtk universe.


    And this is exactly what isn't going to be the case with dotNet.

    I know most of you have lost interest by now, and are happily moderating me down, flaming me, etc., but I have an appeal to those serious programmers and geeks amongst you who bore with me this far. It doesn't matter who came up with it, but isn't that just a bitchin' cool idea???

    As you know, everyone who writes about their new features admits that you can already do the same thing in plain old C, but you also know how the rest of it goes.

    By now, I've totally lost track of any other points I was going to make, if any. Please fill in the blanks with anything relevant you see:
