Miguel Says Unix Sucks!
alessio writes: "On the front page of Linux Weekly News there is a report from the Ottawa Linux Symposium where the adorable Miguel de Icaza supposedly states that Unix has been built wrong from the ground up." It's actually a pretty cool interview, and as always, Miguel makes his point without any candy coating! The major point is the lack of reusable code between major applications (a major problem that both KDE and GNOME have been striving to fix for some time now).
Re:Unix was there first. (Score:2)
Sounds like Mac OS-X (Score:4)
All of these resources are shared by all applications, where possible, to conserve resources. Most of them are very easy to use and many require no coding to set up. For instance, to add retractable drawers to the sides of your windows, you just drag-connect lines from the drawer instance to the window instance, to the view to be contained inside the drawer, and a line from the button/actuator-widget to the drawer instance, and boom, you are in business. No coding...
Apple certainly has the best reputation for this. All of these details are specified in a UI guidelines document, and standard menu configurations are built into InterfaceBuilder. X has a nice built-in software installer: when you install software it leaves a receipt you can click on to uninstall or just compress it.
X has a very powerful "Bundle" system (from NeXT). A bundle is a directory containing various subdirectories that hold application resources (binaries, source, headers, documentation, images, sounds, strings to be displayed to the user, UIs, etc.). Localizable resources (like strings, images, and UIs) are kept in separate directories for the region/language they are specific to. The Bundle class automatically fetches the proper localized resource based on the user's localization preferences. The application itself is a bundle, and there are bundles known as "Frameworks" for shared libraries. Frameworks can contain anything (code, headers, source, docs, images, sounds, etc.), are stored together, and are versioned: two or more different versions coexist peacefully, so there are no more problems of a newly installed app stomping an incompatible version on top of an existing one.
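To make the localization fallback concrete, here's a minimal sketch in C of the lookup idea (my illustration, not Apple's actual Bundle API; the .lproj layout is real, the helper function and paths are made up):

    /* Minimal sketch of bundle-style localized resource lookup.
     * NOT Apple's Bundle API -- just the fallback idea. */
    #include <stdio.h>
    #include <unistd.h>

    /* Return the first localized variant of `resource` that exists,
     * trying the user's preferred languages in order. */
    static const char *find_localized(const char *bundle, const char *resource,
                                      const char **langs, int nlangs,
                                      char *buf, size_t buflen)
    {
        for (int i = 0; i < nlangs; i++) {
            snprintf(buf, buflen, "%s/Resources/%s.lproj/%s",
                     bundle, langs[i], resource);
            if (access(buf, R_OK) == 0)   /* does this variant exist? */
                return buf;
        }
        /* Nothing matched: fall back to the development language. */
        snprintf(buf, buflen, "%s/Resources/English.lproj/%s", bundle, resource);
        return buf;
    }

    int main(void)
    {
        const char *prefs[] = { "French", "German" };  /* user's preference order */
        char path[512];
        puts(find_localized("MyApp.app", "Greeting.strings", prefs, 2,
                            path, sizeof path));
        return 0;
    }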
No API is needed for putting icons into the dock since the user can simply drag the application icon there himself; no having to drag icons into some obscure folder deep inside the system hierarchy.
Oh yeah, it's all running on BSD Unix with a Mach kernel. The sources of which are available here [apple.com].
So you see, Unix can be made into a modern operating environment for all users, with a consistent user interface, and an API that is a joy to use for developers. However, they didn't build it on X and you'll probably have to buy a Mac to get it for now.
Burris
Re:Software that doesn't suck (Score:2)
---
Re:Do something about it (Score:2)
Your concerns would be more substantial if you'd stop confusing (as I've seen you do before) "open source" with "bazaar-style development", and "bazaar-style development" with "bad engineering".
They are each entirely orthogonal. Just as proprietary development, as such, never assures good engineering, neither open-source nor bazaar-style development, specifically, assures bad engineering.
After all, several highly visible open-source projects were developed cathedral-style, and are considered very good in terms of quality (perhaps "category-beaters"): GNU Emacs and GCC come to mind.
I used the "cathedral" approach to develop g77, also to assure quality, even before I understood it as a "cathedral" model, and after I did, I often resisted "bazaar-style" attempts to "improve" it when I felt they didn't, or wouldn't, meet the quality criteria I tried to uphold for it. (Failures being due to my own personal failings, at least mostly, not the fact that g77 was open-sourced! See my GNU Fortran (g95) page [std.com] for more info.)
Certainly I agree with your implication that much open-source/bazaar-developed software, including some widely celebrated, is developed to a lower standard of engineering quality than should be the case for products of their ilk.
But the fault is not that they're open source, or developed bazaar-style. Those are "features" that allow many more developers to participate, with less up-front investment overall, for better or worse (depending on the quality of the developers, and especially their "developments", e.g. patches, as allowed by the project maintainers).
As far as these three concepts being entirely orthogonal, what I said above is not quite true...
That is, without the public being able to view, modify, and try out the source code for a public software product (whether it's a distribution, like Windows or Linux, or the software running a public web site like slashdot.org or etrade.com), I don't see how anyone can claim their public quality assurance can reach the same high level that it (theoretically) could if it was open-sourced.
Of course, opponents of open-sourcing have long argued that without up-front investments of capital, quality is not affordable.
That may be true, but IMO the more pertinent issue is that only via open-sourcing can everyone determine for themselves whether the up-front investments that have been made have indeed resulted in a product of sufficient quality.
So it often amuses me to see people like yourself essentially prefer (as you appear to do) to blindly trust some corporation to produce quality software on the theory that they had the money to do it, instead of insisting on the product being open-sourced so you don't have to trust it: you can look at the code instead, discuss it with friends, muck around with it to see how robust, extensible, stable, etc. it is, and so on.
Because, in the end, as much as you liked VAX/VMS, in the short time I worked on that type of system, it crashed many more times than Linux has ever crashed on me (about 10 years using Linux versus maybe 3 using VAX/VMS).
And when I found a bug in the Linux kernel (long ago), I reported it and it got fixed very quickly. (Probably because I provided a patch.) I found it only because I happened to be looking through the source code, not because I actually ran into the bug! (It involved fouling up group-protections of files in one place, IIRC.)
But when I ran into a bug in VMS, it took a long time to demonstrate it sufficiently as a bug to my management so I could view the source on microfiche, track it down, and then send it to the Black Hole of DEC. To my knowledge, it was never fixed. (It involved random hangs while doing straightforward, but asynchronous, I/O to normal text files. That got me much better performance on a text-to-PostScript converter I'd written, but I had to back it down to using synch I/O, thanks to the bug.)
Had VMS been open-sourced, not only would I have been more easily able to find and fix that bug and get it out to others...
Whereas those shops that committed to Unix in the early '70s on the basis that it was lean, mean, and came with source code are still able to preserve a substantial portion of that investment by using *BSD and Linux systems, which support a dizzying array of hardware (CPUs and other components), allowing people to pick the hardware that best suits their present needs.
So, open source is not a panacea, and neither is bazaar-style development (despite ESR's tendency to write as if it is), but they aren't inherently going to do anything but improve quality over the long run, since quality includes viability of investment in technologies over time as a component.
Unix works like a free market (Score:3)
Consider that Unix and Open Source development work like a free market: while there is a lot of variety, and while that causes problems, the benefit is that people do want simpler solutions. But instead of 'staying with' some simpler solution imposed upon them, people choose the best that is available (e.g. Red Hat) and run with it, then so does everyone else, and the bad solutions die.
The interesting comment about people developing window manager skins reflects this: people get fed up with too many window managers, and start to develop skins. Then it becomes possible to have any 'style' of window manager sitting above a 'core' window manager: so then everyone starts to choose the best 'core' window manager. At the end of the day, you have the best solution: an excellent 'core' window manager, and an excellent freedom of different 'styles'.
The free market has decided.
Re:Lack of a graphics design (Score:2)
Re:I can't decide whether to laugh or be afraid. (Score:2)
So what's he offering to do? Start "deciding policy" for us? Is this a thinly veiled excuse for heavy-handed GNOMification of existing apps like xscreensaver, rather than the more sensible solution of letting them be visible through GNOME?
No, you miss the point. It's not about Gnome deciding policy. It's about creating libraries and APIs to hack against for common tasks such as printing, image manipulation, font rendering, the toolkit, etc. (If you want choice, there will always be choice; Miguel or anyone else in the Gnome camp will tell you that.) So, by *choice*, if you want, you can use a set of applications that have some commonality.
Miguel is an open admirer of how Microsoft does software development.
Someone please tell me this is a belated April Fools joke!
It's not a joke, but it might not mean what you think it means. I've had the opportunity to talk with Miguel (while waiting for Phantom Menace to start) about his views on software design, and particularly how Microsoft does it. What he admires about Microsoft is their reuse of code through a set of common libraries and their component architecture. Granted, if the code is unstable, you're going to have a lot of unstable applications. That's where he doesn't admire Microsoft. So he wants to pick the good things that Microsoft is doing and improve on them by writing stable, quality code. Check out the code to gnumeric some time if you want to see some beautiful code.
----
Re:DLL hell (Score:2)
I'd recommend designed APIs over evolved ones... I hate to kick UNIX while it is down, but the new QNX/Neutrino system APIs are very well designed. You don't know what you are missing until you see them. There are about 30 or so system calls total. For example, all timer events are encapsulated in one system call. They have implemented their POSIX, BSD, etc. interfaces as a compatibility layer on top of these calls. And I'm sure you are all sick of hearing about message-passing microkernels, but doing IPC with one system call that reads like "get these buffers to that process and bring me its response" is soooo nice.
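For a rough idea of what that one-call IPC looks like, here's a sketch of QNX Neutrino's Send/Receive/Reply primitives as I remember them from the docs; treat the details as approximate, and it obviously only builds against QNX's own headers:

    /* Sketch of QNX Neutrino send/receive/reply IPC -- approximate,
     * QNX-only (needs <sys/neutrino.h>). */
    #include <sys/neutrino.h>
    #include <sys/types.h>
    #include <stdio.h>

    /* Server: create a channel and answer one request. */
    void serve(void) {
        char msg[64], reply[64];
        int chid  = ChannelCreate(0);                        /* rendezvous point */
        int rcvid = MsgReceive(chid, msg, sizeof msg, NULL); /* block for a request */
        snprintf(reply, sizeof reply, "got: %s", msg);
        MsgReply(rcvid, 0, reply, sizeof reply);             /* unblock the sender */
    }

    /* Client: one MsgSend() moves the request out AND the reply back. */
    void request(pid_t srv_pid, int chid) {
        char reply[64];
        int coid = ConnectAttach(0, srv_pid, chid, 0, 0);
        MsgSend(coid, "hello", 6, reply, sizeof reply);
        printf("%s\n", reply);
    }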
What I'm wondering is when someone will write a module for Linux that makes a really clean, well-designed system API like this available. Just because Linux provides UNIX calls doesn't make it a legacy design under the hood. It just supports a legacy system.
One last point: UNIX support is very very good. The existing free software code base is invaluable, and it works just fine as it is. There is no point dumping support for it, considering the limited expense of providing a UNIX system call emulation vs. the thousands of "man" years of work that has gone into it.
Unix is by far the best (Score:2)
Unix, while having no component model, has things going for it that outweigh reuse. The list is ridiculously long but "free" is at the top of my list.
And, surprise, Unix has a component model now -- in fact, two of them. They are called JavaBeans and Enterprise JavaBeans (EJB). One for CORBA is in the works. Bye-bye Microsoft.
Re:Wrong. (Score:2)
Actually he has gone beyond that and is upset about other things I haven't been troubled by yet. Things that may actually be problems. They are also things he is working on fixing, so I'm content to let him complain. When he is done we will see if he was right.
Yes, that makes things simpler in the Windows world (even if the "standard" APIs change every few years, or less). Lack of stability (as in frequent crashes) makes it harder. Gain a little, lose a little. Having done very little Windows development I don't have a very informed opinion on which is nicer. My little side trip into it felt unpleasant, but it may have gotten better if I had stuck with it.
I seem to make quite enough money writing server applications for Unix systems, and the support issues are less of a pain there too.
The "desktop" stuff I do are all hobby projects. Either something I do because I want to see how easy it is to do (like streaming audio over a normal HTTP chanel), or to learn something I can't "afford" to take the risk for in a comercial project (like the STL), or because it just plain looks fun (xtank -- which had it's own threading system).
Also, I have a pet theory about desktop tools. The Unix market may be pretty small, but it is also wide open. If I had a new structured drawing program, I might do better trying to sell it into the Unix market, where there isn't a market leader, rather than going head to head with Visio on Windows. But that is just a theory. I'll stick to server apps. I'm good at it, and they pay well.
Re:ROTFL (Score:2)
Within the next few years, Linux will dominate every aspect of computing, because it makes sense. A royalty-free OS that levels the playing field. It would be stupid NOT to adopt it. So WHAT if the desktop is the last thing to fall to us, rather than the first?
Only half the story (Score:3)
At Usenix, his talk started from the premise that "the kernel sucks" but only as a springboard to cover extensively the approach, the philosophy really, of moving away from kernel-centric development to a component focus.
Now as a relative old schooler in all this I applaud the notion that every generation needs to overthrow the excesses and cruft of the previous one, so to that extent Miguel's to-the-barricades rhetoric is welcome. Unix, and Linux, have become a sprawling pasted-together mess, which is evident if you compare, say, Aeleen Frisch's first system admin book in 1991 with what there is now.
And some of the principles Microsoft has embraced in software architecture may also be applauded. Although I hasten to add that their implementation of those foundations has broken every conceivable rule of software architecture/engineering, not to mention common sense. Nevertheless, I think Miguel's willingness to learn from good principles wherever they may be found is also welcome.
But where I part ways is with his proposed grand solution space, which basically amounts to: CORBA.
CORBA is yet another sprawling, somewhat incoherent and definitely incomplete attempt to Make the World Behave Like We Say It Should Or We'll Stamp Our Little Feet. I have long felt that the dependence on CORBA, not merely the availability, is a millstone around the neck of both GNOME and KDE.
I've read quite a bit of CORBA and component model advocacy, and it reminds me all too much of IBM-think circa the mid-1980s. "You will use SNA because it's good for you. Here, just implement this spec that comes on four bookshelves of binders."
The brilliance of the UNIX philosophy is scalability built on self-evolving systems, not based on universal frameworks that try to provide order through mapping. As has often been noted, the map is not the territory. And it should not be.
But mapping (metaphorically, if not in its strict technical sense) is what component architectures are all about.
DLL Hell was just the first phase of this. You can argue that DLLs are not "components" by the standard definition, but they are component-like and function in many ways as if they were in such a framework. COM rationalizes and makes DLL World somewhat more orthogonal to the component model, and that is the positive sense that Miguel seems to respond to. I can see some merit to the reuse and iterability inherent in this approach.
But that is entirely a developer-centric approach, and this is where I think Miguel's vision will be sorely tested, and emerge as at best a mixed bag: one part fixing some rather sticky issues for GUI development and near-field reusability, one part creating a whole new layer of complexity and frustration for the user, the system planner and the sysadmin.
Components are not static; they evolve, and they evolve both in form and function. In other words, wish as much as you might for a static API for a given component (as Miguel sort of did during his Usenix speech), but it's not gonna happen. Both what the component does and what hooks and appearances it presents to the world are destined to -- in fact must -- change over time. That's the lesson we learned as soon as Microsoft put out the first revision to the first DLL.
It is simply impossible to imagine some Component World Authority that has the job of telling every component architect and developer: this is your feature set, this is your quest: go forth (or go C++ or go Java) and Make It So, and Thus Shall It Always Be! Nuh uh, not gonna happen.
The advantage of reusable libraries, as with C compilers, is that while some variation in them may be permitted (whether this is a good thing or not is circumstance-dependent), you are basically faced with a binary result: either the code compiles or it doesn't. Once it does, you have a static binary that will continue to work as long as most of the underlying OS stuff remains the same.
In Component World, you are now at the mercy of component dependence every single time you run code. And since the component framework is both (1) more dynamic and (2) more distributed than we are used to with our desktop computers these days, this is going to pose major problems. Some of these have already been noted by the GNOME skeptics who have posted here (including at least one well known GNOME developer).
The problems are inherent in component architecture: compatibility, resilience, security. This is less of an issue when all the components reside in devices connected to one backplane, usually inside one metal box. But with distributed apps and, probably more importantly, mobile apps, this is increasingly going to pose problems.
Remember when you installed some random program in Win 9x and it changed a DLL so that your email didn't work any more? At least you have the ability to reinstall DLLs/programs/the system itself (depending on the severity) to deal with the compatibility problem.
W2K supposedly deals with this by creating its own little mini-World Component Authority backed by an internal database subject to all the usual database reliability and performance issues (plus of course it's closed source). All this does is allow a bigger mess to be made at some point.
But what about this? You're running a nice little cell phone/PIM gadget that is built on GNOME and CORBA, and suddenly you can't get your email any more because some schmucko at a service center upgraded to the latest/spiffiest version of a component your handheld relies on via its mobile link to do its work.
Welcome to Component Hell.
As a non-developer and mere observer of the passing landscape, I would be happy to have someone come along and explain exactly why I am all wet. But for the moment, I am persuaded by Miguel's disdain for the suckiness of the kernel, and completely unpersuaded why components, as instantiated by CORBA and GNOME, are a universal solution rather than a local fix.
-------
Re:Why does every app menu start with "File"? (Score:2)
Miguel Says Unix Sucks! (Score:3)
Miguel Says Unix Sucks! -- SlashDot News Headline
Now if I posted to SlashDot that Unix Sucks! I would get -1 troll....
Not saying I would knock Unix, though.
Re:Less than what?... (Score:4)
Another one:
There's a town in MI outside Detroit called Novi. Everyone uses emacs there.
Hypocritical (Score:2)
Enter the egos. Gnome and KDE will not cooperate with each other, even at the basic levels. If the lack of code reuse is something that really gets Miguel's goat, then perhaps a stronger effort should be made when negotiating with the KDE developers...
Matthew
Wrong. (Score:4)
No, most of the world's commercial desktop software was written for Windows, because *big drum roll here*... most of the world's commercial desktops run Windows!
And that's not because of API standardization, or you would have seen people fleeing in droves at the Win16->Win32 switch which forced everyone to rewrite all their software. Borland's OWL libraries and Microsoft's MFC would have destroyed the Windows programming "community".
That's simply because Microsoft managed to get contracts which put their software on the majority of clone computers, because clone computers were cheap, and because Microsoft allowed (some might say forced) network effects to turn that majority into a monopoly.
The problem with Linux above the kernel level is that you can run into a situation of multiple competing API's for most everything, which can become a bit of a programming nightmare.
Bullshit. Name one GUI Linux program you've written. Did you try to write it using two toolkits? If not, then exactly how did the existence of whatever toolkits you didn't use make your life a "nightmare"? All it did was give you extra choices to find an API you liked best before you started to program.
Remember, if programmers were forced to use one toolkit, we might be stuck using Xaw, Motif, or even Win32...
How do you define "easier"? (Score:3)
As for vi versus Notepad... well, a friend of mine has a good ease-of-use formula. The proper measure of ease of use is the total time spent doing a task. The formula for this is T(l) + nT(d), where T(l) is the time required to learn a task, T(d) is the time required to perform it, and n is the number of times it is performed. So for tasks you rarely do, T(l) dominates. But for tasks you do often (like, say, the several hours a day I spend in text editors), T(d) dominates.
The essence of this is that while vi is much harder to *learn* than Notepad, it is much more powerful as well, reducing use time. And if you spend several hours a day editing text (like most programmers do), the time to learn a more powerful editor is paid for many times over by the speed gain for complex tasks.
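To put rough numbers on that formula (my own figures, purely for illustration): say vi costs 20 more hours to learn than Notepad, but saves 10 minutes on each day's editing. 20 hours is 1200 minutes, so the extra T(l) is repaid after 1200/10 = 120 working days, and every day after that is pure gain.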
This is why I recommend to friends who use computers daily, even non-programmers, that they take the time to learn Linux. Not because it's more cool or politically correct, but because it's more *productive*. The learning curve in the short term is paid for by productivity in the long term.
And THAT, Young Jedi, is ease of use.
--
No one ever claimed that Unix doesn't suck. (Score:5)
Admitting the problem is progress (Score:2)
"Suck" in this case, needs clarification. In general, no one says the Linux kernel sucks or classic terminal window mode Linux sucks. The problems have come from trying to use this as a foundation for something more all-encompassing and modern. Is this possible? Sure, but it is difficult. Microsoft and Apple had had the advantage of only trying to support one graphics subsystem, one (admittedly huge) API, and one GUI. Linux developers have to build these layers themselves, and it is hard to keep from stepping on one anothers toes. Gnome and KDE have the same DLL hell as Windows, only they're called "shared libraries" under UNIX
Admitting there's a problem is much better than blind zealotry.
DLL hell (Score:2)
Fuck Miguel (Score:5)
And why is it that Miguel is held in such high regard among Slashdot users? He wrote a fairly nice desktop environment. So what? So did the KDE team, but most people can't even name a single person who worked on that project. So he thinks Unix sucks? Good for him. Everybody is entitled to their own opinion, but that doesn't mean that they are right.
</rant>
--
Example: printing (Score:2)
---- ----
OOP (Score:3)
We're all different.
Re:Unix was there first. (Score:2)
Re:The Solution is... A Monopoly! (Score:2)
Because RH Linux is so widely used, when people think "Linux" the first company they turn to for a commercial distribution -is- Red Hat.
Anyway, because of RH Linux's open design, you can run whatever GUI you want on top of it, just as long as it's reasonably standards-compliant. I'm waiting for the Eazel "Nautilus" extension to GNOME that Andy Hertzfeld (one of the few people who really has a clue about proper interface design) is working on.
Besides, if you have a Red Hat Certified Engineer (RHCE) certification chances are pretty good that any company that does a lot of work on the Internet will hire you almost on the spot.
Re:Oh brother. Can't see the forest... (Score:2)
Compare to Win2000, which has NTFS, an anemic semi-journaled hack of HPFS; Mac OS 9 or Windows 98/Me, which don't have one yet; and OS/2, which didn't get one until AIX's JFS was ported to it by IBM.
Steven E. Ehrbar
Re:Paradigms for a new OS? (Score:2)
Re:Reusable alright! You don't have a choice. (Score:2)
You're a shame to geeks everywhere Darth (Score:2)
Miguel's point about Unix being stagnant is so true, yet people never fail to point to some "new and improved" copy of something that's been done before. The best thing to happen to Unix recently was Mac OS X, and before that it was probably Beowulf clustering. Instead of calling OS X Macix or something lame like that, they separated themselves from the guys at Bell Labs 30 or so years ago. Apple took a stable and mature kernel and built a user-friendly interface on top of it. Apple doesn't force you into a command line or make you edit things by hand using pico or emacs. In OS X all the configuration files are standardized and formatted using XML. Users of OS X don't have to think about what is behind their candy-shaped buttons and mp3 files. They press the power button and get to work or play. The open source alternatives take pride in their non-uniformity and command lines. You're not going to take over the world with terminals; you take over the world by making the computer transparent and letting the applications take center stage.
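For those who haven't seen it, an OS X XML configuration file is a "property list" that looks roughly like this (the key shown here is a made-up example, not any particular app's setting):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>ShowAllFileExtensions</key>
        <true/>
    </dict>
    </plist>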
Re:Linux = No Innovation (Score:2)
Unlike Microsoft, Linux never claims to innovate.
And no, adding some lame extensions to a web browser does NOT count as innovation, especially when Netscape submitted similar extensions to standards bodies before they added them.
Mozilla's success and
Re:He's right, of course. (Score:2)
That's the usual tradeoff. But it's escapable with hardware support. Consider a system call, where the kernel has access to the stack of the caller, but the reverse isn't true. Pentiums and above support a similar mechanism for several protection rings, so you could, with suitable OS support, have non-kernel things applications could call but couldn't crash.
What you really want is a mechanism which can handle a crash on either side gracefully, just as a CORBA/Java RMI call can. This can be done inefficiently now, more efficiently with good OS support for call-type IPC, and very efficiently with some hardware support. For a while, I was thinking of working on L4/Linux [tu-dresden.de] and bringing it up to a usable state, but L4 isn't ready yet. Good ideas, though; it's a must-read for OS designers.
The hardware needed is already in some SPARC CPUs. It's unused, because the Spring OS people went off and did Java, but it's there.
As for getting people to convert to safe languages, I agree with jetson123 that it's desirable, but it's politically hopeless. I wish we were all using a Modula family language. I miss the Modula experience: once it compiles, it often runs correctly the first time. But even Compaq/DEC has shelved Modula.
Miguel de Icaza is an Idiot (Score:4)
I maintain that Unix does not suck, but rather that it is beautiful in its flexibility. This bozo claims that its weakness is "not deciding policy".
Go program for M$, Miguel.
Unix (and for that matter the entire Open Source movement) is about freedom, not about having mission-critical decisions made by some corporate suit who, incidentally, is only interested in making their company more $$.
I repeat, Go program for M$, Miguel.
Miguel claims that a weakness of Unix is in not sharing more code between applications. M$ shares code extensively between applications.
Miguel obviously has a LOT of trust in M$
that and
Making decisions (good or bad) and taking responsibility for them is part of being a functioning adult. The ability to make decisions is essential. To have the decisions already made and have no control over them is unacceptable. Any competent Unix sysadmin knows that security is his responsibility. A Unix sysadmin who has his boxes repeatedly compromised is likely to be out of a job before too long. When a M$ box gets compromised, it is no great shock; in fact the sysadmin of that box can't be held accountable for a system over which he has no control.
I can't believe this idiot sucked me in and made me waste time stating the obvious...
</rant>
Red Hat's Gnome sucks (Score:3)
Why does the default desktop supplied by Red Hat have to be so ugly? The icons on the taskbar aren't even lined up properly, for Christ's sakes, but seem to be placed at random. The theme is the most boring one possible, and the settings of the window manager are enough to drive anyone mad. When you've installed the latest Red Hat you have to spend at least an hour to get the settings somewhat usable. Don't even get me started on the *totally* messed up Netscape fonts. What are people new to Linux going to think! They can't be expected to mess around with font paths and font servers.
The point I'm trying to make is that Red Hat has just slapped on the latest version of GNOME available at the time, compiled it straight from its pristine sources and added two links to redhat.com on the desktop. That's just not going to cut it, not this century. If you want to see how a desktop *should* look, straight out of the box, take a look at Helix Code's GNOME version [helixcode.com]. Now *that* is a good-looking and well-behaved desktop, a desktop I wouldn't be ashamed to show a user who knows nothing about Linux. First impressions are important! If Red Hat has any clue they'll be using Helix's versions from now on. They are a VAR after all, so how about adding some value to the product! It costs them nothing.
Okay, I'm done ranting now.
Re:Lack of a graphics design (Score:2)
And what is X if not a server-based system?
Re:Duh!! (Score:2)
Yes, you can build up big complicated things by piping together commands, and redirecting stuff, and using sed and awk and perl and grep and find and all the rest.
THIS IS A CRAPPY WAY TO WRITE SOFTWARE.
If you change any component, it will break. It will not be portable because every Unix out there has different options for all those commands, and they mean slightly different things. Even worse, there is no error handling. Since all your data is a text stream, dealing with binary data or, heaven help us, actual structured data (like records, or objects, or whatever your favorite language calls them) is painful or impossible.
You claim "we've got reusable code running out of our ears". Yeah right. I challenge you to build a sophisticated, portable, maintainable application out of that so-called reusable code.
Even worse, there is no excuse for this state of affairs. Before Unix was even invented, there were LispStations. On those machines, instead of text streams, there were functions. Functions with error handling, defined interfaces, and even fancy stuff like introspection.
Unix could have been better.
Nonetheless, I still like Linux better than anything else out there right now... because it has source, it can be improved. The object models that KDE and Gnome are moving toward sound like a great start. They may not be perfect, but hey... what is?
Torrey Hoffman (Azog)
2-way XML gloppy feely pipes? (Score:2)
Yeah, I noticed that glossing over of the difference between applications that run on text streams, like cat, ls, grep and friends, and those applications that want a little more in the way of plumbing than is provided by a plain old "|".
So, then, presumably if you want to change mozilla to use qt graphical componentry instead of gtk (assuming mozilla had a "generic" layer instead of hardwiring one of two competitive but similar GUI toolkits), would it be as easy as changing one tag to the other?
I don't think so. And besides, the sheer number of types of valves and pipefittings that would be required to express the relations for each kind of interaction would proliferate so fast that even a punctuation-hardened Perl programmer would start to weep.
But then I got to thinking (dangerous, I know) - why not just robustify the text stream concept into 2-way XML, with some high-level publishing and querying of services that are offered/desired, and let the ends of the pipes (applications, components) negotiate the best fit. Kind of like pipes with glop on the ends?
Loading up the apps and the components with pipe-end feelers would surely make them heavier, but at least they would fit more often and more easily than they do now. Some kind of testing and negotiation about what is offered and desired would probably provide some mechanism for resolving things like DLL-hell, too. The idea probably isn't new. I kept thinking of autoconf feature tests :)
Also, while I haven't researched it, some of these ideas must lurk in either Jini or SOAP.
Speed read
Re:Lack of a graphics design (Score:5)
True. When I read this:
I couldn't help but think of X. The lousiness of that system is the best example of the problems that come when you avoid policy decisions. And the awful arguments made in X's favor whenever the topic of its suckiness comes up on Slashdot are certainly consistent with the idea that this avoidance of policy decisions is a 'hacker defense system'.
Probably the best example of what Miguel is talking about is the difference between what you can do with cut-and-paste in X and what you can do with cut-and-paste in Windows:
But once you move to a graphical environment, and thus acquire the ability to effectively represent much more structured data to the user, you need to provide higher-level interfaces to that data. Those text-oriented tools will be pretty much worthless if the file you're dealing with contains a description of a structured drawing. As a result of X's adoption of the Unix approach, all you can really cut-and-paste in X is (surprise!) flat, unstructured text strings.
This pays dividends far beyond cutting and pasting: strong application interoperability means that you can easily access and 'reuse' an existing application's functionality. An example from my own experience as a Windows developer, a few years back: I once spent a month working on a project whose goal was to build a graphical scripting tool for a specialized purpose. Users would draw out simple flowcharts, then our tool would generate code from these flowcharts. Rather than build our own flowchart-drawing tool, we were able to use Visio: we designed a set of custom Visio shapes that users could use to draw flowcharts. Then, the development environment we'd built would send users into Visio whenever they wanted to edit charts. When the user was done editing, the development environment would talk to Visio via OLE automation, pull out a highly structured description of the flowchart (basically, a list of all the symbols and their types (including some parameters that the user could specify, such as the conditional expression for a decision symbol), and of the links between symbols) and build a simple C++ representation of the chart that the code generator could then take as input. My job on the project was to build the layer that talked to Visio and built the code generator's input data structure, so I dealt pretty heavily with OLE. It worked out great for us, saving us an enormous amount of development time. And we ended up with a much higher-quality final product - instead of building our own mediocre tool for graphically editing flowcharts (which we would probably have ended up having to do if we were working in unix).
I liked Miguel's comments. I'm glad to see that someone is willing to stand up and say that while the emperor may not be completely naked, he should probably put on some pants...
Re:How do you define "easier"? (Score:3)
I would say that if you can do this problem you are too skilled a computer user to really judge ease of use issues. Ease of use is not for those of us who already know how to make a computer sit up and bark. It's for democratizing the power of the computer and offering it to that other 99% of the human race, the clueless (with the assumption that as a result, some of them will get a clue).
Paradigms for a new OS? (Score:5)
I think UNIX [bell-labs.com] did a lot to change the way OS design was viewed. UNIX treats everything as a file. UNIX focused on making a system with multiple users on the same system at the same time. (Multiprocessing, anyone?)
I think the boys over in Murray Hill [bell-labs.com] are doing a lot now with Plan9 [bell-labs.com] and a few other ideas I sometimes hear they kick around.
My question to all of you obviously more experienced coders out there:
What's the next paradigm for creating the next less sucky OS?
Treat everything as a data object? a module?
I don't know. I would love to see an OS based on a functional programming language. Something small and compact without too much bloat to it. Code up a decent GUI as well. Or how about this... the GUI is the text. Multiple windows of text a la an xterm; clicking on the word disk0 or some such thing would open up another window showing you the contents of the disk0 object.
Every piece of text is a mouse clickable object. If you type in disk0 it becomes a mouse clickable object which links to the contents of disk0.
Perhaps we would arrive at a new GUI or a new concept that makes either more sense to users, or perhaps is faster to operate with, with minimal learning curve.
A natural language based OS?
A user can type in his questions (eventually speak to the computer a la voice recognition) and receive textual and aural output from the machine. E.g., "Computer, please tell me the contents of disk0." "The contents of disk0 are: foo.txt, bar.c, baz.h"
Eventually somebody or something has to sit down and figure out a different way of looking at the data we are presented and see if it makes more, or less sense than what we currently have.
I don't know who that somebody is but I think it won't kill me to sit down tonight and see if I can come up with a few ideas.
I'm thinking about using a functional language because it forces me to look at things slightly differently than when I write C code.
Anyone else have any ideas or pointers to projects currently looking at stuff like this?
It would be a nice project to jump in to, no?
Dan O'Shea
Linux = No Innovation (Score:2)
Now a lot of you don't believe Microsoft has made or ever will make any effort to innovate. Those of you that do believe that need to remove those pink-tinted glasses that have become eternally attached to your face and finally face the truth of the matter.
Like it or not, your favorite monopoly is innovative. Though whenever they are innovative, the so-called community here on Slashdot is able to twist and turn it into something vile and evil.
Don't believe me?
Every time Microsoft decides to enhance the functionality of their web browser (which is used by 86% of the internet population), the Linux community whines and complains that Microsoft is simply manipulating publicly mandated standards in order to raise the value of their stock options.
The fact is, however, that Microsoft does own 86% of the market as far as web browsers are concerned (over 98% when it comes to Operating Systems), and in order to keep that position they have to do something that anybody in the Linux community has yet to even dream of doing: INNOVATE.
Look at that sorry POS software you all so lovingly refer to as Mozilla. That's probably the most dismal failure of the Open Source movement as of yet. (Though I assure you it will not be the last.) Ever since Internet Explorer 4.0, Netscape has simply been unable to compare in feature set, speed, stability, or ease of use.
This brings me to my next point however..... has anybody here heard of Windows
BS.
Over the last five years, Linux has been playing catchup - whether to Microsoft, SCO, or somebody else, it doesn't matter. At present your favorite penguin-sponsored OS is suffering from feature and driver deprivation. This will continue to be the case until you people begin to see things as they are rather than as you want them to be.
Remember - not only does X suck (being one of the most insecure programs on the face of planet earth) - but so does KDE, GNOME, StarOffice, GREP (never did care for that one - who the hell came up with that name anyway? Some of those Berkeley boys must have been toking when they came up with that one)..... need I say more?
Oh and by the way - your flames may be directed to: darkgamorck@home.com
Gam
Re:Lack of a graphics design (Score:3)
You can flame Microsoft all you want, but the fact that Windows has a singular WIN32 API drastically simplifies program development and software driver development. Because of that standardization, most of the world's commercial software for desktop machines -is- being written for Windows.
The problem with Linux above the kernel level is that you can run into a situation of multiple competing API's for most everything, which can become a bit of a programming nightmare. That's why people are gravitating towards supporting the Red Hat, Caldera, S.u.S.E. and TurboLinux commercial distributions, because at least you'll know what API's to program for with each commercial distribution of Linux. Is it any wonder that Red Hat has become the "de facto" standard for Linux almost everywhere?
Re:Unix was there first. (Score:2)
MSDOS is successful because the company that created what is now the world's most popular architecture initially sold systems only with MSDOS. The architecture is successful because it was dirt cheap (relatively speaking) when the clones hit the market.
I think saying Windows is successful is like saying the color NTSC broadcast signal in North America is successful - after all, that's what all the stations broadcast, and that's what all the TVs receive. The color signal is horrible because it had to be compatible with the black and white signal, which still takes up half the bandwidth. The entire color portion of the signal gets only the other half of the bandwidth.
I'm not saying Windows is bad, and I'm not saying NTSC is bad, I'm saying they stuck with backward compatibility instead of creating a better technology from the ground up. Both could have been so much better. Some things allow for being extended, some things are kludged into being extended. NTSC signal and Windows are examples of the latter.
Miguel is right, but I don't think it's about Unix as much as it is about the computer/human interface. Windows DOES do it right - there should be one printing dialog; applications should share interfaces to the hardware - printing, scanning, the graphics subsystem, etc. But I don't think it's Unix so much as it is the X Window System. I believe it's a layer in between. Actually, so does Miguel. He shouldn't have said that Unix sucks, he should have said X11 needs a framework to allow portable and reusable code. After all, like Windows, when you quit the GUI, you are stuck with DOS services. If you use an old Word Perfect under DOS, you still need its collection of fonts and its printer drivers.
I agree there should be a combining of efforts into one user interface, compatible window managers, and so forth. I also think there should be (and will always be) a set of applications that don't fit this framework. There's nothing wrong with the Unix mentality of "do one thing and do it right", even though the existence of a do-it-all framework would be a great addition. Just don't force anything on anybody - that's the Unix way.
----------
Why Unix is the way it is. (Score:3)
Think about how easy life would be if only we could reuse existing components. For example, I'll build my life by taking the 'Bill Gates wealth component', the 'Alan Cox programming component', the 'Jean-Claude Van Damme appearance component', the 'James Bond suave component', and the 'Sarah Michelle Gellar girlfriend component'. Nice life, huh?
Of course, if everyone else gets to build their life the same way, it becomes a mediocre life not worth living. If everyone gets to choose to be as wealthy as Bill Gates then everyone is equally poor; prices would skyrocket until a loaf of bread cost a billion dollars.
If everyone could program like Alan Cox there would be no demand, or use, for you as a programmer. Why would anyone get you to do the coding when they could get any of 6 billion people to do it?
Unix provides a stable base and a uniform API for applications; good design decisions flourish, bad ones die out.
The problem with the reusable component approach is that it requires bad design decisions to flourish. If there is a poor design decision made in a commonly used component it can't be corrected because of the number of programs it would break if it were changed. Instead of the fittest surviving, the most popular survive. What is worse, there is no basis for comparison and improvement, all programs take on a uniform boring sameness; there is no good or bad to choose from, and learn from. No evolution can take place.
What the component approach does is guarantee that bad design decisions live forever, because no one knows they are bad.
Component programming is like a good looking, but heartless woman; looks great at the start of the relationship, but the marriage is a horrible one.
Re:Software that doesn't suck (Score:2)
---
Re:The Solution is... A Monopoly! (Score:2)
Case in point: I recently installed Oracle on Slackware. Oracle recommends installing it on Red Hat, and has a nice bundling arrangement. I'm sure most corporate Oracle for Linux users will stick with Red Hat. But the fine print reads that all Oracle needs is a recent kernel, compiler, and C library. In practice, it was necessary to add a few symlinks to mimic the Red Hat locations of some basic tools, but other than that it was fully compatible. Oracle uses NOTHING that is unique to Red Hat, but they make a point of only supporting that distribution.
My whole decision to run Slackware rather than RedHat was that if I wanted default decisions to be made without my knowledge, and GUI-only configuration, I would have stuck with Windows.
Reusable code is fine, but like someone already mentioned, console-only *nix gives you that. I don't understand why the "Desktop Environment" projects feel it's necessary to re-implement everything with a GUI. Do we really need a GUI to dial into an ISP, when we can just as easily run a script from either an xterm or (gee whiz) a window-manager-configured root menu or hot-key?
If we're just trying to mimic what Microsoft has done with Windows, we will only look comparatively sloppy and inconsistent. IMHO, the beauty of Unix and Linux is the Unix philosophy. Take how *nix handles email: sendmail is pretty standard, but there are alternatives, and your decision to use one of those doesn't affect who you can communicate with, or which clients the user must use.
I think people should be able to choose their window manager without having that affect what applications they can run. They should be able to choose between several different browsers, email clients, instant-messaging clients, file managers, terms, menus, etc. The re-use of code should be at the lowest possible level, so that these choices can remain independent. If I am forced to choose between All-GNOME or All-KDE, I would choose neither.
Set of reusable components: corelinux (Score:2)
I still think that reusability is best achieved using C++ and OO languages.
Re:Oh brother. Can't see the forest... (Score:2)
Unix was a platform for Internet innovation 15 years ago, and Web innovation 8 years ago. What Internet innovations would you be referring to IN THE LAST 5 YEARS? EMACS 21.20030341458587? NcFTP? All of the really cutting edge work (Apache's sub-projects, IPv6, component development models, high end filesystems, etc.) is either being developed as cross-platform projects that UNIX is only one target for, or is something UNIX (and Linux) are playing catch-up on (e.g. journaling filesystems).
Out of context quoting (Score:2)
Until very recently (the advent of Linux on the desktop), Unix was primarily used by developers and systems administrators. These are people whose primary tasks can all be solved by either editing text files or piping together applications on the command line. There was, and still is, no major need for developers and/or sysadmins to be able to embed applications or objects in one another in a GUI.
On the other hand, several end user applications can benefit from being able to embed applications within each other and share data in a uniform manner. That is why I noted that maybe it is time for the paradigm to shift.
PS: Of course there are many pitfalls that have to be avoided, such as the library version conflicts (Windows DLL hell) that occur when an app is upgraded and uses more recent components than the others on the system.
Re:WinTel is FAR (FAR FAR) easier than *nix (Score:2)
Certainly it had a couple of graphical editors but quite honestly I'm happier with pico.
However, to get gedit or gnotepad to work I suspect they need to be compiled etc... This involves me first finding a copy of SUNWspro with which to compile gcc, then using that gcc to attempt to get the open source stuff to compile... but hang on, I don't have root access, so I have to piss about with paths for ages.
Quite honestly I've never got anything more than about 50k of source to compile properly here. It's a bit demoralising.
At least Solaris does have fairly good binary compatibility, but then all the binary packages assume you are root.
And with respect to the comment about end users not being able to install Win98 - you are wrong.
Considering that a Win98 install comprises putting the CD in, turning on the PC (possibly setting the BIOS to boot from CD first), then clicking Next several hundred times.
You'd be amazed what end users can do when their Windows installation crashes before a big deadline. I've even known mothers of my friends to be capable of a reinstall... and that's saying something.
Re:The Solution is... A Monopoly! (Score:2)
Care to explain why Red Hat Linux has become the de facto standard for Linux? The reason is very simple: IT managers want -standardization-, which drastically reduces support and programming costs.
Because the likes of Dell Computer and IBM are big supporters of Red Hat, the fact that Dell and IBM will provide technical assistance in supporting Red Hat Linux means instant credibility for Red Hat in the corporate world, and it's probably the big reason why Red Hat Linux is the current de facto standard.
Re:Do something about it (Score:2)
I've been thinking about this a lot, lately, and I've come to the conclusion that this really isn't true anymore. The Bazaar's been bought out, leveled, and turned into a strip mall.
For example, here we already have two groups (GNOME, KDE) whose architectures, approaches, and hidden assumptions are basically entrenched in the marketplace. The "community" has already decided that we shall use CORBA (with all that entails). It's already been decided that we're going to use the same basic windows/mac/amiga hybrid interface (look and feel between KDE and GNOME are basically the same IMO). Other window managers are begrudgingly supported, but each environment has a definite pressure towards the One True Window Manager. It's already been decided that the ideal free office suite is essentially going to be a pale copy of Microso~1's suite... I could go on but you get the point.
Honestly, I'd love to help change this, but think about it. If a third team came from out of nowhere and proposed/implemented a simpler component architecture that wasn't so tied to one GUI (or tied to a GUI at all -- GUIs should be wrappers, not core software), or tied to one huge set of libraries, that didn't require developers to buy into one overarching desktop environment... or that wasn't subsidized by RedHat, TrollTech or Corel for that matter... what do you think would happen? It would go undernourished and die a slow whimpering death, amid cries of "but we already have one component architecture too many!"... assuming that anyone noticed it at all.
There's no point, anymore. It's actually become a very repressive and stifling environment. It's the 1980s all over again.
Hmmm... does Miguel have the courage to take a step towards consolidation?
To hell with consolidation. Some of us still believe that UNIX is about innovation, diversity, and beautiful, sweet, ubiquitous chaos. >}:)
Re:Lack of a graphics design (Score:2)
It provides a level of abstraction on top of the hardware.
It may not be perfect, and it may not be fast, but it's a lot more than you make it out to be.
It's a much better platform to target an app kit to than raw hardware.
Better to have many thin layers of abstraction implementing standardized interfaces than one BIG standard API to do everything. This is what we've learned from n years of software engineering.
UNIX's central idea: low level reusable components (Score:3)
Re:Lack of a graphics design (Score:2)
There's a solution to that. (Score:2)
of course! (Score:2)
It all depends. (Score:2)
Re:Wrong. (Score:2)
You're getting into a chicken and egg routine here. He's saying that the Win32 API is the standard because Windows is the standard, not the other way around. I think he's technically correct on that since Win32 forced everyone to rewrite all their Win16 software. Why did they rewrite all that software? Because everybody runs Windows.
That will change (Score:2)
We're all different.
Analogy (Score:2)
Everything Sucks (Score:3)
I love Linux for its flexibility. Drop the kernel in and everything else is optional. Want the standard UNIX utilities? Add 'em. It's optional. It's all optional. No one dictates that policy. That means I can install Linux on my embedded device and leave off 98% of the crap you get in a standard distribution, hack some sort of GUI out. GGI or X on GGI or X on custom hardware. It doesn't care. No one set a policy dictating things. But wait! I don't want a window manager on my embedded hardware! NO PROBLEM! I can make my own UI!
Griping about the flexibility that makes the system great is stupid. Remember the Chinese guy from UHF? Let's all face Miguel and say it together: "STOOOOPID! YOU SO STOOOOPID!"
Tongue firmly in cheek, of course.
Anyway, now that we've got that out of our systems, the point about component programming is valid. The text tools are designed to be simple and flexible, but the GUI is a relatively new add-on and is in some ways more primitive than Windows 3.1. I've complained about the lack of a decent print subsystem myself. And GUI apps tend to try to do more than the simple text-based ones. I think many people view X as nothing more than a way to keep 10 or 15 text terminals in view at once.
Thing is, this is all going to get fixed. Several companies are working on the printing problem. Once they all screw it up and present 15 different conflicting standards, some group of free programmers will get pissed off enough to write one from scratch. X could go away as well. Much of the new software is GTK based, and porting GTK should be as easy as porting GDK and a bit of other stuff. ORBit doesn't rely on X, and most of the Gnome stuff builds on GTK.
UNIX may suck, but unlike the competition, UNIX is going to get better.
He's right, of course. (Score:2)
On the other hand, the open source community has the advantage that any one group can lay its hands on all the source. So if some group undertook to clean up Linux and produce "Linux 2", they could do so.
Hear, hear (Score:2)
Anyone? Bueller?
---
Re:Multiple Window Managers! (Score:2)
This is simply not true. Linux interfaces have been awful, oh so unimaginably awful, until recently. Now they've moved up to mediocre.
Windows standardized their interface and thus restricts the user.
This is a myth, or at least we have yet to see any evidence to the contrary. Linux provides a broad, empty canvas for interface designers, yet we haven't seen anything innovative or especially slick from a usability standpoint.
I am very afraid that this flexibility which Linux possesses will be destroyed by Gnome/KDE. As these projects progress, more and more programs which could have been implemented on the command line are implemented in gtk. Soon I won't be able to access my settings except by dialog boxes, and I will once again be trapped in Windows hell.
This is a classic overreaction. If you have access to a terminal window--heck, even OS X on the Mac will have this, though it won't be advertised to the general public--then you can do whatever you want.
I agree, though, in that KDE and Gnome are generally poor interfaces to standardize upon. It's not at all clear why one would choose a Linux/X/Gnome combination over, say, Windows 2000 or NT. The Linux kernel is cleaner and more stable, yes, but that doesn't matter when you put millions of lines of code on top of it. Instead of the kernel crashing, part of Gnome crashes. Better? Yes, but not something to get excited about. Perhaps what is needed here is a simpler GUI that would be a better base standard than X and a minimalist window manager.
A standardized interface means several things. It means no competition, which stagnates development.
Linux GUI development is already stagnant. Years and years are spent in an effort to get the same functionality as crusty old Windows. I wouldn't be at all surprised if Microsoft, who can certainly afford to pay respected experts what they deserve, manages to move GUIs to the next level before any open source project does.
Ironic and stupid (Score:2)
I agree that this can be a problem if you are writing a "major application" under Unix. You constantly come across problems where you think "surely this has been solved before".
And GNOME is doing an admirable job of solving some of these problems in libraries--unfortunately, they are still global solutions. You have to buy into GNOME to an unacceptable degree to get the solutions to work. For instance, you have to use GTK, CORBA, etc, etc, etc.
--
Re:Unix was there first. (Score:2)
Consider all the fuss people here make about the latest GHz chip and the cool-looking Mac range of toys.
Older != Better != Newer
Re:How do *you* define "easier"? (Score:2)
The problem with Windows is that everything is either easy or *impossible* (or at least extremely difficult). I've worked for years with both, and I've spent far more time beating my head against a wall trying to get Windows to do even trivial tasks, if no Microsoft engineer thought of the task before I did. The joy of Unix is that I can easily combine tools to perform just about any specialized task I can think of.
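For instance, here's the kind of thing I mean - just a sketch, assuming a plain-text report.txt, but every tool in it is stock:

  # word-frequency count, snapped together from parts in under a minute
  tr -cs 'A-Za-z' '\n' < report.txt |   # one word per line
    tr 'A-Z' 'a-z' |                    # normalize case
    sort | uniq -c | sort -rn |         # count and rank
    head                                # top ten

Good luck doing that on stock Windows without writing a program.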
Yeah, I'm a power user. I'm an experienced programmer. But AS A POWER USER, I consider Windows to be downright user-hostile. At this point, I would not take a job that required me to use Windows rather than Unix/Linux as my primary interface. I'm far, far more productive at my Linux box.
--
Re:Components and X (Score:2)
Your understanding of components is limited. "Stringing together programs" is not component programming.
Why? Well, a lot of reasons. One: there is no error handling. Piping text files is one-way; there is no way to pass error messages "back" down the pipe. Have you ever tried to debug a complicated shell script that uses a bunch of pipes?
Two: Pipes can't really pass structured data. How do you push a linked list through a pipe? How about a hash table?
Three: All of those "components" are really programs, and their communication is inefficient. Running your example would start at least three full-blown programs, and these programs have to communicate through text files - so data gets copied repeatedly from one address space to another. This is inefficient. If it was a real program using real components, the data would be loaded into memory once, and each object could access it. Much faster!
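A sketch of the first two points (bash-specific; PIPESTATUS is a bashism):

  cat /etc/passwd | grep no_such_user | wc -l   # prints 0, and the pipeline exits 0
  echo "${PIPESTATUS[@]}"                       # "0 1 0" - grep's failure, visible only here
  # $? reports only the last stage, so the script never hears that grep failed.
  # And the only thing that crossed those pipes was flat text - any structure
  # (a list, a hash table) has to be flattened and re-parsed at every stage.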
Yes, on Unix, everything is a file. Probably a text file. So Unix is really good at text files. Did I mention that it's also really good at text files? Yup, if you want to process text files, Unix is great!
X Windows is only the most OBVIOUS problem, because when you are working with graphics, piping files just doesn't cut it anymore. But if Unix had been built on a real component model, and not just the idea of piping text files around, then everything would be better, even at the command prompt. And X Windows could have used a more powerful system, and then it would have had real cut and paste, real printing capability, real font handling, etc.
So if you agree with Miguel about X, you should also agree with him about Unix in general. The problem with Unix is that the unidirectional piping of unstructured files is not a powerful enough communications model.
This is why Miguel likes Windows: despite all the problems, Windows at least has real components. For TEN YEARS NOW, since Windows 3.0, you have been able to paste a complex object like a drawing into another complex object, like a spreadsheet.
Unix has only begun to pick up this capability in the last year with KDE and Gnome! Ten years late! (More if you consider the Macintosh!)
Torrey Hoffman (Azog)
Do something about it (Score:5)
Actually, Miguel is one of the few people who is in a position where doing something about it is actually feasible. Whatever happened to that KDE & GNOME common component architecture? That would have been a step in the right direction.
I do believe that there is too much ego flying about for a lot of good things to get done. It takes a big man to climb down and say: okay, let's merge. Let's reuse. You can do it better than me. And with OS development kudos is currency, and to lose ego is to lose currency.
Hmmm... does Miguel have the courage to take a step towards consolidation?
Thad
Lack of a graphics design (Score:4)
The terminal does just fine with the components it has. There are quite a few shared libraries, and for (for instance) printing, everything uses lpr - plain and simple. But a drawing model like X does not an application kit make.
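As far as lpr goes, that really is the whole interface today - a sketch (assuming a configured lpd; paper.ms is a stand-in troff source):

  groff -ms paper.ms | lpr   # groff emits PostScript by default; lpr spools it
  man -t lpr | lpr           # even a man page renders to PostScript and prints the same way

One spooler interface, whatever produced the output.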
Personally, I think that the best approach for an application development framework is a server-based model like BeOS. In Windows, programs duplicate functionality that's handled by one server in BeOS. Linux (and UNIX) is a great command-line environment, and provides a rich environment on top of that. Just don't use X for anything more than xterm, xclock, and xload.
Re:Balance is Key (Score:2)
Console based stuff is reusable (Score:3)
Today, people want to build GUI apps, and he is right to say that UNIX lags behind Windows in reusability in that regard. But this is clearly not a "design flaw", just the lack of a widely used toolkit of common objects.
Miguel is also ignoring the fact that a closed, tightly controlled platform like Windows will always have a higher level of uniformity (and reusability) than an open platform which must rely on de facto standards rather than the "king's edict", so to speak. In that sense, then, openness is a design flaw. No, I don't buy it either... Gnome is on track to provide the kind of high level reusable objects he wants. He should stop whining and write code.
--
Re:Lack of a graphics design (Score:2)
>"Matthew Gream is a goat fucker."
>-Richard M. Stallman, 1996
You should try it sometime, it may relieve that anger that you have built up.
COM/bonobo/whatever (Score:2)
Sorry if this makes no sense; I just got to work, and it's early. ;-)
Re:He's right, of course. (Score:2)
I don't agree with your opinion, and on one of your points in particular I have to speak out. You state "text-file based system administration has to go", but personally I'd rather have that than some kind of opaque registry. I don't mind if somebody builds a nice easy GUI interface to those files, and I may even use it if it makes my life easier, but once something breaks I want those files to be readable and FIXABLE with a text editor and the mark 1 eyeball - so that when the system is flailing around in agony and crashing about my ears, I can get it into single-user mode, grab a tool that I can count on to work even when everything else is pretty much broken, and at least get my system to a point where it will boot normally. I'm sure you can come back to me and point out that simple command line tools could be built to do that with any file format, but that misses one big advantage of plain old text - the humble comment. If all my config files are pretty much self-documenting (which they should be if I'm doing my job right!) then I can do things like
# human firmware exploit
# Word will insert into your optic buffer
# without bounds checking
and be a little more confident that I or a colleague won't forget that little wrinkle and step into the same gotcha later.
Not really accurate statement (Score:4)
What he said is that there is no innovation going on in UNIX, and that a number of its fundamental features, while attractive to our community, are preventing the whole world from using the operating system.
He cited Apple's work on MacOS X as an example of a team that changed some of the fundamental kernel designs on behalf of "end-users".
Miguel's big point is that there isn't a component model and code reuse simply doesn't happen. He is right on the money with that.
However, I don't know about the solution of just copying COM/ActiveX/OLE, especially when Microsoft is now dumping COM in favour of its
I suspect Java is in the Linux desktop future whether people want to admit it or not. The Java2 integration on MacOS X that was demonstrated at JavaOne shows how obsolete Microsoft's component model for applications is.
In the rest of his keynote he talked about innovation in specific applications such as mail and the whole INBOX/foldering problem. I hope GNOME (and now SUN and StarOffice/OpenOffice) can address some of the design problems with Microsoft Office.
He did say UNIX sucks, and he is correct: many things about it do, but there is suckage on every platform. His point was that we have to fix the things that suck on UNIX; he is not advocating redoing it from scratch.
Oh brother. Can't see the forest... (Score:5)
More evidence of Miguel's genius can be seen in his critique of Unix in general. Unix is not a platform of innovation. Take the biggest development in all software markets in the last five years: the internet. Unix could never have produced the innovation of the internet...
Miguel's a little confused.
It drives me nuts when people who are a little bit smart, like Miguel, start to think they are really smart, because while he can see problems, he is still not smart enough to see solutions. Allowing for many, many window managers is not a mistake, it's the trend: think about skins. No, the problem is that the developers who are writing all the window managers keep starting from scratch, or pay little attention to the other window managers. For example, I like the focus to follow the mouse. I'd like to set that one time in one place, then experiment with different window managers to see which I like (today... :) But you see? That's a simple solution to a problem. There's no need to throw the baby out with the bathwater, which is what Microsoft did. Microsoft was a Unix systems house back when they produced DOS, and many features of DOS were modelled on Unix. It took them years and years to reintroduce simple things like memory management and multitasking, and then they set off to create NT, an OS that nobody even wants to clone.
Yep, it's true that some areas of Unix are very weak, like printer drivers, but that's more a reflection of the culture: Unix isn't used on office desktops much. Windows has equally glaring deficiencies: think of how much Windows code gets "reused" every day by hackers exploiting the security holes :)
Nope, Miguel, you are not onto anything big, just another Dvorak in a different suit.
Components are not the be-all and end-all (Score:3)
Taken to extremes - like our good friends in Redmond - you wind up with many, many applications depending on a large number of common components, with (here's the kicker) at times incompatible APIs. Need BeltchWord 5.0 and FlatuanceDraw 6.2? Can't do that if they each want different versions of the same component.
And then you get situations where an application upgrades a component that the OS/Window Manager depends on.... version control lunacy.
I believe this is called "DLL Hell" in Windows circles.
No thanks Miguel. I like and use GNOME, and I look forward to useful things like a common GNOME printing model, but I also very much indeed like the current UNIX way of doing things with regards to the window manager, X, and the kernel.
Some may see 20 years of development as "stagnant", but I see 20 years of continuous evolution. Cockroaches haven't changed much in 300 million years, because they don't have to - they're pretty damned efficient as shipped.
Re:Software that doesn't suck (Score:2)
---
Re:He's right, of course. (Score:2)
Not necessarily. What you really need is some specialized hardware support for IPC. Pentiums and above have some of what's needed; look into "call gates" and "task gates". SPARC V9 and above have more of it, as a spinoff of the Spring OS project at Sun. In time, I think we'll see this, called "COM/CORBA acceleration". It's about time for a CPU enhancement that benefits somebody besides gamers.
All those COM/CORBA/OpenRPC/JRI mechanisms are, at bottom, subroutine calls. There's a slow setup/object-creation/directory mechanism that has to operate before you make the subroutine call, but once that's out of the way, everything looks like an ordinary subroutine call. What's needed is a very efficient mechanism for those calls once all the setup is in place. L4, QNX, EROS, and Spring all achieved this in under 100 instructions without special hardware, so it's possible. What's needed is to get the cost down to the level of a DLL call.
Once you have this, writing component software looks much more attractive. Right now, there's a big performance penalty for breaking an app into components like this. If that went away...
Why isn't he doing his part, then? (Score:3)
But if Miguel wanted to help improve the situation, why did he go off developing such a huge software project in C on UNIX? It is C that makes component based development such a pain. C lacks even minimal support for component based development (e.g., no dynamic typing, no reflection), and it is impossible to make large, component based systems in C both robust and efficient: there is no fault isolation--a bad pointer in one component will crash other components unless you put them in separate processes.
The answers to these problems are well known. Systems like Smalltalk-80 and the Lisp machine were fully integrated, component based environments where everything talked to each other. And almost any language other than C and C++ is better for component-based development and provides reuse.
Microsoft does not have the answer. Microsoft's component model, COM, has very serious problems. It's complex because the languages it is based on don't have any support for component based development. And despite its complexity, it is still dumbed down, because anything else would be unmanageable in C/C++. And it has no fault isolation, meaning that if you load a bunch of COM components and your program dies, you have no idea what went wrong.
In fact, UNIX had an excellent, reusable component model: command line programs that interchange data with files. That's no good for building a graphical desktop, but it was excellent for the UNIX user community--people manipulating lots of data. And that model has been extended to graphical desktops and networked systems in Plan 9 and Inferno, which also address many of the other problems with C/C++ through Alef and Limbo. Or, alternatively, Objective-C and OpenStep managed to build something that supports powerful reuse and component-based development on top of UNIX. And Java is excellent at supporting component-based programming, reuse, and fault isolation.
If Miguel genuinely wants to improve the situation, why isn't he using the tools that will let him do so? Why isn't he learning from the long history of component-based development that preceded both him and Microsoft? Why is he copying Microsoft's mistakes and mediocrity? Why isn't he supporting tools that genuinely make a difference rather than encouraging the use of tools (C/C++) that were never intended for this kind of work?
People say about democracy that "it is the worst form of government, until you have tried the others". I think the same is true about UNIX. Gnome and GTK help improve the usability of a flawed tool. As such they are really welcome. But by not addressing the root causes of the problems, we'll probably be here discussing the very same problems again in another 15 years, because everything people complain about in UNIX was known 15 years ago, nobody fixed it, and it (and its clone--Windows) still became immensely popular.
Re:Linux = No Innovation (Score:3)
All this innovation for the sake of innovation is stupid. Innovations must solve problems. Go ask Ross Anderson how he designed the system. Did he slap code together and say "there, I call it StegFS", or did he pose a problem about an issue that encryption does not address, and then propose a solution?
OTOH, MS coming out with "focus" control technology is just that - a hammer in search of a nail. MS, in their backwards marketing-directed software development, is causing the software industry to go in circles - going nowhere.
We do not need reusable stock libraries... (Score:3)
My point is that most people here are saying we need a complete set of standard libraries. I am saying we need a complete document describing how standard libraries should interact. Then we build hundreds of libraries to this standard. The Unix way, where everything is a file, is a very basic implementation of this - say I do something like someprog | sed | cut | awk > file (flags stripped) for some task, but find that sed|cut|awk is not powerful enough. An hour later I can do someprog | perl > file. No changes to someprog, no changes to the file. No changes to the mythical peon who depends on post-processing the output of someprog.
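Concretely - a sketch only, with someprog as the imaginary producer from above and the filters as stand-ins:

  someprog | sed 's/#.*//' | cut -f2 | awk 'NF' > file   # hour one: the classic trio
  someprog | perl -lane 'print $F[1] if @F' > file       # hour two: swap in perl

Neither someprog nor anything that reads "file" has to know or care.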
I want options. I do not want some idiotic stock library designed by some fool.
Re:Unix was there first. (Score:3)
unix wasn't the first operating system in the world
unix will not be the last either
As time goes by, better ways of implementing things are discovered. Whilst windoze might not have the best underlying operating system, I feel that it does have a far better user interface than any linux/unix variant. Sure, gnome looks pretty, but that's just your aforementioned flash.
To be fair, windows does make it possible for end users to set up and work a PC without the amount of technical knowledge required to install linux.
Let's face it, most people do find dragging files into a bin easier than remembering to use "rm -r foldername". Personally I like command line stuff, but that's just me.
If windows is so bad, then why do more people use it than linux?
He's right. (Score:3)
Most Unix applications share little or nothing with each other, save for the C library and X libraries. Everything else appears to be an attempt to re-invent the wheel, sometimes coming up with an eccentric triangle instead.
The main advantage is that if a security hole or bug is discovered in a library, a replacement library will resolve the majority of problems. A certain $oftware company does this a lot. The other advantage is that it saves memory.
Gnome appears to be doing more than KDE in this field. Run ldd against a typical Gnome application, and a whole host of component libraries will be linked in - Imlib and others for image rendering, GtkXmHTML for HTML, Gtk and libgnome of course, and so on.
Gnome is standardising on which libraries to use. Unix libraries have become fragmented, with many features duplicated between competing libraries. The present situation elsewhere is a mess, due to it not being controlled.
The only other environment I can see that does something very similar is Perl, with standard modules available on CPAN. Python may do the same, but I haven't looked at Python closely enough.
OOP Reuse Myth (Score:4)
Although you don't explicitly state it, you seem to be implying that OOP encourages more reuse than other programming paradigms. Now, while OOP does encourage more reusable code to be written, it has not been shown that this actually generates more reuse in practice.
Thad
Re:Unix was there first. (Score:3)
UNIX did a lot of things right. If you look at what Miguel had to say, he's looking for more code reusability. Unix did it at the program level; now he's asking for it to be done at the functionality (sub-application) level. He's actually asking for an extension/deepening of a core UNIX principle to where we could/should have been working it a long time ago.
It just stagnated a bit because of the closed-sourcing of UNIX back in the '80s.
Interesting..... (Score:5)
I have a crappy graphics card, so my whole computer is a piece of crap!!!
Just because UNIX lacks some reusable code in its graphical shell, it sucks? What about the fact that I can do almost everything I need to maintain a system over a serial port?
Unix needs a lot of changes in order to become a desktop OS. UNIX was designed for mainframes three decades ago; X and desktops came into existence decades afterwards. Miguel's argument is like saying a 1960 automobile sucks because it does not have airbags. But the basic engine and chassis design is the same - today's cars just have improvements.
Reusable code... Just count the number of OSes out there that were built on a UNIX kernel. UNIX must have done something right.
I wouldn't say X sucks; I would say X is too old for today's standards. Just like a PDP-11 is old by today's standards.
What the *nix world needs is a newer graphical shell that defines a standard API that people can utilize. You can write all the window managers you want, as long as you conform to that API.
The API should include:
1) Unified standard printing architecture.
2) Reusable components for the primary functions of applications.
3) A standard for user interface (menu options, etc.), like Edit->Preferences - not Tools->Options here, file Properties there, and something different in every other place.
4) A standard method for software installation: src goes here, binaries go here, and so on. An API to make installation easy, such that icons get put in the menu and links get created automatically on the desktop.
All this and many more standardizations are key to Unix's entry onto the desktop. Standardization does not mean one window manager, but that the basic UI should remain consistent.
The only reason people like windows (Yes
Till we realise this and look at it from a consumer point of view, I don't see Unix or Linux on every desktop in the world.
Duh!! (Score:4)
ls ???? | grep ?????
You want to combine a whole bunch of components? You can use a shell script or even perl.
We've got reusable code running out our ears.
"Everything is a file" (Score:3)
That's not really what "everything is a file" (EiaF) means. EiaF is really a pretty low-level thing, meaning that all sorts of objects - files, devices, fifos - live in a common namespace and are accessed via a common set of syscalls - open, close, read, write, ioctl. This was actually an advance over earlier operating systems, which often required that you use different syscalls to get different kinds of descriptors for each kind of entity, and which had multiple namespaces as well. Ew. You can see the power of EiaF less clearly in UNIX itself, which contains many deviations from the principle, than in Plan 9, which was the "next act" for the UNIX principals.
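You can watch the common interface from the shell - a throwaway sketch:

  cat /etc/motd                     # a regular file
  head -c 16 /dev/urandom | od -x   # a device node, read with the same read()
  mkfifo /tmp/p && cat /tmp/p &     # a fifo living in the same namespace...
  echo hello > /tmp/p               # ...written through the same open()/write()

One namespace, one set of syscalls, one cat.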
There are a couple of other principles that you seem to be confusing with EiaF, and I think it's worth discussing them too. One is the idea that files should be unstructured. Again, this is a low-level idea, this time referring only to the "physical" layout of files and to the filesystem interfaces. As a filesystem designer and implementor, I can say this principle is very important. Filesystems have quite enough to do without having to worry about different record types and keyed access and so on - as many earlier OSes (most notably VMS) did. Man, was that a pain. What gets built in user space, on top of that very simple kernel-space foundation, is up to you. More complex structures have been built on top of flat files since the very first days of UNIX (e.g. dbm files).
Another related principle is that data should be stored as text whenever possible. This is an idea that's gaining new life with the widespread adoption of XML to represent structured data, and again it's a good one. Doing things this way makes it much easier to write filters and editors and search tools and viewers and such (or to perform many of these tasks manually) than if the data is all binary. It makes reverse engineering of file formats easier, which is a mixed blessing, but it also makes manual recovery and repair easier. Converting to/from text also tends to avoid some of the problems - endianness, word size - that occur with binary data. Obviously there are many cases - e.g. multimedia files - where conversion to/from text is so grossly inefficient that it's not really feasible, but in very many other cases it's just a pain in the ass for the next guy when some lazy programmer decided to dump raw internal data structures in binary form instead of doing it as text.
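A quick illustration (the file names are hypothetical):

  grep -n timeout app.conf                 # query a setting with stock tools
  sed 's/draft/final/' notes.txt > notes2  # edit with a one-liner, no special editor
  diff notes.txt notes2                    # audit exactly what changed

None of which works on an opaque binary dump without a purpose-built tool per format.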
In conclusion, I'd say that by all means people should try to retain the structure of data. Even better would be if the means for manipulating data could be provided and linked to the data itself in some standard way, like OLE/COM does in the MS world. At the very least, even without a common framework, it would be nice if more programmers would provide libraries with which to manipulate their data files. But please, let's do all this on top of text wherever possible, and let's do that in turn on top of a flat-file kernel abstraction within a single namespace. These are some of the more important principles that led to UNIX being such a success.
Re:Lack of a graphics design (Score:3)
Close, but not quite.
What's really needed is a component model (like Bonobo) and a standard URI-type reference that defines the component in terms of the content to be displayed, like OLE uses. So, if you cut-and-paste from the diagramming tool, you should get a snippet of XML that identifies the Bonobo component that is needed to display and/or edit the diagram, and the description of the diagram data. That way your componentware program can display the diagram exactly the way the diagramming tool can.
In addition, Windows permits various rendered versions of the data to be included in the clipboard structure, so in the hypothetical Linux example your XML snippet would probably define:
A text representation (required)
A Bonobo reference with data (optional)
A PNG or other graphic (encouraged)
A space for both standardized and application-defined extensions (SVG, MPEG, binary data structure, URL, etc).
That would be pretty much analogous to the Clipboard. Ideally, a negotiation could take place to prevent clipboard-overloading (just the Bonobo invocation interface and the minimal definition of the clipping bounds is passed to start, and the request is resolved between apps without the framework in the way), but that would require sharing the clipboard-access code :-)
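To make that concrete, a purely invented example - no element or attribute name here is part of any actual standard:

  <clipping>
    <text>org chart: sales dept</text>                       <!-- required fallback -->
    <component iid="Bonobo/DiagramView">...</component>      <!-- optional, with data -->
    <preview type="image/png" encoding="base64">...</preview> <!-- encouraged -->
    <ext type="image/svg+xml">...</ext>                      <!-- extensions -->
  </clipping>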
Miguel and the rest of you are, of course, free to attend to the small matter of implementation :-)
I can't decide whether to laugh or be afraid. (Score:4)
Well, duh. Did he expect independent commercial software shops to share their code with each other?
Someone please tell me this is a belated April Fools joke!
He goes on to make reasonably valid points about how "reusable components" are available under Windows. What he misses is that this puts other software shops completely at the mercy of the components' owner, Microsoft. Is he proposing a Unix where everyone is similarly dependent on GNOME's components?
OK, GTK+ and Qt provide some nice reusable components. The advantages are obvious. I use them myself. So why is he dredging up all this irrelevant/clueless/scary stuff?
I am a GNOME user, and often defend it when it is unfairly maligned, but I don't think I like the way this is headed. No, not at all. Hopefully he's just talking out his ass rather than presenting a carefully thought-out position.
--
Re:Duh!! (Score:4)
Did you even read the article before posting this? When he said we are lacking reusable code, he mentioned APPLICATIONS like Acrobat, StarOffice, and Netscape. Aside from the C libraries, there is *NO* re-used code between any of those applications. His example was printing. Each application has its OWN printing system, configuration, and method of working. The sad thing is, they all pretty much do the same thing... generate a Postscript file.
He isn't talking about ls, grep, cat, cut, paste, and UTILITIES like that. He's talking full-blown applications. You know, applications...the things that people have to have to USE their computers.
---
The Solution is... A Monopoly! (Score:5)
Where he is completely wrong is his claim that Unix is no longer a platform for innovation. He's got that completely backwards -- indeed, the whole reason for the inconsistency of user interfaces is the very openness and relative simplicity of Unix. Each layer is separate from the next, so it's easy to write a new GUI system on top of the OS without changing any of the underlying layers. And people have done just that, which has led to several generations of X and other apps lying around (Xaw, Motif, OffiX, etc) -- people see a problem with the existing GUI and they reinvent the wheel, leading to a proliferation of incompatible interfaces.
Hmm, just like KDE and Gnome.
The upshot is, because it's open, we have a choice. And choice can lead to inconsistency. So if he wants to work on a platform where everything will always be consistent, he can go work for Apple or Microsoft. Otherwise, he'll just have to make Gnome so good that no one will want to use anything else, because there isn't any way to shove things down people's throats in the *nix world.
And that's a Good Thing (tm).
Balance is Key (Score:5)
As a developer I refuse to link my applications with GNOME because it has taken a few good concepts and gone WAY overboard. GNOME initially seemed to be a set of developer guidelines to promote a common look-and-feel. A few "meta-widgets" were created on top of Gtk+ to promote this. (gnome-stock-this and gnome-stock-that)
This was good. Then someone decided to go even further. More widgets were added. Many of these widgets should have been added at a low level (read: Gtk+) but instead were added in at the GNOME level. Now you have widgets that depend on gnome-libs, and a fairly incestuous circle is starting to emerge where GNOME depends on GNOME, and it's getting so complicated that no developers I know are willing to shackle their projects to the great beast that GNOME has become.
Miguel and Co. can't see the forest for the trees. I recently ripped the GNOME out of GtkHTML and created CscHTML (http://www.cscmail.net/cschtml [cscmail.net]). Miguel and several of the other GNOME developers couldn't comprehend why anyone would do such a thing. They couldn't understand the need for a non-GNOME-dependent HTML widget. They couldn't agree that a "Gtk widget" (GtkHTML) shouldn't depend on GNOME. Circular dependencies are a bad thing. GNOME depends on Gtk. GtkHTML depends on GNOME. Chicken, egg?
Code re-use is a good thing in moderation. Not every hunk of code needs to be a re-usable object, and interdependencies can be bad if they get out of hand (which they clearly have in the case of GNOME). Miguel has stated many times that the dependencies in GNOME will only GROW as time goes on. He sees interdependency as a wonderful thing, and is so hell-bent on code re-use that he is turning GNOME into a huge monster of code that no one wants to link to, because no one wants to depend on 20 or 30 different libs. GNOME needs to be split: some of its libs more appropriately belong in lower-level widget sets (such as Gtk+), and some of its items should be stand-alone utilities. Trim the fat from GNOME and maybe developers would start to use it again.
Re:DLL hell (Score:3)
There actually was a Unix derivative that did it right, but didn't catch on: NextStep. BSD kernel, with incredible development tools and standard libraries. With it you could throw together a professional application in hours/weeks instead of months/years, since it handed you all of the primitive elements you could ever need in a consistent way. Much of Java was actually inspired by NeXT's tools (to be honest, Objective-C is actually superior in some aspects to Java, and yes, it's OS-independent). Whether they admit it or not, all of the modern development tools (KDE, Gnome, M$ Visual Studio, etc.) are using more and more of the ideas inspired/stolen from NextStep.
It will be interesting to see how Apple's move to a NextStep derivative works out. Because they're working to maintain backward compatibility, MacOS X is probably an inferior design to the original NextStep, but certainly an improvement over existing MacOS versions.
Unix Is Not Windows (Score:3)
COM is descended from Object Linking and Embedding, which was a way for objects created in one application to be reusable by another. Basically, MSFT's entire component revolution can be traced back to the "drag and drop an Excel spreadsheet into a Word document" problem. Everything that has occurred since then - COM+ (reusable components independent of language), DCOM (distributed reusable components) and now
Now, on the other hand, Unix applications until very recently did not have the cross-communication problem that Windows apps had. Everything is a file; if I want to communicate between applications, I simply use a pipe. All files I could possibly want to edit can be viewed in Emacs. To put it simply, there was no need for a reusable component model simply to share data between applications.
Now, decades after Unix was invented (it predates Windows and COM by over a decade), maybe the time has come for that paradigm to shift.
Re:Do something about it (Score:5)
And a mantra is all it is.
There is a very small core taking care of the software. A lot of us are users who, at the end of a day working with computers, perhaps just want a free OS to check mail and surf the web. For the most part we don't want to put in another 8 hours debugging un-mapped, un-documented, and un-planned code. For the most part we run the most "stable" version of a program, collected by a package tool.
Every once in a while we may compile something. But for the most part, we have neither the time nor the inclination to code. This may explain the popularity of Netscape 4.x, AND the lack of programmers for Mozilla. The lack of eyeballs is due to both the "works good enough" mentality from years of commercial OS use, and the above-mentioned apathy.
If you complain, then fix it. If you can't fix it, find someone who can, or email the primary author. If they give a nasty response, then use another program. This is certainly possible with 10 ICQ programs, 5 Napster clones, 3 Gnutella clients, and 15+ browsers. There is your freedom.
Unix design philosophy (Score:5)
Think about it -- it's silly that GUI programs are calling something that looks "internal" to them to pop up a dialog box. They should be issuing a shell command, like 'dlgmsg "Your repartitioning is complete." -b OK'. Or 'dlgmsg "Do you want to purge your deleted messages?" -b Yes -b No'. /proc is massively useful; why don't we have /dev/gui/? It seems to me that the whole Window Manager Bloating Wars came about because we chose to ignore the features of Unix that would have made it easy. Why do we have window handles instead of files (i.e. named pipes created by KDE)? Why is changing a window's menus any more complex than 'menubar /dev/gui/win46 -m 0 File -mi 0 0 Open...'? Why is listening for window events harder than parsing /dev/gui/win46?
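Roughly what I'm picturing, as a sketch - all of it imaginary, since dlgmsg, menubar, and /dev/gui don't exist:

  dlgmsg "Purge deleted messages?" -b Yes -b No        # dialogs as shell commands
  menubar /dev/gui/win46 -m 0 File -mi 0 0 "Open..."   # menus poked through a file
  while read -r event; do                              # window events as lines of text
    case "$event" in
      menu:0:0) dlgmsg "File > Open chosen" -b OK ;;
    esac
  done < /dev/gui/win46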
I know it's a hell of a lot more complicated than that, of course, and I can see a lot of flaws and complications in the above... but hell, maybe the window manager should have to run as root anyway (sarcasm). Does anyone know of a project that tried to do something like this?