Interview With Miguel De Icaza

Thanks to Dare Obasanjo for conducting this interview with Miguel De Icaza and sending it on to me. I've posted the interview below; interesting answers, and very thorough. Well done, Dare.

Interview With Miguel de Icaza

Bringing a component architecture to the UNIX platform

Summary
In this interview, Miguel de Icaza, the founder of GNOME and Ximian, talks about UNIX components, Bonobo, Mono and .NET.

By Dare (Carnage4Life) Obasanjo

Dare Obasanjo: You have recently been in the press due to Ximian's announcement that it shall create an Open Source implementation of Microsoft's .NET development platform. Before the recent furor you've been notable for the work you've done with GNOME and Bonobo. Can you give a brief overview of your involvement in Free Software from your earlier projects up to Mono?

Miguel de Icaza: I have been working for the past four years on the GNOME project in various areas: organization, libraries and applications. Before that I used to work on the Linux kernel: I worked for a long time on the SPARC port, then on the software RAID code, and some on the Linux/SGI effort. Before that I had written the Midnight Commander file manager.

Dare Obasanjo: In your Let's Make Unix Not Suck series you mention that UNIX development has long been hampered by a lack of code reuse. You specifically mention Brad Cox's concept of Software Integrated Circuits, where software is built primarily by combining reusable components, as a vision of how code reuse should occur. Many have countered your arguments by stating that UNIX is built on the concept of using reusable components to build programs by connecting the output of smaller programs with pipes. What are your opinions of this counter-argument?

Miguel de Icaza: Well, the paper addresses that question in detail. A 'pipe' is hardly a complete component system. It is a transport mechanism that is used with some well-known protocols (lines, characters, buffers) to process information. The protocol defines only a flow of information.

Details are in the paper:
http://primates.ximian.com/~miguel/bongo-bong.html [Dare -- check the section entitled "Unix Components: Small is Beautiful"]

Dare Obasanjo: Bonobo was your attempt to create a UNIX component architecture using CORBA as the underlying base. What are the reasons you have decided to focus on Mono instead?

Miguel de Icaza: The GNOME project goal was to bring missing technologies to Unix and make it competitive in the current marketplace for desktop applications. We also realized early on that language independence was important, and that is why GNOME APIs were coded using a standard that allowed the APIs to be easily wrapped for other languages. Our APIs are available to most programming languages on Unix (Perl, Python, Scheme, C++, Objective-C, Ada).

Later on we decided to use better methods for encapsulating our APIs, and we started to use CORBA to define interfaces to components. We complemented it with policy and a set of standard GNOME interfaces for easily creating reusable, language independent components, controls and compound documents. This technology is known as Bonobo. Interfaces to Bonobo exist for C, Perl, Python, and Java.

CORBA is good when you define coarse interfaces, and most Bonobo interfaces are coarse. The problem is that Bonobo/CORBA interfaces are not good for fine-grained interfaces. For example, an XML parsing Bonobo/CORBA component would be inefficient compared to a C API.
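To make the granularity point concrete, here is a sketch in C# (the interfaces are invented for illustration; they are not actual Bonobo API): a coarse interface crosses the component boundary once per document, while a fine-grained one pays the marshalling cost on every call.

        // Illustrative interfaces only -- not actual Bonobo/CORBA API.
        interface ICoarseParser {
                // One boundary crossing: hand over the whole document at once.
                string[] ParseAll (string document);
        }

        interface IFineParser {
                // One boundary crossing per token: cheap as a C API,
                // expensive through a CORBA marshaller.
                void SetDocument (string document);
                string NextToken ();
        }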

I also wrote at some point:

My interest in .NET comes from the attempts that we have made before in the GNOME project to achieve some of the things .NET does:

  • APIs that are exposed to multiple languages.
  • Cross-language integration.
  • Contract/interface based programming.

And on top of that, I always loved various things about Java. I just did not love the Java combo that you were supposed to take or leave as a whole.

We tried exposing APIs to many languages by having a common object base (GtkObject) and then following an API contract and a format that would allow others to easily wrap the APIs for their programming language. We even have a Scheme-based definition of the API that is used to generate wrappers on the fly. This solution is suboptimal for many reasons.

Cross-language integration we have been doing with CORBA, sort of like COM, but with an imposed marshalling penalty. It works pretty well for out-of-process components, but for in-process components the story is pretty bad: since there was no CORBA ABI that we could use, the result is so horrible that I have no words to describe it.

On top of this problem, we have a proliferation of libraries. Most of them follow our coding conventions pretty accurately, but every once in a while they either won't, or we adopt a library written by someone else. This has led to a mix of libraries that, although powerful, implement multiple programming models and sometimes different allocation and ownership policies; after a while you are dealing with five different kinds of "ref/unref" behaviour (CORBA local references, CORBA object references on Unknown objects, reference counts on object wrappers) and it was turning into a gigantic mess.

We have of course been trying to fix all these issues, and things are looking better (the GNOME 2.x platform does solve many of these issues, but still).

.NET seemed to me like an upgrade for Win32 developers: they had the same problems we had when dealing with APIs that have been designed over many years, a great deal of inconsistency. So I want to have some of this new "fresh air" available for building my own applications.

Dare Obasanjo: Bonobo is loosely based on COM and OLE2, as can be gleaned from the fact that Bonobo interfaces are all based on the Bonobo::Unknown interface, which provides two basic services (object lifetime management and object functionality discovery) and contains only three methods:

 
        module Bonobo {
                interface Unknown {
                        void ref ();
                        void unref ();
                        Object query_interface (in string repoid);
                };
        };

 
which is very similar to Microsoft's COM IUnknown interface, which has the following methods:

        HRESULT QueryInterface(REFIID riid, void **ppvObject);
        ULONG AddRef();
        ULONG Release();

Does the fact that .NET seems to spell the impending death of COM mean that Mono will spell the end of Bonobo? Similarly, considering that .NET plans to have semi-transparent COM/.NET interoperability, is there a similar plan for Mono and Bonobo?

Miguel de Icaza: Definitely. Mono will have to interoperate with a number of systems out there, including Bonobo on GNOME.

Dare Obasanjo: A number of parties have claimed that Microsoft's .NET platform is a poor clone of the Java(TM) platform. If this is the case, why hasn't Ximian decided to clone or use the Java platform instead of cloning Microsoft's .NET platform?

Miguel de Icaza: We were interested in the CLR because it solves a problem that we face every day. The Java VM did not solve this problem.

Dare Obasanjo: On the Mono Rationale page it is pointed out that Microsoft's .NET strategy encompasses many efforts including

  • The .NET development platform, a new platform for writing software.
  • Web services.
  • Microsoft Server Applications.
  • New tools that use the new development platform.
  • Hailstorm, the Passport centralized single sign-on system that is being integrated into Windows XP.
and you point out that Mono is merely an implementation of the .NET development platform. Is there any plan by Ximian to implement other parts of the .NET strategy?

Miguel de Icaza: Not at this point. Currently, we have a commitment to develop:

  • A CLI runtime with a JITer for x86 CPUs.
  • A C# compiler.
  • A class library.

All of the above with the help of external contributors. You have to understand that this is a big undertaking and that without the various people who have donated their time, expertise and code to the project we would not even have a chance of delivering a complete product any time soon.

We are doing this for selfish reasons: we want a better way of developing Linux and Unix applications ourselves and we see the CLI as such a thing.

That being said, Ximian, being in the services and support business, would not mind extending its effort towards making the Mono project tackle other things, like porting to new platforms, improving the JIT engine, or focusing on a particular area of Mono.

But other than this, we do not have plans at this point to go beyond the three basic announcements that we have made.

Dare Obasanjo: There are a number of other projects implementing other parts of .NET on Free platforms that seem to have friction with the Mono project. Section 7.2 of Portable.NET's FAQ seems to indicate they have had conflict with the Mono project, as does the banning of Martin Coxall from the dotGNU mailing list. What are your thoughts on this?

Miguel de Icaza: I did not pay attention to the actual details of the banning of Martin from the DotGNU mailing lists. Usenet and Internet mailing lists are a culture of their own and I think this is just another instance of what usually happens on the net. It is definitely sad.

The focus of Mono and Portable.NET is slightly different: we are writing as much as we can in a high-level language like C#, and building reusable pieces of software out of it. Portable.NET is being written in C.

Dare Obasanjo: There have been conflicting reports about Ximian's relationship with Microsoft. On one hand, there are reports that seem to indicate that there may be licensing problems between the license that will govern .NET and the GPL. On the other hand, there is an indication that some within Microsoft are enthusiastic about Mono. So what exactly is Ximian's current relationship with Microsoft, and what will be done to ensure that Mono does not violate Microsoft's licenses on .NET if they turn out to be restrictive?

Miguel de Icaza: Well, for one we are writing everything from scratch.

We are trying to stay on the safe side regarding patents. That means that we implement things in ways that have been used in the past, and we are not doing tremendously elaborate or efficient things in Mono yet. We are still very far from that; for now we are just using existing technologies and techniques.

Dare Obasanjo: It has been pointed out that Sun retracted Java(TM) from standards processes at least twice. Will the Mono project continue if .NET stops being an open standard for any reason?

Miguel de Icaza: The upgrade to our development platform has value independently of whether it is a standard or not. The fact that Microsoft has submitted its specifications to a standards body has helped, since people who know about these problems have looked at the specifications and can pinpoint interoperability problems.

Dare Obasanjo: Similarly what happens if Dan Kusnetzky's prediction comes true and Microsoft changes the .NET APIs in the future? Will the Mono project play catchup or will it become an incompatible implementation of .NET on UNIX platforms?

Miguel de Icaza: Microsoft is remarkably good at keeping their APIs backwards compatible (and this is one of the reasons I think they have had so much success as a platform vendor). So I think that this would not be a problem.

Now, even if this was a problem, it is always possible to have multiple implementations of the same APIs and use the correct one by choosing the proper "assembly" at runtime. Assemblies are a new way of dealing with software bundles; the files that are part of an assembly can be cryptographically checksummed and their APIs programmatically tested for compatibility. [Dare -- Description of Assemblies from MSDN glossary]
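For example, the version an assembly carries can be inspected programmatically before binding to it; a minimal sketch using the .NET reflection API:

        // Minimal sketch: reading the name and version recorded in an
        // assembly's metadata without executing its code.
        using System;
        using System.Reflection;

        class VersionCheck {
                static void Main (string[] args) {
                        AssemblyName name = AssemblyName.GetAssemblyName (args [0]);
                        Console.WriteLine ("{0}, version {1}", name.Name, name.Version);
                }
        }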

So even if they deviate from the initial release, it would be possible to provide assemblies that are backwards compatible (both Microsoft and ourselves can do that).

Dare Obasanjo: Looking at the Mono class status page I noticed that a large number of .NET class libraries are not being implemented in Mono such as WinForms, ADO.NET, Web Services, XML schemas, reflection and a number of others. This means that it is very likely that when Mono and .NET are finally released apps written for .NET will not be portable to Mono. Is there any plan to rectify this in the future or is creating a portable .NET platform not a goal of the Mono project? Similarly what are the short and long term goals of the Mono project?

Miguel de Icaza: The status web page reflects the classes that people have "requested" to work on. The status web page is just a way of saying `Hey, I am working on this class as of this date' to avoid code duplication. If someone registers their interest in working on something and they do not do something after some period of time, then we can reclaim the class.

We are in the very early stages of the project, so you do see more work going into the foundational classes than into the end-user classes.

I was not even expecting so many great and talented programmers to contribute so early in the project. My original prediction was that we would spend the first three months hacking on our own in public with no external contributions, but I have been proved wrong.

You have to realize that the goals of the Mono project are not only the goals of Ximian. Ximian has a set of goals, but every contributor to the project has his own goals: some people want to learn, some people like working on C#, some people want full .NET compatibility on Linux, some people want language independence, some people like to optimize code, some people like low level programming and some people want to compete with Microsoft, some people like the way .NET services work.

So the direction of the project is steered by those that contribute to it. Many people are very interested in having a compatible .NET implementation for non-Windows platforms, and they are contributing towards filling those gaps.

Dare Obasanjo: How does Ximian plan to pay for the costs of developing Mono especially after the failure of a number of recent venture funded, Free Software-based companies like Indrema, Eazel and Great Bridge and the fact that a sizable percentage of the remaining Free Software based companies are on the ropes? Specifically how does Ximian plan to make money at Free Software in general and Mono in particular?

Miguel de Icaza: Ximian provides support and services. We announced a few of our services recently, and more products and services have been in the pipeline for quite a while and will be announced during the next six months.

Those we announced recently are:

  • Red Carpet Express: a subscription service for those who want a reliable high speed access to the Red Carpet servers.
  • Red Carpet Corporate Connect: We modified our Red Carpet updater technology to help people manage networks of Linux workstations easily and to deploy and maintain custom software packages.
  • Support and services for the GNOME desktop and Evolution: Our latest boxed products are our way of selling support services for the various products we ship.
We have also been providing professional services and support for people integrating free software based solutions.

The particular case of Mono is interesting. We are working on Mono to reduce our development costs. A very nice foundation has been laid and submitted to ECMA. Now, with the help of other interested parties that also realize the power of it, we are developing the Mono runtime and development tools to help us improve our productivity.

Indeed, the team working on Mono at Ximian is the same team that provided infrastructural help to the rest of the company in the past.

Dare Obasanjo: It is probably little known in some corners that you once interviewed with Microsoft to work on the SPARC port of Internet Explorer. Considering the impact you have had on the Free Software community since then, have you ever wondered what your life would have been like if you had become a Microsoft employee?

Miguel de Icaza: I have not given it a lot of thought, no. But I did ask everyone I interviewed at Microsoft to open source Internet Explorer, way before Netscape Communicator was Open Sourced ;-)

This discussion has been archived. No new comments can be posted.
  • by Magumbo ( 414471 ) on Monday September 24, 2001 @03:40PM (#2343095)
    He didn't ask what we're all dying to know:

    Why does the ximian logo [slashdot.org] look exactly like a spider shoved up somebody's left nostril?
  • by Anonymous Coward

    so, Miguel is obsoleting bonobo before it is even ready for primetime? If you pay attention throughout the interview, in the end it just boils down to...
    "we were having a really tough time meeting our goals so we've decided to do this instead, Microsoft was doing it so we thought if they can make billions off of it maybe we can make a few mil."

    You know, I used to love Gnome when the people behind it knew what they were driving towards: an object-based Unix framework. Nowadays they haven't a clue. KDE has put up, and it's time Miguel shut up.

    --iamnotayam
    • by Anonymous Coward
      You're misunderstanding Bonobo's relationship with Mono. Bonobo won't be obsoleted by Mono any more than COM will be obsoleted by .NET. In fact, .NET makes writing COM objects dirt simple, and it makes using COM objects as simple as using regular C# classes. Mono preserves the same relationship. In theory, the same should be true for UNO, XParts, and Mozilla's component library. You shouldn't have to know how a component is *implemented* in order to use it.

      So why would anyone use Bonobo instead of Mono? Lots of reasons. For one thing, it's easier to wrap legacy applications and languages in Bonobo than in Mono. Since most of the world's applications are legacy applications and Mono won't be finished and stabilized for at least 2 years, this is a big deal. Also, not everyone will be developing for Mono, so Bonobo is a good way of creating components. And Bonobo isn't GNOME-specific. You can use it on Windows without including any GTK+ libraries.

      BTW, KParts is Qt-specific. You can't write a UNO (OpenOffice's component model) KPart. You can write a Bonobo-UNO bridge so all UNO objects appear as Bonobo components and can be used by Bonobo applications.

      • by Anonymous Coward
        Your information about KParts is incorrect - it is not Qt specific as demonstrated by the XParts adaptor. This is used in the Mozilla KPart for example.

        Rich.
  • i like that last line of the interview.
    = i asked microsoft to give away everything that they had paid developers to make for the last 3 years for free...
    • IE has always been free (and probably always will be). For the Macintosh it is a great browser; the Windows version has always had a bit more cruft in it. I've never really understood why.

      At any rate, he just asked that they make the source available. I'm not a huge open source guy, but I think that could have really helped back in the days of IE 3 -- and may have kept the W3C from pulling some of that crap that has made the standard useless and IE incompatible (proof that standards bodies can do a lot of harm when they aren't impartial).
  • by tim_maroney ( 239442 ) on Monday September 24, 2001 @03:58PM (#2343232) Homepage
    every contributor to the project has his own goals: some people want to learn, some people like working on C#, some people want full .NET compatibility on Linux, some people want language independence, some people like to optimize code, some people like low level programming and some people want to compete with Microsoft, some people like the way .NET services work.

    Does any contributor's goal include a focus on usability issues and user experience design? If so, they apparently weren't worth listing.

    As in many other interviews, de Icaza's comments are focused almost entirely on technical issues, and not on design issues. Component architectures may be fascinating for engineers, but they don't deliver an enhanced experience for the user by themselves.

    To really improve the Linux user experience will require the kind of passionate engagement with the user that Apple has had, but instead we seem to be seeing a very programmer-centered set of interests and preoccupations at Ximian.

    Tim
    • by Carnage4Life ( 106069 ) on Monday September 24, 2001 @04:05PM (#2343287) Homepage Journal
      Does any contributor's goal include a focus on usability issues and user experience design? If so, they apparently weren't worth listing.

      Considering that they are currently working on the compiler, the language runtime and base class libraries for Mono, I fail to see what user interfaces have to do with anything at this stage in the development process.

      On the other hand, if this was an interview about GNOME, which it isn't, then I assume he would have mentioned the user interface issues.
      • by tim_maroney ( 239442 ) on Monday September 24, 2001 @04:13PM (#2343326) Homepage
        Considering that they are currently working on the compiler, the language runtime and base class libraries for Mono, I fail to see what user interfaces have to do with anything at this stage in the development process.

        That's exactly the problem. It's called "user-centered system design" for a reason. User experience is upstream from engineering in a user-centered project. You don't bring designers in late in the game to slap some icons on the system. Instead, you have a set of designs that engineers work towards implementing.

        On the other hand, if this was an interview about GNOME, which it isn't, then I assume he would have mentioned the user interface issues.

        I believe your assumption is in error. I have seen de Icaza discuss GNOME in exactly the same way -- naming lots of libraries and implementation strategies, but saying almost nothing about user-facing issues. That's why I noted the continuing pattern in my message.

        Tim
        • by Anonymous Coward
          At the moment, the focus is on writing the spec and all its libraries. Hello world works, but little else. Interestingly enough, the GTK+ bindings for C# are in good enough shape that they too can create a GUI "Hello World":

          http://lists.ximian.com/archives/public/mono-list/2001-September/001484.html

          Once the base libraries are in place and the GTK+ bindings are finished, it should be possible for more user-centered features to be added. BTW, don't forget that Sun is adding user-centered features to GTK+, so Mono will automatically inherit those.
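          For the curious, such a GUI "Hello World" looks roughly like this (a minimal sketch using the Gtk# binding names as they later stabilized; the 2001 bindings were still in flux):

              // Minimal sketch, assuming the later, stabilized Gtk# binding names.
              using Gtk;

              class Hello {
                      static void Main () {
                              Application.Init ();
                              Window win = new Window ("Hello, Mono");
                              win.DeleteEvent += new DeleteEventHandler (OnDelete);
                              win.ShowAll ();
                              Application.Run ();
                      }

                      static void OnDelete (object o, DeleteEventArgs args) {
                              Application.Quit ();
                      }
              }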

        • That's exactly the problem. It's called "user-centered system design" for a reason. User experience is upstream from engineering in a user-centered project. You don't bring designers in late in the game to slap some icons on the system. Instead, you have a set of designs that engineers work towards implementing.

          The user-centric design depends on the users of your system. The users of a compiler and language runtime are not going to interact with a GUI, so user interface discussions are irrelevant at this time. Secondly, any GUI issues that will be brought to the implementers of the language runtime and the compiler will be technical issues, probably to do with performance, and not user interface issues.

          I believe your assumption is in error. I have seen de Icaza discuss GNOME in exactly the same way -- naming lots of libraries and implementation strategies, but saying almost nothing about user-facing issues. That's why I noted the continuing pattern in my message.

          This I cannot agree or disagree with since most of the interviews involving Miguel or articles written by him I have read are about components and the like which I am interested in and not GUIs which I am not.
            The user-centric design depends on the users of your system. The users of a compiler and language runtime are not going to interact with a GUI, so user interface discussions are irrelevant at this time. Secondly, any GUI issues that will be brought to the implementers of the language runtime and the compiler will be technical issues, probably to do with performance, and not user interface issues.

            Why can't compiler users interact with a GUI? I believe this one point is holding Linux back more than any other. There are too many things in Linux that require you to compile from source code, just to be able to run everything that people are used to with Windows.

            It's easy to load up a game on Windows, you run the installer, then you run the game.

            Under Linux, if you want to use high quality peripherals, half the time you need to recompile or patch your kernel, or compile and add modules to your system.

            All of this has to be done using arcane commands and confusing config files. How-tos are not useful to beginners, since the differences between distributions and versions mean that you almost need a CS degree to even get a common setup working.

            Let's get some user-centric design right into the guts of the system, and make it easier to start out.

            • Well, given that you seem *too* interested in the user aspects of Mono, I can tell you some of it:

              The Mono compiler is in itself a bunch of classes written in C#. Those classes are being developed so that you can, for example, have multiple instances of the compiler, and so that it is reusable as a component. The idea behind what can look like a really stupid goal is to make the C# compiler a component that gets plugged into SharpDevelop (a GPL .NET GUI for application development; a free Visual Studio, if you will).

              Also parts of the interfaces in the compiler are there so that SharpDevelop can provide method completion and useful suggestions to the user (what Microsoft calls "IntelliSense").

              The compiler "base" classes can be shared, and is indeed the foundation for a work in progress Mono BASIC.NET implementation. A separate effort with the same goals in mind is creating an ECMA Script compiler.

              But to us (Ximian), Mono represents a way of reducing our cost of developing end-user applications. Evolution, our largest program (750k lines of code so far), has shown the need for a better platform for software development. Everyone likes to criticize Microsoft for writing unstable applications. The problem is that once you reach a certain complexity, it is hard to keep improving and extending an application without a deep knowledge of its internals.

              So Mono to us is just a new platform to build better, larger, faster, nicer, more robust applications. Oh, it has the side effect of being compatible with the .NET development framework, but that is just an "interesting" side effect.

              Love,
              Miguel.
        • That's exactly the problem. It's called "user-centered system design" for a reason. User experience is upstream from engineering in a user-centered project. You don't bring designers in late in the game to slap some icons on the system.


          What gave you the idea that a programming framework was a "user-centered project"? If only you were trolling...

        • It's called "user-centered system design" for a reason. ... you have a set of designs that engineers work towards implementing.

          All software is made of many layers, and there are different concepts at each layer. The highest layer deals in concepts that are exposed to the user, and should be designed in a "user-centered" way. The other layers, by and large, deal in concepts that the user shouldn't have to think about, so their design should be based mostly on engineering considerations.

          The notion that every layer of design needs to be "user-centered" is a gross distortion that harms software development. It would enormously complicate the design of the lower-level functionality, impeding the development of clean, simple layers that can be used by a variety of applications (some unforeseen). I think there are enough examples to make this obvious.

          By your standard, it seems we should criticize CPU designers for not considering the end-user. This is the same fallacy that leads people to say foolish things like, "You can't build a beginner-friendly interface on top of Unix".

          I have seen de Icaza discuss GNOME in exactly the same way

          Well, even though GNOME overall is a user desktop, large parts of it are not at the user-facing level, so it is entirely appropriate for Miguel to talk about those parts of GNOME from an engineering viewpoint. Hopefully, though, other people in GNOME think more about the user-facing parts. :-)

        • First of all assume that the only people who are going to be using these libraries to program are going to be programmers... This is a fair assumption since the only people who are going to be using these libraries are programmers.

          Now do a search for "interface."

          It is pretty clear that Miguel does care about the interfaces that the users of his libraries use. Or in other words, the interfaces that he cares about are the same interfaces that users use.

          Hope that helped.

        • I don't know what planet you are from. But here in the real world the only "users" that are likely to use the Mono compiler and language runtime are developers.

          In other words, the developers working on the product don't need to query a bunch of lawyers, accountants, and fast food personnel to find out how they should work. They already know how they should work.

          The fact of the matter is that the compiler should work very similarly to gcc, the command-line Java compilers from Sun and IBM, and every other compiler written in the last 10 years. Especially since their target audience is currently Linux developers.

          When Miguel talks about Gnome to hackers, of course he talks about libraries, APIs, and implementation. On the other hand, Miguel makes no bones about modeling his user interfaces on Microsoft's software. Gnumeric's interface, for example, is close enough to Microsoft Excel that I recently used it to ace a college course in the use of Excel. As far as I am concerned, when it comes to spreadsheet usability, aping Excel is the only way to go. I have also seen lots of articles by Miguel where he praises Windows' UI and badmouths Linux's UI (including Gnome).

          Believe me, when it comes to the "user centered" design of a compiler Miguel is as qualified as anyone I can think of to choose the UI design.

    • by miguel ( 7116 ) on Monday September 24, 2001 @04:50PM (#2343545) Homepage
      I have talked about those in the past at various conferences, especially after having met Andy Hertzfeld and his team of hackers at Eazel.

      The Eazel people always talked about usability, and always tried to make computers easier to use. I think their contribution to the GNOME project will live forever in terms of having taught us that things need to improve in that area.

      Sure, the interview did not talk about these topics, because the questions that Dare asked were not focused on that area.

      If you want to see the kind of things that the GNOME project members are doing towards improving the user interface, I suggest you read a number of mailing lists: gnome-2-list, gnome-devel-list, gtk-list and gnome-hackers. They have archives on the web.

      Many new usability features are added to GNOME continuously. Most GNOME hackers have been reading up on user interfaces and usability and have been acting on this input.

      Also, Sun has been providing direct feedback from the usability labs (you might want to check gnotices and the developer.gnome.org site for the actual articles and comments).

      Based on this input we have been making changes to the platform to make sure that GNOME becomes a better desktop.

      I am sure others can list in more detail all the new improvements that we got. For example, I just found out about the new screenshot features in the new gnome-core; the Setup Tools got a great review in the Linux magazines for their simplicity in customizing the system; and there is a new and simplified control center, with control center modules that address many of the usability problems we had in the past.

      Better integration is happening all the time in Nautilus.

      The bottom line is: the GNOME project is more active than ever and if you want to get to the source, you can either check the mailing list archives, or we should run a series of interviews with the various developers of GNOME that have contributed those features.

      Dare was interested in Mono, so that was the focus of this article.

      Miguel.

        Most GNOME hackers have been reading up on user interfaces and usability and have been acting on this input.

        Has any actual UI usability (non-technical) person been involved in the process at all?

        This is a lesson that Linux never seems to want to learn.
      http://usability.gnome.org and the usability@gnome.org mailing list are the primary resources for the GNOME usability project...

        -Seth
    • Yeah, the impression I've gotten from this interview is that all these guys are interested in some weird API fetish where they make APIs to wrap around other APIs to put on top of other APIs, and never get around to actually consistently using what they make. The result is a mish-mash of legacy and bleeding-edge APIs that are used exactly once per application before moving on to the next wacky Microsoft-wannabe API, completely negating the usefulness of making all these "reusable" components.

      Although I have some odd attachment to Gtk flavored apps over Qt/KDE stuff, UI-wise the GNOME stuff seems downright inferior.

    • I agree totally with what you're saying. I have seen one cognitively unsound design after another being pumped out of Ximian. Some of the stuff I've seen in the Ximian installers (and one nasty beast called metatheme) are things that no HCI professional worth his salt would implement. Miguel either is uninterested in good UI design or has no idea as to what it is. He's interested in doing what Microsoft does: creating desktop applications that are a programmer's dream and an end-user's nightmare. Not that I'm singling Miguel out for being like this, because virtually all people working on the GNOME environment have this programmer-centric tunnel vision that has led them to hang themselves with their own rope.

      IMHO, the only way the linux user experience will be improved is if people from the mac community who are disenchanted with the direction that Apple has taken and marketshare limitations of the PowerPC architecture create a new desktop environment that "just happens to use the linux kernel" (as opposed to a desktop environment that "brings linux to the desktop") and puts the greatest emphasis on how people use technology, not the technology itself.

    • "Component architectures may be fascinating for engineers, but they don't deliver an enhanced experience for the user by themselves."

      Yes they do. If it means much less time wrangling over obscure technical hacks, that's all the more time programmers can spend on "user" issues. So yes, good user-agnostic technical design can lead to good end-user experience. Contrast that with having a horrible mess of infrastructure painted over by a GUI which has had lots of time and money thrown at it.
  • by Anonymous Coward

    I still don't see how Mono will solve the inter-language problems in Unix. Will they have a .h to .NET converter or something like that, with automatic binding between their JIT and the .so libraries?

    Will updating a .so or a .jar file automatically relink the .NET application?

    Anyway, I'm kind of happy people are starting to address the inter-language issues. I have a template-based C++ library (NURBS++ if you care to know), and I'd like to integrate it into other languages easily so I can add scripting to it.

    Scheme might be better than C++ for RAD and idea testing. I'm not sure, but with something like .NET it wouldn't be an issue: I could try a new language and plug my stuff into it easily. Especially if Mono comes with a class browser...
    • by Anonymous Coward
      Qt is great for C++ RAD
    • by miguel ( 7116 ) on Monday September 24, 2001 @04:40PM (#2343475) Homepage
      .NET is its own platform. The "compatibility" of components is only guaranteed inside this universe.

      That means that if you use a Pascal component, it has to be compiled with a compiler that generates files in the CIL format. There are compilers for a bunch of languages out there (proprietary mostly, but there are some research and free compilers as well. The one we are writing is free).

      That being said, the .NET framework provides a great amount of facilities to interoperate with other languages: from custom marshallers, to full introspection of components, to the low-level Platform Invoke interface.

      The interesting bit here is that the runtime can provide bridges to arbitrary component systems. For example, Microsoft in their runtime have support for COM. We envision our runtime supporting some sort of COM (XPCOM maybe) at the runtime level, and things like Bonobo at a higher level (as it does not need any runtime support, just a set of classes that use System.Reflection).

      Miguel
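      A minimal sketch of the System.Reflection point above: a bridge can discover and invoke members by name at runtime, with no special support in the execution engine (the method chosen here is just an example):

          // Minimal sketch: locating and invoking a method by name at runtime,
          // the mechanism a reflection-based bridge would build on.
          using System;
          using System.Reflection;

          class ReflectionBridge {
                  static void Main () {
                          object target = "bonobo";
                          MethodInfo m = target.GetType ().GetMethod ("ToUpper", new Type [0]);
                          Console.WriteLine (m.Invoke (target, null));   // prints BONOBO
                  }
          }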
  • API Wrappers (Score:5, Interesting)

    by David Greene ( 463 ) on Monday September 24, 2001 @04:05PM (#2343280)
    Our APIs are available to most programming languages on Unix (Perl, Python, Scheme, C++, Objective-C, Ada).

    Y'know, I hear this all the time, but it just ain't true. The C++ support for Gnome is horrendous. It's been a few months since I last looked, though. Has it improved at all?

    As an example, I'd like to use the canvas in a project I'm planning but there wasn't any C++ interface when last I looked.

    • Re:API Wrappers (Score:3, Informative)

      by Cactus ( 5446 )
      I don't know what C++ wrapper you were looking at, but GTK-- (and its respective libgnomeui wrapper, GNOME--) offers extensive C++ bindings that go to great lengths to give you a "clean C++ feel". Here's a link to the home page [sourceforge.net]. As for the canvas, I myself have used the C++ interface to the GNOME canvas, so I am pretty sure it exists and is usable (in fact, it is better than the C API since it is completely type-safe).
        Glad to hear the canvas has an interface. It was in fact Gtk--/Gnome-- that I investigated. It seems a little dishonest of Miguel to imply that full bindings exist for all these languages. I know for a fact that full C++ bindings didn't exist a few months ago, and Gnome developers were making such statements even then.

        If Gtk--/Gnome-- have been fleshed out further, that is fabulous. I'll take another look when I get started on the project (any year now... :)).

  • I've always wondered what the point of the interpreted code was. Why not just make a new object format, or extend an existing one, but make it pure Intel object code? You could even mandate use of a subset of it to make it easier to write an interpreter on non-Intel platforms. This seems like it would be much more efficient. The real gain from .NET is a common object format for OO languages, so why not just make ELF better?
    • > I've always wondered what the point of the
      > interpreted code was. Why not just make a new
      > object format, or extend an existing one, but
      > make it pure Intel object code?

      I guess because Intel object code is far from perfect. Too many opcodes to consider, too many special cases that would have to be emulated. OK, there are some 386/486/Pentium emulators out there, but I can think of better VM designs for making byte-code portable.
      On the other hand: Why not use Z-Code [demon.co.uk]? I remember one company that even made a database program for this architecture...
    • I think the real key is data interaction -- that data structures created in one language are uniformly available to all languages. I don't think this really relates to ELF. But then, it doesn't really apply to the runtime either.

      The problem as I see it: CORBA tried to appease people who used crappy languages (like C), and that made everything 10x harder (or more?). .NET makes a new runtime for all the various languages, and it doesn't have to deal with the semantics of unfriendly languages. Some dynamic languages are fairly easy to port -- I think Perl and Python are both available without a huge amount of effort (?) -- and other languages have to be reinvented (C#).

      Then they put the whole thing on a VM, which seems unnecessary. Most of the dynamic languages already run on a VM anyway (their respective byte-code interpreters), so maybe it's not a big deal. But it's conflating two different ideas -- sharing data and execution in different environments. This makes for less confusion of non-related ideas than exists in Java, but it's still unfortunate.

      But then again, the alternative is for them to make all the different pieces anyway, and give each one an acronym of its own.

      • I think the real key is data interaction -- that data structures created in one language are uniformly available to all languages. I don't think this really relates to ELF.



        The reason it relates to ELF is that there needs to be a common format for storing type information in the object files. Remember, in the CLR, you can have Java extend a C++ class. This means that every class needs to have its object information stored in the ELF file. In addition, there need to be standards for loading and initializing classes, as well as for how they link.

        • How would it help if there was a format for ELF when so many languages don't use ELF? Like, nearly all of them.

          I would expect current C++ compilers to be incompatible with this system, because they don't allow enough runtime introspection. I wouldn't expect C to work at all, except in a manner that involves lots and lots of function calls for everything (the if-you-must-use-C-you-will-suffer style, which isn't that uncommon -- the GTK object system feels a little like that).

          Maybe this is getting me to understand why the CLR is being used. If you can subclass objects from other languages, you then will have some of the methods implemented in one language and portions in another. This is easy if they all are actually a common set of bytecodes. Easy to GC too. This is hard if there's multiple interpreters/binaries/etc. being invoked, and near impossible to GC.

          • How would it help if there was a format for ELF when so many languages don't use ELF? Like, nearly all of them.

            ***************

            Actually, all Linux-compiled languages use ELF. C, Java, C++, Objective-C, etc.
            • True, but all the non-compiled languages don't use ELF (including normal byte-compiled Java). So I don't see how ELF would much relate to language independence.
              • True, but all the non-compiled languages don't use ELF (including normal byte-compiled Java). So I don't see how ELF would much relate to language independence.

                **********

                You have to pick a format, so why not pick a native one? There seems to be no need to develop a brand new format when one already exists for compiled languages.
    • The landscape is very quickly becoming littered with VMs... it won't be much of a surprise if MS wins this round simply by dividing the competition.

      I would like to see a cook-off between Mono and Parrot though, for similar high-level code. It would seem smart to use the best underlying VM for all of these high level languages.

    • Actually, the CLR doesn't interpret code, at least not in the current implementations - they always JIT compile down to native code. (And MS's CIL was designed to be used thus - interpretive execution was an explicit non-goal.)

      As for the advantages, well several spring to mind, and these all apply to Java just as much as .NET:

      1. Platform independence - this might seem a Java-only concern, but remember that 64-bit processors are lurking on Microsoft's horizon. MS are still living with the pain of the 16-bit -> 32-bit transition, so they would like to avoid a repeat performance. Compiling to IL is a crucial part of .NET's strategy for providing a smooth upgrade path from 32-bit to 64-bit processors. Sure, the 64-bit processors will run 32-bit code happily, but that's not really a viable long-term solution: 16-bit code still runs today, but that doesn't mean you would want to run it if you could possibly avoid it.
      2. Security and integrity - both Java and .NET perform extensive validity checking of binaries before they run them. In particular they make sure that the code never does the moral equivalent of void* p = (void*) someInt;. Both Java and .NET go out of their way to try and make it impossible to write code which would allow a buffer overflow exploit. Type safety is an important part of the way they do this, and the IL representations in both systems have been carefully designed to be susceptible to such validity analysis. Raw Intel machine code cannot reasonably be analysed thus. Also, both environments rely on being able to walk the stack frame reliably in order to perform security checks. Running native code would allow such code to mess about with the stack to subvert such validation checks.
      3. Performance - yes, believe it or not, there are actually performance advantages to using an intermediate representation in your binaries rather than native compiled code. The HotSpot JVMs from Sun monitor the behaviour of code at runtime and base their optimizations on this analysis. This enables a whole class of optimizations which are infeasible with compile-time analysis: compilers are just not smart enough to be able to deduce from first principles many of the things that can be determined empirically by examining a running system. Plenty of work has been published showing that this technique can provide better average performance than static compile-time optimization. (The downside is that the transient performance is worse - code is slower at the start, but becomes quicker over time. It's great for server-side code, but less good for, say, command line utils.) Note that the JVMs currently seem to be ahead in this area - the current CLR implementations don't appear to exploit this technique yet. Doubtless this will happen.
  • Given the new commercial and political environment (since Sept 11), the demand for computers and technology may be greatly diminished. The demand for new hardware was definitely dropping off before; now even more so.

    I think this will put a major crunch on development projects like Mono and .NET.

    Obviously this interview was probably done a few weeks ago, so I wonder how things have changed over there.

    I'm just wondering how much demand there will be for projects like this, especially if MS is betting the farm on it. You can only bet the farm so many times before you lose.

    • > The demand for new hardware was definitely dropping off before, now even more so.

      And how many tens of thousands of computers were destroyed in the tragedy and will need to be replaced?

      My dad even suggested buying Dell and Gateway stock for that reason.
  • Miguel de Icaza: We were interested in the CLR because it solves a problem that we face every day. The Java VM did not solve this problem.
    What does he mean here? I did not find any evidence in the interview, and I'd really like to know the advantages of .NET - when it comes down to it, both techs are VM-based and thus bound to suffer from similar weaknesses.
    • Re:Enlighten me (Score:3, Informative)

      by miguel ( 7116 )
      The CLR and the type system as well as the Language Specification in the .NET Development platform solve the problem of language interoperability.

      It does not matter in which language you define a function or a class; other languages targeting the CLR can consume the data (as long as the languages are CLS compliant; there is a spec you have to follow to get this).

      The JVM did not have such functionality nor such a spec.

      Miguel.
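      To make the CLS point concrete, a minimal sketch: marking an assembly CLS-compliant asks the C# compiler to flag constructs (unsigned types in public signatures, for instance) that other CLS languages could not consume.

          // Minimal sketch of CLS compliance checking.
          using System;

          [assembly: CLSCompliant (true)]

          public class Counter {
                  public int Count;      // fine: int is part of the CLS
                  // public uint Raw;    // would draw a CLS-compliance warning
          }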
      • There is this long list of languages for the JVM [tu-berlin.de]. It seems like it's not too hard to adjust to the JVM - wouldn't that be preferable to creating a completely new thing, especially given that modern JVMs are pretty mature? Or are the JVMs too tuned to the Java programming language, so they don't work well (= fast) for other languages?
        • Java could have been moved in that direction with some trouble. For example, it cannot cope with languages like C or C++ efficiently.

          Also, it lacks features like P/Invoke, which were things we were interested in.

          We were trying to see how to move ahead with GNOME, and we made a choice.

          Miguel.
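           For reference, P/Invoke is the facility for calling straight into a native shared library from C#; a minimal sketch:

               // Minimal P/Invoke sketch: calling getpid() from the C library.
               using System;
               using System.Runtime.InteropServices;

               class NativeCall {
                       [DllImport ("libc")]
                       static extern int getpid ();

                       static void Main () {
                               Console.WriteLine ("pid: {0}", getpid ());
                       }
               }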
          • Miguel,

            You will find this article very interesting:
            JVM And CLR [byte.com]

          • Hmmm. Married in haste, we can now repent at leisure.

            There probably are some fundamental limitations of the JVM - I expect talking to LISPers, embedded database people or workflow app developers, for example, would quickly elucidate a good set of requirements for an improved VM - but the CLR is such a transparent knock-off that it doesn't add anything significant itself, essentially MS seem to be relying just on better integrated tools to compete.

            What there might be in this struggle that could benefit Linux developers remains a mystery to me.

  • Eh...!? (Score:2, Informative)

    On the question on what they are supposed to live on:
    * Red Carpet Express: a subscription service for those who want a reliable high speed access to the Red Carpet servers.
    * Red Carpet Corporate Connect: We modified our Red Carpet updater technology to help people manage networks of Linux workstations easily and to deploy and maintain custom software packages.
    * Support and services for the GNOME desktop and Evolution: Our latest boxed products are our way of selling support services for the various products we ship.

    Are you kidding?
    • don't forget professional services and consulting, which can bring in a good deal of money. he mentioned that in the same interview, but i guess you chose to overlook it.
      • No, that's basically what I laugh about :)

        Say I spend X amount of money on developing something and then try to make it back on service and support.

        What happens now is that anyone can offer this service and support; they have to cover the salaries for the consultants and some more. I, on the other hand, must cover the same plus the development costs. This makes it impossible for me to compete; I simply can't sell the same thing but have higher costs.

        This is exactly why companies in general are constructed so that each department must live on its income. Departments _can_ make a loss if it directly makes it possible for another department to make a profit on what it produces. However, for this to work it must have some exclusive benefit. Otherwise someone else can sell it at a lower cost (since they don't have to pay lots of cash for the production).
      • See above. Ximian must be cashing in quite well on consulting fees from Sun and HP while helping them migrate to GNOME. (Although HPaq might be questionable right now :-)

        Mikael
  • by Tony ( 765 )
    KDE sucks.

    Gnome sucks.

    Computers, in general, suck.

    This interview wasn't about Gnome; it was about a component model that might be better than the one Gnome currently uses. Although MS doesn't often come up with good ideas, it does employ some extremely bright people; if some of those bright people come up with a good idea, it behooves us to learn.

    In this way, perhaps computers will someday suck less.
  • Why would anyone want a Windows .NET clone? We want choice.
  • The focus of Mono and Portable.NET is slightly different: we are writing as much as we can in a high-level language like C#, and building reusable pieces of software out of it. Portable.NET is being written in C.
    Just because a system is written in C doesn't mean that it isn't reusable. GNOME was written in C, and it is reusable, just as Miguel said in the interview. The core of Portable.NET is also very reusable.

    Portable.NET has a different focus to Mono. Writing the compiler in C has two benefits: speed and bootstrapping. A well-crafted compiler in C will always be faster than one written in a garbage collected language, no matter how good the JIT is.

    Bootstrapping is also easier with a C compiler: anyone with gcc can install Portable.NET and get it to run on their system. To bootstrap Mono, you have to have Microsoft's system installed.

    There are many people who don't have Windows or don't want Windows. They then have to install the binary version of Mono. This introduces a security problem: you have to trust that the binary is correct, because you cannot guarantee that the published source matches the binary. With Portable.NET, if you trust your copy of gcc, and you can't find any backdoors in the code, you can trust your copy of Portable.NET.

    In reality, it comes down to preference: I prefer to write compilers in C, because I believe that is the best language for writing compilers. Miguel has a different preference.

    Rhys Weatherley - author of Portable.NET
    http://www.southern-storm.com.au/portable_net.html [southern-storm.com.au]

    • Just because a system is written in C doesn't mean that it isn't reusable. GNOME was written in C, and it is reusable
      But surely the point being made was that code wasn't reused in practice. It's all very well pointing out that in theory one can write reusable code in technology X, but if the reality is that nobody does, then something must be wrong. .NET seems to attempt to address some of the pragmatic reasons why people don't reuse code in practice, and it tries to solve those problems.
      A well-crafted compiler in C will always be faster than one written in a garbage collected language, no matter how good the JIT is.
      Bzzt! Not actually true - there are some performance optimisations available to a JIT compiler which a static compiler cannot realistically do. For example, the HotSpot JVM is able to do something called 'monomorphic inlining'. This is a technique which, based on runtime analysis of the code, converts virtual function calls into non-virtual ones, which opens up a whole raft of extra optimisations.
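      A minimal sketch of the kind of call site this targets: Area below is a virtual call, but if runtime profiling shows that only one implementation ever reaches the loop, the JIT can inline it directly.

          // Minimal sketch: a virtual call a profiling JIT can devirtualize
          // when the receiver type turns out to be monomorphic at runtime.
          using System;

          abstract class Shape {
                  public abstract double Area ();
          }

          class Circle : Shape {
                  double r;
                  public Circle (double r) { this.r = r; }
                  public override double Area () { return Math.PI * r * r; }
          }

          class Demo {
                  static void Main () {
                          Shape[] shapes = new Shape [1000];
                          for (int i = 0; i < shapes.Length; i++)
                                  shapes [i] = new Circle (i);   // only Circles in practice

                          double total = 0;
                          foreach (Shape s in shapes)
                                  total += s.Area ();            // devirtualizable call site
                          Console.WriteLine (total);
                  }
          }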

      In reality, the reverse of what you state is true - a traditional static compilation is at best no better than JIT compilation can get. There is no information available to a traditional compiler that is not also available to a JIT compiler, so why would a JIT compiler ever do a worse job? However the JIT compiler has the added bonus of being able to bring runtime analysis to bear.

      Garbage collection is not really the issue - it's orthogonal to JIT compilation: (1) you can build statically compiled systems with GC, (2) for some purposes GC actually turns out to be faster. (Quick question: which takes longer, allocating from a GC heap, or allocating from a non-GC heap?) GC has different characteristics, but it's just not right to say it is simply slower - for some applications it's faster, for some it isn't.

      Jeff Kesselman's book 'Java Platform Performance' makes very interesting reading, and debunks many of the performance myths about GC and VMs that abound.

      • You argue about practical code reuse and then postulate hypothetical JIT compilation.

        And the fact is that C is rarely reused for the same reason every language is not reused -- it takes longer to find and understand the existing code than to rewrite it from scratch.
  • i've been behind gnome and ximian (helix) 100 percent for a long time now. i really like miguel a lot, but it just goes to show what people will do for money.

    you wouldn't see linus torvalds making a decision like this. linus seems more concerned about principle than money.
    • Re:dammit anyways (Score:4, Insightful)

      by Jason Earl ( 1894 ) on Monday September 24, 2001 @06:39PM (#2344371) Homepage Journal

      I hope you are trolling, but you probably aren't.

      Miguel is building Mono because A) he thinks it is cool, B) it will probably be popular, and C) Microsoft did much of the hard work of designing and documenting the system :).

      Basically Gnome has always been about being able to reuse Gnome libraries and components in your language of choice. That's a pretty darn good goal, but it is definitely trickier than it looks. Microsoft and Miguel have both come to the conclusion that the easiest way to solve this problem is via a virtual machine.

      Basically, it would allow Python hackers like me to reuse any Mono component using a simple:

      import foo

      Not only that, but Perl hackers could then import my Python package using a simple:

      use bar;

      These packages would likewise be available from any other language that had been ported to the CLR. Now, that's some pretty cool stuff.
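
      The C# side of such a component might look like this (a purely hypothetical sketch - the names only exist to match the imports above):

        // Compiled once into a library, e.g.: csc /target:library foo.cs
        namespace foo {
            public class Greeter {
                public string Greet(string name) {
                    return "Hello, " + name;
                }
            }
        }

      Because the compiled library carries full type metadata, a Python or Perl compiler targeting the CLR could bind to foo.Greeter with no hand-written glue at all.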

      The fact that Microsoft sponsored .NET, and that they have tied the CLR virtual machine to a lot of tech that is basically evil (Passport and Hailstorm), doesn't mean that the idea behind Mono isn't pretty cool.

      When it's all said and done Mono will probably be compatible with .NET in the same way that gcc is compatible with Visual C++ (i.e. not very), but that's still good because it will give Gnome hackers another tool. Miguel's canonical example is reusing an XML parser. Such a thing isn't really possible with Bonobo, but it will be possible if the XML parser is written as a Mono component.
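
      For instance, a sketch using the .NET System.Xml classes (which Mono aims to reimplement; 'config.xml' is just a placeholder file name):

        using System;
        using System.Xml;

        class ParserDemo {
            static void Main() {
                // Chatty, call-per-node use of an XML parser. In-process
                // CLR calls keep this cheap; marshalling every Read()
                // through a CORBA ORB would cost more than the parsing.
                XmlTextReader reader = new XmlTextReader("config.xml");
                while (reader.Read()) {
                    if (reader.NodeType == XmlNodeType.Element)
                        Console.WriteLine(reader.Name);
                }
                reader.Close();
            }
        }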

      Personally, I am content using a mixture of Python and C, but the idea behind Mono is intriguing, never mind who wrote the specification.

  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • FWIW I've taught 3 courses on C# in the last month. I also teach Java courses, but demand has tailed right off for those at the moment.

      I guess it depends on the advertising focus of the company you teach for. If all the C# instructors in your company are unenthusiastic about the technology then it's no surprise these courses aren't selling - courses delivered by someone who dislikes what they're teaching aren't going to be worth attending. But where I work there's a great deal of enthusiasm for this technology (and also for Java), which is presumably part of the reason why we're selling a lot of these courses right now.
  • I don't know how many of you know this, but bonobos are actually a species of monkey with hyper-sexual behavior. Gives a whole new meaning to the idea of interfacing bonobo components with each other...
    • bonobos are actually a species of monkey with hyper-sexual behavior.

      Ape.

      • Actually, it is a chimpanzee, not a monkey, and is more similar to humans than the common chimp (over 98% identical). Also, it is not new; it was discovered in the 1920s.
        • Actually, bonobos are a species in the family Pongidae, which also includes chimps, gorillas and orangutans. While they were regarded as a subspecies of chimpanzee until 1933, they are actually a separate species entirely. As for being new, I never said bonobos were new. I said that the knowledge that bonobos are hypersexual gives a new meaning to the Bonobo component framework.

        You're the one who wanted to get technical...
          • behavior does not a species make
            location does not a species make
            appearance does not a species make
            If they can have kids, they are the same species. Wolves & Star Trek notwithstanding.
  • by Anonymous Coward

    I agree that, technologically, both .NET and Mono are very cool. They are a step in the right direction for both programmers and users. However, I quite simply don't want either of these overall systems. I don't want to use .NET on top of Windoze, and I don't want to use Mono/Gnome on top of Linux. The way I see it, if you are going to create a system based on modular components that provide a generic way to interface and thus are infinitely reusable, you should start building at the kernel level. You should also provide a mechanism that will allow a user to link an application, built of components, into efficient machine code. I don't see why (and I'm not a kernel hacker, so forgive my ignorance) I can't plug in a new process scheduler while my OS is running. What I'm proposing is the design of a system that is not only based on generic reusable components, but one that is designed from the ground up for that very purpose.

    In that system, I could build my OS from a custom set of components to create an OS optimized for my purposes. I could even have multiple "themes" that I had designed to accomplish certain tasks, and switch between them without a reboot. For instance, I could have my web-browse/email/irc theme (optimized to the kernel level for that type of work), or my multitrack digital audio recorder theme (again, optimized to kernel level), or my http/ftp server theme, etc, etc.

    With the type of system I'm talking about, application developers could build a web-browser out of their choice of reusable components using a Themebuilder application. I can then download the theme, which defines all of the components I will need to build the application, and have them downloaded automatically as well. Any components I already have will, of course, not be downloaded again. Then, if I don't like something about it, I can open it up in my Themebuilder and switch out the HTML rendering engine. Boom. Mozilla using the IE engine (or whatever). And, this method should be applicable down to the kernel level. Don't like this kernel? Switch it out for that other one.
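
    The pluggable part of that is easy enough to sketch in something like C# (all names invented, purely illustrative):

      // The application depends only on an interface, so the concrete
      // engine can be swapped without touching the rest of the theme.
      interface IHtmlRenderer {
          void Render(string html);
      }

      class GeckoRenderer : IHtmlRenderer {
          public void Render(string html) { System.Console.WriteLine("[Gecko] " + html); }
      }

      class TridentRenderer : IHtmlRenderer {
          public void Render(string html) { System.Console.WriteLine("[Trident] " + html); }
      }

      class Browser {
          IHtmlRenderer renderer;
          public Browser(IHtmlRenderer r) { renderer = r; }
          public void Show(string html) { renderer.Render(html); }
      }

      class ThemeDemo {
          static void Main() {
              // Switching engines is a one-line change at composition time.
              Browser b = new Browser(new GeckoRenderer());
              b.Show("<h1>hello</h1>");
          }
      }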

    Of course, all of this would have tremendous overhead, decreasing performance. But why not design the system in such a way that the Themebuilder can compile an application into a static image that is efficiently optimized?

    And, to the point of free software: build it in as part of the system (even if an optional one). Create the component packager in such a way that, given the proper command, it includes a full compressed copy of the original source code inside the binary distribution of the component along with a copy of the GPL :). So, if you want to hack that HTML renderer to shave off a couple of microseconds, you can. Of course, that would be an optional feature of the packager, or could use any license. Using this method, the source travels with the binary (why shouldn't it?), which is a small payload when compressed, and is just a fraction of what could conceivably be saved by using a component architecture such as this.

    I guess what I'm saying is that, we seem to be at a crossroads in operating system design. Do we want to keep building crap on top of crap just to make the original crap capable of doing a half-assed job of what it should? Or do we want to put our heads together, think about everything that we and others have learned, think about what we can imagine as the operating system of the future, and make it happen?

    Just my two cents. Maybe I'm crazy. And, maybe I'm the one who missed the point. But, I'd love to hear others' ideas of their ideal operating system of the future.

    thanks.
    • I think the main problem with what you suggest is that nothing would ever work!

      Look at the hardware world - here we have genuine reuse of components - you really can go out and buy a black box (or grey in the case of chips in ceramic cases), read the documentation on how it works and integrate it successfully into a system. But the amount of work you have to do to accommodate the particular component you chose is huge.

      From a high-level point of view, a PowerPC processor does the same thing that a Pentium 4 does. But the details are so different that you have to completely redesign the system if you want to migrate from one to the other. The same problems would plague any attempt to make such low level software pieces as a scheduler 'pluggable'.

      Even components that are designed to be plug compatible suffer from problems. Not all PCI devices work properly in all systems. Even consumer electronics doesn't get it right - I've had to help lots of people get VCRs working properly. The advent of digital TV has made it really hard to get everything working properly. (I'm British - most people here can receive one of 3 competing digital TV services, either cable, satellite or terrestrial. So this is a big issue here now.) It took me about an hour to grok my parents' system and wire it up so that it worked, and I've spent 2 years designing MPEG2 broadcast systems! Did you know that over 90% of people don't use their VCRs to do anything other than play rented videos, because they can't work out how to connect the things up? And yet the interconnects between such components are some of the best-standardised component interfaces in the world! The real problem here is that the conceptual model is overcomplex - unless you understand the various paths down which information flows in this system (which parts are digital, which are analogue, how they are multiplexed etc.) you're never going to get it working reliably.

      So the likelihood of getting a complete computer system to work where everything has been componentised is extremely slim. It would mean that every user suddenly had to be a systems integrator. Anyone who has worked professionally in systems integration will tell you how long-winded and complex the integration process is.
  • I read the article and am very excited about "componentizing" Unix...so has work been dropped on this front or is it still ongoing? I find this approach, and HURD's, very elegant and better in the long run. Unix is way too complex as it is, with hundreds of libraries, independent executables, and configuration files strewn all over. We need to move to a higher level of abstraction, a component model. It looks like Ximian is going even further and embracing .NET, although I doubt that this will ever be useful at the system level. A "native" RPC mechanism like CORBA is probably required (um, we can't exactly be sending gigantic SOAP requests around to read a block from a device). So, is there any Bonobo work being done outside Gnome, at the system level?
