Interview With Miguel de Icaza
Bringing a component architecture to the UNIX platform
By Dare (Carnage4Life) Obasanjo
Dare Obasanjo: You have recently been in the press due to Ximian's announcement that it shall create an Open Source implementation of Microsoft's .NET development platform. Before the recent furor you've been notable for the work you've done with GNOME and Bonobo. Can you give a brief overview of your involvement in Free Software from your earlier projects up to Mono?
Miguel de Icaza: I have been working for the past four years on the GNOME project in various areas: organization of it, libraries and applications. Before that I used to work on the Linux kernel, I worked for a long time on the SPARC port, then on the software raid and some on the Linux/SGI effort. Before that I had written the Midnight Commander file manager.
Dare Obasanjo: In your Let's Make Unix Not Suck series you mention that UNIX development has long been hampered by a lack of code reuse. You specifically mention Brad Cox's concept of Software Integrated Circuits, where software is built primarily by combining reusable components, as a vision of how code reuse should occur. Many have countered your arguments by stating that UNIX is built on the concept of using reusable components to build programs by connecting the output of smaller programs with pipes. What are your opinions of this counter-argument?
Miguel de Icaza: Well, the paper addresses that question in detail. A `pipe' is hardly a complete component system. It is a transport mechanism that is used with some well known protocols (lines, characters, buffers) to process information. The protocol only has a flow of information.

Details are in the paper: http://primates.ximian.com/~miguel/bongo-bong.html

[Dare -- check the section entitled "Unix Components: Small is Beautiful"]
Dare Obasanjo: Bonobo was your attempt to create a UNIX component architecture using CORBA as the underlying base. What are the reasons you have decided to focus on Mono instead?
Miguel de Icaza: The GNOME project goal was to bring missing technologies to Unix and make it competitive in the current market place for desktop applications. We also realized early on that language independence was important, and that is why GNOME APIs were coded using a standard that allowed the APIs to be easily wrapped for other languages. Our APIs are available to most programming languages on Unix (Perl, Python, Scheme, C++, Objective-C, Ada).
Later on we decided to use better methods for encapsulating our APIs, and we started to use CORBA to define interfaces to components. We complemented it with policy and a set of standard GNOME interfaces for easily creating reusable, language independent components, controls and compound documents. This technology is known as Bonobo. Interfaces to Bonobo exist for C, Perl, Python, and Java.
CORBA is good when you define coarse interfaces, and most Bonobo interfaces are coarse. The only problem is that Bonobo/CORBA interfaces are not good for small interfaces. For example, an XML parsing Bonobo/CORBA component would be inefficient compared to a C API.
I also wrote at some point:

My interest in .NET comes from the attempts that we have made before in the GNOME project to achieve some of the things .NET does:
- APIs that are exposed to multiple languages.
- Cross-language integration.
- Contract/interface based programming.
And on top of things, I always loved various things about Java. I just did not love the take-it-or-leave-it Java combo.
We tried exposing APIs to many languages by having a common object base (GtkObject) and then following an API contract and a format that would allow others to wrap the APIs easily for their programming language. We even have a Scheme-based definition of the API that is used to generate wrappers on the fly. This solution is suboptimal for many reasons.
Cross-language integration we have been doing with CORBA, sort of like COM but with an imposed marshalling penalty. It works pretty well for out-of-process components. But for in-process components the story is pretty bad: since there was no CORBA ABI that we could use, the result is so horrible that I have no words to describe it.
On top of this problem, we have a proliferation of libraries. Most of them follow our coding conventions pretty accurately. Every once in a while they either won't, or we would adopt a library written by someone else. This has led to a mix of libraries that, although powerful, implement multiple programming models and sometimes different allocation and ownership policies. After a while you are dealing with five different kinds of "ref/unref" behaviours (CORBA local references, CORBA object references on Unknown objects, reference counts on object wrappers) and this was turning into a gigantic mess.
We have of course been trying to fix all these issues, and things are looking better (the GNOME 2.x platform does solve many of these issues, but still).
.NET seemed to me like an upgrade for Win32 developers: they had the same problems we had when dealing with APIs that have been designed over many years, a great deal of inconsistency. So I want to have some of this new "fresh air" available for building my own applications.
Dare Obasanjo: Bonobo is loosely based on COM and OLE2, as can be gleaned from the fact that Bonobo interfaces are all based on the Bonobo::Unknown interface, which provides two basic services, object lifetime management and object functionality discovery, and only contains three methods:
module Bonobo {
  interface Unknown {
    void ref ();
    void unref ();
    Object query_interface (in string repoid);
  };
};
which is very similar to Microsoft's COM IUnknown interface, which has the following methods:

HRESULT QueryInterface(REFIID riid, void **ppvObject);
ULONG AddRef();
ULONG Release();

Does the fact that .NET seems to spell the impending death of COM mean that Mono will spell the end of Bonobo? Similarly, considering that .NET plans to have semi-transparent COM/.NET interoperability, is there a similar plan for Mono and Bonobo?
Miguel de Icaza: Definitely. Mono will have to interoperate with a number of systems out there, including Bonobo on GNOME.
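Both contracts boil down to the same two services: lifetime management via reference counting, and interface discovery. A minimal sketch of the pattern in plain C (the names and repository id are illustrative, not the real Bonobo or COM ABI):

```c
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the "Unknown" pattern: every component exposes
 * reference counting (lifetime) plus interface discovery. */
typedef struct Unknown Unknown;
struct Unknown {
    int refcount;
    /* Ask the object whether it supports a given interface,
     * identified here by a repository-id string. */
    void *(*query_interface)(Unknown *self, const char *repoid);
};

void unknown_ref(Unknown *obj)
{
    obj->refcount++;
}

void unknown_unref(Unknown *obj)
{
    if (--obj->refcount == 0)
        free(obj);
}

/* A trivial component that only implements Unknown itself. */
static void *query_interface(Unknown *self, const char *repoid)
{
    if (strcmp(repoid, "IDL:Bonobo/Unknown:1.0") == 0) {
        unknown_ref(self);      /* hand out a new reference */
        return self;
    }
    return NULL;                /* interface not supported */
}

Unknown *unknown_new(void)
{
    Unknown *obj = malloc(sizeof *obj);
    obj->refcount = 1;
    obj->query_interface = query_interface;
    return obj;
}
```

A caller that obtains an interface through query_interface owns a new reference and must release it; that discipline is exactly what both IUnknown and Bonobo::Unknown impose on their clients.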
Dare Obasanjo: A number of parties have claimed that Microsoft's .NET platform is a poor clone of the Java(TM) platform. If this is the case, why hasn't Ximian decided to clone or use the Java platform instead of cloning Microsoft's .NET platform?
Miguel de Icaza: We were interested in the CLR because it solves a problem that we face every day. The Java VM did not solve this problem.
Dare Obasanjo: On the Mono Rationale page it is pointed out that Microsoft's .NET strategy encompasses many efforts including
- The .NET development platform, a new platform for writing software.
- Web services.
- Microsoft Server Applications.
- New tools that use the new development platform.
- Hailstorm, the Passport centralized single sign-on system that is being integrated into Windows XP.
Miguel de Icaza: Not at this point. We have a commitment to develop currently:
- A CLI runtime with a JITer for x86 CPUs.
- A C# compiler.
- A class library
All of the above with the help of external contributors. You have to understand that this is a big undertaking and that without the various people who have donated their time, expertise and code to the project we would not even have a chance of delivering a complete product any time soon.
We are doing this for selfish reasons: we want a better way of developing Linux and Unix applications ourselves and we see the CLI as such a thing.
That being said, Ximian being in the services and support business would not mind extending its effort towards making the Mono project tackle other things like porting to new platforms, or improving the JIT engine, or focusing on a particular area of Mono.
But other than this, we do not have plans at this point to go beyond the three basic announcements that we have made.
Dare Obasanjo: There are a number of other projects implementing other parts of .NET on Free platforms that seem to have friction with the Mono project. Section 7.2 of Portable.NET's FAQ seems to indicate they have had conflict with the Mono project, as does the banning of Martin Coxall from the dotGNU mailing list. What are your thoughts on this?
Miguel de Icaza: I did not pay attention to the actual details of the banning of Martin from the DotGNU mailing lists. Usenet and Internet mailing lists are a culture of their own and I think this is just another instance of what usually happens on the net. It is definitely sad.
The focus of Mono and Portable.NET is slightly different: we are writing as much as we can in a high level language like C#, and writing reusable pieces of software out of it. Portable.NET is being written in C.
Dare Obasanjo: There have been conflicting reports about Ximian's relationship with Microsoft. On one hand there are reports that seem to indicate that there may be licensing problems between the license that will govern .NET and the GPL. On the other hand there is an indication that some within Microsoft are enthusiastic about Mono. So exactly what is Ximian's current relationship with Microsoft, and what will be done to ensure that Mono does not violate Microsoft's licenses on .NET if they turn out to be restrictive?
Miguel de Icaza: Well, for one we are writing everything from scratch.
We are trying to stay on the safe side regarding patents. That means that we implement things in a way that has been used in the past, and we are not doing tremendously elaborate or efficient things in Mono yet. We are still very far from that. We are just using existing technologies and techniques.
Dare Obasanjo: It has been pointed out that Sun retracted Java(TM) from standards processes at least twice, will the Mono project continue if .NET stops being an open standard for any reason?
Miguel de Icaza: The upgrade of our development platform has value independently of whether it is a standard or not. The fact that Microsoft has submitted its specifications to a standards body has helped, since people who know about these issues have looked at the specifications and can pinpoint problems for interoperability.
Dare Obasanjo: Similarly what happens if Dan Kusnetzky's prediction comes true and Microsoft changes the .NET APIs in the future? Will the Mono project play catchup or will it become an incompatible implementation of .NET on UNIX platforms?
Miguel de Icaza: Microsoft is remarkably good at keeping their APIs backwards compatible (and this is one of the reasons I think they have had so much success as a platform vendor). So I think that this would not be a problem.
Now, even if this was a problem, it is always possible to have multiple implementations of the same APIs and use the correct one by choosing the proper "assembly" at runtime. Assemblies are a new way of dealing with software bundles, and the files that are part of an assembly can be cryptographically checksummed and their APIs programmatically tested for compatibility. [Dare -- Description of Assemblies from MSDN glossary]
So even if they deviate from the initial release, it would be possible to provide assemblies that are backwards compatible (we can both do that: Microsoft and ourselves).
Dare Obasanjo: Looking at the Mono class status page I noticed that a large number of .NET class libraries are not being implemented in Mono such as WinForms, ADO.NET, Web Services, XML schemas, reflection and a number of others. This means that it is very likely that when Mono and .NET are finally released apps written for .NET will not be portable to Mono. Is there any plan to rectify this in the future or is creating a portable .NET platform not a goal of the Mono project? Similarly what are the short and long term goals of the Mono project?
Miguel de Icaza: The status web page reflects the classes that people have "requested" to work on. The status web page is just a way of saying `Hey, I am working on this class as of this date' to avoid code duplication. If someone registers their interest in working on something and they do not do something after some period of time, then we can reclaim the class.
We are in the very early stages of the project, so you do see more work going into the foundational classes than into the end-user classes.
I was not even expecting so many great and talented programmers to contribute so early in the project. My original prediction was that we would spend the first three months hacking on our own in public with no external contributions, but I have been proved wrong.
You have to realize that the goals of the Mono project are not only the goals of Ximian. Ximian has a set of goals, but every contributor to the project has his own goals: some people want to learn, some people like working on C#, some people want full .NET compatibility on Linux, some people want language independence, some people like to optimize code, some people like low level programming and some people want to compete with Microsoft, some people like the way .NET services work.
So the direction of the project is steered by those that contribute to it. Many people are very interested in having a compatible .NET implementation for non-Windows platforms, and they are contributing towards filling those gaps.
Dare Obasanjo: How does Ximian plan to pay for the costs of developing Mono especially after the failure of a number of recent venture funded, Free Software-based companies like Indrema, Eazel and Great Bridge and the fact that a sizable percentage of the remaining Free Software based companies are on the ropes? Specifically how does Ximian plan to make money at Free Software in general and Mono in particular?
Miguel de Icaza: Ximian provides support and services. We announced a few of our services recently, and more products and services have been in the pipeline for quite a while and will be announced during the next six months.
Those we announced recently are:
- Red Carpet Express: a subscription service for those who want reliable, high-speed access to the Red Carpet servers.
- Red Carpet Corporate Connect: We modified our Red Carpet updater technology to help people manage networks of Linux workstations easily and to deploy and maintain custom software packages.
- Support and services for the GNOME desktop and Evolution: Our latest boxed products are our way of selling support services for the various products we ship.
The particular case of Mono is interesting. We are working on Mono to reduce our development costs. A very nice foundation has been laid and submitted to ECMA. Now, with the help of other interested parties that also realize the power of it, we are developing the Mono runtime and development tools to help us improve our productivity.
Indeed, the team working on Mono at Ximian is the same team that provided infrastructural help to the rest of the company in the past.
Dare Obasanjo: It is probably little known in some corners that you once interviewed with Microsoft to work on the SPARC port of Internet Explorer. Considering the impact you have had on the Free Software community since then, have you ever wondered what your life would have been like if you had become a Microsoft employee?
Miguel de Icaza: I have not given it a lot of thought, no. But I did ask everyone I interviewed at Microsoft to open source Internet Explorer, way before Netscape Communicator was Open Sourced ;-)
unfortunately (Score:5, Funny)
Why does the ximian logo [slashdot.org] look exactly like a spider shoved up somebody's left nostril?
Re:unfortunately (Score:2)
Wow, that's spooky.
Re:unfortunately (Score:1, Funny)
jacare, god, uzi, Osama bin Laden
Arrest them!
Re:unfortunately (Score:1)
Me too! I can never see the spider in the nostril.
Obsoleting unfinished software... (Score:1, Troll)
so, Miguel is obsoleting Bonobo before it is even ready for primetime? If you pay attention throughout the interview, in the end it just boils down to..
"we were having a really tough time meeting our goals so we've decided to do this instead, Microsoft was doing it so we thought if they can make billions off of it maybe we can make a few mil."
You know, I used to love Gnome when the people behind it knew what they were driving towards: an object-based unix framework. Nowadays they haven't a clue. KDE has put up, and it's time Miguel shut up.
--iamnotayam
Re:Obsoleting unfinished software... (Score:2, Insightful)
So why would anyone use Bonobo instead of Mono? Lots of reasons. For one thing, it's easier to wrap legacy applications and languages in Bonobo than in Mono. Since most of the world's applications are legacy applications and Mono won't be finished and stabilized for at least 2 years, this is a big deal. Also, not everyone will be developing for Mono, so Bonobo is a good way of creating components. And Bonobo isn't GNOME specific. You can use it on Windows without including any GTK+ libraries.
BTW, KParts is Qt specific. You can't write a UNO (OpenOffice's component model) KPart. But you can write a Bonobo-UNO bridge so all UNO objects appear as Bonobo components and can be used by Bonobo applications.
Re:Obsoleting unfinished software... (Score:1, Informative)
Rich.
Re:Obsoleting unfinished software... (Score:4, Insightful)
All of it. The icons in KDE are much clearer and easier to distinguish than the Gnome ones. Icons should *not* be mini-photographs -- they should be clear simple representations. The Gnome icons give me a headache.
KDE had issues with look and feel back in the KDE 1 days. It doesn't any more. Gnome has the advantage of a larger community developing themes and styles, but the default in KDE 2 is perfectly acceptable, and the recent point releases have greatly increased the 'style candy' aspects over the original 2.0.
--
Don't take the sniping of random Slashdot trolls as a reason for not helping to theme KDE -- but don't go into it with the attitude that you are saving KDE from some horrible design mistake, because there isn't one there.
Re:Obsoleting unfinished software... (Score:1)
But, you'll probably still find them 'pointlessly shiny and obscure'. It's a strange thing to get so worked up over. But, just so I know, can you point me toward examples of icon sets which satisfy your urges?
last line of interview (Score:1)
= i asked microsoft to give away everything that they had paid developers to make for the last 3 years for free...
Re:last line of interview (Score:1)
At any rate, he just asked that they make the source available. I'm not a huge open source guy, but I think that could have really helped back in the days of IE 3 -- and may have limited the W3C from pulling some of that crap that has made the standard useless and IE incompatible (proof that standards bodies can do a lot of harm when they aren't impartial).
user interface a priority at Ximian? (Score:4, Flamebait)
Does any contributor's goal include a focus on usability issues and user experience design? If so, they weren't apparently worth listing.
As in many other interviews, de Icaza's comments are focused almost entirely on technical issues, and not on design issues. Component architectures may be fascinating for engineers, but they don't deliver an enhanced experience for the user by themselves.
To really improve the Linux user experience will require the kind of passionate engagement with the user that Apple has had, but instead we seem to be seeing a very programmer-centered set of interests and preoccupations at Ximian.
Tim
What does user interface have to do with Mono? (Score:5, Insightful)
Considering that they are currently working on the compiler, the language runtime and base class libraries for Mono I fail to see what user interfaces have to do with anything at this stage in the development process.
On the other hand if this was an interview about GNOME, which it isn't then I assume he would have mentioned the user interface issues.
Re:What does user interface have to do with Mono? (Score:4, Insightful)
That's exactly the problem. It's called "user-centered system design" for a reason. User experience is upstream from engineering in a user-centered project. You don't bring designers in late in the game to slap some icons on the system. Instead, you have a set of designs that engineers work towards implementing.
On the other hand if this was an interview about GNOME, which it isn't then I assume he would have mentioned the user interface issues.
I believe your assumption is in error. I have seen de Icaza discuss GNOME in exactly the same way -- naming lots of libraries and implementation strategies, but saying almost nothing about user-facing issues. That's why I noted the continuing pattern in my message.
Tim
Re:What does user interface have to do with Mono? (Score:1, Informative)
http://lists.ximian.com/archives/public/mono-li
Once the base libraries are in place and the GTK+ bindings are finished, it should be possible for more user-centered features to be added. BTW, don't forget that Sun is adding user centered features to Gtk+ so Mono will automatically inherit those.
Re:What does user interface have to do with Mono? (Score:2)
The user-centric design depends on the users of your system. The users of a compiler and language runtime are not going to interact with a GUI, so user interface discussions are irrelevant at this time. Secondly, any GUI issues that will be brought to the implementers of the language runtime and the compiler will be technical issues, probably to do with performance, and not user interface issues.
I believe your assumption is in error. I have seen de Icaza discuss GNOME in exactly the same way -- naming lots of libraries and implementation strategies, but saying almost nothing about user-facing issues. That's why I noted the continuing pattern in my message.
This I cannot agree or disagree with since most of the interviews involving Miguel or articles written by him I have read are about components and the like which I am interested in and not GUIs which I am not.
Re:What does user interface have to do with Mono? (Score:1)
Why can't compiler users interact with a GUI? I believe this one point is holding Linux back more than any other. There are too many things in Linux that require you to compile from source code, just to be able to run everything that people are used to with Windows.
It's easy to load up a game on Windows, you run the installer, then you run the game.
Under Linux, if you want to use high quality peripherals, half the time you need to recompile or patch your kernel, or compile and add modules to your system.
All of this has to be done using arcane commands and confusing config files. How-tos are not useful to beginners, since the differences between distributions and versions mean that you almost need a CS degree to even get a common setup working.
Let's get some user-centric design right into the guts of the system, and make it easier to start out.
Re:What does user interface have to do with Mono? (Score:3, Informative)
The Mono compiler is in itself a bunch of classes written in C#. Those classes are being developed so that you can have multiple instances of the compiler, for example, and so that it is reusable as a component. The idea behind what can look like a really stupid goal is to make the C# compiler a component that gets plugged into SharpDevelop (a GPL IDE).
Also parts of the interfaces in the compiler are there so that SharpDevelop can provide method completion and useful suggestions to the user (what Microsoft calls "IntelliSense").
The compiler "base" classes can be shared, and they are indeed the foundation for a work-in-progress Mono BASIC.NET implementation. A separate effort with the same goals in mind is creating an ECMAScript compiler.
But to us -Ximian- Mono represents a way of reducing our cost of developing end user applications. Evolution, our largest program (750k lines of code so far), has shown the need for a better platform for software development. Everyone likes to criticize Microsoft for writing unstable applications. The problem is that once you reach a certain complexity, it is hard to keep improving and extending an application without a deep knowledge of its internals.
So Mono to us is just a new platform to build better, larger, faster, nicer, more robust applications. Oh, and it has the side effect of being compatible with the .NET Framework.
Love,
Miguel.
Re:What does user interface have to do with Mono? (Score:1)
What gave you the idea that a programming framework was a "user-centered project"? If only you were trolling...
Re:What does user interface have to do with Mono? (Score:3, Insightful)
All software is made of many layers, and there are different concepts at each layer. The highest layer deals in concepts that are exposed to the user, and should be designed in a "user-centered" way. The other layers, by and large, deal in concepts that the user shouldn't have to think about, so their design should be based mostly on engineering considerations.
The notion that every layer of design needs to be "user-centered" is a gross distortion that harms software development. It would enormously complicate the design of the lower-level functionality, impeding the development of clean, simple layers that can be used by a variety of applications (some unforeseen). I think there are enough examples to make this obvious.
By your standard, it seems we should criticize CPU designers for not considering the end-user. This is the same fallacy that leads people to say foolish things like, "You can't build a beginner-friendly interface on top of Unix".
I have seen de Icaza discuss GNOME in exactly the same way
Well, even though GNOME overall is a user desktop, large parts of it are not at the user-facing level, so it is entirely appropriate for Miguel to talk about those parts of GNOME from an engineering viewpoint. Hopefully, though, other people in GNOME think more about the user-facing parts. :-)
Re:What does user interface have to do with Mono? (Score:2)
Now do a search for "interface."
It is pretty clear that Miguel does care about the interfaces that the users of his libraries use. Or in other words, the interfaces that he cares about are the same interfaces that users use.
Hope that helped.
Re:What does user interface have to do with Mono? (Score:3, Insightful)
I don't know what planet you are from. But here in the real world the only "users" that are likely to use the Mono compiler and language runtime are developers.
In other words, the developers working on the product don't need to query a bunch of lawyers, accountants, and fast food personnel to find out how they should work. They already know how they should work.
The fact of the matter is that the compiler should work very similarly to gcc, or the command line java compilers from Sun and IBM, or every other compiler written in the last 10 years. Especially since their target audience is currently Linux developers.
When Miguel talks about Gnome to hackers, of course he talks about libraries, APIs, and implementation. On the other hand, Miguel makes no bones about modeling his User Interfaces from Microsoft's software. Gnumeric's interface, for example, is close enough to Microsoft Excel that I recently used it to ace a College course in the use of Excel. As far as I am concerned, when it comes to spreadsheet usability aping Excel is the only way to go. I have also seen lots of articles by Miguel where he praises Windows UI, and badmouth's Linux's UI (including Gnome).
Believe me, when it comes to the "user centered" design of a compiler Miguel is as qualified as anyone I can think of to choose the UI design.
Re:What does user interface have to do with Mono? (Score:3, Insightful)
Exactly. If you start with programming concepts and then slap an interface on top of it, you find that the underlying layers won't support the interface you need. For instance, in a network application framework, there are serious issues about timeout. If they're approached from a traditional programming perspective, you'll be left with a system that drops connections out from under the user and loses their work in progress.
This kind of issue is pervasive throughout any software system. You've mentioned a few other good examples; I'd add things like timeout management (as above), exception handling, module installation, and refresh timing. Engineers looking at these problems will generally take paths of least resistance which wind up hurting the user. The only way to prevent these problems is to start with the user and work back to the system. If you try to fix them after the foundation is already laid, engineering will always report that it's too late to tear up the foundation.
Tim
Re:user interface a priority at Ximian? (Score:5, Informative)
The Eazel people always talked about usability, and always tried to make computers easier to use. I think their contribution to the GNOME project will live forever in terms of having taught us that things need to improve in that area.
Sure, the interview did not talk about these topics, because the questions that Dare asked were not focused on that area.
If you want to see the kind of things that the GNOME project members are doing towards improving the user interface, I suggest you read a number of mailing lists: gnome-2-list, gnome-devel-list, gtk-list and gnome-hackers. They have archives on the web.
Many new usability features are added to GNOME continuously. Most GNOME hackers have been reading on topics of user interfaces and usability and have been acting based on this input.
Also, Sun has been providing direct feedback from the usability labs (you might want to check gnotices and the developer.gnome.org site for the actual articles and comments).
Based on this input we have been making changes to the platform to make sure that GNOME becomes a better desktop.
I am sure others can list in more detail all the new improvements that we got. For example, I just found out about the new screenshot features in the new gnome-core; the Setup Tools got a great review in the Linux magazines for how simple they make customizing the system; and there is a new and simplified control center, with control center modules that address many of the usability problems we had in the past.
Better integration is happening all the time in Nautilus.
The bottom line is: the GNOME project is more active than ever and if you want to get to the source, you can either check the mailing list archives, or we should run a series of interviews with the various developers of GNOME that have contributed those features.
Dare was interested in Mono, so that was the focus of this article.
Miguel.
Re:user interface a priority at Ximian? (Score:1, Flamebait)
Most GNOME hackers have been reading on topics of user interfaces and usability and have been acting based on this input.
Has any actual UI usability (non-technical) person been involved in the process at all?
This is a lesson that Linux never seems to want to learn.
See http://usability.gnome.org (Score:2)
-Seth
Re:user interface a priority at Ximian? (Score:2)
It happens to me.
Now, regarding the GNOME releases, the various components that make up GNOME are released continuously. The "branded" releases of GNOME have never been anything but a "re-brand" of the latest set of stable releases.
In some cases, when major components were integrated (like gnome-vfs, oaf, bonobo), the "branding" was the point where we committed to maintain API stability for some components. For other pieces, the switch to 1.4 from the previous version was basically just a configure.in number change.
GNOME 2.x is a different case. It is a switch of our platform to Gtk+ 2.0.
Miguel.
Re:user interface a priority at Ximian? (Score:3, Interesting)
Although I have some odd attachment to Gtk flavored apps over Qt/KDE stuff, UI-wise the GNOME stuff seems downright inferior.
Ximian *is* programmer-centric (Score:2)
IMHO, the only way the Linux user experience will be improved is if people from the Mac community who are disenchanted with the direction Apple has taken, and with the market-share limitations of the PowerPC architecture, create a new desktop environment that "just happens to use the Linux kernel" (as opposed to a desktop environment that "brings Linux to the desktop") and put the greatest emphasis on how people use technology, not on the technology itself.
Re:user interface a priority at Ximian? (Score:2)
Yes they do. If it means much less time wrangling over obscure technical hacks, that's all the more time programmers can spend on "user" issues. So yes, good user-agnostic technical design can lead to good end-user experience. Contrast that with having a horrible mess of infrastructure painted over by a GUI which has had lots of time and money thrown at it.
How will mono solve inter language problems ? (Score:1, Interesting)
I still don't see how Mono will solve the inter-language problems in UNIX. Will they have a
Will updating a
Anyway, I'm kind of happy people are starting to address the inter-language issues. I have a template-based C++ library (NURBS++, if you care to know) that I'd like to integrate into other languages easily, so I can add scripting to it.
Scheme might be better than C++ for RAD and idea testing. Not sure, but with something like
Re:How will mono solve inter language problems ? (Score:1, Insightful)
Re:How will mono solve inter language problems ? (Score:4, Informative)
That means that if you use a Pascal component, it has to be compiled with a compiler that generates files in the CIL format. There are compilers for a bunch of languages out there (mostly proprietary, but there are some research and free compilers as well; the one we are writing is free).
That being said, the
The interesting bit here is that the runtime can provide bridges to arbitrary component systems. For example, Microsoft in their runtime have support for COM. We envision our runtime supporting some sort of COM (XPcom maybe) at the runtime level and things like Bonobo at a higher level (as it does not need any runtime support, just a set of classes that use System.Reflection).
Miguel
API Wrappers (Score:5, Interesting)
Y'know, I hear this all the time, but it just ain't true. The C++ support for Gnome is horrendous. It's been a few months since I've last looked, though. Has it improved at all?
As an example, I'd like to use the canvas in a project I'm planning but there wasn't any C++ interface when last I looked.
Re:API Wrappers (Score:2, Insightful)
One good thing to come out of the Gnome C++ bindings is libsigc++: signals and slots done within the language. To be fair to the Qt guys, at the time most compilers didn't have support to do what libsigc++ does and they were forced to use a workaround.
On the one hand we have Gnome with an incomplete C++ API and on the other we have KDE with a non-ideal signalling mechanism.
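The in-language approach libsigc++ pioneered can be sketched in a few lines of standard C++. This is not libsigc++'s actual API (the `Signal`, `connect`, and `emit` names here are invented for illustration); it only shows that templates and `std::function` are enough to get statically type-checked signal connections without a special preprocessor like Qt's moc:

```cpp
#include <functional>
#include <utility>
#include <vector>

// A minimal sketch of the signal/slot idea, done entirely within the
// language. NOT libsigc++'s real interface -- just an illustration that
// no code generator is needed for type-safe callbacks.
template <typename... Args>
class Signal {
public:
    using Slot = std::function<void(Args...)>;

    // Connecting a handler with the wrong argument types fails to compile.
    void connect(Slot slot) { slots_.push_back(std::move(slot)); }

    // Invoke every connected handler with the given arguments.
    void emit(Args... args) {
        for (auto& slot : slots_)
            slot(args...);
    }

private:
    std::vector<Slot> slots_;
};
```

The compile-time check on `connect` is the static type safety the comment contrasts with Qt's string-based connection mechanism of that era.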
Re:API Wrappers (Score:1)
Re:libsigc++ is unportable (Score:1)
Re:API Wrappers (Score:3, Informative)
Re:API Wrappers (Score:1)
If Gtk--/Gnome-- have been fleshed out further, that is fabulous. I'll take another look when I get started on the project (any year now... :)).
Re:API Wrappers (Score:1)
The problem I have with Inti is that it is almost completely undocumented. Lots of code got written, none of it was documented, and now it languishes unfinished as the original author (Havoc, I think) went on to other projects.
I also question Inti's rejection of STL support. While it certainly doesn't have to integrate with every whiz-bang STL feature, I can certainly imagine uses for STL-compatible iterators and algorithm support.
Agreed. My hesitation is not with the language, it's with the tools. I know it's irrational, but I really find the moc annoying. It's one more dependency to deal with in Makefiles. I know KDE has special support to handle all of this, but I want to develop a project relatively independently of any specific infrastructure. I don't want to have to integrate with a special build system. Plus, special keywords mess up Emacs highlighting and indenting. :)
But maybe these are just the rants of a stubborn old codger. :)
Gtk-- vs Qt (Score:2)
Gtk-- is by far the most elegant. It actually uses C++ as it is intended to be used, giving a statically typesafe interface unlike Qt, and not depending on ugly special purpose preprocessor hacks.
However, Qt is way more comfortable. Stable, well-documented APIs beat any amount of elegance, and the ugly hacks are easy to use. The lack of static type safety is something most programmers are used to dealing with.
I switched from Gtk-- to Qt and haven't regretted it; it gets the work done. But sometimes the academician in me misses the superior elegance of Gtk--.
Why CLR? (Score:2)
Re:Why CLR? (Score:1)
> interpreted code was. Why not just make a new
> object format, or extend an existing one, but
> make it pure Intel object code?
I guess because Intel object code is far from perfect: too many opcodes to consider, too many special cases that would have to be emulated. OK, there are some 386/486/Pentium emulators out there, but I can think of better VM designs for making byte-code portable.
On the other hand: Why not use Z-Code [demon.co.uk]? I remember one company that even made a database program for this architecture...
Re:Why CLR? (Score:2)
The problem as I see it: CORBA tried to appease people who used crappy languages (like C), and that made everything 10x harder (or more?). .NET makes a new runtime for all the various languages, and it doesn't have to deal with the semantics of unfriendly languages. Some dynamic languages are fairly easy to port -- I think Perl and Python are both available without a huge amount of effort (?) -- and other languages have to be reinvented (C#).
Then they put the whole thing on a VM, which seems unnecessary. Most of the dynamic languages already run on a VM anyway (their respective byte-code interpreters), so maybe it's not a big deal. But it conflates two different ideas -- sharing data and executing in different environments. There is less conflation of unrelated ideas here than in Java, but it's still unfortunate.
But then again, the alternative is for them to make all the different pieces anyway, and give each one an acronym of its own.
Re:Why CLR? (Score:2)
The reason it relates to ELF is that there needs to be a common format for storing type information in the object files. Remember, in the CLR, you can have Java extend a C++ class. This means that every class needs to have its type information stored in the ELF file. In addition, there need to be standards for how classes are loaded, initialized, and linked.
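To make "type information in the object file" concrete, here is a hypothetical metadata record of the kind the comment describes. The `ClassInfo`/`MethodInfo` names and layout are invented for illustration; real CLR metadata is far richer and stored in a binary table format. The point is that a loader written in *any* language can walk such a table to discover a class's shape and which virtual slots a subclass may override:

```cpp
#include <string>
#include <vector>

// Invented-for-illustration metadata describing one method of a class.
struct MethodInfo {
    std::string name;
    std::string signature;  // e.g. "(int)->void"
    bool is_virtual;        // only virtual slots can be overridden
};

// Invented-for-illustration metadata describing one class. In a real
// cross-language runtime a record like this would live in the object
// file itself, not in compiler-private headers.
struct ClassInfo {
    std::string name;
    std::string base;       // empty if there is no base class
    std::vector<MethodInfo> methods;
};

// What a cross-language loader might do: look up a virtual slot by name
// so a class written in another language can override it.
inline const MethodInfo* find_virtual(const ClassInfo& c,
                                      const std::string& name) {
    for (const auto& m : c.methods)
        if (m.is_virtual && m.name == name)
            return &m;
    return nullptr;
}
```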
Re:Why CLR? (Score:2)
I would expect current C++ compilers to be incompatible with this system, because they don't allow enough runtime introspection. I wouldn't expect C to work at all, except in a manner that involves lots and lots of function calls for everything (the if-you-must-use-C-you-will-suffer style, which isn't that uncommon -- the GTK object system feels a little like that).
Maybe this is getting me to understand why the CLR is being used. If you can subclass objects from other languages, you then will have some of the methods implemented in one language and portions in another. This is easy if they all are actually a common set of bytecodes. Easy to GC too. This is hard if there's multiple interpreters/binaries/etc. being invoked, and near impossible to GC.
Re:Why CLR? (Score:2)
Actually, all Linux-compiled languages use ELF. C, Java, C++, Objective-C, etc.
Re:Why CLR? (Score:2)
Re:Why CLR? (Score:2)
You have to pick a format, so why not pick a native one? There seems to be no need to develop a brand new format when one already exists for compiled languages.
Why not Parrot? JVM? (Score:2)
I would like to see a cook-off between Mono and Parrot though, for similar high-level code. It would seem smart to use the best underlying VM for all of these high level languages.
Re:Why CLR? (Score:1)
As for the advantages, well several spring to mind, and these all apply to Java just as much as .NET:
The new environment (Score:2)
I think this will put a major crunch into development projects like Mono and .NET
Obviously this interview was probably done a few weeks ago, so I wonder how things have changed over there.
I'm just wondering how much demand there will be for projects like this, especially if MS is betting the farm on it. You can only bet the farm so many times before you lose.
Re:The new environment (Score:2)
And how many tens of thousands of computers were destroyed in the tragedy and will need to be replaced?
My dad even suggested buying Dell and Gateway stock for that reason.
Re:UNIX-only? (Score:1)
Enlighten me (Score:1)
Re:Enlighten me (Score:3, Informative)
It does not matter in which language you define a function or a class: other languages targeting the CLR can consume the data (as long as the languages are CLS-compliant; there is a spec you have to follow to get this).
The JVM did not have such functionality nor such a spec.
Miguel.
Languages for the JVM (Score:2)
Re:Languages for the JVM (Score:2)
Also it lacks features like P/Invoke that were things we were interested in.
We were trying to see how to move ahead with GNOME, and we made a choice.
Miguel.
Re:Languages for the JVM (Score:1)
You will find this article very interesting:
JVM And CLR [byte.com]
Re:Languages for the JVM (Score:1)
There probably are some fundamental limitations of the JVM - I expect talking to Lispers, embedded-database people, or workflow-app developers, for example, would quickly elicit a good set of requirements for an improved VM - but the CLR is such a transparent knock-off that it doesn't add anything significant itself; essentially, MS seems to be relying on better-integrated tools to compete.
What there might be in this struggle that could benefit Linux developers remains a mystery to me.
Eh...!? (Score:2, Informative)
* Red Carpet Express: a subscription service for those who want a reliable high speed access to the Red Carpet servers.
* Red Carpet Corporate Connect: We modified our Red Carpet updater technology to help people manage networks of Linux workstations easily and to deploy and maintain custom software packages.
* Support and services for the GNOME desktop and Evolution: Our latest boxed products are our way of selling support services for the various products we ship.
Are you kidding?
Re:Eh...!? (Score:2)
Re:Eh...!? (Score:1)
Say I spend X amount of money developing something, and then try to make that back on service and support.
What happens now is that anyone can offer this service and support; they only have to cover the salaries of the consultants and a bit more. I, on the other hand, must cover the same costs plus the development costs. This makes it impossible for me to compete: I simply can't sell the same thing while having higher costs.
This is exactly why companies in general are structured so that each department must live on its income. Departments _can_ make a loss if that directly makes it possible for another department to make a profit on what it produces. However, for this to work, it must confer some exclusive benefit. Otherwise someone else can sell it at a lower cost (since they don't have to pay lots of cash for the development).
True - Sun and HP sure can pay (Score:1)
Mikael
Re:Hey man nice shot (Score:1)
Re:Its called KDE ye dumbarse (Score:1)
Re:Its called KDE ye dumbarse (Score:1)
Your puny leetle oparateeng seestem makes joo veek und vutile.
Gnoome ees unly a stupidt leetle eemitation oof Mikrosoft Weendows, the mohst powaful veendowing seestem een za vurld.
This isn't about KDE (Score:2, Insightful)
Gnome sucks.
Computers, in general, suck.
This interview wasn't about Gnome; it was about a component model that might be better than the one Gnome currently uses. Although MS doesn't often come up with good ideas, it does employ some extremely bright people; if some of those bright people come up with a good idea, it behooves us to learn.
In this way, perhaps computers will someday suck less.
Bright people at Micros~1 (Score:1)
Anders wrote Turbo Pascal in 1982, and was chief architect of Delphi 1.0 and 2.0
Anders Hejlsberg received the 2001 Dr. Dobb's Excellence in Programming Award [ddj.com].
We need choice (Score:1)
Portable.NET vs Mono implementation (Score:2)
Portable.NET has a different focus to Mono. Writing the compiler in C has two benefits: speed and bootstrapping. A well-crafted compiler in C will always be faster than one written in a garbage collected language, no matter how good the JIT is.
Bootstrapping is also easier with a C compiler: anyone with gcc can install Portable.NET and get it to run on their system. To bootstrap Mono, you have to have Microsoft's system installed.
There are many people who don't have Windows or don't want Windows. They then have to install the binary version of Mono. This introduces a security problem: you have to trust that the binary is correct, because you cannot guarantee that the published source matches the binary. With Portable.NET, if you trust your copy of gcc, and you can't find any backdoors in the code, you can trust your copy of Portable.NET.
In reality, it comes down to preference: I prefer to write compilers in C, because I believe that is the best language for writing compilers. Miguel has a different preference.
Rhys Weatherley - author of Portable.NET [southern-storm.com.au]
http://www.southern-storm.com.au/portable_net.htm
Re:Portable.NET vs Mono implementation (Score:1)
In reality, the reverse of what you state is true - a traditional static compilation is at best no better than what JIT compilation can achieve. There is no information available to a traditional compiler that is not also available to a JIT compiler, so why would a JIT compiler ever do a worse job? However, the JIT compiler has the added bonus of being able to bring runtime analysis to bear.
Garbage collection is not really the issue - it's orthogonal to JIT compilation: (1) you can build statically compiled systems with GC, (2) for some purposes GC actually turns out to be faster. (Quick question: which takes longer, allocating from a GC heap, or allocating from a non-GC heap?) GC has different characteristics, but it's just not right to say it is simply slower - for some applications it's faster, for some it isn't.
Jeff Kesselman's book 'Java Platform Performance' makes very interesting reading, and debunks many of the performance myths about GC and VMs that abound.
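The answer to the quick question above: allocation from a compacting GC heap can be a single pointer bump plus a bounds check, while a general-purpose `malloc` typically has to search free lists for a suitably sized hole. The toy bump-pointer arena below is invented here to illustrate that point only; a real collector reclaims space by compacting live objects and resetting the pointer, which this sketch omits:

```cpp
#include <cstddef>
#include <cstdint>

// Toy bump-pointer arena: the allocation fast path a compacting GC
// enables. Allocation is O(1) regardless of heap fragmentation history.
// This sketch never frees; a real GC compacts survivors and resets top_.
class BumpArena {
public:
    explicit BumpArena(std::size_t size)
        : buffer_(new std::uint8_t[size]), top_(0), size_(size) {}
    ~BumpArena() { delete[] buffer_; }

    void* allocate(std::size_t n) {
        // Round up to 8-byte alignment, as most runtimes do.
        n = (n + 7) & ~std::size_t{7};
        if (top_ + n > size_) return nullptr;  // would trigger a GC cycle
        void* p = buffer_ + top_;              // the entire fast path:
        top_ += n;                             // one bump of a pointer
        return p;
    }

    std::size_t used() const { return top_; }

private:
    std::uint8_t* buffer_;
    std::size_t top_;
    std::size_t size_;
};
```

Whether this wins overall depends on collection costs and allocation patterns, which is why the parent post says GC is faster for some applications and not for others.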
Re:Portable.NET vs Mono implementation (Score:1)
And the fact is that C is rarely reused for the same reason every language is not reused -- it takes longer to find and understand the existing code than to rewrite it from scratch.
dammit anyways (Score:1)
You wouldn't see Linus Torvalds making a decision like this. Linus seems more concerned about principle than money.
Re:dammit anyways (Score:4, Insightful)
I hope you are trolling, but you probably aren't.
Miguel is building Mono because he A) thinks it is cool, B) thinks it will probably be popular, and C) knows that Microsoft did much of the hard work of designing and documenting the system :).
Basically, GNOME has always been about being able to reuse GNOME libraries and components in your language of choice. That's a pretty darn good goal, but it is definitely trickier than it looks. Microsoft and Miguel have both come to the conclusion that the easiest way to solve this problem is via a virtual machine.
Basically, it would allow Python hackers like me to reuse any Mono component using a simple:
import foo
Not only that, but Perl hackers could then import my Python package using a simple:
use bar;
These packages would likewise be available from any other language that had been ported to the CLR. Now, that's some pretty cool stuff.
The fact that Microsoft sponsored .NET, and that they have tied the CLR and the virtual machine with a lot of tech that is basically evil (Passport and Hailstorm), doesn't mean that the idea behind Mono isn't pretty cool.
When all is said and done, Mono will probably be compatible with .NET in the same way that gcc is compatible with Visual C++ (i.e. not very), but that's still good, because it will give GNOME hackers another tool. Miguel's canonical example is reusing an XML parser. Such a thing isn't really possible with Bonobo, but it will be possible if the XML parser is written as a Mono component.
Personally, I am content using a mixture of Python and C, but the idea behind Mono is intriguing, never mind who wrote the specification.
Re:dammit anyways (Score:1)
Re: (Score:2, Interesting)
Re:C# compiler!?!? (Score:1)
I guess it depends on the advertising focus of the company you teach for. If all the C# instructors in your company are unenthusiastic about the technology, then it's no surprise these courses aren't selling - courses delivered by someone who dislikes what they're teaching about aren't going to be worth attending. But where I work there's a great deal of enthusiasm for this technology (and also for Java), which is presumably part of the reason why we're selling a lot of these courses right now.
Bonobo, heh heh (Score:2)
how fucking lame is this? (Score:1)
Bonobos are actually a species of monkey with hyper-sexual behavior.
Ape.
Your comment violated the postersubj compression filter. Comment aborted.
(more crap to get past the l***ness filter)
(yet more crap)
(this is my fourth try now)
god DAMN you people!
grrr...
(Use the Preview Button! Check those URLs! Don't forget the http://!)
Chimpanzee not monkey (Score:1)
Re:Chimpanzee not monkey (Score:2)
You're the one who wanted to get technical...
Re:Chimpanzee not monkey (Score:1)
location does not a species make
appearance does not a species make
If they can have kids they are the same species. Wolves & Star Trek notwithstanding.
Have we missed the point? (Score:1, Interesting)
I agree that, technologically, both
In that system, I could build my OS from a custom set of components to create an OS optimized for my purposes. I could even have multiple "themes" that I had designed to accomplish certain tasks, and switch between them without a reboot. For instance, I could have my web-browse/email/irc theme (optimized to the kernel level for that type of work), or my multitrack digital audio recorder theme (again, optimized to kernel level), or my http/ftp server theme, etc, etc.
With the type of system I'm talking about, application developers could build a web-browser out of their choice of reusable components using a Themebuilder application. I can then download the theme, which defines all of the components I will need to build the application and automatically download them as well. Any components I already have, will of course not be downloaded again. Then, if I don't like something about it, I can open it up in my Themebuilder and switch out the HTML rendering engine. Boom. Mozilla using the IE engine (or whatever). And, this method should be applicable down to the kernel level. Don't like this kernel, switch it out for that other one.
Of course, all of this would have tremendous overhead, decreasing performance. But why not design the system in such a way that the Themebuilder can compile an application into a static image that is efficiently optimized?
And, to the point of free software: build it in as a part of the system (even if an optional one). Create the component packager in such a way that, given the proper command, it includes a full compressed copy of the original source code inside the binary distribution of the component, along with a copy of the GPL.
I guess what I'm saying is that, we seem to be at a crossroads in operating system design. Do we want to keep building crap on top of crap just to make the original crap capable of doing a half-assed job of what it should? Or do we want to put our heads together, think about everything that we and others have learned, think about what we can imagine as the operating system of the future, and make it happen?
Just my two cents. Maybe I'm crazy. And, maybe I'm the one who missed the point. But, I'd love to hear others' ideas of their ideal operating system of the future.
thanks.
Re:Have we missed the point? (Score:1)
Look at the hardware world - here we have genuine reuse of components. You really can go out and buy a black box (or grey, in the case of chips in ceramic cases), read the documentation on how it works, and integrate it successfully into a system. But the amount of work you have to do to accommodate the particular component you chose is huge.
From a high-level point of view, a PowerPC processor does the same thing that a Pentium 4 does. But the details are so different that you have to completely redesign the system if you want to migrate from one to the other. The same problems would plague any attempt to make such low level software pieces as a scheduler 'pluggable'.
Even components that are designed to be plug-compatible suffer from problems. Not all PCI devices work properly in all systems. Even consumer electronics doesn't get it right - I've had to help lots of people get VCRs working properly. The advent of digital TV has made it really hard to get everything working. (I'm British - most people here can receive one of 3 competing digital TV services: cable, satellite or terrestrial. So this is a big issue here now.) It took me about an hour to grok my parents' system and wire it up so that it worked, and I've spent 2 years designing MPEG2 broadcast systems! Did you know that over 90% of people don't use their VCRs to do anything other than play rented videos, because they can't work out how to connect the things up? And yet the interconnects between such components are some of the best-standardised component interfaces in the world! The real problem here is that the conceptual model is overcomplex - unless you understand the various paths down which information flows in this system (which parts are digital, which are analogue, how they are multiplexed, etc.) you're never going to get it working reliably.
So the likelihood of getting a complete computer system to work where everything has been componentised is extremely slim. It would mean that every user suddenly had to be a systems integrator. Anyone who has worked professionally in systems integration will tell you how long-winded and complex the integration process is.
Let's Make Unix Not Suck (Score:2)