GNOME ORBit Ported To Linux Kernel

Lennie writes: "Some crazy people did this for fun; let's hope it stays that way. There is enough kernelbloat already (or have you been able to compile a 2.4 kernel as zImage?). Nonetheless, lots of fun I'm sure." Of all the ridiculous things, this has got to be at least three of them. Actually the worst part is that I kinda could see this as being useful. I think I'm broken.
This discussion has been archived. No new comments can be posted.

  • This isn't an attempt to get orbit running faster by moving it into kernel space. This is a functionality hack - orbit won't run in kernel space, the kernel API will be made available to corba clients.

    I'm not sure if I like it or not - the functionality is cool, but the possibility of destabilization is pretty high, and there are lots of security issues.
  • You don't have a clue what you're talking about. This would make the servers even slower than they already are when running in user space. The whole point of khttpd is to deliver static pages (usually cached) without context switches. This can be done in very little time and makes static serving much faster. Tux goes above and beyond this by allowing dynamic content to be generated using cache objects (or something like that).

    Just because something runs in kernel space does not make it faster; it just means less latency between interrupt and application response. This is why microkernel design kinda sucks: if your NIC driver is in user space, then when an interrupt happens the kernel has to switch to user space and back to kernel space, and this is why performance is very bad.

    Having GUI functionality in the kernel will not make the GUI any faster, it will just bloat (bloat == unneeded shit) the kernel and lead to frequent crashes. I do think that graphics card drivers should reside in kernel space though; that way they can handle dma/agp/interrupts without having a separate kernel module, and they don't have to worry about using a lock to make sure that only one application will use the graphics hardware at a time. I think that only the bare minimum functionality to exploit the hardware is necessary though.

    Matt Newell
  • Actually, embedding a JVM (or at least major chunks of it) in the kernel would be a great idea -- you'd have the whole Java security model to insulate you from sketchy code, and Java apps could get near-native performance quite easily. Of course, in 95% of cases you'd probably only want to compile it as a module, but either one would be an amazing step for cinching Linux's dominance in the entry to mid-level server market.

    Screw CORBA -- give me Java, and I can have RMI-IIOP, SOAP, etc., and much better security, portability, etc. And, it'd be something that no other OS out there has right now -- not Solaris, or the IBM *NIXes, or Tru64, or Win2k...actually, MacOS-X might have it; I'm not really familiar enough with its kernel architecture to be able to say how deep the Java integration goes.

  • I am beginning to hate technical discussions on slashdot because people read a few comments here and there and think that they actually know something.
    Mach-like kernels attempt to address this problem by dividing the kernel into "servers".
    I love the idea of a microkernel, but it is inefficient by nature and this has been proven time and time again.
    Other kernels address it by using languages that have better support for modularization (SML, Java, etc.)
    Modularization can be done very efficiently and fairly easily in plain C. I think that C++ could be used in the kernel, but misuse of its features would lead to disaster.
    Matt Newell
  • An ORB should be part of the kernel; HTTP should not. HTTP support should be implemented as an object that is installed, as needed; the kernel should not need to be recompiled to change its functionality.

    Should NFS run entirely in user space as well?

    In an ideal world, the pure Platonic form of the ideal kernel would not have HTTP. However, in the real world, where mundane things such as performance intrude, there are often good reasons for such compromises. A user-space HTTPD incurs a performance hit as data needs to be copied between kernel space and user space. A kernel-space HTTPD can just send the file buffers out through the network. (Which is why Windows NT is apparently faster than Linux at some web serving tasks.)

    The only places where optimisation does not intrude on elegance are purely academic proof-of-concept projects, which never have to see the real world.
  • Here's what you do to get a bootable install on an ATA/66 (HPT366 in my case):
    • Install your favorite distro on the secondary master ATA/33 device (hdc)
    • Install your favorite kernel version with static support for your particular controller
    • I used 'boot from offboard chipset' or whatever option in the kernel IDE config, but that may not have been necessary.
    • put your kernel in the right place and set up lilo to boot it. Keep in mind that once you do the following BIOS steps your system will call your primary ata66 master hdc, which is how you built your system ;)
    • Move your drive over to your primary ATA/(66|100) master
    • Disable your secondary IDE controller in your BIOS
    • configure your 'external' bootable device into your bios boot list and set the external device as the ATA66 (rather than SCSI). You can see that I did this on an abit board ;)
    • reboot


    You should then get your lilo prompt and be able to boot directly onto your ATA66 drive. Then go into root and hdparm your system til it bleeds ;)

    btw, my ATA66 hdparm settings for a Maxtor 30GB 7200RPM HDD:

    hdparm -c1 -d1 -W1 -m16 -X68 /dev/hdc

    (somewhat aggressive, but it is a toy bp6 w/2x400 oc'd to 533 ;)

    [root@server linux-2.4.0test10]# hdparm -tT /dev/hdc

    /dev/hdc:
    Timing buffer-cache reads: 128 MB in 1.11 seconds = 115.32 MB/sec
    Timing buffered disk reads: 64 MB in 2.76 seconds = 23.19 MB/sec


    (btw, the sensors package really rr00xx on the bp6...)

    Your Working Boy,
  • It's scary: Win2K uses more than 15MB of non-pageable kernel memory. That's just wrong.

    Regardless of what Win2K does, I have a problem with this idea of impeding technological progress by placing arbitrary limits on something like kernel size. Why is 15MB bad while some lesser amount is OK? All you're really saying is that from an economic perspective, right now, you consider that OSes should require less RAM for their kernels. Back in 1985, you would have presumably been criticising any OS that took up more than, say, 100K of memory, and I would have been similarly pointing out the flaw in your thinking.

    Don't fall into the trap of spouting variations on the statement "640K should be enough for anyone!" (alleged Bill Gates quote.)

  • While this may not be exactly what you're looking for, you might try DS3 [uni-oldenburg.de] for modem sharing. Or, something that I suspect is more what you're after is MSREDIR [asymmetrica.com], a serial port redirector that is designed to let multiple people share multiple modems (an implementation of RFC 2217 [faqs.org]).

    As for sound card sharing, while I haven't looked into this very much, it is probably not difficult to do it using the EsounD daemon [tux.org]. I think I may have tried this before using xmms to play across the network to a remote host. Quite cool if you ask me.
  • I'm not sure I understand why people are so critical of projects like this. It's not like you would be compelled to include this stuff in your kernel, so how does it affect stability?

    On the contrary, you would have to grab the patch (since it's unlikely to be part of the kernel tarball), apply it, find the config option to enable it, and rebuild. If you've done all that, it's because you want to include it, probably with a very good reason.

  • I have yet to see any good arguments why this wouldn't be a Great Way to make the kernel more modular. Does someone have an answer?

    Well, at least as it stands ...

    Security is completely unimplemented. Someone could use corba interfaces to read any file on your system, for example (if the CORBA-FileServer module is installed). Thus, this is really more for prototyping and development than actual real world use.

    I don't know how difficult it will be to secure this, but I suspect it will be a daunting task.

  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Sunday December 10, 2000 @06:59AM (#568926)
    > The initial reaction here seems to be that this is a bad idea. But what's wrong with bringing an object request broker architecture to an essentially monolithic kernel?

    Nothing, but they didn't bring an architecture to the kernel, they just ported ORBit to it. Grafted it in with duct tape and baling twine, really. I don't even see anything valuable coming out of this as a side effect, such as a generic userland/kernel IPC interface like STREAMS or FreeBSD netgraph, just some libc compatibility macro hacks ... *shudder*

    Oh well, everyone's entitled to their own fun projects.

    --
  • Look, I even found more stuff for you! Check out mdmpoold [muehldorf.com]. From the website:

    mdmpoold simulates the serial port of your linux box as a serial port of your windows box over a network. Therefore your windows programs can use all types of serial devices connected to or simulated by your linux box.

    Nifty!
  • I hope you're not suggesting we embed the JRE into the kernel! That would be grotesque, despite the niftiness... No! No niftiness! Don't tempt me! Back!

    Heh!

    But you've got it wrong - don't embed the JRE in the kernel. Rather, build the kernel on top of a VM. Of course, it would have to be a better VM than the JRE, otherwise you'd just end up with JavaOS - it would need better support for low-level operations, and a number of performance issues would have to be addressed - e.g. support for offset-based method dispatch a la C++ vtables, in addition to the more dynamic runtime lookup (for all I know, MS .NET does this - scary thought.)

    When, in a few years time, you're running a 2GHz Crusoe using its native microcode with 1GB of post-Rambus high-bandwidth RAM, hooked to the Internet over gigabit fiber, this won't seem so farfetched.

  • You can now write device drivers in perl.
    Well that sent me screaming and running..
  • hey! Can you see me through this thing?
  • That's the NT way and total madness.

    You saying that "as long as the pieces are stable" is like saying "as long as people won't lie". An admirable goal but it won't happen.

    Kernel should be small, clean and fast. That's the only way to make a stable system. Kernel should be only a very thin abstraction layer over the bare metal. All the rest of the OS (drivers, shells, GUI) should be in the user space.

  • The theory about what to put into a kernel is "only what is completely necessary, and nothing more". The reason for that is that kernel code runs with special privileges, and if it gets out of control it can take down your whole system. Therefore we want to put as little as possible in the kernel, so that we have less chance of the system crashing. So if something can be done in user-space, that's where it should be done.
  • You know, if we were all running the GNU Hurd and this was just another server, it would actually be pretty useful. :)
  • Although I have a hunch that this project is still immature, this could open a new era in scalability.
    Suppose system calls get standardized, the way 'onclick' in HTML is standardized; that would mean an Apple could communicate with a PC without much trouble.
    CORBA at the kernel/low level means new infrastructure, applications could follow, and Linux becomes more scalable than it already is!

    No, I don't think these people are crazy.

  • > Besides, not everyone wants to use GNOME... Not to mention, the benefits of including GUI components in the kernel are unproven at best.

    Right, and it's a good thing that this hack doesn't do that in any way, shape or form.

  • by ErikZ ( 55491 )
    What the hell is 'Gnome Orbit'?
  • so is OmniORB and TAO and Visigenic
  • The endo-kernel, however, runs its OS in micro-kernel userspace processes

    Actually, the Hurd is disturbingly like that... it pushes all kinds of functionality traditionally relegated to the kernel into user space, so individual users can run their own filesystems and the like. I think RMS likes the idea because it allows users almost complete freedom to hack their environments, right down to the kernel itself. Personally, when it comes to OS fun, I'd much rather buy a machine of my own than time-share the Hurd with a bunch of maniacal user-space kernel hackers...

    Vovida, OS VoIP
    Beer recipe: free! #Source
    Cold pints: $2 #Product

  • 640K for a kernel? That's huge! I'm not against progress. What I am against is putting features into the base system. The problem is that the kernel should only contain features everyone uses. I'm against putting stuff like NFS in the kernel (although that's probably the only place to put it), much less something like ORBit. Technological progress is fine, but do it in userspace.
  • Why put something in the kernel that doesn't need to be in the kernel, and is just going to open a bunch of security issues later on.

    Anyone who's read "Secrets and Lies" knows what this is leading into. Linux becoming as bad as windows.
    Cheers.
  • I agree, does GNOME ORBit implement the whole CORBA spec?

    I have yet to see any good arguments why this wouldn't be a Great Way to make the kernel more modular. Does someone have an answer?

    On a side issue, I for one, would not hesitate to pay in speed if it means that the kernel becomes more flexible.

    Of course, we could always base it on SOAP [microsoft.com] given that we already have such great http support in the kernel...

  • Damn, you're right. Win2K does use about 2MB of non-pagable kernel memory. I saw the 20MB kernel memory, remembered that BeOS and Linux kernel memory is nonpagable, and forgot that NT runs non-kernel apps in kernel space.
  • They've already got a Web server [demon.nl] in the kernel... Next step is to put a GNOME-enabled Web browser there. Whee!

    --

  • by pb ( 1020 ) on Sunday December 10, 2000 @02:50AM (#568945)
    That is very cool.

    With this, and khttpd and the frame buffer support and just a few other patches, I might not have to run in userland ever again!!

    Just like DOS!!! ;)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • It wouldn't be the first OS to more closely tie the GUI to the kernel.

    That's true. Don't you love it when Java Swing causes your entire NT box to hang? Let's emulate that on Linux as well.

    There's a lot you can't do with a GUI in UNIX because of the rift between userland and kernelland.

    Like what?? (besides another 3 FPS in Quake)

  • Give me some details and URLs, please!
    Thanks!

    Szo
  • ORBit never has relied on GNOME. It uses glib, but that's it. It is already decoupled from any GUI. It is just a fast CORBA ORB.
  • BeOS has the GUI entirely in userspace. Hell, the better part of the driver runs in userspace. In QNX, the driver runs ENTIRELY in userspace; it's just another process. Userspace is not what is causing bad Linux GUIs, it's the fundamental problems of X and non-user-oriented design. Before trying to put stuff in the kernel, think about how much faster stuff would be without X. About how much cleaner stuff would be if GNOME were designed to be simple rather than kitchen-sink complex. The real problem is that Linux GUIs are doing "cool" things as opposed to useful things.
  • So what we're doing here is making life in userland harder... just because you're an OK app programmer doesn't mean you're an OK kernel programmer. It's this type of hackery that made NT4 so.. uh, stable. "Move it to the kernel!" We're on crack if we think Linux will be immune to the stability issues that Windows has just because we're open source.

    Plus, since Linux (the kernel, yes) doesn't ship with a kernel debugger, how the heck can we expect people to submit bug reports?
  • The kernel is in non-pageable memory. As such, if your kernel takes up 20MB of RAM on a 128MB box, that's 128-20=108MB of RAM that you actually have. It's scary: Win2K uses more than 15MB of non-pageable kernel memory. That's just wrong.
  • $ insmod gnome
    using module /lib/modules-2.4.0/gnome.o
    $ insmod evolution
    using module /lib/modules-2.4.0/evolution.o
  • Some people might say that ORBit wasn't the best ORB out there. If you're going to bloat the kernel, at least do it with a nice implementation.

    mutter... OmniORB ... mutter.

    -------------------------------------------

  • This coming from a group of people who think C++ is too bloated for the kernel? Yes, I can see it now, the VFS written in Perl! Take that Verio.
  • by jilles ( 20976 ) on Sunday December 10, 2000 @02:58AM (#568955) Homepage
    The initial reaction here seems to be that this is a bad idea. But what's wrong with bringing an object request broker architecture to an essentially monolithic kernel? It seems to me that if it can perform well, the added modularity might actually be a huge step forward and might be a nice alternative next to the existing module architecture (sort of a primitive object request broker). One of the immediate advantages is that C is no longer required (but still allowed) for doing kernel programming.

    But then, i'm not a kernel hacker.
  • by Anonymous Coward
    When I first heard of this project, I knew the /. crowd would get their panties in a bunch. Good heavens! Someone trying something really different and (gasp!) possibly innovative? Can't have that.

    I'm not saying this ORB-in-the-kernel thing is a good idea or a bad one. My point is that in open source aren't we supposed to be open minded and experiment with software, let it evolve, and use the best ideas to build a better system? Or do we want to stay in our little niches and iterate over things like the KDE/GNOME "war" a few thousand more times?

    I'd much prefer to see some oddball projects like this one pop up, if only to make people think about other possibilities. I give the people on this project credit--I never would have thought of doing this with an ORB.

  • it is 'GNOME ORBit' :)
    Seriously, it is a very high-speed implementation of the OMG CORBA specification, created to give the GNOME project a very fast CORBA implementation. You can find more information on the RH Labs [redhat.com] homepage.
  • Actually, Linux is a DOS App. I know, because Loadlin is a protected-mode program that runs Linux from DOS! DOS lets the program do whatever it wants. I mean, it's the running program, right?
    Yeah, but you're expected to be able to return to DOS after playing the game. Which causes no end of annoyances... Saving the pointer to the INT9 handler, hmph
    In other words: Linux is a buggy DOS app. It can't even return to DOS.
  • Where rbzImage stands for Really Big zImage.

    When will mozilla be included in kernel?
  • One of the fundamental units of a Microkernel architecture is a uniform network/port based communications subsystem.

    That's one way of implementing a microkernel, but it's hardly the defining feature. A microkernel is simply a system in which only minimal functionality is placed in the kernel. As a result, you need a better IPC mechanism than is present in most monolithic kernels, but as long as it's effective, it can take any form.

    I don't know a lot about it

    So perhaps you shouldn't be stating opinion as fact, hmm?

    but it has NOTHING to do with CORBA.

    One can implement a microkernel using CORBA as the primary means of IPC.

    Certainly CORBA services might run as any other Microkernel service, but this would be horrendously slow to use as the primary means of inter-process communication.

    I suppose you can provide us with a link to a paper where this has been thoroughly examined? Or are you just spouting nonsense again?

    If the ORB is in the kernel (if it's not, then it's not really the primary means of IPC now, is it?) and designed to take advantage of that situation (through the use of virtual memory manipulation and direct access to the address space of both processes) then it should be able to provide adequate performance, and a small performance loss can easily be made up for with a dramatic increase in flexibility.

    Even if you don't want to use RPCs, you can still use things like CORBA to formalize the structure of the messages that you pass and abstract the details away from the programmer.

  • > You suppose that it should be done without
    > showing any obvious reason, or what the benefit
    > would be?

    Err, joke?
  • linux isn't bloated. even if it is, you can unbloat it by not compiling in what you don't need. the kernel can be made relatively small.
    i don't see how the 2.2 kernel is more unstable than the 2.0 kernel, or the 2.0 kernel more unstable than the .99 kernel.

    it's gotten bigger, to better handle things like SMP, but i wouldn't call that being bloated. the solaris kernel is far bigger than the linux kernel and i haven't heard anyone say that's bloated. it's also far smaller than the win2k kernel.

    i also don't see a lack of direction with linux, they keep making the kernel better for SMP, small appliances, workstations, networking, you name it.
    just like windows.
  • Can't be my comment, because I didn't say anything that disagrees with what you wrote.
  • Actually, the whole point of the HURD is so that the application developers _can_ do gee whiz stuff. Like implementing filesystems. The HURD allows code NOT PREVIOUSLY POSSIBLE WITH OTHER KERNELS.
  • by Phil-14 ( 1277 ) on Sunday December 10, 2000 @07:35AM (#568968)

    "So, Gnome, what are we going to do tonight?"

    "Same thing we do every night, Pinky, try to take over the kernel!"

  • Ok, i'm gonna borrow a page from Linus' playbook and ask,

    Are you on drugs?

    No, really, what part of "i came to Linux to get AWAY from Windows" don't you understand?

    Besides, not everyone wants to use GNOME. In fact, many, many people don't want to use GNOME. You know what else? Many people (brace yourself for this one!) use Linux purely in console mode.

    Not to mention, the benefits of including GUI components in the kernel are unproven at best. While there may be some performance increase (though this will most likely come from taking cycles away from other things, i.e. no net gain), there is the UGLY reality of having a kernel that will crash when the GUI screws up. Not to mention the problems with a larger kernel (i.e. not being able to use zImage, which means precluding a lot of architectures from using Linux).

    So yeah, i paraphrase the movie "Billy Madison" when i say, "We are all now dumber for having read that."

    -benc
  • Um, Unix wasn't designed for speed. Fast != good. Doesn't matter how fast your whizbang kernel is if everything breaks every revision because there are no safe and clean interfaces (even if they are not published) between components. These days performance is cheap... handling complexity is the real problem. I think we will see, and are already seeing, the Linux kernel become more modular purely out of the weight of the great amount of complexity a humongous monolithic kernel supporting so many features has. Don't program to the computer, program to the human (developer, administrator, or end-user). Clever hacks die fast.
  • I would be eager for the HURD, except...

    The HURD is not designed for speed. That immediately makes the OS crap in my view. I would really like it if the HURD was sort of like an OpenSource QNX -- fast, robust, elegant -- but it's turning out to be decidedly not. That's why I don't really see what the HURD brings to the table. It's not like no other OS runs drivers in usermode (several do), it's not like no other OSs have a translator-like filesystem (Plan9 seems to), it's not like no other OS offers UNIX compatibility in a microkernel (take your pick). What exactly does the HURD have that hasn't been done already?
  • Those examples are pretty cool and I must admit, I had never heard of them when trying to solve my modem problem. However, they're still one-shot examples that solve a very specific problem (i.e., share a serial port with another machine). What I was wishing someone would address is a generic way of sharing *any* device remotely. The device driver interface is extremely simple (open, read, write, ioctl, close), so the technology needed to provide that remotely should be trivial. Loose ends would need to be tidied up (like locking devices that need to be locked and the such), but I'm *sure* it's a solvable problem. I think what this person has done is a step in the right direction towards making hardware as generic and universally accessible as anything else. But that's just my opinion.
  • Right now a whole field is open to integrate such things as office applications and other stuff tightly into the kernel. I advise starting with a browser. We glue it to the kernel and give it to users. Users will have to use it no matter their likes or dislikes. And so we kill all this madness of distros, alpha-beta-gamma versions, several apps for one purpose. We will start to unify Linux into One Total System. And fight M$ for World Domination.

  • by Galvatron ( 115029 ) on Sunday December 10, 2000 @04:32AM (#568980)
    I haven't really looked much at the kernel, but putting aside for a second stability and security issues, is a large kernel really so bad?

    If there actually is bloat, in the form of unnecessary or poorly written code, that's unquestionably bad, but if there's simply a lot of cool things being put into the kernel, that doesn't strike me as bad. You can always recompile, thereby stripping your kernel down to just what YOU need.

    I know, I know, the average desktop user isn't going to have the skills to recompile a kernel, but that's okay. Whether a user's kernel is 2 megs or 200 megs, they've probably got enough HD space to fit it. The situations where kernel size matters are probably going to be small devices (PDAs, palmtops, etc.) and old devices (486). Small devices generally have customized OSes anyway, so you can expect that the manufacturers will take care of that. Old devices are probably better off running older software, so I'm not too worried about them.

    I probably missed a few things, again I'm not a kernel expert. Plus, the security and stability issues are not trivial, and are a big strike against a project like this. But, any other reasons why this is bad?

  • What happens when the ORB itself is part of the kernel and the entire kernel is getting beaten to death?

    This is no different than a TCP SYN attack. This ties up kernel resources for allocating connection overhead (see the TCP SYN-flood protection kernel option). Quite a few DoS attacks exploit communication stack errors, which are part of the kernel.
  • Well, I mean, to be honest, you don't have to run it.

    I would just advise you that rather than bitch and complain, just don't run it. Problem solved, no need to get worked up.
  • Just how does squirting a serialized display protocol across a UNIX domain socket slow down X any worse than GDI (the Windows low-level display protocol)? I can't speak for how Be handles display events, but if it serializes its display protocol it's no different.

    True: X has some brain damage when it comes to supporting complex shapes in the protocol. And certainly the current crop of free X servers need hardware accelerated alpha channel blending and font/glyph anti-aliasing. But that's not a problem with X inherently, and all these problems can be resolved by adding X protocol extensions.

    There were certainly better network display protocol designs before X; NeWS comes to mind. Display PostScript is yet another, though that was released after X. Hardware-accelerated anti-aliasing and alpha channel support, I believe, is slated to ship with XFree-4.0.2. Then the userspace widget libraries will have to support those features, which will probably take another year to sort out. On top of that we'll need better fonts designed, which is really the crux of the problem, since the default fonts which ship with X are just terrible. Maybe by then the GNUstep team will have released their Display Ghostscript X extension, which IMO is the best way to go.

    None of these solutions will have much impact on speeding up the core X protocol. Mostly because on a local system it's pretty damn fast.

    Cheers,
    --Maynard
  • write your web, nfs etc service as a corba control and let it run in userland.
  • Maybe everyone should first agree on what is bloat?

    Certainly I would call inefficient coding, poor design, poor choice of algorithms, lack of performance tuning -- all bloat in my book. But bigness per se doesn't mean bloat.

    One man's bloat is another man's feature.

    But then on the other hand, I don't want 200 MB word processors either -- even if the word processor can do equation layout, image processing (a la Photoshop), flowcharts, etc.
  • You suppose that it should be done without showing any obvious reason, or what the benefit would be?

    At least the people porting ORB into the kernel expressed some useful reasons for doing so (even if they are just for experimentation).
  • Regardless of what Win2K does, I have a problem with this idea of impeding technological progress by placing arbitrary limits on something like kernel size.

    I don't mean to flame you, but you sound like a PL/I or APL programmer thirty years ago. "I have a problem with this idea of impeding technological progress by placing arbitrary limits on the number of keywords a programming language should have."

    Bigger ain't necessarily better, as every C programmer knows.

    Back in 1985, you would have presumably been criticising any OS that took up more than say 100K of memory, [...]

    I still am. Seriously. 100K is still too big, except possibly in the case of distributed operating systems, where you usually need networking in the kernel, otherwise the paradigm isn't practical. (More research should remove this requirement, though.)

    The flaw in your thinking is the fallacy that more "technological progress" in operating systems requires more non-pageable kernel memory. Real technological progress requires working smarter, not harder.

  • I'm not sure what CmdrTaco's problem is, but this idea isn't all that bad. I'm not convinced this is the right approach yet (been burned by old CORBA stuff too many times), but the concept is definitely a step in the right direction. I've often wondered why nobody has done this before. By "this" I mean constructed a remote interface mechanism at the device level. There are higher level remote interfaces that allow sharing of "resources", but these are all too specific to be useful across the board. As an example, I can share my printer(s) with Samba or lpd but why can't I share my modem or sound card in the same way?

    Here's a real life example of why this would be useful. I have a linux gateway in my house. It contains my only modem. I have three other PCs running a variety of OSes. From time to time, either the wife or I *have* to use Windows to dial in to work or to fax a word doc or whatever. I would LOVE to power up my windows machine, connect across the network to my modem in the gateway, and control it as though the device were local. Why is that a hideous hack thing to do? To me, it sounds like a natural extension of the shared, networked architecture of UNIX and X as they were intended.
  • Perhaps I'm just unaware, but is there a free OS (other than Hurd) that strikes a better balance (than Linux) between efficiency and the other things, such as maintainability, ease of implementing new drivers, daemons, etc.?

    And implicitly, I also mean, that has any actual resources committed to it?
  • Really? I think that most of the kernel in Win2K is pageable, and that it is only about 2 megs that is locked. I have a malfunctioning machine that forgets how much memory it has once in a while, and I've run Win2K on it when it thought it had just 16 megs. I was actually surprised at how well it ran; it wasn't until I opened Photoshop and loaded an image that I realized it had lost most of its memory.
  • That's fine, and I don't really disagree - my point just was that I don't think it's valid to say that "a kernel which takes up 15MB is wrong".

    In any case, assuming the monolithic kernel survives the next decade or so, it's only a matter of time before ORBs, virtual machines and god knows what else make it in there, because everyone will be using them. The 90MB kernel isn't that far off, mark my words! ;)

  • You know, things like khttpd and that sort of thing (I'm sorry, but a Web server is no more an integral part of the OS than a Web browser).

    Well, only because somebody hasn't thought of a good reason to have a web server in the kernel.

    (Reaching WAY up into my rear end here) Suppose the Linux-on-a-chip people get really motivated and make a controller for devices in the home. With IPv6, every light switch and thermostat has this LinuxChip in it with khttpd. A central control (with more juice than the slave devices) pulls data from each device through a bastard child of wget, and you (at work) can browse to myhouse.com (or whatever) and see what the settings are, and change them if desired.

    Taken to the next level (taking into account here that I know next to nothing about CORBA), these chips also have kORBit compiled in as well. Now devices from different vendors can pass objects back and forth, with the master controller using that information to whatever end.

    I dunno -- I hesitate to write it off as crackpot until I see what it can do. khttpd and korbit are pretty cool hacks, and now that they're "in the wild", we might be surprised at what comes of it. Maybe nothing, maybe the Next Big Thing.

    (Regardless, looks like I need to look into this whole CORBA thing -- I thought it was a flash in the pan, but apparently a lot of people are taking it pretty damn seriously)
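    (For the curious, the "thermostat with khttpd in it" idea is easy to prototype in userland. Below is a hypothetical sketch of a device exposing one setting over HTTP; the field name and port are invented for illustration, and a real device would obviously drive actual hardware.)

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "smart thermostat" state.
SETTINGS = {"target_temp_c": 20}

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report current settings as JSON -- what the "bastard child
        # of wget" above would pull from each device.
        body = json.dumps(SETTINGS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        # Accept a JSON body of settings to change.
        length = int(self.headers.get("Content-Length", 0))
        SETTINGS.update(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the toy server quiet

# To run it for real: HTTPServer(("", 8080), DeviceHandler).serve_forever()
```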

  • I agree with you in saying that, for the time being at least, an ORB in the kernel is probably a special-purpose add-on, which won't be necessary for the majority of users. However, your statements about "the goal and purpose of [an] O/S" are somewhat short-sighted. CORBA is not an 'application', it's a means of allowing applications to communicate in a language and platform independent way across a network. How many *NIX apps already use sockets to make them accessible to multiple 'client' programs on the same machine? Are you suggesting that socket management shouldn't be part of the kernel?

    True, CORBA is not commonly in use today for these purposes, but that is not because it would be unsuitable for them; rather, the technology has simply not begun to establish itself at the LAN and workstation level as a viable option. Huge enterprise installations often rely heavily on high-performance, scalable ORBs to manage communications between legacy systems, user applications, web applications, etc., and distributed computing is becoming more and more a point of interest on the desktop.

    There are a number of things that the addition of (a later, more stable version of) kORBit to the core kernel distribution could offer: instant, painless cluster development; increased acceptance of Linux in enterprise-scale networks (now, your Linux router can scream through CORBA calls, as well...); and, as mentioned in the article, distributed hardware and resource access.

    The world standardized around TCP/IP to satisfy the need for standard network connection and data transfer management, and CORBA looks to be a contender for that role in distributed service and resource management. Just as C++, Java, Perl, and other high-level languages are rapidly supplanting C for most application development due to their increased abstraction, programmer productivity, and portability (more so with Java or Perl, of course), CORBA could one day all but replace TCP/IP for the majority of network applications, since while the data being passed between network nodes will be more complex, access to it would be more general and flexible.

    Please also note that you could replace the word CORBA in the above post with SOAP, or any other distributed network application protocol with sufficient momentum and portability to potentially become the standard. In fact, SOAP (or a twisted version thereof) is going to be a major component of Microsoft's .NET framework, which, for all the hatred and suspicion I have of MS, is likely going to do some fucking impressive things for distributed computing (at least, for those running Win2k and later). Right now, the capabilities of Microsoft software are not significantly ahead of the available free alternatives, but if MS gets a huge head start in this area, they just might be able to hold on to a new proprietary stranglehold on the business computing world.

    We should encourage experimentation with projects like the one described in the article, as a means of keeping Linux and free software as a whole on the cutting edge. This is an area where no clear leader has emerged yet, and where an early lead by free software could make a huge difference years down the road.

    Just my $2x10^-2

  • by Tom7 ( 102298 ) on Sunday December 10, 2000 @04:45AM (#569026) Homepage Journal
    This idea sounds good for:

    1. Remotely debugging device drivers

    2. Security holes

  • No, not a Beowulf cluster, but I wonder if this could be extended, etc, to actually spread a kernel around to a dozen 486's, to improve the price/performance ratio. Also, I wonder if you could do redundancy, i.e. have a failsafe machine with an identical module. Considering I have a couple dozen 486/66's sitting doing nothing (well, besides gathering dust)...

    Hmm, just what I need, another project.

  • by enterfornone ( 7400 ) <anonymouscoward@enterfornone.com> on Sunday December 10, 2000 @02:59AM (#569032) Homepage Journal
    The HURD is apparently going to offer CORBA as the primary means of inter-process communication. And due to the HURD's microkernel architecture, it can probably be done without the bloat of having an ORB compiled into a monolithic kernel.
  • by sabre ( 79070 ) on Sunday December 10, 2000 @12:55PM (#569033) Homepage
    Hey all, here's a few responses to the feedback we've received so far, hopefully this will clear up some of the FUD:

    1. NO, we do not expect this to go into any mainstream kernel any time soon. :)
    2. YES, this is an awesome way to play with and prototype kernel functionality in user space.
    3. NO, this does not work with other OSes. Although, there is no fundamental reason why it cannot be ported... again.
    4. YES, this does mean that if it were ported to other OSes, you could trivially write portable drivers.
    5. NO, we are not planning on porting GNOME to the kernel. :)
    6. YES, SOME user space code can do good things in the kernel, particularly network-centric code. Think kHTTPd or kNFS.
    7. NO, at least not without some redesign of GNOME, this will not make GNOME/bonobo faster.
    8. YES, security can definitely be improved [err, well, ahh, be implemented? ;)]. We have one proposal from Miguel de Icaza on improving the security to the point of NFS. Other schemes could definitely be implemented, we just haven't started thinking about it.
    9. NO, this does not "severely bloat your kernel"; it adds about 150k of space when compiled in debug mode. This is still a very alpha version, btw, and there is still a lot to trim out.
    10. YES, you can now write your device drivers in C++. :)
    Anyways, if you have any other questions, feel free to contact us. [mailto] :)

    -Chris

  • These guys had too much time on their hands. Although they did say it was a school assignment. Good thing they said it's experimental and stated it has no security.
    The mainstream press could blow this totally out of context. Worst-case scenario, the press says this is Linux's next kernel, which it's not. Just an experiment that can be ported in the future if they want.

    Amigori

    ----------
    Its alive!

  • by 1010011010 ( 53039 ) on Sunday December 10, 2000 @03:01AM (#569037) Homepage
    ... but it plays one on TV.


    ________________________________________
  • by DickBreath ( 207180 ) on Sunday December 10, 2000 @08:51AM (#569039) Homepage
    The HURD is not designed for speed. That immediately makes the OS crap in my view.

    Do you mean that they are not taking speed into account at all? Or maybe that they are completely ignoring any concerns for efficiency?

    Or do you maybe mean that they simply are not making speed their number one concern?

    Could it be that speed (or perhaps efficiency), while important, is not the only factor to consider (as so many here seem to think it is) -- that in some circumstances it might be reasonable to sacrifice some efficiency for other factors such as maintainability and ease of implementing new daemons, etc.? The age-old trade of sacrificing some computer resources for human productivity.

    How much of a difference are we talking about here? A few (or some number of) percent, or an order of magnitude?
  • by EngrBohn ( 5364 ) on Sunday December 10, 2000 @03:01AM (#569040)
    It's an interesting experiment in creating a distributed operating system, but it's certainly nothing to put on your production system. Give it a few years (assuming people are willing to actually explore this) and it just might be something Tanenbaum would put in his next book (or maybe not). I also gotta wonder what the performance hit is when the module you want is on the same machine that's requesting it, vs. the non-CORBA kernel (obviously if the module is on another machine, you've got to deal with network latencies).
    cb
  • We already have a web server, an nfs server, and others in the kernel...

    We do this for performance reasons - a user land program can't get anywhere near the same perf. as being in the kernel...

    But honestly, programs don't belong in the kernel! (I'm not even going to touch upon the possibilities of a program in the kernel having a sploit...) Why don't we just improve the methods of userland programs communicating with the kernel so they can have performance as good as (or so similar it doesn't matter) being right in the kernel? In the long run, wouldn't this be a better way to go?

    P.S.
    I don't do kernel coding, so please don't tell me to do it myself. You really don't want me touching your kernel anyway! Or perhaps it's just not possible; but then how does the Hurd have acceptable performance? Or does it...

  • I'd be fine if the diff was a few percent or so, but it's not. I'd describe the HURD as glacially slow. While having speed be the numero uno priority is not necessary for all OSes, it should definitely be in the top three. Depending on the OS, the order of priorities should probably be simplicity/elegance, speed, then nifty features. While the HURD nails the nifty part, it misses the other two. Secondly, it's not exactly as if speed and maintainability are opposed to each other. A system designed for speed and efficiency breeds small, simple code without gimmicky features. Do less, do it well, and do it fast should be the mantra for all OS designers. Leave the gee-whiz stuff to the application developers.
  • by vsync64 ( 155958 ) <vsync@quadium.net> on Sunday December 10, 2000 @03:12AM (#569046) Homepage
    This is a cool attempt.. for one thing it shows how flexible the linux kernel is.

    Um... Not really. It's almost trivial to put something inside of something else [indiana.edu], as long as you write good interfaces. And the more 3rd-party code you accommodate, the more risk there is of unstable code crashing the system, or of security breaches.

    If necessary, kernel interfaces to userland programs [linuxdoc.org] are probably the best way to go, but even then you're not necessarily safe [attrition.org]. Remember: try to run code as an unprivileged user at first, then as root if necessary, but only in kernel space as a last resort.

    but it would be funky having device drivers loaded from anywhere using this technology!

    Like Jini [sun.com]? I hope you're not suggesting we embed the JRE into the kernel! That would be grotesque, despite the niftiness... No! No niftiness! Don't tempt me! Back!

    --

  • FreeBSD. That's UNIX.

    PS> If not better, then certainly as well.
  • On the contrary. This might help strip down the Linux kernel to its very bare bones and have all services in user space, communicating with the kernel and each other through CORBA. Having ORBit in the kernel still wouldn't make such a kernel a microkernel, I suppose. Think of all those heated discussions about formalising internal kernel APIs. Having CORBA at your disposal might solve this as well.
  • by evil-beaver ( 15985 ) on Sunday December 10, 2000 @09:31AM (#569052)
    The closer Linux gets to being more like Windows, the more bloated and unstable it becomes. And yet even most Linux users must admit that with every release of Windows, Windows sucks less and less. Linux kernel hackers, aka "people with too much free time on their hands", are only too willing to overload the kernel with unnecessary crap. I'll give credit where credit is due: MS knows that some think Windows sucks, and they actively go out to make it work better. That takes direction, the kind that Linux currently lacks.
  • by StrawberryFrog ( 67065 ) on Sunday December 10, 2000 @05:04AM (#569054) Homepage Journal
    > wouldn't be the first OS to more closely tie the GUI to the kernel.

    I shouldn't have to say this here, but CORBA has nothing to do with GUIs, except that it is a necessary service for Gnome's particular architecture.

    This article is good news because it allows the ORB to be used in non-gui contexts. I'm not saying that it should be part of the kernel, but it should and can be decoupled from GNOME. Modularity. Reusability. Flexibility. These are all good things.
  • Comment removed based on user account deletion
  • While the Linux kernel is very useful, it is also quite antiquated in its feature set, architecture, and extensibility. That's why kernel releases become ever more complex and take longer and longer: everything can potentially depend on everything else, and many kernel extensions need to be recompiled for every new release.

    Mach-like kernels attempt to address this problem by dividing the kernel into "servers". Other kernels address it by using languages that have better support for modularization (SML, Java, etc.). Because the Linux kernel is monolithic and written in C, there aren't a whole lot of choices for how to achieve this. Exploring putting a system like CORBA into the kernel seems like a sensible approach. I don't think CORBA and kORBit is going to be the long term solution, but it may be a good testbed for these ideas.

  • what is CORBA anyway? I haven't been able to figure it out.

    CORBA (Common Object Request Broker Architecture) is a method of decoupling "objects" (read: chunks of code) from their physical location, programming language, and implementation details. What this means is that I can have a CORBA object implementing a particular "interface" that is running on an iMac in London, England and use it just the same as if it were local to my box in Ohio, USA. Obviously there would be network latencies, but the functionality would be identical...

    What's more, the module on the iMac can be written in Eiffel while my program (the one using it) is written in Smalltalk; CORBA doesn't care as long as both computers have a compatible ORB.

    What this is supposed to enable is interchangeable software "parts". For instance, I have an email application that requires an "address book" CORBA object; I would query the local ORB and say "hey, I need an 'address book'". The local ORB would hand me whatever address book was available; it might not be the same address book you use, but if it exports the same interface neither of us has to care. The details are handled within the ORB.

    All of this is very cool, but I'm not sure whether I like CORBA or not. :-)
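    (The interchangeable-parts idea can be sketched without any actual CORBA machinery. Below is a hypothetical illustration in plain Python: two address book implementations share one interface, and a stand-in for the ORB hands back whichever is available. All names are invented for the example.)

```python
from abc import ABC, abstractmethod

class AddressBook(ABC):
    """The agreed-upon interface (what CORBA would express in IDL)."""
    @abstractmethod
    def lookup(self, name: str) -> str: ...

class LocalAddressBook(AddressBook):
    def __init__(self):
        self._entries = {"alice": "alice@example.org"}
    def lookup(self, name):
        return self._entries.get(name, "unknown")

class RemoteAddressBook(AddressBook):
    # In real CORBA this would be a network stub generated from the
    # IDL; here it just fakes a remote answer.
    def lookup(self, name):
        return f"{name}@remote.example.org"

def get_address_book(prefer_local=True):
    """Stand-in for asking the ORB: "hey, I need an 'address book'"."""
    return LocalAddressBook() if prefer_local else RemoteAddressBook()

# The caller never cares which implementation it was handed:
book = get_address_book()
print(book.lookup("alice"))
```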

  • The proper way to fix Linux is to fix Linux, not to hide all the broken stuff under a layer of latency-adding abstraction. Do you ever do performance tests on your ideas or do you just code with the force? Very non-obvious things can have huge impact on performance and therefore usability. Adding a layer of abstraction will not help.

    If you think I'm wrong, look at NT... it's all abstracted. If you want NT you know where to get it.
  • Yes, but Linux wasn't intended to be a microkernel, or even a production-quality OS, or really anything more than something for Linus and his buddies to hack on. It serves its purpose well. Linux is my favorite OS to play with, because it's so mix-and-match and it seems to get drivers for new toys the quickest (except for USB...bah).

    However, what makes Linux so great for this also makes it worse for other applications. FreeBSD, for example, is IMHO much better at being a server due to various technical features of its implementation, as well as the general feel of the OS. It's also probably better for non-hacker users who don't want to go pick out the best fingerd and ftp and whatnot for themselves.

    I don't understand why people don't appreciate heterogeneity. We had a DB guy at work complaining that Linux was "broken" because he was used to Solaris's memory allocation, and he thought that was the One True Way. Yes, Linux's memory allocation can cause kernel panics and OOM errors in a few extreme cases, but the vast majority of the time it's more efficient.

    If you don't like the way a system works and the situation isn't conducive to patches, fork it or choose another system. If you're looking for a microkernel, the Hurd [gnu.org] would probably be a better choice than Linux. And it'll be available for production use RSN!

    The beauty of an open environment is that you can choose the system that best fits your needs, rather than being locked into one system.

    --

  • This reminds me of a joke one of my CS graduate profs told about the "endo-kernel" (a pun on MIT's exo-kernel [mit.edu]). The exokernel is an extreme microkernel operating system that provides direct hardware access for each application. The endo-kernel, however, runs its OS in microkernel userspace processes for protection and runs user apps in the kernel space for performance! ;-)


  • by Mercster ( 39663 ) on Sunday December 10, 2000 @03:38AM (#569074)
    I know most people will shudder, but this is possibly great news for Linux on the desktop.

    If more and more of the infrastructure of a desktop environment (like GNOME) could be moved into kernel space, it wouldn't be the first OS to more closely tie the GUI to the kernel. As long as all the pieces are stable and the whole operation is well thought out (admittedly, not a trivial expectation for Linux code), it would surely mean a more integrated (and speedy) desktop. There's a lot you can't do with a GUI in UNIX because of the rift between userland and kernelland.

    Imagine... GNOME/Linux on store shelves, with a custom kernel patched to take advantage of this. GNOME bigots, I can hear you gagging, but no one really cares ;-).

    Mercster
  • That actually *is* pretty funny. :)

    The sad thing is, I remember some of the old "performance tricks" we'd do in DOS.

    Like, if you've got a tight loop that you want to run quickly, make sure you put some inline assembler around it...

    cli

    sti

    That way, you aren't bothered by those useless "interrupts" that the Operating System is always doing. I mean, really, what good are those? ;)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • by Millennium ( 2451 ) on Sunday December 10, 2000 @06:30AM (#569081)
    I don't think a kernel ORB would be bloat. If anything, it could lead to a *de*-bloating of the kernel, because it would allow us to remove things that really don't belong in the kernel. You know, things like khttpd and that sort of thing (I'm sorry, but a Web server is no more an integral part of the OS than a Web browser).

    In addition, it would allow for a much more robust and powerful way of extending the kernel. This is a Good Thing, because the componentized architecture makes sure that this can be done without introducing instability. This is less of an issue in an Open-Source kernel than it is in a closed-source one, of course, but it's still an important advantage that should not be overlooked.

    ORBit itself has the advantage of being small. This is a big thing, since it minimizes size bloat. Its performance is also pretty good (though it could use some improvement), and would get faster in kernelspace. This is also a Good Thing. However, there's the distinct problem that it needs better testing for security issues; something of this complexity can't be allowed into the kernel until it's rock solid for obvious reasons.

    But overall, I say go for it. The potential benefits of CORBA in the kernel are simply too great to ignore.
    ----------
  • Linux is based on venerable technology -- but that technology doesn't account for new concepts and developments in software and operating systems.

    A kernel should be small, concise, and to the point. It should be easy to hang device drivers and other "low-level" tools off of it, while providing for extensions through a standard, well-defined object model.

    An ORB should be part of the kernel; HTTP should not. HTTP support should be implemented as an object that is installed, as needed; the kernel should not need to be recompiled to change its functionality.

  • like the charter for WinNT?
  • I'd argue that having part of an ORB in the kernel isn't a bad idea. Only the "call" operation needs to be in the kernel, though; setup, teardown, and lookup should all be outside. In fact, if you're willing to struggle enough with x86 segmentation, it may be possible to implement interprocess calls without going through the kernel, using the "task gate" hardware nobody ever uses. I'm told that SPARC machines have even better interprocess call hardware. With better hardware support, this could actually work. Interesting thought - could you simulate such hardware on Crusoe? Might be an effective way to make this work.

    If CORBA is fast enough, you should then be able to start taking stuff out of the kernel, such as networking and file systems. This could make the system much more robust, and far easier to debug.

    The big problem with CORBA is that marshalling, copying, interprocess control transfer, and unmarshalling are expensive. But they don't have to be; look at L4, EROS, or QNX, which can call from one process to another very fast.

    Another big problem is figuring out a security model for this. CORBA's own security model hasn't been that useful in practice.

    Still, all this is a step in the right direction. All the heavy thinking in OS design for years has been that systems ought to be more modular and more decoupled, but getting the decoupling overhead down is a problem.
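    (The marshalling overhead mentioned above is easy to feel even without an ORB. A toy sketch: the same addition done as a direct call, and again with its arguments serialized and copied through a pipe standing in for a process boundary. Timings vary by machine, so none are claimed here.)

```python
import os
import pickle
import timeit

def local_add(a, b):
    return a + b

def marshalled_add(a, b):
    # Serialize the arguments, copy them through a pipe (our stand-in
    # for a process boundary), then deserialize and compute.
    r, w = os.pipe()
    os.write(w, pickle.dumps((a, b)))
    os.close(w)
    args = pickle.loads(os.read(r, 4096))
    os.close(r)
    return local_add(*args)

assert marshalled_add(2, 3) == local_add(2, 3)
print("direct:    ", timeit.timeit(lambda: local_add(2, 3), number=10000))
print("marshalled:", timeit.timeit(lambda: marshalled_add(2, 3), number=10000))
```

    A real ORB also pays for interprocess control transfer and scheduling, which this sketch deliberately leaves out; that cost is exactly what L4-style fast IPC attacks.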

  • by vsync64 ( 155958 ) <vsync@quadium.net> on Sunday December 10, 2000 @03:51AM (#569094) Homepage
    If you think I'm wrong, look at NT... it's all abstracted.

    Well, NT used to be abstracted. NT is a sad story, really. Micros~1 got a whole bunch of top-notch engineers (Dave Cutler, the VMS guy, being probably the most famous one) and told them to make the next-generation OS.

    The engineers were gung-ho about it, and designed it to be modular, abstract, and machine-independent. Management, however, was actually against these attributes and turned NT into something much less wonderful. NT used to run on x86, Alpha, MIPS, and PPC, but those were gradually killed off so that only Intel remains.

    Heh, I got most of this info from a book I was browsing at the store but didn't have the money to buy. I wish I had; NT is one of the great tragedies of our day.

    --

  • I don't think I made my point very well. My point is that kernel size alone is not a sufficient criterion on which to judge an OS. The original message said that Win2K was "just wrong" to use more than 15MB of kernel memory.

    I don't know or care whether Win2K is a good OS, but I don't agree that one can judge it on the basis of the size of its kernel alone. If it uses a different architecture, the kernel size may make sense in its context. Kneejerk "15MB is too big" reactions tend to relate more to currently assumed limitations, than to anything real.

    The flaw in your thinking is the fallacy that more "technological progress" in operating systems requires more non-pageable kernel memory.

    I didn't mean to imply that. I'm not saying that progress does or doesn't lie that way; only that it doesn't make sense to close off a particular direction for no good reason.

    Perhaps message-passing microkernels will ultimately replace monolithic kernels, for example, but until that's actually happened (and Linux is a famous argument to the contrary), there may be real benefits to be had by including major additional functionality in a kernel. This kernel ORBit implementation is a good example: it's not so far fetched that in a few years time, an ORB in the kernel will be seen as a necessity.

    It also isn't much of a stretch to imagine an OS built on top of an ORB (which in some respects, isn't that different from a message-passing microkernel model.) If that approach were desirable, the route from here to there might very well be to start out by building the ORB into the kernel, then slowly removing other basic services from the kernel and replacing them with external object implementations. (I can imagine a lot of knees jerking right around now.) Making such radical changes, however, doesn't happen easily when people allow arbitrary metrics to define the boundary of their world.

  • by smartin ( 942 ) on Sunday December 10, 2000 @03:56AM (#569098)
    What a great platform Linux is, both as a teaching aid and as the basis for experimental work. All CS departments should adopt Linux as the basis for their operating-system courses, since it allows exactly this sort of experimentation.
  • but putting aside for a second stability and security issues

    This reminds me of a famous joke about the Titanic..

    "So how was the trip, aside from the iceberg?"

    Daniel
