Linus vs Mach (and OSX) Microkernel

moon_unit_one writes "Linus Torvalds apparently doesn't think much of the Mach microkernel and says so in his new autobiography, Just for Fun: The Story of an Accidental Revolutionary. See what he says in this ZDNet UK article." The age-old Mach debate, resurrected once again. Anyone have the old Minix logs with Linus? That's a fun one too.
  • by Anonymous Coward
    Microkernel vs Monolithic System
    Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more. The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.

    Let's look at his examples of 'the future' of operating systems:

    RC4000 -- wow, now there's a winner.
    Amoeba -- research toy.
    Chorus -- ditto
    Windows NT -- saw very little success until they ditched the microkernel and moved networking and GDI/User into the kernel.
    Mach -- are there any mach based OSes that don't run just one monolithic uber-server under mach?

    How about some examples he didn't give (they weren't around yet, or were off his radar for various reasons):

    QNX -- in the top 5 in the RTOS market; its attempt to push into the desktop market (by a bizarre scam involving the 5 remaining Amigans) seems to have petered out.
    BeOS -- desperately searching for a market. Claim to fame is scheduling latency, but has been outdone there by both QNX (obviously) and Linux.

    As if we needed more reasons not to ask academics for real world advice.
  • by Anonymous Coward
    You completely neglected where microkernels have their real advantage.

    SMP and reliability.

    In a 'clean' monolithic kernel, the entire kernel is locked when processing. This is perfectly fine in a uniprocessor case. But for multiprocessor systems, it sucks bigtime because your other processors are sitting in a spin-loop waiting for the kernel lock.

    Now the obvious way around that is to create exceptions to the rule. But that quickly becomes a nightmare to maintain. I'm sure that this is a major reason the 2.4 kernel was so very late.

    Now if you purport that we'll be stuck with uniprocessor or 2-way processors for a long time, that's very shortsighted. I see 16-way systems moderately commonplace in 4-5 years. Linus doesn't really deal with huge n-way systems, so that's probably where his lapse comes into play.

    Now the reliability aspect of a microkernel comes from its rigid policies - making code proofs & verification easier to do. The spaghetti-code monolithics that I've seen, heh, as with verifying any spaghetti code, good luck!

    It should also be noted that there is not a black-and-white distinction between a monolithic kernel and a microkernel. Any well-engineered design should have trade-offs to perform well in many environments. For instance, some microkernels give a pass to the graphics code running in kernel space. And monolithics will implement some message passing thing so that the network drivers play nice with everyone else.

    Tom
  • I've noticed the advantage of open source is that your users can be programmers at the same time. Users who are programmers can be massively more helpful for certain types of bugs.
  • NeXTSTEP and OSX are monolithic kernels on top of a microkernel -- just like mkLinux.

    I don't even think OPENSTEP is an OS -- it's the Objective C environment that NeXTSTEP used, ported to other operating systems.

  • If, on the other hand, they had tried to design an OS that would support classic apps natively, they likely would have had to sacrifice system stability and performance to do it.

    Oh they tried, they tried for the better part of 10 years to make that happen, and finally gave up, settling on Rhapsody for new apps and BlueBox (now known as Classic) for legacy apps.

    The stroke of genius that is allowing Apple to perform the impossible for a second time (i.e. a major architectural change without more than trivial backwards-compatibility issues) is Carbon: a subset of the old API which can be multithreaded and memory protected. So, rather than ask Adobe to rewrite Photoshop, Illustrator, et al. from the ground up, they only have to tweak the existing code a little, recompile, and they've got an OS X app. I'm sure that in time Photoshop will be rewritten with the Cocoa API, as will many other major apps (ProTools, with its...memory issues...almost certainly requires it), but Carbon puts a stepping stone in the middle of that river so that everyone can migrate at a speed more in line with their comfort level.

    Don Negro

  • The fundamental problem is hardware. Task switches, under *any* OS, take a long time.

    At the time of the Tannenbaum/Torvalds debate, the primary CPU in use was still the 386- which took 300 to 500 clock cycles to task switch. No, that's not a typo- three to five *hundred*.

    The situation has improved enormously- Pentium-class CPUs only take ~50 clock cycles to task switch. Of course, this is disregarding any TLB misses which are incurred.

    The task switch overhead is what caused NT to move the graphics calls into kernel space in 4.0 (causing a large improvement in performance and a huge decrease in reliability). The hardware cost of task switching is what kills "true" microkernel OSs on performance. And don't bother to whine about "poor processor design"- people (including me) want an OS that runs well on the hardware they have.

    The best operating systems are the "mutts"- not true microkernels nor true monolithic kernels. Which is how linux evolved- kernel modules for many if not most things, virtual file systems, etc.
  • Apparently there's an issue with lack of memory protection? I can't believe that would be true, I certainly haven't read anything about it in the reviews of Mac OS X

    OS X has memory protection. The old MacOS doesn't. The problems will be with old apps that 'got away' with memory violations from time to time (due to dumb luck in some cases) and seemed OK. Now, they will die on a SEGV. That's a good thing since they were broken from the start, but will take some fixing.

  • Not that there's a sharp line once you have loadable kernel modules.

    There's still a sharp line. Modules in Linux have very little to do with microkernels. In a microkernel, the equivalent of modules would be more like daemons.

  • If all the apps on an OS were written in a language that didn't allow arbitrary pointer arithmetic (like Java, etc.), and the compiler was trusted (and not buggy), then separate address spaces between apps (and the kernel) are not necessary.

    Someone somewhere will find a way to assemble badly behaving code (deliberately or not). We might as well call passwords and firewalls unnecessary because all we really need is an internet where people don't go where they're not invited and don't try to crash servers.

  • In a 'clean' monolithic kernel, the entire kernel is locked when processing. This is perfectly fine in a uniprocessor case. But for multiprocessor systems, it sucks bigtime because your other processors are sitting in a spin-loop waiting for the kernel lock.

    The issue of lock granularity is not really a micro vs. monokernel issue. Linux 2.0.x had the big kernel lock, and SMP suffered a good bit because of it. The newer kernels are much more finely grained.

    A microkernel doesn't make that issue go away. Taking a sub-system out of the kernel and making it a user process doesn't eliminate the need to access its data serially, it just shifts the problem to the scheduler. In effect, it turns every spinlock into a semaphore. The Linux kernel uses semaphores now for cases where the higher cost is offset by the benefit of having the other CPU(s) do something besides spin. The spinlocks are used in cases where the cost of trying to take a semaphore (failing and going in the queue) is higher than the cost of spinning until the lock is released.
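
    To make that trade-off concrete, here's a rough sketch of the two styles in roughly 2.4-era kernel C (illustrative only, not lifted from any real driver):

        /* Illustrative only, roughly 2.4-era style; not from any real driver. */
        #include <linux/spinlock.h>
        #include <asm/semaphore.h>

        static spinlock_t short_lock = SPIN_LOCK_UNLOCKED; /* held for a few instructions */
        static DECLARE_MUTEX(long_lock);                   /* held across something slow */

        void touch_counters(void)
        {
                spin_lock(&short_lock);    /* contenders spin: cheaper than sleeping here */
                /* ...update a couple of fields... */
                spin_unlock(&short_lock);
        }

        void touch_table(void)
        {
                down(&long_lock);          /* contenders sleep and can do other work */
                /* ...walk or rebuild a big structure, maybe even block... */
                up(&long_lock);
        }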

    However, all of that is incidental to the defining difference between a mono and microkernel.

    Now if you purport that we'll be stuck with uniprocessor or 2-way processors for a long time, that's very shortsighted. I see 16-way systems moderately commonplace in 4-5 years.

    I purport no such thing. While I do expect to see more multi-processor systems, I don't think they'll be SMP.

    Now the reliability aspect of a microkernel comes from its rigid policies - making code proofs & verification easier to do. The spaghetti-code monolithics that I've seen, heh, as with verifying any spaghetti code, good luck!

    Microkernel does tend to force cleaner and sharper divisions and interfaces. The same thing can and SHOULD be done in a monokernel, but temptation to bend the rules can be quite strong.

    It should also be noted that there is not a black-and-white distinction between a monolithic kernel and a microkernel.

    Agreed completely! There's nothing I hate more than a system design with an academic axe to grind. There are a few places in the Linux kernel where a very microkernel-esque approach is taken. CODA comes to mind. Also NBD. We also see the opposite in khttpd.

    In that sense, the monokernel is itself a compromise on a spectrum that runs from completely monolithic systems (mostly embedded systems, Oracle on bare metal, etc.), through systems with no memory protection (on purpose!) and cooperative multi-tasking, through monokernels, to microkernels. I'm not honestly sure where to place exokernels on that spectrum.

    I stand by the thought that the biggest potential advantage to microkernels is in cluster systems.

  • a) their small, minimalist design allows for fewer bugs, greater flexibility, etc, etc(all the old arguments you hear about them)...

    No argument about the flexibility. I look forward to one day having enough free time to play with and enjoy that feature.

    The minimalist design allowing for fewer bugs is somewhat a red herring. By focusing on a smaller piece of code, you'll naturally find fewer bugs there vs. a larger piece of code (all else being equal). Once you add in the services, the bug count is back up to where it was.

    There is some benefit that at least the bugs are isolated from each other, and thus more limited in the damage they can do. However, if the service is a critical one, the system will still be dead.

    b) the fact that very little code is in the kernel allows for better realtime performance. The less time that must be spent in kernel mode(where interrupts are disabled), the more time can be devoted to actual work and servicing devices.

    Interestingly, RTLinux [rtlinux.org] is a good example of that approach. It makes the entire kernel pre-emptable for the real time services.

    L4 and EROS kernels are even faster than previous generation microkernels.

    L4 is a good example of improvements in the area of microkernels. It's on my list of things to check out when I get that mythical free time. EROS is also very interesting. A microkernel with persistent objects to replace nearly everything! It's also on my list.

    IMO, the overhead penalty involved with context switching in a u-kernel OS is totally worth it, especially for a desktop system. And QNX proves that it can be done right.

    Our conclusions are not too far apart. I'm not so much saying that microkernel is bad or that it won't work, I just don't think its time is here just yet, and Mach in its current form isn't it. I do look forward to the HURD being ready for prime time. The 1.0 version will probably perform poorly vs. Linux, but it will provide a good platform to build from in the Free Software world. (yep, it's on my list :-)

  • I dunno, the article (mentioned by someone in an earlier comment) lists L3 at handling round trip IPC in 10 usec on a 50Mhz 486.

    The equation is quite different for a 486 than a PIII. The real cost isn't in simply executing the instructions, but in effects from flushing the TLB and cache misses. On a 486 it's not as big of a problem since the memory compared to the CPU isn't as dismally slow as with a PIII (for example).

    How fast do system calls need to be? (Really, I don't know) With "low end" machines at least 10 times that speed today, this doesn't seem like very much overhead at all.

    Faster is better :-) It really depends on the application. Some number crunching apps read their data in one big gulp, then spend many hours just crunching without a system call in sight. IO intensive apps on the other hand will spend most of their time in system calls. Probably the worst hit in a microkernel environment are apps that are heavy on IO, but must actually look at the data they are moving. A database is a good example. That's why Oracle is so interested in running on the bare metal.

    and where every cycle counts, you probably want EVERYTHING in the kernel, FULLY debugged. Not that that's possible or anything ;)

    Absolutely! :-)

  • Is this not less of an issue with newer CPUs? Just look at the rate at which the CPU/bus speed ratio is increasing. Much of a modern CPU's time is spent waiting for data from memory (granted, this depends on the application). If the CPU has to spend some extra time switching between rings then it shouldn't have much of an overall impact on speed - much of its time is spent waiting anyway. This will only increase in the future.

    It's the context switches that kill, and the CPU to memory ratio makes it worse. Switching contexts churns the cache and causes more misses. One of the biggest optimization opportunities in a microkernel is minimising the cache and TLB effects of the task switches. A really tempting approach is to make the service run in the client process's context but in a different ring. That looks a lot like a highly modular monokernel.

  • I'm not even sure it is harder than making a CPU where the MMU can't be bypassed in user mode. Except for the one little detail that most people want more than one programming language (thus more than one compiler), or that p-code is very slow.

    Really it is harder. The MMU has the advantage that it can see exactly where things are RIGHT NOW, and make a decision. Code verifiers have to look at the initial state and prove that given that initial state, the code, and any possible input, it cannot generate an access that violates the segmentation. What you end up with is an expert system vs. a natural intelligence. In the case of accidental violations, it's back to "Nothing is foolproof because fools are so ingenious."

  • Point taken, except the compiler (not code verifier) can choose to insert bounds checks, it just won't be "as fast".

    The problem with that is that any user might compile a program, then hand patch the bounds check out of the bytecode (or just use an unapproved compiler). That might be malicious, or just someone who wants their code to run faster and is "sure" there aren't any bugs that require the bounds checking.

    The MMU on a modern processor is a complex beast. Most of the complexity is for performance rather than bounds checking. It at least has the benefit that the basic concept of 'just in time' bounds checking is conceptually a lot simpler than checking by proof.

  • by sjames ( 1099 ) on Friday April 06, 2001 @12:00PM (#310196) Homepage Journal

    Evidence to the contrary need not be presented. At least, not until someone comes up with a reason for microkernels being bad/wrong/sucky.

    Good point! The biggest problem for microkernels is that they have to switch contexts far more frequently than a monokernel (in general).

    For a simple example, consider a user app making a single system call. In a monokernel, the call is made. The process transitions from the user to the kernel ring (ring 3 to ring 0 for x86). The kernel copies any data and parameters (other than what would fit into registers) from userspace, handles the call, possibly copies results back to userspace, and transitions back to ring 3 (returns).

    In a microkernel, the app makes a call, switch to ring 0, copy data, change contexts, copy data to daemon, transition to ring3 in server daemon's context, server daemon handles call, transitions to ring 0, data copied to kernelspace, change contexts back to user process, copy results into user space, transition back to ring3 (return).
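
    A back-of-the-napkin sketch of those two paths in C, with completely made-up names (this is not any real kernel's API, just the shape of the thing):

        /* C-flavored sketch only: every identifier below is made up for
         * illustration and is not any real kernel's API. */

        struct msg { int op, fd; long len; char data[512]; };

        #define OP_READ   1
        #define FS_SERVER 7

        extern long do_file_read(int fd, char *buf, long len);   /* in-kernel helper */
        extern void copy_to_caller(void *dst, const void *src, long n);
        extern void send_to_server(int server, struct msg *m);   /* copy + context switch */
        extern void recv_from_server(int server, struct msg *m); /* switch back + copy */

        /* Monolithic kernel: one ring 3 -> ring 0 transition, work done in place. */
        long read_mono(int fd, void *ubuf, long len)
        {
                char kbuf[512];
                long n = do_file_read(fd, kbuf, len); /* plain function call, still ring 0 */
                copy_to_caller(ubuf, kbuf, n);
                return n;                             /* back to ring 3 */
        }

        /* Microkernel: the kernel only ferries messages to a user-space file
         * server, so the same request pays extra copies and context switches. */
        long read_micro(int fd, void *ubuf, long len)
        {
                struct msg m = { OP_READ, fd, len, { 0 } };
                send_to_server(FS_SERVER, &m);   /* copy in, switch to the server's context */
                recv_from_server(FS_SERVER, &m); /* server handled it in ring 3, switch back */
                copy_to_caller(ubuf, m.data, m.len);
                return m.len;
        }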

    Those extra transitions and context switches have a definite cost. A CPU designed with that in mind can help (a lot!), but on current hardware, it can be a nightmare. That's why a monokernel tends to perform better in practice.

    Kernel modules in Linux are nothing like a microkernel. They are just bits of a monokernel that are dynamically linked at runtime. In order to be a microkernel, each module would be a process and would reside in some ring other than 0 (possibly 3, but not necessarily). In order to fully realise the microkernel module, users would be able to do the equivalent of insmod and get a private interface. Other users might or might not choose to access that module, and might insmod a different version with the same interface. This would not present a security problem.
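
    To make the "just bits of a monokernel dynamically linked at runtime" point concrete, here's about the smallest possible 2.4-era style module (illustrative sketch only):

        /* Once insmod'ed, this code is linked straight into the running kernel
         * and runs in ring 0 like everything else -- no messages, no separate
         * process. */
        #include <linux/module.h>
        #include <linux/kernel.h>

        int init_module(void)
        {
                printk(KERN_INFO "hello: now part of the monolithic kernel\n");
                return 0;
        }

        void cleanup_module(void)
        {
                printk(KERN_INFO "hello: unlinked again\n");
        }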

    There may well be places where the overhead is a worthwhile trade-off for the features (clustered systems might be a good example). Cray takes an approach like that (not exactly that, just like that) in mkUnicos for the T3E (possibly others, but I've not used those).

    Some microkernels do a better job at efficiency than Mach (L3 for example). At some point, the hardware might actually get fast enough that the trade-off is nearly always worthwhile. Even then, monokernels will still have a place for situations where every cycle counts; I don't anticipate those situations ever going away.

  • The log from the Minix mailing list can be found in the book
    Open Sources: Voices from the Open Source Revolution, Appendix A [oreilly.com].
    ---
  • I could be wrong but I believe QNX managed to get much faster context switches on 486's.
  • no, but then who would have been able to imagine Linus working for Paul Allen?
  • by peterjm ( 1865 ) on Friday April 06, 2001 @09:04AM (#310216)
    I remember, back when I was a kid, walking through the video store and looking at the covers of all the cool movies I wanted to watch.
    There was one in particular that stands out.
    It was called "The Exterminator". The cover had this Arnold Schwarzenegger-looking guy in fatigues with this huge flame thrower, trying to blow torch the camera taking his picture.
    I used to look at that picture and think, "damn, that guy must be trying to start a flame war."

    I get the same feeling when taco posts stories like this.
  • "Somehow I feel that Linus' viewpoint would be slightly different than the average web reporter reviewing MacOS X. There's a big difference between educated users and Uber-developers/Kernel hackers."

    Linus doesn't have any deeper insight on this than any semi-informed correspondent. Almost every review has pointed out these problems, by noting that "Classic" apps don't take advantage of OS X's "advanced features", such as protected memory and pre-emptive multi-tasking.

    "Also, I'm sure the reviewers have mentioned lack of support for CD/DVD stuff... That's what this article infers is being affected by the memory protection."

    That is not what the article implies, although it does seem to be what you inferred. The lack of support for DVD playback has nothing to do with memory protection. It has to do with development time: there wasn't enough. There is no technical reason why OS X cannot support DVD playback and CD burning; these are just features which weren't added in time for the March 24th launch date.

  • Linus has a strong opinion of another kernel for an operating system, this is news?

    No, not at all. Linus' dislike of Mach has been known for *years*. When I first heard his complaints, I wasn't entirely convinced. Mach appeared to offer so much, but as time has progressed, I've found myself siding with Linus more and more. An OS is the one place where you really don't want extreme flexibility at the expense of performance.

  • OS X has memory protection. The old MacOS doesn't. The problems will be with old apps that 'got away' with memory violations from time to time (due to dumb luck in some cases) and seemed OK. Now, they will die on a SEGV.

    Er, what?

    If it is an old app it will run under classic with the other old apps, and have no memory protection.

    If it is a new app one assumes it will be debugged under OSX and not get through QA with a ton of memory violations. (that's not guaranteed, it could use the Carbon API, and have been debugged under OS9, and "assumed" to work under OSX -- but I think you can check the "run under classic" box for them if you really really want).

  • Someone somewhere will find a way to assemble badly behaving code (deliberately or not). We might as well call passwords and firewalls unnecessary because all we really need is an internet where people don't go where they're not invited and don't try to crash servers.

    Clearly it isn't easy to do it right because so many JVMs have screwed up. Burroughs has been doing it for a long time, and even they had some failures 20 or so years back. But it isn't impossible.

    I'm not even sure it is harder than making a CPU where the MMU can't be bypassed in user mode. Except for the one little detail that most people want more than one programming language (thus more than one compiler), or that p-code is very slow.

  • To run OS X you'll need a memory upgrade (why Apple is still shipping 64 meg machines when OS X's minimum requirements are 128 is a mystery to me), but you're still under $1000, which is hardly out of reach for the average consumer. Yes, I know you can buy a motherboard+case+power supply+graphics card+CPU in the Wintel world for half the price, but that's not something normal people have the ability or inclination to do.

    Actually the local computer stores sell Wintel boxes for $599, I forget how much RAM, but they have a 1GHz CPU, which sounds better to Joe Sixpack than anything Apple offers in the iMac (unless they are sucked in by the look). Of course you will have to add a monitor to the Wintel box, but that is still cheaper than the iMac.

    Many Unix and Windows users who would never have considered Macs before are very interested in Mac OS X.

    That's for sure. That's why I own a Mac for the first time in my life. My OSX PowerBook G3 makes a better web surfing toy than my Vaio with Win98 did.

    P.S. I find it amusing that Apple spell checker (which the web browser can use) knows what a Mhz is, but not a Ghz. :-)

  • The MMU has the advantage that it can see exactly where things are RIGHT NOW, and make a decision. Code verifiers have to look at the initial state and prove that given that initial state, the code, and any possible input, it cannot generate an access that violates the segmentation.

    Point taken, except the compiler (not code verifier) can choose to insert bounds checks, it just won't be "as fast". (after all, the MMU is merely a bounds checker with some extra grot to send "well formatted" messages on violations)

    On the other hand the MMU has a ton of difficulties as well. If you look at pipeline diagrams of modern CPUs the MMU tends to get a full pipestage, or two. It is one of the more complex parts of modern CPUs. The transistor count is behind that of a decently sized cache, but (including the TLB) ahead of everything else. The raw complexity is probably behind the modern reordering systems, but ahead of pretty much everything else. It is insanely complex.

  • The problem with that is that any user might compile a program, then hand patch the bounds check out of the bytecode (or just use an unapproved compiler). That might be malicious, or just someone who wants their code to run faster and is "sure" there aren't any bugs that require the bounds checking.

    On the Burroughs systems and the IBM AS/400 machines, executables are not editable at all. I don't recall if root can edit them or not, or if they are actually uneditable, or if they stop being executable when they are changed (much like the set-uid bit on Unix OSes).

    The MMU on a modern processor is a complex beast. Most of the complexity is for performance rather than bounds checking.

    Exactly. The complexity in compiler based bounds checking is also all in the attempts to make it fast. Otherwise it would just add two (or more) CMP and Bcc instructions (or whatever the equivalent for the CPU is) before each memory reference.
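
    Written out in C instead of assembly, that's roughly the difference (a toy example of the idea, not what any particular compiler emits):

        /* Toy illustration of bounds checking done in software vs. relying on
         * the MMU; not taken from any real compiler or verifier. */
        #include <stdio.h>
        #include <stdlib.h>

        static void store_unchecked(int *buf, long i, int v)
        {
                buf[i] = v;                      /* compiles to a bare store */
        }

        static void store_checked(int *buf, long len, long i, int v)
        {
                if (i < 0 || i >= len) {         /* the two compares + branch */
                        fprintf(stderr, "bounds violation: %ld not in [0,%ld)\n", i, len);
                        abort();                 /* the software stand-in for a SEGV */
                }
                buf[i] = v;
        }

        int main(void)
        {
                int buf[8] = { 0 };
                store_checked(buf, 8, 3, 42);    /* fine */
                store_unchecked(buf, 5, 7);      /* fine, but nothing would catch i = 99 */
                printf("%d %d\n", buf[3], buf[5]);
                return 0;
        }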

    It at least has the benefit that the basic concept of 'just in time' bounds checking is conceptually a lot simpler than checking by proof.

    Yes it does. Although it has to rely on a normally fairly complex OS to set the MMU up right (and with some CPUs handle TLB shoot down, and cache invalidation).

  • As near as I can tell, Linus has nothing bad to say about OS X in particular...just its usage of the Mach microkernel, which he (and lots of other kernel hackers) have dismissed as crap.

    But MacOS X uses a modified Mach uK. Many of the OS services (ahem, filesystem) are directly provided by the BSD subsystem and skip the uK altogether. The criticism of Mach deals with these same issues, which is why Apple decided not to use Mach for every service. BSD doesn't just run as a strict server for Mach, it often bypasses it -- and as such, the opponents of Mach can still be vocal without attacking Apple's particular implementation.

    MacOS X is a hybrid, much like some are prone to call the Linux kernel itself.

    The wheel is turning but the hamster is dead.

  • Wow, now I know why all the slashdot hordes can't keep "loose" and "lose" straight - they get it straight from Linus "I'm sorry, but you loose" Torvalds! Of course, he has the benefit of not being a native English speaker... :)

    ---

  • Dumbass. :)

    ---

  • The high chief of the Little-endians has made statements that disparage the Big-endians.

    "They're unnatural and generally suck monkey ass." he said. "I think they also beat their children."

    The response from the Big-endians was quite clear- "The Little-endians are insane and represent a threat to the very fabric of our society."

    In a poll of 2000 randomly selected individuals, none had an opinion on the subject, with a +-4% margin for error.

  • by GypC ( 7592 )

    Monolithic kernels are simply easier to build and it follows that the less code you have the fewer bugs you will have.

    Also the way that microkernels have many subsystems that they need to send/receive messages from adds another layer of complexity.

  • by GypC ( 7592 )

    Warmonger? Have you never heard the saying "Never burn bridges behind you" as a metaphor for not pissing off the people you work for when you move on to another job? Maybe English isn't your first language or this metaphor isn't common in your country...

    Sheesh.

  • by GypC ( 7592 )

    Point.

    I was just giving a common justification for the use of monolithic kernels.

  • Soap operas have higher writing standards.
    Steven E. Ehrbar
  • Linus is either ignorant about the way Mac apps work or unrealistic about Apple's ability to dictate to its developers.

    Not at all; you're misreading the article. Linus is of the opinion that the way Mac Classic apps work was a problem for the people developing OS X, which is why he didn't want to work on OS X.
    Steven E. Ehrbar
  • will everyone please quit feeding the "BSD is dying" troll?
    --
  • As a mission statement, "hate Microsoft" isn't a bad one. It seems to be working pretty well for both Oracle and Sun.

  • Linus: Hey Steve, kernel 2.4 is going to come out when I think it's good and ready.

    Steve Jobs: FUCK FUCK GODDAMN FUCK FUCK SHIT WHORE KILL MICROSOFT FUCK FUCK FUCK

    Linus: Hey Jobs, You're a piece of crap
  • Oh, it made perfect sense. It's not a short story, it's a microstory, with a problem, a climax, and a resolution. It even has two specific examples of sarcasm, one about each of the two main characters. For example, Linus' first comment about the kernel release date is a statement directly relating to real life. The second sentence, where Jobs emits a stream of vulgarity, mocks that particular tendency he has to viciously abuse his employees.

    So, Mr. Smarty Pants, let's see you pack more literary stuff into a 3-line story than I have. Good luck.

  • The "y" is important. Maybe he does have a huge ego, but being a revolutionary means being only a part of a revolution (in this context), not necessarily the cause or leader of it. And I think it's safe to say that he's at least some part of the current revolution (as long as we're willing to call it such).

    He might just be an egotist, but we'll need to look a bit farther than the title of his book to prove it.

  • No OS is crash proof.

    Can you read? This is why I said "modulo bugs and crappy drivers." Obviously no OS is crash-proof, but some do a pretty good job of preventing apps from bringing the system down, while others (Mac OS and Win9x) are much easier for one app to crash.
  • Point taken. However, if this page [freebsd.org] is correct, there are efforts under way to port it, and OpenBSD apparently has been ported. The bottom line is that BSD is plenty available on PPC hardware, and if Apple had wanted to use a different flavor they could have done so.
  • Well, that's a bit more reasonable, but not much. Keep in mind that Classic is a userland program like any other. From a fundamental OS design standpoint, there's no reason it should matter. OS X is a pure Unix implementation that happens to have Classic as one of the apps that runs on it.

    The fact that VMWare runs on Linux doesn't make Linux a bad OS. It's precisely the same issue. From an OS design standpoint, classic apps don't exist. There just happens to be a userland application called "classic."

    So I'd say Linus is still pretty ignorant about the whole issue.
  • This is a good point.

    However, in some cases classic starts up automatically when a classic app gets launched, and short of removing the actual program I haven't found a way to change that.

    This is particularly annoying because often it will pick a classic app when you open a document-- some PDFs open with classic Acrobat even though I've got Preview, and in the beta at least .sit files got opened by classic StuffIt. I'd like to have a "never launch classic" checkbox, so this doesn't happen.
  • by binarybits ( 11068 ) on Friday April 06, 2001 @09:45AM (#310264) Homepage
    Darwin is Apple's buzzwordy name for the core of their OS. It didn't exist until Apple bought out Next and opened up their code. So Darwin is an Apple-created Unix-flavored OS that lies beneath the pretty GUI in OS X. Darwin was *not* developed by CMU or anyone else. It was written by Next developers using BSD as a starting point, but wasn't called Darwin until Apple bought the company and opened the source in ~99.

    Darwin is as much a BSD as any of the other flavors. It's built atop Mach because that's how Next did it, and they continued in that tradition. Mach is a microkernel, atop which rests a layer that implements a BSD interface.

    I'm pretty sure BSD *does* run on PPC hardware. NetBSD certainly does, and I think FreeBSD does as well. And there's certainly no reason that Apple couldn't have ported one of them to PPC hardware if they'd wanted to. The reason they didn't is that they were starting with Next technologies, so they used the Next kernel.
  • by binarybits ( 11068 ) on Friday April 06, 2001 @09:31AM (#310265) Homepage
    I'm pretty sure what he's saying is that classic apps don't have memory protection from one another, which is true but irrelevant-- the way classic Mac apps are written, most of them would break if you tried to protect them from one another-- they're used to stomping on each others' memory at will.

    The solution Apple hit upon was the next best thing-- put them in their own memory space all together, but protect them from other non-classic apps. There really isn't much you can do about this. OS X *is* a real Unix and is therefore (modulo kernel bugs and bad drivers) crash-proof. The classic environment crashes a fair amount, but with a little refinement it's not likely to be any worse than OS 9.

    Linus is either ignorant about the way Mac apps work or unrealistic about Apple's ability to dictate to its developers. Apple has an enormous installed base of applications designed for OS 9, and if they were to throw those out, there would be essentially no software for the new OS. If, on the other hand, they had tried to design an OS that would support classic apps natively, they likely would have had to sacrifice system stability and performance to do it. Not only that, but if classic apps are supported natively, there's no incentive for developers to carbonize their apps, and therefore it's unlikely that the new APIs will be widely used. This would greatly cripple Apple's ability to move the platform forward in the future.

    This plan is a sensible middle path. It allows a migration path to fully buzzword-compliant Carbon and Cocoa apps in the future. With any luck, all the important apps will have Carbon versions by this time next year, and by the time OS XI (or whatever) ships, they can disable classic by default and run only native apps.

    Look how long it's taken Microsoft to wean the Win 9x line off of 16-bit apps-- they *still* have some hooks down to the lower levels of the OS 6 years after Win95 debuted. This is undoubtedly one of the causes of the OS's crappy stability and sluggish performance. Had they adopted a "virtual box" approach as Apple has, they'd probably have a much better OS today.
  • Take a look here. [eros-os.org]

    The preceding link is courtesy of Google [google.com]. They mirror and grep for you.

    I agree with the original poster -- for all intents and purposes, Linus is a moron. Before you moderators start handing out (-1, blah) points, hear me out. Sure, he's a great programmer and has made a great contribution in the form of the Linux kernel, but it's fairly clear from listening to him that he doesn't understand any developments in systems research since about 1960, with the possible exception of copy-on-write. Don't get me wrong -- I've used Linux since 1994, and it's better than most desktop operating systems. However, Linus' absolutely hubristic rejection of (barely) modern concepts which he doesn't understand, such as

    • threads ("processes are faster", sez he)
    • message-passing
    • version-control systems
    just to name three easy ones, is doing little for the Linux community or for people who want to deploy Linux in a serious environment. If you don't believe me, just compare Linux performance to Solaris x86. Sure, Linux has gotten faster with 2.4, but it still can't hold a candle to systems designed by people who haven't ignored the last 40 years of systems research.

    It's time for Linus to get out of the way and let someone else serve as "CVS with taste" for the kernel -- after that, Linux has a chance of becoming a rock-solid, lean, and efficient kernel.

  • You forgot to add:
    UN-altered REPRODUCTION and DISSEMINATION of this IMPORTANT information is ENCOURAGED!
  • This is, of course, exactly the same technique that NT4 and 2000 use for 16 bit apps.

    No it isn't. NT uses an application layer called WOW to translate Win16 API calls into Win32 API calls. 16-bit apps essentially run as full peers under NT (although interapp communication stuff like OLE doesn't work between 16 and 32).

    Windows 9x on the other hand runs a bastardized 16/32-bit kernel that pretty much keeps the whole of Windows 3.1 intact inside of the rest of the OS.

    A better comparison for Classic is how OS/2 ran Windows 3.1 apps by booting a virtual DOS machine. Both approaches even have the same window-border problem. Another comparison is the Mac on Unix environment in A/UX, and I'd expect that's exactly where the hooks for Classic came from.
    --
  • If you frequent the kernel mailing list, you will find it a frequent topic that the kernel API is always changing and backwards compatibility is of little concern. Perfection in its nearest state is more important to them.

    This attitude, of course, just means that all the back-compatible cruft gets pushed into glibc (which Linus has repeatedly referred to as "crap"). Of course, the Linux kernel is virtually unusable without glibc (or some other crappy variant), so you end up with "perfection" with a big pile of crap stuck on top of it.

    This sort of user/kernel duality has even filtered down to the attitudes of the relatively non-technical advocate crowd. A typical featuritis Linux-based OS is not a very good Unix system, in a lot of people's opinions, and the response usually is "don't look there, look over here at our perfection-oriented kernel". I think the lesson of NT and OS X is that kernel internals are really a minor part of the OS's total utility.

    --
  • Actually, as I replied above, OS X and OS/2 seem to use the exact same methodology for back-compatible support.

    OS/2 created a virtualized x86 machine emulation, booted a modified DOS 5.0 on that machine, and then booted a relatively unmodified Windows on top of DOS. (There were some hooks into WINOS2 so that DDE could work between environments and there was a special seamless-windows video driver, but otherwise, as the blue box versions showed, it was pretty much virgin Win 3.1.)

    AFAICT, Mac OS X does the same. You can even see a window showing the MacOS booting.

    IMO, the NT approach is better (not to get in the way of your random OS/2 advocacy!). OS/2 wasn't really a "Better Windows than Windows" because OS2WIN had all the same flaws as regular Windows (and there were a fucking shitload of flaws there), plus some added incompatibility. The "features", such as the ability to do hardware-oriented stuff (like comm software) from WINOS2, were usually so fubared that you'd be crazy to try.

    Because NT translates Win16 API calls into Win32 API calls, a whole mess of bugs in the underlying system are resolved, and furthermore the WOW environment made problems like the "resource pools" just go away. Compatibility and speed were a little worse, but back when I was running NT 3.5 on a low-end Pentium, I really had fewer problems with 16-bit apps than I did with OS/2 on similar hardware.
    --
  • A quote:
    > Of course 5 years from now that will be
    > different, but 5 years from now everyone will
    > be running free GNU on their 200 MIPS, 64M
    > SPARCstation-5.

    Yep, and I picked it up in my flying car!

  • by WaterMix ( 12271 ) on Friday April 06, 2001 @09:26AM (#310276)
    [Linus'] new autobiography called Just for Fun: The Story of an Accidental Revolutionary [emphasis mine]
    Gee; I'm glad to see that Linus hasn't let his fame go to his head. I hope he just accidentally hit the 'R' key in front of that last word.

    I don't mean to diminish what Linus has done, but RMS [stallman.org] has done a boatload more for the "revolution" than Linus did.

    --
    Wade in the water, children.

  • I think it's interesting -- and possibly a little exasperating to Mr. Torvalds -- that this is what was pulled from his autobiography as newsworthy.

    On the other hand, Mac OS X has been in the news a lot recently, and the PR staff of the book's publisher may have encouraged an article like this for the PR value. Controversy sells.

  • But somehow any old guy on the street can add modifications into OpenBSD?

    Uhh...Yeah, actually he can.
    His patches might not get into the main tree, but he is free to fork the whole damn thing if he likes.
    There's nothing stopping you from adding to any of the BSDs, you just won't necessarily get into the official tree.

    Linux is the same way, pretty much.
    I kinda like the committee better, as personal bias ("I think foo sucks and won't consider any ideas from foo, regardless of how good they may be") is less of a factor in deciding what goes in or not.

    C-X C-S
  • Mac OS X has and will always have completely insane RAM requirements. Classic or no Classic, you need AT LEAST 256 MB or you will be completely miserable.

    I for one upgraded to 384 MB before the final version came out, and no doubt the issues I had with the public beta running on a 128 MB machine are softened by now, but let me just explain something to you: Everything you see in Mac OS X is being rendered through a display layer known as Quartz-- essentially Display PDF. Everything on the screen is a PDF object. This is a bit of a performance hit, but not so much as you'd get under, say, Display Postscript (since Postscript is actually its own little stack-based programming language and requires a somewhat computationally expensive runtime). Quartz gives you the potential for some simply AMAZING things (for the moment the main sign of its existence is the omnipresent drop shadows and semitransparent menus in OS X, but the whole concept is EXTREMELY powerful and I think that five years from now most of the huge selling points of os x will be things made possible by quartz), and in the end the whole system is probably a good bit more sane than, say, X.

    However THE PROBLEM WITH THIS IS that Quartz, as (in the words of the developers) a "per-pixel mixer, not a per-pixel switcher", offers FREE BUFFERING to all windows, and almost all apps accept. What this means is that when the display server draws a pixel, it can't just figure out which window owns the pixel and ask the window to draw it; it has to keep track of each window independently by itself, consider which pixels from which windows to take into consideration (and some of these windows may have elements smaller than a pixel..), and independently mix them appropriately at every pixel on the screen.
    The performance hit from this really isn't as bad as one would think, but **because Quartz stores the image of pretty much *every window on the screen***, *including* the ones you can't see, in RAM, it does mean you need RAM. Because if you don't supply enough RAM, it means that os x will start dipping into Virtual Memory.
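
    Rough numbers, just to illustrate the point (the window size and count are made up, and 4 bytes per pixel is my assumption, not a statement about Quartz internals):

        /* Back-of-the-envelope only: made-up window size and count, assumed
         * 32-bit color. Shows why per-window buffering eats RAM. */
        #include <stdio.h>

        int main(void)
        {
                long w = 1024, h = 768;            /* one biggish window */
                long bytes_per_pixel = 4;          /* 32-bit color, assumed */
                long one_window = w * h * bytes_per_pixel;
                long windows = 20;                 /* open windows, visible or not */

                printf("one buffered window: ~%ld MB\n", one_window / (1024 * 1024));
                printf("%ld such windows:     ~%ld MB\n", windows,
                       (windows * one_window) / (1024 * 1024));
                return 0;
        }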

    *** and it doesn't MATTER how high end your microprocessor and graphics processor are, if every time you have a significant screen redraw the system has to go grab bits from a large number of windows scattered throughout virtual memory, and that virtual memory is fragmented all over your hard disk (Mac OS X does not have a dedicated swap partition. Therefore, if your disk is fragmented, your virtual memory will be fragmented, and there is nothing more pathetic than watching an extremely fast machine try to deal with a fragmented virtual memory file), and that hard disk is the kind of cruddy IDE drive that apple's been relying on so much lately.. well, then, the system is going to be SLOW.
    SO: If you are going to run OS X, GET A BUNCH OF RAM, and MAKE SURE that you have a huge block of unfragmented free disk space for os x to use. And if you don't do these things, don't go around blaming the slow system responsiveness on the microkernel or the Objective C messaging architecture or the complex multitiered layout of the OS. And if you don't want to pay for the hardware needed to make all this stuff usable, well.. Be isn't *quite* bankrupt yet :)

    I for one am going to sit around and wait for apple to get some Multiprocessor machines out on the market. Whoo!

    (Classic only adds to all this pain, and to be honest as of right now (while you don't HAVE to run Classic, and i for one don't) the app support is such that either you are a GNU-head who will wind up living in Terminal most of the time or you will have several very important tasks you will find can *only* be provided by Classic mac apps. Come july that may change, but still..)

  • It's on ebay waiting for you. :)
    I have one in a niche in my desk, actually...it's a great headless server.

    "If ignorance is bliss, may I never be happy.
  • *Step (NeXT and Open, dunno about OSX) use a true microkernel design with multiple servers.

    "If ignorance is bliss, may I never be happy.
  • It would be fairly simple to create a userfriendly piece of software if you knew the exact hardware that it was going to be running on. With everyone and their mother making hardware for the x86 platform there is no way to include drivers for software that hasn't been written yet.

    I've never used a Mac but I've never seen any really arcane hardware for it; everyone seems to use pretty much the same things across the board. Then again maybe I'm a biased x86 person and don't know what I am talking about.
  • > Okay, maybe, but I don't think the GNU kernel is even "ready" now!

    From what I read at fsf.org a couple of months back, it sounds like they are now adopting Linux for that role. As best I can recall the claim went something like -
    All we needed was a kernel. Linux provides that kernel.

    --
  • Linus wasn't even *asked* about OS-X, and he's not judging it whatsoever.

    He was asked specifically about Mach, and he says it's a piece of crap, and he has his reasons. This does not reflect on the quality of the OS, its stability, or anything else.. not even the particular implementation in OSX. It has only to do with MACH.

    In fact, that's one of the worst pieces of sensationalized journalism I've ever seen. They very effectively make it look like Linus says OS-X is a piece of crap, when in fact, he said no such thing.

  • by mindstrm ( 20013 ) on Friday April 06, 2001 @10:26AM (#310298)
    Just like every other argument of this nature.

    Risc -vs -Cisc
    Microkernel -vs- Monolithic kernel...

    There's a theoretical side, and a 'reality' side.

    Is RISC cooler? Yes. Does that mean a chip is better because it's risc? No.

    Are there pros and cons between microkernels and monolithic kernels? Certainly, of course there are. Does that mean that linux is better than OSX simply because one is monolithic? No.


  • The PPC port of gcc still has a ways to go in terms of optimizing for the platform. It's not bad, but not quite there yet. As for darwin, who knows... The darwin compiler is still the old next port of gcc. It's fairly well optimized, but it's a bit outdated. There are a few folks at apple working on bringing it up to date with current gcc, but that may be a while yet.
  • I was under the understanding that microkernels were very very good with multiprocessor machines, and since Apple has CPU speed problems this kernel would make sense.

    Silberschatz (Bell Labs) has posted an excellent older chapter (PDF) on Mach at

    http://www.bell-labs.com/topic/books/os-book/mach-dir/mach.pdf ---->or here if you're trusting [bell-labs.com]

  • by jilles ( 20976 ) on Friday April 06, 2001 @08:55AM (#310303) Homepage
    What are we supposed to think of this article? It lacks detail. In fact the only argument it puts forward is some vague remarks from Linus (probably taken out of context) way back in 1997. Apparently there's an issue with lack of memory protection? I can't believe that would be true, I certainly haven't read anything about it in the reviews of Mac OS X (and they weren't that positive altogether). This is typical for ZDNet: inflammatory title, some vague bits and pieces. No facts, no details, and yet it makes it onto Slashdot. I'd love to see how Mac OS X performs; I'm interested in benchmarks, comparisons, design issues. None of that can be found in this article.
  • by Flower ( 31351 ) on Friday April 06, 2001 @09:38AM (#310315) Homepage
    To be blunt, they all suck. Just look here. [securify.com]
  • while I agree with Linus' assertion wrt performance, today's processors are more than able to handle microkernels. The flaws that Linus derides were conscious design decisions to provide as much efficiency & protection as possible.

    Classic applications in OS X will, of course, not be protected by the nature of their environment, but they are still protected from affecting the OS as a whole. This was the whole point. Of course OS X is NOT perfect, but then again, neither were/are Linux kernels 0.xx, 1.xx, 2.2.xx, 2.4.x, and 2.5.xx. No OS will ever be perfect. They will always offer tradeoffs. Are WINE applications completely protected? If they are, does each single WINE-run application run in its own context? This increases the basic system requirements vs. running all WINE apps under a single controlling architecture, e.g. Classic in OS X.

    Monolithic kernels also increase the basic system requirements of a system. They require resources for each and every "feature" built in at compile time, and add in opportunities for poorly written code to run amok as more and more pieces of the OS are always running, and loaded at startup. Even the lauded module architecture is susceptible to this flaw, in the same way that microkernel architectures are, by dynamically loading needed/required components. There exists NO perfect OS/kernel design, and there never will be one. Selection of the base OS type is dependent upon the designer's needs and requirements, which at the time Linus originally developed Linux were efficiency and speed vs. current processor design and efficiency. (Same as other readers have pointed out in the CISC v. RISC debates.)

    This entire discussion should be dropped. If monolithic kernels meet your requirements, great; if not, use a microkernel design, a hybrid, or something entirely NEW! This should be the crux of this argument, as it is EXTREMELY important that we not fixate on and make a religion of a particular development environment, kernel, OS, whatever, etc. Diversity (not the PC kind) is what drives technological advancement, unless all of you truly wish to live in a Wintel world!

    Perhaps this entire article was really designed to provoke this kind of discussion...

    Side notes: the 80386 was not REALLY designed to handle a multitasking environment, and hence was not optimized to provide optimal context-switching performance. I hazard to guess (I have not studied this closely, mind you) that the 680X0, MIPS, SPARC, etc. architectures incur a lower penalty for context switches vs. x86 designs, as they were intentionally designed to operate in multitasking environments. i.e. let us also not be so architecturally centric in our discussions
  • Well, the poster could've read the thread a bit more to see that it probably wasn't Linus that posted it, but an April Fools joke by someone posing as Linus (the headers indicated it came from Washington State and was written using MS Outlook).

    Troll may not have been the best choice for it, but there was no "jumped to conclusions" reason either.
  • Steve Job's glittery eye candy provides little functionality

    I doubt you've used OSX for any extended period of time (if at all), otherwise you probably wouldn't be saying that. They really thought the UI out. There are a number of very clever improvements in the functionality of the UI.

    too many CPU cycles to render it.

    Yeah, it's really hard to put a bitmap on the screen. Come on, this is 2001. I think we can afford to "waste" a few cycles on drop shadows and transparencies. Not everything has to look like TVWM.

    The fact that it lacks stability means Apple has a piss-poor staff of people admining their programmers.

    Ummmm, I haven't seen a kernel panic on OSX yet. I ran the public beta for 7+ months. I'm running the GM on three separate machines. Not everyone is having problems. They might just be hardware issues that have to be ironed out. Heck, I've seen poor hardware cause Solaris to panic.

    There is NO EXCUSE for such a lame product!

    I agree. Fortunately, Mac OS X is a great OS.

    - Scott

    --
    Scott Stevenson
    WildTofu [wildtofu.com]
  • by TheInternet ( 35082 ) on Friday April 06, 2001 @10:23AM (#310320) Homepage Journal
    Thanks for explaining why the system requirements for Mac OSX are so ludicrous.

    Mac OS X's requirements are "so ludicrous" because it has to run two OSs at once: Mac OS X itself and Mac OS 9 via the "Classic" environment. If you're only running native apps (of which there are relatively few at this second, but many are coming this summer), then you'll probably do just fine on 64MB of RAM. Last time I checked, this is pretty comparable to GNOME or KDE.

    - Scott


    --
    Scott Stevenson
    WildTofu [wildtofu.com]
  • Great idea. Some suggestions:
    • Linux Ultra
    • Linux Non Plus Ultra
    • Linux Final Conflict
    • Linux Meta ('doh!)
    • TransMinux (please don't kill me Linus)
    • Linux Meta-Mecha
    • LinuXtreme
    • Omega
    • Linux Server Edition
    • Linux for Dummies
    • Linux X
    • LinuXP
    • Lucy
    • Linux Blue
    • Yellow-bellied rat bastard (ok, that one's not so good)
    • LinuX-men
    • Jurassic Linux
    • Episode II
    • Linux Classic
    • Micro$oft Sux!
    • Liquinux
    • Quinine (is not a quine)
    • Linux Air
    • C:\con\con Air
    • Evil Dead II
    • Linux - Ghost in the Machine
    • FreeBSD ('doh! somebody slap me! damn just kidding ;-)
    OK, not completely great. There's a reason why I'm not a comedy writer. But it's free. If you don't like it do better.

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.
  • From Tanenbaum...
    Anyone who says you can have a lot of widely dispersed people hack away on a complicated piece of code and avoid total anarchy has never managed a software project.

    This is just one of a million statements in that article that show how truly clueless the man was about the future of computing back then. But then again just about everyone else was too. If you said then that the free software movement would spawn the more business minded/less political open source movement, which is the big darling of the computing world today, they would have probably laughed at you.
  • I agree that managing a widely spread open source project is a nontrivial exercise, but Tanenbaum acts as if it is impossible. Clearly the success of many modern examples proves that while it may be damn difficult, it is NOT impossible.

    I also agree that the many eyes make bugs shallow thing is kinda crap. It's many USERS that make bugs easy to find, not many people staring at the source
  • I think the real problem with that theory is not that the idea doesn't work in principle, because I believe it could. The real problem is that few people ever actually look at the source to any program they download. If every Linux user pored over the source of every app they downloaded, sure, bugs would be smashed with amazing speed, but no one really cares that much. I know me personally, if I want a new app, I do an apt-get install, and if that doesn't work, download the tarball. Only if things don't work with ./configure; make do I start trying to figure out anything about how the program works.
  • Um... Win9x is dead -- see XP

    So what you're saying is that Win9x -- which hundreds of thousands (if not millions) of people use every day, and are likely to continue to use for years to come -- is dead because of an OS which hasn't been released yet?

    --

  • Shame it only runs on machines out of the reach of the average consumer and can't even burn CDs yet.

    You can get a new iMac for $899. To run OS X you'll need a memory upgrade (why Apple is still shipping 64 meg machines when OS X's minimum requirements are 128 is a mystery to me), but you're still under $1000, which is hardly out of reach for the average consumer. Yes, I know you can buy a motherboard+case+power supply+graphics card+CPU in the Wintel world for half the price, but that's not something normal people have the ability or inclination to do.

    There's a quite enlightened synopsis of the state of the OS wars here

    That article is incorrect in a number of areas, especially this line: "The market for Mac OS X is simply a subset of the overall Mac market because not all Mac users will upgrade." Many Unix and Windows users who would never have considered Macs before are very interested in Mac OS X. Just count the number of Slashdot articles on OS X in the past month.

  • If the problem with Unix is that it's 30 years old... Mach (and other kernels like it, such as pre-version-4 NT) is over 10 years old. As microkernels go, it's considered to be quite bloated nowadays.

    If you want to see a kernel that's truly modern in design, look at Chorus, QNX or BeOS.

  • by bugg ( 65930 ) on Friday April 06, 2001 @11:02AM (#310343) Homepage
    Sorry, but Tanenbaum was right on the money with this one. Ask anyone who's managed an open source project, or is managing one now (such as myself), and they will tell you that it's not that easy. It's not as if people can just jump in, submit a patch, and fix it all. That's a load of crap; most of the work in any project will be done by a handful of people, and that's that.

    The number of bugs found in things developed with a Linux-like development model also tells us that the "many eyes make all bugs shallow" philosophy is crap; most bugs aren't obvious programming errors, and if they are in your project, find new developers.

  • by rkent ( 73434 ) <rkent AT post DOT harvard DOT edu> on Friday April 06, 2001 @09:15AM (#310349)
    What?! Presumably you're saying that Linux wouldn't even have existed if the GNU kernel had been finished in 1992. Okay, maybe, but I don't think the GNU kernel is even "ready" now! Or if so, quite recently... So, yeah, it seems as if there was only an 8 year window of opportunity where Linux could have come about :)
  • You're forgetting Microsoft has the power of size and money, allowing it to dictate some standards (WinModems?), and since it is the de facto desktop on millions of computers around the world, developers must write good drivers for it. Not that they all do (ATi, especially under Win2000, which is what I'm writing from now).
  • The biggest benefit for microkernels is twofold:
    a) their small, minimalist design allows for fewer bugs, greater flexibility, etc., etc. (all the old arguments you hear about them)...
    and another point which I thought was very valid,
    b) the fact that very little code is in the kernel allows for better realtime performance. The less time that must be spent in kernel mode (where interrupts are disabled), the more time can be devoted to actual work and servicing devices.

    Some microkernels do a better job at efficiency than Mach (L3, for example). At some point, the hardware might actually get fast enough that the trade-off is nearly always worthwhile.

    L4 and EROS are even faster than previous-generation microkernels. And it's not a matter of hardware getting faster; it's damn fast right now, and the context-switching algorithms used are so much better that it's not even slow anymore. The problem is getting somebody to do a u-kernel right and to build an actual system on it. While the people working on L4 are doing great work, they're doing it for research purposes, so it's not going anywhere practical anytime soon.

    IMO, the overhead penalty involved with context switching in a u-kernel OS is totally worth it, especially for a desktop system. And QNX proves that it can be done right. (A toy sketch of the message-passing round trip in question follows this comment.)

    -----
    "People who bite the hand that feeds them usually lick the boot that kicks them"
  • particularly in tech-heavy places like Silicon Valley: SiliconValley.com Special Report: Linus Torvalds [mercurycenter.com].

    Even outside the Valley, he's gaining visibility: Time Digital's Digital 50: #4 Linus Torvalds [time.com].

    Anyone paying attention to technology (tech investors, business leaders, etc.) has at least heard of Linus Torvalds. The days of obscurity are gone. Just as people know who Jobs and Ellison are, they now know who Torvalds is.

  • "Linus has said a number of times that he's not out to change the world... or even do anything in particular for the world."

    I understand that absolutely. Linus started Linux and has worked on it for years out of self-interest. No doubt about it.

    What I'm getting at is that (unfortunately or not) Linus represents Open Source to the world at large. While most Linux aficionados know about Stallman, ESR, Perens, et al., the lightning rod for Open Source right now is Linus Torvalds.

    Whether he likes it or not, he represents Open Source to the non-hacker world. What I was getting at with my initial post is that even though the Open Source movement and Linus Torvalds are two separate and distinct entities, that is not the perception.

    The development of Linux shouldn't be run like a business, I wholeheartedly agree. But Torvalds has an effect far beyond just Linux. His straight up honesty is one of the things that has helped Linux come so far. It's one of the things that, by all accounts, makes him a good person.

    But Open Source as a movement is bigger than Linus, and if we take Linux out of the equation for a moment, we can see that without Linux, Open Source wouldn't really have a flagship "product", regardless of what Stallman says. I mean, Apache is great and all, but a successful operating system is an order of magnitude more important when you're selling Open Source as a viable process for building software.

    So now we have a situation where the de facto spokesman for Open Source might not be interested in taking on that role.

    Where does that leave Open Source advocates who do care about presenting Open Source as a viable option (with or without Linux)?

  • he's the representative in the mainstream public's eye of Open Source in general and Linux in particular.

    I think that's why the press has latched onto this story, and why some of us find it particularly interesting. We all know that hackers flame each other, and that for any technology to really matter, it has to originate from passionate individuals.

    But the rest of the non-geek world doesn't know this, isn't familiar with hacker culture and how ideas are discussed. The business world operates differently, and in the business world, attacks like his attack on Mach are often interpreted as signs of fear or weakness.

    It may be even more puzzling to the general public because they've been led to believe that the Open Source community has always been interested in allowing many different technologies to flourish in a relatively benign environment.

    I applaud Linus for the tremendous work he's done over the years in developing Linux and championing Open Source, but if you want to convince folks that Open Source is a kinder, gentler way to compute, saying Mach is crap might not be the best approach.

  • The paper gives actual performance measurements and supports Rick Rashid's conclusion that microkernel-based systems are just as efficient as monolithic kernels.

    Is this the same as this [microsoft.com] Rick Rashid: Senior Vice President of Microsoft Research?

    If so, interesting stuff. When I worked there I knew he was a bright guy (he did a good deal of the physics for Allegiance, the game) and was WAY too into Star Trek, but I didn't know he did OS design.

    -Jon

    Streamripper [sourceforge.net]

  • by the Man in Black ( 102634 ) <jasonrashaad.gmail@com> on Friday April 06, 2001 @08:50AM (#310388) Homepage
    I read the article at El Reg [theregister.co.uk] before it popped up here. The way this story is being presented by ZDNet, the Register, AND Slashdot is terribly misleading. As near as I can tell, Linus has nothing bad to say about OS X in particular -- just its use of the Mach microkernel, which he (and lots of other kernel hackers) has dismissed as crap.

    Reading anything else into it just turns the whole thing into a "Ooooh... the Linux guy HATES OS X! He must be threatened by it!" media frenzy. That single out-of-context quote, combined with "Linux has so far failed to bring UNIX to the desktop, which is what Apple believes OS X WILL do," makes it even worse.

    I say humbug.
  • by kreyg ( 103130 ) <(ac.wahs) (ta) (gyerk)> on Friday April 06, 2001 @03:13PM (#310391) Homepage
    I don't mean to diminish what Linus has done, but RMS has done a boatload more for the "revolution" than Linus did.

    ????? There can only be one revolutionary? When did this turn into The Highlander?
  • I hope he just accidentally hit the 'R' key in front of that last word.

    We should judge Linus by what he says and what he does. If you add everything up, that 'R' doesn't mean much against everything else. As far as I can tell, Linus actually is pretty humble.

    And you are assuming that Linus came up with that title, which may not be correct. The publishing company may have come up with the title, or at least the subtitle (the part that contains that 'R' word).

    steveha

  • The main reason that microkernels have not gained more acceptance in OS circles (although Windows NT is based on a microkernel design) is that the most popular implementation of the concept (Mach) is also one of the most inefficient and badly designed.

    In Jochen Liedtke's 1995 paper, On Microkernel Construction [nec.com], he points out that the myth of microkernels being more inefficient and thus slower than monolithic kernels arose because most benchmarks were done against the Mach microkernel. He showed that Mach performed poorly at both address-space switching and context switching, and that it failed to take the underlying processor into account and optimize for it. As a test, Liedtke wrote a microkernel called L3 in which a getpid call that took about 800 cycles of kernel time on Mach took 15-100 cycles on L3.

    He also disputed the notion that using a microkernel leads to memory degradation due to a large number of cache misses, and dissected a number of benchmarks from Chen and Bershad's paper The Impact of Operating System Structure on Memory System Performance [cmu.edu].

    Read his paper if you get the chance, it's very enlightening.

    The ideas behind microkernel design are very sound, and some of them have found their way into most mainstream OSes (Linux kernel modules can be seen as a take on the microkernel architecture). Basically, having an OS where everything is hot-swappable, including memory management, process scheduling and device drivers, is kind of cool. Also, the fact that the usual OS-level APIs can be swapped out, meaning you can have both a POSIX layer and a Win32 layer on the same OS, is rather nice (a toy illustration of that idea follows).
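
    The "swap out the OS-level API" idea is easy to picture with a toy table of function pointers. The following is purely illustrative C (none of it is real Mach, NT, or Linux code, and all the names are made up): each "personality" is just a dispatch table that maps its own conventions onto one underlying core service, and switching or adding an API layer means installing a different table.

        /*
         * Toy sketch of API "personalities": one core service, two API layers
         * that translate their own conventions onto it.  A real microkernel
         * would run each personality as a separate server process; here it is
         * collapsed into one program for brevity.
         */
        #include <stdio.h>

        /* the single underlying "core" service */
        static long core_current_task_id(void) { return 42; }

        /* each personality is a swappable table of entry points */
        struct personality {
            const char *name;
            long (*get_current_id)(void);
        };

        /* a POSIX-flavoured layer: hands the id back directly, getpid-style */
        static long posix_getpid(void) { return core_current_task_id(); }

        /* a Win32-flavoured layer: different name, different convention
         * (pretend it returns a "handle" biased by a constant) */
        static long win32_get_current_process_id(void) { return core_current_task_id() + 0x1000; }

        static struct personality posix_layer = { "POSIX", posix_getpid };
        static struct personality win32_layer = { "Win32", win32_get_current_process_id };

        int main(void)
        {
            /* "hot swapping" an API layer just means calling through a different table */
            struct personality *layers[] = { &posix_layer, &win32_layer };

            for (int i = 0; i < 2; i++)
                printf("%s personality says current id = %ld\n",
                       layers[i]->name, layers[i]->get_current_id());
            return 0;
        }

    In a real microkernel those tables would be separate server processes (a POSIX server, a Win32 server) talking to the kernel by messages, which is roughly how NT's original environment subsystems and Mach's single-server Unix layers were structured.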

  • by SpanishInquisition ( 127269 ) on Friday April 06, 2001 @08:59AM (#310417) Homepage Journal
    Linus declares: "OS X kicks ass, Mach is a superior architecture, Linux has a lot of catching up to do, heck, why not switch right away, I find the dalmatian iMac particularly tempting since I started smoking crack"
    --
  • by small_dick ( 127697 ) on Friday April 06, 2001 @10:29AM (#310418)
    "...linux is monolithic, and I agree that microkernels are nicer..."

    "...From a theoretical (and aesthetical) standpoint linux looses..."

    Thank you.

    quotes [http]

  • by tz ( 130773 ) on Friday April 06, 2001 @09:51AM (#310420)
    I give it about a year. 6 months if there are a few geeks on a mission.

    Basically it would take binfmt_macho to be written (a rough sketch of the registration plumbing appears at the end of this comment), maybe an extended hfs (if we don't have an osxvfsshim module), and the syscall translation shim.

    And how long until it goes the other way? Add elf support (plus elf shared libs) and the linux syscall shim to Darwin? Maybe the same time.

    At some point they will cross, and then Darwin will be subjected to natural selection.

    I already can run the Gimp on OS X.

    It is a bit silly to have a modular kernel and then always have to include the same modules. Meanwhile, monolithic linux has modules that install devices, filesystems, almost everything (until I had some hiccups with late 2.3, my kernel had a ramdisk and cramfs and loaded the rest; I really should revisit that).

    At some point, probably 2-3 years, Darwin and Linux will either merge or become so cross compatible that one might all but disappear.
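
    For the curious, here is a very rough sketch of the registration plumbing a hypothetical binfmt_macho module might need on a 2.4-ish kernel. Everything hard (parsing the load commands, mapping segments, dyld-style linking) is stubbed out, and the exact struct layout and load_binary signature varied between kernel versions, so treat the details as illustrative rather than as working kernel code.

        /*
         * Illustrative only: registering a new binary format handler with a
         * 2.4-era Linux kernel.  The real work of loading a Mach-O image is
         * waved away; this shows just the hook into exec().
         */
        #include <linux/module.h>
        #include <linux/binfmts.h>
        #include <linux/init.h>
        #include <linux/errno.h>

        #define MACHO_MAGIC 0xfeedface  /* 32-bit Mach-O magic number */

        static int load_macho_binary(struct linux_binprm *bprm, struct pt_regs *regs)
        {
            /* bprm->buf holds the first bytes of the file being exec()ed */
            unsigned int magic = *(unsigned int *) bprm->buf;

            if (magic != MACHO_MAGIC)
                return -ENOEXEC;        /* not ours; let the next binfmt try */

            /*
             * A real implementation would parse the load commands here, set up
             * the address space, map the segments and start the new image.
             */
            return -ENOEXEC;            /* sketch only: always decline for now */
        }

        static struct linux_binfmt macho_format = {
            NULL,                       /* next: managed by the binfmt list */
            THIS_MODULE,                /* owning module */
            load_macho_binary,          /* load_binary */
            NULL,                       /* load_shlib */
            NULL,                       /* core_dump */
            0                           /* min_coredump */
        };

        static int __init init_macho_binfmt(void)
        {
            return register_binfmt(&macho_format);
        }

        static void __exit exit_macho_binfmt(void)
        {
            unregister_binfmt(&macho_format);
        }

        module_init(init_macho_binfmt);
        module_exit(exit_macho_binfmt);
        MODULE_LICENSE("GPL");

    The interesting part, of course, is everything this sketch leaves out, plus the syscall translation shim mentioned above so that BSD/Mach system calls from the Mach-O binary end up somewhere sensible.
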
  • by Nohea ( 142708 ) on Friday April 06, 2001 @08:58AM (#310433)
    That's the funniest part of the article: Steve Jobs thinks Linus would be more interested in working on OS X than in working on Linux. The assumption is that he would be more interested in hurting Microsoft than in actually having full control in creating an OS that has merits in its own right. I guess it was worth a try, but it was purely Machiavellian.

    I understand companies like Apple, Sun and Oracle wanting to compete with Microsoft, but I don't like it when CEOs are just slapping at each other and getting into pissing matches.

    Plus, can you imagine Linus working for Jobs?

  • by AntiPasto ( 168263 ) on Friday April 06, 2001 @09:01AM (#310457) Journal
    Hrmmm... my bets are on the christmas tree.

    ----

  • by wazzzup ( 172351 ) <astromac@f[ ]mail.fm ['ast' in gap]> on Friday April 06, 2001 @09:30AM (#310463)
    It's no secret that Linus doesn't like microkernel architectures. What is really going on here is the press trying to create some buzz, get some hits on their web sites and sell some books. Many sites' headlines are saying that Torvalds said OS X is crap, not the Mach kernel, which, of course, is false. Read the article.

    If you were Apple, the decision between going micro or monolithic was a no-brainer, in my opinion. Ignoring the Tevanian-Mach connections, going monolithic with OS X would have meant putting too many eggs in one basket given the shaky CPU ground they're standing on. Mach gives them a lot more flexibility to jump the Motorola ship if forced to.
  • by skilletlicker ( 232255 ) on Friday April 06, 2001 @08:48AM (#310502)
    Linus vs. Tanenbaum [www.dina.dk]
  • by Spy Hunter ( 317220 ) on Friday April 06, 2001 @09:38AM (#310541) Journal
    No, Linus is not jealous of OS X. He most likely doesn't care at all about its user-friendliness. What he doesn't like is the fact that it uses a Mach-based kernel, and he happens to hate Mach.

    Linus' comments in the article have to do ONLY with the Mach microkernel. The GUI is irrelevant to him. He didn't create Linux to be user-friendly, so he has no reason to envy OS X for being user-friendly.
