Linus vs Mach (and OSX) Microkernel 394
moon_unit_one writes "Linus Torvalds apparently doesn't think much of the Mach microkernel and says so in his new autobiography called Just for Fun: The Story of an Accidental Revolutionary. See what he says in this ZDNet UK article." The age-old Mach debate, resurrected once again. Anyone have the old Minix logs with Linus? That's a fun one too.
Best quote from Tanenbaum (Score:2)
Let's look at his examples of 'the future' of operating systems:
RC4000 -- wow, now there's a winner.
Amoeba -- research toy.
Chorus -- ditto
Windows NT -- saw very little success until they ditched the microkernel and moved networking and GDI/User into the kernel.
Mach -- are there any mach based OSes that don't run just one monolithic uber-server under mach?
How about some examples he didn't give (they weren't around yet, or were off his radar for various reasons):
QNX -- in the top 5 in the RTOS market; its attempt to push into the desktop market (by a bizarre scam involving the 5 remaining Amigans) seems to have petered out.
BeOS -- desperately searching for a market. Claim to fame is scheduling latency, but has been outdone there by both QNX (obviously) and Linux.
As if we needed more reasons not to ask academics for real world advice.
Re:Envy? (Score:2)
SMP and reliability.
In a 'clean' monolithic kernel, the entire kernel is locked when processing. This is perfectly fine in a uniprocessor case. But for multiprocessor systems, it sucks bigtime because your other processors are sitting in a spin-loop waiting for the kernel lock.
Now the obvious way around that is to create exceptions to the rule. But that quickly becomes a nightmare to maintain. I'm sure this is a major reason the 2.4 kernel was so very late.
Now if you purport that we'll be stuck with uniprocessor or 2-way processors for a long time, that's very shortsighted. I see 16-way systems moderately commonplace in 4-5 years. Linus doesn't really deal with huge n-way systems, so that's probably where his lapse comes into play.
Now the reliability aspect of a microkernel comes from its rigid policies - making code proofs and verification easier to do. The spaghetti-code monolithics that I've seen, heh; as with verifying any spaghetti code, good luck!
It should also be noted that there is not a black-and-white distinction between a monolithic kernel and a microkernel. Any well-engineered design should have trade-offs to perform well in many environments. For instance, some microkernels give a pass to the graphics code running in kernel space. And monolithics will implement some message passing thing so that the network drivers play nice with everyone else.
Tom
Re:Linus vs. Tanenbaum (Score:2)
Re:Best quote from Tanenbaum (Score:2)
I don't even think OPENSTEP is an OS -- it's the Objective C environment that NeXTSTEP used, ported to other operating systems.
Re:severe lack of information (Score:2)
Oh, they tried; they tried for the better part of 10 years to make that happen, and finally gave up, settling on Rhapsody for new apps and BlueBox (now known as Classic) for legacy apps.
The stroke of genius that is allowing Apple to perform the impossible for a second time (i.e. a major architectural change without more than trivial backwards-compatibility issues) is Carbon: a subset of the old API which can be multithreaded and memory protected. So, rather than ask Adobe to rewrite Photoshop, Illustrator, et al. from the ground up, they only have to tweak the existing code a little, recompile, and they've got an OS X app. I'm sure that in time Photoshop will be rewritten with the Cocoa API, as will many other major apps (ProTools, with its...memory issues...almost certainly requires it) but Carbon puts a stepping stone in the middle of that river so that everyone can migrate at a speed more in line with their comfort level.
Don Negro
Re:Mach is known as a bad microkernel implementati (Score:5)
At the time of the Tanenbaum/Torvalds debate, the primary CPU in use was still the 386 -- which took 300 to 500 clock cycles to task switch. No, that's not a typo -- three to five *hundred*.
The situation has improved enormously -- Pentium-class CPUs only take ~50 clock cycles to task switch. Of course, this is disregarding any TLB misses which are incurred.
The task-switch overhead is what caused NT to move the graphics calls into kernel space in 4.0 (causing a large improvement in performance and a huge decrease in reliability). The hardware cost of task switching is what kills "true" microkernel OSs on performance. And don't bother to whine about "poor processor design" -- people (including me) want an OS that runs well on the hardware they have.
The best operating systems are the "mutts" -- not true microkernels nor true monolithic kernels. Which is how Linux evolved -- kernel modules for many if not most things, virtual file systems, etc.
Re:severe lack of information (Score:2)
Apparently there's an issue with lack of memory protection? I can't believe that would be true, I certainly haven't read anything about it in the reviews of mac os X
OS X has memory protection. The old MacOS doesn't. The problems will be with old apps that 'got away' with memory violations from time to time (due to dumb luck in some cases) and seemed OK. Now, they will die on a SEGV. That's a good thing since they were broken from the start, but will take some fixing.
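To make that concrete, here's a tiny hypothetical example (mine, not from any real app) of the kind of bug that could limp along without memory protection but dies instantly under OS X:

    #include <stdio.h>

    int main(void)
    {
        int *p = (int *)0x1000;  /* wild pointer into memory we don't own */
        *p = 42;                 /* old MacOS: silent corruption, app "works";
                                    OS X: the process dies on SEGV right here */
        printf("never reached under memory protection\n");
        return 0;
    }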
Re:MacOS/X is from BSD4.4, not Mach (Score:2)
Not that there's a sharp line once you have loadable kernel modules.
There's still a sharp line. Modules in Linux have very little to do with a microkernel. In a microkernel, the equivalent of modules would be more like daemons.
Re:Mach is known as a bad microkernel implementati (Score:2)
If all the apps on an OS were written in a language that didn't allow arbitrary pointer arithmetic (like Java, etc.), and the compiler was trusted (and not buggy), then separate address spaces between apps (and the kernel) are not necessary.
Someone somewhere will find a way to assemble badly behaving code (deliberately or not). We might as well call passwords and firewalls unnecessary because all we really need is an internet where people don't go where they're not invited and don't try to crash servers.
Re:Envy? (Score:2)
In a 'clean' monolithic kernel, the entire kernel is locked when processing. This is perfectly fine in a uniprocessor case. But for multiprocessor systems, it sucks bigtime because your other processors are sitting in a spin-loop waiting for the kernel lock.
The issue of lock granularity is not really a micro vs. monokernel issue. Linux 2.0.x had the big kernel lock, and SMP suffered a good bit because of it. The newer kernels are much more finely grained.
A microkernel doesn't make that issue go away. Taking a sub-system out of the kernel and making it a user process doesn't eliminate the need to access its data serially; it just shifts the problem to the scheduler. In effect, it turns every spinlock into a semaphore. The Linux kernel uses semaphores now for cases where the higher cost is offset by the benefit of having the other CPU(s) do something besides spin. The spinlocks are used in cases where the cost of trying to take a semaphore (failing and going in the queue) is higher than the cost of spinning until the lock is released.
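A minimal sketch of that trade-off, using the 2.4-era primitive names (illustrative only, not code from any real driver):

    #include <linux/spinlock.h>
    #include <asm/semaphore.h>

    static spinlock_t short_lock = SPIN_LOCK_UNLOCKED;
    static DECLARE_MUTEX(long_sem);
    static int counter;

    void bump_counter(void)       /* held a few instructions: cheaper to spin */
    {
        spin_lock(&short_lock);
        counter++;
        spin_unlock(&short_lock);
    }

    void rebuild_tables(void)     /* held a long time: sleep on a semaphore */
    {
        down(&long_sem);          /* other CPUs get scheduled to do real work */
        /* ... lengthy update ... */
        up(&long_sem);
    }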
However, all of that is incidental to the defining difference between a mono and microkernel.
Now if you purport that we'll be stuck with uniprocessor or 2-way processors for a long time, that's very shortsighted. I see 16-way systems moderately commonplace in 4-5 years.
I purport no such thing. While I do expect to see more multi-processor systems, I don't think they'll be SMP.
Now the reliability aspect of a microkernel comes from its rigid policies - making code proofs and verification easier to do. The spaghetti-code monolithics that I've seen, heh; as with verifying any spaghetti code, good luck!
Microkernel does tend to force cleaner and sharper divisions and interfaces. The same thing can and SHOULD be done in a monokernel, but temptation to bend the rules can be quite strong.
It should also be noted that there is not a black-and-white distinction between a monolithic kernel and a microkernel.
Agreed completely! There's nothing I hate more than a system design with an academic axe to grind. There are a few places in the Linux kernel where a very microkernel-esque approach is taken. CODA comes to mind. Also NBD. We also see the opposite in khttpd.
In that sense, the monokernel is itself a compromise on a spectrum that runs from completely monolithic systems (mostly embedded systems, Oracle on bare metal, etc.), through systems with no memory protection (on purpose!) and cooperative multi-tasking, to monokernels and microkernels. I'm honestly not sure where to place exokernels on that spectrum.
I stand by the thought that the biggest potential advantage to microkernels is in cluster systems.
Re:Envy? (Score:2)
a) their small, minimalist design allows for fewer bugs, greater flexibility, etc, etc(all the old arguments you hear about them)...
No argument about the flexibility. I look forward to one day having enough free time to play with and enjoy that feature.
The minimalist design allowing for fewer bugs is somewhat of a red herring. By focusing on a smaller piece of code, you'll naturally find fewer bugs there vs. a larger piece of code (all else being equal). Once you add in the services, the bug count is back up to where it was.
There is some benefit that at least the bugs are isolated from each other, and thus more limited in the damage they can do. However, if the service is a critical one, the system will still be dead.
b) the fact that very little code is in the kernel allows for better realtime performance. The less time that must be spent in kernel mode(where interrupts are disabled), the more time can be devoted to actual work and servicing devices.
Interestingly, RTLinux [rtlinux.org] is a good example of that approach. It makes the entire kernel pre-emptable for the real time services.
L4 and EROS kernels are even faster than previous generation microkernels.
L4 is a good example of improvements in the area of microkernels. It's on my list of things to check out when I get that mythical free time. EROS is also very interesting. A microkernel with persistent objects to replace nearly everything! It's also on my list.
IMO, the overhead penalty involved with context switching in a u-kernel OS is totally worth it, especially for a desktop system. And QNX proves that it can be done right.
Our conclusions are not too far apart. I'm not so much saying that microkernel is bad or that it won't work, I just don't think its time is here just yet, and Mach in its current form isn't it. I do look forward to the Hurd being ready for prime time. The 1.0 version will probably perform poorly vs. Linux, but it will provide a good platform to build from in the Free Software world. (yep, it's on my list :-)
Re:Envy? (Score:2)
I dunno, the article (mentioned by someone in an earlier comment) lists L3 as handling round-trip IPC in 10 usec on a 50MHz 486.
The equation is quite different for a 486 than a PIII. The real cost isn't in simply executing the instructions, but in effects from flushing the TLB and cache misses. On a 486 it's not as big of a problem since the memory compared to the CPU isn't as dismally slow as with a PIII (for example).
How fast do system calls need to be? (Really, I don't know) With "low end" machines at least 10 times that speed today, this doesn't seem like very much overhead at all.
Faster is better :-) It really depends on the application. Some number crunching apps read their data in one big gulp, then spend many hours just crunching without a system call in sight. IO intensive apps on the other hand will spend most of their time in system calls. Probably the worst hit in a microkernel environment are apps that are heavy on IO, but must actually look at the data they are moving. A database is a good example. That's why Oracle is so interested in running on the bare metal.
and where every cycle counts, you probly want EVERYTHING in the kernel, FULLY debugged. Not that that's possible or anything ;)
Absolutely! :-)
Re:Envy? (Score:2)
Is this not less of an issue with newer CPUs? Just look at the rate at which the CPU/bus speed ratio is increasing. Much of a modern CPU's time is spent waiting for data from memory (granted, this depends on the application). If the CPU has to spend some extra time switching between rings, then it shouldn't have much of an overall impact on speed - much of its time is spent waiting anyway. This will only increase in the future.
It's the context switches that kill, and the CPU to memory ratio makes it worse. Switching contexts churns the cache and causes more misses. One of the biggest optimization opportunities in a microkernel is minimising the cache and TLB effects of the task switches. A really tempting approach is to make the service run in the client process's context but in a different ring. That looks a lot like a highly modular monokernel.
Re:Mach is known as a bad microkernel implementati (Score:2)
I'm not even sure it is harder than making a CPU where the MMU can't be bypassed in user mode. Except for the one little detail that most people want more than one programming language (thus more than one compiler), or that p-code is very slow.
Really, it is harder. The MMU has the advantage that it can see exactly where things are RIGHT NOW, and make a decision. Code verifiers have to look at the initial state and prove that given that initial state, the code, and any possible input, it cannot generate an access that violates the segmentation. What you end up with is an expert system vs. a natural intelligence. In the case of accidental violations, it's back to "Nothing is foolproof because fools are so ingenious."
Re:Mach is known as a bad microkernel implementati (Score:2)
Point taken, except the compiler (not code verifier) can choose to insert bounds checks, it just won't be "as fast".
The problem with that is that any user might compile a program, then hand patch the bounds check out of the bytecode (or just use an unapproved compiler). That might be malicious, or just someone who wants their code to run faster and is "sure" there aren't any bugs that require the bounds checking.
The MMU on a modern processor is a complex beast. Most of the complexity is for performance rather than bounds checking. It at least has the benefit that the basic concept of 'just in time' bounds checking is conceptually a lot simpler than checking by proof.
Re:Envy? (Score:5)
Evidence to the contrary need not be presented. At least, not until someone comes up with a reason for microkernels being bad/wrong/sucky.
Good point! The biggest problem for microkernels is that they have to switch contexts far more frequently than a monokernel (in general).
For a simple example, take a user app making a single system call. In a monokernel, the call is made. The process transitions from the user to the kernel ring (ring 3 to ring 0 for x86). The kernel copies any data and parameters (other than what would fit into registers) from userspace, handles the call, possibly copies results back to userspace, and transitions back to ring 3 (returns).
In a microkernel, the app makes a call, switch to ring 0, copy data, change contexts, copy data to daemon, transition to ring 3 in the server daemon's context, server daemon handles the call, transitions to ring 0, data copied to kernelspace, change contexts back to the user process, copy results into userspace, transition back to ring 3 (return).
Those extra transitions and context switches have a definite cost. A CPU designed with that in mind can help (a lot!), but on current hardware, it can be a nightmare. That's why a monokernel tends to perform better in practice.
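In rough C-flavored pseudocode (purely illustrative -- every helper name below is hypothetical, not any real kernel's API), the two paths look like this:

    /* Monolithic kernel: one ring round trip. */
    long mono_call(long nr, void *arg)
    {
        enter_ring0();                 /* trap into the kernel         */
        copy_from_user(kbuf, arg);
        long r = handle(nr, kbuf);     /* work done right here, ring 0 */
        copy_to_user(arg, kbuf);
        return leave_ring0(r);         /* back to ring 3               */
    }

    /* Microkernel: the same call becomes an IPC round trip. */
    long micro_call(long nr, void *arg)
    {
        enter_ring0();
        copy_from_user(msg, arg);
        switch_context(server);        /* TLB/cache churn starts here  */
        leave_ring0_into(server);      /* daemon handles it in ring 3  */
        enter_ring0();                 /* daemon replies: trap again   */
        switch_context(client);
        copy_to_user(arg, reply);
        return leave_ring0(status);
    }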
Kernel modules in Linux are nothing like a microkernel. They are just bits of a monokernel that are dynamically linked at runtime. In order to be a microkernel, each module would be a process and would reside in some ring other than 0 (possibly 3, but not necessarily). In order to fully realise the microkernel module, users would be able to do the equivalent of insmod and get a private interface. Other users might or might not choose to access that module, and might insmod a different version with the same interface. This would not present a security problem.
There may well be places where the overhead is a worthwhile trade-off for the features (clustered systems might be a good example). Cray takes an approach like that (not exactly that, just like that) in mkUnicos for the T3E (possibly others, but I've not used those).
Some microkernels do a better job at efficiency than Mach (L3 for example). At some point, the hardware might actually get fast enough that the trade-off is nearly always worthwhile. Even then, monokernels will still have a place for situations where every cycle counts. I don't anticipate those situations ever going away.
The olde Minix logs (Score:2)
Open Sources: Voices from the Open Source Revolution, Appendix A [oreilly.com].
---
Re:Mach is known as a bad microkernel implementati (Score:2)
Re:Why not work for them? (Score:2)
here we go again... (Score:4)
There was one in particular that stands out.
It was called "The Exterminator". The cover had this arnold schwarzeneger looking guy in fatigues with this huge flame thrower, trying to blow torch the camera taking his picture.
I used to look at that picture and think, "damn, that guy must be trying to start a flame war."
I get the same feeling when taco posts stories like this.
Re:severe lack of information (Score:2)
"Somehow I feel that Linus' viewpoint would be slightly different than the average web reporter reviewing MacOS X. There's a big difference between educated users and Uber-developers/Kernel hackers."
Linus doesn't have any deeper insight on this than any semi-informed correspondent. Almost every review has pointed out these problems, by noting that "Classic" apps don't take advantage of OS X's "advanced features", such as protected memory and pre-emptive multi-tasking.
"Also, I'm sure the reviewers have mentioned lack of support for CD/DVD stuff... That's what this article infers is being affected by the memory protection."
That is not what the article implies, although it does seem to be what you inferred. The lack of support for DVD playback has nothing to do with memory protection. It has to do with development time: there wasn't enough. There is no technical reason why OS X cannot support DVD playback and CD burning; these are just features which weren't added in time for the March 24th launch date.
Re:Linus is entitled to his opinion (Score:2)
No, not at all. Linus' dislike of Mach has been known for *years*. When I first heard his complaints, I wasn't entirely convinced. Mach appeared to offer so much, but as time has progressed, I've found myself siding with Linus more and more. An OS is the one place where you really don't want extreme flexibility at the expense of performance.
Re:severe lack of information (Score:2)
Er, what?
If it is an old app it will run under classic with the other old apps, and have no memory protection.
If it is a new app one assumes it will be debugged under OSX and not get through QA with a ton of memory violations. (that's not guaranteed, it could use the Carbon API, and have been debugged under OS9, and "assumed" to work under OSX -- but I think you can check the "run under classic" box for them if you really really want).
Re:Mach is known as a bad microkernel implementati (Score:2)
Clearly it isn't easy to do it right, because so many JVMs have screwed up. Burroughs has been doing it for a long time, and even they had some failures 20 or so years back. But it isn't impossible.
I'm not even sure it is harder than making a CPU where the MMU can't be bypassed in user mode. Except for the one little detail that most people want more than one programming language (thus more than one compiler), or that p-code is very slow.
Re:Envy? (Score:2)
Actually the local computer stores sell Wintel boxes for $599. I forget how much RAM, but they have a 1GHz CPU, which sounds better to Joe Sixpack than anything Apple offers in the iMac (unless they are sucked in by the look). Of course you will have to add a monitor to the Wintel box, but that is still cheaper than the iMac.
That's for sure. That's why I own a Mac for the first time in my life. My OSX PowerBook G3 makes a better web-surfing toy than my Vaio with Win98 did.
P.S. I find it amusing that Apple's spell checker (which the web browser can use) knows what a Mhz is, but not a Ghz. :-)
Re:Mach is known as a bad microkernel implementati (Score:2)
Point taken, except the compiler (not code verifier) can choose to insert bounds checks, it just won't be "as fast". (after all, the MMU is merely a bounds checker with some extra grot to send "well formatted" messages on violations)
On the other hand, the MMU has a ton of difficulties as well. If you look at pipeline diagrams of modern CPUs, the MMU tends to get a full pipestage, or two. It is one of the more complex parts of modern CPUs. The transistor count is behind that of a decently sized cache, but (including the TLB) ahead of everything else. The raw complexity is probably behind the modern reordering systems, but ahead of pretty much everything else. It is insanely complex.
Re:Mach is known as a bad microkernel implementati (Score:2)
On the Burroughs systems and the IBM AS/400 machines, executables are not editable at all. I don't recall if root can edit them or not, or if they are actually uneditable, or if they stop being executable when they are changed (much like the set-uid bit on Unix OSes).
Exactly. The complexity in compiler based bounds checking is also all in the attempts to make it fast. Otherwise it would just add two (or more) CMP and Bcc instructions (or whatever the equivalent for the CPU is) before each memory reference.
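Conceptually, the naive transformation is nothing more than this sketch (the check written out by hand in C):

    #include <stdlib.h>

    /* What "base[i]" becomes when a naive checking compiler
       inserts the CMP + Bcc pair before the access. */
    int checked_load(const int *base, unsigned len, unsigned i)
    {
        if (i >= len)      /* the CMP                                  */
            abort();       /* the branch to a violation handler        */
        return base[i];    /* the access the programmer actually wrote */
    }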
Yes it does. Although it has to rely on a normally fairly complex OS to set the MMU up right (and with some CPUs handle TLB shoot down, and cache invalidation).
Re:Hugely misleading (Score:2)
But MacOS X uses a modified Mach uK. Many of the OS services (ahem, filesystem) are directly provided by the BSD subsystem and skip the uK altogether. The criticism of Mach has to do with these same issues, which is why Apple decided not to use Mach for every service. BSD doesn't just run as a strict server for Mach, it often bypasses it -- and as such, the opponents of Mach can still be vocal without attacking Apple's particular implementation.
MacOS X is a hybrid, much like some are prone to call the Linux kernel itself.
The wheel is turning but the hamster is dead.
god... loose vs. lose (Score:2)
---
That's grammar (Score:2)
---
Meanwhile, in Lilliputia. (Score:2)
"They're unnatural and generally suck monkey ass." he said. "I think they also beat their children."
The response from the Big-endians was quite clear- "The Little-endians are insane and represent a threat to the very fabric of our society."
In a poll of 2000 randomly selected individuals, none had an opinion on the subject, with a ±4% margin of error.
Re:Envy? (Score:2)
Monolithic kernels are simply easier to build, and it follows that the less code you have, the fewer bugs you will have.
Also the way that microkernels have many subsystems that they need to send/receive messages from adds another layer of complexity.
Re:Envy? (Score:2)
Warmonger? Have you never heard the saying "Never burn bridges behind you" as a metaphor for not pissing off the people you work for when you move on to another job? Maybe English isn't your first language or this metaphor isn't common in your country...
Sheesh.
Re:Envy? (Score:2)
Point.
I was just giving a common justification for the use of monolithic kernels.
Re:Hugely misleading (Score:2)
Steven E. Ehrbar
Re:severe lack of information (Score:2)
Not at all; you're misreading the article. Linus is of the opinion that the way Mac Classic apps work was a problem for the people developing OS X, which is why he didn't want to work on OS X.
Steven E. Ehrbar
Re:someone should beat you with a cluestick (Score:2)
--
Re:Why not work for them? (Score:2)
Linus working for Jobs? (Score:2)
Steve Jobs: FUCK FUCK GODDAMN FUCK FUCK SHIT WHORE KILL MICROSOFT FUCK FUCK FUCK
Linus: Hey Jobs, You're a piece of crap
Re:Linus working for Jobs? (Score:2)
So, Mr. Smarty Pants, let's see you pack more literary stuff into a 3-line story than I have. Good luck.
It's Revolutionar*Y* (Score:2)
He might just be an egotist, but we'll need to look a bit farther than the title of his book to prove it.
Re:severe lack of information (Score:2)
Can you read? This is why I said "modulo bugs and crappy drivers." Obviously no OS is crash-proof, but some do a pretty good job of preventing apps from bringing the system down, while others (Mac OS and Win9x) are much easier for one app to crash.
Re:MacOS/X is from BSD4.4, not Mach (Score:2)
Re:severe lack of information (Score:2)
The fact that VMWare runs on Linux doesn't make Linux a bad OS. It's precisely the same issue. From an OS design standpoint, classic apps don't exist. There just happens to be a userland application called "classic."
So I'd say Linus is still pretty ignorant about the whole issue.
Re:Quick Point (Score:2)
However, in some cases classic starts up automatically when a classic app gets launched, and short of removing the actual program I haven't found a way to change that.
This is particularly annoying because often it will pick a classic app when you open a document-- some PDFs open with classic Acrobat even though I've got Preview, and in the beta at least
Re:MacOS/X is from BSD4.4, not Mach (Score:3)
Darwin is as much a BSD as any of the other flavors. It's built atop Mach because that's how Next did it, and they continued in that tradition. Mach is a microkernel, atop which rests a layer that implements a BSD interface.
I'm pretty sure BSD *does* run on PPC hardware. NetBSD certainly does, and I think FreeBSD does as well. And there's certainly no reason that Apple couldn't have ported one of them to PPC hardware if they'd wanted to. The reason they didn't is that they were starting with Next technologies, so they used the Next kernel.
Re:severe lack of information (Score:5)
The solution Apple hit upon was the next best thing-- put them in their own memory space all together, but protect them from other non-classic apps. There really isn't much you can do about this. OS X *is* a real Unix and is therefore (modulo kernel bugs and bad drivers) crash-proof. The classic environment crashes a fair amount, but with a little refinement it's not likely to be any worse than OS 9.
Linus is either ignorant about the way Mac apps work or unrealistic about Apple's ability to dictate to its developers. Apple has an enormous installed base of applications designed for OS 9, and if they were to throw those out, there would be essentially no software for the new OS. If, on the other hand, they had tried to design an OS that would support classic apps natively, they likely would have had to sacrifice system stability and performance to do it. Not only that, but if classic apps are supported natively, there's no incentive for developers to carbonize their apps, and therefore it's unlikely that the new APIs will be widely used. This would greatly cripple Apple's ability to move the platform forward in the future.
This plan is a sensible middle path. It allows a migration path to fully buzzword-compliant Carbon and Cocoa apps in the future. With any luck, all the important apps will have Carbon versions by this time next year, and by the time OS XI (or whatever) ships, they can disable Classic by default and run only native apps.
Look how long it's taken Microsoft to wean the Win 9x line off of 16-bit apps-- they *still* have some hooks down to the lower levels of the OS 6 years after Win95 debuted. This is undoubtedly one of the causes of the OS's crappy stability and sluggish performance. Had they adopted a "virtual box" approach as Apple has, they'd probably have a much better OS today.
Re:Can we quit pretending...? (Score:2)
The preceding link is courtesy of Google [google.com]. They mirror and grep for you.
I agree with the original poster -- for all intents and purposes, Linus is a moron. Before you moderators start handing out (-1, blah) points, hear me out. Sure, he's a great programmer and has made a great contribution in the form of the Linux kernel, but it's fairly clear from listening to him that he doesn't understand any developments in systems research since about 1960, with the possible exception of copy-on-write. Don't get me wrong -- I've used Linux since 1994, and it's better than most desktop operating systems. However, Linus' absolutely hubristic rejection of (barely) modern concepts which he doesn't understand, such as
It's time for Linus to get out of the way and let someone else serve as "CVS with taste" for the kernel -- after that, Linux has a chance of becoming a rock-solid, lean, and efficient kernel.
Re:Envy? (Score:2)
Re:severe lack of information (Score:2)
No it isn't. NT uses an application layer called WOW to translate Win16 API calls into Win32 API calls. 16-bit apps essentially run as full peers under NT (although interapp communication stuff like OLE doesn't work between 16 and 32).
Windows 9x on the other hand runs a bastardized 16/32-bit kernel that pretty much keeps the whole of Windows 3.1 intact inside of the rest of the OS.
A better comparison for Classic is how OS/2 ran Windows 3.1 apps by booting a virtual DOS machine. Both approaches even have the same window-border problem. Another comparison is the Mac on Unix environment in A/UX, and I'd expect that's exactly where the hooks for Classic came from.
--
Re:severe lack of information (Score:2)
This attitude, of course, just means that all the back-compatible cruft gets pushed into glibc (which Linus has repeatedly referred to as "crap"). Of course, the Linux kernel is virtually unusable without glibc (or some other crappy variant), so you end up with "perfection" with a big pile of crap stuck on top of it.
This sort of user/kernel duality has even filtered down to the attitudes of the relatively non-technical advocate crowd. A typical featuritis Linux-based OS is not a very good Unix system, in a lot of people's opinions, and the response usually is "don't look there, look over here at our perfection-oriented kernel". I think the lesson of NT and OS X is that kernel internals are really a minor part of the OS's total utility.
--
Re:severe lack of information (Score:2)
OS/2 created a virtualized x86 machine emulation, booted a modified DOS 5.0 on that machine, and then booted a relatively unmodified Windows on top of DOS. (There were some hooks into WINOS2 so that DDE could work between environments, and there was a special seamless-windows video driver, but otherwise, as the blue box versions showed, it was pretty much virgin Win 3.1.)
AFAICT, Mac OS X does the same. You can even see a window showing the MacOS booting.
IMO, the NT approach is better (not to get in the way of your random OS/2 advocacy!). OS/2 wasn't really a "Better Windows than Windows" because OS2WIN had all the same flaws as regular Windows (and there were a fucking shitload of flaws there), plus some added incompatibility. The "features", such as the ability to do hardware-oriented stuff (like comm software) from WINOS2, were usually so fubared that you'd be crazy to try.
Because NT translates Win16 API calls into Win32 API calls, a whole mess of bugs in the underlying system are resolved, and furthermore the WOW environment made problems like the "resource pools" just go away. Compatibility and speed were a little worse, but back when I was running NT 3.5 on a low-end Pentium, I really had fewer problems with 16-bit apps than I did with OS/2 on similar hardware.
--
Re:Linus vs. Tanenbaum (Score:2)
> Of course 5 years from now that will be
> different, but 5 years from now everyone will
> be running free GNU on their 200 MIPS, 64M
> SPARCstation-5.
Yep, and I picked it up in my flying car!
How humble (Score:3)
I don't mean to diminish what Linus has done, but RMS [stallman.org] has done a boatload more for the "revolution" than Linus did.
--
Wade in the water, children.
Re:Linus is entitled to his opinion (Score:2)
On the other hand, Mac OS X has been in the news a lot recently, and the PR staff of the book's publisher may have encouraged an article like this for the PR value. Controversy sells.
Re:big talk for little man (Score:2)
Uhh...Yeah, actually he can.
His patches might not get into the main tree, but he is free to fork the whole damn thing if he likes.
There's nothing stopping you from adding to any of the BSDs, you just won't necessarily get into the official tree.
Linux is the same way, pretty much.
I kinda like the committee better, as personal bias ("I think foo sucks and won't consider any ideas from foo, regardless of how good they may be") is less of a factor in deciding what goes in or not.
C-X C-S
os x and RAM (Score:2)
I for one upgraded to 384 MB before the final version came out, and no doubt the issues I had with the public beta running on a 128 MB machine are softened by now, but let me just explain something to you: Everything you see in Mac OS X is being rendered through a display layer known as Quartz-- essentially Display PDF. Everything on the screen is a PDF object. This is a bit of a performance hit, but not so much as you'd get under, say, Display Postscript (since Postscript is actually its own little stack-based programming language and requires a somewhat computationally expensive runtime). Quartz gives you the potential for some simply AMAZING things (for the moment the main sign of its existence is the omnipresent drop shadows and semitransparent menus in OS X, but the whole concept is EXTREMELY powerful and I think that five years from now most of the huge selling points of OS X will be things made possible by Quartz), and in the end the whole system is probably a good bit more sane than, say, X.
However THE PROBLEM WITH THIS IS that Quartz, as (in the words of the developers) a "per-pixel mixer, not a per-pixel switcher", offers FREE BUFFERING to all windows, and almost all apps accept. What this means is that when the display server draws a pixel, it can't just figure out which window owns the pixel and ask the window to draw it; it has to keep track of each window independently by itself, consider which pixels from which windows to take into consideration (and some of these windows may have elements smaller than a pixel...), and independently mix them appropriately at every pixel on the screen.
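Per-pixel mixing is conceptually just repeated alpha blending, back to front, across every window layer covering the pixel. A toy sketch of the idea (nothing to do with Quartz's actual internals):

    struct rgba { float r, g, b, a; };

    /* Mix n window layers at one pixel, back to front. A per-pixel
       "switcher" would instead just return the topmost opaque layer. */
    struct rgba mix_pixel(const struct rgba *layer, int n)
    {
        struct rgba out = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < n; i++) {
            float a = layer[i].a;
            out.r = layer[i].r * a + out.r * (1.0f - a);
            out.g = layer[i].g * a + out.g * (1.0f - a);
            out.b = layer[i].b * a + out.b * (1.0f - a);
            out.a = a + out.a * (1.0f - a);
        }
        return out;
    }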
The performance hit from this really isn't as bad as one would think, but **because Quartz stores the image of pretty much *every window on the screen***, *including* the ones you can't see, in RAM, it does mean you need RAM. Because if you don't supply enough RAM, OS X will start dipping into virtual memory. SO: If you are going to run OS X, GET A BUNCH OF RAM, and MAKE SURE that you have a huge block of unfragmented free disk space for OS X to use. And if you don't do these things, don't go around blaming the slow system responsiveness on the microkernel or the Objective C messaging architecture or the complex multitiered layout of the OS. And if you don't want to pay for the hardware needed to make all this stuff usable, well... Be isn't *quite* bankrupt yet.
I for one am going to sit around and wait for apple to get some Multiprocessor machines out on the market. Whoo!
(Classic only adds to all this pain, and to be honest, as of right now (while you don't HAVE to run Classic, and I for one don't) the app support is such that either you are a GNU-head who will wind up living in Terminal most of the time, or you will have several very important tasks you will find can *only* be handled by Classic Mac apps. Come July that may change, but still..)
Re:Linus vs. Tanenbaum (Score:2)
I have one in a niche in my desk, actually...it's a great headless server.
"If ignorance is bliss, may I never be happy.
Re:Best quote from Tanenbaum (Score:2)
"If ignorance is bliss, may I never be happy.
Re:Envy? (Score:2)
I've never used a Mac, but I've never seen any really arcane hardware for it; everyone seems to use pretty much the same things across the board. Then again, maybe I'm a biased x86 person and don't know what I am talking about.
Re:Linus vs. Tanenbaum (Score:2)
From what I read at fsf.org a couple of months back, it sounds like they are now adopting Linux for that role. As best I can recall the claim went something like -
--
This has nothing to do with OSX. (Score:2)
He was asked specifically about Mach, and he says it's a piece of crap, and he has his reasons. This does not reflect on the quality of the OS, it's stability, or anything else.. not even the particular implementation in OSX. It has only to do with MACH.
In fact, that's one of the worst pieces of sensationalized journalism I've ever seen. They very effectively make it look like Linus says OS X is a piece of crap, when in fact, he said no such thing.
Bah (Score:3)
RISC vs. CISC.
Microkernel vs. monolithic kernel...
There's a theoretical side, and a 'reality' side.
Is RISC cooler? Yes. Does that mean a chip is better because it's RISC? No.
Are there pros and cons between microkernels and monolithic kernels? Certainly, of course there are. Does that mean that linux is better than OSX simply because one is monolithic? No.
Re:How long until we run OSX apps on LinuxPPC? (Score:2)
The PPC port of gcc still has a ways to go in terms of optimizing for the platform. It's not bad, but not quite there yet. As for darwin, who knows... The darwin compiler is still the old next port of gcc. It's fairly well optimized, but it's a bit outdated. There are a few folks at apple working on bringing it up to date with current gcc, but that may be a while yet.
Re:Mach information (Score:2)
Silberschatz (Bell Labs) has posted an excellent older chapter (PDF) on Mach at
http://www.bell-labs.com/topic/books/os-book/mach-dir/mach.pdf ---->or here if you're trusting [bell-labs.com]
severe lack of information (Score:5)
Re: (Score:2)
Re:o no, not again!! (Score:5)
kernel fun (Score:2)
Classic applications in OS X will, of course, not be protected by the nature of their environment, but they are still protected from affecting the OS as a whole. This was the whole point. Of course OS X is NOT perfect, but then again, neither were/are Linux kernels 0.xx, 1.xx, 2.2.xx, 2.4.x, and 2.5.xx. No OS will ever be perfect. They will always offer tradeoffs. Are WINE applications completely protected? If they are, does each single WINE-run application run in its own context? This increases the basic system requirements vs. running all WINE apps under a single controlling architecture, e.g. Classic in OS X.
Monolithic kernels also increase the basic system requirements of a system. They require resources for each and every "feature" selected at compile time, and add in opportunities for poorly written code to run amok, as more and more pieces of the OS are always running and loaded at startup. Even the lauded module architecture is susceptible to this flaw, in the same way that microkernel architectures are, by dynamically loading needed/required components. There exists NO perfect OS/kernel design, and there never will be one. Selection of the base OS type is dependent upon the designer's needs and requirements, which at the time Linus originally developed Linux were efficiency and speed vs. the processor design and efficiency of the day. (Same as other readers have pointed out in the CISC v. RISC debates.)
This entire discussion should be dropped. If monolithic kernels meet your requirements, great; if not, use a microkernel design, a hybrid, or something entirely NEW! This should be the crux of this argument, as it is EXTREMELY important that we not fixate on and make a religion of a particular development environment, kernel, OS, whatever, etc. Diversity (not the PC kind) is what drives technological advancement, unless all of you truly wish to live in a Wintel world!
Perhaps this entire article was really designed to provoke this kind of discussion...
Side notes: the 80386 was not REALLY designed to handle a multitasking environment, and hence was not optimized to provide optimal context-switching performance. I hazard to guess (I have not studied this closely, mind you) that the 680X0, MIPS, SPARC, etc. architectures impose a lower penalty for context switches vs. x86 designs, as they were intentionally designed to operate in multitasking environments. i.e. let us also not be so architecturally centric in our discussions
Re:No suprise here (Score:2)
Troll may not have been the best choice for it, but there was no "jumped to conclusions" reason either.
but what? (Score:2)
I doubt you've used OSX for any extended period of time (if at all), otherwise you probably wouldn't be saying that. They really thought the UI out. There are a number of very clever improvements in the functionality of the UI.
too many CPU cycles to render it.
Yeah, it's really hard to put a bitmap on the screen. Come on, this is 2001. I think we can afford to "waste" a few cycles on drop shadows and transparencies. Not everything has to look like TVWM.
The fact that it lacks stability means Apple has a piss-poor staff of people admining their programmers.
Ummmm, I haven't seen a kernel panic on OSX yet. I ran the public beta for 7+ months. I'm running the GM on three separate machines. Not everyone is having problems. They might just be hardware issues that have to be ironed out. Heck, I've seen poor hardware cause Solaris to panic.
There is NO EXCUSE for such a lame product!
I agree. Fortunately, Mac OS X is a great OS.
- Scott
--
Scott Stevenson
WildTofu [wildtofu.com]
Armchair critic? (Score:5)
Mac OS X's requirements are "so ludicrous" because it has to run two OSs at once: Mac OS X itself and Mac OS 9 via the "Classic" environment. If you're only running native apps (of which there are relatively few at this second, but many are coming this summer), then you'll probably do just fine on 64MB of RAM. Last time I checked, this is pretty comparable to GNOME or KDE.
- Scott
--
Scott Stevenson
WildTofu [wildtofu.com]
Re:Envy? (Score:2)
Boss of nothin. Big deal.
Son, go get daddy's hard plastic eyes.
Re:Linus vs. Tanenbaum (Score:2)
Anyone who says you can have a lot of widely dispersed people hack away on a complicated piece of code and avoid total anarchy has never managed a software project.
This is just one of a million statements in that article that show how truly clueless the man was about the future of computing back then. But then again just about everyone else was too. If you said then that the free software movement would spawn the more business minded/less political open source movement, which is the big darling of the computing world today, they would have probably laughed at you.
Re:Linus vs. Tanenbaum (Score:2)
I also agree that the many eyes make bugs shallow thing is kinda crap. It's many USERS that make bugs easy to find, not many people staring at the source
Re:Linus vs. Tanenbaum (Score:2)
Re: (Score:2)
Re:severe lack of information (Score:2)
So what you're saying is that Win9x -- which hundreds of thousands (if not millions) of people use every day, and are likely to continue to use for years to come -- is dead because of an OS which hasn't been released yet?
--
Re:Envy? (Score:2)
You can get a new iMac for $899. To run OS X you'll need a memory upgrade (why Apple is still shipping 64 meg machines when OS X's minimum requirements are 128 is a mystery to me), but you're still under $1000, which is hardly out of reach for the average consumer. Yes, I know you can buy a motherboard+case+power supply+graphics card+CPU in the Wintel world for half the price, but that's not something normal people have the ability or inclination to do.
There's a quite enlightened synopsis of the state of the OS wars here
That article is incorrect in a number of areas, especially this line: "The market for Mac OS X is simply a subset of the overall Mac market because not all Mac users will upgrade." Many Unix and Windows users who would never have considered Macs before are very interested in Mac OS X. Just count the number of Slashdot articles on OS X in the past month.
Mach is old technology (Score:2)
If the problem with Unix is that it's 30 years old... Mach (and other kernels like it, such as pre-version-4 NT) is over 10 years old. As microkernels go, it's considered to be quite bloated nowadays.
If you want to see a kernel that's truly modern in design, look at Chorus, QNX or BeOS.
Re:Linus vs. Tanenbaum (Score:3)
We can also tell by the number of bugs found in things developed with a development model like Linux that the "many eyes make all bugs shallow" philosophy is crap as well; most bugs aren't obvious programming errors, and if they are in your project, find new developers.
Re:Linus vs. Tanenbaum (Score:3)
OT: Re:Envy? (Score:2)
Re:Envy? (Score:2)
a) their small, minimalist design allows for fewer bugs, greater flexibility, etc, etc(all the old arguments you hear about them)...
and another point which I thought was very valid,
b) the fact that very little code is in the kernel allows for better realtime performance. The less time that must be spent in kernel mode(where interrupts are disabled), the more time can be devoted to actual work and servicing devices.
Some microkernels do a better job at efficiency than Mach (L3 for example). At some point, the hardware might actually get fast enough that the trade-off is nearly always worth while.
L4 and EROS kernels are even faster than previous-generation microkernels. And it's not a matter of hardware getting faster; it's damn fast right now, and the context-switching algorithms used are so much better that it's not even slow anymore. The problem is getting somebody to do a u-kernel right and to build an actual system on it. While the people working on L4 are doing great work, they're doing it for research purposes, so it's not going anywhere practical anytime soon.
IMO, the overhead penalty involved with context switching in a u-kernel OS is totally worth it, especially for a desktop system. And QNX proves that it can be done right.
-----
"People who bite the hand that feeds them usually lick the boot that kicks them"
Linus is better known than you think (Score:2)
Even outside the Valley, he's gaining visibility: Time Digital's Digital 50: #4 Linus Torvalds [time.com].
Anyone paying attention to technology - tech investors, business leaders, etc., has at least heard of Linus Torvalds. The days of obscurity are gone. Just as people know who Jobs and Ellison are, they now know who Torvalds is.
Linus and Linux vs. Linus and Open Source (Score:2)
I understand that absolutely. Linus started Linux and has worked on it for years out of self-interest. No doubt about it.
What I'm getting at is that (unfortunately or not) Linus represents Open Source to the world at large. While most Linux aficionados know about Stallman, ESR, Perens, et al., the lightning rod for Open Source right now is Linus Torvalds.
Whether he likes it or not, he represents Open Source to the non-hacker world. What I was getting at with my initial post is that even though Open Source movement and Linus Torvalds are two separate and distinct entities, that is not the perception.
The development of Linux shouldn't be run like a business, I wholeheartedly agree. But Torvalds has an effect far beyond just Linux. His straight up honesty is one of the things that has helped Linux come so far. It's one of the things that, by all accounts, makes him a good person.
But Open Source as a movement is bigger than Linus, and if we take Linux out of the equation for a moment, we can see that without Linux, Open Source wouldn't really have a flagship "product", regardless of what Stallman says. I mean, Apache is great and all, but a successful operating system is an order of magnitude more important when you're selling Open Source as a viable process for building software.
So now we have a situation where the de facto spokesman for Open Source might not be interested in taking on that role.
Where does that leave Open Source advocates who do care about presenting Open Source as a viable option (with or without Linux)?
Linus is no longer "just a hacker" (Score:4)
I think that's why the press has latched onto this story, and why some of us find it particularly interesting. We all know that hackers flame each other, that for any technology to really matter, it has to originate from passionate individuals.
But the rest of the non-geek world doesn't know this, isn't familiar with hacker culture and how ideas are discussed. The business world operates differently, and in the business world, attacks like his attack on Mach are often interpreted as signs of fear or weakness.
It may be even more puzzling to the general public because they've been led to believe that the Open Source community has always been interested in allowing many different technologies to flourish in a relatively benign environment.
I applaud Linus for the tremendous work he's done over the years in developing Linux and championing Open Source, but if you want to convince folks that Open Source is a kinder, gentler way to compute, saying Mach is crap might not be the best approach.
Rick Rashid? (Score:2)
Is this the same as this [microsoft.com] Rick Rashid: Senior Vice President of Microsoft Research?
If so, interesting stuff. When I worked there I knew he was a bright guy (did a good deal of the physics for Allegiance - the game) and was WAY too into Star Trek, but I didn't know he did OS design.
-Jon
Streamripper [sourceforge.net]
Hugely misleading (Score:5)
Reading anything else into it just turns the whole thing into a "Ooooh...the Linux guy HATES OS X! He must be threatened by it!" media frenzy. That single out-of-context quote, combined with "Linux has so far failed to bring UNIX to the desktop, which is what Apple believes OS X WILL do", makes it even worse.
I say humbug.
Re:How humble (Score:3)
????? There can only be one revolutionary? When did this turn into The Highlander?
Re:How humble (Score:2)
We should judge Linus by what he says and what he does. If you add everything up, that 'R' doesn't mean much against everything else. As far as I can tell, Linus actually is pretty humble.
And you are assuming that Linus came up with that title, which may not be correct. The publishing company may have come up with the title, or at least the subtitle (the part that contains that 'R' word).
steveha
Mach is known as a bad microkernel implementation (Score:5)
In Jochen Liedtke's 1995 paper, On Microkernel Construction [nec.com], he points out that the myth that microkernels are inherently less efficient, and thus slower, than monolithic kernels arose because most benchmarks were done against the Mach microkernel. He stated that Mach performed poorly at both address-space switching and context switching, and also failed to take specific processors into account and optimize for them. As a test, Liedtke wrote a microkernel OS called L3, in which he showed that a getpid call which took 800 cycles of kernel time on Mach took 15-100 cycles on L3.
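You can get a crude feel for that kind of number yourself on x86 with the cycle counter. A quick sketch (mine, not Liedtke's benchmark methodology; results vary wildly by CPU and kernel):

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
        enum { N = 100000 };
        unsigned long long start = rdtsc();
        for (int i = 0; i < N; i++)
            syscall(SYS_getpid);   /* bypass any libc caching of getpid() */
        printf("~%llu cycles per getpid\n", (rdtsc() - start) / N);
        return 0;
    }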
He also disproved the notion that using a microkernel leads to memory degradation due to a large number of cache misses, and dissects a number of benchmarks from Chen and Bershad's paper The Impact of Operating System Structure on Memory System Performance [cmu.edu].
Read his paper if you get the chance, it's very enlightening.
The ideas behind microkernel design are very sound, and some of them have found their way into most mainstream OSes (Linux kernel modules can be seen as a take on the microkernel architecture). Basically, having an OS where everything is hot-swappable, including memory management, process scheduling and device drivers, is kind of cool. Also, the fact that the usual OS-level APIs can be swapped out, meaning you can have both a POSIX layer and a Win32 layer on the same OS, is rather nice.
Can you imagine? (Score:4)
--
Linus Quotes from 1992 (Score:3)
"...From a theoretical (and aesthetical) standpoint linux looses..."
Thank you.
quotes [http]
How long until we run OSX apps on LinuxPPC? (Score:3)
Basically it would take binfmt_macho to be written, maybe an extended hfs (if we don't have an osxvfsshim module). And the syscall translation shim.
And how long until it goes the other way? Add ELF support (plus ELF shared libs) and the Linux syscall shim to Darwin? Maybe the same time.
At some point they will cross, and then Darwin will be subjected to natural selection.
I already can run the Gimp on OS X.
It is a bit silly to have a modular kernel and then always have to include the same modules. Meanwhile monolithic linux has modules that install devices, filesystems, almost everything (Until I had some hiccups with late 2.3, my kernel had a ramdisk and cramfs, and loaded the rest, I really should revisit that).
At some point, probably 2-3 years, Darwin and Linux will either merge or become so cross compatible that one might all but disappear.
Re:Why not work for them? (Score:3)
I understand companies like Apple, Sun, and Oracle wanting to compete with Microsoft, but I don't like it when CEOs just slap at each other and get in pissing matches.
Plus, can you imagine Linus working for Jobs?
Re:Linus vs. Tanenbaum (Score:3)
----
This is no surprise (Score:4)
If you were Apple, the decision to go micro or monolithic was a no-brainer, in my opinion. Ignoring the Tevanian-Mach connections, going monolithic with OS X would be putting too many eggs in one basket given the shaky CPU ground they're standing on. Mach gives them a lot more flexibility to jump the Motorola ship if forced to.
Linus vs. Tanenbaum (Score:3)
Re:Envy? (Score:5)
Linus' comments in the article have to do ONLY with the Mach microkernel. The GUI is irrelevant to him. He didn't create Linux to be user-friendly, so he has no reason to envy OS X for being user-friendly.