A Comparison of the Solaris, Linux, and FreeBSD Kernels
v1x writes "An article at OpenSolaris examines three of the basic subsystems of the kernel and compares implementation between Solaris 10, Linux 2.6, and FreeBSD 5.3.
From the article: 'Solaris, FreeBSD, and Linux are obviously benefiting from each other. With Solaris going open source, I expect this to continue at a faster rate. My impression is that change is most rapid in Linux. The benefits of this are that new technology has a quick incorporation into the system.'"
How can they? (Score:5, Insightful)
Invitation to flamewar? (Score:1, Insightful)
50MHz processors?!?!?! Linux 2.0.27?!?!?! (Score:3, Insightful)
Re:When will OSI licenses really start working? (Score:2, Insightful)
Re:When will OSI licenses really start working? (Score:3, Insightful)
Hopefully never... Ports rocks!
Re:wishfull thinking (Score:2, Insightful)
Re:The Answer is Clear (Score:4, Insightful)
Re:How can they? (Score:3, Insightful)
It's not written to provoke flamewars. (Score:3, Insightful)
Re:It's not written to provoke flamewars. (Score:3, Insightful)
Re:Serious things were missed.... (Score:5, Insightful)
1) Solaris has more abstraction for architecture dependent code than Linux, and is therefore slower but more portable.
2) There are also more people working on Linux, leading to faster development but not as consistently high quality.
See, that wasn't so bad. Overall, the author concludes that the three OSs do things quite similarly and stand to benefit from each other in the future.
Re:When will OSI licenses really start working? (Score:3, Insightful)
I used to teach Linux API and kernel internals at various companies (HP, IBM, and Avaya). At the time, a student said something similar, so we decided to do some quick benchmarking (2.4 vs. Windows 2000). It was what I would expect: Linux was very slow on thread creation compared to NT. OTOH, Linux blew away Windows on process creation. The simple answer is that a Linux process is little more than a thread with its own memory. In contrast, Windows (as of 2000) had optimized threads, but had not focused on process creation. Where it is at now, well....
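That sort of micro-benchmark is easy to reproduce today. Here is a minimal sketch in Python (my own toy version, not the code from the course, and POSIX-only because it relies on os.fork):

```python
import os
import threading
import time

def time_process_creation(n: int) -> float:
    """Time n fork()/waitpid() round trips (POSIX only)."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:        # child: do nothing and exit immediately
            os._exit(0)
        os.waitpid(pid, 0)  # parent: reap the child
    return time.perf_counter() - start

def time_thread_creation(n: int) -> float:
    """Time n thread create/join round trips."""
    start = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 500
    print(f"{n} processes: {time_process_creation(n):.3f}s")
    print(f"{n} threads:   {time_thread_creation(n):.3f}s")
```

This measures interpreter overhead as much as the kernel, so treat the absolute numbers with suspicion; the original comparison would have been done in C against the raw syscalls.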
Re:When will OSI licenses really start working? (Score:3, Insightful)
FreeBSD Ports (Score:5, Insightful)
Wow! Is that all it takes to get a +5 Insightful now? So I guess this will be modded Troll or Flamebait.
I can never really understand why FreeBSD Ports is considered better than Debian's APT. Perhaps it's because people look at "package installation" as the only use for these tools, whereas I use them for "package management". Everything comes down to the packagers who make and maintain the packages, and the quality of the tools they use to do it. I've used FreeBSD, Gentoo, and finally Debian for servers and desktops. Based on my experience, APT is a more elegant solution to package management than FreeBSD Ports or Gentoo Portage, for the following reasons:
Package Building
Although building Debian packages can be a bit overwhelming, especially for newcomers, it really shines once you have installed debhelper, dh-make, dbs, dpatch, and lintian. What's really great about the Debian tooling is the automatic runtime dependency resolution prior to packaging the final debs. After building the package, and before it gets packed into a deb, a dependency checker is run over it that automatically figures out the runtime dependencies for you. On FreeBSD Ports and Gentoo Portage, you have to figure out and specify runtime dependencies yourself.
The "Dusty Deck" Problem
When I install a package using ports or emerge, it will also install the dependencies. But most of the time you will essentially be installing from source (and yes, I am aware that Ports and Portage also have pre-built packages). When you do that, Ports and Portage will build and install the build-time dependencies of the package you are installing. Now, that's fine if those build-time dependencies are also needed at run time. But some dependencies are only used at build time and will never be used again until you upgrade the packages that depend on them. You can decide to remove them after the build, but then when you update the package they will be downloaded, rebuilt, and installed again. You eventually grow so tired of this cycle that you just leave these build-time-only packages alone, and they continue to accumulate on your disk, mostly wasting space.
This is probably the reason why there are "developer" packages for libraries that contain only the header files and the link libraries. Once you're done with building, you can uninstall the developer package. Try doing that under Ports or Portage. Oh, wait! You can't. The runtime and build-time dependencies are all in one package.
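The "dusty deck" effect can be sketched with a toy dependency model in Python (the package names and the runtime/build split below are made up purely for illustration):

```python
# Toy model: each package lists its runtime deps and its build-only deps.
PACKAGES = {
    "myapp":      {"runtime": {"libfoo"}, "build": {"libfoo-dev", "gcc"}},
    "libfoo":     {"runtime": set(),      "build": {"gcc"}},
    "libfoo-dev": {"runtime": set(),      "build": set()},
    "gcc":        {"runtime": set(),      "build": set()},
}

def closure(pkg, kinds):
    """Transitively collect dependencies of the given kinds."""
    seen, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        for kind in kinds:
            for dep in PACKAGES[p][kind]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
    return seen

# Source-based install (Ports/Portage style): build deps land on disk too.
installed_from_source = {"myapp"} | closure("myapp", ("runtime", "build"))
# Binary install (deb style): only runtime deps are pulled in.
installed_binary = {"myapp"} | closure("myapp", ("runtime",))

print(sorted(installed_from_source - installed_binary))  # → ['gcc', 'libfoo-dev']
```

Everything in that final set is the "dusty deck": packages a binary install would never have put on disk, and which nothing needs again until the next rebuild.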
Package Uninstallation
Now this is where Ports and Portage, IMHO, really suck. When I uninstall a package from my system I want it gone. apt-get remove --purge and a properly packaged deb will do that for you. Ports and Portage will leave "package cruft" on your system. "Package cruft" can be anything from stale config files to build-time dependency packages. You will have to track down and remove those things manually.
I know these three points can be resolved on both Ports and Portage if the packages are done correctly. This is where APT has an advantage over Ports and Portage: being able to make a proper package. On Debian, with the tools I mentioned above installed, a package maintainer's life is greatly simplified.
Ports (and Portage) do rock, but only if all you ever care about is package installation and not package management (package building, installation, and uninstallation).
Re:wishfull thinking (Score:4, Insightful)
Saying that the NT kernel and Windows (the Win32 subsystem) are the same thing is like comparing any *nix kernel with all the X Window stuff stuck on top...
NT is NOT what most people consider Windows; however, it does POWER Windows.
Also, the NT kernel is not too shabby considering its design age, and that it came from Microsoft. Go pick up Inside Windows NT, or a current edition that deals directly with the NT kernel and not the Win32 subsystem.
Re:The Answer is Clear (Score:4, Insightful)
Ok, so as a USER, why would you care about MySQL? Because as a SYSTEM ADMINISTRATOR, what I really care about is stability and ease of administration. Once performance reaches "good enough", I couldn't give a shit about improved performance. Hardware is cheap; a $5,000 1U MP system can blast through just about anything I'm going to care about.
But I want it to work today, tomorrow, next week, and next year. I want to reboot when I plan to (doing a kernel update, for instance) and only then. It had better be stable, and shouldn't be all that noticeable in my day-to-day schedule. Doing updates had better not involve half a day of compiling, because downtime is !@#!@ expensive.
But performance? Can you name a SINGLE INSTANCE where you chose your O/S based on some performance graph? Unless your technology depends on Windows (I feel for you if it does), any of the *nixes out there are "good enough" to do just about anything up to the very high end. (Linux/BSD/Solaris/AIX/OSX/etc.) Even at the very high end, it's unlikely that choosing the "worst" OS will cost more than switching to the "best" OS!
In the end, it really doesn't matter all that much. Pick what you like and roll with it. Performance is way down the totem pole; pay attention to stability, security, licensing, and compatibility with your specific needs. Then worry about performance!
Re:Toy computers need not apply (Score:3, Insightful)
Wow, a troll on slashdot, how novel. And an intelligent one as well, again how novel... *gag
Considering NT was scaling across multiple processors before Linux even existed, this is a bit of a rich statement. (Especially since Linux didn't even consider SMP until 1996, by which point it was a mature feature of NT.)
Considering there is a 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 processors. Just a guess, though.
Also, considering the desktop versions of Windows XP support 2 processors standard, and NT has for years and years, I might suggest it even has an edge on some OSes where SMP is only just becoming realistic. (XP does dual processors with HT, effectively managing 4 virtual CPUs, and this is the desktop edition for the average Joe.)
Now, should we talk about how it wasn't until the later 2.4 versions of Linux that process affinity even became stable in an SMP world? (According to Intel, AMD, and other people trying to create real-world Linux SMP solutions.)
Or we could talk about hotplug of memory and processors, which Linux still does not support the way NT and Solaris do. (And on Linux you even get the fun of reconfiguring if you want to swap processors during downtime.)
Call NT a Toy OS all you want; if you actually knew a little about the NT architecture, and not just the 'Windows buzz' that RUNS ON NT, then you probably wouldn't be laughed at so easily.
*Cheers.
Re:did anyone read the articles full of "facts"? (Score:4, Insightful)
Try a commercial app on anything but redhat.
Open source is great and all, but there are specialised apps for which there simply is no viable alternative. And if you're bound to a commercial app, then in most cases you're either going to run redhat or get no support from your app vendor. Yes, you can likely get it to work, but as soon as you run into an application bug, you're screwed - reinstall on redhat or you'll probably get no support.
Modular mining for example...
smash (linux user/promoter of 9 years).
Re:SOLARIS 10 IS A MICROKERNEL OS (Score:3, Insightful)
Re:Filesystems (Score:5, Insightful)
2. UFS2 is better in just about every way. The issue of journaling vs. soft-updates has been rehashed a million times over, and soft-updates are simply better.
The link you give is (a) written by the people who wrote soft updates, and (b) compares outdated journaling file systems, which are essentially strawmen. Reiser looks great in the papers on their own website too, but that's not really an objective comparison either. If you want to put this to rest, you'll need to run a benchmark against more modern journaling file systems (in particular, those with wandering logs).
The one issue journaling had in its favor was fsck times, and UFS2 with its "background fsck" has eliminated that problem. A system based on UFS2 will be up and running far faster than a ReiserFS journaled system, due to reiserfsck taking much longer to complete.
The point of a journaling file system is that you don't need to run fsck (except in the case of a hard drive failure, of course). Mounting a several-hundred GB Reiser partition takes a few seconds, even if it was not cleanly unmounted. How much faster do you want that to be?
Last I heard from some of the authors, the main drawbacks keeping softupdates from being used elsewhere were that it was more invasive to the VFS than a journaling file system, and had extremely bad memory usage behavior for specifically crafted (but unrealistic) benchmarks. I'd be interested in knowing if there has been some progress on these since 2000. Journaling file systems have come a long way in that time.
Re:Toy computers need not apply (Score:5, Insightful)
I find it rather rich that trolls like you have the guts to call others trolls, using rhetoric to prove your point and hoping nobody will notice.
Linux 2.0 came out in June 1996 with SMP support. So if SMP was not considered in Linux until 1996, you are basically saying that in less than 6 months, SMP was implemented in Linux better than in NT, where you say it was a mature feature?!
I would call that a feat. And I wonder how you can call SMP a mature feature in NT, when it was not scaling any better than in Linux, where (as you implied) it was implemented in less than 6 months.
Considering there is 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 processors, just a guess though.
I never heard of any 128-way SMP version of Windows. I have heard of a custom, secret implementation that supposedly does that, that's all. But no version of Windows commercially sold actually does that. And the SMP versions of Windows scale very poorly.
Also considering the desktop versions of WindowsXP support 2 processors standard
Again, that's false. Home edition does not support 2 processors standard (HT is not SMP, nor is it 2-processor support); the Pro edition does. Stop lying.
and NT has for years and years, I might suggest that it even has an edge on some OSes that SMP is just becoming realistic. (XP does Dual Processors with HT, effectively managing 4 virtual CPUs and this is the desktop edition for the average Joe.)
Continuing your lies, huh? HT is not SMP; stop your nonsense. SMP was realistic in Linux way before NT, despite SMP being implemented in NT first: that speaks volumes about the NT OS, and tends to prove the GP's point. Again, WinXP Pro supports your 4 virtual CPUs, not Home. The distinction is important if you want to make a Linux comparison, because standard desktop editions of Linux distros come with SMP support. For Windows, the standard desktop edition is XP Home, which has no such support. It could, but it doesn't.
Now should we talk about how it hasn't been until later versions 2.4 of Linux that in an SMP world, process affinity even become stable. (According to Intel, AMD and other people trying to create real world Linux SMP solutions.)
Still lying, huh? Process affinity wasn't "becoming stable" at the end of Linux 2.4; it just wasn't implemented before. And once started, it was implemented pretty fast. But I know why you added the word 'stable' to your rant: rhetoric implying it was there but unstable before.
Anyway, despite not having process affinity, the Linux kernel was running circles around the NT kernel, so you should keep quiet on this subject.
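For what it's worth, the affinity syscalls from that era are exposed on Linux through Python's os module these days; a minimal sketch (the helper name pin_to_cpu is mine):

```python
import os

def pin_to_cpu(cpu: int) -> bool:
    """Pin the calling process to a single CPU, if the platform allows it.

    os.sched_setaffinity wraps the Linux sched_setaffinity(2) syscall;
    on platforms without it we just report failure instead of raising.
    """
    if not hasattr(os, "sched_setaffinity"):
        return False
    try:
        os.sched_setaffinity(0, {cpu})  # 0 means "the calling process"
    except OSError:                     # e.g. cpu number out of range
        return False
    return cpu in os.sched_getaffinity(0)

if __name__ == "__main__":
    print("pinned to CPU 0:", pin_to_cpu(0))
```

Since sched_setaffinity(2) is Linux-specific, the guard keeps the sketch portable rather than pretending the call exists everywhere.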
Or we could talk about hotplug of memory and processors with Linux - which is still not supported as it is with NT and Solaris. (And Linux you even get the fun of reconfiguring if you want to flip processors in downtime even.)
Well, you got me there. I did not even know that type of hardware was supported on the x86 architecture. You are not credible though, because that kind of hotplug exists only in a limited set of configurations, not available to most Linux programmers; that's why, if it exists, it is still not implemented. There are few devs that can afford an E25K, you know? But you chose a good example, forgetting the load of other features the Linux kernel has that make the NT kernel a toy OS. Stability alone is enough to kill any of your arguments. You will give me crap about NT running for months (with 1 service) and about having seen Linux crash; the difference is that
Stability in Linux is the norm, in NT it's the exception.
I mean that a Linux crash is news, while an NT box running for years is news too.
Call NT a Toy OS all you want...