A Comparison of Solaris, Linux, and FreeBSD Kernels

v1x writes "An article at OpenSolaris examines three of the basic kernel subsystems and compares their implementations in Solaris 10, Linux 2.6, and FreeBSD 5.3. From the article: 'Solaris, FreeBSD, and Linux are obviously benefiting from each other. With Solaris going open source, I expect this to continue at a faster rate. My impression is that change is most rapid in Linux. The benefit of this is that new technology is incorporated into the system quickly.'"
  • How can they? (Score:5, Insightful)

    by WindBourne ( 631190 ) on Sunday October 16, 2005 @10:13PM (#13806629) Journal
    At this point, in order to see the kernel, you have to sign off on MS's shared source license. By doing that, anybody in the OSS world who signs is then at risk of being on the receiving end of an MS lawsuit. It would be just as bad as signing off on a SCO license.
  • by RoadkillBunny ( 662203 ) <roadkillbunny@msn.com> on Sunday October 16, 2005 @10:19PM (#13806658)
    And let the flamefest begin...
  • by Anonymous Coward on Sunday October 16, 2005 @10:47PM (#13806759)
    How about data from this century?
  • by Anonymous Coward on Sunday October 16, 2005 @10:50PM (#13806773)
    Because none of these licences really permit sharing. Some versions of the BSD licence are GPL-compatible (the two- and three-clause versions, or ISC-compatible ones, but not any with the advertising clause), meaning that you could GPL the code (you have to be able to GPL it before you can incorporate it into a system that is already GPLed; hopefully you have heard of the viral nature of the GPL). BSD can't incorporate GPLed code for obvious reasons. CDDL can incorporate BSD code (and probably does) but not GPL code (the licences conflict), and vice versa for the GPL. WRT DTrace, it seems doubtful that the CDDL code will be incorporated; the licence (like most that come out of Sun) is very, very scary.
  • by Jester998 ( 156179 ) on Sunday October 16, 2005 @11:05PM (#13806821) Homepage
    When will apt finally replace /usr/ports in FreeBSD?

    Hopefully never... Ports rocks!
  • by Cheapy ( 809643 ) on Sunday October 16, 2005 @11:05PM (#13806823)
    Well... I'll go out on a limb here and say that it matters because it's a kernel that is used extensively.
  • by AstroDrabb ( 534369 ) on Sunday October 16, 2005 @11:06PM (#13806832)
    Since Solaris has DTrace (and FreeBSD will have it soon as well), wouldn't they automatically be better than the Linux kernel?
    No. First, neither Solaris nor FreeBSD is a microkernel. Second, DTrace is for kernel developers and sysadmins. As a USER, what I really care about is the overall performance of a kernel. This article comparing MySQL performance [newsforge.com] on Solaris 10, Linux 2.4/2.6, FreeBSD, and OpenBSD pretty much sums up what matters to me. I run MySQL and Tomcat on Linux 2.6 because it is simply faster. While Solaris 10 is good, it just wasn't as fast as Linux 2.6 in my tests. Linux 2.6 let me get the most "bang for the buck" out of my servers for MySQL and Tomcat.
  • Re:How can they? (Score:3, Insightful)

    by WindBourne ( 631190 ) on Sunday October 16, 2005 @11:16PM (#13806873) Journal
    A good developer can learn from all systems: sometimes how to design things, and other times how not to. Even from MS, there is a lot to learn.
  • by CyricZ ( 887944 ) on Sunday October 16, 2005 @11:49PM (#13806990)
    The article is purely technical and does not focus on topics (like licensing) that often lead to flamewars and other disagreements. Indeed, it is written in such a way that only the technical issues are discussed, rather than ideological ones.

  • by Will2k_is_here ( 675262 ) on Sunday October 16, 2005 @11:59PM (#13807022)
    You must be new here. This is Slashdot. Who the hell reads TFA? Slashdot comments that do not touch on ideological issues? Now THAT would be huge!
  • by joe_bruin ( 266648 ) on Monday October 17, 2005 @12:11AM (#13807082) Homepage Journal
    Who modded this insightful? As the author states, this is a comparison of three components that are similar across the three OSes (and even those not in great detail). It is NOT an overall comparison of every feature of the three kernels. That throws out the parent poster's points 1, 2, and 5. This was a technical analysis, so licensing is not relevant (parent's point 3). The article is skewed in a Solaris direction, but I would hardly call it propaganda. To summarize the skew:

    1) Solaris has more abstraction for architecture-dependent code than Linux, and is therefore slower but more portable.
    2) There are also more people working on Linux, leading to faster development but not as high quality.

    See, that wasn't so bad. Overall, the author concludes that the three OSs do things quite similarly and stand to benefit from each other in the future.
  • by WindBourne ( 631190 ) on Monday October 17, 2005 @12:19AM (#13807122) Journal
    I remember comparisons about how slow threads were to start in Linux compared to other OSes (although Windows is even worse, I think).

    I used to teach Linux API and kernel internals at various companies (HP, IBM, and Avaya). At that time, a student said something similar, so we decided to do some quick benchmarking (2.4 vs. Windows 2000). It was what I would expect: Linux was very slow at thread creation compared to NT. OTOH, Linux blew away Windows on process creation. The simple explanation is that a Linux process is little more than a thread plus the creation of its own memory space. In contrast, Windows (as of 2000) had optimized thread creation but had not focused on process creation. Where it is at now, well....
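    A crude re-run of that experiment on a modern box might look like the minimal Python sketch below (POSIX-only, and not the original C-level benchmark; interpreter overhead dominates, so only the ratio between the two timings is meaningful):

    # Rough, illustrative micro-benchmark only: compare the cost of creating and
    # reaping N processes (fork + waitpid) with creating and joining N threads.
    # POSIX-only (os.fork); read the results as a ratio, not absolute numbers.
    import os
    import threading
    import time

    N = 500

    def noop():
        pass

    def time_processes(n):
        start = time.perf_counter()
        for _ in range(n):
            pid = os.fork()
            if pid == 0:            # child: exit immediately, no cleanup
                os._exit(0)
            os.waitpid(pid, 0)      # parent: reap the child
        return time.perf_counter() - start

    def time_threads(n):
        start = time.perf_counter()
        for _ in range(n):
            t = threading.Thread(target=noop)
            t.start()
            t.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"{N} fork/wait cycles:   {time_processes(N):.3f}s")
        print(f"{N} thread start/joins: {time_threads(N):.3f}s")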

  • by timmarhy ( 659436 ) on Monday October 17, 2005 @12:29AM (#13807163)
    Depending on the licensing issues (which the parent probably doesn't understand), they could or they couldn't. The GPL is very bad for "infecting" programmers like that. I'm sure it'd be a very grey area to look at GPL'd code in order to write, say, a BSD version.
  • FreeBSD Ports (Score:5, Insightful)

    by 0xB00F ( 655017 ) on Monday October 17, 2005 @12:51AM (#13807236) Homepage Journal

    Wow! Is that all it takes to get a +5 Insightful now? So I guess this will be modded Troll or Flamebait.

    I can never really understand why people consider FreeBSD Ports better than Debian's APT. Perhaps it's only because they look at "package installation" as the only use for these tools, whereas I use them for "package management". Everything comes down to the packagers who make and maintain the packages, and the quality of the tools used to do so. I've used FreeBSD, Gentoo, and finally Debian for servers and desktops. Based on my experience, APT is a more elegant package-management solution than FreeBSD Ports or Gentoo Portage, for the following reasons:

    Package Building

    Although building Debian packages can be a bit overwhelming, especially for newcomers, it really shines once you have installed debhelper, dh-make, dbs, dpatch, and lintian. What's really great about APT is the automatic runtime-dependency resolution prior to packaging the final debs. After the package is built and before it gets packed into a deb, a dependency checker is run over it and figures out the runtime dependencies for you; on FreeBSD Ports and Gentoo Portage, you have to figure out and specify runtime dependencies yourself. (A rough sketch of this kind of dependency scan appears at the end of this comment.)

    The "Dusty Deck" Problem

    When I install a package using ports or emerge, it will also install the dependencies. But most of the time you will essentially be installing from source (and yes, I am aware that Ports and Portage also have pre-built packages). When you do that, Ports and Portage will build and install the build-time dependencies of the package you are installing. That's fine if those build-time dependencies are also needed at run time. But some dependencies are only used at build time and will never be used again until you upgrade the packages that depend on them. You can decide to remove them afterwards, but then when you update the package they will be downloaded, rebuilt, and installed again. Eventually you grow so tired of this cycle that you just leave these build-time-only packages installed, and they keep accumulating on your disk, mostly wasting space.

    This is probably the reason why there are "developer" packages for libraries that contain only the header files and the link libraries. Once you're done with building, you can uninstall the developer package. Try doing that under Ports or Portage. Oh, wait! You can't. The runtime and build-time dependencies are all in one package.

    Package Uninstallation

    Now this is where Ports and Portage, IMHO, really suck. When I uninstall a package from my system I want it gone. apt-get remove --purge and a properly packaged deb will do that for you. Ports and Portage will leave "package cruft" on your system, which can be anything from stale config files to build-time dependency packages. You have to track down and remove those things manually.

    I know these three points can be addressed on both Ports and Portage if the packages are done correctly, and that is precisely where APT has the advantage: it makes building a proper package easier. On Debian, with the tools I mentioned above installed, a package maintainer's life is greatly simplified.

    Ports (and Portage) do rock, but only if all you ever care about is package installation and not package management (package building, installation, and uninstallation).
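    As a rough illustration of the automatic runtime-dependency detection mentioned under "Package Building" above (a sketch of the idea only, not the real dpkg-shlibdeps), one can list the shared libraries a binary links against with ldd and then ask dpkg which installed package owns each one:

    # Sketch only: approximate how runtime dependencies of a binary can be
    # detected automatically on a Debian-like system, by chaining `ldd` and
    # `dpkg -S`. The real Debian tooling is considerably more careful.
    import subprocess
    import sys

    def shared_libs(binary):
        """Return resolved paths of the shared libraries `binary` links against."""
        out = subprocess.run(["ldd", binary], capture_output=True, text=True, check=True)
        paths = []
        for line in out.stdout.splitlines():
            if "=>" in line:
                path = line.split("=>", 1)[1].split("(", 1)[0].strip()
                if path:
                    paths.append(path)
        return paths

    def owning_package(path):
        """Ask dpkg which installed package owns a file, or None if unknown."""
        out = subprocess.run(["dpkg", "-S", path], capture_output=True, text=True)
        if out.returncode != 0 or not out.stdout:
            return None
        return out.stdout.splitlines()[0].split(": ", 1)[0]

    if __name__ == "__main__":
        binary = sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"
        pkgs = {owning_package(p) for p in shared_libs(binary)}
        print(f"{binary} appears to need:", ", ".join(sorted(p for p in pkgs if p)))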

  • by TheNetAvenger ( 624455 ) on Monday October 17, 2005 @01:10AM (#13807299)
    Can one really see how the NT kernel works, with all the stuff stuck together like Windows is?

    Saying that the NT kernel and Windows (the Win32 subsystem) are inseparable would be like saying you can't look at any *nix kernel without all the X Window stuff stuck on top of it...

    NT is NOT what most people consider Windows; it does, however, POWER Windows.

    Also, the NT kernel is not too shabby, considering its design age and the fact that it came from Microsoft. Go pick up Inside Windows NT, or a current edition that deals directly with the NT kernel rather than the Win32 subsystem.

  • by mcrbids ( 148650 ) on Monday October 17, 2005 @01:41AM (#13807408) Journal
    As a USER, what I really care about is overall performance of a kernel.

    OK, so as a USER, why would you care about MySQL? Because as a SYSTEM ADMINISTRATOR, what I really care about is stability and ease of administration. Once performance reaches "good enough", I couldn't give a shit about improved performance. Hardware is cheap; a $5,000 1U MP system can blast through just about anything I'm going to care about.

    But I want it to work today, tomorrow, next week, and next year. I want to reboot when I plan to (doing a kernel update, for instance) and only then. It had better be stable, and it shouldn't be all that noticeable in my day-to-day schedule. Doing updates had better not involve a half-day of compiling, because downtime is !@#!@ expensive.

    But performance? Can you name a SINGLE INSTANCE where you chose your OS based on some performance graph? Unless your technology depends on Windows (I feel for you if it does), any of the *nixes out there (Linux/BSD/Solaris/AIX/OSX/etc.) are "good enough" to do just about anything up to the very high end. Even at the very high end, it's unlikely that choosing the "worst" OS will cost more than switching to the "best" one!

    In the end, it really doesn't matter all that much. Pick what you like and roll with it. Performance is way down the totem pole; pay attention to stability, security, licensing, and compatibility with your specific needs. Then worry about performance!
  • by TheNetAvenger ( 624455 ) on Monday October 17, 2005 @01:52AM (#13807435)
    They wanted to test a real OS, one that can scale to more than 2 processors.

    Wow, a troll on slashdot, how novel. And an intelligent one as well, again how novel... *gag

    Considering NT was scaling across multiple processors before Linux even existed, this is a bit of a rich statement. (Especially since Linux didn't even consider SMP until 1996, by which time it was a mature feature of NT.)

    Considering there is a 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 processors. Just a guess, though.

    Also, considering that the desktop versions of Windows XP support 2 processors as standard, and NT has for years and years, I might suggest it even has an edge on some OSes where SMP is only just becoming realistic. (XP does dual processors with HT, effectively managing 4 virtual CPUs, and this is the desktop edition for the average Joe.)

    Now, should we talk about how it wasn't until the later 2.4 versions of Linux that process affinity in an SMP world even became stable? (According to Intel, AMD, and other people trying to create real-world Linux SMP solutions.) A minimal sketch of the affinity interface appears at the end of this comment.

    Or we could talk about hotplug of memory and processors, which Linux still does not support the way NT and Solaris do. (With Linux you even get the fun of reconfiguring if you want to swap processors during downtime.)

    Call NT a toy OS all you want; if you actually knew a little about the NT architecture, and not just the "Windows" buzz that RUNS ON NT, then you probably wouldn't be laughed at so easily.

    *Cheers.
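
    For reference, the process-affinity interface being argued about is easy to poke at on any current Linux box. Below is a minimal sketch using Python's thin wrappers around the Linux sched_getaffinity(2)/sched_setaffinity(2) calls (Linux-only, and nothing like this existed in the 2.4-era kernels under discussion):

    # Minimal Linux-only sketch: read the CPUs this process may run on, pin it
    # to CPU 0, then restore the original mask. Wraps sched_{get,set}affinity(2).
    import os

    PID = 0  # 0 means "the calling process"

    original = os.sched_getaffinity(PID)
    print("allowed CPUs before:", sorted(original))

    os.sched_setaffinity(PID, {0})       # pin to CPU 0; the scheduler won't migrate us
    print("allowed CPUs after: ", sorted(os.sched_getaffinity(PID)))

    os.sched_setaffinity(PID, original)  # undo: restore the original CPU set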

  • by smash ( 1351 ) on Monday October 17, 2005 @02:09AM (#13807475) Homepage Journal
    OK...

    Try a commercial app on anything but redhat.

    Open source is great and all, but there are specialised apps for which there simply is no viable alternative. And if you're bound to a commercial app, then in most cases you're either going to run redhat or get no support from your app vendor. Yes, you can likely get it to work, but as soon as you run into an application bug, you're screwed: reinstall on redhat or you'll probably get no support.

    Modular mining for example...

    smash (linux user/promoter of 9 years).

  • by civilizedINTENSITY ( 45686 ) on Monday October 17, 2005 @04:05AM (#13807757)
    Well, it's been a couple of years since I took Operating Systems, but I thought a microkernel was "a system kernel that runs itself in a protected memory space, and all drivers and processes in separate memory spaces, allowing for (theoretically) better stability." Monolithic kernels are faster, but less robust. You are saying that "drivers are loaded from it and run in the same context for performance reasons", but that in all other ways it would be a microkernel. Thus, it is perhaps almost a microkernel. It isn't a bad kernel. :-) There is no reason to exaggerate.
  • Re:Filesystems (Score:5, Insightful)

    by SnowZero ( 92219 ) on Monday October 17, 2005 @04:43AM (#13807860)
    I agree that BSD does not need Reiser, but I disagree with the blanket statement that BSD's filesystem is necessarily better. Comparative benchmarking of full implementations has shown that the differences between filesystems are not that large for most workloads, so nobody needs to care as long as the filesystem safeguards its integrity.

    2. UFS2 is better in just about every way. The issue of journaling vs. soft-updates has been rehashed a million times over, and soft-updates are simply better.

    The link you give is (a) written by the people who wrote soft updates, and (b) compares outdated journaling file systems, which are essentially strawmen. Reiser looks great in the papers on its own website too, but that's not really an objective comparison either. If you want to put this to rest, you'll need to run a benchmark against more modern journaling file systems (in particular, those with wandering logs).

    The one issue journaling had in its favor was fsck times, and UFS2 with its "background fsck" has eliminated that problem. A system based on UFS2 will be up and running far faster than a ReiserFS journaled system, due to reiserfsck taking much longer to complete.

    The point of a journaling file system is that you don't need to run fsck (except in the case of a hard drive failure, of course). Mounting a several-hundred GB Reiser partition takes a few seconds, even if it was not cleanly unmounted. How much faster do you want that to be?

    Last I heard from some of the authors, the main drawbacks keeping soft updates from being used elsewhere were that they were more invasive to the VFS than a journaling file system, and that they had extremely bad memory-usage behavior on specifically crafted (but unrealistic) benchmarks. I'd be interested to know whether there has been some progress on these since 2000. Journaling file systems have come a long way in that time.
  • by ookaze ( 227977 ) on Monday October 17, 2005 @10:15AM (#13808922) Homepage
    Considering NT was scaling multi-processors before Linux even existed, this is a bit of a rich statement. (Especially since Linux didn't even consider SMP until 1996, when it was a mature feature of NT by then.)

    I find it rather rich that trolls like you have the guts to call others trolls, using rhetoric to prove your point and hoping nobody will notice.
    Linux 2.0 came in June 1996 with SMP support. So if SMP was not considered in Linux until 1996, you are basically saying that in less than 6 months SMP was implemented in Linux better than in NT, where you say it was a mature feature?!
    I would call that a feat. And I wonder how you can call SMP a mature feature in NT when it was not scaling better than in Linux, where, as you implied, it was implemented in less than 6 months.

    Considering there is 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 processors, just a guess though.

    I never heard of any 128-way SMP version of Windows. I heard of a custom secret implementation that supposedly does that, that's all. But no version of Windows commercially sold actually does that. And the SMP versions of Windows scale very poorly.

    Also considering the desktop versions of WindowsXP support 2 processors standard

    Again, that's false. Home edition does not support 2 processors as standard (HT is neither SMP nor 2-processor support); Pro edition does. Stop lying... oh, you can't, without destroying your point.

    and NT has for years and years, I might suggest that it even has an edge on some OSes that SMP is just becoming realistic. (XP does Dual Processors with HT, effectively managing 4 virtual CPUs and this is the desktop edition for the average Joe.)

    Continuing your lies, huh? HT is not SMP; stop your nonsense. SMP was realistic in Linux way before NT, despite SMP being implemented in NT first: that speaks volumes about the NT OS, and tends to prove the GP's point. Again, WinXP Pro supports your 4 virtual CPUs, not Home. The distinction is important if you want to make the comparison with Linux, because the standard desktop editions of Linux distros come with SMP support. For Windows, the standard desktop edition is XP Home, which has no such support. It could, but it doesn't.

    Now should we talk about how it hasn't been until later versions 2.4 of Linux that in an SMP world, process affinity even become stable. (According to Intel, AMD and other people trying to create real world Linux SMP solutions.)

    Still lying, huh? Process affinity wasn't "becoming stable" at the end of Linux 2.4; it simply was not implemented before, and once it was, it was implemented pretty fast. But I know why you added the word "stable" to your rant: rhetoric implying it was there but not stable before.
    Anyway, even without process affinity, the Linux kernel was running circles around the NT kernel, so you should keep this subject hidden.

    Or we could talk about hotplug of memory and processors with Linux - which is still not supported as it is with NT and Solaris. (And Linux you even get the fun of reconfiguring if you want to flip processors in downtime even.)

    Well, you got me there. I did not even know that this type of hardware was supported on the x86 architecture. You are not credible, though, because these kinds of hotplug exist only in a limited set of configurations that are not available to most Linux programmers; that's why, where they exist, they are still not implemented. Only a small number of devs can afford an E25K, you know? But you chose a good example, while forgetting the load of other features the Linux kernel has that make the NT kernel a toy OS. Stability alone is enough to kill any of your arguments. You will give me crap about NT running for months (with 1 service) and about having seen Linux crash; the difference is that:
    Stability in Linux is the norm; in NT it's the exception.
    I mean that a Linux crash is news, while NT running for years is news too.

    Call NT a Toy OS all you want ...
