A Comparison of Solaris, Linux, and FreeBSD Kernels
v1x writes "An article at OpenSolaris examines three of the basic subsystems of the kernel and compares their implementations in Solaris 10, Linux 2.6, and FreeBSD 5.3.
From the article: 'Solaris, FreeBSD, and Linux are obviously benefiting from each other. With Solaris going open source, I expect this to continue at a faster rate. My impression is that change is most rapid in Linux. The benefits of this are that new technology has a quick incorporation into the system.'"
Re:Filesystems (Score:1, Informative)
Because Reiser4 is GPL-licensed, unlike FreeBSD.
"I noticed that the article didn't mention LUFS. This alone allows for tremendous possibilities, not least of which is rapid development of filesystems. Do any other systems (besides GNU HURD) have userspace filesystems?"
fuse is in linux 2.6.14.
Re:Filesystems (Score:3, Informative)
FUSE is now merged into the Linux kernel [kerneltrap.org], and will appear in 2.6.14.
kprobes? (Score:3, Informative)
However, the quality of a kernel is not automatically improved by the inclusion of DTrace. Not to disparage Solaris and FreeBSD, but DTrace is primarily for kernel developers and sysadmins. The common user and app developer have little use for either DTrace or kprobes.
Re:The Answer is Clear (Score:4, Informative)
B) Linux has SystemTap [sourceware.org], which goes above and beyond what DTrace is capable of. It is still in heavy development by Red Hat (Intel and IBM also helped start up the effort), and it's already quite a product.
Your post was one big troll; why do you find it amusing to spread random misinformation?
Regards,
Steve
Re:The Answer is Clear (Score:3, Informative)
Also, issues preventing the porting of DTrace to other systems would be because of licensing, not technology.
Re:The Answer is Clear (Score:3, Informative)
Re:Filesystems (Score:3, Informative)
It's not like there would be any problems with releasing the source-code to the kernel extension for it.
After all, GCC is included. So is Ext2 & Ext3 file systems.
Do you mean to tell me those aren't GPL'd either?
Re:I was expecting a beefier article... (Score:5, Informative)
Also, most of the Linux page fault code is architecture independent. As it happens, I just wrote an article explaining Linux page fault handling [linux-mm.org] for the Linux-MM wiki. You can find some details there...
Re:Filesystems (Score:4, Informative)
The FreeBSD and Linux kernels do differ fairly significantly, so it may not even be an easy task porting over the code (again, licensing issues aside). Indeed, it may even be a better idea for the FreeBSD team to perform their own implementation of the ReiserFS4 concepts and algorithms.
Userspace Filesystems: try Plan 9 from Bell Labs (Score:5, Informative)
Have a look at the Plan 9 wiki [bell-labs.com]. You can even run it inside vmware or Xen.
Re:wishfull thinking (Score:1, Informative)
Re:Toy computers need not apply (Score:1, Informative)
And I'm no NT or Microsoft lover.
Re:When will OSI licenses really start working? (Score:3, Informative)
Re:Filesystems (Score:5, Informative)
LUFS hasn't been maintained since 2003, and is therefore almost dead. FUSE (Filesystem in Userspace) [sourceforge.net] is the most promising alternative that is getting merged into the 2.6.14 mainline Linux kernel. It works with several network filesystem protocols like:
SMB for FUSE [tudelft.nl]
SSH Filesystem (SSHFS) [sourceforge.net]
FuseDAV (WebDAV) [0pointer.de]
Linux-FUSE can also provide all applications on the system (even shell utilities) with access to network locations set up under KDE. There's a tutorial [ground.cz] on how to do this, but the last time I tried it, it did not compile.
These are much-needed improvements to the usability of the Linux desktop, because unprivileged (non-root) users shouldn't have to contact their sysadmins every time they need to mount network locations. The KDE approach to providing network access is not complete without Linux-FUSE, because only KDE apps can open/save to network locations set up under KDE. Hopefully the KDE devs will create a GUI for mounting/unmounting FUSE shares so that all apps (GTK, Motif, even shell utilities) can access network files.
Re:wishfull thinking (Score:2, Informative)
Re:Filesystems (Score:5, Informative)
Yes it does. A filesystem is a part of the kernel. The kernel is under the BSD license. The inclusion of Reiserfs code in the kernel would require it to be under the GPL license instead.
So is Ext2 & Ext3 file systems.
The ability to read ext2 file systems is included, but it is not the ext2 file system itself. You cannot create and write to ext2 file systems with FreeBSD.
Re:Filesystems (Score:4, Informative)
Two reasons.
1. It's GPL'd code. Why in the world would a BSD-licensed project include GPL'd code, and in the kernel of all places?
2. UFS2 is better in just about every way. The issue of journaling vs. soft-updates has been rehashed a million times over, and soft-updates are simply better. http://www.usenix.org/publications/library/procee
The one issue journaling had in its favor was fsck times, and UFS2 with its "background fsck" has eliminated that problem. A system based on UFS2 will be up and running far faster than a ReiserFS-journaled system, due to reiserfsck taking much longer to complete.
So let me ask you. For what reason should anyone even consider porting reiserfs to any of the BSDs?
Re:Big lock (Score:5, Informative)
Re:When will OSI licenses really start working? (Score:2, Informative)
Re:The Answer is Clear (Score:5, Informative)
Of course, SystemTap is still in its infancy; perhaps after a couple of rewrites, which seem standard for major components of the Linux kernel, they can make it stable. But today it is not, nor anywhere near, stable. Therefore your statement that "Linux has SystemTap, which goes above and beyond what DTrace is capable of. It is still in heavy development by Red Hat (Intel and IBM also helped start up the effort), and it's already quite a product" is complete rubbish. And one would have to think about it: if it's still under heavy development, that also shows just how far from ready it is.
Of course, the truth really is that DTrace is far more feature-rich than SystemTap is, or will be for a long time. SystemTap's biggest stumbling block is "guru mode", which allows the user to disable any protection the SystemTap engineers have added. SystemTap's language lacks some basic concepts, like variable types such as struct and typeset, making guru mode necessary for far too many scripts, and inescapable when userland probes are created. That's along with the other problems documented in my blogs.
You may try to dismiss me as a troll, but nothing could be further from the truth. I'm stating the facts. I have also contributed to the SystemTap product and commented on code changes. But I refuse to sit quietly as people try to pass SystemTap off as stable or better than DTrace. DTrace is stable, enterprise-production-ready, and more full-featured than SystemTap, even though they have left out features that have to be worked around by the programmer.
Re:How much of Solaris has gone open source? (Score:3, Informative)
The official announcement was last January (nine months ago). Rumors had been out earlier, of course.
"How much has been open sourced? AFAIK, all the have opened sourced is DTrace (a very cool tool/framework), but nay else."
That's what was released in January; as of April, the answer is (per the FAQ at http://opensolaris.org/ [opensolaris.org]):
"Lets see them open up the kernel internals like the thread model..."
Done.
Re:Linux kernel better than Solaris kernel. (Score:3, Informative)
Re:FreeBSD Ports (Score:1, Informative)
I've used FreeBSD, NetBSD, OS X, Debian and Redhat for servers and/or desktops.
In my opinion, FreeBSD Ports > NetBSD pkgsrc > DarwinPorts > APT > RPM.
Package Building
On FreeBSD Ports and Gentoo Portage, you have to figure out and specify runtime dependencies yourself.
??? I don't understand this sentence. Portupgrade handles everything for me just fine.
The "Dusty Deck" Problem
On my home FreeBSD server, with around 330 ports currently installed, used space in the /usr partition is 3.4 GB. Given how cheap hard disk space is, I couldn't care less about saving a couple of megabytes just to remove build dependencies.
Package Uninstallation
I understand this may be true for people playing with their system learning Linux, constantly installing/uninstalling packages (and constantly switching Linux distros), but after a certain while, you get to know which packages you need/want on a server and the question of uninstallation becomes unimportant altogether. I don't see why I would want to uninstall Apache, Python or Subversion. Upgrade, yes, but remove, no.
As a Zope website developer, I've usually been unable to find the versions I need for a given task (Zope is very picky about versions, and usually "old" versions just won't do), whereas FreeBSD is always up-to-date.
did anyone read the articles full of "facts"? (Score:1, Informative)
"Currently Solaris 10 patches are still free for servers without support contracts which is nice for enterprise, but is really important for home users and hobbyists. Of major Linux distributions only Debian and Gentoo has free patches available, but using Debian puts you into the situation that is called "Not a Red Hat"(NRH): Red Hat commands well over 60% of Linux marketplace and that instantly shows in the availability of RPMs, commercial applications, books and other things. "
When was the last time I wished I could go through rpm hell instead of just apt-get install? Idiotic.
Re:wishfull thinking (Score:2, Informative)
That is why NT 3.51/3.53 was more robust than NT 4.0, which moved major parts of the UI code to kernel mode.
Please actually read Inside Windows NT 3.51 by Helen Custer and THEN read Inside Windows NT 4.0 to know the difference.
Re:wishfull thinking (Score:5, Informative)
That is why NT 3.51/3.53 was more robust than NT 4.0, which moved major parts of the UI code to kernel mode.
Please actually read Inside Windows NT 3.51 by Helen Custer and THEN read Inside Windows NT 4.0 to know the difference.
Sorry, hun, read both and even had this discussion with a key kernel developer at Microsoft a few years ago. (1997 in fact, as we were starting to work with Beta 1 of Windows 2000)
NT 4.0 ONLY moved video to a lower ring. It had NOTHING to do with moving the Win32 subsystem INTO NT - that did not happen.
That is why Windows NT Embedded exists, and also why even WinCE is a version of the NT kernel with NO Win32 ties.
Microsoft can STILL produce NT without any Win32, and just throw a *nix subsystem on it if they wanted to, yet still have the robustness of NT. Win32 is just the default interface, because of the common API and the success of Windows applications.
I think you are confusing Ring dropping of the video driver with something completely different.
NT is a client/server kernel... Go look up what that means, please for the love of God.
Win32 is a subsystem, plain and simple. Yes it is a subsystem that has tools to control the NT kernel under it, but that is just because that is the default subsystem interface. You could build these control tools in any subsystem you want to stack on NT. PERIOD.
Yeah, right, NT scales so well (Score:4, Informative)
If it did, with all Microsoft's billions of dollars, how come there's no NT equivalent to this [ibm.com] for Linux, or this [sun.com] for Solaris?
Those two bad boys scale damn near linearly. I know that; I don't have to assume it. I can afford a 7-figure house because I can make those things sing. That Sunfire E25K has 72 CPU slots, and each UltraSPARC-IV chip has 2 full CPUs on each die. The IBM 595 has 64 CPU slots, and when I was at SC-04 in Pittsburgh last year, IBM claimed they were working on an 8-way version of their Power CPU. That's 512 CPUs in an SMP box.
There's nothing like that in the NT world that anyone could buy. And you don't have to sign some NDA that would keep you from getting a job in a lot of places to see the source code for either OS.
Keep your damn toy OS, and your self-admitted assumption that "NT knows how to handle more than 2 processors", because there's no commercially-available system to support that assumption.
Re:When will OSI licenses really start working? (Score:3, Informative)
Re:did anyone read the articles full of "facts"? (Score:3, Informative)
Yum is the equivalent of apt-get.
Don't confuse the two.
Re:Userspace Filesystems: try Plan 9 from Bell Lab (Score:3, Informative)
Re:Filesystems (Score:3, Informative)
Re:wishfull thinking (Score:2, Informative)
I've never heard of such a thing. Neither has Google. You probably mean microkernel, which is what MS was claiming NT was until they got tired of academic microkernel nuts telling them it wasn't (everyone except Tanenbaum, who was busily claiming that Linux, with its unfashionable monolithic design, was obsolete). NT is a monolithic/microkernel hybrid.
Re:Toy computers need not apply (Score:5, Informative)
I might be wrong, but AFAIR, the largest SMP configuration supported by NT is 32 CPUs (or, probably, 16 physical CPUs with Hyperthreading) because of a constraint compiled into the kernel (Windows "Datacenter Server" Edition).
Anyway, even if you could run NT on some 128 CPUs, it would not scale well. If you actually knew a little about the NT implementation, and not just the Microsoft propaganda, you'd possibly figure out that a lot of (theoretically independent) code portions in the NT kernel synchronize on a single mutex-like lock (CRITICAL_SECTION) that is shared between those code portions.
Example:
If you've got 50 independent data structures, you could use 50 mutex locks (one for each data structure) to protect them from becoming corrupted by simultaneous modification from multiple threads. The NT design in this example would be to use only 5 CRITICAL_SECTION locks for the 50 independent data structures (one for every 10 data structures), so one thread modifying a data structure can lock out 9 other threads that could be modifying 9 other data structures.
The lack of fine-grained synchronization in NT makes it scale pretty badly, especially compared to Solaris (which scales so well probably mainly because of very fine-grained and sophisticated synchronization, for example by using RW-locks instead of mutex-like CRITICAL_SECTIONs in situations where this is possible).
Re:Hyperthreading (Score:2, Informative)
However, I don't see what this has to do with hyperthreading. I don't know if there is a hyperthreading-specific feature in Solaris, but there is at least a cache-affinity feature that tries to always dispatch a process to the same CPU. On hyperthreaded CPUs, it would at least try to keep the process on the same hyperthread (because it sees each hyperthread as one CPU); a hyperthreading-specific extension would be to keep the process on the same physical CPU but possibly on a different hyperthread (which does not matter, as two hyperthreads on the same CPU share the same cache memory, and that's what FreeBSD does).
You can even see how cache affinity works, provided you've got an SMP box. Just start a process with no more than a single compute-intensive thread, and you will see one CPU running at 100% for some time while the others are idle (only every few seconds will the process probably jump to another CPU). If there were no cache affinity, you would see all n CPUs running at approximately (100 / n) percent load, because the process would be dispatched to any free CPU regardless of where it had been dispatched previously (and as you are monitoring CPU load in steps of one second or so, and the process' time slice is only about 60 ms, you will see some load on all CPUs where the process has been running in the last second).
(Note: you can monitor per-CPU load using 'mpstat', as most GUI performance meters only show total system load, not load per CPU.)
Re:wishfull thinking (Score:3, Informative)
http://www.cosc.brocku.ca/~cspress/HelloWorld/199
Secondly, just put "nt kernel client server" into almost any search engine; Google is what I tested it on. You can also substitute "client-server" to help weed out articles that are just talking about client-server computing models and not the NT kernel and OS architecture itself.
Basically, NT has a client/server kernel and is a client/server OS architecture as well. It is not monolithic; it serves multiple layers at the kernel level, but does so in a way that performance is not lost at the rate of other layered kernel designs like you would find in Linux.
It is just a different kernel concept that the people building NT came up with to give the best of both worlds: almost monolithic kernel speeds, without the layered overhead. Cutler and his team were no fools and, if people remember, came from the VMS and Unix world, not a Microsoft world.
Take Care...