A Comparison of Solaris, Linux, and FreeBSD Kernels
v1x writes "An article at OpenSolaris examines three of the basic subsystems of the kernel and compares implementation between Solaris 10, Linux 2.6, and FreeBSD 5.3.
From the article: 'Solaris, FreeBSD, and Linux are obviously benefiting from each other. With Solaris going open source, I expect this to continue at a faster rate. My impression is that change is most rapid in Linux. The benefits of this are that new technology has a quick incorporation into the system.'"
wishfull thinking (Score:5, Interesting)
*sigh*
Re:wishfull thinking (Score:2, Interesting)
Re:wishfull thinking (Score:2, Funny)
***Microsoft Trade Secrets Protection Act***
The person who wrote the above post has been dealt with for sharing Microsoft (R) Windows (R) proprietary source code and is in violation of the End-User License Agreement as well as Microsoft Revised Penal Code section 192.168.0.1. The person has been terminated; please disregard the above message.
Thank you,
Microsoft Support Team
Re:wishfull thinking (Score:2, Informative)
Re:wishfull thinking (Score:4, Insightful)
Claiming that the NT kernel and Windows (the Win32 subsystem) are one and the same would be like lumping any *nix kernel together with all the X Windows stuff stacked on top of it...
NT is NOT what most people consider Windows, however it does POWER windows.
Also the NT kernel is not too shabby, considering its design age, and it came from Microsoft. Go pick up Inside NT or a current version that deals directly with the NT kernel and not the Win32 subsystem.
Re:wishfull thinking (Score:5, Informative)
That is why NT 3.51/3.53 was more robust than NT 4.0, which moved major parts of the UI code to kernel mode.
Please actually read Inside Windows NT 3.51 by Helen Custer and THEN read Inside Windows NT 4.0 to know the difference.
Sorry, hun, read both and even had this discussion with a key kernel developer at Microsoft a few years ago. (1997 in fact, as we were starting to work with Beta 1 of Windows 2000)
NT 4.0 ONLY moved video to a lower ring. It had NOTHING to do with moving the Win32 subsystem INTO NT - that did not happen.
That is why Windows NT Embedded exists, and also why even WinCE is a version of the NT kernel with NO Win32 ties.
Microsoft can STILL produce NT without any Win32, and just throw a *nix subsystem on it if they wanted to, yet have the robustness of NT. Win32 is just the default interface because of the common API and the success of Windows applications.
I think you are confusing Ring dropping of the video driver with something completely different.
NT is a client/server kernel... Go look up what that means, please for the love of God.
Win32 is a subsystem, plain and simple. Yes it is a subsystem that has tools to control the NT kernel under it, but that is just because that is the default subsystem interface. You could build these control tools in any subsystem you want to stack on NT. PERIOD.
Re:wishfull thinking (Score:3, Informative)
http://www.cosc.brocku.ca/~cspress/HelloWorld/1999
Secondly, just put in: nt kernel client server
Into almost any search engine, Google is what I tested it on. You can also substitute client-server to help weed out some of the articles just talking about client server computing models and not the NT kernel and OS architecture itself.
Basically, NT has a c
Re:wishfull thinking (Score:2, Insightful)
How can they? (Score:5, Insightful)
Re:How can they? (Score:2)
At this point, in order to see the kernel, you have to sign off on MS's shared source license.
I fail to see why open source programmers would want to use or look at the NT/Windows kernel. I figure Solaris, BSD and Linux will all borrow from each other and Microsoft will quietly borrow from open source to get a decent scheduler.
Re:How can they? (Score:3, Insightful)
Re:Toy computers need not apply (Score:3, Insightful)
Wow, a troll on slashdot, how novel. And an intelligent one as well, again how novel... *gag
Considering NT was scaling multi-processors before Linux even existed, this is a bit of a rich statement. (Especially since Linux didn't even consider SMP until 1996, when it was a mature feature of NT by then.)
Considering there is 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 proc
Yeah, right, NT scales so well (Score:4, Informative)
If it did, with all Microsoft's billions of dollars, how come there's no NT equivalent to this [ibm.com] for Linux, or this [sun.com] for Solaris?
Those two bad boys scale damn near linearly. I know that; I don't have to assume it. I can afford a 7-figure house because I can make those things sing. That Sunfire E25K has 72 CPU slots, and each UltraSPARC-IV chip has 2 full CPUs on each die. The IBM 595 has 64 CPU slots, and when I was at SC-04 in Pittsburgh last year, IBM claimed they were working on an 8-way version of their Power CPU. That's 512 CPUs in an SMP box.
There's nothing like that in the NT world that anyone could buy. And you don't have to sign some NDA that would keep you from getting a job in a lot of places to see the source code for either OS.
Keep your damn toy OS, and your self-admitted assumption that "NT knows how to handle more than 2 processors", because there's no commercially-available system to support that assumption.
Re:Toy computers need not apply (Score:5, Informative)
I might be wrong, but AFAIR, the largest SMP configuration supported by NT is 32 CPUs (or, probably, 16 Hyperthreading-CPUs) because of a constraint compiled into the kernel (Windows "Datacenter Server" Edition).
Anyway, even if you could run NT on some 128 CPUs, it would not scale well. If you actually knew a little about the NT implementation and not just the "microsoft propaganda", you'd possibly figure out that a lot of (theoretically independent) code portions in the NT kernel synchronize on a single mutex-like synchronization lock (CRITICAL_SECTION) that is shared between those code portions.
Example:
If you've got 50 independent data structures, you could use 50 mutex locks (one for each data structure) to protect them from becoming corrupted due to simultaneous modification by multiple threads. The NT design in this example would be to use only 5 CRITICAL_SECTION locks for the 50 independent data structures (one for every 10 data structures), so one thread modifying a data structure will potentially lock out 9 other threads who could be modifying 9 other data structures.
The lack of fine-grained synchronization in NT makes it scale pretty badly, especially compared to Solaris (which scales so well probably mainly because of very fine-grained and sophisticated synchronization, for example by using RW-locks instead of mutex-like CRITICAL_SECTIONs in situations where this is possible).
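The 50-structures example above can be sketched in a few lines of Python (the real kernels are C; this just models the lock-to-data mapping, and all names here are hypothetical): with 5 locks shared across 50 structures, two threads touching unrelated structures can still collide on the same lock, while one lock per structure avoids that entirely.

```python
import threading

NUM_STRUCTS = 50

# Coarse-grained: 5 locks shared by 50 structures (the NT-style
# mapping described above: structure i -> lock i % 5).
coarse_locks = [threading.Lock() for _ in range(5)]

# Fine-grained: one lock per structure (the Solaris-style approach).
fine_locks = [threading.Lock() for _ in range(NUM_STRUCTS)]

def coarse_lock_for(i: int) -> threading.Lock:
    return coarse_locks[i % len(coarse_locks)]

def fine_lock_for(i: int) -> threading.Lock:
    return fine_locks[i]

# Structures 3 and 13 are independent, yet share a coarse lock:
assert coarse_lock_for(3) is coarse_lock_for(13)   # false contention
assert fine_lock_for(3) is not fine_lock_for(13)   # no contention
```

Under the coarse scheme, a thread holding structure 3's lock stalls any thread that wants structure 13, even though the data is unrelated; that is exactly the scalability cost being described.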
Re:Toy computers need not apply (Score:5, Insightful)
I find it rather rich that trolls like you have the guts to call others trolls, using rhetoric to prove your point and hoping nobody will notice.
Linux 2.0 came in June 1996 with SMP support. So if SMP was not considered in Linux until 1996, you are basically saying that in less than 6 months, SMP was implemented better in Linux than in NT, where you say it was a mature feature?!
I would call that a feat. And I wonder how you can call SMP a mature feature in NT, when it was not scaling better than in Linux, where it was implemented in less than 6 months, as you implied.
Considering there is 128-way SMP version of Windows (running on NT) available, I would assume NT knows how to handle more than 2 processors, just a guess though.
I never heard of any 128-way SMP version of Windows. I heard of a custom secret implementation that supposedly does that, that's all. But no version of Windows commercially sold actually does that. And SMP versions of Windows scale very poorly.
Also considering the desktop versions of WindowsXP support 2 processors standard
Again, that's false. Home edition does not support 2 processors standard (HT is neither SMP nor 2-processor support); Pro edition does. Stop lying.
and NT has for years and years, I might suggest that it even has an edge on some OSes that SMP is just becoming realistic. (XP does Dual Processors with HT, effectively managing 4 virtual CPUs and this is the desktop edition for the average Joe.)
Continuing your lies, huh? HT is not SMP; stop your nonsense. SMP was realistic in Linux way before NT, despite SMP being implemented in NT first: that speaks volumes about the NT OS, and tends to prove the GP's point. Again, WinXP Pro supports your 4 virtual CPUs, not Home. The distinction is important if you want to make a Linux comparison, because standard desktop editions of Linux distros come with SMP support. For Windows, the standard desktop edition is XP Home, which has no such support. It could, but it doesn't.
Now should we talk about how it wasn't until later versions of Linux 2.4 that, in an SMP world, process affinity even became stable? (According to Intel, AMD and other people trying to create real-world Linux SMP solutions.)
Still lying, huh? Process affinity wasn't becoming stable at the end of Linux 2.4; it just was not implemented before. And it ended up implemented pretty fast. But I know why you added the word 'stable' to your rant: rhetoric implying it was there but not stable before.
Anyway, despite not having process affinity, Linux kernel was running circles around NT kernel, so you should keep this subject hidden.
Or we could talk about hotplug of memory and processors with Linux - which is still not supported as it is with NT and Solaris. (And on Linux you even get the fun of reconfiguring if you want to swap processors during downtime.)
Well, you got me there. I did not even know that this type of hardware was supported on the x86 architecture. You are not credible though, because these kinds of hotplug exist only in a limited set of configurations, not available to most Linux programmers; that's why, if they exist, they are still not implemented. Only a small number of devs can afford an E25K, you know? But you chose a good example, forgetting the load of other features that the Linux kernel has that make the NT kernel a toy OS. Stability is enough to kill any of your arguments though. You will give me crap about NT running for months (with 1 service) and about having seen Linux crash; the difference is that
Stability in Linux is the norm, in NT it's the exception.
I mean that a Linux crash is news, while a NT running for years is news too.
Call NT a Toy OS all you w
Flavourful. (Score:5, Funny)
One is crunchy, the other's chewy, and the last is malt flavoured.
When will OSI licenses really start working? (Score:3, Interesting)
Re:When will OSI licenses really start working? (Score:5, Interesting)
BTW, you mention Solaris's network stack. For Solaris 9.9.x, just before the release, Sun did an internal test comparing Solaris with all the major OSs. It turned out that Solaris lost big to Linux 2.6 when it came to networking. So Sun delayed it so that the internal team could re-design it to beat Linux's networking. According to one of my friends there, they believe that they have done so. But he also said that they borrowed ideas from Linux and BSD. So yes, the cross-pollination is occurring.
Re:When will OSI licenses really start working? (Score:5, Interesting)
The system has been working very well. Plus there are obvious connections. FreeBSD (and I assume Solaris) can both read ext2 (and I assume ext3). Both have DevFS (which Linux has had, at least in some form; I don't know how close/far apart they were). So code which can be easily adapted does get moved. I would be VERY surprised if there weren't at least a handful of drivers for FreeBSD that said something along the lines of "Based on the Linux driver by Mr. Reverse Engineer", and I'd imagine there are drivers that go the other way too (I'm not nearly as familiar with FreeBSD as I am with Linux).
Re:When will OSI licenses really start working? (Score:3, Insightful)
I used to teach Linux API and kernel internals at various companies (HP, IBM, and Avaya). At that time, a student said something similar, so we decided to do some quick benchmarking (2.4 vs. 2000). It was what I would expect; Linux was very slow on threads compared to NT. OTOH, Linux blew away Windows on process creation. The simple answer to this was that Linux's pro
Re:When will OSI licenses really start working? (Score:3, Informative)
In terms of real work, for the time frame you are refering to, it is interesting that Oracle runs 25% [oracle.com]
Re:When will OSI licenses really start working? (Score:3, Interesting)
I'm fairly sure Solaris was the first to have an automatically-managed /dev. Solaris has had its /dev and /devices arrangement (in which everything in /dev is a symlink to something in /devices, there is no such thing as MAKEDEV anymore, and everything is automatically maintained) since at least Solaris 2.4, and quite possibly sin
Re:When will OSI licenses really start working? (Score:3, Interesting)
That was about 2-3 years ago. It all seems to work smoothly these days, I think they just patched some work-arounds (guess the maj
Re:When will OSI licenses really start working? (Score:2)
Re:When will OSI licenses really start working? (Score:3, Informative)
Re:When will OSI licenses really start working? (Score:2)
Re:When will OSI licenses really start working? (Score:2)
Re:When will OSI licenses really start working? (Score:2)
Re:When will OSI licenses really start working? (Score:3, Insightful)
Re:When will OSI licenses really start working? (Score:2)
And for the record, all three flat out rock.
Re:When will OSI licenses really start working? (Score:2, Insightful)
Re:When will OSI licenses really start working? (Score:3, Insightful)
Hopefully never... Ports rocks!
FreeBSD Ports (Score:5, Insightful)
Wow! Is that all it takes to get a +5 Insightful now? So I guess this will be modded Troll or Flamebait.
I can never really understand why FreeBSD ports is better than Debian's APT. Perhaps it's only because they look at "package installation" as the only use for these tools, whereas I use these tools for "package management". Everything comes down to the packagers who make and maintain the packages and the quality of the tools used to make and maintain the packages. I've used FreeBSD, Gentoo, and finally Debian for servers and desktops. Based on my experience APT is a more elegant solution to package management compared to FreeBSD Ports and Gentoo Portage for the following reasons:
Package Building
Although building Debian packages can be a bit overwhelming especially for newcomers, it really shines especially if you have installed debhelper, dh-make, dbs, dpatch, and lintian. What's really great about APT is the automatic runtime dependency resolution prior to packaging the final debs. After building the package and before it gets packed into a deb, a dependency checker is run through it and it will automatically figure out the runtime dependencies for you. On FreeBSD Ports and Gentoo Portage, you have to figure out and specify runtime dependencies yourself.
The "Dusty Deck" Problem
When I install a package using ports or emerge, it will also install the dependencies. But most of the time you will essentially be installing from source (and yes, I am aware that Ports and Portage also have pre-built packages). When you do that, Ports and Portage will build and install the build-time dependencies of the package you are installing. Now, that's fine if those build-time dependencies are also needed at run-time. But some dependencies are only used at build-time and will never be used again until you upgrade the packages that depend on them. You can decide to remove them after build time, but then when you update the package they will be downloaded, rebuilt, and installed again. You eventually grow so tired of this cycle that you just leave these build-time-only packages, and they continue to accumulate on your disk, mostly wasting space.
This is probably the reason why there are "developer" packages for libraries that contain only the header files and the link libraries. Once you're done with building, you can uninstall the developer package. Try doing that under Ports or Portage. Oh, wait! You can't. The runtime and build-time dependencies are all in one package.
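The "dusty deck" accumulation can be modeled with a small sketch (the package names and metadata here are hypothetical): given each installed package's runtime dependencies, the installed packages that nothing needs at runtime are exactly the removable build-time leftovers.

```python
# Hypothetical package metadata: runtime vs. build-time dependencies.
packages = {
    "app":    {"runtime": {"libfoo"}, "build": {"libfoo-headers", "gcc"}},
    "libfoo": {"runtime": set(),      "build": {"gcc"}},
}
installed = {"app", "libfoo", "libfoo-headers", "gcc"}

def build_only_cruft(packages, installed):
    """Installed packages that no package (nor the user) needs at runtime."""
    runtime_needed = set(packages) | {
        dep for meta in packages.values() for dep in meta["runtime"]
    }
    return installed - runtime_needed

print(sorted(build_only_cruft(packages, installed)))
# ['gcc', 'libfoo-headers'] -- safe to remove until the next rebuild
```

This is essentially the computation a "remove build-time deps" tool would run after every upgrade, which is the cycle being complained about above.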
Package Uninstallation
Now this is where Ports and Portage, IMHO, really suck. When I uninstall a package from my system, I want it gone. apt-get remove --purge and a properly packaged deb will do that for you. Ports and Portage will leave "package cruft" on your system. "Package cruft" can be anything from stale config files to build-time dependency packages. You will have to track and remove those things manually.
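The remove-vs-purge distinction can be sketched like this (file paths and manifest layout hypothetical): a plain remove deletes the package's files but keeps its config files, while a purge deletes both, which is what `apt-get remove --purge` does.

```python
# Hypothetical manifest for one installed package.
manifest = {
    "files":     {"/usr/bin/foo", "/usr/lib/libfoo.so"},
    "conffiles": {"/etc/foo.conf"},
}
fs = set(manifest["files"]) | set(manifest["conffiles"])

def remove(fs, manifest, purge=False):
    """Delete the package's files; with purge=True also delete configs."""
    doomed = set(manifest["files"])
    if purge:
        doomed |= set(manifest["conffiles"])
    return fs - doomed

print(remove(fs, manifest))              # {'/etc/foo.conf'} survives (cruft)
print(remove(fs, manifest, purge=True))  # set() -- nothing left behind
```

The leftover `/etc/foo.conf` in the non-purge case is precisely the "package cruft" described above.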
I know these three points can be resolved on both Ports and Portage if the packages are done correctly. This is where APT has an advantage over Ports and Portage: being able to make a proper package. On Debian, with the tools I mentioned above installed, a package maintainer's life is greatly simplified.
Ports (and Portage) do rock, but only if all you ever care about is package installation and not package management (package building, installation, and uninstallation).
Re:FreeBSD Ports (Score:3, Interesting)
Now this is where Ports and Portage, IMHO, really suck. When I uninstall a package from my system I want it gone. apt-get remove --purge and a properly packaged deb will do that for you. Ports and Portage will leave "package cruft" on your system.
I don't know about portage, but ports don't leave cruft behind. When you remove a port, you effectively remove a package. FreeB
Re:FreeBSD Ports (Score:3, Interesting)
So [package uninstaller] on [OS] completely removes that package. Golf clap. A properly packaged (your words) application will work that way on any OS, not just Debian.
I love Debian, and have used it for years. It's great. However, the real admin nightmare comes when you decide you want some non-standard feature supported systemwide. On FreeBSD, for example, if I want LD
Re:When will OSI licenses really start working? (Score:3, Interesting)
It is happening. Solaris is the new kid on the block, it will take time for the code to be grokked and made use of. Or vice versa.
Re:When will OSI licenses really start working? (Score:2)
-matthew
Re:When will OSI licenses really start working? (Score:2, Informative)
Filesystems (Score:2, Interesting)
Does anybody know why ReiserFS 3 hasn't been ported to any of the BSDs yet? ReiserFS 4 looks as though it's pretty revolutionary, if distributions settle on that as a default, I can see that giving quite an advantage to Linux compared with the other kernels.
I noticed that the article didn't mention LUFS [sourceforge.net]. This alone allows for tremendous possibilities, not least of which is rapid development of filesystems. Do any other systems (besides GNU HURD) have userspace filesystems?
Re:Filesystems (Score:3, Informative)
FUSE is now merged into the Linux kernel [kerneltrap.org], and will appear in 2.6.14.
Re:Filesystems (Score:4, Informative)
The FreeBSD and Linux kernels do differ fairly significantly, so it may not even be an easy task porting over the code (again, licensing issues aside). Indeed, it may even be a better idea for the FreeBSD team to perform their own implementation of the ReiserFS4 concepts and algorithms.
Re:Filesystems (Score:5, Interesting)
Clearly you don't know about Reiser (no offense, it's just that that question shows a stark lack of understanding with regards to why Reiser is interesting in the first place).
Reiser solves one of the oldest problems facing the old Unix-style filesystem: the adoption of btree-order performance directory lookups (using Reiser's "dancing trees") without significant loss in other areas of filesystem performance, e.g. directory entry creation and deletion, etc. This is something which was long thought impossible.
This led to further development, since the major reason to avoid creating thousands of temporary files has always been directory lookup times. So, now the question is: how far do you go with files? Reiser 4 answers that question by adding significant semantics to files which were not practical with slower filesystems (again, keep in mind that when I say "slower" I refer primarily to the performance bottleneck surrounding large directories).
The problem with Reiser is that it is Reiser, and none of the existing filesystems can match its performance in these areas. That means that if you write an application that relies on Reiser's performance, it really RELIES on Reiser, and cannot perform well under normal filesystems without significant engineering (e.g. writing special-case code for Reiser and non-Reiser filesystems). In some cases (e.g. databases) this might be worthwhile, but in the case of more mundane applications, having filesystem-specific code is not always viable.
For more information, see the Reiser4 site: http://www.namesys.com/v4/v4.html [namesys.com]
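The directory-lookup point above can be illustrated with a toy sketch (this is not Reiser's actual dancing-tree structure, just the ordering idea): a sorted directory lets you find an entry in O(log n) with binary search, where a classic unsorted directory needs an O(n) scan of every entry.

```python
import bisect

# A toy "directory" of 10,000 entries, kept in sorted order.
entries = sorted(f"file{i:05d}" for i in range(10_000))

def lookup_linear(name):
    """Classic unsorted-directory lookup: scan every entry, O(n)."""
    for e in entries:
        if e == name:
            return True
    return False

def lookup_tree_order(name):
    """Sorted/tree-ordered lookup via binary search, O(log n)."""
    i = bisect.bisect_left(entries, name)
    return i < len(entries) and entries[i] == name

assert lookup_linear("file09999") and lookup_tree_order("file09999")
assert lookup_tree_order("no-such-file") is False
```

With thousands of temporary files in one directory, the O(n) scan is what made old Unix filesystems crawl, and the tree-ordered lookup is what Reiser (and other B-tree filesystems) made practical.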
Nothing wrong with ReiserFS itself. (Score:2)
You cannot take code from two radically different projects, stick them together as is being proposed by others, and then have it magically work. You could run into issues with the FreeBSD file buffering subsystem, for i
Professionalism of the Solaris, FreeBSD developers (Score:2)
Indeed, the instance of the KOffice developer who went around publically insulting [slashdot.org] a long time KDE and KOffice user is a perfect example of the sort of unprofess
Re:Professionalism of the Solaris, FreeBSD develop (Score:2)
Regardless, it was a very unprofessional act. At least there are others in the open source community who do set a good example for other developers.
Re:STFU CYRIC! (Score:4, Funny)
Re:Filesystems (Score:2)
Layering your own FS on top of Reiser works pretty well. Breaking up large directories into smaller sub-directories of files improves access times a lot. Squid and other programs
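The break-up trick mentioned above (Squid-style hashing of a flat namespace into nested subdirectories) can be sketched as follows; the layout parameters are hypothetical:

```python
import hashlib

def shard_path(name: str, levels: int = 2, width: int = 2) -> str:
    """Map a flat object name to a nested path like 'ab/cd/<name>',
    so no single directory grows into the millions of entries."""
    h = hashlib.md5(name.encode()).hexdigest()
    parts = [h[i * width:(i + 1) * width] for i in range(levels)]
    return "/".join(parts + [name])

print(shard_path("cached-object-1"))  # e.g. '3f/9a/cached-object-1'
```

With two levels of 256 buckets each, a million objects land in directories of roughly 15 entries apiece, which keeps lookups fast even on filesystems with linear directory scans.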
Re:Filesystems (Score:3, Interesting)
I'd say the real problem is often Hans Reiser. He's usually got good ideas, but he tends to quickly get on people's bad side with his argumentative style. What's strange is that this is usually limited to the first 10-20 posts in some big discussion, then he calms down and works more directly and constructively with developers. If you look at the past few Reiser discussions on lkml, they seem to follow this pattern.
Re:Filesystems (Score:2)
I doubt Mr. Reiser would have much better luck with the kernel devs on any of the BSDs.
Re:Filesystems (Score:2)
Re:Filesystems (Score:2)
Userspace Filesystems: try Plan 9 from Bell Labs (Score:5, Informative)
Have a look at the Plan 9 wiki [bell-labs.com]. You can even run it inside vmware or Xen.
Re:Userspace Filesystems: try Plan 9 from Bell Lab (Score:3, Informative)
Re:Filesystems (Score:5, Informative)
LUFS hasn't been maintained since 2003, and is therefore almost dead. FUSE (Filesystem in Userspace) [sourceforge.net] is the most promising alternative that is getting merged into the 2.6.14 mainline Linux kernel. It works with several network filesystem protocols like:
SMB for FUSE [tudelft.nl]
SSH Filesystem (SSHFS) [sourceforge.net]
FuseDAV (WebDAV) [0pointer.de]
Linux-FUSE can also provide all applications on the system (even shell utilities) with access to network locations set up under KDE. There's a tutorial [ground.cz] for how to do this, but last time I tried it did not compile
These are much-needed improvements to the usability of the Linux desktop, because unprivileged (non-root) users shouldn't have to contact their sys admins every time they need to mount network locations. The KDE approach to providing network access is not complete without Linux-FUSE, because only KDE apps can open/save to network locations set up under KDE. Hopefully the KDE devs will create a GUI for mounting/unmounting FUSE shares so that all apps (GTK, Motif, even shell utilities) can access network files.
Re:Filesystems (Score:4, Informative)
Two reasons.
1. It's GPL'd code. Why in the world would a BSD-licensed project include GPL'd code, and in the kernel of all places?
2. UFS2 is better in just about every way. The issue of journaling vs. soft-updates has been rehashed a million times over, and soft-updates are simply better. http://www.usenix.org/publications/library/procee
The one issue journaling had in its favor was fsck times, and UFS2 with its "background fsck" has eliminated that problem. A system based on UFS2 will be up and running far faster than a ReiserFS journaled system, due to reiserfsck taking much longer to complete.
So let me ask you. For what reason should anyone even consider porting reiserfs to any of the BSDs?
Re:Filesystems (Score:5, Insightful)
2. UFS2 is better in just about every way. The issue of journaling vs. soft-updates has been rehashed a million times over, and soft-updates are simply better.
The link you give is (a) written by the people that wrote soft updates, and (b) compares outdated journaling file systems, which are essentially strawmen. Reiser looks great in the papers on their own website too, but that's not really an objective comparison either. If you want to put this to rest, you'll need to run a benchmark with more modern journaling file systems (in particular, those with wandering logs).
The one issue journaling had in it's favor was fsck times, and UFS2 with it's "background fsck" has eliminated that problem. A system based on UFS2 will be up-and-running far faster than a ReiserFS journaled system, due to reiserfsck taking much longer to complete.
The point of a journaling file system is that you don't need to run fsck (except in the case of a hard drive failure, of course). Mounting a several-hundred GB Reiser partition takes a few seconds, even if it was not cleanly unmounted. How much faster do you want that to be?
Last I heard from some of the authors, the main drawbacks keeping softupdates from being used elsewhere were that it was more invasive to the VFS than a journaling file system, and had extremely bad memory usage behavior for specifically crafted (but unrealistic) benchmarks. I'd be interested in knowing if there has been some progress on these since 2000. Journaling file systems have come a long way in that time.
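The no-fsck property of journaling discussed above can be sketched minimally in Python (the record format is hypothetical, and real journals log disk blocks, not key-value pairs): every update is appended to a log before being applied, so after a crash you replay only the committed log records instead of scanning the whole disk.

```python
def replay(journal):
    """Rebuild consistent state from a write-ahead log after a 'crash':
    only records followed by their 'commit' marker are applied."""
    state, pending = {}, []
    for rec in journal:
        if rec == "commit":
            for key, value in pending:
                state[key] = value
            pending = []
        else:
            pending.append(rec)   # uncommitted records are discarded
    return state

# Crash happened mid-transaction: the last write has no commit marker.
journal = [("a", 1), ("b", 2), "commit", ("a", 99)]
print(replay(journal))  # {'a': 1, 'b': 2} -- the torn update is dropped
```

Replay cost is proportional to the log, not the filesystem, which is why mounting a dirty journaled partition takes seconds regardless of its size.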
Re:Filesystems (Score:3, Informative)
It's not like there would be any problems with releasing the source-code to the kernel extension for it.
After all, GCC is included. So are the ext2 & ext3 file systems.
Do you mean to tell me those aren't GPL'd either?
Re:Filesystems (Score:2)
Also, GCC isn't part of the FreeBSD kernel, you know.
Re:Filesystems (Score:2)
Re:Filesystems (Score:2)
It's not in the kernel modules portion of the FreeBSD CVS repository:
http://www.freebsd.org/cgi/cvsweb.cgi/src/lkm/ [freebsd.org]
Nor is such code with the other filesystem code:
http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/fs/ [freebsd.org]
Re:Filesystems (Score:5, Informative)
Yes it does. A filesystem is a part of the kernel. The kernel is under the BSD license. The inclusion of Reiserfs code in the kernel would require it to be under the GPL license instead.
So is Ext2 & Ext3 file systems.
The ability to read ext2 file systems is included, but it is not the ext2 file system itself. You cannot create and write to ext2 file systems with FreeBSD.
Re:Filesystems (Score:3, Informative)
I was expecting a beefier article... (Score:5, Funny)
Re:I was expecting a beefier article... (Score:5, Funny)
Re:I was expecting a beefier article... (Score:2)
Re:I was expecting a beefier article... (Score:5, Informative)
Also, most of the Linux page fault code is architecture independent. As it happens, I just wrote an article explaining Linux page fault handling [linux-mm.org] for the Linux-MM wiki. You can find some details there...
Re:I was expecting a beefier article... (Score:2)
Linux kernel better than Solaris kernel. (Score:2, Interesting)
http://www.ultralinux.org/faq.html#q_1_15 [ultralinux.org]
50MHz processors?!?!?! Linux 2.0.27?!?!?! (Score:3, Insightful)
Re:Linux kernel better than Solaris kernel. (Score:2)
Re:Linux kernel better than Solaris kernel. (Score:3, Informative)
Hyperthreading (Score:5, Interesting)
For hyperthreaded CPUs, FreeBSD has a mechanism to help keep threads on the same CPU node (though possibly a different hyperthread). Solaris has a similar mechanism, but it is under control of the user and application, and is not restricted to hyperthreads (called "processor sets" in Solaris and "processor groups" in FreeBSD).
I am positive that the 2.6 kernel understands hyperthreading and does something similar to FreeBSD. Why wasn't that mentioned? Did the author not know that?
Overall, though, it was interesting. I'd read it as a longer series, if they had one. This is an area that I'm interested in. I read kernel-traffic, and subscribe to LWN (you should too!) almost entirely to read the kernel page. I've learned so much about operating systems and computers from reading about the improvements in the Linux kernel, why the old version wasn't good enough, etc. While I no longer use Linux since I got my Mac (OS X fills all my needs), I continue to learn a large amount about computer architecture and operating system concepts from it.
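For comparison with the processor sets/groups described above, Linux 2.6 exposes per-process CPU affinity via a syscall; a minimal sketch (the affinity call is Linux-only, so this is guarded to degrade to a no-op elsewhere):

```python
import os

def pin_self_to_cpu(cpu: int) -> set:
    """Pin the calling process to one CPU, roughly the manual version
    of what Solaris processor sets and FreeBSD's hyperthread grouping
    arrange. os.sched_setaffinity exists only on Linux, so fall back
    to a no-op on other platforms."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {cpu})   # pid 0 == the calling process
        return os.sched_getaffinity(0)
    return {cpu}                         # unsupported platform: no-op

print(pin_self_to_cpu(0))  # e.g. {0}
```

Keeping a thread on one CPU (or one hyperthread pair) preserves its cache working set, which is the whole point of the mechanisms the article describes.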
Big lock (Score:2)
Re:Big lock (Score:2, Interesting)
http://www.freebsd.org/smp/ [freebsd.org]
Re:Big lock (Score:5, Informative)
Serious things were missed.... (Score:2, Interesting)
2. The concept of Solaris containers is nearly science fiction. Building them and then watching them through dtrace is a work of art, as in the Sistine Chapel. LVM is a different school of thought that gets to a similar conclusion; this is all skewed by the beauties of VMWare and multiple instance/clustering management possibilities.
3. The licenses-- very important differences in lice
Re:Serious things were missed.... (Score:5, Insightful)
1) Solaris has more abstraction for architecture dependent code than Linux, and is therefore slower but more portable.
2) There are also more people working on Linux, leading to faster development but not as high a quality.
See, that wasn't so bad. Overall, the author concludes that the three OSs do things quite similarly and stand to benefit from each other in the future.
SCO Engineers Do It Best (Score:3, Funny)
Don't say it isn't true - or our lawyers will be calling.
Mod article is Flamebait (Score:2)
But flamewars are so much fun to read, so bring it on!
Interesting Model Breakdown... (Score:5, Interesting)
P.S. Sorry to repeat myself on that...just not sure how best to say it.
kprobes? (Score:3, Informative)
However, the quality of a kernel is not automatically improved by the inclusion of DTrace. Not to disparage Solaris and FreeBSD, but DTrace is primarily for kernel developers and sysadmins. The common user and app developer have little use for either DTrace or kprobes.
Re:kprobes? (Score:3, Interesting)
This is not meant to disparage Solaris dev tools. This is merely to point out that Linux has its own, very powerful developer-oriented tools.
Re:The Answer is Clear (Score:4, Informative)
B) Linux has SystemTap [sourceware.org], which goes above and beyond what DTrace is capable of. It is still in heavy development by Red Hat (Intel and IBM also helped start up the effort), and it's already quite a product.
Your post was one big troll, why do you find it amusing to spread random misinformation?
Regards,
Steve
Re:The Answer is Clear (Score:5, Informative)
Of course, SystemTap is still in its infancy; perhaps after the couple of rewrites that seem standard for major components in the Linux kernel, they can make it stable. But today it is not anywhere near stable. Therefore your statement that "Linux has SystemTap, which goes above and beyond what DTrace is capable of. It is still in heavy development by Red Hat (Intel and IBM also helped start up the effort), and it's already quite a product." is complete rubbish. And think about it: if it is still under heavy development, that alone shows just how far from ready it is.
Of course, the truth really is that DTrace is far more feature-rich than SystemTap is, or will be for a long time. SystemTap's biggest stumbling block is "guru mode", which allows the user to disable any protection that the SystemTap engineers have added. SystemTap's language lacks some basic concepts, like struct and typedef variable types, making guru mode necessary for far too many scripts and inescapable when userland probes are created. That is on top of the other problems documented in my blogs.
You may try to dismiss me as a troll, but nothing could be further from the truth. I'm stating the facts: I have contributed to the SystemTap project and commented on code changes. But I refuse to sit quietly while people try to pass SystemTap off as stable or better than DTrace. DTrace is stable, enterprise-production ready, and more fully featured than SystemTap, even though features have been left out that have to be worked around by the programmer.
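For anyone who hasn't used either tool, here is roughly what equivalent scripts look like. This is only an illustrative sketch (counting system calls per process name, assuming root privileges and the stock syscall provider/tapset), not a feature comparison:

```
/* DTrace (Solaris): count system calls by executable name.
   Aggregations like count() are built into the D language. */
syscall:::entry { @counts[execname] = count(); }

/* SystemTap (Linux): roughly the same idea, assuming the
   syscall tapset is available; globals must be declared. */
global counts
probe syscall.* { counts[execname()]++ }
```

Both print their tallies when the trace stops: DTrace prints aggregations automatically, and SystemTap prints unread globals at session end unless you override that with an end probe.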
Re:The Answer is Clear (Score:3, Informative)
Also, issues preventing the porting of DTrace to other systems would be because of licensing, not technology.
Re:The Answer is Clear (Score:3, Informative)
Re:The Answer is Clear (Score:4, Insightful)
Re:The Answer is Clear (Score:4, Insightful)
Ok, so as a USER, why would you care about MySQL? Because as a SYSTEM ADMINISTRATOR, what I really care about is stability and ease of administration. Once performance reaches "good enough", I couldn't give a shit about improved performance. Hardware is cheap; a $5,000 1U MP system can blast down just about anything I'm going to care about.
But I want it to work today, tomorrow, next week, and next year. I want to reboot when I plan to (doing a kernel update, for instance) and only then. It had better be stable, and shouldn't be all that noticeable in my day-to-day schedule. Doing updates had better not involve half a day of compiling, because downtime is !@#!@ expensive.
But performance? Can you name a SINGLE INSTANCE where you chose your O/S based on some performance graph? Unless your technology depends on Windows (I feel for you if it does), any of the *nixes out there are "good enough" to do just about anything up to the very high end. (Linux/BSD/Solaris/AIX/OSX/etc.) Even at the very high end, it's unlikely that choosing the "worst" OS will cost more than switching to the "best" OS!
In the end, it really doesn't matter all that much. Pick what you like and roll with it. Performance is way down the totem pole; pay attention to stability, security, licensing, and compatibility with your specific needs. Then worry about performance!
Re:The Answer is Clear (Score:3, Interesting)
It's fair to say then, you obviously have very modest requirements.
Unless your technology depends on Windows (I feel for you if it does) any of the *nixes out there are "goo
Re:SOLARIS 10 IS A MICROKERNEL OS (Score:3, Insightful)
It's not written to provoke flamewars. (Score:3, Insightful)
Re:It's not written to provoke flamewars. (Score:3, Insightful)
Re:It's not written to provoke flamewars. (Score:2)
Historic, not technical, reasons. (Score:2)
But we're seeing that change now. There's a PowerPC [opensolaris.org] port in the works, for instance.
Re:WARNING! Invalid article copy! (Score:2)
Re:How much of Solaris has gone open source? (Score:3, Informative)
The official announcement was last January (nine months ago). Rumors had been out earlier, of course.
"How much has been open sourced? AFAIK, all the have opened sourced is DTrace (a very cool tool/framework), but nay else."
That's what was released in January; as of April, the answer is (per the FAQ at http://opensolaris.org/ [opensolaris.org]):
Re:How much of Solaris has gone open source? (Score:4, Interesting)
Well, it's good that you said "AFAIK", because what you know turns out to be out of date. Browse the Solaris source code right here [opensolaris.org].
OK, here's the directory with the dispatcher stuff [opensolaris.org] and here's thread.c specifically [opensolaris.org].
Re:How much of Solaris has gone open source? (Score:3, Funny)
Re:did anyone read the articles full of "facts"? (Score:4, Insightful)
Try a commercial app on anything but redhat.
Open source is great and all, but there are specialised apps for which there simply is no viable alternative. And if you're bound to a commercial app, then in most cases you're either going to run redhat or get no support from your app vendor. Yes, you can likely get it to work, but as soon as you run into an application bug, you're screwed: reinstall on redhat or you'll probably get no support.
Modular mining for example...
smash (linux user/promoter of 9 years).
Re:did anyone read the articles full of "facts"? (Score:3, Informative)
Yum is the equivalent of apt-get. Don't confuse the two.
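For what it's worth, the common operations map roughly one-to-one between the two ("foo" is a placeholder package name):

```
# Debian (apt)            # Red Hat / Fedora (yum)
apt-get update            yum check-update        # refresh / list available updates
apt-get install foo       yum install foo         # install a package
apt-get upgrade           yum update              # upgrade installed packages
apt-get remove foo        yum remove foo          # uninstall a package
```

The repository formats differ (deb vs. rpm packaging and metadata), but the day-to-day workflow is much the same.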