Comparing Linux To System VR4
robyannetta writes "Paul Murphy from LinuxInsider.com asks the question
What's the difference between Linux and System VR4? From the article: 'If there's a real bottom line here, the one thing I'm clear on is that I haven't found it yet, but the questions raised have been more interesting than the answers -- so more help would be welcomed.'"
Various differences (Score:3, Informative)
GNU/Linux has a wider variety of software natively written for it
the Linux kernel includes support for more hardware than SVR4
Linux is more popular as a desktop operating system than SVR4.
Another important factor to consider for many users is price, although there are inexpensive and free versions of UNIX.
Linux issues and bugs are often fixed extremely fast.
For a more in-depth technical reference, see this good article [over-yonder.net] on the fundamental difference between BSD and UNIX (although BSD is not technically SVR4, it's still a good read).
Sigh (Score:5, Informative)
It took me three paragraphs before I figured out that the author of the article wasn't talking about an operating system called "VR4".
Whitespace matters, people. "SystemV R4" or "SVR4" or "SysVR4" woulda done just fine...
I read the FA (Score:5, Informative)
There are some very clever things in Unix that you don't notice till someone redoes them and turns them into a stinking heap. For example, the new Solaris 10 services. It does what init and inetd do, but needs a binary config file which it rewrites at boot and whenever it changes something (à la the Windows Registry, for Unix). Having been way too deep in too many broken systems, I don't like binary files that change on their own being essential for my OS to work. But this is progress...
Off the top of my head, here you go (Score:5, Informative)
1. Streams. AT&T's STREAMS was just a mistake. It was a great idea in theory; in practice, it adds too much overhead without enough advantages. Even at Sun it's recognized among engineers as a mistake, and it's significant that methods of speeding up the networking stack involve discussions of how to get away from STREAMS.
2. The VM. Linux's VM in 2.6 is vastly superior to the stock AT&T VM. And in the 2.6 kernel it's probably better than Sun's (NOT before 2.4, however). For example, the VM limitations are one reason why NFS sucks in 2.4 kernels; even Trond has admitted this.
3. Boot-up code. GRUB + Linux rocks. It's the best solution out there, vastly superior to everything, including Sun's implementation. Of course, Sun is hobbled by that Open Boot nonsense, where you have to type an absolutely absurd amount of stuff to specify a device.
4. Kernel debugging. Stock AT&T blows here. Sun rules, with Linux a close second. This is with respect to kgdb, although some interesting new technologies are under development in Linux.
5. SMP. Stock AT&T blows, but not much has been done lately here. Sun's implementation is superior to everything, which is why they can support so many processors. Linux is starting to catch up, though.
Well, that's just off the top of my head. There are probably other things, but I've got to get back to work.
Re:RTFA (Score:3, Informative)
The impression I got was that the author was way out of his depth writing it, and was largely aware of this. Consider the final conclusion: "If there's a real bottom line here, the one thing I'm clear on is that I haven't found it yet". Now, that's either a very good troll or a genuine article.
As for answering the article. Well, the painfully obvious difference is in hardware support. SVR4 is a joke in terms of hardware support compared to linux.
In terms of 'features' like kgdb, ptrace, LVM, NUMA, SMP, well I don't think I even know enough to make an informed comment. I will note that the author's attempts to draw comparisons appear extremely weak to me (particularly WRT threading).
The author also seemed to confuse a number of architectural weaknesses with kernel weaknesses. Run Linux on a toy mainframe and it won't have mainframe hardware features. Well, Doh. Run Solaris on personal computer hardware and it won't either. Run Linux on mainframe hardware and it will.
So, I consider the article very weak, and not worth the electrons it was distributed on. However, it is a fair enough question to ask. It is just a pity to ask it so badly and then slip in bits like the SCO lawsuit for extra hits.
Re:Various differences (Score:2, Informative)
I'll give you a rundown of the categories on that page, in case you are too lazy to read it.
Linux on POWER
Linux on Intel processor-based servers
Linux on AMD processor-based servers
Linux on Mainframe
The Linux s390 and PPC and PPC64 (and even m68k) architecture maintainers all work for IBM. The POWER5 processor had features designed with Linux in mind to better suit its low level memory management system. The IBM Linux guys go do Linux bringup and verification on sample silicon.
Oh also, the Linux IA64 maintainers work for Intel and HP, both companies have quite a few staff doing Linux (especially ia64) work. SGI has a lot of staff working on the kernel alone.
Sun these days is probably not supported as well, but why would you buy a sparc server running Linux when you could buy an Altix, or a POWER5 (or zSeries if you want a real mainframe)? You would have to be insane. The only reason sparcs are still being sold is the solaris on sparc legacy.
Re:I read the FA (Score:4, Informative)
My impression of the SystemV series was that the proprietary status of Unix was in doubt and SystemV was intended to fix that.
Unix was written before US copyright law was extended to apply to software, and before the "program as component of patentable invention" hack was invented and debugged. So the only IP protection AT&T had on it was trade secret. Trade secret goes "poof!" when the secret is out, and AT&T had distributed several generations of source and documentation to universities around the world.
(This was also before the breakup of the Bell System, and there was some mandate requiring them to publish certain telephone-related work as part of their monopoly status which, separately, might have imperiled its IP status. I don't recall the details. But it was probably made moot by the court-mandated breakup later.)
Unix had been a back-room project by a team that had been explicitly forbidden, at least initially, from building an OS. (Indeed, one factor driving the kernel's simplicity and the design goal of pushing as much out to the application layer as possible was the creation of plausible deniability: "An OS does X, Y, and Z and this doesn't. So it's not an OS. Right?")
Since they weren't writing something viewed as productizable or proprietary, since they were at Bell Labs (where publishing was the usual route for most work), and since software in those days wasn't productized anyhow, they felt no need to keep it under their hats.
The broad circulation of source and docs spawned the era of the commodity unix box. A new hardware vendor, rather than writing his own OS, could just port Unix to the box - a matter of hacking a couple thousand lines of hardware-interface code. AT&T would look the other way as long as they weren't selling it. Once they got it working, AT&T would cut a licensing deal on very good terms. (For them it was free money.)
This continued until the University of New South Wales built a course around System 6 (i.e. release 6 of the documentation set, which was how System N was named). They printed a two-volume coursebook - volume 1 being the kernel source pretty-formatted, while volume 2 was a textbook walking you through the guts. This immediately became an underground classic, and finally got onto the administrative radar screen at AT&T. The lawyers "Cease and Desist"ed the University.
The SystemV project, if I recall correctly, started shortly after the CONTU (Committee On New Technological Uses - charged with studying and proposing to Congress whether/how software should receive copyright protection) reported and Congress explicitly extended copyright to cover software. Now that IP protection was available, AT&T got together with several of the big Unix players and together they reimplemented the kernel from scratch, and tried to move everybody to the result.
They gave a number of plausible-sounding reasons for the work, claiming it was a great improvement on the previous stuff. But they didn't include the Berkeley work (especially noticeable: no Berkeley Signals), which had its own proprietary issues. The resulting functionality of SystemV was both incompatible with and lower than that of BSD and some other System N derivatives. So the general consensus (at least among the people I hung out with at the time) was that the whole exercise was to clean up the IP status of Unix for its future as a product.
Re:Off the top of my head, here you go (Score:3, Informative)
Still, that literally blows Sun's biggest machine out of the water. Especially in absolute performance, when you consider a new 9MB-cache I2 is probably easily twice the speed of the fastest of Sun's SPARCs.
Second, Sun's machines are NUMA as well. That's right, they have Non-Uniform Memory Access. See here [sun.com]. They have a 4-tiered access hierarchy on memory. Either way, SGI's NUMAlink interconnect is far better than Sun's old crossbar-switch dinosaur. See here [virginia.edu]. The Altix has 4 times the memory bandwidth per CPU of Sun's top-of-the-line machine. And that is SGI's old interconnect too, mind you.
Re:I read the FA (Score:3, Informative)
Ok, from this little statement it is obvious that you missed the major feature behind the new 'greenline' code in Solaris 10. (I know, this is slashdot...)
In short, it generates a directed graph of services. This gives the system a list of services and their dependencies. The old init.d/rc.X scheme only provides a linear `these scripts must run in this order` relationship.
This has several advantages that immediately come to mind: independent services can start in parallel, and a failure in one service's startup script can't block unrelated services from coming up.
By the way, the file that it uses is xml, not binary.
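As a sketch of what that XML looks like — the service name and FMRI below are illustrative, not copied from a real manifest — an SMF dependency stanza is roughly:

```xml
<!-- illustrative SMF manifest fragment: "don't start this service
     until the network milestone is online" -->
<dependency name='network'
            grouping='require_all'
            restart_on='none'
            type='service'>
  <service_fmri value='svc:/milestone/network:default'/>
</dependency>
```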
And it is also not a 'file that changes'. The actual config file is static (unless you make a change). The only thing that may change is the order in which two nondependent services start relative to each other. And that is the point: they are not dependent, so you should not care whether apache starts before sshd. In fact, this is a very good thing, as you probably don't want a problem in the apache rc file to cause sshd to never start. (Guess how many times I've seen rc3.d/S99apache and rc3.d/S99sshd on a single system... guess which one runs first under init.d... yup... apache.)
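The difference between the two orderings is easy to demonstrate with the stock tsort(1) utility — the service names below are made up for illustration. Lexical rc ordering puts apache before sshd no matter what, while a dependency graph only constrains services that actually depend on each other:

```shell
# Lexical rc-script ordering: 'S99apache' sorts before 'S99sshd'.
printf '%s\n' S99sshd S99apache | sort

# Dependency ordering: each input line is a "predecessor successor" pair.
# network must precede both daemons; sshd and apache stay unordered
# relative to each other, just like nondependent SMF services.
printf '%s\n' 'network sshd' 'network apache' | tsort
```

tsort emits any valid topological order, which is exactly the freedom SMF exploits to start nondependent services in parallel.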
For more details, you can check out the blogs of Stephen Hahn [sun.com], Bill Moffitt [sun.com], or Tobin Coziahr. [sun.com]
Re:Off the top of my head, here you go (Score:3, Informative)
Of course, if I have a dozen network ports, several hard drives with several operating systems, and another dozen CD-ROM and DVD drives, OpenBoot will allow me to easily boot from any of them. Also, I recommend you look up documentation regarding devalias and nvramrc.
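To make that concrete — the device path here is invented for illustration — nvalias lets you collapse a long OpenBoot device path into a short name (stored in nvramrc, so it survives reboots) and boot from it:

```
ok nvalias mydisk /pci@1f,0/pci@1/scsi@8/disk@0,0
ok boot mydisk
```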
OpenBoot is so superior to the PC BIOS that it is the main reason I would hesitate to buy another PC.
Re:Off the top of my head, here you go (Score:3, Informative)