
Comparing Linux To System VR4

robyannetta writes "Paul Murphy from LinuxInsider.com asks the question 'What's the difference between Linux and System VR4?' From the article: 'If there's a real bottom line here, the one thing I'm clear on is that I haven't found it yet, but the questions raised have been more interesting than the answers -- so more help would be welcomed.'"
  • Various differences (Score:3, Informative)

    by Lindsay Lohan ( 847467 ) on Monday January 17, 2005 @08:15PM (#11390490) Homepage Journal
    So what's the difference between SVR4 and Linux? At a glance they may look the same because they're both in the Unix family, but they're actually quite different.

    GNU/Linux has a wider variety of software natively written for it.

    The Linux kernel includes support for more hardware than SVR4.

    Linux is more popular as a desktop operating system than SVR4.

    Another important factor to consider for many users is price, although there are inexpensive and free versions of UNIX.

    Linux issues and bugs are often fixed extremely fast.

    For a more in-depth technical reference, see this good article [over-yonder.net] on the fundamental difference between BSD and UNIX (although BSD is not technically SVR4, it's still a good read).

  • Sigh (Score:5, Informative)

    by devphil ( 51341 ) on Monday January 17, 2005 @08:23PM (#11390569) Homepage


    It took me three paragraphs before I figured out that the author of the article wasn't talking about an operating system called "VR4".

    Whitespace matters, people. "SystemV R4" or "SVR4" or "SysVR4" woulda done just fine...

  • by Anonymous Coward on Monday January 17, 2005 @08:44PM (#11390729)
    My request for help included a list of some things you can do with Solaris but not with Linux, and more than 40 readers sent me e-mail responding to this by telling me that Linux (or, in several cases, Windows) can do all of those things [...] those responses suggested a frightening thought for future exploration: that the knowledge gap between the Linux and Solaris communities might be much bigger than I think it is.
  • I read the FA (Score:5, Informative)

    by thogard ( 43403 ) on Monday January 17, 2005 @08:48PM (#11390751) Homepage
    The guy never saw the SVR4 code... talk about a mess. AT&T had nice clean code that worked well, was efficient, but didn't do networking very well at all. So they hopped into bed with Sun, who had real good networking stuff from BSD. The result was that the two of them spawned SVR4. The read system call in the old Unix was short and sweet and fit on a vt100 screen. The new one took pages even when printed out and didn't do anything new. It was a rewrite for the sake of a rewrite.

    There are some very clever things in Unix that you don't notice till someone redoes them and turns them into a stinking heap. For example, the new Solaris 10 services. It does what init and inetd do, but needs a binary config file which it rewrites on boot and whenever it changes anything (a la a Windows registry for Unix). Having been way too deep into too many broken systems, I don't like binary files that change on their own and are essential for my OS to work. But this is progress...
  • by Anonymous Coward on Monday January 17, 2005 @08:48PM (#11390754)
    Here are some obvious differences from someone who's worked on both. These are just some quick things off the top of my head.

    1. Streams. AT&T's STREAMS was just a mistake. It was a great idea in theory. In practice, it adds too much overhead without enough advantages. Even at Sun it's recognized among engineers as a mistake, and it's significant that methods of speeding up the networking stack involve discussions on how to get away from streams.

    2. The VM. Linux's VM in 2.6 is vastly superior to the stock AT&T VM. And it's probably better than Sun's in the 2.6 kernel (NOT before 2.4, however). For example, the VM limitations are one reason why NFS sucks in 2.4 kernels, and even Trond has admitted this.

    3. Boot-up code. Grub + Linux rocks. It's the best solution out there. Vastly superior to everything, including Sun's implementation. Of course, Sun is hobbled by that Open Boot nonsense, where you have to type an absolutely absurd amount of stuff to specify a device.

    4. Kernel debugging. Stock AT&T blows here. Sun rules, with Linux a close second. This is with respect to kgdb, although some interesting new technologies are under development in Linux.

    5. SMP. Stock AT&T blows, but not much has been done lately here. Sun's implementation is superior to everything, which is why it can support so many processors. Linux is starting to catch up, though.

    Well, that's just off the top of my head. There are probably other things, but I've got to get back to work. :P
  • by pslam ( 97660 ) on Monday January 17, 2005 @09:32PM (#11391087) Homepage Journal
    He's obviously quite unqualified to write the article and didn't even bother to ask anyone. A single processor can emulate multiple processors, and this is often a convenient and even efficient programming model. To elaborate:
    • Sometimes it's cheaper in memory and/or clock cycles to use context switching and multiple stacks than scheduling functions off a single thread. This can be true even if the threads aren't concurrent (e.g. coroutines).
    • It's often easier to use multiple threads even when not necessary, despite having to deal with mutexes. The amount of state in some protocols can lead to a mess.
    • When you need low latency, threads are often the only solution.
    • Single-threaded apps cannot schedule tasks preemptively. Reason enough right there.
    • When you need prioritisation of preemptive tasks: the kernel is best off doing the scheduling, because you might not be the only process with priority needs.
    • A thread is just a process without most of the baggage, and you don't see people arguing that processes don't belong on x86. (See the clone() sketch just after this list.)
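    To make that last bullet concrete, here's a minimal sketch of creating a "thread" directly with Linux's clone() call (a sketch only: it assumes Linux with the glibc clone() wrapper, and the 64KB stack and exact flag set are illustrative choices, not recommendations):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define STACK_SIZE (64 * 1024)  /* illustrative; thread libraries pick bigger stacks */

        static int worker(void *arg)
        {
            /* Runs in the parent's address space, so it can touch parent variables. */
            int *counter = arg;
            *counter += 1;
            return 0;
        }

        int main(void)
        {
            int counter = 0;
            char *stack = malloc(STACK_SIZE);
            if (stack == NULL)
                return 1;

            /* CLONE_VM and friends make the child share memory, files and signal
               handlers: a process without most of the baggage, i.e. a thread.
               (Pass stack + STACK_SIZE because x86 stacks grow downward.) */
            pid_t pid = clone(worker, stack + STACK_SIZE,
                              CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                              &counter);
            if (pid == -1)
                return 1;

            waitpid(pid, NULL, 0);
            printf("counter = %d\n", counter);  /* prints 1: memory really was shared */
            free(stack);
            return 0;
        }

    Drop the CLONE_* sharing flags and the same call gives you fork()-style copy semantics. Same primitive, less baggage.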
    Then again, mindless use of threads does annoy me. So I'll list some "soft" indicators of when you shouldn't use threads:
    • When a single threaded app would be substantially faster.
    • When you don't need preemption.
    • When you're going to be using 8,000 of them. It's at least 4-16KB per thread, and thread switches aren't negligibly cheap. Rewrite with poll() (see the sketch at the end of this comment).
    • When you cannot say with certainty that you won't deadlock or race.
    • When you don't understand what the previous point means.
    • When your hardware/OS/platform has a hideous thread switching cost. Can't think of any reasonable system these days where this is a show stopper.
    Leave criticism of OS features to those who are qualified, Murphy. Better still, try asking one of them - there's no shortage.
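    Since I keep pointing at poll(), here's a minimal sketch of the single-threaded alternative to thread-per-connection: a trivial echo server (a sketch only: most error handling is trimmed, and the port number is an arbitrary choice for the example):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <poll.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #define MAX_FDS 1024  /* listener + clients; note: no per-client stacks */

        int main(void)
        {
            int listener = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(7777);  /* arbitrary port for the sketch */
            bind(listener, (struct sockaddr *)&addr, sizeof addr);
            listen(listener, 16);

            struct pollfd fds[MAX_FDS];
            fds[0].fd = listener;
            fds[0].events = POLLIN;
            int nfds = 1;

            for (;;) {
                poll(fds, nfds, -1);  /* one thread, one stack, many clients */

                /* New connection? Add it to the watch list. */
                if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {
                    int client = accept(listener, NULL, NULL);
                    if (client >= 0) {
                        fds[nfds].fd = client;
                        fds[nfds].events = POLLIN;
                        nfds++;
                    }
                }

                /* Service every readable client in turn -- no preemption needed. */
                for (int i = 1; i < nfds; i++) {
                    if (!(fds[i].revents & POLLIN))
                        continue;
                    char buf[512];
                    ssize_t n = read(fds[i].fd, buf, sizeof buf);
                    if (n <= 0) {                /* closed or error: drop the client */
                        close(fds[i].fd);
                        fds[i--] = fds[--nfds];  /* compact, then re-examine this slot */
                    } else {
                        write(fds[i].fd, buf, n);  /* echo in place of real work */
                    }
                }
            }
        }

    One stack, no mutexes, and no 4-16KB penalty per client.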
  • Re:RTFA (Score:3, Informative)

    by lakeland ( 218447 ) <lakeland@acm.org> on Monday January 17, 2005 @09:52PM (#11391233) Homepage
    Is it a troll? I found it too confusing to say. The article is looking for technical differences between linux and SVR4. Consider this quote: "Specifically, what's needed here is the low level programmer view, not of what's out there by way of applications..."

    The impression I got was the author was way over his depth writing it, and was largely aware of this. Consider the final conclusion "If there's a real bottom line here, the one thing I'm clear on is that I haven't found it yet". Now, that's either a very good troll or a genuine article.

    As for answering the article. Well, the painfully obvious difference is in hardware support. SVR4 is a joke in terms of hardware support compared to linux.

    In terms of 'features' like kgdb, ptrace, LVM, NUMA, SMP, well I don't think I even know enough to make an informed comment. I will note that the author's attempts to draw comparisons appear extremely weak to me (particularly WRT threading).

    The author also seemed to confuse a number of architectural weaknesses with kernel weaknesses. Run Linux on toy hardware and it won't have mainframe hardware features. Well, duh. Run Solaris on personal computer hardware and it won't either. Run Linux on mainframe hardware and it will.

    So, I consider the article very weak, and not worth the electrons it was distributed on. However, it is a fair enough question to ask. It is just a pity to ask it so badly and then slip in bits like the SCO lawsuit for extra hits.
  • by Anonymous Coward on Monday January 17, 2005 @10:24PM (#11391433)
    Read this [ibm.com]

    You are such a fucking loser, you monger.

    I'll give you a rundown of the categories on that page, in case you are too lazy to read it.

    Linux on POWER
    Linux on Intel processor-based servers
    Linux on AMD processor-based servers
    Linux on Mainframe

    The Linux s390 and PPC and PPC64 (and even m68k) architecture maintainers all work for IBM. The POWER5 processor had features designed with Linux in mind to better suit its low level memory management system. The IBM Linux guys go do Linux bringup and verification on sample silicon.

    Oh also, the Linux IA64 maintainers work for Intel and HP, both companies have quite a few staff doing Linux (especially ia64) work. SGI has a lot of staff working on the kernel alone.

    Sun these days is probably not supported as well, but why would you buy a sparc server running Linux when you could buy an Altix, or a POWER5 (or zSeries if you want a real mainframe)? You would have to be insane. The only reason sparcs are still being sold is the solaris on sparc legacy.
  • Re:I read the FA (Score:4, Informative)

    by Ungrounded Lightning ( 62228 ) on Monday January 17, 2005 @10:35PM (#11391497) Journal
    AT&T had nice clean code that worked well, was efficient, but didn't do networking very well at all. So they hopped into bed with Sun, who had real good networking stuff from BSD. The result was that the two of them spawned SVR4. The read system call in the old Unix was short and sweet and fit on a vt100 screen. The new one took pages even when printed out and didn't do anything new. It was a rewrite for the sake of a rewrite.

    My impression of the SystemV series was that the proprietary status of Unix was in doubt and SystemV was intended to fix that.

    Unix was written before US copyright law was extended to apply to software, and before the "program as component of patentable invention" hack was invented and debugged. So the only IP protection AT&T had on it was trade secret. Trade secret goes "poof!" when the secret is out, and AT&T had distributed several generations of source and documentation to universities around the world.

    (This was also before the breakup of the Bell System, and there was some mandate on them to publish certain telephone-related work as part of their monopoly obligations which, separately, might have imperiled its IP status. I don't recall the details. But it was probably made moot by the court-mandated breakup later.)

    Unix had been a back-room project by a team that had been explicitly forbidden, at least initially, from building an OS. (Indeed, one factor driving the kernel's simplicity and the design goal of pushing as much out to the application layer as possible was the creation of plausible deniability: "An OS does X, Y, and Z and this doesn't. So it's not an OS. Right?")

    Since they weren't writing something viewed as productizable or proprietary, since they were at Bell Labs (where publishing was the usual route for most work), and since software in those days wasn't productized anyhow, they felt no need to keep it under their hats.

    The broad circulation of source and docs spawned the era of the commodity unix box. A new hardware vendor, rather than writing his own OS, could just port Unix to the box - a matter of hacking a couple thousand lines of hardware-interface code. AT&T would look the other way as long as they weren't selling it. Once they got it working, AT&T would cut a licensing deal on very good terms. (For them it was free money.)

    This continued until the University of New South Wales built a course around System 6 (i.e. release 6 of the documentation set, which was how System N was named). They printed a two-volume coursebook - volume 1 being the kernel source pretty-formatted, while volume 2 was a textbook walking you through the guts. This immediately became an underground classic, and finally got onto the administrative radar screen at AT&T. The lawyers "Cease and Desist"ed the University.

    The SystemV project, if I recall correctly, started shortly after the CONTU (Committee On New Technological Uses - charged with studying and proposing to Congress whether/how software should receive copyright protection) reported and Congress explicitly extended copyright to cover software. Now that IP protection was available, AT&T got together with several of the big Unix players and together they reimplemented the kernel from scratch, and tried to move everybody to the result.

    They gave a number of plausible-sounding reasons for the work, claiming it was a great improvement on the previous stuff. But they didn't include the Berkeley work (especially noticeable: no Berkeley signals), which had its own proprietary issues. The resulting functionality of SystemV was both incompatible with and lower than that of BSD and some other System N derivatives. So the general consensus (at least among the people I hung out with at the time) was that the whole exercise was to clean up the IP status of Unix for its future as a product.
  • by Anonymous Coward on Monday January 17, 2005 @10:44PM (#11391560)
    First, the biggest single-system Linux box is 512 CPUs (although I think NASA has 2048 CPUs in a BX2 machine, which expands the cache coherency domain to 1024 or 2048 CPUs; I'm not sure if they've actually hooked them up yet).

    Still, that blows Sun's biggest machine out of the water, especially in absolute performance, when you consider a new 9MB-cache Itanium 2 is probably a clear twice the speed of the fastest of Sun's SPARCs.

    Second, Sun's machines are NUMA as well. That's right, they have Non-Uniform Memory Access. See here [sun.com]. They have a four-tiered access hierarchy on memory. Either way, SGI's NUMAlink interconnect is far better than Sun's old crossbar-switch dinosaur. See here [virginia.edu]. The Altix has four times the top-of-the-line Sun's memory bandwidth per CPU. And that is SGI's old interconnect, mind you.
  • Re:I read the FA (Score:3, Informative)

    by segfaultcoredump ( 226031 ) on Tuesday January 18, 2005 @12:07AM (#11392060)
    There are some very clever things in Unix that you don't notice till someone redoes them and turns them into a stinking heap. For example, the new Solaris 10 services. It does what init and inetd do, but needs a binary config file which it rewrites on boot and whenever it changes anything (a la a Windows registry for Unix). Having been way too deep into too many broken systems, I don't like binary files that change on their own and are essential for my OS to work. But this is progress...

    Ok, from this little statement it is obvious that you missed the major feature behind the new 'greenline' code in Solaris 10. (I know, this is slashdot...)

    In short, it generates a directed graph of services. This gives the system a list of services and their dependencies, where the old init.d/rc.X scheme only provides a linear `these scripts must run in this order` relationship. (A sketch of the idea follows at the end of this comment.)

    This has several advantages that immediately come to mind:

    1. The system can start faster since it can now run several init scripts at once. No longer does one have to wait for nfs to start before starting the web server (assuming one uses the all too common setup where nfs is rc3.d/S15nfs and the web server is rc3.d/S99apache)
    2. Since the system tracks dependencies, it can restart dependent services as needed (and not touch services that are not impacted)
    3. You can disable things and patches will no longer re-enable them. Under Solaris 9, a common way to disable something was to rename the SXXbla file by putting a `.` or `_` in front of it. This works great unless they release a patch to that file and the new (patched) file gets dumped out there in the rc directory.
    4. During a jumpstart (kickstart for you RedHat folks), you can drop in your own site.xml file and instantly customize a ton of things that used to require editing dozens of files.
    5. It is now easy to drop in a service monitoring facility (like Sun's SMF) that monitors key services and restarts both them and their dependent services.

    By the way, the file that it uses is xml, not binary.

    And it is also not a 'file that changes'. The actual config file is static (unless you make a change). The only thing that may change is the order in which two nondependent services start relative to each other. And that is the point: they are not dependent, so you should not care whether apache starts before sshd. In fact, this is a very good thing, as you probably don't want a problem in the apache rc file to cause sshd to never start. (Guess how many times I've seen rc3.d/S99apache and rc3.d/S99sshd on a single system... guess which one runs first under init.d... yup, apache.)

    For more details, you can check out the blogs of Stephen Hahn [sun.com], Bill Moffitt [sun.com], or Tobin Coziahr. [sun.com]
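    To make the dependency-graph point concrete, here's a minimal sketch of the idea in C (this is not Sun's actual greenline code, and the four-service table is made up): each pass starts every service whose dependencies are already up, so everything within a pass could be launched in parallel.

        #include <stdio.h>

        /* Hypothetical service table: deps[s][d] == 1 means s needs d up first. */
        enum { NETWORK, NFS, SSHD, APACHE, NSERVICES };
        static const char *name[NSERVICES] = { "network", "nfs", "sshd", "apache" };
        static const int deps[NSERVICES][NSERVICES] = {
            /* network */ { 0, 0, 0, 0 },
            /* nfs     */ { 1, 0, 0, 0 },  /* needs network */
            /* sshd    */ { 1, 0, 0, 0 },  /* needs network -- and NOT apache */
            /* apache  */ { 1, 1, 0, 0 },  /* needs network and nfs */
        };

        int main(void)
        {
            int started[NSERVICES] = { 0 };
            int nstarted = 0;

            while (nstarted < NSERVICES) {
                /* Collect everything whose dependencies are already up... */
                int ready[NSERVICES] = { 0 };
                for (int s = 0; s < NSERVICES; s++) {
                    if (started[s])
                        continue;
                    ready[s] = 1;
                    for (int d = 0; d < NSERVICES; d++)
                        if (deps[s][d] && !started[d])
                            ready[s] = 0;
                }

                /* ...then start the batch; a real init could run these in parallel. */
                int progressed = 0;
                printf("start in parallel:");
                for (int s = 0; s < NSERVICES; s++) {
                    if (ready[s]) {
                        printf(" %s", name[s]);
                        started[s] = 1;
                        nstarted++;
                        progressed = 1;
                    }
                }
                printf("\n");

                if (!progressed) {  /* nothing became ready: a dependency cycle */
                    fprintf(stderr, "dependency cycle detected\n");
                    return 1;
                }
            }
            return 0;
        }

    Note that sshd comes up in the second pass no matter what apache is doing, which is exactly the relationship a linear S99 numbering can't express.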

  • by SunFan ( 845761 ) on Tuesday January 18, 2005 @12:59AM (#11392312)
    Of course, Sun is hobbled by that Open Boot nonsense, where you have to type an absolutely absurd amount of stuff to specify a device.

    Of course, if I have a dozen network ports, several hard drives with several operating systems, and another dozen CD-ROM and DVD drives, OpenBoot will allow me to easily boot from any of them. Also, I recommend you look up documentation regarding devalias and nvramrc.

    OpenBoot is so superior to the PC BIOS that it is the main reason I would hesitate to buy another PC.
  • by andreyw ( 798182 ) on Tuesday January 18, 2005 @03:44AM (#11392958) Homepage
    Gah, I loved your post, but I disagree with GRUB being somehow better than OpenBoot/OpenFirmware. The drive enumeration is completely braindead and doesn't match up with anything in Linux (heck, it doesn't even match up with the enumeration in Hurd, and Grub is the Hurd bootloader, dammit). Also, GRUB is just a bootloader, that's it. Sure, if you use the Multiboot format it can pass some information like memory size to the kernel, but OF/OB is a system monitor that manages all the hardware at start-up. The naming isn't ridiculous: it's descriptive of WHAT the device is and WHERE the device is (including the bus it's on). Also, OF/OB provides an architecture-independent language (a variation of Forth) for writing PCI-card onboard ROMs, as well as an easy interface for USING these devices from within OF/OB (the device tree). Technically, a Mac's OpenFirmware would be able to boot from a SCSI controller taken from an UltraSparc.
