Comparing Linux To System VR4
robyannetta writes "Paul Murphy from LinuxInsider.com asks the question
What's the difference between Linux and System VR4? From the article: 'If there's a real bottom line here, the one thing I'm clear on is that I haven't found it yet, but the questions raised have been more interesting than the answers -- so more help would be welcomed.'"
Re:Various differences (Score:4, Insightful)
more of the technical aspects, i.e., threads and SMP and
stuff like that, and how they're implemented differently. Not
that the qualitative differences aren't of some import too,
but I just see that more as the color of the cars vs. the
types of engines/transmissions.
Nice link on BSD philosophy.
Being that this is /. (Score:5, Insightful)
The article is basically worthless. It's like walking into a classroom about 20 minutes into the lecture, and walking out 15 minutes later.
It starts in the middle, and leads nowhere. Just a blip of time that, for whatever reason, is, unfortunately, recorded here for posterity.
L-A-M-E (Score:5, Insightful)
1) Linux runs on a 'toy' platform (x86), and why the hell would a programmer want threads when there's not TRUE concurrency?
2) Linux does nothing significant that AT&T wasn't doing 10 years ago.
3) Generally speaking, Linux sucks.
IMHO I expect to see this sort of thing about half-way down in a thread of
-JT
The guy's a phony. (Score:5, Insightful)
Confused? That's what Paul Murphy hoped. He's just as confused as you are. Ignore him.
The difference is easy... and surprisingly simple (Score:2, Insightful)
Re:Various differences (Score:2, Insightful)
> # GNU/Linux has a wider variety of software
> natively written for it
There is a huge base of professionally written
and supported apps for SVR4/Solaris/UnixWare/...
The GNU compiler and toolchain, while usable,
perhaps even good, is inferior to the offerings
from Sun, IBM, HP, and the commercial vendors.
The same can be said for nearly, if not every,
category of GNU/Linux/Open-Source software.
Linux may in fact have more "stuff" available
for it but when you weed out the crap, it isn't
that impressive.
> # the Linux kernel includes support for more
> hardware than SVR4
>
No it doesn't. There is little to no support
in Linux for Sun, HP, DEC, Compaq, IBM hardware.
And with few exceptions, what support is in place
is pretty poor. Likewise, when you look at the
offerings from Sun, SCO, and others, the amount
of support on the PC is just about as good.
> # Linux is more popular as a desktop operating
> system than SVR4.
>
Maybe, but maybe not. People at home and dorm
geeks dorking around don't count. I'll guarantee
you there are more scientific and engineering
shops that are using SVR4-based desktops than
Linux.
> # Another important factor to consider for many
> users is price, although there are inexpensive
> and free versions of UNIX.
Define "price". Having to wait around for days
on end while somebody on the mailing list or web
forum decides to answer your question (or
sometimes doesn't at all) is unacceptable. Sun,
HP, and IBM provide guaranteed response times and
they do it well. When your systems are down and
costing you $5000 an hour, open-source "support"
doesn't cut it and ends up costing you a lot more.
> # Linux issues and bugs generally are often fixed extremely fast.
Sometimes yes, sometimes no.
Re:What does this say? (Score:3, Insightful)
I didn't get that either. I had some (very serious) issues with concurrency in Linux, but they've all been fixed in 2.6/NPTL.
One thing I did like was his comment that the distinction between desktops and servers is mostly one of marketing. I thought that was quite insightful, if not entirely original.
The GPL is Linux's hallmark (Score:5, Insightful)
Re:Various differences (Score:3, Insightful)
Here, have a few goats...
Correct, kind of. There are arguably more commercially supported apps for Unix or Solaris than for Linux, and generally speaking, more money is involved. (That is, the apps in question are serious ones that companies rely on; if the software fails, the company goes south.)
Having said that, there are probably more "professionally written" apps for Linux; it's just that most of them aren't as commercial or mission-critical.
That depends what metric you use. If the only measure you use is the performance of generated code, then I will concede that you're probably right. On the other hand, the GNU compiler tends to be much more standards-compliant than its commercial competitors. The GNU toolchain is ported to more architectures and platforms than any other. Moreover, the open source suite offers more "off the shelf" than any other (e.g. valgrind, though it's not ported to anything other than IA32), where on the Sun or IBM systems you'd need to buy something extra (e.g. Purify). One notable exception is the Sun ONE Studio performance collector/analyzer; I haven't seen anything like it for Linux.
Strongly disagree. In the desktop arena, open source is way ahead of commercial Unix.
Commercial Unix definitely has the upper hand here because they can use the best of open source as well as the best of proprietary. So, for example, you can run Apache on your Solaris machine and get the best of both worlds, so to speak.
That's a false dichotomy, and Sturgeon's Law applies here. There's a lot of crap in Linux, but that's because there's a lot of stuff, and 90% of everything is crap. The 10% left over is equally impressive, but it tends to do different things than the 10% of commercial offerings which aren't crap.
Open source doesn't have a "Verilog-killer", but commercial Unix doesn't have an "Apache-killer".
Re:RTFA (Score:3, Insightful)
irony-on And the unsubtle implications concerning the changes in 2.6 respecting the SCO-IBM fracas are legitimate technical observations. irony-off
The article read like troll-bait to me. A serious journalist could have simply asked developers what the technical differences are and how they are affected by the intended platforms. A serious programmer could have answered his own questions. What class does that leave the article's author in -- bridge dweller?
Re:The difference is easy... and surprisingly simp (Score:1, Insightful)
Re:Off the top of my head, here you go (Score:5, Insightful)
Not. OpenBoot/OpenFirmware are vastly superior to the cheesy i-must-look-like-a-floppy system that crippled PCs have. When Grub supports testing hardware, or listing the devices present inside the system over a serial console, let me know. List the SCSI buses and the devices present? I've used OpenBoot (Sun), OpenFirmware (Apple), the NeXT ROM monitor, as well as the stuff on Alpha and PA-RISC, whose names I can't remember right now -- they're all much more flexible than Grub.
Grub also still doesn't work on all PC hardware. I've never gotten it to work with a Compaq SmartArray card. Never. Several different versions of Grub, several different SmartArrays.
Granted, Grub is a massive improvement over crap like LILO, but it's nowhere near as flexible as what you'd find on a good Unix machine.
Re:Various differences (Score:3, Insightful)
The flip side of that is that CPU performance doesn't mean squat on a multiprocessor system if the interconnect and memory systems are not up to snuff.
Opteron could "go up to as much SMP" as US, provided the glue logic is there.
Are you sure about that?
The high end US chips have provisions for maintaining cache coherency in systems with up to 1023 processors. I don't recall seeing a similar feature in the Opterons (IIRC, they're good for up to 8 processors). The Opteron more closely competes with the US-IIIi than the US-III or US-IV.
I also recall reading that one generation (Power4?) of IBM's Power boxes had some performance problems when doing real SMP workloads due to IBM messing up the memory system. The same systems did wonderfully with single-processor tasks.
Re:What does this say? (Score:3, Insightful)
Background: modern compilers, e.g., the GNU suite, use multiple passes to cleanly separate functionality. In a nutshell, the first pass compiles the source code into a language-independent intermediate format. IIRC gcc covers C, C++, Fortran (g77), Ada (gnat), Java (gcj) and more. Bob could add support for BobTalk with relatively modest effort.
The next pass performs generic optimizations on the intermediate code - extracting loop invariants, unrolling loops, eliminating dead code, etc.
The final pass translates the intermediate code into processor-specific object code.
(Reality is much more complex, with the C preprocessor, register coloring and peephole optimization of the object code, etc.)
The key thing isn't sequential thinking, it's the way that adding support for a new language is handled entirely in the first pass. Adding support for a new processor is handled entirely in the final pass. Adding new optimizations for your master's thesis is handled entirely in the second pass.
Back in the dark ages you did have "single-pass" compilers... and they were an absolute bitch to maintain since each language and processor stood alone. (I used quotes since these compilers normally produced assembly code (.s) that was then compiled/assembled into the object code files. So even "single pass" compilers normally used multiple passes, but with a processor-specific intermediate language.)
It was a major innovation when AT&T released a C++ compiler that worked by compiling the C++ code into C code instead of assembler. It allowed the new language to be supported in a fraction of the time required before.
So tell me again why we would want to return to the days when we went directly from a high level language to object code....