Remembering How Plan 9 Evolved at Bell Labs (theregister.com)
jd (Slashdot reader #1,658) writes: The Register has been running a series of articles about the evolution of Unix, from humble beginnings to the transition to Plan9. There is a short discussion of why Plan9 and its successors never really took off (despite being vastly superior to microkernels), along with the ongoing development of 9Front.
From the article: Plan 9 was in some way a second implementation of the core concepts of Unix and C, but reconsidered for a world of networked graphical workstations. It took many of the trendy ideas of late-1980s computing, both of academic theories and of the computer industry of the time, and it reinterpreted them through the jaded eyes of two great gurus, Kenneth Thompson and Dennis Ritchie (and their students) — arguably, design geniuses who saw their previous good ideas misunderstood and misinterpreted.
In Plan 9, networking is front and center. There are good reasons why this wasn't the case with Unix — it was being designed and built at the same time as local area networking was being invented. UNIX Fourth Edition, the first version written in C, was released in 1973 — the same year as the first version of Ethernet.
Plan 9 puts networking right into the heart of the design. While Unix was later used as the most common OS for standalone workstations, Plan 9 was designed for clusters of computers, some being graphical desktops and some shared servers...
Because everything really is a file, displaying a window on another machine can be as simple as making a directory and populating it with some files. You can start programs on other computers, but display the results on yours — all without any need for X11 or any visible networking at all.
This means all the Unixy stuff about telnet and rsh and ssh and X forwarding and so on just goes away. It makes X11 look very overcomplicated, and it makes Wayland look like it was invented by Microsoft.
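To make the excerpt's point concrete: in Plan 9 C, attaching another machine's namespace is roughly one dial() and one mount(). This is a minimal sketch, not from the article; the server address and mount point are illustrative, and authentication is skipped.

```c
#include <u.h>
#include <libc.h>

void
main(void)
{
	int fd;

	/* Dial a 9P file service on another machine ("9fs" is the
	   standard service name for port 564; "server" is a placeholder). */
	fd = dial(netmkaddr("server", "tcp", "9fs"), 0, 0, 0);
	if(fd < 0)
		sysfatal("dial: %r");

	/* Graft the remote namespace onto ours at /n/remote (an existing
	   directory).  No auth here for simplicity (afd = -1). */
	if(mount(fd, -1, "/n/remote", MREPL, "") < 0)
		sysfatal("mount: %r");

	/* From now on the remote machine's files and devices are plain
	   local paths, e.g. open("/n/remote/dev/cons", OWRITE). */
	exits(nil);
}
```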
A bit silly (Score:5, Interesting)
This makes various statements about a *research* demonstrator to show its superiority to *production* platforms.
For example, they point out how much simpler it is because it has so many fewer function calls. Well, of *course*, because they don't have user requirements driving them to do any better. For example, the article even points out that the simplified, one-size-fits-all calls that handle both remote and local resources as equals admittedly bog down actually-local resources. You could bet that if it were considered seriously for production, someone would demand a "shortcut" set of system calls for enhanced behavior.
Similarly, he asserts that Plan 9 has "Kubernetes" built in. It absolutely does not. It may have the same namespace, control group, and chroot facilities (I'm guessing...), but the Linux kernel has those too.
Plan 9 may have some interesting concepts, but the author puts it a bit too much on a pedestal.
Re: (Score:3)
I.e., everything in Plan 9 is a file; in Unix, only some things were files. Thus Plan 9 has a simpler API. More complexity in an API doesn't necessarily make things better; you can always create more functions by just layering a library on top. If you're on a small system, the added simplicity saves space and improves performance. Don't think like Windows, where one assumes everything gets bigger and faster so that efficiency is unimportant.
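A minimal sketch of that file-as-API point, in Plan 9 C (the pid in the path is illustrative): querying a process needs no dedicated syscall, just open and read on /proc.

```c
#include <u.h>
#include <libc.h>

void
main(void)
{
	char buf[512];
	int fd, n;

	/* In Plan 9, a process's state is an ordinary file. */
	fd = open("/proc/1/status", OREAD);
	if(fd < 0)
		sysfatal("open: %r");
	n = read(fd, buf, sizeof buf - 1);
	if(n > 0){
		buf[n] = '\0';
		print("%s", buf);
	}
	close(fd);
	exits(nil);
}
```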
Plan 9 was also a full system. It had enough API to be completely a
Re:A bit silly (Score:4, Interesting)
Problem being that the author doesn't give credit even when Linux models certain things as files. When Linux models the process as files, it's "bolted-on extras"; when Plan 9 does it, it's built in. He has a bad habit of repeating this with containerization, declaring it "bolt-on" when Linux does it, when namespaces and cgroups and relative roots aren't really "bolt-on"; they were added and made integral to the architecture as it evolved.
Whatever opportunity Plan9 had, they squandered it by being closed source, and by the time they came around in 2000, the ship had thoroughly sailed. However, even a modern demo of 9Front in 2024 lags in capability even by 1995 standards. It is not even remotely suitable for modern uses at all.
Its various philosophies of "purity" remain intact only because no one has bothered to bring alternatives to the table. It hasn't had to endure real-world criticism, by virtue of no one even trying to use it. A purist says "thanks to 9P, we don't need NFS or SMB!", otherwise written as "we don't support NFS or SMB".
Efficiency is of course important, and in the real world that has meant creating purpose-built functions rather than declaring one size should fit all. So Plan9 can brag about the simplicity of fewer system calls, but the abstraction, at least according to the article author, comes at the cost of being poorly optimized for some common use cases.
The problem with "built for the userbase" was that the userbase was always so much tinier, and *never* expanded beyond the people who worked on it for its own sake. It never really made it to 1995 levels of functionality, despite all these years. If someone made an earnest effort to make it production-capable, even for the most nerdy engineering application, it would require a ton of work and almost certainly get eviscerated on security.
Re: (Score:3)
Plan 9 is a pleasure to program in. You need to make a service available transparently over the network? Talk the 9P protocol. Need to access a service over the network? Talk the 9P protocol. Need to talk locally to one of those services? Talk the 9P protocol. How do you talk the 9P protocol? Either use the 9p system calls directly, or use file operations that result in 9P.
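As a rough illustration (mine, not the parent's) of how little code a 9P service takes with Plan 9's lib9p, here's a sketch serving one synthetic read-only file; the service name and mount point are made up, and error handling is elided:

```c
#include <u.h>
#include <libc.h>
#include <fcall.h>
#include <thread.h>
#include <9p.h>

/* Answer reads on our single synthetic file. */
static void
fsread(Req *r)
{
	readstr(r, "hello from 9P\n");
	respond(r, nil);
}

static Srv fs;

void
main(void)
{
	fs.read = fsread;
	fs.tree = alloctree(nil, nil, DMDIR|0555, nil);
	createfile(fs.tree->root, "hello", nil, 0444, nil);

	/* Post the service and mount it.  Local programs, and remote ones
	   that import this namespace, now just open() the file -- the same
	   9P transactions either way. */
	postmountsrv(&fs, "hellofs", "/mnt/hello", MREPL);
	exits(nil);
}
```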
The variant of C is a well considered update to the standard, unlike C++.
Namespaces are constructed on the fly. Your bin is in one place
Re: (Score:2)
Need to access a service over the network? Talk the 9P protocol. Need to talk locally to one of those services? Talk the 9P protocol. How do you talk the 9P protocol? Either use the 9p system calls directly, or use file operations that result in 9P.
I need to hijack 40,000 computers, or just borrow huge quantities of CPU time, for use by myself and my nefarious porpoises. It sounds like Plan9 will make that easy. Am I misreading something?
Re: (Score:2)
Need to access a service over the network? Talk the 9P protocol. Need to talk locally to one of those services? Talk the 9P protocol. How do you talk the 9P protocol? Either use the 9p system calls directly, or use file operations that result in 9P.
I need to hijack 40,000 computers, or just borrow huge quantities of CPU time, for use by myself and my nefarious porpoises. It sounds like Plan9 will make that easy. Am I misreading something?
If you can find 40,000 computers running open plan9 services, go right ahead.
Re: (Score:2)
The thing is, Plan 9 took it to ridiculous extremes. In the 1980s you could still, to some extent, get away with forcing everything to act like a file, but the further you got away from that with things that really, really weren't files, the more contortions you had to go through to pretend they were files.
However this was never really noticed with Plan 9 at the time because it was a research OS and was mostly only used by researchers who could jump through whatever hoops were demanded. It's easy enough to
Re:A bit silly (Score:5, Informative)
The inflation of system calls in Linux was not driven by any "user requirements", but by honest (and dishonest) mistakes and the need to keep them around forever for backward compatibility. This is how such beauties like olduname() and oldolduname() (sic) came into existence, not because any user asked for them.
And Linux doesn't really even need such "defenders" as you -- its real number of system calls is much smaller if you get rid of all the compat cruft (which you don't need if you're porting Linux to a new hardware platform, or if you don't care about binary compat): for instance, you need to implement only clone3() -- no need for any of clone2(), clone(), fork() or vfork(), which are just legacy interfaces that can easily be implemented in glibc on top of clone3().
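To make that concrete, here's a rough sketch (mine, not glibc's actual code) of a fork() lookalike built purely on the raw clone3() syscall, available since Linux 5.3:

```c
#define _GNU_SOURCE
#include <linux/sched.h>   /* struct clone_args */
#include <sys/syscall.h>   /* SYS_clone3 */
#include <sys/wait.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* fork() expressed in terms of clone3() alone. */
static pid_t
fork_via_clone3(void)
{
	struct clone_args args;

	memset(&args, 0, sizeof args);
	args.exit_signal = SIGCHLD;   /* what fork() sets implicitly */
	/* No flags: the child is a copy-on-write duplicate of the caller.
	   Returns 0 in the child and the child's pid in the parent. */
	return (pid_t)syscall(SYS_clone3, &args, sizeof args);
}

int
main(void)
{
	pid_t pid = fork_via_clone3();

	if (pid == 0) {
		printf("hello from the child\n");
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	printf("parent reaped child %d\n", (int)pid);
	return 0;
}
```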
You're guessing wrong. Plan9 doesn't have anything like control group or chroot (because it does not need them), and has only the equivalent of "mount" namespaces (no ipc, network, uts, time, etc namespaces).
You don't seem to be familiar with them.
Re: (Score:3)
This is how such beauties like olduname() and oldolduname() (sic) came into existence, not because any user asked for them.
That clearly means that newer, replacement system calls were needed once the originals met reality. You only have obsolete system calls because at some point the userbase needed a replacement. I think it's silly to presume that the lack of evolving system calls in Plan9 is because they must have gotten it right the first time, instead of the more likely answer: their implementation never actually had to stand up to the real world.
(because it does not need them)
If it doesn't need them, it's only by virtue of its nature as a res
Re: (Score:3)
You really need to understand a LOT more about how Plan 9 works before commenting further. For one, a lot of things that would be system calls in other OSes are just transactions handled over 9p, generally by userspace daemons. That reduces the count of actual system calls in the kernel considerably.
Re:A bit silly (Score:5, Insightful)
The article does not claim Plan9 has Kubernetes built in; it claims that the namespaces of Plan 9 and the 9P protocol already provided the functionality, so you don't need it. It was released for production as Inferno.
It's basically a what-if story: what if Unix, which was built around free source distribution (but only for research purposes), hadn't been? Linus started creating Linux because Prentice Hall kept Minix proprietary for their OS-course textbook, so he couldn't distribute code he had written on a Minix 1.5 386 system. Linus only did this because he didn't know what was going on with FreeBSD. GNU, which started in 1983, was going nowhere because it got trapped in the dead end of HURD. Bell Labs was continuing research Unix with Plan 9, which, instead of using the networking tools of the day and X Windows, had networking and 8½ (later Rio) built into its foundation. But they kept it proprietary, then attempted to develop it commercially as Inferno, and when that didn't work, Lucent sold it off. As a result, Linux reached critical mass with drivers and hardware support and picked up all the GNU free software. Microsoft stopped paying licence fees for Xenix, its multitasking corporate Unix platform, and created NT with ideas stolen from DEC. By the mid-90s, tech companies had abandoned paying for Unix and were developing Linux as a platform for their services. And this is how we ended up where we are.
But if Prentice Hall hadn't kept Minix proprietary to sell textbooks, we would have Open Source software running on a 12,000-line Minix microkernel OS. If FreeBSD had had more than a few academics at Berkeley working on it, or if they had gotten in contact with Linus when he was looking to build a POSIX-compliant kernel from the POSIX specs, Sun's Unix specs, and Minix, we could have started with a fully functional FreeBSD instead of early Linux.
And if Plan 9 had been released as Open Source and run on hardware other than obscure things like the Blit, we might have had a much smaller OS, without X Windows, suitable for software development. Instead it's a hobby system that's missed out on 30 years' worth of developments like high-level languages, games, and browsers with internet, video and audio.
Re: (Score:3)
A fair analysis, though I would challenge the speculation a bit. If Minix had become the Unix, it would probably also have deviated from its line count. Again, the 'theory' of microkernels might get the rules bent a bit as the world actually demands certain things of it.
We might have in theory started from Minix, 386BSD, or Plan 9, but I don't know if 30 years of drift would have landed us roughly in the same territory, with people posting articles about the dead Linux project saying "Oh if only Torvalds'
Stopped reading here (Score:5, Funny)
Plan 9 is not just a replacement kernel, or the layers above that. It's also a clustering system, and a network filesystem, and a container management system. In terms of functional blocks, we are talking about replacing Linux, and Ceph or Gluster or whatever, and all hypervisors, and all container systems, and Kubernetes and all that.
As it was conceived as one tool for all this, it is much, much simpler than the gigabytes of complexity layered on top of Linux that makes Linux able to do this stuff.
This sort of breaks the philosophy of each piece doing one thing and doing it well. If we wanted one, big program that did everything, we'd have written it. And called it systemd.
Re: (Score:2)
Haarrr!
Re: (Score:3)
Will you ditch the monolithic Linux kernel because it does networking, file systems, I/O scheduling, drives hardware, has crypto functions, plus so much more unrelated stuff?
The "do one thing and do it well" refers to user-space tools, not to what the kernel offers.
Bell Labs developed Plan 9? (Score:4, Funny)
Weren't they a bit late to the game? Plan 9 [came] from Outer Space [wikipedia.org] in the late 50s.
And this last sentence makes the article look like (Score:3)
it was written by a 12-year-old.
This means all the Unixy stuff about telnet and rsh and ssh and X forwarding and so on just goes away. It makes X11 look very overcomplicated, and it makes Wayland look like it was invented by Microsoft.
Re: (Score:2)
X11 is amazingly overcomplicated. I haven't dug into Wayland, but I suspect it really is overcomplicated too.
Re:And this last sentence makes the article look l (Score:4, Informative)
I think X gets a bad rap; it's not that bad.
Wayland is itself simpler, but it largely achieves this by simply not doing a ton of important things, which pushes the complexity outwards. The resulting ecosystem of things built on top of Wayland is not simpler, and still, after 15 years, is missing important functionality for a working desktop.
Re: (Score:3)
>"X11 is amazingly overcomplicated."
It is mostly overcomplicated because it does so much. And so much of what it does isn't typically needed anymore (or is deprecated). It has evolved over time. A very long time. If you focus on only the things actually/mostly used now, it isn't that complicated. Comparing it to something like Wayland isn't quite "fair." For one, Wayland *still* doesn't do everything some people want that can be done with X11.
>"I having dug into Wayland, but I suspect it reall
Re: (Score:1)
Someone over at x.org did a non-trivial amount of thinking on what X12 would look like.
https://www.x.org/wiki/Develop... [x.org]
It's careful to say that there isn't an X12 project, at least as of September 2017.
In computing, the mediocre platform always wins. (Score:1)
I do not know why. I only know it is true.
Re: (Score:2)
One question is whether Plan9 really was the 'mediocre' one after all. Lots of designs seem/feel better in their early, academic state. Any "rough edges" can be pushed to the side, and folks assume "well, if we think about it, we can figure it out, but maybe later".
So Unix really got started in the mid-70s with lots of people kicking the tires and evolving it in the field. By the time most people could even theoretically look at Plan9 around 1995, you had multiple BSD distributions and Linux distributions and tho
no (Score:2)
Why it failed (Score:2)
Maybe they should have had a Plan 10?
Comparing rio to X11/Wayland is bad (Score:3)
It makes X11 look very overcomplicated, and it makes Wayland look like it was invented by Microsoft.
Because rio just treats video as some space of RAM to dump to. It's very easy in 9P to just say that the RAM we dump into is elsewhere. However, this gives shitty performance on anything modern. Try dumping every single frame of 4K video at 60fps, frame by frame, into RAM, just wherever. You'll quickly understand what all that other stuff in X11 and Wayland does.
Nothing stops Plan9 from having some sort of access to a command pipe at /dev/opengl or whatever, with commands just sent there. Or a /dev/videoaccel hardware device, or /dev/buffer{0,1}, or however someone would want to expose the deeper layers of a video card on the FS beyond just some random page of its VRAM. But the thing about all of this: it's just the horror of ioctl presented as a file; you still have to figure out the specific format you need to shove into the "file". This is the same reason you cannot just cat music.mp3 > /dev/soundcard on Linux. The file is a stand-in for taking data and passing it, as it understands the data, to the underlying device. And each device on the FS only understands the formats the kernel knows how to handle, which isn't all that much, because the kernel needs to focus on device things, not format things. This is the entire point of ioctl and why working with it is a pain in the arse.
That's kind of the whole point of the eject command on Unix systems: there's no semantic way via the FS to indicate that you want /dev/cdrom to actually eject. Now you can go one step past that by having sub-devices like echo 1 > /dev/cdrom/eject and whatnot, but when you look at all the various specific functions that, say, a modern video card performs, you're going to have a shit-ton of device nodes (or sub-nodes, however you want to do it).
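A sketch of the contrast (the Linux half uses the real CDROMEJECT ioctl; the Plan 9-style half assumes a hypothetical /dev/cdrom/ctl file that accepts an "eject" string -- that path and command are made up for illustration):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cdrom.h>

int
main(void)
{
	int fd, ctl;

	/* Unix style: an out-of-band command multiplexed through ioctl(). */
	fd = open("/dev/cdrom", O_RDONLY | O_NONBLOCK);
	if (fd >= 0) {
		ioctl(fd, CDROMEJECT);
		close(fd);
	}

	/* Plan 9 style: the same command is just bytes written to a ctl
	   file (hypothetical path, for illustration only). */
	ctl = open("/dev/cdrom/ctl", O_WRONLY);
	if (ctl >= 0) {
		write(ctl, "eject", strlen("eject"));
		close(ctl);
	}
	return 0;
}
```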
And at some point you've just reinvented all the "garbage" that people bitch about in X11 and Wayland, but with an FS that's completely un-navigable; and then you'll just have your display server, rio, playing middleman for where to go in the FS for specific functionality. So basically, the "formats" and "intermediaries" that plague X11 will be the exact same thing, except in rio it'll be which device nodes to hit and in what specific order to accomplish the task at hand.
It's real simple to do all kinds of forwarding and dumping windows onto someone else's screen when you just treat their video card as a frame buffer. I remember a project from a while back called DirectFB, and it just does that: dumps into the Linux kernel's framebuffer device. Super simple, stupid fast for basic drawing, and you can literally just take that, add some TCP on top, and poof, you too can start creating your own display-forwarding server and call it whatever you want. Network transparent and all. But video cards tend to be a bit more complex than just an array of memory locations; that's kind of the reason things like X11 and Wayland are complex beasts. There's a bit more to it than just dumping bytes into a video array.
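For a sense of how simple the "array of memory locations" model is, here's a minimal sketch against Linux's legacy /dev/fb0 interface: map the framebuffer and dump bytes into it (assumes a linear framebuffer and a bare virtual console, i.e. no X/Wayland on top):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>

int
main(void)
{
	struct fb_var_screeninfo var;
	struct fb_fix_screeninfo fix;
	size_t len;
	uint8_t *fb;
	int fd;

	fd = open("/dev/fb0", O_RDWR);
	if (fd < 0) { perror("open /dev/fb0"); return 1; }

	/* Ask the kernel for the framebuffer geometry. */
	if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
	    ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
		perror("ioctl");
		return 1;
	}

	/* Map the visible screen and fill it with mid-gray --
	   drawing really is just writing to an array here. */
	len = (size_t)fix.line_length * var.yres;
	fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (fb == MAP_FAILED) { perror("mmap"); return 1; }

	memset(fb, 0x80, len);

	munmap(fb, len);
	close(fd);
	return 0;
}
```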
Plan9 versus Cloud (Score:1)
Of course there's no end of minor technical differences, but that's not the point.
If you really want "The Network IS the Computer", you don't want Solaris or Plan9, you want Cloud.
And you can get it now, with capability proven over many years.
Re: (Score:2)
Well, not quite, because the cloud is just somebody else's computer, although it'll be a virtual computer running inside a single physical computer. Plan9 would have enabled the cloud to be an arbitrary-sized virtual computer running over an arbitrary number of computers.
This would have been much cheaper, and since the cloud's virtual machines don't need a gui, Plan9's gui limitations don't matter.
It would also have meant that instead of using a whole bunch of apps to securely log into the cloud, you could
Compare Plan 9 with IBM iSeries (Score:2)
IBM's iSeries, originally System/38 (announced in 1978, so 10 years ahead of Plan 9), took a related approach - everything is a database record - but applied it at the hardware level. It had its own operating system, which I think was more like an SQL engine, and its own "programming language" - RPG (Report Program Generator, very punched-card oriented). AFAIR it was marketed exclusively by business partners who sold the hardware together with business packages as a complete solution, which meant that al
Plan 9 (Score:1)
It didn't matter whether it was actually better; it had to be different.
The everything-is-a-file and pass-everything-as-a-string ideas were pushed to the limit, and used even in cases where they really did not fit.
Performance was terrible when compared to Linux in apples-to-apples comparisons.
In the end, Plan 9 deserves to be where it is now.