Operating Systems Open Source Unix

48-Year-Old Multics Operating System Resurrected (multicians.org) 94

"The seminal operating system Multics has been reborn," writes Slashdot reader doon386: The last native Multics system was shut down in 2000. After more than a dozen years in hibernation, a simulator for the Honeywell DPS-8/M CPU was finally realized and, consequently, Multics found new life... Along with the simulator an accompanying new release of Multics -- MR12.6 -- has been created and made available. MR12.6 contains many bug and Y2K fixes and allows Multics to run in a post-Y2K, internet-enabled world.
Besides supporting dates in the 21st century, it offers mail and send_message functionality, and can even simulate tape and disk I/O. (And yes, someone has already installed Multics on a Raspberry Pi.) Version 1.0 of the simulator was released Saturday, and Multicians.org is offering a complete QuickStart installation package with software, compilers, install scripts, and several initial projects (including SysDaemon, SysAdmin, and Daemon). There are also useful wiki documents about how to get started, noting that Multics emulation runs on Linux, macOS, Windows, and Raspbian systems.

The original submission points out that "This revival of Multics allows hobbyists, researchers and students the chance to experience first hand the system that inspired UNIX."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Is this purely an educational thing at this point, or are there any other uses?

    • by Gay Boner Sex ( 5003585 ) on Sunday July 09, 2017 @12:00AM (#54772165)
      History. Read. Learn from the past. General concepts and themes do not change.

      Multics didn't have many "problems," or at least not many more than other systems of the time. (The IBM TSS/360, in 1967, turned out to be too slow to support more than one user concurrently, and of course OS/360 was plagued with bugs and performance problems.) There is a common myth that Multics "failed," but in fact the system was first described in 1965, released in the early 1970s, and lasted until 2000. (Salus himself said, "With Multics they tried to have a much more versatile and flexible operating system, and it failed miserably.") However, the lifespan, in particular the thirteen years after development ceased in which installations continued to use it, doesn't suggest failure. It's certainly true that AT&T management decided that the project wasn't relevant to them, and that's sufficient for Unix history.

      Bam!
      • Where did you read that the 360 was so slow it could only handle one user?

        • by Gay Boner Sex ( 5003585 ) on Sunday July 09, 2017 @12:37AM (#54772245)
          Where did I read?

          Market research at Digital by walking up to someone ELSE'S mainframe and waiting for it to do another "load in the washing machine" before it would even think about giving you the time of day and sniff your card stack.
        • by udin ( 30514 ) on Sunday July 09, 2017 @12:40AM (#54772257)
          The 360 wasn't particularly slow--the time-sharing operating system TSS/360 was initially a mess (so was OS/360--at first--see Brooks,"The Mythical Man Month") and I don't recall that they improved the TSS to where it was usable. There was a project in (as I recall) the IBM lab in Cambridge MA that did an interesting and credible virtual machine OS for the 360 (I vaguely recall it was for a middling level 360) that was developed because the "official" time-sharing system TSS/360 was such a mess.
          • CP-67/CMS, at the IBM Cambridge Scientific Center at 545 Tech Square. Also in that building was the MIT Multics project and the early days of the MIT AI Lab. CP-67 ran on the /360 mod 67 and later on quite a few of the /370 mainframes. It was written mostly in System/360 Assembler, and later in IBM's double-secret language PL/S.

            I taught a graduate course at Brown on the internals of TSS/360, CP-67, and Multics in the early 1970s. One year the class project was to replace the paging and tasking support

        • by PolygamousRanchKid ( 1290638 ) on Sunday July 09, 2017 @02:56AM (#54772473)

          Where did you read that the 360 was so slow it could only handle one user?

          This rumor originated in Dr. Gene Amdahl's lesser-known history of the IBM mainframe titled "The Apocryphal Man Mouth," which examined the contradictory cognitive dissonance of software project managers who think that they are running a development process, when, in fact, they are simply running their own mouths. The book is filled with the taller tales of the seminal computer industry, like the instance of Professor Forman Acton referring to the inventors of that new-fangled language, collectively, as "The FORTRAN Boys."

          Apparently, a disgruntled IBM customer complained about the one-user design limitation of OS/360, and asked the IBM sales rep when an upgrade to more than one user would be available. The IBM sales rep pulled out a little plastic case containing resistors, uttered some bizarre incantation like, "Bad Booze Rots Our Young Girls But Vodka Goes Well", and enumerated the prices of the resistors, and how many users each one would support. One cold solder joint later, and the IBM customer was a happy camper.

          There was also something in there about Oliver North nearly starting World War Three, because he was forced to use IBM's OrifaceVision/2, which was like their PROFS Professional Office System for mainframes, but it was much more secure, because it was based on OS/2, which meant it never ran or was used at all, and you can't get any more secure than something that just doesn't work . . .

          . . . oh, and speaking about IBM SAA AD/Cycle, don't mention that, unless you say "Mary Hartman! Mary Hartman" three times to a mirror, and conclude it with that Islamic curling Eight-ender cry, "Allah Hu Almaraq!", ("God is Gravy!"),

          . . . and . . .

      • by udin ( 30514 )
        Multics suffered due to its scale--ultimately time-sharing on minicomputers became much more cost-effective compared to Multics. The cost of the hardware it ran on limited its market. Its niche disappeared. OSes like Unix could run on small computers in small companies and big computers (or lots of small computers) in big companies. It also suffered slow development due to tackling new hardware, a new (and complex, bloated) programming language, PL/I, and several new architectural concepts in one project.
        • As I recall, most Multics (Multicii? Multicesses?) ran at University shops all over the western world. They had a big requirement for multi-user access in a way that most businesses didn't - at the time.

          What killed Multics was the Personal Computer - why be forced to use a terminal to access a mainframe somewhere else in the world, and over 300 baud if you were lucky, when you could have your own processing power right under your desk?

          The original minicomputers, like PDP, VAX and Wang, were all small timesh

          • I used it myself. Your analysis is correct. It was also prone to oversubscription. Students and computer scientists were programming it with the beginnings of "object oriented programming" with languages like LISP, and taught to use self-reference and recursion as part of their philosophically preferred approach rather than as resource expensive tools to use only when needed. The result was _profoundly_ expensive in system resources: calling a function is a much more expensive operation at the kernel level

            • calling a function is a much more expensive operation at the kernel level than running a loop

              Clearly they needed The Sussman their God.

            • You don't need the kernel to call a function.
              And inside of the kernel, function calls have the same cost as outside.

              • You cannot allocate the space to save the state from which a function is called, nor allocate new space to copy in the specified function with space for its local variables, without access to the kernel. Nor can you read the end state of the called function and return its generated information or status to the working environment, or connect its results to other programs, without kernel-level functions. It's true that many libraries effectively abstract away this operation at the library level, including libc or

                • You cannot allocate the space to save the state from which a function is called, nor allocate new space to copy in specified function with space for its local variables, without access to the kernel
                  That is utterly wrong.
                  The magic is called a stack. And every processor has built-in instructions for function calls.
                  No kernel needed at all; kernels have nothing to do with function calls.
                  Instead of having a fancy name, I suggest reading a book, or simply disassembling a fibonacci function or something similar tri

          • and over 300 baud if you were lucky,

            "110 baud should be enough for everybody." - Bill Gates, while in Jr. High School and sitting in front of an ASR-33.

      • by Anne Thwacks ( 531696 ) on Sunday July 09, 2017 @07:39AM (#54772895)
        Multics didn't have many "problems,"

        I am not so sure about that. In 1973, I repeatedly brought down the system by running a Fortran program in which I declared an array named ARRAY. I can't remember whether this was illegal or not; ARRAY may have been a reserved word, but in the context of Fortran IV, that could have depended on where it was used.

        I would not have complained if I got a printout with an error message - probably "SYNTAX ERROR AT OR NEAR LINE 1 COLUMN 1". Instead the entire OS would crash! This happened several times a day for several days before anyone realised it was me. It was then possible to figure out what I had done wrong only by deliberately crashing a few more times! I am sure that, over a 6 month period, I had few days without a system crash. I may not have been the cause of most of them.

        In mitigation, none of my BASIC programs crashed bringing down the whole system. (But they were only concerned with gathering data from users. The Fortran stuff was solving Maxwell's wave equations).

        Yes, I did ask for a PDP8 instead. I don't know how the costs would have compared. What I do know is that my employers made a colossal amount of money from that software, while I was paid £11 per week for 6 months. After it was written, an apprentice could do in 30 minutes what had previously taken a degree-level physicist 3 months - and not only get the right answer, but prove that he had, before gold-plated parts were manufactured to the resulting spec. Then, if there were manufacturing errors, predict whether the resulting product would still be in spec over a wide range of parameters, requiring only a single 30-minute lab test to confirm my predictions, rather than 6 months of field tests at the top of a 30-metre mast IN A FIELD WITH COWS or on the top of a warship at sea - and other scenarios where failure was rather expensive.

        OK, so computers cost $1M in those days - the payback could be many times that - per month. (But even then, engineers were treated like shit).

      • by davecb ( 6526 )
        Read up on how they did single-level store, which caused memory and the file system to behave a lot like one another. Then ask yourself about running Linux programs out of a persistent memory filesystem.
    • Unix was supposed to be a simplified version of Multics, so it would be interesting to see what the original Multics was capable of, and in what ways upgrades of Unix would have occurred had Multics developed on parallel tracks.

    • Education has been known to be useful. I don't understand why there is an "or" in your statement.
  • by TheGratefulNet ( 143330 ) on Saturday July 08, 2017 @11:44PM (#54772127)

    maybe it's worth looking into..

  • by sk999 ( 846068 ) on Saturday July 08, 2017 @11:45PM (#54772133)

    I had a Multics account way back - used it to solve problem sets in Physical Chemistry. It would be cool to resurrect my account, but I don't remember the password. Is there a password reset function?

  • Still (Score:4, Informative)

    by ArchieBunker ( 132337 ) on Sunday July 09, 2017 @12:28AM (#54772221)

    a more capable operating system than HURD.

  • by www.sorehands.com ( 142825 ) on Sunday July 09, 2017 @12:47AM (#54772267) Homepage

    I was a project administrator on Multics for my students at MIT. It was a little too powerful for students, but I was able to lock it down. Once I had access to the source code for the basic subsystem (in PL/1) I was able to make it much easier to use. But it was still command line based.

    A command line, emails, and troff. Who needed anything else?

    • by Lorens ( 597774 )

      Well, without the web, I would definitely want inn and trn.

  • It's not the end! (Score:4, Interesting)

    by Gravis Zero ( 934156 ) on Sunday July 09, 2017 @01:10AM (#54772329)

    Considering that processor was likely made with the three micrometer lithographic process, it's quite possible to make the processor in a homemade lab using maskless lithography. Hell, you could even make it NMOS if you wanted. So yeah, emulation isn't the end, it's just another waypoint in bringing old technology back to life.

    • Wow, I never thought about that. Do people do that??
      • It's a work in progress. The chemistry is doable (it's been shown and proven) but people (including me) are actually working to design and build cheap "high resolution" lithographic systems. One-micrometer lithography is easily within reach. Higher resolution is possible, but getting a laser with wavelengths shorter than 400nm on the cheap may be a challenge.

        • Wow, that's cool! I would think that getting any laser with sub millimeter precision would be tough or expensive.
    • Considering that processor was likely made with the three micrometer lithographic process,

      Not even close. It dates from before 1970. It was built with discrete components. I doubt it used printed circuit boards, but if it did, the process would have been closer to 3mm minimum feature size. The adder (as in hardware used to implement "add" assembler instruction) would have been a 15" square board, or more likely, a crate of 10 smaller wire-wrapped boards. The CPU would have been more than 4 off 42U racks.

    • by LWATCDR ( 28044 )

      Why bother? Just use an FPGA to clone it.
      The real project at this point IMHO is getting gcc and glibc running on it. I doubt many users will want to write software in PL/1 for it.

    • by Anonymous Coward

      My first thought was whether it'd be interesting to implement the machine with an FPGA or something, since emulating 36 bit registers on 32 bit has got to hurt performance (and 64 bit everything is just brute force and ignorance). But seeing how far you can push homemade lithography might actually be quite interesting, if maybe several steps up on the difficulty ladder.

      What sort of budget does one need for this?

  • Multics (Score:5, Interesting)

    by Tom ( 822 ) on Sunday July 09, 2017 @03:16AM (#54772487) Homepage Journal

    The original submission points out that "This revival of Multics allows hobbyists, researchers and students the chance to experience first hand the system that inspired UNIX."

    More importantly: To take some of the things that Multics did better and port them to Unix-like systems. Much of the secure system design, for example, was dumped from early Unix systems and was then later glued back on in pieces.

    • Re:Multics (Score:5, Informative)

      by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Sunday July 09, 2017 @04:56AM (#54772609)

      More importantly: To take some of the things that Multics did better and port them to Unix-like systems. Much of the secure system design, for example, was dumped from early Unix systems and was then later glued back on in pieces.

      Basically Multics gave way to the rise of minicomputers, which could not handle such a heavy OS, so the developers created Unix ("Multics without balls" - a play on Eunuchs). One thing they did was: if there was ever a problem, you called panic(). Aka, the kernel panic on Unix. Much simpler and much lighter than trying to recover (though modern Linux is pretty hard to panic() without failing hardware - it has enough built-in self-checks that in general it'll handle misbehaving kernel drivers).

      There's also historical interest - don't you want to see what the predecessor to Unix and C was? Unix was popular because it was a lightweight OS at the time that was multiuser and multiprocessing, a change from CP/M and DOS. It's why people run emulators of the Apollo Guidance Computer with the original software. It's neat, it's interesting.

    • Comment removed based on user account deletion
      • As I understand it, there's very, very, very, little resemblance between the two beyond what you'd expect between two operating systems that shared some designers.

        Such as how the NT kernel resembles VMS in several key ways [windowsitpro.com].

      • by davecb ( 6526 )
        The Honeywell salesforce of the day weren't quite sure how to sell a big timesharing system, and referred to Multics as "a machine big enough for everyone in Boston". They were very much into selling "one computer per company" instead, and flogged GCOS to all sorts of unsuspecting companies, including the University of Waterloo. They wouldn't have had a clue about how to sell one computer per person
      • Time to undo some mod points, because this comment is too good to pass up! I've been a student of Multics lore for a few years—it was way before my time—and the answer is that the obsession with this was beyond amazing. The MIT site would regularly split their system into two while doing debugging, removing IO controllers and CPUs from the main system (without shutting it down) until they had enough hardware set aside to bring up another instance of the OS, still sharing disk drives. They also a

  • Using Multics is rather like masturbation; it's fun for a while, but ultimately it doesn't produce anything.
    --
    E.A. Blair

  • Will it run my smart phone without crashing? I need something to run my phone that doesn't crash. Is there anything anywhere that won't crash, or is crashing a design feature intentionally put into software?
    • If you don't like crashes, Multics is not for you. (See my previous post).
      • If you don't like crashes, Multics is not for you. (See my previous post).

        We really don't have much trouble there; I've had a Multics up for three months, another team member, nearly a year. Then again, we haven't had hundreds of stoned college kids banging away at it.

        • by davecb ( 6526 )

          It was quite easy to crash a process: Multics itself was way harder to crash. It was substantially more resilient than GCOS, which ran on almost-identical hardware. Hi-Multics.ARPA was usually up for months, between occasional maintenance reboots.

          --dave (DRBrown.TSDC@Hi-Multics.ARPA) c-b

    • Will it run my smart phone without crashing? I need something to run my phone that doesn't crash. Is there anything anywhere that won't crash, or is crashing a design feature intentionally put into software?

      Software crashes because the people who pay for software development are more interested in having it NOW than they are in having it RIGHT. Screw Agile.

  • I wonder if it could run https://en.wikipedia.org/wiki/... [wikipedia.org]?
    I was a sysop for DTSS on a DPS-8 for a while. DTSS had pipes, which inspired the ones in Unix.

  • Influence on Unix (Score:4, Informative)

    by nuckfuts ( 690967 ) on Sunday July 09, 2017 @01:00PM (#54774035)

    From here [wikipedia.org]...

    The design and features of Multics greatly influenced the Unix operating system, which was originally written by two Multics programmers, Ken Thompson and Dennis Ritchie. Superficial influence of Multics on Unix is evident in many areas, including the naming of some commands. But the internal design philosophy was quite different, focusing on keeping the system small and simple, and so correcting some deficiencies of Multics because of its high resource demands on the limited computer hardware of the time.

    The name Unix (originally Unics) is itself a pun on Multics. The U in Unix is rumored to stand for uniplexed as opposed to the multiplexed of Multics, further underscoring the designers' rejections of Multics' complexity in favor of a more straightforward and workable approach for smaller computers. (Garfinkel and Abelson[18] cite an alternative origin: Peter Neumann at Bell Labs, watching a demonstration of the prototype, suggested the name/pun UNICS (pronounced "Eunuchs"), as a "castrated Multics", although Dennis Ritchie is claimed to have denied this.)

    Ken Thompson, in a transcribed 2007 interview with Peter Seibel[20] refers to Multics as "...overdesigned and overbuilt and over everything. It was close to unusable. They (i.e., Massachusetts Institute of Technology) still claim it’s a monstrous success, but it just clearly wasn't." He admits, however, that "the things that I liked enough (about Multics) to actually take were the hierarchical file system and the shell—a separate process that you can replace with some other process."

  • Based on the comments here, I can't wait for this story to be taken up at TheRegister.
  • Massive
    Unusable
    Tables
    In
    Core
    Seriously

    Okay, the last one I just winged, but that was the standard definition of Multics for(;;).
    Since it wasn't posted, I did my duty for history. And tables. In Core.
  • The biggest problem with Multics was GE/Honeywell/Bull, the succession of companies that made the computers that it ran on. None of them were much good at either building or marketing mainframe computers.

    So yes, Multics was a commercial failure; the number of Multics systems that were sold was small. But in terms of moving the computing and OS state of the art forward, it was a huge success. Many important concepts were invented or popularized by Multics, including memory mapped file I/O, multi-level file s
