The Last Multics System Decommissioned

Bell Would? writes: "A key feature of the brief news item, 'The end of the Multics era,' in the latest issue of The Risks Digest is the 'list of goals' Multics had fulfilled which, as the author describes them, are as relevant today as they were 35 years ago." Odd -- I assumed these were all long since junked or put into museums, since my first exposure to the name Multics was in books which spoke mostly in the past tense. That list of goals is one that I hope architecture designers consult frequently.

  • by Anonymous Coward
    Yeah, we actually maintained Multics for a few years. It was still there when I was done, so I have no idea what happened to it.
  • by cperciva ( 102828 ) on Sunday November 12, 2000 @08:32PM (#628382) Homepage
    I guess there won't be any Real Men around any more. After all, it is a well known fact that Real Men use Multics.
  • "Fortunately for us, Dennis Richie and Ken Thompson decided to pare down some of the features and create a version of "Multics without the balls." Thus Unix was born (the name being a pun on "Multics")."

    And fortunately for us, an army of people have put every one of those features BACK [into Unix]... I don't think anybody would be wanting to run a process-swapping OS with a 16-bit address space these days... The list of features Ritchie and Thompson removed includes demand paging, dynamic linking, shared memory, memory-mapped files, ... need I go on?

    The big problem with Multics was that it was 20 years ahead of the hardware.

    In addition to Unix, I know a significant portion of the Multics staff was instrumental in developing the proprietary OS run by Stratus Computer (www.stratus.com) and I believe Multicians also had a major hand in VMS and later in NT development.
  • I know for a fact that the ABB (Asea Brown Boveri) offices in Columbus, Ohio still have a functional Multics. At least they still did a few weeks ago when my father was there on business...
    ICQ# : 30269588
    "I used to be an idealist, but I got mugged by reality."
  • by SurfsUp ( 11523 ) on Monday November 13, 2000 @12:04AM (#628385)
    Multics had a rather interesting approach to file I/O, IIRC - when you loaded a file, it got mapped straight into virtual memory (the machine had a 48 bit address space back in the 1960's, so you could get away with stunts like this). Read/write was just a matter of writes to memory!

    That's pretty much what we do now in Linux - when you write it doesn't go to disk, it goes onto memory pages. When you read you're reading from memory pages and if they're not there, they get 'swapped in' from your file using the same mechanism we use for virtual memory, though we bypass the paging hardware in this case (it's faster that way).

    Neat idea - but imagine the 32-bit address space crunch happening 20 years ago instead of now :)

    We get around that by using disjoint pages of virtual memory mapped into the file's address space with a hash table, so the file has a 44 bit address space - that should be enough for a while. This works well, and doesn't cause virtual memory fragmentation. We'll probably start mapping the files in chunks larger than one page pretty soon.
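    For illustration, a rough C sketch of the mmap(2) idiom being described above -- not Multics code, error handling kept minimal, and the file name is made up:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("notes.txt", O_RDWR);   /* "notes.txt" is just an example */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; from here on it is just bytes in our address space. */
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* "Reading" is just looking at memory... */
        printf("first byte: 0x%02x\n", (unsigned char)p[0]);

        /* ...and "writing" is just storing to it; the page cache and
           writeback do the actual disk I/O behind your back. */
        p[0] = '!';
        msync(p, st.st_size, MS_SYNC);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }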

  • Really.

    Perhaps it went onto one of our naval vessels, and collapsed some poor rusted out deck. What a sad end to a noble operating system!

  • Well, my first AC flame of the day. Wow. To better state my question (considering the topic of the story seemed to be centered on the longevity of the Multics systems), perhaps I should have asked: what are the possibilities for developing a *nix, BeOS, or whatever system with that kind of longevity? Everything *nix-wise is booming so fast that some distros are skipping version numbers, and BeOS has gone from "The Multimedia OS of the future" to an IA OS! But now that you mention it... hmm, just imagine a Beowulf cluster of Multics-based DR DOS machines, wow... the SETI data I could process!

  • Pioneer users of the system put up with a lot: crashes, poor response, constant change, arrogance from developers, and inexplicably missing features.

    Ahhh, isn't it nice to see that while the dominant OS may change, some things always remain the same?

  • What's interesting is that Bull (?), the company that owns the rights to the source code, never wanted to release the source code because they claimed they had to "continue to support the few remaining Multics systems in existence" -- they can't possibly be doing this now, so give us the damn source! ;-)
  • Mmmmm, Apollo. My trusty DN3000 is still sitting about five feet away. I do not fire it up as much as I used to since the disk started making unhappy noises. It is not dead yet, though. :)
  • I always wondered why Multics was still in its list of server types; I didn't even know any were around to this day.
  • Two thoughts:

    1) Was this the longest lasting O/S thus far? Anyone know of a production O/S that pre-dates mid-1965 that is still running?

    2) Multics died mostly from being proprietary and running on proprietary hardware. The first, the Multicians thought, could be surmounted by a gift from the current code owners. The second, alas, was fatal.

    The industry's fixation (mostly because of the volume curve) on von Neumann architectures that lack any real new features causes us all to go without things we could really, really use -- like the ring security that Multics offered, which had direct hardware support. Too bad ASICs are not yet dense enough... maybe soon. :-)

    --Multics

  • Something is quite wrong with the Canadian military. First they decommission Multics. Now I hear that they are getting new helicopters to replace the Sea Kings. What's next? Our diesel submarine getting replaced? I am starting to see a conspiracy.

    Actually, I don't begrudge the military for getting new helicopters. They are really needed.
  • It shows a completely different point of view... instead of "everything is a file", the MULTICS way seems to be "everything is core".

    We do that too, it's called mmap. The nice thing is that the same primitives used in mmap (and now swap too... and soon, shm) are also used in read and write. All this in a nice, compact efficient package. Um, as long as you close your eyes and forget that buffer.c exists. :-)

  • Yes, good old Domain/OS.
    That was a nice OS, and the Apollos were amazingly quick for what they were (68030s, as I recall).


    -- And let there be light... so he fluffed the light spell
  • I believed the "Hackers" (the book) hype against Multics and heard many an anti-Multics joke throughout the years. But I ended up meeting (at a short-lived contract) a wonderful gentleman who had worked on Multics and was very proud of the accomplishments he'd achieved and what ground Multics had broken.

    I gained a new respect for the achievements of the Multics team, and I know today my former coworker and friend would be very unhappy to learn of this news. It really is the end of an era, and we have a lot to be thankful for in what was learned from it.


  • It wouldn't be all that hard to emulate the hardware on a PC. Yes, it would be slow, but the Multics CPUs were only a few MIPS, so an emulation on any modern PC would be faster than the original machine.

    Some of the Multicians should do this, just so it's not forgotten. It's still one of the most secure operating systems around.

  • I was intending to do something like this, but since it's already done I will just add a couple of things...

    A wide range of system configurations, changeable without system or user program reorganization.
    Windows: Only three reboots to install a sound card!
    Linux: Exchange anything but the kernel without rebooting
    Microkernels: 8-D

    Well, I take this to mean 'hot plug' (since dorking with the system/modules is 'system reorganization'), which as far as I'm aware Linux doesn't support, but Win 2k [compaq.com] (if your HW supports it), AIX [ibm.com], and Solaris [sun.com] do.

    Hierarchical structures of information for system administration and decentralization of user activities. Not entirely sure what they mean by this...
    I think they mean NDS [novell.com], Active Directory (which is basically LDAP with a bunch of support), and of course LDAP if you are willing to spend the time to get it to support all the cool stuff NDS does.

  • It was actually the first computer I ever used. My father, a professor of computer science at MIT who did some development on the Multics virtual memory, got me an account.

    Unfortunately, he agreed to pay for it. The Multics billing system was the most elaborate I've ever seen, before or since. You were billed for CPU minutes, connect time minutes and I think even a whole bunch of other minutes. As a result, I ran through $150 of computer time in three days, which is not exactly cost-effective, so he wound up getting me a free ITS tourist account.

    I don't remember much about it anymore, since it's been years since I've had an account, but I do feel a little nostalgic now that it's gone. Pity no renegade hobbyist could put one together, as some individuals have with ancient PDP-10 systems. I have to assume that the cost of wiping classified data from the systems is sufficiently high that the recycler is the only realistic destination for these ancient systems.

    D

    ----
  • I know that the Unisys 2200-based transaction system I work on at NWA (WorldFlight, a derivative of UAL's UNIMATIC) still runs in a modified TIP environment on OS2200, which itself is a direct descendant of UNIVAC's EXEC8.

    However, even though some of the Fortran and MASM source still used in production dates from 1966 and 1967, the operating system itself is much newer (less than a year old). It's been refined over time, and the hardware and OS isn't the same as it was. It just maintains an extremely high level of compatibility with older software, and as far as the application is concerned it's still running on a UNIVAC 1108.
    --
    -Rich (OS/2, Linux, BeOS, Mac, NT, Win95, Solaris, FreeBSD, and OS2200 user in Bloomington MN)
  • Sid [userfriendly.org] from User Friendly [userfriendly.org] will be devastated!
  • The interesting part about the article was that the last Multics machine was being run by the Canadian military! Only in Canada, eh?
  • by Anonymous Coward
    where can i download this mystical linxu ?
  • The NSA used to use Multics for their mail server, till it got cracked. Using old-fashioned proprietary systems with security classifications is no justification for lax security.

    http://www.iretro.com [iretro.com]
  • Continuous operation analogous to power & telephone services
    Well, all modern operating systems can do this in theory at least ;-)

    If you're talking "analogous to power & telephone services", that means to be even resistant to hardware failures. Which means hot-swap disks and CPUs. Certainly Sun systems can do this, except maybe if you lose the drive with the root partition, but I'm not aware of Winderz machines which allow CPU swapping.

    Right now, my main Linux server is whining and rumbling like a banshee on testosterone, and it's not the power supply fan, so it must be the old 17 gig hard drive. So it looks like I'll have a few hours of downtime to get a new one in there.

    Hierarchical structures of information for system administration and decentralization of user activities. Not entirely sure what they mean by this...

    Sounds like NIS on steroids. Or maybe the Windoze Registry without the suckage, and distributed over multiple machines. Or better yet, NetInfo from NeXT/OS-X.

  • Support for selective controlled information sharing.
    • Windows: Network Neighborhood ought to be enough for anybody!
    • Linux/UNIX/BSD: NFS, Coda, FTP, scp, etc...

    Nope, more likely something like the ACLs of NT...

  • have been holding off a US invasion all this time with MULTICS??? How embarrassing....
    Wondering what is replacing it.... Suddenly the invitation to Microsoft to move north of the border makes sense.

  • I've never heard of one at U of C, but that's only since 1995.

    However, speaking of old systems, when I was in 2nd year engineering (1996), we had to do C programming on old DECstations running OSF. They finally replaced them in 1998 with RedHat 4.2 or something on P-233s, then switched to WinNT 4 in fall 1999, then a year later switched back to Debian 2.2.whatever (same hardware all along).

    --
  • by Chris Tyler ( 2180 ) on Monday November 13, 2000 @03:04AM (#628409) Homepage
    Yes, the University of Calgary housed two Multics systems in the 80's: a 6-CPU system and a 1-CPU test system. The company that supported Multics after Honeywell (ACTC) was a spin-off from U of C.

    Multics (at UofC) was the first large system I used, and I have many fond memories of it. I attended the Shad Valley (technology + entrepreneurship) summer program in 1984 and spent hours absorbing 'everything Multics'. On-line manuals, pathnames, processes, e-mail, chatting, windowing systems (character-based) ... all very fascinating to a tech-hungry teenager.

    It's interesting to note that Multics underwent a development surge in the early 80's and despite the aging hardware design still had a number of sites at that point (Ford, Canadian defense, US DOD).

    I'm sad to see it go, though its time has come (without portability, it was doomed to die with the hardware). I remember touring the U of C computer room when a tech was on site, reportedly doubling the cache *width* while the system remained on-line (I presume he was taking one CPU offline at a time). The LED bargraph pads showing CPU utilization for each processor that were scattered around the room were quite impressive too :-)

  • ...has upgraded its 5-processor Multics system to multiple handheld devices to "stay ahead, technologically."

    Info about said devices is available here [geocities.com].

    Karma karma karma karma karmeleon: it comes and goes, it comes and goes.
  • That's pretty much what we do now in Linux - when you write it doesn't go to disk, it goes onto memory pages. When you read you're reading from memory pages and if they're not there, they get 'swapped in' from your file using the same mechanism we use for virtual memory, though we bypass the paging hardware in this case (it's faster that way).

    From the Multics Glossary [multicians.org] entry on virtual memory:

    A Multics process accesses all the data (it is allowed to) on the system as part of a huge, two-dimensional address space (see segmentation). There is no "file I/O", no buffers, no read-in, no write-out.

    I take this to mean that Multics had no read(2) or write(2)... from the application-writer's point of view, the equivalent to these system calls was simple memory access.

    Presenting this analogy to programmers as their primary means of file access is different from using such tricks down at a level where (theoretically) no one except kernel programmers needs to know about them.

    It shows a completely different point of view... instead of "everything is a file", the MULTICS way seems to be "everything is core".
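    To make that contrast concrete, here's a rough, generic Unix C sketch (nothing Multics-specific; the file name is made up) of the two programmer-visible idioms side by side:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);   /* example file */
        if (fd < 0) { perror("open"); return 1; }

        /* Unix idiom: explicit file I/O, bytes copied into a buffer you own. */
        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read() handed us %zd bytes\n", n);

        /* Multics-style idiom, approximated with mmap: the "file" is simply
           memory, so looking at a byte is a plain load -- no read(), no buffer. */
        char *seg = mmap(NULL, 128, PROT_READ, MAP_SHARED, fd, 0);
        if (seg != MAP_FAILED && n > 0)
            printf("byte 0 via the mapping: 0x%02x\n", (unsigned char)seg[0]);

        close(fd);
        return 0;
    }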

  • by miniver ( 1839 ) on Monday November 13, 2000 @03:52AM (#628412) Homepage

    I had the "opportunity" to work as a systems operator on *6* Multics systems, from 1986 to 1988. (Yes, I'm listed with Multicians.org.) Your interpretations of some of the goals of the Multics project is somewhat colored by modern technology. Let me explain what some of those goals meant to the Multicians, and why they still aren't met by modern operating systems:

    • Continuous operation analogous to power & telephone services
      This meant that the entire system was hot swappable: disk drives, CPUs, memory units, IO units. Of course, your odds of the system surviving the addition or subtraction of any one of these were ... low. This was more a function of the hardware architecture than the OS, but most modern computers don't take this to the extremes of Multics. Since hardware is so cheap, it's much more effective to build redundant clusters with shared, redundant storage, where you add and subtract entire systems, instead of adding and subtracting components.
    • A wide range of system configurations, changeable without system or user program reorganization.
      This is the hot-swappable hardware thang again. You could add a CPU to a system without interrupting the processing on the rest of the system. System software updates were quite a different matter -- that generally required a system restart, and there were still "system" drives whose failure could cause the entire system to crash.
    • Support for selective controlled information sharing.
      This refers to classifying information, not filesystems. Multics could run with Classified, Secret, and Top-Secret information (and programs) all co-resident, and without a lower-classification program being able to access higher-classification information. No modern operating system works this way; the set of systems that replaced the Multics group that I worked on was *3* separate Unix networks, one for each security classification.
    • Hierarchical structures of information for system administration and decentralization of user activities.
      This refers to the traditional hierarchical file structure, with hierarchical user management thrown in for good measure. What CP/M and MS-DOS stole from Unix, Unix in turn stole from Multics.

    In general, Multics achieved its goals, though the cost was too high. More recent operating environments have judged the cost of some of those goals (primarily security) to be so unrealistic as to be completely undesirable. While I think that Multics aimed too high on some goals, I think that too many operating systems (including Linux) aim too low.



  • Right now, my main Linux server is whining and rumbling like a banshee on testosterone, and it's not the power supply fan, so it must be the old 17 gig hard drive. So it looks like I'll have a few hours of downtime to get a new one in there.


    I have a problem with OLD being used as the label for a 17 gig hard drive. Possibly you could describe an old CDC Wren III 300 Meg drive as old (I still have one in service), but by no stretch of the imagination could I call a 17 gig drive old. Especially interesting considering the topic we are under.
  • by bunyip ( 17018 ) on Monday November 13, 2000 @03:56AM (#628414)
    Most of the large airline systems run on IBM's Transaction Processing Facility (TPF). IBM keeps updating it, even added TCP/IP a few years back, but it's essentially 1960's technology. By the way, go to a web site like Expedia or even the new Orbitz site and you're still hitting mainframe assembler code in the background somewhere... Try these on for size: most applications are written in assembler, manually divided into 4K blocks. No virtual memory, all storage preallocated at sysgen into fixed size blocks (woohoo - no fragmentation!). No filesystem, all you get is a shitload of blocks (381 bytes, 1055 bytes, 4K) and it's up to the programmer to do the rest. I've seen code on these systems that was written in 1970-1972 and is still in use today, taking thousands of transactions per second. Somehow I don't see W2K apps lasting 30 years.
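    Purely as an illustration of that storage model (this is not TPF code; the sizes and names are invented), a preallocated pool of fixed-size blocks handed out from a free list looks roughly like this in C:

    #include <stdio.h>
    #include <string.h>

    /* Toy pool: every block carved out up front ("at sysgen"), handed out and
       returned via a free list -- no fragmentation, but also no filesystem;
       what goes inside each block is entirely the programmer's problem. */
    #define BLOCK_SIZE 1055          /* one of the block sizes mentioned above */
    #define NUM_BLOCKS 1024          /* invented for the example */

    static char pool[NUM_BLOCKS][BLOCK_SIZE];
    static int  free_list[NUM_BLOCKS];
    static int  free_top = NUM_BLOCKS;

    static void pool_init(void)
    {
        for (int i = 0; i < NUM_BLOCKS; i++)
            free_list[i] = i;
    }

    static char *block_get(void)
    {
        return free_top > 0 ? pool[free_list[--free_top]] : NULL;
    }

    static void block_put(char *b)
    {
        free_list[free_top++] = (int)((b - &pool[0][0]) / BLOCK_SIZE);
    }

    int main(void)
    {
        pool_init();
        char *b = block_get();
        if (b != NULL) {
            strcpy(b, "one 1055-byte record; the layout is up to the application");
            puts(b);
            block_put(b);
        }
        return 0;
    }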
  • You mention museums... Where can I find a computer museum, or is there one? Besides my closets, of course.


  • "Bye bye the mainframe has died". It's alwawys sad to see a history thing as a live Multics system dying. It did inspire *nix, didn't it? So, we got a lot to thank these machines for, *waves sadfully goodbye*
  • ..for adapting some of this into the *nix world? Or to BeOS? etc.?

  • You've got it turned around -- Unix is similar to Multics, since Multics came first -- but I'll take you seriously anyways:

    1. CPL: Command Procedure Language, better known as shell scripts.
    2. Device independent I/O.
    3. Hierarchical file systems.
    4. Most of the OS written in a higher-level language (PL/I vs C). Though admittedly, calling C a higher-level language is pushing it.
    5. Memory management (paging, memory-mapped files, etc.)
    6. On a more humorous note ...

    7. Both are user unfriendly
    8. Multics administrators like lusers even less than Unix administrators like them
    9. Terminal I/O is ugly because it was developed around the VT-100 and print terminals.
    10. mail is basically brain-dead.
    11. And, last but not least ...

    12. Emacs, which was ported to both Multics and Unix from ITS.

    On a historical note, Primos (the Pr1me Operating System), was a much more direct steal from Multics, down to implementing CPL exactly. I learned Primos years before I used Multics, and Multics was merely more difficult to administrate.


  • Just goes to show how much longevity the original systems had. I can't imagine that the systems of today will still be serving their purposes 35 years into the future. This is a pretty cool testimonial to the time-tested power of the *nixes, which are largely inspired by Multics.
  • Actually our defense system was three-pronged:

    1) Our mighty Multics system at U of C. (Hello, Shadlings)
    2) A board with a spike in it.
    3) Alan Thicke.

    Our latest strategy is letting our economy fall so far behind the US that there will be nothing left worth invading over.
  • by Krimsen ( 26685 ) on Sunday November 12, 2000 @08:13PM (#628421)
    Here [tcm.org] is one. (seems to be down at the moment)
    And here [obsoleteco...museum.org] is another.
  • I also had only heard about Multics through incidental mentions on other subjects. It could be interesting to study the application of the goals to current systems and see what type of improvements would be possible.
  • by the_other_one ( 178565 ) on Sunday November 12, 2000 @08:50PM (#628423) Homepage

    Multics was ahead of its time. Now it's at the end of its time. I hope that before I reach the end of my time, I read an article about the last Windows system reaching its final blue screen.

  • I may be wrong, but I seem to recall a Multics system at U of Calgary (Alberta, Canada) when I was there around 1986. Can anyone confirm or deny this?
  • by mr_gerbik ( 122036 ) on Sunday November 12, 2000 @08:19PM (#628425)
    Here are links to a couple of computer museums here in the US.

    The Computer Museum of America [computer-museum.org]

    Compuseum [compustory.com]

    -gerbik
  • Multics had a rather interesting approach to file I/O, IIRC - when you loaded a file, it got mapped straight into virtual memory (the machine had a 48 bit address space back in the 1960's, so you could get away with stunts like this). Read/write was just a matter of writes to memory!

    Neat idea - but imagine the 32-bit address space crunch happening 20 years ago instead of now :)

  • heh, was Chris Walpole around in those days?

  • by micahjd ( 54824 ) <micahjd@users.sourceforge.net> on Sunday November 12, 2000 @08:57PM (#628428) Homepage
    Looks to me like not only are these principles still applicable, but they're pretty integral parts of everybody's favorite OS:
    • Convenient remote terminal use
      Linux/BSD/UNIX: Check! telnet/ssh and X can make nearly everything network transparent
      Windows: Need an extra program like PCanywhere, and even then it's single user. (but isn't m$ fixing this in win2k?)
    • Continuous operation analogous to power & telephone services
      Well, all modern operating systems can do this in theory at least ;-)
    • A wide range of system configurations, changeable without system or user program reorganization.
      Windows: Only three reboots to install a sound card!
      Linux: Exchange anything but the kernel without rebooting
      Microkernels: 8-D
    • A highly reliable internal file system
      Windows: NTFS seems to be close enough for most people
      Linux: ext3 and reiserfs
    • Support for selective controlled information sharing.
      Windows: Network Neighborhood ought to be enough for anybody!
      Linux/UNIX/BSD: NFS, Coda, FTP, scp, etc...
    • Hierarchical structures of information for system administration and decentralization of user activities.
      Not entirely sure what they mean by this...
    • Support for a wide range of applications.
      Check.
    • Support for multiple programming environments & human interfaces
      Windows: IDEs, IDEs and more IDEs.
      Linux: Your choice of gcc, emacs, kdevelop, vi, or whatever else you find on freshmeat
    • The ability to evolve the system with changes in technology and in user aspirations.
      Open source!
  • Boston has an excellent computer museum.

    It even has one of those trippy one-legged robots that hops around like it's in an ass-kicking competition.
  • I'm not sure ext3 counts as being highly reliable yet. In fact last time I checked it was still alpha code.

  • by Anonymous Coward
    For those of you who don't know, Multics (Multiplexed Information and Computing Service) is a comprehensive, general-purpose programming system which is being developed as a research project. The initial Multics system will be implemented on the GE 645 computer. One of the overall design goals is to create a computing system which is capable of meeting almost all of the present and near-future requirements of a large computer utility. Such systems must run continuously and reliably 7 days a week, 24 hours a day in a way similar to telephone or power systems, and must be capable of meeting wide service demands: from multiple man-machine interaction to the sequential processing of absentee-user jobs; from the use of the system with dedicated languages and subsystems to the programming of the system itself; and from centralized bulk card, tape, and printer facilities to remotely located terminals. Such information processing and communication systems are believed to be essential for the future growth of computer use in business, in industry, in government and in scientific laboratories as well as stimulating applications which would be otherwise undone.

    Because the system must ultimately be comprehensive and able to adapt to unknown future requirements, its framework must be general, and capable of evolving with time. As brought out in the companion papers, this need for an evolutionary framework influences and contributes to much of the system design and is a major reason why most of the programming of the system will be done in the PL/I language. Because the PL/I language is largely machine-independent (e.g. data descriptions refer to logical items, not physical words), the system should also be. Specifically, it is hoped that future hardware improvements will not make system and user programs obsolete and that implementation of the entire system on other suitable computers will require only a moderate amount of additional programming.

  • Perhaps even Multi-Level Security and Mandatory Access Control - after all, Bell and LaPadula explained their MAC/MLS model with a Multics interpretation. This is what led on to the TCSEC (Orange Book) B1 and above classes.
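    For the curious, a toy sketch of the two Bell-LaPadula rules in question (levels only -- the real model also has categories/compartments, which this ignores):

    #include <stdio.h>

    /* Totally ordered sensitivity levels, lowest to highest. */
    enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

    /* Simple security property: no read up. */
    static int may_read(enum level subject, enum level object)
    {
        return subject >= object;
    }

    /* *-property: no write down, so a Top Secret process can't leak
       what it knows into a Confidential file. */
    static int may_write(enum level subject, enum level object)
    {
        return subject <= object;
    }

    int main(void)
    {
        printf("SECRET subject reading a TOP_SECRET object:   %s\n",
               may_read(SECRET, TOP_SECRET) ? "allowed" : "denied");
        printf("SECRET subject writing a CONFIDENTIAL object: %s\n",
               may_write(SECRET, CONFIDENTIAL) ? "allowed" : "denied");
        return 0;
    }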

  • Oh, you couldn't imagine that the Dockmaster shutdown in 1998 was faintly related to the fact that the machine was 14 years old, with hardware replacements prohibitively expensive and difficult to get?
  • Did they? According to the Multicians Web Site, [multicians.org] the ABB site was a Multics customer only until 1991.

    Are you sure the machine was still functioning? Or just there...

    From discussions on the Multics newsgroup, the only site that there seemed to still be uncertainty about was the Puerto Rico Highway Authority, and they were pretty sure that the system there didn't get the Y2K patches, and thus could not still be operating.

    If you're right, then certainly let the folks at the Multicians site know of the still-running ABB system...

  • But the way that the features were added to Unix, they are just hacks, and appear that way, rather than an integral part of the architecture.

    Take a look at the way Multics handled dynamic linking [multicians.org]. Calling a non-existent symbol caused the process to suspend. Someone could write a replacement for the missing subroutine and resume the process. (The Unix-side equivalent is sketched below.)

    Multics had a hierarchical administration system; Unix has sudo(1).

    Multics was designed for large systems. Unix was designed for small systems and grew large.
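    For comparison, the Unix-side equivalent mentioned above is run-time lookup with dlopen(3) -- a rough sketch, with the library name and symbol invented (link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Resolve a symbol at run time. If it's missing we just get NULL back --
           nothing as graceful as Multics suspending the process until somebody
           supplies the missing routine. */
        void *handle = dlopen("./libplugin.so", RTLD_NOW);   /* invented library */
        if (handle == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        void (*greet)(void) = (void (*)(void))dlsym(handle, "greet");  /* invented symbol */
        if (greet != NULL)
            greet();
        else
            fprintf(stderr, "dlsym: %s\n", dlerror());

        dlclose(handle);
        return 0;
    }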

  • Personally I'm hoping for next Tuesday to be the dooms day for Windows.
  • Wow... I had heard that they merged with the Museum of Science (which, in itself, is a great institution). It's sad that it does not, in fact, exist anymore.

    Back about 10 years ago, it really was a geek's museum. They had chunks of Whirlwind sitting around, with an original console. There was another section (perhaps of Whirlwind, perhaps of another ancient computer) where you walked through racks and racks of vacuum tubes.

    They also had demonstrations of core memory, and the infamous tinkertoy computer.

    More recently, they seemed to focus on kids, and explaining how a modern PC worked. This seemed like a losing battle, since obviously their monster "walk through" computer became out of date. And, anyhow, I suspect that fewer kids were really interested in what went on inside the box (and those that were would rather simply disassemble the family computer than push a bumper-car sized mouse around). They tried to demonstrate neat uses of computers but... well, all of their stuff was behind the times. Why go to a museum to learn about computer graphics when your family desktop puts out kickin' Q3 frame rates?

    The last time I went there, I didn't see much about computer history, per se. I remember a display they had of early PCs (including, I believe, an Apple 1). That was a kick, especially since I owned one of the ones in the display (an Osborne CP/M machine).
  • Hopefully the legacy found in Unix and to a larger degree in Domain/OS (anyone else remember Apollo?) will live on.

    Yes, I worked on Apollo workstations in the 80s at Birmingham University in England. They were effing fantastic for their time. The group I worked for would do large non-linear finite element analysis of plastic deformation (e.g. forged con-rods) using parallel fortran jobs spread across all the workstation cpus on the network. Although this was slow, it was still much faster than submitting the jobs to the University computer centre which was running, yes, you guessed it, MULTICS!

    Last time I talked to my old supervisor they had transitioned OSes as follows

    MULTICS->DOMAIN/OS->Irix->Linux

    He seemed to be quite pleased they had skipped the Windows phase entirely.

  • What's interesting is that Bull (?), the company that owns the rights to the source code, never wanted to release the source code because they claimed they had to "continue to support the few remaining Multics systems in existence" -- they can't possibly be doing this now, so give us the damn source! ;-)

    The source won't do you a lot of good; it's all written in PL/I and ALM (Assembly Language for Multics) on a machine with a 9-bit byte and a 36-bit word.

    In any case, after 35+ years of development, I *don't* want to see how much cruft has accumulated. There are things Man Was Not Meant To Know -- that's one of them.


  • 48 bits? I recall 36 bits, split into an 18-bit segment number and an 18-bit offset. Because of the memory mapping, standard files were limited to 2**18 words, approx 1 MB. Larger files were "MSF", or multi-segment files, which had a sort of built-in directory structure, which not all tools supported.

    I will say, that the I/O speed impressed me!

    (P.S. I used Multics at the U.S. Geological Survey around 1980. It was a fun machine.)
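    (For the arithmetic: a 36-bit word is four 9-bit bytes, so 2**18 words = 2**20 = 1,048,576 nine-bit bytes -- hence "approx 1 MB"; counted in today's 8-bit bytes it works out to roughly 1.1 MB.)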
  • I was a Multics user many moons ago as well. I pretty much grew up on the system before I discovered Unix. Actually I had met Unix before Multics, but as far as OSs go...it was my first love 8-)

    The billing system sure could be a pain sometimes; I remember taking a class where we would only get $15 a week in CPU time. And in one class we learned about MACSYMA, so I just had to see what PI to a million places looked like. The job ran for a week in the background... got killed when it exceeded the absolute CPU limit, but not before it burned through $150+ of CPU time I didn't have. The prof couldn't understand why I couldn't log in even after advancing me all the credit for the course.

    Later I had a text-processing account where CPU time was unlimited, but we were charged based on output... different rates for different queues (printers/media, etc). Well, along the way, I was playing around with PostScript and queued a job that crashed... got no output... but also consumed a large negative amount of cash. Suddenly I had unlimited printing funds 8-) It was nice being able to print huge manuals and/or source code listings of software from the net for nothing.

    I'll credit this event as having a significant influence on my programming background. Suddenly being able to print listings of large programs gave me plenty of reading material to learn from... even when I didn't have access to a terminal. Plus, when I did have terminal access, I spent most of my time reading forums or Usenet.

    If I had known that Halifax was still running a Multics system, maybe I would've put in for a transfer to Halifax rather than getting downsized in Medicine Hat (ended up moving east anyways 8-) They had a Honeywell at DRES, but it wasn't Multics... that machine would die all the time, at least it did until they took it off of maintenance and waited for it to die before taking it out of service. Then it ran practically forever...

    Of course, I probably would never have gotten access to the Multics box in Halifax.

    Hmmm, forums and chatting on Multics....I had forgotten how nice that was. Kind of interesting being that I work for a collaboration software company now.
  • Well, I just opened up the box and it turned out to be the damn CPU fan.

    As for "old", well, that 17G drive cost me almost $300 when it was new! :)

  • Alas, poor Yorick, I knew him well....

    I was a student at U of C between 82 and 87 and I have a lot of memories (good and bad) about the Multics system. My first programming course, in FORTRAN, used the Multics system. I remember how slowly the system responded when it seemed like everyone was logged on and trying to finish their assignments. There were none of these candy-assed GUIs to make things 'easier'! You had to type those commands, dammit!
  • I have often wondered why we so stubbornly worked so hard to make the system survive. My own take on it is that we were young and wanted to make a dent in the psyche of the industry which in those bad old days was incredibly shortsighted. And I think we did.

    Heh.
  • *BEEP* Wrong again. Real Men Use The Hurd! (try it out, see what I mean)
  • Just one curious question:

    How do you implement, for example, access to a certain directory for multiple *groups*, which might be rather largish? In NT, I just add the respective groups to the access list; in Linux I ??? (I hate to admit it, but there seems to be a point for NT... %-()

    Regards, Ulli

  • by matroid ( 120029 ) on Sunday November 12, 2000 @08:22PM (#628447) Homepage
    For those of you who have no idea what Multics is, here's a brief summary from www.multicians.org [multicians.org]:
    Multics (Multiplexed Information and Computing Service) is a timesharing operating system begun in 1965 and used until 2000. The system was started as a joint project by MIT's Project MAC, Bell Telephone Laboratories, and General Electric Company's Large Computer Products Division. Prof. Fernando J. Corbató of MIT led the project. Bell Labs withdrew from the development effort in 1969, and in 1970 GE sold its computer business to Honeywell, which offered Multics as a commercial product and sold a few dozen systems.

    It had TONS and TONS of features (look here [multicians.org] for a list), but unfortunately it took too long to implement, and when these features were finally implemented, the resulting OS was so damn slow nobody wanted to use it. Consequently it was canned.

    Fortunately for us, Dennis Ritchie and Ken Thompson decided to pare down some of the features and create a version of "Multics without the balls." Thus Unix was born (the name being a pun on "Multics").

    And we all lived happily ever after!!

  • I went to university at the University of Calgary and worked for a time at ACTC, both names familiar to Multicians. Multics was in many ways decades ahead of its time. Even though you had to program in PL/1, the sheer elegance of the system was a wonder to behold.

    Hopefully the legacy found in Unix and to a larger degree in Domain/OS (anyone else remember Apollo?) will live on.

  • Boston HAD a computer museum... See http://www.tcm.org/html/history/index.html
  • My favorite gallery of obsolete machines is The Archive of No-Longer-Existant Computer Hardware [doesntexist.com]. They feature hardware able to handle what was once considered a formidable load, but which has been retired after being stretched beyond its limits.
  • by friedo ( 112163 ) on Sunday November 12, 2000 @09:11PM (#628451) Homepage
    Not anymore. Real Men use BSD.
  • I looked at the source code these guys wrote in PL/1 in the 70's and early 80's and, to my surprise, it is clearly documented. I expected their code to be a massive pile of shit with no comments, but I was wrong. I am also very impressed that they wrote 3000 pages of specs before starting the implementation.
    Check out http://www.multicians.org/multics-source.html for some of the source.
  • Multics could run with Classified, Secret, and Top-Secret information (and programs) all co-resident, and without a lower-classification program being able to access higher-classification information. No modern operating system works this way; the set of systems that replaced the Multics group that I worked on was 3 separate Unix networks, one for each security classification.

    There may be good reason for that... I was wringing Google looking for a place to get real TTYs when I found this thingy about Multics covert channels [multicians.org]. It was on /. in March [slashdot.org]. Sounds to me like separate machines are a solution, not some kind of OS shortcoming.

  • The Multics system was up and running at least until the summer of 1991. I used it for about a year from 1990 to 1991. After the summer of '91, though, I lost access. It could have been because they were decommissioning it, I'm not sure.

    I think I first accessed the Internet on that system.

  • To think that people are exclaiming at Multics finally ceasing! There is quite an old OS out there still... UNIX. Of course not in its original AT&T code...

    I can draw three morals from this:

    1) A good overall design never grows old. Not to mention excellent foresight by its designers. The ideas that the Multics architects came up with are still the model of a good server/client OS.

    2) A good overall design is built to last! If the Canadians just took out a Multics system last month, there must have been a reason why it was still in commission for this long. It could certainly be argued that it was because there wasn't funding... maybe, but that's beside the point. If it works, why fix it?

    3) If Bill Gates hadn't dropped out of college in the 1970s, and had actually studied a Multics or a UNIX system... maybe his OS would be good? Or was it inevitable? ;)
  • (Red Hat) Linux's out-of-the-box access control, with each user having not only his own userid but also his own group, blows away NT's clunky access control system. I've had to deal with both systems, and Linux is less confusing (Everyone? Administrators? What the hell are all these groups?), and, with the UNIX-friendliness of scripts, much easier for doing large changes. Although most of those scripts are already there: all you have to do is stick the -R flag on the standard permission commands (chgrp, chmod, chown), and that's all it takes to do a huge filesystem recursively.

    I've run ACLs on NT and UNIX. UNIX's system is both simple and flexible - NT's is just nightmarish.
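    For the original question about several groups on one directory, plain Unix mode bits don't really cover it, but POSIX-draft ACLs do (kernel patches for Linux at this point, standard equipment on some other Unixes). A rough C sketch using the libacl API -- the group names are invented, and you'd link with -lacl:

    #include <sys/acl.h>   /* POSIX.1e draft ACLs; libacl on Linux */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <directory>\n", argv[0]);
            return 1;
        }

        /* Owner rwx, owning group rwx, two extra groups (invented names),
           a mask entry, and nothing for everyone else. */
        acl_t acl = acl_from_text("u::rwx,g::rwx,g:webdev:rwx,g:docs:r-x,m::rwx,o::---");
        if (acl == NULL) {
            perror("acl_from_text");
            return 1;
        }

        if (acl_set_file(argv[1], ACL_TYPE_ACCESS, acl) == -1)
            perror("acl_set_file");

        acl_free(acl);
        return 0;
    }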

  • Following the links at Multicians.org took me to: The US National Security Agency's DOCKMASTER machine was shut down in March, 1998, after repeated extensions. The hardware from this site, except for the hard drives, was given to the National Cryptologic Museum, which in turn loaned it permanently to the Computer Museum History Center in Mountain View, California.
    Hope this helps.
