Operating Systems Open Source

MenuetOS, an OS Written Entirely In Assembly Language, Inches Towards 1.0

angry tapir writes "MenuetOS is an open source, GUI-equipped, x86 operating system written entirely in assembly language that can fit on a floppy disk (if you can find one). I originally spoke to its developers in 2009. Recently I had a chance to catch up with them to chat about what's changed and what needs to be done before the OS hits version 1.0 after 13 years of work. The system's creator, Ville Turjanmaa, says, 'Timeframe is secondary. It's more important to have a complete and working set of features and applications. Sometimes a specific time limit rushes application development to the point of delivering incomplete code, which we want to avoid. ... We support USB devices, such [as] storages, printers, webcams and digital TV tuners, and have basic network clients and servers. So before 1.0 we need to improve the existing code and make sure everything is working fine. ... The main thing for 1.0 is to have all application groups available.'"
  • by HoldmyCauls ( 239328 ) on Friday November 15, 2013 @12:25PM (#45433853) Journal

    Just tried it in VirtualBox, and it has made strides since I last tried it some years ago. Some notes:
    Select "Other/Unknown (64-bit)" in the Operating System type drop-down, unless you specifically download the 32-bit version.
    Add a floppy controller and add the image as a floppy disk attached to that. Delete the other controllers that are present by default, unless you have a specific reason not to (like listening to your outdated music on disc from within MenuetOS, or loading a WAD or PAK file for Doom/Quake).
    Does not work with my work MacBook's iSight camera (afaict).
    Boots in 5 seconds, and I'm thinking of ways to demonstrate it to students at the schools where I work.

  • by jythie ( 914043 ) on Friday November 15, 2013 @12:29PM (#45433913)
    Yeah, outside a few rather narrow cases, modern CPUs have just gotten too complicated to write efficient assembly for.
  • by Anonymous Coward on Friday November 15, 2013 @01:32PM (#45434751)

    Three points:

    1) Compilers vs Humans
    You have to start by doing an apples-to-apples comparison. Yes, many developers these days are ignorant of the low-level details of assembly language, and would therefore not produce assembly code as good as what comes out of a compiler. But that is because compilers aren't built by your standard run-of-the-mill code monkey. They are built by people who truly understand the issues involved in creating good assembly language. So you need to compare assembly created by a compiler against assembly created by someone at least as skilled as the people who created the compiler. In such a comparison, the humans will generate more efficient code. It will take them much longer (which is one of the reasons we have compilers and high-level languages), but they will generate better code.

    2) Why write assembly language
    No, one does not write assembly language for "fun" - there are specific business reasons to do so. Replacing the inner loops of performance-critical code with hand-coded assembly is a common example. Most major database companies have a group of coders whose job is to go into those performance-critical sections and hand-tune the assembly language. Would I try to write a GUI in assembly language? No, because it simply isn't that performance-sensitive. Choose the tool that fits your needs. Religion about tools is just silly.

    3) Out-perform C
    No. Given coders of equal skill, all of the common high-level languages (Java, C, C++, etc.) are roughly identical in terms of CPU-intensive performance. That's because the issue is more one of selecting the correct algorithms and then coding them in a sane manner. It is demonstrable that Java can *never* be more efficient than a corresponding C program, because one could always write a C program that is nothing more than a replacement for the standard JVM (it might be a lot of code, but it can be done).

    The place that one starts to see differences in performance is in the handling of large data sets. Efficiently managing large data sets has much more to do with management of memory. Page faults, TLB cache misses, etc have significant performance impacts when one is working on large data sets. Java works very hard to deny the developer any control over how data is placed in memory, which leaves one with few options in terms of managing locality and other CPU-centric issues related to accessing memory. C/C++ gives one very direct control over the placement and access of objects in memory, hence providing a skilled developer the tools necessary to exploit the characteristics of the CPU-CACHE-RAM interaction. It is laborious, to be sure, but C/C++ allows for that level of control.

    So it all boils down to what one is implementing. If I were implementing a desktop application, I would probably use Java. The performance demands related to memory management are typically not very great and Java's simpler memory management paradigm streamlines the development of such applications (not to mention the possibility at least of having one implementation that runs on multiple platforms). If I were implementing a high volume data management platform, I would use C++ because the fine grain control of memory management provides me the necessary tools to optimize the data-intensive performance.

  • by lgw ( 121541 ) on Friday November 15, 2013 @01:50PM (#45435003) Journal

    So, what you're saying is that the C compiler is a better assembly coder than you are. I feel your pain on that one.

    Indeed. I spent 5 years supporting a production commercial OS written entirely in assembly (one of the many forks that happened when IBM started licensing the source for their old mainframe OS). Today I let a C compiler do its job on my personal projects.

    Can you write faster code than the compiler? Sure you can, though it requires a deep understanding. But that code will be crap, unmaintainable code. There was a day when C was called a high-level language, and in a meaningful way it still is. You can write good, maintainable C code that doesn't look optimized and get nearly-perfect assembly that bears little resemblance to the source.

    The worst choice in C is to think you need to help the compiler optimize. Seriously, the compiler doesn't care at all whether you write x = x << 1;, x += x;, or x *= 2; - it sees them all the same, so code the one that makes sense in context.

  • by Immerman ( 2627577 ) on Friday November 15, 2013 @02:22PM (#45435447)

    Can fit in cache != will be in cache. On a modern multi-GB system, the memory paging index alone is going to dwarf the size of this OS's code. Then there's all the rest of the OS data, plus the even more frequently used application code and data. Certainly, shrinking the OS code size drastically will help free up more cache space for other uses, and the most heavily used parts may be able to remain in the cache most of the time, but it's almost a guarantee that most of the time most of the OS code won't be in the cache.

  • by Megol ( 3135005 ) on Friday November 15, 2013 @02:28PM (#45435515)

    There's another reason for this too... today's CPUs are designed to recognize some standard compiler instruction chains and shortcut them -- so if you hand-code those instructions, the CPU will have to take your instructions literally, whereas if you use the manufacturer's compiler (or a common compiler such as provided by GNU or MS), the CPU will often recognize the expensive routines and optimize them for you in the pipeline.

    That said, if the assembly programmer actually knows the CPU they're targeting, they can take advantage of these pipeline shortcuts as well. But it won't be portable unless they duplicate a lot of the logic that goes into compilers in the first place - at which point, you're adding an extra layer that's going to take more time/space.

    I think you are mistaken here. Yes, some Intel processors optimize certain instruction patterns, but those are the same patterns used by assembly language programmers. One example is fusing a comparison instruction with the conditional branch that follows it. Any assembly programmer not using that pattern isn't optimizing for performance, whether out of ignorance or by intention (i.e., size optimization). These patterns have been in use since the Pentium Pro was released, so this isn't a recent change.

    Somewhat more esoteric is the detection and special handling of CALL x; x: POP EAX type patterns. Here one calls the next instruction (labeled x), causing the processor to push the return address onto the stack, which is then stored into the EAX register by the POP instruction. Intel processors detect this pattern and avoid treating it as a branch instruction, leading to faster execution.

    Other than those two examples, I can't recall anything not exposed to assembly language programmers - in fact, those kinds of rewrites _are_ exposed to programmers, if they bother to read the manuals.
