Operating Systems Design

OS design isn't taught in "OS design" classes, and books like Modern Operating Systems by Andrew Tanenbaum are a sick joke. What happens is that you get an instructor or writer who doesn't understand the first thing about OS design, doesn't know the state of the art, and probably doesn't even understand what design is. So what do they do? They teach you how to reimplement Unix, a 30-year-obsolete architecture! How can they be so stupid? See Excerption Not Abstraction and Cargo Cult.

So what does a fresh eager student do if they want to avoid having their imagination utterly destroyed by an Andy Tanenbaum spouting irrelevancies like global vs local address spaces, synchronous vs asynchronous messages, and the 23 secret ways to resolve deadlocks? Start with a Crash Course In Os Design. Also learn the Operating Systems Design Principles and meditate on them a lot.

(The discussion of capability and ACL security models that was here, has been moved to Capability Security Discussion.)


Monolithic Kernel and Boot Strap aren't fundamental OS concepts at all, only implementation details. And the Micro Kernel (or Nano Kernel) is obsoleted by the Exo Kernel as a fundamental OS concept.

Do people agree with this? Isn't the Windows Nt Kernel a Micro Kernel? -- Takuya Murata


Extreme Programmers help us out here -- It seems that a team of 6 could build a better Operating System in a month, and then could continue to add features as it got smaller and smaller over time.

Operating Systems used to be small because RAM was expensive. Now they are big and getting bigger. What exactly are the requirements (tell some stories) of an OS?

Scheduler of processes

Handler of interprocess communication

Speaker to low level Input Output devices

Q: Does the OS really have to do this, or is a process sufficient? And do you need multiple processes?

Operating Systems As Religions examines the whole topic from a humanist perspective.

The OS should, on a multiuser system, be able to preempt a running process, to keep a user from locking up the system with their own malicious (or just poorly written) program. However, all the OSes I have seen are based on the Von Neumann Paradigm of computers. Perhaps it is time for new thinking, opening up alternatives to memory-mapped I/O or DMA. -- Jacob Cohen

You're focusing on two things you should never talk about when discussing design goals. First, you're talking about implementation details (e.g., a "process"). Second, you're not even talking about the abstractions and principles which the users will see after you've abstracted away the hardware. Rather, you're talking about the ones that YOU see coming from the damned hardware! So not only are you doing a miserable job of expressing design goals, but you're not even talking about design goals at all.

In case it escaped your notice, all programmers talk about OSes the same way you do. Even the ones that build them. This should tell you that they're all incompetent fools. Which they are, with very few exceptions.

So what's an OS all about? It's something that provides STORAGE, COMMUNICATION, and COMPUTATION. Hence, the storage, network, graphics and process stacks. That's what an OS is all about; several interrelated stacks of ABSTRACTIONS. -- RK

I thought #12 of the Operating Systems Design Principles was "Exo Kernel". Exokernels have the explicit philosophy that operating systems should not provide abstractions. That's a task better left to user-space libraries, because All Abstractions Lie. Instead, the interface that user-space sees mimics the physical hardware to the greatest degree possible. It's the library designers' job to abstract away the hardware. -- Jonathan Tang

I don't understand the distinctions between user-space and kernel-space, nor between daemons and libraries. I suspect this is because they are artificial low-level bullshit. To the limited extent that I understand these distinctions, I am convinced that they are artificial low-level bullshit.

I never did get around to fixing up the Exo Kernel page the way I wanted to; never got feedback on the subject.

My position on the issue can be found at New Os Features/12 and also at Operating Systems Design Principles/Exokernel.

In other words, I do NOT think of Exo Kernel as a low-level feature. I don't think of it as a feature at all but rather as a general organising / design principle. Which is what it must be in order to qualify for inclusion in Operating Systems Design Principles.

An OS is not an exokernel just because it's limited in its scope to the point where it's useless crap. An OS is not an exokernel just because it's got some insignificant feature buried at a level of abstraction so low that operating systems architects (the smart ones, anyway) don't bother thinking about it.

Rather, to be an exokernel, an OS has to separate multiplexing from abstraction at EACH AND EVERY layer of abstraction. And there's a boatload more layers of abstraction even in the computation stack than just "cpu" and "processes". -- Richard Kulisz

In most (all?) modern processors, the hardware supports certain instructions that are only available in "protected mode". These do things like access the TLB, set and clear interrupt flags, and handle I/O - all the nitty-gritty implementation details that you'd rather not worry about. The operating system is the only program that has access to these instructions. If an application wants to make use of these services, it has to execute a system call (e.g. the SYSENTER and SYSEXIT instructions on the Pentium). These switch the protection level on the CPU, and then transfer control to an address specified in the kernel's trap table. The kernel then has to perform the specified operation on behalf of the application, because the application doesn't have the privileges to perform it itself (and note that this is a hardware restriction, not something brain-dead OS designers cooked up).

The system call mechanism is the boundary between "user space" and "kernel space". Code running in kernel space has access to these protected-mode instructions. Code running in user space doesn't.
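
To make that concrete, here is a minimal sketch of crossing the boundary from the user-space side on x86-64 Linux (using the SYSCALL instruction rather than SYSENTER; register conventions per the Linux syscall ABI; an illustration, not production code):

 /* Minimal user-space syscall wrapper for x86-64 Linux (sketch).
    rax holds the syscall number, arguments go in rdi/rsi/rdx, and
    the SYSCALL instruction clobbers rcx and r11. */
 #include <stddef.h>

 static long sys_write(int fd, const void *buf, size_t len)
 {
     long ret;
     asm volatile("syscall"
                  : "=a"(ret)
                  : "a"(1L /* __NR_write */), "D"((long)fd),
                    "S"(buf), "d"((long)len)
                  : "rcx", "r11", "memory");
     return ret;   /* negative errno value on failure */
 }

 int main(void)
 {
     sys_write(1, "hello from user space\n", 22);
     return 0;
 }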

The problem the Exo Kernel developers identified was that the defined system call mechanism presents an interface that application developers cannot get around. If you want access to the disk, you have to go through the open/close/read/write syscalls (well, there's mmap...), because those're the system calls that the OS exposes. But that locks you into a particular filesystem: if you're implementing a Data Base or Prevalence Layer, you've got to simulate blocks on top of files, which are then built on top of blocks. The result is inefficient and inflexible.

An Exo Kernel tries to make its syscalls mimic the underlying hardware as much as possible, so you don't get this Abstraction Inversion. You expose time-slices, disk blocks, TLB contents, etc. Then user-space libraries are responsible for creating processes, threads, paging policies, filesystems, etc. An application doesn't *have* to use any particular one of these formerly-OS abstractions: it can link against a different library and get a different abstraction of the underlying system resources.
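
To put the contrast in code, here is a hypothetical sketch; the exo_* names are invented for illustration and aren't quoted from any real exokernel:

 /* The familiar door: the filesystem abstraction is the only way
    to the disk (POSIX, abbreviated). */
 int  open(const char *path, int flags);
 long read(int fd, void *buf, unsigned long len);

 /* Exokernel style: securely multiplex the raw resource and let a
    user-space libOS build files, databases, or anything else on top. */
 typedef unsigned long blkno_t;
 int exo_disk_alloc(blkno_t *out);                 /* claim a physical block */
 int exo_disk_read(blkno_t blk, void *buf512);     /* raw 512-byte transfer */
 int exo_disk_write(blkno_t blk, const void *buf512);
 int exo_disk_free(blkno_t blk);                   /* give the block back */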

That's why I don't understand what you mean when you say "an OS has to separate multiplexing from abstraction at EACH AND EVERY layer of abstraction". The exokernel does not provide layers of abstraction at all. Those are all in user-space libraries. Instead, it just makes sure that there are no conflicts when the physical resources are accessed. -- Jonathan Tang

Thank you for the exposition, I'm certain it will benefit the page.

In programming languages too there is a distinction between "the language" and "the libraries". This distinction is BS from the user's point of view. The biggest and most important part of Smalltalk Language is its libraries. It's only from a language designer's point of view that you think about the importance of minimizing what's inherent and hard-coded into the language. And even then, if you're clever, you put in enough reflexion that the distinction simply doesn't matter.

It's the same with operating systems. By putting enough reflexion into them, you can allow users to change everything at runtime, assuming sufficient privileges. It should be a design goal of any and every operating system project to be able to replace the entire OS dynamically, at runtime. If you have that level of reflexion, it ceases to matter how big of a "kernel" you've got. The distinction between user-space and kernel-space simply ceases to matter.
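
As a hypothetical sketch of what that kind of reflexion could look like, assuming every kernel entry point dispatches through a single replaceable table (names invented, privilege checks omitted):

 /* Hypothetical: kernel operations dispatched through a table that
    a sufficiently privileged caller can swap at runtime. */
 #include <stdatomic.h>

 struct os_ops {
     void *(*alloc_page)(void);
     void  (*free_page)(void *p);
     long  (*send)(int chan, const void *msg, unsigned long len);
 };

 static _Atomic(struct os_ops *) current_ops;

 /* Every internal call goes through the table... */
 static void *kernel_alloc_page(void)
 {
     return atomic_load(&current_ops)->alloc_page();
 }

 /* ...so replacing the "OS" is one pointer swap (privilege checks
    and quiescing of in-flight calls omitted). */
 void os_replace(struct os_ops *new_ops)
 {
     atomic_store(&current_ops, new_ops);
 }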

Note here that we're talking about "kernels" and not operating systems. An operating system includes ALL of its libraries and other components. A good OS doesn't merely have extensive abstractions provided by its libraries but rather has LAYERED abstractions. This is obvious in the networking stack. If you've studied the subject area, its necessity is also obvious in the storage stack. If system programmers weren't fools, it would also be obvious in the graphics stack. And finally, layering is also necessary in the computation stack.

Now, in a badly designed OS, the abstractions provided in the different layers will have multiplexing mixed in. This is bad because it makes it more difficult to develop, install and test alternate abstractions in the system. For a clean and flexible operating system, abstraction should be separate from secure multiplexing at each and every layer of abstraction.

And now on a completely different topic, there is a subtle issue in exokernels that has long puzzled me. If you multiplex a resource, should its consumers be aware of each other? If they are aware of each other then you must provide abstractions to arbitrate the resource (arbitration is an abstraction). OTOH, if they are not aware of each other then the resource you're providing is a complete fiction, an abstraction.

For example, if you multiplex a CPU then you can choose to expose individual time slices of the CPU. In that case, you must provide a method for users to secure and release particular time slices in the presence of other users doing the same. In contrast, you could expose a virtual CPU devoted to each individual user. But users would no longer be exposed to any particular time slice of the real CPU.

When multiplexing RAM, this difference simply doesn't matter. It doesn't matter what particular part of real RAM you're using. The same with storage and graphics in general. In the case of the CPU, now that's an entirely different thing. Same with networking. So the question is, how do you expose time without providing arbitration abstractions? -- Richard Kulisz
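
To restate the dilemma as interfaces, here are hypothetical sketches of both choices (all names invented for illustration):

 /* (a) Expose real time slices: consumers can see and contend with
    each other, so the kernel must also hand out arbitration calls,
    and arbitration is itself an abstraction. */
 int slice_reserve(unsigned long start_tick, unsigned long n_ticks);
 int slice_release(unsigned long start_tick);

 /* (b) Expose one virtual CPU per consumer: no arbitration is
    visible, but the "CPU" handed out is a fiction; no consumer can
    name a particular slice of the real processor. */
 typedef int vcpu_t;
 vcpu_t vcpu_create(void);
 int    vcpu_run(vcpu_t v, void (*entry)(void *arg), void *arg);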


If you are into building your own OS (an increasingly popular sport ;-) you should probably take a look at www.cs.utah.edu, where you will find a library of components that can be used to assemble custom operating systems tuned to your needs.

Using the Flux toolkit to build your "own" custom OS is akin to "writing" a novel by transcribing the storyline out of one of those make-your-own-adventure books. And you'll learn as little about operating systems as you would about writing. -- RK


I, as someone who enjoys the challenge of writing kernels, have spent long hours contemplating the significance of exokernels. I love the idea; however, I have never been successful in figuring out how to make them work in practice.

We all agree that they expose the underlying hardware, but what if the hardware changes between runs of a system? I might have a CD-ROM in one run, but a DVD-ROM in another. Or I might have an AMD Athlon in one, and a Pentium 4 in the next (e.g., as when moving hard drives from one computer to another).

Just who is responsible for performing task switching in an exokernel? If all it did was provide protection, but no multiplexing (as is required by pure-blooded exokernel philosophy), then no task would ever switch pre-emptively. It'd be equivalent to a cooperatively multitasking environment. Ouch.

Assuming the issue of task switching is resolved, how is inter-process communications handled?

One solution is to again take advantage of protection-without-multiplexing, which mandates a shared memory buffer of some kind, with each process agreeing not to overwrite the contents of another process's buffer. This sounds dangerous.

Alternatively, you can use a finite-sized shared buffer to implement a datagram messaging service between processes. The rule here is that if your message would exceed the free space in the buffer, you must defer the message and try again later. Or just drop it. Or just wrap around and overwrite the contents of the buffer starting at the beginning. In any case, someone, somewhere, is going to have to re-send the message. This just screams for problems in runtime performance benchmarks.
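
For concreteness, here is a minimal single-producer, single-consumer sketch of that finite shared buffer under the defer-and-retry rule. The layout and names are invented; a real system would also need the matching receive side and a way to map the ring into both processes:

 /* Fixed-size shared ring carrying length-prefixed datagrams.
    Single producer, single consumer; invented for illustration. */
 #include <stdatomic.h>
 #include <stdint.h>

 #define RING_BYTES 4096      /* total shared buffer */

 struct ring {
     _Atomic uint32_t head;   /* consumer advances this */
     _Atomic uint32_t tail;   /* producer advances this */
     uint8_t data[RING_BYTES];
 };

 /* Returns 0 on success, -1 if the message doesn't fit right now;
    the sender must defer and retry (or drop), exactly as above. */
 int ring_send(struct ring *r, const void *msg, uint32_t len)
 {
     uint32_t head = atomic_load(&r->head);
     uint32_t tail = atomic_load(&r->tail);
     uint32_t used = tail - head;            /* unsigned wrap is fine */
     if (4 + len > RING_BYTES - used)
         return -1;                          /* defer-and-retry case */
     for (uint32_t i = 0; i < 4; i++)        /* little-endian length prefix */
         r->data[(tail + i) % RING_BYTES] = (uint8_t)(len >> (8 * i));
     for (uint32_t i = 0; i < len; i++)      /* then the payload */
         r->data[(tail + 4 + i) % RING_BYTES] = ((const uint8_t *)msg)[i];
     atomic_store(&r->tail, tail + 4 + len); /* publish the message */
     return 0;
 }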

How are multiple processes that share the same library-OS able to become aware of each other? This is required, for example, if you need to swap out a few pages of another process to free up some RAM for yourself.

For that matter, what is the policy of swapping pages of processes that don't use the same libOS? How is this situation handled?

This list is not exhaustive. This is just a quick recollection of the issues that I faced when I was planning on writing an exokernel for myself.

You must have, at some level, some multiplexing in order to coordinate processes. L4 is a microkernel which comes dangerously close to also being an exokernel, but even L4 provides some abstractions which I think you'll agree are necessary to make a working system.


See original on c2.com