CIS 307: TOPIC: Virtual Memory

Virtual Memory is treated fully in Chapter 3 of Tanenbaum. Here are a few additional notes.

Introduction

A Virtual Memory system is a system where the addresses used in executing images, Virtual Addresses, are different from the addresses, Physical Addresses, used at the hardware level on the bus between processor and main memory. There is a map (called the Memory Map) from the Virtual Address Space to the Physical Address Space. This map is usually different in different processes. The map is set up and evaluated repeatedly during program execution. For a given virtual address the map will give different values at different times: it is undefined if the page or segment containing the virtual address is not in main memory, otherwise it gives the current location of that address in main memory. [There are two other important maps: the Naming Map, which binds the names you use in your program to virtual addresses, and the Content Map, which maps physical addresses to the content stored at those locations. You might think about who/what implements these maps, and whether they are forms of early or late binding.]
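As a toy sketch (not any particular architecture), the Memory Map can be pictured as a partial function from virtual page numbers to physical frame numbers; the page size and the mappings below are invented for the illustration.

  PAGE_SIZE = 4096
  # Toy memory map: virtual page number -> physical frame number,
  # or None when the page is not currently in main memory.
  memory_map = {0: 7, 1: 3, 2: None, 3: 12}

  def translate(virtual_address):
      page, offset = divmod(virtual_address, PAGE_SIZE)
      frame = memory_map.get(page)
      if frame is None:
          raise Exception("page fault on page %d" % page)   # OS must bring the page in
      return frame * PAGE_SIZE + offset

  print(translate(4096 + 100))   # page 1 maps to frame 3: 3*4096 + 100 = 12388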

Virtual Memory was invented in the early Sixties. It is now almost universally used for the following reasons:

Size Independence:
It allows us to have programs whose size is limited by the size of the virtual memory, not by the size of the physical memory. Since virtual memory is usually (NOT always) larger than physical memory, large programs can run even on machines with small main memory.
Protection:
When my array index computation goes out of range, it is bad because it aborts my program. But it would be far worse if it aborted your program. Protection controls the access rights of a process to the various portions of its virtual address space. With appropriate operating system support it becomes possible to implement whatever policy of sharing and separation between processes is desired. This property is so important that many want virtual memory even on diskless machines, or on machines where the physical address space is larger than the virtual address space.
Relocatability:
If location A stores the address of location B, then when the program (or the portion containing B) is relocated, the content of A must be updated. Alternatively, as in machines with Base and Bound registers and in Virtual Memory systems, we can use addresses relative to the base, or virtual addresses, and avoid all such recomputations.
Performance:
By carefully calibrating the amount of main memory made available to a process (and the size of the cache memory available on the processor) it is possible to achieve a virtual memory whose average access time is better than that of physical memory and whose size is comparable to that of secondary storage. The best of both worlds [Thank you, Locality].
The hardware required for Virtual Memory is complex but, these days, cheap [in fact the Memory Management hardware is now just a portion of the processor chip]. Overall, a great idea. [When considering machines with Virtual Memory, always look at what they do to reduce the main memory required to accommodate page tables.]

Performance

We will now try to determine, given a map from a Paged Virtual Memory to Physical Memory and the access time of Physical Memory, the expected access time of Virtual Memory. We will assume that the translation cost is zero (we could always include the translation cost as a percentage of the memory access time). We define some terms.
Hit Cost = H :
Time it takes to access virtual memory when there is no page fault. Under the assumption that the translation cost is 0, H equals the access time of physical memory.
Hit Ratio = R :
Ratio between the number of accesses to virtual memory that do not cause a page fault and the number of accesses that do cause a page fault.
Miss Penalty = M :
Time it takes to access virtual memory when there is a page fault.
It is then easy to see that the expected access time to virtual memory is:
                   R*H + M    R*H + M       M
  E(access time) = ------- ~= ------- = H + -
                    R + 1        R          R
For example, if H is 0.1 microseconds, M is 15 milliseconds, and R is 100,000, then the expected access time to virtual memory is 0.25 microseconds, that is, 2.5 times the access time to physical memory.
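Here is a minimal sketch of this arithmetic in Python (all times in microseconds; the variable names are just for the illustration):

  H = 0.1         # hit cost: 0.1 microseconds
  M = 15000.0     # miss penalty: 15 milliseconds = 15,000 microseconds
  R = 100000      # hit ratio: faultless accesses per faulting access

  expected = H + M / R
  print(expected)        # 0.25 microseconds
  print(expected / H)    # 2.5 times the physical memory access time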

The beauty of virtual memory is that it is like a game that can be played more than once: not only between main memory and secondary storage, but also between cache memory and main memory. In all cases things work well for us because programs show "Locality" in their behavior. Let's consider the combination of cache-to-main-memory and main-memory-to-disk address translation. Let Hc, Rc, Mc be the hit cost, hit ratio, and miss penalty for the cache-to-main-memory level, and Hm, Rm, Mm the corresponding quantities for the main-memory-to-disk level.
We can consider two cases: (1) the case where the cache address space operates on top of the main memory virtual address space (the cache works with virtual addresses), and (2) the converse case (the cache works with physical, or real, addresses).

Clearly case 2 is less efficient than case 1, but it is the way things are usually implemented in hardware. One reason is that if the cache works with virtual addresses then it becomes tied to a specific process, and on a context switch the cache must be flushed and cold started. In order to take full advantage of the cache when it uses real addresses, it is necessary to increase the hit ratio for main memory, for example from 100,000 to 600,000 (in this case the total access time becomes 0.51 microseconds). You may have heard that if you greatly increase the power of a car's engine, you also have to improve the brakes, suspension, steering, body rigidity, etc. The same is true here: if we add a cache, we should also increase the size of main memory so as to decrease the main memory fault rate and bring the performance of the whole memory system close to that of the cache.
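The same formula can be applied at both levels, treating the expected main memory access time as the miss penalty seen by the cache. A minimal sketch, where the main memory numbers are those of the earlier example and the cache parameters Hc and Rc are invented assumptions:

  Hm, Rm, Mm = 0.1, 100000, 15000.0   # main memory: hit cost (us), hit ratio, miss penalty (us)
  Hc, Rc = 0.02, 20                   # cache: assumed hit cost (us) and hit ratio

  Em = Hm + Mm / Rm                   # expected main memory access time: 0.25 us
  Ec = Hc + Em / Rc                   # expected access time with the cache on top
  print(Em, Ec)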

Hardware support for Virtual Memory

Of course the memory management hardware unit supports translation from virtual to physical addresses, but it does much more, for example checking access rights and recording which pages have been referenced or modified. One fundamental problem is that, as virtual memories became larger, page tables also became larger, so the amount of page table kept in physical memory must be managed. Different architectures deal with this problem differently. The DIGITAL VAX put the page tables themselves in virtual memory (though not all of them, of course). IBM's System/38 and its successors inverted the page table, using a frame table (whose size is tied to the size of physical memory, not virtual memory). The Intel 386 used hierarchical page tables. Same problem, different ways to deal with it.
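As a rough illustration of the inverted (frame table) idea, here is a sketch where the table has one entry per resident page, keyed by (process, virtual page); a Python dictionary stands in for the hashed lookup, and all values are made up:

  PAGE_SIZE = 4096
  # One entry per occupied physical frame, so the table size tracks physical
  # memory: (process id, virtual page number) -> physical frame number.
  frame_table = {(42, 0): 7, (42, 1): 3, (17, 0): 8}

  def translate_inverted(pid, virtual_address):
      page, offset = divmod(virtual_address, PAGE_SIZE)
      frame = frame_table.get((pid, page))
      if frame is None:
          raise Exception("page fault")       # page not resident in any frame
      return frame * PAGE_SIZE + offset

  print(translate_inverted(42, 4096 + 8))     # frame 3: 3*4096 + 8 = 12296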

The PDP-11 Segmented Virtual Memory

The PDP-11 is a very old minicomputer that was very popular at the beginning of the 70s. It came in many versions and uses segmented virtual memory. It is interesting, among other things, because many models had a physical address space bigger than the virtual address space. We will describe a model where a virtual address is 16 bits (64KB) and a physical address is 18 bits (256KB). The PDP-11 uses, in simple form, many of the features that are common in current virtual memory systems.

Since the PDP-11 has 8 segments (the machine is so old that at the time these segments were called pages), a virtual address consists of a 3-bit segment-id field and a 13-bit displacement-in-segment field. The 8 Page Descriptor Registers (PDRs) and 8 Page Address Registers (PARs) form a kind of page table for this machine, much smaller than in the "real" virtual memory systems that came later.
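A simplified sketch of this translation (for simplicity the PAR is taken to hold a full physical base address, the PDR length and access checks are omitted, and the register contents are invented):

  # 16-bit virtual address = 3-bit segment number + 13-bit displacement (segments up to 8KB).
  PAR = [0o000000, 0o100000, 0o140000, 0o200000,
         0o240000, 0o300000, 0o340000, 0o740000]   # example physical bases, 18-bit range

  def translate_pdp11(va):
      segment      = (va >> 13) & 0o7        # top 3 bits select the PAR/PDR pair
      displacement =  va        & 0o17777    # low 13 bits
      return PAR[segment] + displacement     # 18-bit physical address

  print(oct(translate_pdp11(0o020020)))      # segment 1, displacement 0o20 -> 0o100020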

Hard versus Soft Page Faults

Suppose you have a page fault. If there is a free frame, we can immediately load the needed page into that frame. But if no frame is available (and no replaceable page is available that has not been written to), then we first have to write the replaced page to disk and then read in the needed page. This means that one page fault costs more than one IO operation: double jeopardy. Of course other processes may use the processor while I wait for my page, but it is still bad for my program. A possibility is to keep available a pool of "free" frames: the content of these frames is copied to buffers and written to disk before replacement is required, and the frames are marked available for replacement. Then, in the case of a page fault: if it is for one of the pages that were occupying the "free" frames, we have a Soft Page Fault, since we can still find the original page in the frame; if it is for another page, we have a Hard Page Fault, since the page has to be brought in from disk. Now even a Hard page fault waits for only a single IO operation. Of course Soft page faults are handled much more quickly than Hard page faults.
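A sketch of how the decision might look, with the pool of cleaned frames kept as a simple table (names and data invented for the illustration):

  # Pages whose frames were cleaned ahead of time: evicted page -> its old frame.
  # Their contents are still in memory, so a fault on them needs no disk read.
  free_pool = {"P7": 2, "P9": 5}

  def handle_fault(page):
      if page in free_pool:
          frame = free_pool.pop(page)             # Soft fault: just reclaim the frame
          return "soft fault, page %s reclaimed from frame %d" % (page, frame)
      _, frame = free_pool.popitem()              # Hard fault: one read from disk
      return "hard fault, page %s read from disk into frame %d" % (page, frame)

  print(handle_fault("P9"))
  print(handle_fault("P3"))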

Working Set Principle

If you go over the various replacement policies you find that they assume that a program, say with n pages, executes with m frames. Nothing is said about how m was chosen. The replacement policy says only how to use those frames. Here we discuss how to choose m.

Suppose that we are in a system where pages are backed to a single disk and this disk has an access time of 10 ms. Then the maximum number of page faults that our system can handle is 100 per second. If we run my program on this system and give it a lot of page frames, we may get, say, 20 page faults per second. If the number of frames available to my program is reduced, we may get 40 page faults per second. Reducing the number of frames further, we may get to 100 faults per second. But beyond that point, no matter how few frames my program is given, we will not get more than 100 faults per second. If I want my program to keep the CPU busy, then, if f is the number of faults it generates per second, M is the page fault penalty, and c is the compute time per second, we should have c >= f*M [in our case, measuring c in milliseconds, c >= 10*f], so that the program computes at least as much as it waits for faults and the CPU does not have to wait for the IO channels.
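A quick numerical check of that condition, using the 10 ms penalty above and otherwise invented numbers (c and M in milliseconds, f in faults per second):

  M = 10       # page fault penalty in milliseconds
  f = 40       # page faults generated per second
  c = 600      # milliseconds of compute the program gets per second

  print(c >= f * M)    # True: 600 ms of compute vs 400 ms spent waiting on faults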

If instead of thinking only of my program I think of all the programs currently executing: if cumulatively they create 60 page faults per second, then the disk is not fully utilized; if the faults are close to 100 per second, then probably programs are waiting on page faults more than they should. If in one second a program spends more time waiting for page faults than computing, then that program has more page faults than it should. Clearly, if programs have a mean time between faults that is no less than the fault penalty, then on average the CPU will not be idle while the disk is handling faults. If we make them equal, then CPU and IO are equally busy. Going back to our analysis of the performance of virtual memory, we see that in this "ideal" case the access time to virtual memory is 2 times the access time to physical memory.
We call the Working Set of a program the set of distinct pages used by that program in a specified time interval. We (the operating system) try to allocate to a program enough frames to accommodate a desired working set. A desirable working set is one, as discussed above, where for the chosen process the CPU time is not less than the fault handling delay time. Since the operating system may be unable to give each process the desired number of frames, the OS has two choices: let programs run even if they have too many page faults (they are thrashing), or swap out the programs whose desired working set cannot be satisfied. The OS follows the second policy. This is the Working Set Principle: let a program run only if we can accommodate its desired working set.
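A minimal sketch of measuring a working set as the set of distinct pages touched in a window of recent references (the reference string and window size are arbitrary examples):

  def working_set(references, window):
      return set(references[-window:])     # distinct pages in the last `window` references

  refs = [1, 2, 1, 3, 2, 2, 4, 1, 2, 5]    # page reference string (made up)
  ws = working_set(refs, 5)
  print(ws, len(ws))                       # {1, 2, 4, 5} -> allocate at least 4 frames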

ingargiola.cis.temple.edu