
Hardware and Control Structures

Terminology

Execution of a Process

  1. The operating system brings into main memory a few pieces of the program (the resident set: the portion of the process that is in main memory)
  2. An interrupt is generated when an address is needed that is not in main memory
  3. The operating system places the process in a Blocked state
  4. The operating system issues a disk I/O read request to bring in the piece of the process that contains the needed logical address
  5. Another process is dispatched to run while the disk I/O takes place
  6. An interrupt is issued when the disk I/O is complete, which causes the operating system to place the affected process in the Ready state
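The sequence above can be sketched as a small simulation. This is illustrative only (not a real OS API); the class and function names are assumptions made for the example.

```python
from collections import deque

# Illustrative sketch of the page-fault sequence: the faulting process is
# blocked, a disk read is "issued", another process is dispatched, and on
# the I/O-complete interrupt the process is moved back to Ready.
class Process:
    def __init__(self, name):
        self.name = name
        self.state = "READY"
        self.resident_set = set()   # pages of this process in main memory

def reference(proc, page, ready_queue, io_log):
    if page in proc.resident_set:
        return "hit"
    proc.state = "BLOCKED"                      # step 3: block the process
    io_log.append((proc.name, page))            # step 4: issue disk I/O read
    if ready_queue:                             # step 5: dispatch another
        ready_queue.popleft().state = "RUNNING"
    proc.resident_set.add(page)                 # disk I/O completes...
    proc.state = "READY"                        # step 6: back to Ready
    return "page fault"

a, b = Process("A"), Process("B")
ready, log = deque([b]), []
assert reference(a, 7, ready, log) == "page fault"
assert reference(a, 7, ready, log) == "hit"     # page is now resident
assert b.state == "RUNNING" and log == [("A", 7)]
```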

Implications of breaking up process into pieces

More processes may be maintained in main memory

A process may be larger than all of main memory

Real and Virtual Memory

Real Memory

Characteristics of Paging and Segmentation

| Simple Paging | Virtual Memory Paging | Simple Segmentation | Virtual Memory Segmentation |
| --- | --- | --- | --- |
| Main memory partitioned into small fixed-size chunks called frames | Main memory partitioned into small fixed-size chunks called frames | Main memory not partitioned | Main memory not partitioned |
| Program broken into pages by the compiler or memory management system | Program broken into pages by the compiler or memory management system | Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer) | Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer) |
| Internal fragmentation within frames | Internal fragmentation within frames | No internal fragmentation | No internal fragmentation |
| No external fragmentation | No external fragmentation | External fragmentation | External fragmentation |
| Operating system must maintain a page table for each process showing which frame each page occupies | Operating system must maintain a page table for each process showing which frame each page occupies | Operating system must maintain a segment table for each process showing the load address and length of each segment | Operating system must maintain a segment table for each process showing the load address and length of each segment |
| Operating system must maintain a free frame list | Operating system must maintain a free frame list | Operating system must maintain a list of free holes in main memory | Operating system must maintain a list of free holes in main memory |
| Processor uses page number, offset to calculate absolute address | Processor uses page number, offset to calculate absolute address | Processor uses segment number, offset to calculate absolute address | Processor uses segment number, offset to calculate absolute address |
| All the pages of a process must be in main memory for the process to run, unless overlays are used | Not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed | All the segments of a process must be in main memory for the process to run, unless overlays are used | Not all segments of a process need be in main memory for the process to run; segments may be read in as needed |
|  | Reading a page into main memory may require writing a page out to disk |  | Reading a segment into main memory may require writing one or more segments out to disk |

Locality and Virtual Memory

To accommodate as many processes as possible, only a few pieces of each process are maintained in main memory.

But main memory may be full, so to bring one piece in, the OS must swap another piece out.

The OS must not swap out a piece of a process just before that piece is needed.

If it does this too often, it leads to thrashing: the system spends most of its time swapping pieces in and out rather than executing instructions.

To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future

Principle of Locality

The principle that the instruction currently being fetched/executed is very close in memory to the instruction to be fetched/executed next.

So if we keep the most active segments of program and data in the cache, overall execution speed for the program will be optimized.

Our strategy for cache utilization should maximize the number of cache read/write operations, in comparison with the number of main memory read/write operations.

Support Needed for VM

For virtual memory to be practical and effective:

  • Hardware must support paging and/or segmentation
  • The operating system must include software for managing the movement of pages and/or segments between secondary memory and main memory

Paging

In simple paging, the page table is loaded into memory when the process is loaded into memory.

Each page table entry contains the frame number of the corresponding page in memory.

With a virtual memory scheme, the page table becomes more complex.

Each page table entry contains:

  1. Present bit (P)
    • Indicates whether the page is in main memory or not.
    • If it is in memory, the entry contains the frame number of the corresponding page in main memory
    • If it is not in memory, the entry may contain the address of that page on disk, or the page number may be used to index another table (often in the PCB) to obtain the address of that page on disk
  2. Modify bit (M)
    • Indicates if the page has been altered since it was last loaded into main memory
    • If no change has been made, the page does not have to be written to the disk when it needs to be swapped out
  3. Other control bits
    • May be present if protection is managed at the page level
    • a read-only/read-write bit
    • protection level bit: kernel page or user page
  4. Frame number
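A page table entry can be pictured as a single machine word with these fields packed into it. The field widths below (1-bit P, 1-bit M, 20-bit frame number) are assumptions for the sketch, not any specific architecture's layout.

```python
# Illustrative packing of a page table entry into one integer.
P_BIT = 1 << 21               # present bit: page is in main memory
M_BIT = 1 << 20               # modify bit: page altered since loaded
FRAME_MASK = (1 << 20) - 1    # low 20 bits hold the frame number

def make_pte(present, modified, frame):
    return (P_BIT if present else 0) | (M_BIT if modified else 0) | (frame & FRAME_MASK)

pte = make_pte(present=True, modified=False, frame=0x3A7)
assert pte & P_BIT                  # page is resident
assert not (pte & M_BIT)            # clean: no write-back needed on eviction
assert pte & FRAME_MASK == 0x3A7    # frame number recovered
```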

Translation of Addresses

When a particular process is running, a register holds the starting address of the page table for that process.

The page number of a virtual address is used to index that table to get the frame number.

The frame number is then combined with the offset of the virtual address to produce the desired physical address.
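The translation can be sketched in a few lines. The 4 KiB page size (12-bit offset) is an assumption chosen for the example.

```python
PAGE_SIZE = 4096    # assumed page size; the offset is the low 12 bits
OFFSET_BITS = 12

def translate(vaddr, page_table):
    """Split the virtual address into (page number, offset), look up the
    frame number, and combine it with the offset."""
    page = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    frame = page_table[page]        # a missing key would be a page fault
    return (frame << OFFSET_BITS) | offset

page_table = {0: 5, 1: 9}
assert translate(0x1ABC, page_table) == (9 << 12) | 0xABC   # page 1 -> frame 9
```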


Page Table Structures

A page table is needed when translating a virtual address to a physical address.

Page tables are variable in length (depending on the process size).

There is one page table for each process.

But each process can occupy a huge amount of virtual memory.

To overcome this problem, most virtual memory schemes store page tables in virtual memory rather than real memory.

When a process is running, at least part of its page table must be in memory.

Approach to organize the page table:

  1. Two level scheme
  2. Inverted Page Table Structure
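A two-level scheme can be sketched as a lookup through a root (outer) table whose entries point to second-level page tables. The 10+10+12 bit split used here is an assumption (it matches the classic 32-bit layout, but the notes do not fix one).

```python
# Sketch of a two-level page table lookup.
def translate_two_level(vaddr, root):
    dir_idx = (vaddr >> 22) & 0x3FF    # top 10 bits index the root table
    tbl_idx = (vaddr >> 12) & 0x3FF    # next 10 bits index the 2nd-level table
    offset = vaddr & 0xFFF             # low 12 bits: offset within the page
    second = root[dir_idx]             # missing entry: whole 2nd-level table absent
    frame = second[tbl_idx]
    return (frame << 12) | offset

root = {0: {3: 42}}                    # only one 2nd-level table is resident
vaddr = (0 << 22) | (3 << 12) | 0x123
assert translate_two_level(vaddr, root) == (42 << 12) | 0x123
```

The advantage is that second-level tables for unused regions of the address space need not exist in memory at all.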

Translation Lookaside Buffer (TLB)

Each virtual memory reference can cause two physical memory accesses:

  1. One to fetch the appropriate page table entry
  2. One to fetch the desired data

To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache called a translation lookaside buffer (TLB)


Given a virtual address, the processor examines the TLB (Figure 8.7)

Figure 8.8 is a flowchart that shows the use of TLB.
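The TLB-first flow can be sketched as a small cache in front of the page table: a hit returns the frame number immediately, a miss walks the page table and installs the mapping. The capacity and LRU eviction here are assumptions for the example.

```python
from collections import OrderedDict

# Sketch of TLB-first translation with simple LRU eviction.
class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()       # page -> frame, in LRU order

    def lookup(self, page, page_table):
        if page in self.entries:           # TLB hit: no page-table access
            self.entries.move_to_end(page)
            return self.entries[page], True
        frame = page_table[page]           # TLB miss: extra memory access
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[page] = frame
        return frame, False

tlb = TLB()
pt = {1: 9}
assert tlb.lookup(1, pt) == (9, False)     # first reference misses
assert tlb.lookup(1, pt) == (9, True)      # repeat reference hits
```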


Page Size

The smaller the page size, the smaller the amount of internal fragmentation

The physical characteristics of most secondary-memory devices (disks) favor a larger page size for more efficient block transfer of data

The design issue of page size is related to the size of physical main memory and program size: main memory is getting larger, and the address space used by applications is also growing.

This is most obvious on personal computers, where applications are becoming increasingly complex.
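The trade-off can be made concrete with a back-of-the-envelope calculation: smaller pages mean less wasted space in the last frame but more page-table entries. The process and page sizes below are arbitrary example values.

```python
# Smaller pages -> more page-table entries but less internal fragmentation.
def tradeoff(process_bytes, page_bytes):
    pages = -(-process_bytes // page_bytes)      # ceiling division
    wasted = pages * page_bytes - process_bytes  # internal fragmentation
    return pages, wasted

# A (1 MiB + 100 byte) process with 4 KiB vs 64 KiB pages:
small = tradeoff(2**20 + 100, 4096)
large = tradeoff(2**20 + 100, 65536)
assert small[0] > large[0]    # smaller pages -> larger page table
assert small[1] < large[1]    # smaller pages -> less wasted space
```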

Segmentation

Segmentation allows the programmer to view memory as consisting of multiple address spaces, or segments.

Segments may be of unequal, dynamic size.

A memory reference consists of a segment number and an offset.

Each process has its own segment table

Each segment table entry contains:

  1. Present bit (P)
  2. Modify bit (M)
  3. Other Control Bits

If segment is in main memory, the entry contains the starting address and the length of that segment

Other control bits may be present if protection and sharing is managed at the segment level

Logical to physical address translation is similar to paging except that the offset is added to the starting address (instead of being appended)

The basic mechanism for reading a word from memory involves the translation of a virtual address consisting of a segment number and offset into physical address using segment table.

The size of the segment table depends on the size of the process, so we cannot expect to hold it in registers; it must be in memory to be accessed.

Fig. 8.12 suggests a hardware implementation of this scheme:

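The scheme can be sketched in a few lines: note that the offset is added to the base (not appended as in paging), and the length field gives a protection check. The table contents are example values.

```python
# Sketch of segment translation: base + offset, with a length check.
def translate_segment(seg, offset, segment_table):
    base, length = segment_table[seg]
    if offset >= length:
        raise MemoryError("offset beyond segment length")  # protection fault
    return base + offset          # addition, not concatenation

segs = {0: (1000, 500), 1: (4000, 200)}    # segment -> (base, length)
assert translate_segment(0, 123, segs) == 1123
assert translate_segment(1, 50, segs) == 4050
```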

Combined Paging and Segmentation

Main memory is broken up into a number of fixed-size frames

A process is:

  • broken up into a number of segments by the programmer
  • each segment is in turn broken up into a number of fixed-size pages, equal in length to a main memory frame

Segmentation is visible to the programmer

Paging is transparent to the programmer

From a programmer’s point of view, a logical address still consists of a segment number and offset.

From the system’s point of view, the segment offset is viewed as a page number and page offset for a page within the specified segment.

Each process has:

  • one segment table
  • several page tables (one page table per segment)

The virtual address consists of:

  • a segment number
  • a page number
  • an offset within the page

Segment table entry:

  1. Length field
  2. Base field - the physical address of the page table of that segment
  3. Other control bits - used for sharing and protection purposes
  4. The present and modify bits are not needed because these matters are handled at the page level

Page table entry:

  1. Present bit (P)
  2. Modify bit (M)
  3. Other Control Bits
  4. Frame Number.

Protection and sharing info most naturally resides in segment table entry

Figure 8.13 suggests a structure to support combined paging/segmentation.

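The combined lookup can be sketched as two steps: the segment table entry yields that segment's page table, then the segment offset splits into (page number, page offset). The 256-byte page size and table contents are assumptions for the example.

```python
# Sketch of combined segmentation + paging translation.
PAGE = 256    # assumed page size for the example

def translate_combined(seg, seg_offset, seg_table):
    page_table = seg_table[seg]              # base field -> this segment's page table
    page, offset = divmod(seg_offset, PAGE)  # segment offset = (page, page offset)
    frame = page_table[page]
    return frame * PAGE + offset

seg_table = {2: {0: 7, 1: 11}}               # segment 2 has a 2-page page table
assert translate_combined(2, 300, seg_table) == 11 * 256 + 44
```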

Operating System Software

The design of the memory management portion of an operating system depends on three fundamental areas of choice:

  1. Whether or not to use virtual memory techniques
  2. The use of paging or segmentation or both
  3. The algorithms/policy employed for various aspects of memory management

Issues with Virtual Memory: Performance

Need to minimize page faults

Fetch Policy

Determines when a page should be brought into memory

Demand Paging

A page is brought into main memory only when a reference is made to a location on that page (i.e., only when a page fault occurs).

Pre-paging

Pages other than the one demanded by a page fault are brought in, exploiting the efficiency of reading several contiguous pages from disk at once.

Placement Policy

Determines where in real memory a process piece resides

For pure segmentation systems, placement is an important design issue (policies such as best-fit and first-fit apply).

For paging (and paged segmentation), placement is irrelevant: the address translation hardware can access any page-frame combination with equal efficiency.

Replacement Policy

Deals with the selection of a page in main memory to be replaced when a new page is brought in

Objective is that the page that is removed will be the page least likely to be referenced in the near future

The more sophisticated the replacement policy, the greater the hardware and software overhead to implement it

Not all pages in main memory can be selected for replacement

Some frames are locked (cannot be paged out); this is known as frame locking:

  • much of the kernel of the operating system
  • key control structures
  • I/O buffers

Locking is achieved by associating a lock bit with each frame

The OS might decide that the set of pages considered for replacement should be:

  • limited to the frames of the process that caused the page fault, or
  • extended to all page frames in main memory

The decision for the set of pages to be considered for replacement is related to the resident set management strategy.

No matter what is the set of pages considered for replacement, the replacement policy deals with algorithms that will choose the page within that set

Replacement Policy Algorithms

  1. Optimal Policy
  2. Least recently Used (LRU)
  3. First In First Out (FIFO)
  4. Clock Policy (not covered in the syllabus)

Example:

Resident Set = 3 frames

The process execution requires references to five distinct pages.

The page address stream formed by executing the program is given below

Optimal Policy

Selects for replacement the page for which the time to the next reference is the longest. It is impossible to implement (it requires perfect knowledge of future references), but it serves as a benchmark for other algorithms.

Least Recently Used (LRU)

Replaces the page that has not been referenced for the longest time.

First In First Out (FIFO)

Replaces the page that has been in main memory the longest, treating the allocated frames as a circular buffer.
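FIFO and LRU can be simulated by counting page faults over a reference stream with a 3-frame resident set. The stream below is an illustrative example:

```python
from collections import OrderedDict, deque

def fifo_faults(stream, frames=3):
    mem, order, faults = set(), deque(), 0
    for p in stream:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest-loaded page
            mem.add(p); order.append(p)
    return faults

def lru_faults(stream, frames=3):
    mem, faults = OrderedDict(), 0
    for p in stream:
        if p in mem:
            mem.move_to_end(p)                 # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used
            mem[p] = True
    return faults

stream = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
assert fifo_faults(stream) == 9
assert lru_faults(stream) == 7     # LRU faults less often on this stream
```

LRU wins here because the pages referenced longest ago really are the least likely to be needed again, which is exactly the locality assumption the policy relies on.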

Page Buffering

A strategy that improves paging performance and allows the use of a simpler page replacement policy.

A replaced page is not lost, but rather assigned to one of these two lists:

  • the free page list, if the page has not been modified
  • the modified page list, if it has

When a page is to be read in (loaded into the main memory), the page frame at the head of the free page list is used, destroying the page that was there.

When an unmodified page is to be replaced, it remains in memory and its page frame is added to the tail of the free page list.

When a modified page is to be written out and replaced, its page frame is added to the tail of the modified page list.

At each page fault the two lists are first examined to see if the needed page is still in main memory

If it is STILL in memory: the page is returned to the resident set of the process at very little cost (no disk I/O is needed).

If it is NOT in memory: the page must be read back in from disk.
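The two-list mechanism can be sketched as follows; the list handling is simplified and the function names are made up for the example.

```python
from collections import deque

# Sketch of page buffering: evicted pages go on the free list (clean) or
# the modified list (dirty) but stay in memory, so a quick re-reference
# can reclaim them without any disk I/O.
free_list, modified_list = deque(), deque()

def evict(page, modified):
    (modified_list if modified else free_list).append(page)

def fault(page):
    for lst in (free_list, modified_list):
        if page in lst:
            lst.remove(page)       # still buffered: reclaim, no disk read
            return "reclaimed"
    return "disk read"             # truly gone: read it back from disk

evict(7, modified=False)
assert fault(7) == "reclaimed"     # caught before its frame was reused
assert fault(8) == "disk read"
```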

Cleaning Policy

Concerned with determining when a modified page should be written out to secondary memory (disk)

Demand Cleaning

A page is written out to secondary memory only when it has been selected for replacement.

Precleaning

Modified pages are written out before their page frames are needed, so that pages can be written out in batches.

A better approach incorporates page buffering: replaced pages are placed on the free or modified list, and pages on the modified list are periodically written out in batches and moved to the free list.