
I/O Categories and Techniques

External devices that engage in I/O with computer systems can be grouped into three categories:

  1. Human readable
    • suitable for communicating with the computer user
    • printers, terminals, video display, keyboard, mouse
  2. Machine readable
    • suitable for communicating with electronic equipment
    • disk drives, USB keys, sensors, controllers
  3. Communication
    • suitable for communicating with remote devices
    • modems, digital line drivers, network interface card (NIC)

Differences in I/O devices

Evolution of the I/O Function

  1. Simple microprocessor-controlled device
    • Processor directly controls a peripheral device
  2. Processor uses programmed I/O without interrupts
    • A controller or I/O module is added
  3. Processor need not wait for I/O to be performed
    • Same configuration as step 2, but now interrupts are employed
  4. Block of data can be transferred without involving the processor
    • The I/O module is given direct control of memory via DMA (Direct Memory Access)
  5. Processor time can be utilized better
    • The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O
  6. A large set of I/O devices can be controlled, with minimal processor involvement
    • The I/O module has a local memory of its own and is, in fact, a computer in its own right

Techniques for Performing I/O

  1. Programmed I/O
    • the processor issues an I/O command on behalf of a process to an I/O module
    • the process then busy-waits for the operation to complete before proceeding (a sketch of this busy wait appears after this list)
  2. Interrupt-driven I/O
    • the processor issues an I/O command on behalf of a process
      • if non-blocking – processor continues to execute instructions from the process that issued the I/O command
      • if blocking – the next instruction the processor executes is from the OS, which will put the current process in a blocked state and schedule another process
  3. Direct Memory Access (DMA)
    • a DMA module controls the exchange of data between main memory and an I/O module
    • the processor is involved only at the beginning and end of the transfer
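
To make the contrast concrete, here is a minimal C sketch of the busy-wait loop at the heart of programmed I/O. The device structure, register names, and READY bit are invented for illustration; real hardware would expose memory-mapped registers instead of the simulated module used here.

```c
/* Minimal sketch of programmed (polled) I/O, using a simulated device.
 * The struct, register names, and READY flag are illustrative assumptions,
 * not a real hardware interface. */
#include <stdio.h>

#define DEV_READY 0x01          /* hypothetical "operation complete" status bit */

struct io_module {              /* stand-in for a device's status/data registers */
    unsigned status;
    unsigned data;
};

/* Pretend the device needs a few polls before the data is ready. */
static void simulate_device(struct io_module *dev) {
    static int ticks = 0;
    if (++ticks >= 3) {
        dev->data = 42;
        dev->status |= DEV_READY;
    }
}

int main(void) {
    struct io_module dev = {0, 0};

    /* 1. Processor issues the I/O command (here: just a print). */
    printf("issue READ command to I/O module\n");

    /* 2. Busy wait: repeatedly interrogate the status register.
     *    The processor does no other useful work in this loop. */
    while (!(dev.status & DEV_READY)) {
        simulate_device(&dev);  /* on real hardware this happens asynchronously */
    }

    /* 3. Transfer the data word through the processor. */
    printf("data received: %u\n", dev.data);
    return 0;
}
```

The point of the sketch is the while loop: until the status bit flips, the processor does nothing but interrogate the device.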

detail

When the processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. In the case of programmed I/O, the I/O module performs the requested action, then sets the appropriate bits in the I/O status register, but takes no further action to alert the processor; in particular, it does not interrupt the processor. Thus, after the I/O instruction is invoked, the processor must take an active role in determining when the operation is complete: it periodically checks the status of the I/O module until it finds that the operation has finished. The processor may have to wait a long time for the I/O module of concern to be ready for either reception or transmission of more data, repeatedly interrogating the module's status while it waits. As a result, the performance of the entire system is severely degraded.

An alternative, known as interrupt-driven I/O, is for the processor to issue an I/O command to a module and then go on to do other useful work. The I/O module interrupts the processor to request service when it is ready to exchange data with the processor. The processor then executes the data transfer, as before, and resumes its former processing.

Interrupt-driven I/O, though more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus, both of these forms of I/O suffer from two inherent drawbacks:

  1. The I/O transfer rate is limited by the speed with which the processor can test and service a device
  2. The processor is tied up in managing each I/O transfer; a number of instructions must be executed for every transfer

When large volumes of data are to be moved, a more efficient technique is required: direct memory access (DMA). The DMA function can be performed by a separate module on the system bus, or it can be incorporated into an I/O module. In either case, the technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the following information:

  1. Whether a read or write is requested
  2. The address of the I/O device involved
  3. The starting location in memory to read from or write to
  4. The number of words to be read or written

The processor then continues with other work. It has delegated this I/O operation to the DMA module, and that module will take care of it. The DMA module transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus, the processor is involved only at the beginning and end of the transfer.
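
As a sketch of what delegating the transfer looks like, the following C fragment models the command the processor hands to a DMA module and the completion interrupt. The struct fields mirror the four items listed above; the function names and the callback standing in for the interrupt are assumptions made for illustration, not a real driver interface.

```c
/* Sketch of the information the processor hands to a DMA module, and of the
 * "involved only at start and end" pattern. Names and the callback used to
 * stand in for the completion interrupt are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

enum dma_dir { DMA_READ, DMA_WRITE };   /* whether a read or write is requested */

struct dma_command {
    enum dma_dir direction;   /* read from or write to the device           */
    unsigned     device_id;   /* address of the I/O device involved         */
    void        *mem_start;   /* starting location in memory                */
    size_t       word_count;  /* number of words (here: bytes) to transfer  */
};

/* Stand-in for the DMA module: moves the whole block and then signals
 * completion (modelled here as a callback instead of a real interrupt). */
static void dma_transfer(const struct dma_command *cmd,
                         const unsigned char *device_buffer,
                         void (*on_complete)(void)) {
    if (cmd->direction == DMA_READ)
        memcpy(cmd->mem_start, device_buffer, cmd->word_count);
    on_complete();
}

static void completion_interrupt(void) {
    printf("DMA complete: processor notified by interrupt\n");
}

int main(void) {
    unsigned char device_data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned char buffer[8] = {0};

    struct dma_command cmd = {DMA_READ, /*device_id=*/3, buffer, sizeof buffer};

    /* Processor involvement #1: issue the command, then go do other work. */
    dma_transfer(&cmd, device_data, completion_interrupt);

    /* Processor involvement #2: after the interrupt, the data is in memory. */
    printf("first word in memory: %u\n", buffer[0]);
    return 0;
}
```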

Direct Memory Access (DMA)

Direct Memory Access (DMA) is a capability provided by the computer architecture that allows data to be transferred between an I/O device and main memory without passing each word through the processor.

I/O Facilities Design

I/O facility design objectives

  1. Efficiency
    • Major effort in I/O design
    • Important because I/O operations often form a bottleneck
      • Cause of the bottleneck: I/O transfer is slow compared with the speed of the CPU, so the question is how the data transfer can be improved
    • Most I/O devices are extremely slow compared with main memory and the processor
      • Even with the vast size of main memory in today’s machines, it will still often be the case that I/O is not keeping up with the activities of the processor
      • Handled through multiprogramming, which keeps the processor busy while processes wait on I/O
    • The area that has received the most attention is disk I/O
  2. Generality
    • Desirable to handle all devices in a uniform manner
    • applies to both the way processes view I/O devices and the way the operating system manages I/O devices and operations

A Model of I/O Organization

The three most important logical structures are presented in the figure.

I/O Organization

A particular operating system may not conform exactly to these structures; they are only an approximation.

The following layers are involved:

  1. Logical I/O
  2. Device I/O
  3. Scheduling and Control
  4. Directory Management
  5. File System
  6. Physical Organization


TWO TYPES OF I/O DEVICES

In terms of data transfer

Block-oriented devices

Stream-oriented devices

I/O Buffering

Consider the following situation: a user process issues a read request for a 512-byte block of data, to be read into a data area within the process’s address space, say at virtual locations 1000 through 1511. The simplest approach is to issue the I/O command to the device and wait for the data to become available.

There are two problems with this approach.

  1. The user program is hung up waiting for the relatively slow I/O to complete
  2. It interferes with the operating system’s swapping decisions
    • Locations 1000 through 1511 must remain in main memory during the course of the block transfer; otherwise some of the data may be lost
    • Risk of single-process deadlock
      • The process is suspended waiting on the I/O, and the OS swaps it out
      • The process is then blocked waiting for the I/O event, while the I/O operation is blocked waiting for the process to be swapped back in

To avoid this deadlock, the user memory involved in the transfer can be locked in main memory (“frame locking”) for the duration of the I/O operation.

Buffering

Why buffering?

No buffer: the OS directly accesses the device when it needs to, moving data directly to and from the user process’s address space

Three types of buffering:

Single Buffer

The operating system assigns a buffer in system main memory for an I/O request.

This is the simplest type of support that the operating system can provide.

Block-Oriented Single Buffer

Stream-Oriented Single Buffer

  1. Line-at-a-time operation
    • appropriate for scroll-mode terminals (dumb terminals)
    • user input is one line at a time with a carriage return (enter key) signaling the end of a line
    • output to the terminal is similarly one line at a time
  2. Byte-at-a-time operation
    • used on forms-mode terminals
    • when each keystroke is significant
    • other peripherals such as sensors and controllers

Double Buffer

Circular Buffer
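
Since the three buffering schemes are only named above, here is a minimal sketch of the last one: a circular (ring) buffer that a producer (the device side) fills and a consumer (the process side) drains. The slot count, field names, and single-threaded driver in main are illustrative assumptions; a real implementation would add synchronization between producer and consumer.

```c
/* Minimal circular-buffer sketch: a fixed-size ring with put/get indices,
 * as a producer (the device) and a consumer (the process) would share. */
#include <stdio.h>

#define BUF_SLOTS 4

struct circ_buf {
    int  slots[BUF_SLOTS];
    int  in;     /* next slot the producer fills   */
    int  out;    /* next slot the consumer empties */
    int  count;  /* how many slots are in use      */
};

static int put(struct circ_buf *b, int item) {
    if (b->count == BUF_SLOTS) return -1;        /* full: producer must wait  */
    b->slots[b->in] = item;
    b->in = (b->in + 1) % BUF_SLOTS;             /* wrap around the ring      */
    b->count++;
    return 0;
}

static int get(struct circ_buf *b, int *item) {
    if (b->count == 0) return -1;                /* empty: consumer must wait */
    *item = b->slots[b->out];
    b->out = (b->out + 1) % BUF_SLOTS;
    b->count--;
    return 0;
}

int main(void) {
    struct circ_buf b = { {0}, 0, 0, 0 };
    for (int i = 1; i <= 3; i++) put(&b, i);     /* device fills slots        */
    int v;
    while (get(&b, &v) == 0)                     /* process drains them       */
        printf("consumed %d\n", v);
    return 0;
}
```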

summary

I/O Scheduling – Disk Performance

Disk performance parameters

The actual details of disk I/O operation depend on the:

  1. Computer system (the architecture)
  2. Operating system
  3. Nature of the I/O channel and disk controller hardware (scheduling)

Positioning the Read/Write Heads

in a hard disk drive

When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track.

Disk performance parameters
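
The parameters usually considered are seek time, rotational delay, and transfer time. The sketch below adds them up for a one-sector access using made-up drive figures (7200 rpm, 4 ms average seek, 500 sectors per track); the numbers are assumptions chosen only to show the arithmetic.

```c
/* Back-of-the-envelope sketch of the usual disk performance parameters:
 * average access time = seek time + rotational delay + transfer time.
 * The drive numbers below are made-up illustrative values. */
#include <stdio.h>

int main(void) {
    double rpm               = 7200.0;
    double avg_seek_ms       = 4.0;
    double sectors_per_track = 500.0;

    double revolution_ms = 60000.0 / rpm;        /* time for one full rotation  */
    double rot_delay_ms  = revolution_ms / 2.0;  /* on average, half a rotation */
    double transfer_ms   = revolution_ms / sectors_per_track;  /* one sector    */

    printf("rotational delay: %.2f ms\n", rot_delay_ms);
    printf("transfer (1 sector): %.3f ms\n", transfer_ms);
    printf("average access time: %.2f ms\n",
           avg_seek_ms + rot_delay_ms + transfer_ms);
    return 0;
}
```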

Disk Scheduling

Policies:

E.g.:

First-In, First-Out (FIFO)

Shortest Service Time First (SSTF)

SCAN

C-SCAN
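
As a sketch of how two of these policies order the same workload, the following program computes the service order for SSTF and SCAN from a hypothetical queue of track numbers and starting head position. All values are made up for illustration, and SCAN is assumed to begin by moving toward higher-numbered tracks.

```c
/* Sketch of two disk-scheduling policies on a queue of track requests:
 * SSTF (pick the request nearest the current head position) and SCAN
 * (sweep in one direction, then reverse). */
#include <stdio.h>
#include <stdlib.h>

#define NREQ 8

static int absdiff(int a, int b) { return a > b ? a - b : b - a; }

/* SSTF: at each step, service the pending request with the shortest seek. */
static void sstf(const int req[], int n, int head) {
    int done[NREQ] = {0};
    printf("SSTF:");
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (best < 0 ||
                absdiff(req[i], head) < absdiff(req[best], head)))
                best = i;
        done[best] = 1;
        head = req[best];
        printf(" %d", head);
    }
    printf("\n");
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* SCAN: service requests at or above the head while moving outward,
 * then reverse direction and service the remaining ones. */
static void scan(const int req[], int n, int head) {
    int sorted[NREQ];
    for (int i = 0; i < n; i++) sorted[i] = req[i];
    qsort(sorted, n, sizeof sorted[0], cmp_int);

    int first_above = 0;
    while (first_above < n && sorted[first_above] < head) first_above++;

    printf("SCAN:");
    for (int i = first_above; i < n; i++) printf(" %d", sorted[i]);       /* upward sweep  */
    for (int i = first_above - 1; i >= 0; i--) printf(" %d", sorted[i]);  /* reverse sweep */
    printf("\n");
}

int main(void) {
    /* Example request queue (track numbers) and starting head position. */
    int requests[NREQ] = {55, 58, 39, 18, 90, 160, 150, 38};
    int head = 100;

    sstf(requests, NREQ, head);
    scan(requests, NREQ, head);
    return 0;
}
```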

Disk Cache

Cache memory is a memory that is smaller and faster than main memory and that is interposed between main memory and the processor. The same idea can be applied to disk I/O: a disk cache is a buffer in main memory that holds copies of some of the sectors on the disk.

When an I/O request is made for a particular sector, a check is made to determine if the sector is in the disk cache

Design Consideration

When a new sector is brought into the disk cache, one of the existing blocks must be replaced.

Least Recently Used (LRU)

Memory Cache

Least Frequently Used (LFU)

Memory Cache
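
To make the replacement policies concrete, here is a minimal sketch of LRU bookkeeping for a tiny disk cache, kept as a most-recently-used-first list of block numbers. The cache size and the reference string are invented example values; an LFU variant would instead keep a use count per block and evict the block with the smallest count.

```c
/* Minimal sketch of LRU replacement for a small disk cache, kept as a
 * most-recently-used-first list of block numbers. */
#include <stdio.h>

#define CACHE_SLOTS 3

static int cache[CACHE_SLOTS];
static int used = 0;

/* Reference a block: on a hit, move it to the front (most recently used);
 * on a miss, evict the block at the back (least recently used) if full. */
static void reference(int block) {
    int pos = -1;
    for (int i = 0; i < used; i++)
        if (cache[i] == block) { pos = i; break; }

    if (pos < 0) {                               /* miss */
        printf("miss %d", block);
        if (used < CACHE_SLOTS) used++;
        else printf(" (evict %d)", cache[used - 1]);
        pos = used - 1;
    } else {
        printf("hit  %d", block);
    }

    for (int i = pos; i > 0; i--)                /* shift to make room at front */
        cache[i] = cache[i - 1];
    cache[0] = block;

    printf("  cache:");
    for (int i = 0; i < used; i++) printf(" %d", cache[i]);
    printf("\n");
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 2};
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        reference(refs[i]);
    return 0;
}
```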