Ch 7: I/O Management & Disk Scheduling (uploaded by madhuributani)
This document discusses input/output (I/O) management and disk scheduling. It begins by categorizing I/O devices as those for communicating with users, electronic equipment, and remote devices. It then describes how I/O devices differ in data rates, applications, control complexity, data transfer units, data representation, and error handling. The document outlines three I/O techniques - programmed I/O, interrupt-driven I/O, and direct memory access (DMA). It also discusses the evolution of I/O architectures and covers I/O buffering, disk organization, and disk terminology.
2. Contents
• I/O Devices
• Organization of the I/O Function
• Operating System Design Issues
• I/O Buffering
• Disk Scheduling
• RAID
• Disk Cache
3. I/O Devices
• External devices can be categorized into three classes:
– Human readable: e.g., printers, visual display terminals, keyboards, mouse
– Machine readable: e.g., disk and tape drives, sensors, actuators
– Communications: e.g., modems
• Differences exist among the devices within and across these classes
4. Differences
• Data rates: data transfer rates span several orders of magnitude, roughly 10^1 to 10^9 bits per second
• Applications: how the device is used influences the software and policies in the OS and supporting utilities
• Complexity of control: depends on the device; a printer requires simple control compared to a disk
• Unit of transfer: data may be transferred as a stream of bytes or characters, or in larger blocks
• Data representation: different devices use different data encoding schemes
• Error conditions: the nature of errors differs from device to device
5. Organization of the I/O Function
• Programmed I/O:
– Processor issues an I/O command on behalf of the process to an I/O module
– Process then busy-waits for the operation to be completed
• Interrupt-Driven I/O:
– Processor issues an I/O command on behalf of the process to an I/O module
– Processor continues to execute subsequent instructions
– Processor is interrupted by the I/O module when the latter has completed its work
• Direct Memory Access (DMA):
– A DMA module controls the exchange of data between main memory and an I/O module
– Processor sends a request for the transfer of a block of data to the DMA module
– Processor is interrupted only after the entire block has been transferred
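The contrast between programmed and interrupt-driven I/O can be sketched with a toy device model. This is an illustrative simulation, not a real driver API; `ToyDevice`, `start_io`, and the timing values are all assumptions made up for the example.

```python
import threading
import time

class ToyDevice:
    """Simulated I/O module: an operation takes some time, then completion
    is signalled either by a status flag or by an 'interrupt' callback.
    (Illustrative model only, not a real device interface.)"""
    def __init__(self):
        self.done = False

    def start_io(self, on_complete=None):
        self.done = False
        def work():
            time.sleep(0.01)            # simulated device latency
            self.done = True
            if on_complete is not None:
                on_complete()           # the "interrupt"
        threading.Thread(target=work).start()

# Programmed I/O: the processor busy-waits on the device status flag.
dev = ToyDevice()
dev.start_io()
while not dev.done:
    pass                                # CPU cycles wasted while waiting

# Interrupt-driven I/O: the processor keeps executing instructions and
# is notified when the device finishes.
finished = threading.Event()
dev.start_io(on_complete=finished.set)
useful_work = sum(range(1000))          # work done while the I/O is in flight
finished.wait()                         # block only when the result is needed
```

The difference is where the waiting happens: in the first case the processor spins doing nothing useful; in the second it computes `useful_work` while the device operates in parallel.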
6. Evolution of the I/O Function
• The processor directly controls a peripheral device
• A controller or I/O module is added; the processor uses programmed I/O without interrupts
• The same configuration is used, but now the I/O module supports interrupts
• The I/O module is given direct control of memory via DMA
• The I/O module is enhanced to become an I/O processor; the CPU directs the I/O processor to execute an I/O program in main memory
• The I/O module has a local memory of its own; with this architecture, a large set of I/O devices can be controlled
8. Direct Memory Access
• When the processor wishes to read or write a block of data, it issues a command to the DMA module, communicating:
– Whether a read or a write is requested, using the read/write control line between the processor and the DMA module
– The address of the I/O device involved, communicated on the data lines
– The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
– The number of words to be read or written, communicated via the data lines and stored in the data count register
• After the transfer of the block of data is accomplished, the DMA module sends an interrupt signal to the processor
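The register-programming protocol above can be modeled schematically: the processor hands the DMA module a direction, a device buffer, a starting address, and a word count, then receives a single interrupt once the whole block has moved. `ToyDMA` and all names in this sketch are invented for illustration and do not correspond to any real controller's interface.

```python
class ToyDMA:
    """Schematic DMA module: the processor programs direction, starting
    address, and word count, and gets one interrupt per block transferred.
    (Illustrative model only.)"""
    def __init__(self, memory):
        self.memory = memory            # shared main memory (a plain list)

    def transfer(self, direction, device_data, start_addr, count, on_interrupt):
        # start_addr -> address register, count -> data count register
        if direction == "read":         # device -> memory
            self.memory[start_addr:start_addr + count] = device_data[:count]
        else:                           # memory -> device
            device_data[:count] = self.memory[start_addr:start_addr + count]
        on_interrupt()                  # one interrupt, after the entire block

memory = [0] * 16
events = []
dma = ToyDMA(memory)
dma.transfer("read", [7, 8, 9, 10], start_addr=4, count=4,
             on_interrupt=lambda: events.append("interrupt"))
```

The key property mirrored here is that, unlike interrupt-driven I/O, the processor is notified once per block rather than once per word.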
9. Types of DMA Configurations
• Single-bus, detached DMA — inefficient, since the same shared bus carries every transfer
• Single-bus, integrated DMA-I/O
11. OS Design Issues
• Design objectives
– Efficiency: most design effort goes into improving the efficiency of disk I/O
– Generality:
• Use a hierarchical, modular approach to design the I/O function
• Hide most of the details of device I/O in lower-level routines, so that user processes and upper levels of the OS see devices in terms of general functions such as read, write, open, close, lock, and unlock
12. Logical Structure of the I/O Function
• The hierarchical philosophy is that the functions of the OS should be separated according to their complexity, their characteristic time scale, and their level of abstraction
• This leads to a layered approach in which each layer performs a related subset of the functions
• Changes in one layer do not require changes in other layers
• The I/O function follows the same layered approach
14. Local Peripheral Device
• Logical I/O: concerned with managing general I/O functions on behalf of user processes; a user process deals with the device in terms of a device identifier and simple commands such as open, close, read, and write
• Device I/O: requested operations and data are converted into appropriate sequences of I/O instructions, channel commands, and controller orders; buffering improves utilization
• Scheduling and control: queueing, scheduling, and control of the actual I/O operations; this layer interacts with the I/O module and the hardware
16. File System
• Directory management: symbolic file names are converted to identifiers that reference files through file descriptors; files can be added, deleted, and reorganized
• File system: deals with the logical structure of files and with operations such as open, close, read, and write; access rights are also managed at this layer
• Physical organization: logical references to files and records must be converted to physical secondary storage addresses; allocation of secondary storage space and main storage buffers is also handled here
17. I/O Buffering
• Why is buffering required?
– When a user process wants to read blocks of data from a disk, the process waits for the transfer
– It waits either by
• busy waiting, or
• process suspension on an interrupt
– The problems with this approach:
• The program waits for slow I/O
• The virtual memory locations involved must stay in main memory for the duration of the block transfer
• There is a risk of single-process deadlock
• The process is blocked during the transfer and may not be swapped out
• These inefficiencies can be resolved if input transfers are made in advance of requests and output transfers are performed some time after the request is made. This technique is known as buffering.
18. Types of I/O Devices
• Block-oriented:
– Stores information in blocks that are usually of fixed size; transfers are made one block at a time
– Reference to data is made by its block number
– E.g., disks and USB keys
• Stream-oriented:
– Transfers data in and out as a stream of bytes, with no block structure
– E.g., terminals, printers, communications ports, mouse
19. Single Buffer (Block-Oriented Data)
• When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation
• Reading ahead: input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block.
• When data are being transmitted to a device, they are first copied from the user space into the system buffer, from which they will ultimately be written.
20. Performance Comparison Between Single Buffering and No Buffering
• Without buffering
– Execution time per block is essentially T + C, where
T = time required to input one block
C = computation time that intervenes between input requests
• With a single buffer
– The time per block is max[C, T] + M, where
M = time required to move the data from the system buffer to user memory
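Plugging illustrative numbers into these formulas shows when buffering pays off. The values of T, C, and M below are assumptions chosen for the example, not measurements; the double-buffer case uses the max[C, T] estimate from the buffer-swapping discussion later in the chapter.

```python
def time_no_buffer(T, C):
    # the process waits T for the block, then computes for C
    return T + C

def time_single_buffer(T, C, M):
    # the next block is read while the current one is processed, so the
    # slower of the two dominates, plus the copy M into user space
    return max(C, T) + M

def time_double_buffer(T, C):
    # with buffer swapping, the move is overlapped as well (block-oriented case)
    return max(C, T)

T, C, M = 5.0, 3.0, 0.5     # ms per block -- illustrative values only
assert time_no_buffer(T, C) == 8.0
assert time_single_buffer(T, C, M) == 5.5
assert time_double_buffer(T, C) == 5.0
```

With these numbers the input time T dominates the computation time C, so overlapping I/O with computation cuts the per-block cost from 8.0 ms toward T itself.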
21. Single Buffer (Stream-Oriented Data)
• Line-at-a-time fashion:
– User input is one line at a time, with a carriage return signalling the end of a line
– Output to the terminal is similarly one line at a time, e.g., a line printer
• Byte-at-a-time fashion:
– Used on forms-mode terminals, where each keystroke is significant
– The user process follows the producer/consumer model
22. Double Buffer (Buffer Swapping)
• A process now transfers data to (or from) one buffer while the operating system empties (or fills) the other. This technique is known as double buffering.
• Block-oriented transfer: the execution time per block is max[C, T]
• Stream-oriented input:
– Line-at-a-time I/O: the user process need not be suspended for input or output, unless the process runs ahead of the double buffers
– Byte-at-a-time operation: no particular advantage over a single buffer
• In both cases, the producer/consumer model is followed
23. Circular Buffer
• When more than two buffers are used, the collection of buffers is known as a circular buffer, with each individual buffer being one unit of the circular buffer
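A minimal ring-buffer sketch shows the bookkeeping behind a circular buffer: a fixed set of slots, head and tail indices that wrap around, and a count to distinguish full from empty. The class below is a simplified illustration (no locking, one block per buffer unit), not an OS implementation.

```python
class CircularBuffer:
    """Fixed-capacity ring of buffer units: put() fills the next free unit,
    get() drains the oldest filled unit (producer/consumer order)."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.head = 0       # index of the next unit to read
        self.tail = 0       # index of the next unit to write
        self.count = 0      # number of filled units

    def put(self, block):
        if self.count == self.capacity:
            raise BufferError("all buffers full; the producer must wait")
        self.slots[self.tail] = block
        self.tail = (self.tail + 1) % self.capacity   # wrap around
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("all buffers empty; the consumer must wait")
        block = self.slots[self.head]
        self.head = (self.head + 1) % self.capacity   # wrap around
        self.count -= 1
        return block

ring = CircularBuffer(3)
for b in ("blk0", "blk1", "blk2"):
    ring.put(b)
assert ring.get() == "blk0"     # the oldest block comes out first
ring.put("blk3")                # the freed unit is reused; indices wrap
assert [ring.get() for _ in range(3)] == ["blk1", "blk2", "blk3"]
```

In a real OS the producer (device) and consumer (process) run concurrently, so the full/empty checks become blocking waits rather than exceptions.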
24. The Utility of Buffering
• Buffering is one tool that can increase the efficiency of the operating system and the performance of individual processes
26. Components of a Disk
• A disk consists of several platters mounted on a spindle; the platters spin (say, at 90 rps)
• The arm assembly is moved in or out to position a head on a desired track
• The tracks under the heads at a given arm position make up an (imaginary) cylinder
• Only one head reads or writes at any one time
• Each track is divided into sectors; the block size is a multiple of the (fixed) sector size
27. Disk Device Terminology
• Several platters, with information recorded magnetically on both surfaces (usually)
• The actuator moves the head (at the end of an arm, one per surface) over the desired track ("seek"), selects a surface, waits for the sector to rotate under the head, and then reads or writes
– "Cylinder": all tracks under the heads at one arm position
• Bits are recorded in tracks, which are in turn divided into sectors (e.g., 512 bytes)
[Figure: platter with outer and inner tracks and a sector; actuator, arm, and head]
28. Disk Head, Arm, Actuator
[Figure: a spindle carrying 12 platters, with heads on arms moved by the actuator]
30. Physical Disk Organization
• To read or write, the disk head must be positioned on the desired track and at the beginning of the desired sector
• Seek time is the time it takes to position the head on the desired track
• Rotational delay (rotational latency) is the additional time it takes for the beginning of the sector to reach the head once the head is in position
• Transfer time is the time for the sector to pass under the head
31. Physical Disk Organization (cont.)
• Access time = seek time + rotational latency + transfer time
• The efficiency of a sequence of disk accesses depends strongly on the order of the requests
• Adjacent requests on the same track avoid additional seek and rotational latency times
• Loading a file as a unit is efficient when the file has been stored on consecutive sectors on the same cylinder of the disk
32. Example: Two Single-Sector Disk Requests
• Assume
– average seek time = 10 ms
– average rotational latency = 3 ms
– transfer time for 1 sector = 0.01875 ms
• Adjacent sectors on the same track
– access time = 10 + 3 + 2 × 0.01875 ms = 13.0375 ms
• Random sectors
– access time = 2 × (10 + 3 + 0.01875) ms = 26.0375 ms
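The arithmetic in this example can be checked directly from the access-time formula, using the values given on the slide:

```python
seek = 10.0         # average seek time (ms)
latency = 3.0       # average rotational latency (ms)
transfer = 0.01875  # transfer time for one sector (ms)

# Adjacent sectors on the same track: one seek and one rotational
# latency, then both sectors pass under the head back to back.
adjacent = seek + latency + 2 * transfer

# Random sectors: the full positioning cost is paid for each request.
random_access = 2 * (seek + latency + transfer)

assert abs(adjacent - 13.0375) < 1e-9
assert abs(random_access - 26.0375) < 1e-9
```

The positioning cost (seek plus latency) dwarfs the transfer time, which is why request ordering matters so much: the second adjacent sector costs only 0.01875 ms, while a second random sector costs 13.01875 ms.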
33. Disk Scheduling (Cont.)
• Several algorithms exist to schedule the servicing of disk I/O requests
• We illustrate them with a request queue of cylinder numbers (0–199):
98, 183, 37, 122, 14, 124, 65, 67
with the head pointer initially at cylinder 53
35. SSTF
• Selects the request with the minimum seek time from the current head position
• SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests
37. SCAN
• The disk arm starts at one end of the disk and moves toward the other end, servicing requests as it goes; when it reaches the other end, the head movement is reversed and servicing continues
• Sometimes called the elevator algorithm
39. C-SCAN
• Provides a more uniform wait time than SCAN
• The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip
• Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
41. C-LOOK
• A version of C-SCAN
• The arm goes only as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk
43. Selecting a Disk-Scheduling Algorithm
• SSTF is common and has a natural appeal
• SCAN and C-SCAN perform better for systems that place a heavy load on the disk
• Performance depends on the number and types of requests
• Requests for disk service can be influenced by the file-allocation method
• The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary
• Either SSTF or LOOK is a reasonable choice for the default algorithm