Here are the steps to solve this problem:
a) Non-Preemptive Priority Scheduling:
- Process order based on priority: P2, P4, P1, P3
- Number of context switches = 3
b) Round Robin Scheduling with time slice = 2:
- Process order: P1, P2, P1, P4, P1, P3, P1
- Number of context switches = 6
c) With RR, the behavior depends on the time-slice size. With a small time slice of 2 ms, most processes cannot complete within one slice, which leads to a larger number of context switches than priority scheduling.
d)
Operating Systems, Ch. 05 (CPU Scheduling), 3rd level, College of Computers, Seiyun University — Lecture 05 for third-level students.
The document discusses different CPU scheduling algorithms:
1. First Come First Served scheduling allocates CPU to the longest waiting process first, which can result in longer processes waiting behind shorter ones (convoy effect).
2. Shortest Job First scheduling allocates CPU to the process with the shortest estimated run time, minimizing average wait time. Preemptive SJF allows interrupting the current process if a shorter one arrives.
3. Priority scheduling assigns priority levels and allocates CPU to the highest priority ready process. Preemption and aging policies address starvation of lower priority processes.
4. Round Robin scheduling allocates a time quantum (e.g. 10-100 ms) to each ready process.
2. CPU scheduling
• CPU scheduling is a process which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
• Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler (or CPU scheduler).
• The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
3. • Process execution consists of a cycle of CPU execution and I/O wait; a process alternates between these two states.
• CPU burst: the period during which the process executes on the CPU.
• I/O burst: the period during which the process waits for I/O to complete.
4. Dispatcher
• The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.
• The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
5. • CPU scheduling decisions may take place under the following four circumstances:
• When a process switches from the running state to the waiting state (for an I/O request, or when it waits for the termination of one of its child processes).
• When a process switches from the running state to the ready state (for example, when an interrupt occurs).
• When a process switches from the waiting state to the ready state (for example, on completion of I/O).
• When a process terminates.
• In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
• When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise the scheduling scheme is preemptive.
6. SCHEDULING TYPES
• Preemptive: preemptive scheduling is based on priority; the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
• Non-preemptive: non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it terminates or switches to the waiting state.
7. Scheduling Criteria
• CPU utilization: keep the CPU as busy as possible. It can range from 0 to 100%; in practice, it ranges from 40 to 90%. The higher the CPU utilization, the better the system performance.
• Throughput: the rate at which processes are completed per unit of time.
• Turnaround time: how long a process takes from submission to completion, i.e., the difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time
• Waiting time: the sum of the periods a process spends waiting in the ready queue, i.e., the difference between turnaround time and burst time.
Waiting Time = Turn Around Time - Burst Time
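The two formulas above can be checked with a tiny sketch (the values here are hypothetical, not from the slides):

```python
# Hypothetical example values for illustration (not from the slides).
completion_time = 11
arrival_time = 5
burst_time = 6

# Turn Around Time = Completion Time - Arrival Time
turnaround_time = completion_time - arrival_time   # 6

# Waiting Time = Turn Around Time - Burst Time
waiting_time = turnaround_time - burst_time        # 0
```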
8. • Response time: the time it takes to start responding, measured from submission time; the amount of time from when a request was submitted until the first response is produced.
• Fairness: each process should have a fair share of the CPU.
9. Scheduling Algorithm Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
10. SCHEDULING ALGORITHMS
• First Come First Serve (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Priority Scheduling
• Round Robin (RR) Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
11. FCFS SCHEDULING
First come, first serve.
• Jobs are executed on a first come, first served basis.
• Easy to understand and implement.
• Poor in performance, as the average wait time is high.
• FCFS is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it terminates or makes an I/O request. This is particularly troublesome for time-sharing systems, where each user needs to get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.
DISADVANTAGE
• Convoy effect: short processes wait behind a long process. All processes wait for one big process to get off the CPU. This effect results in lower CPU utilization.
12. First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

|      P1      | P2 | P3 |
0              24   27   30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
13. FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1.
• The Gantt chart for the schedule is:

| P2 | P3 |      P1      |
0    3    6              30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
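The two FCFS cases on slides 12-13 can be reproduced with a short simulation. This is a minimal sketch (the slides contain no code), assuming all three processes arrive at time 0:

```python
def fcfs_waiting_times(order, burst):
    """Waiting time of each process under FCFS, all arriving at time 0."""
    waiting, clock = {}, 0
    for p in order:
        waiting[p] = clock   # waits until every earlier process finishes
        clock += burst[p]    # then runs for its full burst
    return waiting

burst = {"P1": 24, "P2": 3, "P3": 3}

w1 = fcfs_waiting_times(["P1", "P2", "P3"], burst)
avg1 = sum(w1.values()) / 3          # (0 + 24 + 27) / 3 = 17.0

w2 = fcfs_waiting_times(["P2", "P3", "P1"], burst)
avg2 = sum(w2.values()) / 3          # (6 + 0 + 3) / 3 = 3.0
```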
19. First Come First Serve
Process id   AT   BT
1            0    2
2            3    1
3            5    6
Turn around time = Completion Time - Arrival Time
Waiting Time = Turn around Time - Burst Time
20. First Come First Serve
Process id   AT   BT   CT   TAT   WT
1            0    2    2    2     0
2            3    1    4    1     0
3            5    6    11   6     0
Turn around time = Completion Time - Arrival Time
Waiting Time = Turn around Time - Burst Time
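A minimal sketch (not from the slides) that reproduces the CT/TAT/WT columns of the table above, extending FCFS with arrival times and possible idle gaps on the CPU:

```python
def fcfs(processes):
    """FCFS with arrival times. processes = [(pid, arrival, burst), ...]
    sorted by arrival. Returns {pid: (completion, turnaround, waiting)}."""
    clock, result = 0, {}
    for pid, arrival, burst in processes:
        clock = max(clock, arrival)              # CPU may sit idle until arrival
        clock += burst                           # process runs to completion
        tat = clock - arrival                    # TAT = CT - AT
        result[pid] = (clock, tat, tat - burst)  # WT = TAT - BT
    return result

res = fcfs([(1, 0, 2), (2, 3, 1), (3, 5, 6)])
# res == {1: (2, 2, 0), 2: (4, 1, 0), 3: (11, 6, 0)}, matching the table
```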
22. Shortest Job First
Processes with the least execution time are selected first: the CPU is assigned to the process with the smallest CPU burst time.
SJF:
• Non-preemptive: the CPU is always allocated to the process with the least burst time, and the process keeps the CPU until it is completed.
• Preemptive: when a new process enters the queue, the scheduler compares its execution time with that of the already running process. If the running process's remaining execution time is longer, the CPU is taken from it and given to the new process.
23. Shortest Job First
ADVANTAGES
• It gives superior turnaround-time performance compared to shortest-process-next, because a short job is given immediate preference over a running longer job.
• Throughput is high.
DISADVANTAGES
• Elapsed time (i.e., execution-completed time) must be recorded, which results in additional overhead on the processor.
• Starvation is possible for longer processes.
24. Shortest Job First
SJF can be preemptive or non-preemptive.
A preemptive SJF algorithm will preempt the currently executing process if the next CPU burst of a newly arrived process may be shorter than what is left of the currently executing process.
A non-preemptive SJF algorithm will allow the currently running process to finish. Preemptive SJF scheduling is sometimes called the Shortest Remaining Time First algorithm.
26. Shortest Job First (Non-Preemptive)
What if their order had been P4, P2, P3 and P1?
Burst times:
P1: 15
P2: 8
P3: 10
P4: 3
SJF order: P4, P2, P3, P1
Completion Time:
P4: 3
P2: 3+8 = 11
P3: 3+8+10 = 21
P1: 3+8+10+15 = 36
Waiting Time:
P4: 0
P2: 3
P3: 11
P1: 21
Average Waiting Time: (0+3+11+21)/4 = 8.75
Average Turnaround Time: (3+11+21+36)/4 = 71/4 = 17.75
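The arithmetic above can be verified with a short sketch (assumption: all four processes arrive at t = 0, which the slide implies by chaining the completion times):

```python
def sjf_nonpreemptive(bursts):
    """bursts: dict pid -> burst time; all processes assumed to arrive at t = 0."""
    time, rows = 0, []
    for pid in sorted(bursts, key=bursts.get):   # shortest burst first
        wt = time                                # waits while earlier jobs run
        time += bursts[pid]                      # run to completion (non-preemptive)
        rows.append((pid, time, wt))             # (pid, completion time, waiting time)
    return rows

rows = sjf_nonpreemptive({"P1": 15, "P2": 8, "P3": 10, "P4": 3})
print([pid for pid, _, _ in rows])               # ['P4', 'P2', 'P3', 'P1']
print(sum(wt for _, _, wt in rows) / len(rows))  # average waiting time: 8.75
print(sum(ct for _, ct, _ in rows) / len(rows))  # average turnaround time: 17.75
```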
27. Shortest Job First (Non-Preemptive)
Process | Arrival time | Burst time
P1      | 1            | 7
P2      | 2            | 5
P3      | 3            | 1
P4      | 4            | 2
P5      | 5            | 8
Find the completion time, waiting time and turnaround time.
29. Shortest Job First (Preemptive)
Q1. Consider the following processes with arrival time (AT) and burst time (BT):
Process | AT | BT
P1      | 0  | 9
P2      | 1  | 3
P3      | 2  | 9
Calculate the completion time, turnaround time and average waiting time.
30. Shortest Job First (Preemptive)
Q1. Consider the following processes with arrival time (AT) and burst time (BT):
Process | AT | BT
P1      | 0  | 5
P2      | 1  | 3
P3      | 2  | 3
P4      | 3  | 1
Calculate the completion time, turnaround time and average waiting time.
Gantt chart: | P1 | P2 | P4 | P3 | P1 |
             0    1    4    5    8    12
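The preemptive trace above can be reproduced with a one-tick SRTF simulation (a sketch; the tie-breaking rule, keeping the running process when remaining times are equal, is an assumption that matches the trace):

```python
def srtf(procs):
    """procs: dict pid -> (arrival, burst). Returns the Gantt trace as
    (start_time, pid) segments. One-tick simulation."""
    remaining = {p: bt for p, (at, bt) in procs.items()}
    t, trace, running = 0, [], None
    while any(r > 0 for r in remaining.values()):
        ready = [p for p in remaining
                 if procs[p][0] <= t and remaining[p] > 0]
        if not ready:
            t += 1                     # CPU idle until the next arrival
            continue
        # smallest remaining time wins; the current process wins ties
        nxt = min(ready, key=lambda p: (remaining[p], p != running))
        if nxt != running:
            trace.append((t, nxt))     # record a new Gantt segment
            running = nxt
        remaining[nxt] -= 1
        t += 1
    return trace

print(srtf({"P1": (0, 5), "P2": (1, 3), "P3": (2, 3), "P4": (3, 1)}))
# -> [(0, 'P1'), (1, 'P2'), (4, 'P4'), (5, 'P3'), (8, 'P1')]
```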
31. Shortest Job First (Preemptive)
Process | Arrival time | Burst time | CT | TAT | WT
P1      | 0            | 7          |    |     |
P2      | 1            | 5          |    |     |
P3      | 2            | 3          |    |     |
P4      | 3            | 1          |    |     |
P5      | 4            | 2          |    |     |
P6      | 5            | 1          |    |     |
Calculate the completion time, turnaround time and average waiting time.
35. PRIORITY SCHEDULING
SJF is a special case of the general priority scheduling algorithm.
A priority (an integer) is associated with each process. The CPU is allocated to the
process with the highest priority. Generally the smallest integer is considered the
highest priority. Equal-priority processes are scheduled in First Come First Served
order. Priority scheduling can be preemptive or non-preemptive.
Non-preemptive Priority Scheduling
In this type of scheduling the CPU is allocated to the process with the
highest priority after the present running process completes.
Advantage
Good response for the highest-priority processes.
Disadvantage
Starvation is possible for the lowest-priority processes.
36. Non-preemptive Priority Scheduling
Process | Arrival time | Burst time | Priority | CT | TAT | WT
P1      | 0            | 4          | 2        |    |     |
P2      | 1            | 2          | 4        |    |     |
P3      | 2            | 3          | 6        |    |     |
P4      | 3            | 5          | 10       |    |     |
P5      | 4            | 1          | 8        |    |     |
P6      | 5            | 4          | 12       |    |     |
P7      | 6            | 6          | 9        |    |     |
Calculate the completion time, turnaround time and average waiting time.
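One way to check this exercise is a minimal non-preemptive priority simulator (smaller number means higher priority, as the earlier slide states; the FCFS tie-break on arrival time is an assumption):

```python
def priority_np(procs):
    """procs: list of (pid, arrival, burst, priority); smaller number =
    higher priority. Non-preemptive: each pick runs to completion."""
    pending = sorted(procs, key=lambda p: p[1])   # order by arrival
    t, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = pending[0][1]                     # CPU idle: jump to next arrival
            continue
        # highest priority first; arrival time breaks ties (FCFS)
        pick = min(ready, key=lambda p: (p[3], p[1]))
        pending.remove(pick)
        t += pick[2]                              # run the whole burst
        order.append((pick[0], t))                # (pid, completion time)
    return order

print(priority_np([("P1", 0, 4, 2), ("P2", 1, 2, 4), ("P3", 2, 3, 6),
                   ("P4", 3, 5, 10), ("P5", 4, 1, 8), ("P6", 5, 4, 12),
                   ("P7", 6, 6, 9)]))
# -> [('P1', 4), ('P2', 6), ('P3', 9), ('P5', 10), ('P7', 16), ('P4', 21), ('P6', 25)]
```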
40. Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time to execute, called a time quantum.
• Once a process has executed for the given time period, it is preempted and
another process executes for its time period.
• Context switching is used to save the states of preempted processes.
41. Round Robin Scheduling
TIME QUANTUM 2
Gantt chart: | P1 | P2 | P3 | P1 | P4 | P5 | P2 | P6 | P5 | P2 | P6 | P5 |
             0    2    4    6    8    9    11   13   15   17   18   19   21
Process | Arrival time | Burst time | CT | TAT | WT
P1      | 0            | 4          | 8  | 8   | 4
P2      | 1            | 5          | 18 | 17  | 12
P3      | 2            | 2          | 6  | 4   | 2
P4      | 3            | 1          | 9  | 6   | 5
P5      | 4            | 6          | 21 | 17  | 11
P6      | 5            | 3          | 19 | 14  | 11
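The time-quantum-2 schedule above can be reproduced with a small Round Robin simulator (assumption: a process that arrives during a slice queues ahead of the process preempted at the end of that slice, the usual textbook convention):

```python
from collections import deque

def round_robin(procs, q):
    """procs: list of (pid, arrival, burst); q: time quantum.
    Returns (gantt segments, completion times)."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {pid: bt for pid, _, bt in procs}
    arrivals = deque(procs)
    ready, t, gantt, ct = deque(), 0, [], {}
    while arrivals or ready:
        if not ready:                             # CPU idle: jump to next arrival
            t = max(t, arrivals[0][1])
        while arrivals and arrivals[0][1] <= t:   # admit everything that has arrived
            ready.append(arrivals.popleft()[0])
        pid = ready.popleft()
        run = min(q, remaining[pid])              # one slice (or less, if finishing)
        gantt.append((t, pid))
        t += run
        remaining[pid] -= run
        while arrivals and arrivals[0][1] <= t:   # arrivals during this slice first
            ready.append(arrivals.popleft()[0])
        if remaining[pid] > 0:
            ready.append(pid)                     # preempted: back of the queue
        else:
            ct[pid] = t
    return gantt, ct

gantt, ct = round_robin([("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 2),
                         ("P4", 3, 1), ("P5", 4, 6), ("P6", 5, 3)], q=2)
print([pid for _, pid in gantt])
# -> ['P1', 'P2', 'P3', 'P1', 'P4', 'P5', 'P2', 'P6', 'P5', 'P2', 'P6', 'P5']
```

Changing `q=2` to `q=4` reproduces the next slide's schedule as well.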
43. Round Robin Scheduling
TIME QUANTUM 4
Gantt chart: | P1 | P2 | P3 | P4 | P5 | P6 | P2 | P5 |
             0    4    8    10   11   15   18   19   21
Process | Arrival time | Burst time | CT | TAT | WT
P1      | 0            | 4          | 4  | 4   | 0
P2      | 1            | 5          | 19 | 18  | 13
P3      | 2            | 2          | 10 | 8   | 6
P4      | 3            | 1          | 11 | 8   | 7
P5      | 4            | 6          | 21 | 17  | 11
P6      | 5            | 3          | 18 | 13  | 10
44. Round Robin Scheduling
TIME QUANTUM 3
Process | Arrival time | Burst time | CT | TAT | WT
P1      | 5            | 5          |    |     |
P2      | 4            | 4          |    |     |
P3      | 3            | 7          |    |     |
P4      | 1            | 9          |    |     |
P5      | 2            | 2          |    |     |
P6      | 6            | 3          |    |     |
Calculate the completion time, turnaround time and average waiting time.
45. Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues.
In multilevel queue scheduling, processes are permanently assigned to one queue,
based on some property of the process, such as
memory size, process priority, or process type.
The algorithm chooses the process from the occupied queue that has the highest priority, and runs
that process either
preemptively or non-preemptively.
Each queue has its own scheduling algorithm or policy.
Possibility I
If each queue has absolute priority over lower-priority queues, then no process in a queue
could run unless the queues for all higher-priority processes were empty.
For example, in the above figure no process in the batch queue could run unless the queues for
system processes, interactive processes, and interactive editing processes were all empty.
Possibility II
If there is a time slice between the queues, then each queue gets a certain amount of CPU
time, which it can then schedule among the processes in its queue. For instance:
80% of the CPU time to the foreground queue using RR;
20% of the CPU time to the background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling
overhead, but it is inflexible.
46. Multilevel Feedback Queue Scheduling
A multilevel feedback queue scheduling algorithm allows a process to move between queues.
It uses many ready queues and associates a different priority with each queue.
The algorithm chooses the process with the highest priority from the occupied queues and runs that
process either preemptively or non-preemptively. If the process uses too much CPU time, it is
moved to a lower-priority queue.
Similarly, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. Note that this form of aging prevents
starvation.
Example:
A process entering the ready queue is placed in queue 0.
If it does not finish within an 8-millisecond time quantum, it is moved to the tail of queue 1.
If it still does not complete, it is preempted and placed into queue 2.
Processes in queue 2 run on an FCFS basis, and only when queues 0 and 1 are empty.
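The three-queue example can be sketched as follows (the 16 ms quantum for queue 1 is an assumption borrowed from the classic textbook setup; a real MLFQ also interleaves queues over time, which this demotion-only sketch ignores):

```python
def mlfq_demotions(jobs):
    """jobs: dict pid -> total CPU need in ms. Returns the queue in which
    each job finally completes, following only the demotion rules."""
    final_queue = {}
    for pid, need in jobs.items():
        if need <= 8:            # finishes within queue 0's 8 ms quantum
            final_queue[pid] = 0
        elif need <= 8 + 16:     # finishes within queue 1's (assumed) 16 ms quantum
            final_queue[pid] = 1
        else:                    # demoted twice; runs to completion in queue 2 (FCFS)
            final_queue[pid] = 2
    return final_queue

# Hypothetical workloads: short jobs stay in the top queue, CPU hogs sink.
print(mlfq_demotions({"editor": 5, "compile": 20, "simulation": 200}))
# -> {'editor': 0, 'compile': 1, 'simulation': 2}
```

The design point: the scheme never needs burst-time estimates up front; a process reveals its own length by how far it sinks.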
57. Multicore Processors
A recent trend in computer hardware has been to place multiple processor cores on the same
physical chip, resulting in a multicore processor.
When a processor accesses memory, it spends a significant amount of time waiting for
the data to become available. This situation is known as a memory stall.
To remedy this situation, many recent hardware designs have implemented
multithreaded processor cores in which two (or more) hardware threads are assigned to
each core. That way, if one thread stalls while waiting for memory, the core can switch to
another thread.
There are two ways to multithread a processing core: coarse-grained and fine-grained
multithreading.
With coarse-grained multithreading, a thread executes on a processor until a long-latency
event such as a memory stall occurs. Because of the delay caused by the long-latency
event, the processor must switch to another thread to begin execution. However,
the cost of switching between threads is high, since the instruction pipeline must
be flushed before the other thread can begin execution on the processor core. Once this
new thread begins execution, it begins filling the pipeline with its instructions.
Fine-grained (or interleaved) multithreading switches between threads at a much finer level of
granularity, typically at the boundary of an instruction cycle. The architectural
design of fine-grained systems includes logic for thread switching, so the cost of
switching between threads is small.
58. Real-time operating systems
•Priority-based scheduler
•Guaranteed maximum time to service interrupts
•Ability to ensure that processes are fully loaded into memory and stay in memory
•Consistent and efficient memory allocation
•Preemptable system calls
59. Process types
Terminating processes: A terminating process may be considered as one that runs and then exits
(terminates). We are interested in the amount of time that it takes it to run to completion.
Its deadline is the time at which it should complete all its work.
Its compute time is the amount of CPU time it needs.
Nonterminating processes: For processes such as video and audio servers as well as editors, we are
not interested in the terminating time of these processes but rather in the time between events.
For example, we would like our audio server to fill a 4K byte audio buffer every 500 milliseconds, or
we would like our video server to provide a new video frame every 33.3 milliseconds. For these
processes, the compute time is the CPU time that the process needs to compute its periodic event and
the deadline is the time at which it must have the results ready. Nonterminating processes may be
divided into two classes:
Periodic: A periodic process has a fixed frequency at which it needs to run. For example, a video
decompressor may have to run 30 times per second at 33.3 millisecond intervals.
Aperiodic: Aperiodic processes have no fixed, cyclical frequency between events. Event interrupts
may occur sporadically and event computation times may vary dramatically. For purposes of
scheduling, we use the shortest period and the longest computation time to play it safe.
60. This assures us that we will have enough CPU time to complete the task. Moreover, for
periodic tasks, the deadline must be within the period. If C is the compute time, D the
deadline, and T the period of the process, the following relation must hold:
C ≤ D ≤ T
61. Rate-monotonic analysis is a technique for assigning static priorities to periodic processes.
The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority
policy with preemption. If a lower-priority process is running and a higher-priority
process becomes available to run, it will preempt the lower-priority process.
Upon entering the system, each periodic task is assigned a priority inversely based on its
period: the shorter the period, the higher the priority; the longer the period, the lower
the priority.
62. Here is an example of rate-monotonic priority assignment. Suppose we have the
following processes:
A runs every 50 msec for 20 msec
B runs every 50 msec for 10 msec
C runs every 30 msec for 10 msec
With priorities assigned in this order (A highest), the schedule does not satisfy the
rate-monotonic approach: it is not able to meet the deadline for process C.
How can it achieve the deadlines for all the processes?
63. This does not give us the desired performance because, while processes A and B get
scheduled acceptably, process C is late the second time it is scheduled and misses an
entire period! Now let us reverse the priorities as rate-monotonic assignment would
dictate.
We measure the CPU utilization of a process Pi as the ratio of its burst to its period, ti/pi.
The CPU utilization of process A, for example, is 20/50 = 0.4.
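For the task set above, the per-process utilizations can be summed and compared against the Liu-Layland bound, a classic sufficient (but not necessary) schedulability test for rate-monotonic scheduling; the bound itself is not part of these slides:

```python
# pid -> (compute time Ci in msec, period Ti in msec), from the slide's example
tasks = {"A": (20, 50), "B": (10, 50), "C": (10, 30)}

U = sum(c / t for c, t in tasks.values())   # total utilization: 0.4 + 0.2 + 1/3
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)              # Liu-Layland bound, ~0.780 for n = 3

print(f"U = {U:.3f}, bound = {bound:.3f}")
# U ~ 0.933 exceeds the bound, so this quick test is inconclusive: the bound
# does not *guarantee* schedulability here, yet (as the slide shows) assigning
# priorities rate-monotonically does in fact meet every deadline for this set.
```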