CPU scheduling determines which process will be assigned to the CPU for execution. There are several types of scheduling algorithms:
First-come, first-served (FCFS) assigns processes in the order they arrive without preemption. Shortest-job-first (SJF) selects the process with the shortest estimated run time, but may result in starvation of longer processes. Priority scheduling assigns priorities to processes and selects the highest priority process, but low priority processes risk starvation.
The document discusses CPU scheduling techniques used in operating systems to improve CPU utilization. It describes how multiprogramming allows multiple processes to share the CPU by switching between processes when one is waiting for I/O. Common scheduling algorithms like first-come first-served (FCFS), priority scheduling, round robin, and shortest job first are explained. The goal of scheduling is to maximize throughput and minimize average wait times for processes.
3. Scheduling
Module 3
• Topics to be covered:
• To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
• To describe various CPU-scheduling algorithms (there is no single best solution; none is perfect!)
• To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
4. Chapter 3
Processor Scheduling
Goal: decide which process gets processor time, and when.
Levels:
• High (Admission): admits newcomers to the system.
• Intermediate (Suspend/Resume): temporarily removes a process from competition for the processor, or returns it.
• Low (Dispatch): picks a process from the ready queue and gives it the processor.
5. Chapter 3
Processor Scheduling
Objectives
Fair: the scheduler should treat all processes by fair criteria.
Throughput: the number of processes completed per unit of time; the scheduler should try to maximize this number.
Support active users: active users have short jobs that require fast service, because they are sitting in front of their terminals waiting for the system to reply. The scheduler should therefore favor jobs that belong to active users.
6. Chapter 3
Processor Scheduling
Objectives
Predictable: a process executed many times on the system should finish in about the same duration each time, i.e. the variance in execution time should be small.
Overhead: schedulers take time to make their selection; that time should be kept to a minimum, since it is not paid for by customers.
Maximize resource utilization: keeping computer resources busy means more income and more services to customers, which is desirable, so the scheduler should try to maximize resource utilization.
7. Chapter 3
Processor Scheduling
Objectives
No indefinite postponement: every process in the system should at some point get the chance to have the processor, so the scheduler's selection criteria should take each process's waiting time into account.
Support priorities: processes are normally not all equally important, so the scheduler should provide a way to mark some processes as more important than others.
Better service for the best behaved: a process that behaves well and does not cause the OS much trouble should be favored over a process that does the opposite.
8. Chapter 3
Processor Scheduling
Objectives
Graceful degradation: it is natural for performance to degrade as system load increases, but it should degrade in proportion to the load and not collapse at some load level.
9. Chapter 3
Basic Concepts
• Single process: only one process runs at a time.
• Maximum CPU utilization is obtained with multiprogramming.
• When the CPU sits idle, waiting time is wasted.
10. Chapter 3
Basic Concepts
• Cycle – process execution consists of a cycle of CPU execution and I/O wait.
• CPU burst: a time interval in which a process uses only the CPU.
• I/O burst: a time interval in which a process uses only I/O devices.
12. Chapter 3
CPU Scheduler
• When the CPU becomes idle, the operating system selects one process from the ready queue to be executed.
• Many selection mechanisms exist: FIFO, priority, ...
• The records in the queue are generally the process control blocks (PCBs) of the processes.
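The ready queue of PCBs can be sketched with two of the selection mechanisms named above. This is an illustrative sketch only (not from the deck), assuming a minimal PCB of just a priority number and a pid:

```python
import heapq
from collections import deque

# Hypothetical minimal PCB: (priority, pid); lower number = higher priority.
pcbs = [(2, "P1"), (1, "P2"), (3, "P3")]  # in arrival order

ready_fifo = deque()  # FIFO mechanism: serve in arrival order
ready_prio = []       # priority mechanism: serve best priority first

for pcb in pcbs:
    ready_fifo.append(pcb)
    heapq.heappush(ready_prio, pcb)

print(ready_fifo.popleft()[1])       # P1 (first to arrive)
print(heapq.heappop(ready_prio)[1])  # P2 (highest priority)
```

The same queue of PCBs yields a different "next process" depending on the mechanism the scheduler uses.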
13. Chapter 3
CPU Scheduler
• Long-term scheduling
• Short-term scheduling: the CPU scheduler
• Medium-term scheduling
14. Chapter 3
CPU Scheduler
• CPU-scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
• Scheduling only under circumstances 1 and 4 is nonpreemptive (cooperative); all other scheduling is preemptive.
• Scheduler latency – the time it takes the scheduler to perform the selection.
15. Chapter 3
Dispatcher
• The scheduler (algorithm + data structure) decides which process to run next. Remember that this decision always takes time and resources.
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
• switching context
• switching to user mode (note: the kernel runs in kernel/system mode)
• jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes the dispatcher to stop one process and start another running.
• At each clock interrupt, the OS gets to run and decides whether the currently running process should be allowed to continue or whether it should choose another process and give it the CPU.
16. Scheduling Criteria
Module 3
• CPU utilization – keep the CPU as busy as possible (maximize)
• Throughput – number of processes that complete their execution per time unit (maximize)
• Turnaround time – amount of time to execute a particular process, from submission to termination (minimize)
• Waiting time – amount of time a process has been waiting in the ready queue (minimize)
• Response time – amount of time from when a request is submitted until the first response is produced, not the final output (for time-sharing environments) (minimize)
• Fairness – give the CPU to all processes in a fair manner
• Service time – the time required by a device to handle a request, in seconds (minimize)
• Residence time = service time + queueing time (minimize)
17. Chapter 3
Processor Scheduling
Criteria
I/O bound: the operating system can learn from a process's behavior; if the process performs many I/O operations compared to its use of the CPU, the OS marks it as I/O bound.
CPU bound: a process that uses a lot of CPU time compared to its rate of I/O can be marked by the OS as CPU bound.
Interactive/batch: the operating system may know whether a process supports an interactive user waiting for a response, or is a batch process for which no one is waiting for an immediate reply.
18. Chapter 3
Processor Scheduling
Criteria
Priority (static, dynamic, purchased): the OS may support priorities, which can be static, dynamic, or purchased. A static priority is given by the user or taken from the user's group and cannot be changed during the life of the process in the system. A dynamic priority is set initially by the submitting process and can be changed during execution by the OS according to the process's behavior. A purchased priority is paid for and cannot be altered by the OS the way a dynamic one can.
Page faults: if virtual storage is implemented, the number of times the process requests a page that is not present in real memory at the time of the request.
19. Chapter 3
Processor Scheduling
Criteria
Preemption (and quantum size): preemption is the ability of the OS to take the processor from a process by force. If preemption is allowed, the OS has control over the quantum value; the quantum may be fixed or vary from process to process.
Deadline: a process may have a deadline by which it must complete. If a process intended to prevent some event finishes only after that event has occurred, its result has no value.
Time to complete: given the total execution time, the OS can estimate the remaining execution time, so it knows how much time the process needs to finish and leave the system.
21. Preemptive and Non-Preemptive Scheduling
Module 3
• Nonpreemptive
• Once a process is allocated the CPU, it does not release it unless:
1. it has to wait, e.g., for an I/O request or for a child to terminate
2. it terminates
• Preemptive
• The OS can force (preempt) a process off the CPU at any time
• For example, to allocate the CPU to another, higher-priority process
22. Chapter 3
First-Come, First-Served (FCFS) Scheduling
• By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm.
• With this scheme, the process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
23. Chapter 3
First-Come, First-Served (FCFS) Scheduling
• First-come, first-served (FCFS) is the same as FIFO.
• Simple and fair, but poor performance: the average queueing time may be long.
• Questions to consider: What are the average queueing and residence times for this scenario? How do average queueing and residence times depend on the ordering of these processes in the queue?
24. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Burst time / Service time: CPU time required
Turnaround Time (TAT) = finish time - arrival time
Wait Time (WT) = TAT - burst time
25. First-Come, First-Served (FCFS) Scheduling
Module 3
• Suppose that the processes arrive in the order: P1, P2, P3

Process  Burst Time
P1       24
P2       3
P3       3

• The Gantt chart for the schedule is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Turnaround times for P1, P2, P3 are 24, 27, 30 respectively
• Average waiting time: (0 + 24 + 27)/3 = 17
26. Chapter 3
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
• Turnaround times for P1, P2, P3 are 30, 3, 6 respectively
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect: short processes stuck behind a long process. CPU and device
utilization are lower than they would be if the shorter processes were
allowed to get the CPU first.
• FCFS is not suitable for time-sharing systems, where each process needs a
share of the CPU at regular intervals; it is better suited to batch systems.
27. Chapter 3
FCFS Scheduling (Cont.)
• Example without ignoring arrival time (arrival times 0, 1, 2, 3; burst
times 8, 4, 9, 5).
Gantt chart: | P1 (0-8) | P2 (8-12) | P3 (12-21) | P4 (21-26) |
• Average waiting time = [(0) + (8-1) + (12-2) + (21-3)]/4 = 8.75 ms
• Average turnaround time = [(8-0) + (12-1) + (21-2) + (26-3)]/4 = 15.25 ms
• Throughput = 4 jobs/26 ms = 0.15385 jobs/ms
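The FCFS arithmetic in the examples above can be checked with a short simulation. The helper below is an illustrative sketch (not code from the slides); it applies the TAT and WT formulas from the earlier slide to both FCFS data sets.

```python
def fcfs(processes):
    """Simulate FCFS scheduling.

    processes: list of (name, arrival, burst), sorted by arrival.
    Returns {name: (waiting_time, turnaround_time)}.
    """
    time = 0
    result = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)      # CPU may sit idle until the process arrives
        finish = time + burst
        turnaround = finish - arrival  # TAT = finish time - arrival time
        waiting = turnaround - burst   # WT = TAT - burst time
        result[name] = (waiting, turnaround)
        time = finish
    return result

# Slide example: P1, P2, P3 arrive together with bursts 24, 3, 3
r = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
print(sum(w for w, _ in r.values()) / 3)   # average waiting time: 17.0

# Example with arrival times 0, 1, 2, 3 and bursts 8, 4, 9, 5
r2 = fcfs([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(sum(w for w, _ in r2.values()) / 4)  # average waiting time: 8.75
```

Note how reordering the input list is all it takes to reproduce the "much better" schedule from the earlier slide.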
28. Chapter 3
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths
to schedule the process with the shortest time. If several processes have the
same next CPU burst time, use FCFS to select among them.
Process Arrival Time Burst Time
P1 0.0 6
P2 0.0 8
P3 0.0 7
P4 0.0 3
Non-preemptive SJF scheduling Gantt chart (ignoring arrival time):
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
Turnaround times for P4, P1, P3, P2 are 3, 9, 16, 24 respectively
Average waiting time = (0 + 3 + 9 + 16)/4 = 7 (ignoring arrival time)
For the same processes, FCFS gives 10.25 ms
29. Chapter 3
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
Non-preemptive SJF scheduling Gantt chart:
| P1 (0-6) | P4 (6-9) | P3 (9-16) | P2 (16-24) |
Turnaround times for P1, P2, P3, P4 are 6, 24-2, 16-4, 9-5 respectively
Average waiting time = (0 + (16-2) + (9-4) + (6-5))/4 = 5
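The non-preemptive SJF rule described above (pick the shortest burst among the ready processes, break ties FCFS) can be sketched as a small simulation. This is an illustrative helper, not code from the slides; it reproduces the 5 ms average for the example just shown.

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF: at each dispatch pick the ready process with the
    shortest burst, breaking ties by earliest arrival (FCFS).

    processes: list of (name, arrival, burst).
    Returns {name: waiting_time}.
    """
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival
    time = 0
    waits = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waits[name] = time - arrival           # starts immediately, runs to completion
        time += burst
        remaining.remove((name, arrival, burst))
    return waits

# Slide example: arrivals 0, 2, 4, 5 and bursts 6, 8, 7, 3
w = sjf_nonpreemptive([("P1", 0, 6), ("P2", 2, 8), ("P3", 4, 7), ("P4", 5, 3)])
print(sum(w.values()) / 4)  # average waiting time: 5.0
```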
30. Chapter 3 Non-Preemptive Shortest-Job-First (SJF) Scheduling without
Ignoring Arrival time
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
Non-preemptive SJF scheduling Gantt chart:
| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
Turnaround times for P1, P3, P2, P4 are
(7-0), (8-4), (12-2), (16-5) respectively, i.e.,
7, 4, 10, 11
Average waiting time = (0 + (7-4) + (8-2) + (12-5))/4 = 4
31. Scheduling
Module 3
• Which is harder to implement, and why?
• Preemptive is harder: we need to maintain consistency of data shared
between processes and, more importantly, of kernel data structures
(e.g., I/O queues)
• Think of a preemption while the kernel is executing a system call on
behalf of a process (many OSs wait for the system call to finish)
32. Chapter 3
Preemptive SJF Scheduler
• Preemptive SJF is optimal: it gives the minimum average waiting time for a given
set of processes, because it moves a short process ahead of a long one. It is hard
to use for short-term scheduling, since the length of the next CPU burst is not known
• SJF is used frequently in long-term scheduling in batch systems, because the user
can estimate the running time needed
• For the earlier example (arrival times 0, 1, 2, 3; burst times 8, 4, 9, 5):
• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 6.5 ms
• Average turnaround time = [(17-0) + (5-1) + (26-2) + (10-3)]/4 = 13 ms
• Throughput = 4 jobs/26 ms = 0.15385 jobs/ms
• Non-preemptive SJF waiting time for the same set = 7.75 ms
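A preemptive SJF (shortest-remaining-time) schedule can be simulated tick by tick. The sketch below is illustrative rather than definitive; it reproduces the 6.5 ms average waiting time for the example data (arrivals 0, 1, 2, 3; bursts 8, 4, 9, 5).

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms ticks.

    processes: list of (name, arrival, burst).
    Returns {name: finish_time}.
    """
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    finish = {}
    time = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:            # nothing has arrived yet: advance the clock
            time += 1
            continue
        # pick the ready process with the least remaining time (ties: earliest arrival)
        n = min(ready, key=lambda x: (remaining[x], arrival[x]))
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            finish[n] = time
            del remaining[n]
    return finish

# Slide data: arrivals 0, 1, 2, 3 and bursts 8, 4, 9, 5
f = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
waits = {"P1": f["P1"] - 0 - 8, "P2": f["P2"] - 1 - 4,
         "P3": f["P3"] - 2 - 9, "P4": f["P4"] - 3 - 5}
print(sum(waits.values()) / 4)  # average waiting time: 6.5
```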
33. Chapter 3 Determining Length of Next CPU Burst
• The real difficulty with the SJF algorithm is
knowing the length of the next CPU request
• We may not know it, but we can predict it
• SJF is optimal for minimizing queueing time, but
impossible to implement exactly; a practical scheduler
predicts which process to schedule based on each
process's previous history.
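A standard way to make such a prediction (exponential averaging, a common textbook technique, though the slide does not name it) follows the recurrence τ(n+1) = α·t(n) + (1 − α)·τ(n). The burst values, α, and the initial guess below are hypothetical.

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.

    bursts: measured CPU-burst lengths t_0, t_1, ...
    alpha weights recent history; tau0 is the initial guess.
    Returns the prediction for the next burst.
    """
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 and an initial guess of 10, the observed bursts 6, 4, 6, 4
# yield successive predictions 8, 6, 6, 5
print(predict_next_burst([6, 4, 6, 4]))  # 5.0
```

With α = 0 the history is ignored entirely; with α = 1 only the most recent burst counts.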
34. Chapter 3 Preemptive Shortest-Job-First (SJF) Scheduling without Ignoring
Arrival Time
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
The Gantt chart is:
| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
Turnaround times for P1, P2, P3, P4 are
(16-0), (7-2), (5-4), (11-5) respectively, i.e.,
16, 5, 1, 6
Average waiting time = ((11-2) + (5-4) + (4-4) + (7-5))/4
= (9 + 1 + 0 + 2)/4 = 3
35. Chapter 3
Exercise 1
Assume you have the following jobs to execute with one processor, with
the jobs arriving in the order listed here
i T(pi)
0 80
1 20
2 10
3 20
4 50
Suppose a system uses FCFS and SJF scheduling .
1. Create a Gantt chart illustrating the execution of these processes.
2. What is the average wait time for the processes?
3. What is the turnaround time for process p3?
36. Chapter 3
Exercise 2
Assume you have the following jobs to execute with one processor, with
the jobs arriving in the order listed here
i arrival Time T(pi)
0 0 80
1 3 20
2 5 10
3 7 20
4 10 50
Suppose a system uses FCFS and SJF (preemptive and nonpreemptive)
scheduling.
1. Create a Gantt chart illustrating the execution of these processes.
2. What is the average wait time for the processes?
3. What is the turnaround time for process p3?
37. Priority Scheduling
Module 3
• Priority can be defined internally (by the OS) or externally.
• Internal: CPU usage time, memory requirements, I/O usage
• External: importance, type, political issues
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority
• Preemptive
• Non-preemptive
• Non-preemptive: once a process is dispatched, it runs to completion
• Preemptive: the dispatched process keeps running until it finishes or a
newcomer with higher priority enters the ready queue
• SJF is priority scheduling where the priority is the predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as a process waits, gradually increase its priority (or decrease
the priority of processes that consume a lot of execution time)
38. Example of Priority Scheduling (nonpreemptive)
Module 3
• Process data (all processes arrive at time 0; 1 is the highest priority):

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

• The Gantt chart is:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
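The non-preemptive priority schedule above amounts to sorting by priority and accumulating burst times. The helper below is an illustrative sketch (not from the slides) using the same five processes.

```python
def priority_nonpreemptive(processes):
    """Non-preemptive priority scheduling; all processes arrive at time 0.

    processes: list of (name, burst, priority), lower number = higher priority.
    Returns {name: waiting_time}.
    """
    order = sorted(processes, key=lambda p: p[2])  # highest priority first
    time, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = time      # waits until everything with higher priority is done
        time += burst
    return waits

# Slide data: (burst, priority) = P1 (10,3), P2 (1,1), P3 (2,4), P4 (1,5), P5 (5,2)
w = priority_nonpreemptive([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                            ("P4", 1, 5), ("P5", 5, 2)])
print(sum(w.values()) / 5)  # average waiting time: 8.2
```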
39. Chapter 3
Assume 0 is the highest priority
Assume two cases:
• Preemptive priority
• Non-preemptive priority
Process Burst Time
(ms)
Priority Arrival
Time
P1 10 3 0
P2 1 1 1
P3 2 3 2
P4 1 4 3
P5 5 2 4
Non-preemptive schedule: P1 (0-10), P2 (10-11), P5 (11-16), P3 (16-18), P4 (18-19)
Waiting time P1 = 0
P2 = 10-1 = 9
P5 = 11-4 = 7
P3 = 16-2 = 14
P4 = 18-3 = 15
As homework: compute the average waiting time and average TAT
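The preemptive case can be simulated tick by tick. The sketch below is illustrative (not from the slides); it assumes 0 is the highest priority and breaks ties FCFS, and computes the waiting times for the same table.

```python
def priority_preemptive(processes):
    """Preemptive priority scheduling, simulated in 1 ms ticks.

    processes: list of (name, burst, priority, arrival); a lower priority
    number is a higher priority, ties broken by earliest arrival (FCFS).
    Returns {name: waiting_time}.
    """
    remaining = {n: b for n, b, _, _ in processes}
    prio = {n: p for n, _, p, _ in processes}
    arr = {n: a for n, _, _, a in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arr[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda x: (prio[x], arr[x]))  # best priority wins
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            finish[n] = time
            del remaining[n]
    return {n: finish[n] - a - b for n, b, _, a in processes}  # WT = TAT - burst

# Slide data: (burst, priority, arrival) for P1..P5
w = priority_preemptive([("P1", 10, 3, 0), ("P2", 1, 1, 1), ("P3", 2, 3, 2),
                         ("P4", 1, 4, 3), ("P5", 5, 2, 4)])
print(sum(w.values()) / 5)  # average waiting time: 7.0
```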
41. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Round Robin (RR):
• It is the preemptive version of FIFO.
• A process waits in the queue until every process ahead of it has
received at most one quantum.
• A process holds the processor for at most one quantum at a time
(usually 10-100 milliseconds).
• A process that uses a full quantum on the processor is removed by
the timer-runout interrupt and placed at the back of the ready
queue to wait for its next turn.
• We will assume that newcomers to the queue are favored over the
process leaving the processor (the newcomer is queued ahead of it)
42. Example of RR with Time Quantum = 4
Module 3
• The Gantt chart is:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-30) |

Process  Burst Time
P1       24
P2       3
P3       3

• Average waiting time = (0 + 4 + 7 + (10-4))/3 = 17/3 ≈ 5.66 ms
• Typically, higher average turnaround than SJF, but better response
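The RR mechanics just described (run at most one quantum, then go to the back of the queue) can be sketched with a simple queue simulation. This illustrative helper (not from the slides) assumes all processes arrive at time 0.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin scheduling; all processes arrive at time 0, queued in the
    given order. Returns {name: finish_time}.
    """
    queue = deque((name, burst) for name, burst in processes)
    time, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)      # at most one quantum per turn
        time += run
        if rem - run == 0:
            finish[name] = time
        else:
            queue.append((name, rem - run))  # timer runout: back of the queue
    return finish

# Slide example: bursts 24, 3, 3 with quantum 4
f = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
bursts = {"P1": 24, "P2": 3, "P3": 3}
waits = {n: f[n] - bursts[n] for n in f}  # arrival = 0, so WT = finish - burst
print(sum(waits.values()) / 3)  # 17/3 ≈ 5.67
```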
43. Chapter 3
Typically, higher average turnaround than SJF, but better responsiveness.
Quantum = 2, with burst times P1 = 6, P2 = 3, P3 = 3:
| P1 (0-2) | P2 (2-4) | P3 (4-6) | P1 (6-8) | P2 (8-9) | P3 (9-10) | P1 (10-12) |
Average waiting time = [((6-2)+(10-8)) + (2+(8-4)) + (4+(9-6))]/3 = 19/3 ≈ 6.33 ms.
Responsiveness improves as the quantum is made smaller.
Note that all the numbers above ignore context-switch time!
44. Chapter 3
EXAMPLE DATA:

Process  Arrival Time  Service Time
1        0             8
2        1             4
3        2             9
4        3             5

Round Robin, quantum = 4, no priority-based preemption:
| P1 (0-4) | P2 (4-8) | P3 (8-12) | P4 (12-16) | P1 (16-20) | P3 (20-24) | P4 (24-25) | P3 (25-26) |
Average turnaround = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25
Average wait = (12 + (4-1) + ((8-2)+(20-12)+(25-24)) + ((12-3)+(24-16)))/4
= (12 + 3 + 15 + 17)/4 = 47/4 = 11.75
Note: this example violates the guideline for quantum size, since most
processes do not finish within one quantum.
As homework, draw the timeline graph.
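With arrival times, the only extra bookkeeping is inserting newly arrived processes into the queue ahead of the process whose quantum just expired (the favored-newcomer rule from the earlier slide). This illustrative sketch reproduces the schedule above.

```python
from collections import deque

def rr_with_arrivals(processes, quantum):
    """Round robin with arrival times; a process that arrives during (or at
    the end of) a slice is queued ahead of the preempted process.

    processes: list of (name, arrival, service), sorted by arrival.
    Returns {name: finish_time}.
    """
    pending = deque(processes)           # not yet arrived
    queue = deque()
    time, finish = 0, {}
    while pending or queue:
        if not queue:                    # CPU idle: jump to the next arrival
            name, arr, service = pending.popleft()
            time = max(time, arr)
            queue.append((name, service))
        name, rem = queue.popleft()
        run = min(quantum, rem)
        time += run
        while pending and pending[0][1] <= time:  # newcomers first
            n, a, s = pending.popleft()
            queue.append((n, s))
        if rem - run == 0:
            finish[name] = time
        else:
            queue.append((name, rem - run))
    return finish

# Slide data: arrivals 0, 1, 2, 3 and service times 8, 4, 9, 5; quantum 4
f = rr_with_arrivals([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)], 4)
print(f)  # P2 finishes at 8, P1 at 20, P4 at 25, P3 at 26
```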
45. Chapter 3 Round-robin and Priority scheduling
Another option is to combine round-robin and priority scheduling in such
a way that the system executes the highest-priority process, and runs processes
with equal priority in round-robin fashion. Time quantum = 2.
46. Chapter 3 Time Quantum and Context Switch Time
A process that does not finish within one quantum pays a context switch
each time its quantum expires:
• finish in 2 quanta → 2Q + 1 × overhead
• finish in 10 quanta → 10Q + 9 × overhead
47. Chapter 3
- Setting the quantum too short causes many process switches and lowers
CPU efficiency. Suppose the quantum = 20 ms and a process switch = 5 ms:
this wastes 20% of the CPU time (80% useful work).
- Setting the quantum too long may cause poor response time to short
interactive requests (e.g., with a 500 ms quantum, the CPU time wasted
on switching is less than 1%).
Time Quantum and Context Switch Time
48. Chapter 3 Turnaround Time Varies With The Time Quantum
Turnaround time depends on the size of the quantum.
Average turnaround time does not necessarily improve
as the time-quantum size increases.
49. Chapter 3
Multilevel Queue Scheduling
• Suitable when processes can be classified into different groups
according to some criterion (type, size, priority, ...)
• Multilevel queue scheduling partitions the ready queue into several
separate queues according to response-time requirements.
• Processes have different scheduling needs:
• foreground (interactive) – needs fast response and has higher external priority
• background (batch) – less interaction and lower priority
50. Chapter 3
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must also be done between the queues. Two possibilities:
1) Fixed priority scheduling (i.e., serve all processes from the foreground
queue, then from the background queue). Possibility of starvation.
If a student (batch) process is running and an interactive process
enters the ready queue, the student process will be preempted.
This means no batch process can run while processes exist in the
higher-priority queues; starvation is possible.
Multilevel Queue Scheduling
51. Chapter 3
Multilevel Queue Scheduling(cont.)
2) Time slice – each queue gets a certain share of CPU time, which it
schedules among its own processes; e.g., 80% to the foreground queue
(RR) and 20% to the background queue (FCFS)
Problem: normally a process that enters a queue remains there; the
system does not move it from queue to queue. Queue assignment is
permanent.
52. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Shortest-Remaining-Time (SRT)
• The preemptive version of SJF.
• The rule: at each dispatch, the job with the shortest time to finish is
selected.
• In addition, whenever a job is admitted to the ready queue, if the
newcomer's time to finish is smaller than that of the currently running
job, the newcomer preempts it and is dispatched.
Process Burst Time
(ms)
Priority Arrival Time
P1 10 3 0
P2 1 1 1
P3 2 3 2
P4 1 4 3
P5 5 2 4
53. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Highest Response Ratio (HRR)
Ahead of each dispatch event, the following ratio is computed for
each job:
RR = (Waiting Time + Service Time) / Service Time
The job with the highest RR is selected for dispatch
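The HRR rule can be sketched as a non-preemptive scheduler that recomputes the ratio at every dispatch. This is an illustrative helper, and the four jobs in the example are hypothetical (they are not from the slides).

```python
def hrrn(processes):
    """Highest Response Ratio Next (non-preemptive).

    processes: list of (name, arrival, service).
    At each dispatch, pick the ready job maximizing
    (waiting time + service time) / service time.
    Returns {name: start_time}.
    """
    remaining = list(processes)
    time, starts = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arr, service = max(ready,
                                 key=lambda p: ((time - p[1]) + p[2]) / p[2])
        starts[name] = time
        time += service
        remaining.remove((name, arr, service))
    return starts

# Hypothetical jobs: (arrival, service) = A (0,8), B (1,4), C (2,9), D (3,5)
s = hrrn([("A", 0, 8), ("B", 1, 4), ("C", 2, 9), ("D", 3, 5)])
print(s)  # A starts at 0, B at 8, D at 12, C at 17
```

Because the ratio grows as a job waits, long jobs like C eventually win the dispatch, so HRR avoids the starvation that pure SJF can cause.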
54. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Multilevel Feedback Queues (MLFQ)
• The scheduler has the following goals:
• Favor short jobs
• Favor I/O-bound jobs
• Determine the nature of a job quickly
• The quantum gets larger as the level number gets higher (lower priority)
• The implementation uses multiple queues with different priorities
arranged in a network.
• A process enters the network at the highest queue.
• A process that consumes a full quantum in a queue is moved down to
the next queue in the network.
55. Chapter 3
Processor Scheduling
Scheduling Disciplines (schemes)
Multilevel Feedback Queues (MLFQ)
• A process that reaches the lowest queue round-robins in that queue
until it finishes execution.
• Each queue has a quantum, and the quanta get bigger from the highest
queue to the lowest.
• Whenever a job leaves the network for I/O, it re-enters at the highest
level (the network forgets its history). The highest queue has the
highest priority, so a process never runs unless all queues above its
own are empty.
• A variation on the original I/O-return rule is to let the process come
back one queue higher than the one it was in before the I/O.
56. Chapter 3
Multilevel Feedback Queue Scheduling
• A process can move between the various queues:
• Separate processes according to their CPU-burst characteristics
• If a process uses too much CPU, move it to a lower-priority queue
• Use aging to prevent starvation:
• If a process waits too long for the CPU, move it to a higher-priority
queue
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that
process needs service
57. Chapter 3 Example of Multilevel Feedback Queue
• Three queues with different Q and priority and scheduling algorithm:
• Q0 – RR with time quantum 8 milliseconds (Highest priority)
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS (Lowest priority)
• Scheduling
• A new job enters queue Q0, which is served on an FCFS basis. When it
gains the CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, it is moved to queue Q1.
• At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
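The three-queue example can be sketched as a minimal simulation, assuming all jobs arrive at time 0 (so no preemption by new Q0 arrivals ever occurs); the burst values are hypothetical.

```python
from collections import deque

def mlfq(processes, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR, quantum 8), Q1 (RR, quantum 16),
    Q2 (FCFS). All processes arrive at time 0. Returns {name: finish_time}.
    """
    queues = [deque((n, b) for n, b in processes), deque(), deque()]
    time, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)  # Q2 runs to completion
        time += run
        if rem - run == 0:
            finish[name] = time
        else:
            queues[level + 1].append((name, rem - run))  # demote after a full quantum
    return finish

# Hypothetical bursts: P1 = 20, P2 = 5, P3 = 30
f = mlfq([("P1", 20), ("P2", 5), ("P3", 30)])
print(f)  # P2 finishes at 13, P1 at 33, P3 at 55
```

Note how the short job P2 finishes within its first Q0 quantum, while the long jobs sink to the lower queues, which is exactly the "favor short jobs" goal stated earlier.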
58. Chapter 3
Multilevel Feedback Queue
Short processes have higher priority:
- Always run processes in Q0 first
- Only when Q0 is empty do processes in Q1 (or in Q2, if Q1 is also
empty) get the CPU
- Preempt the running process from Q1 or Q2 when a process
enters Q0