Scheduling Algorithms in Operating System

In an operating system, everything is carried out by processes. On a single CPU, only one process runs at any given time. The process scheduler removes the running process from the CPU and selects another process to run, based on a scheduling algorithm.

What is a Scheduling Algorithm?

A CPU scheduling algorithm determines which process gets the CPU for execution and which processes are held back or removed from execution. The main goal of CPU scheduling algorithms in OS is to ensure that the CPU is never idle as long as at least one process in the ready queue is available for execution.

There are two types of scheduling algorithms in OS:


Preemptive Scheduling Algorithms

In these algorithms, processes are assigned priorities. Whenever a higher-priority process arrives, the lower-priority process occupying the CPU is preempted: it releases the CPU, and the higher-priority process takes it over for execution.

Non-Preemptive Scheduling Algorithms

In these algorithms, a running process cannot be preempted. Once a process holds the CPU, it releases it only by switching to a waiting state or by terminating. These are sometimes the only algorithms that can be used, because preemptive scheduling requires hardware support (such as a timer interrupt) that may not be available.

There are some important terminologies to know for understanding the scheduling algorithms:

Arrival Time: This is the time at which a process arrives in the ready queue.

Completion Time: This is the time at which a process completes its execution.

Burst Time: This is the time required by a process for CPU execution.

Turn-Around Time: This is the difference in time between completion time and arrival time. This can be calculated as:

Turn Around Time = Completion Time – Arrival Time

Waiting Time: This is the difference in time between turnaround time and burst time. This can be calculated as:

Waiting Time = Turn Around Time – Burst Time

Throughput: This is the number of processes that complete their execution per unit of time.
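The two formulas above can be checked with a small sketch. The arrival, burst, and completion times here are made-up values for illustration:

```python
# Illustrative values for a single process (not from any real trace).
arrival_time = 0      # time the process enters the ready queue
burst_time = 5        # CPU time the process needs
completion_time = 12  # time at which it finishes (includes waiting)

# Turn Around Time = Completion Time - Arrival Time
turnaround_time = completion_time - arrival_time

# Waiting Time = Turn Around Time - Burst Time
waiting_time = turnaround_time - burst_time

print(turnaround_time, waiting_time)  # 12 7
```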

Why Do We Need Scheduling?

A process needs both CPU time and I/O time to complete its execution. In a multiprogramming system, one process can use the CPU while another waits for I/O, whereas in a uniprogramming system, the time spent waiting for I/O is completely wasted, as the CPU sits idle. Multiprogramming is achieved through process scheduling.

The purposes of a scheduling algorithm are as follows:

  • Maximize CPU utilization, i.e., keep the CPU as busy as possible.
  • Allocate CPU time fairly to every process.
  • Maximize throughput.
  • Minimize turnaround time.
  • Minimize waiting time.
  • Minimize response time.

Types of Scheduling Algorithms in OS

First Come First Serve (FCFS) Scheduling Algorithm

First Come First Serve is the simplest CPU scheduling algorithm to implement. The CPU is allocated to the process that requests it first, so the process with the earliest arrival time is executed first. It is a non-preemptive scheduling algorithm: process priorities do not matter, and processes are executed in the order in which they arrive. This scheduling algorithm is implemented with a FIFO (First In First Out) queue. When a process becomes ready to execute, its Process Control Block (PCB) is linked to the tail of this queue. When the CPU becomes free, it is assigned to the process at the head of the queue.


Advantages of FCFS:

  • Involves no complex logic; processes are simply picked from the ready queue one by one.
  • Easy to implement and understand.
  • Every process eventually gets a chance to run, so no starvation occurs.


Disadvantages of FCFS:

  • Waiting time for processes with short execution times is often very long.
  • It favors CPU-bound processes over I/O-bound processes.
  • Leads to the convoy effect.
  • Causes lower device and CPU utilization.
  • Poor performance, as the average waiting time is high.

Shortest Job First (SJF) Scheduling Algorithm

Shortest Job First is a non-preemptive scheduling algorithm in which the process with the shortest burst time is executed first by the CPU. That means the shorter the execution time, the sooner the process gets the CPU. In its simplest form, all processes are assumed to arrive at the same time, and the processor must know the burst time of every process in advance. If two processes have the same burst time, First Come First Serve (FCFS) scheduling is used to break the tie.

The preemptive mode of SJF scheduling is known as the Shortest Remaining Time First scheduling algorithm.


Advantages of SJF:

  • Increases throughput by executing shorter jobs first, which usually have a shorter turnaround time.
  • Gives the minimum average waiting time for a given set of processes.
  • Best approach to minimize the waiting time of other processes awaiting execution.
  • Useful for batch processing, where CPU times are known in advance and waiting for jobs to complete is not critical.


Disadvantages of SJF:

  • May lead to starvation: if short processes keep arriving, longer processes may never get a chance to run.
  • The CPU must know each process's execution time beforehand, which is not always possible.

Round Robin Scheduling Algorithm

The Round Robin algorithm is similar to the First Come First Serve (FCFS) technique but uses a preemptive policy. Processes are executed cyclically, and each process is allocated a small amount of CPU time called a time slice or time quantum. The ready queue is implemented as a circular queue: the CPU is allocated to each process for the given time quantum. If the process completes its execution within that quantum, it terminates and the CPU moves on to the next process. If it does not finish within the quantum, it is preempted, added back to the tail of the ready queue, and waits for its next turn to continue its execution.

The round-robin scheduling is the oldest and simplest scheduling algorithm that derives its name from the round-robin principle. In this principle, each person will take an equal share of something in turn.

This algorithm is mostly used for multitasking in time-sharing systems and operating systems having multiple clients so that they can make efficient use of resources.


Advantages of Round Robin:

  • All processes are given the same priority; hence, all processes get an equal share of the CPU.
  • Since it is cyclic in nature, no process is left behind, and starvation does not occur.


Disadvantages of Round Robin:

  • Throughput depends on the length of the time quantum. Setting it too short increases context-switching overhead and lowers CPU efficiency; setting it too long gives poor response to short processes and makes the algorithm behave like FCFS.
  • The average waiting time of the Round Robin algorithm is often long.
  • Context switching happens far more often and adds to the overhead.
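The circular-queue behavior described above can be sketched with Python's `collections.deque`. To keep the rotation easy to follow, all processes here are assumed to arrive at time 0; names, burst times, and the quantum are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin sketch over (name, burst) pairs, all arriving at time 0."""
    queue = deque(processes)        # ready queue of (name, remaining_burst)
    time = 0
    completion = {}
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            time += remaining       # finishes within its time slice
            completion[name] = time
        else:
            time += quantum         # quantum expires: preempt the process
            queue.append((name, remaining - quantum))  # back to the tail
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

With a quantum of 2, P1 and P2 are each preempted once before finishing, while P3 completes inside its first slice.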

Shortest Remaining Time First (SRTF) Scheduling Algorithm

The Shortest Remaining Time First (SRTF) scheduling algorithm is the preemptive mode of the Shortest Job First (SJF) algorithm, in which jobs are scheduled according to their shortest remaining time. The process with the shortest burst time is executed first by the CPU, but the arrival times of the processes need not be the same. If a new process arrives with a shorter remaining burst time than the currently running process, the current process is preempted, and the newly arrived job is executed first.


Advantages of SRTF:

  • Short processes are handled faster than in SJF, since SRTF is its preemptive version.


Disadvantages of SRTF:

  • Context switching happens far more often and adds to the overhead.
  • Like SJF, it may still lead to starvation and requires knowledge of process times beforehand.
  • Cannot be implemented in interactive systems, where the required CPU time is unknown.
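A sketch of SRTF that simulates one time unit per step: at every tick, the arrived process with the least remaining burst time runs, so a newly arrived short job preempts a longer running one. The process tuples are illustrative:

```python
def srtf(processes):
    """SRTF sketch over (name, arrival, burst) tuples, one tick at a time."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    time = 0
    completion = {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1               # CPU idles until an arrival
            continue
        # Run the process with the least remaining time for one tick;
        # a new shorter arrival wins this choice on the next tick (preemption).
        job = min(ready, key=lambda n: remaining[n])
        remaining[job] -= 1
        time += 1
        if remaining[job] == 0:
            del remaining[job]
            completion[job] = time
    return completion

print(srtf([("P1", 0, 5), ("P2", 1, 2)]))
# {'P2': 3, 'P1': 7}
```

P1 starts first, but when P2 arrives at time 1 with a remaining burst of 2 (less than P1's remaining 4), P1 is preempted until P2 completes.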



Conclusion

  • Scheduling algorithms decide which process will get the CPU next.
  • The main goal of scheduling algorithms in OS is to keep the CPU busy and maximize throughput.
  • Scheduling algorithms can be either preemptive or non-preemptive.
  • First Come First Serve, Shortest Job First, Shortest Remaining Time First, and Round Robin are four widely used scheduling algorithms, each with its own advantages and disadvantages.
  • The best scheduling algorithm depends on the situation, the workload, and the hardware and software capabilities.