Critical Section in OS

The critical section refers to a specific part of a program where shared resources are accessed, and concurrent execution may lead to conflicts or inconsistencies. It is essential for the operating system to provide mechanisms like locks and semaphores to ensure proper synchronization and mutual exclusion in the critical section. These safeguards prevent concurrent processes from interfering with each other, maintaining the integrity of shared resources.

What is the Critical Section Problem in OS?

When more than one process accesses or modifies a shared resource at the same time, the final value of that resource depends on the order in which the processes happen to execute. This is called a race condition.

Consider an example of two processes, p1 and p2, and let value=3 be a variable present in the shared resource. Suppose p1 wants to add 3 to value, while p2 reads value and writes it back.

One possible interleaving of their actions is:

  1. p1 reads value and sees 3.
  2. Before p1 can write its result, p2 is scheduled and also reads 3.
  3. p1 writes back 3 + 3 = 6.
  4. p2 writes back its stale copy, 3, overwriting p1's update.

The final value of value should be 6, but due to the interruption by process p2, the value is changed back to 3 and p1's update is lost. This is the problem of synchronization.
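The lost-update interleaving can be reproduced deterministically by interleaving the two processes' steps by hand (a teaching sketch; with real threads the schedule would vary from run to run):

```python
# A shared variable and two "processes" (p1 and p2) whose read and
# write steps are interleaved manually, so the race always happens.

value = 3

p1_local = value          # p1 reads the shared variable: 3
p2_local = value          # p2 is scheduled and reads the same stale 3
value = p1_local + 3      # p1 writes its result: value is now 6
value = p2_local          # p2 writes back its stale copy: value is 3 again

print(value)  # 3, although 6 was expected
```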

The critical section problem is to ensure that only one process is inside its critical section at any given time. While a process is in its critical section, no other process is allowed to enter. Enforcing this mutual exclusion eliminates the race condition.

Solutions to the Critical Section Problem

To effectively address the Critical Section Problem in operating systems, any solution must meet three key requirements:

  1. Mutual Exclusion: This means that when one process is executing within its critical section, no other process should be allowed to enter its own critical section. This ensures that shared resources are accessed by only one process at a time, preventing conflicts and data corruption.

  2. Progress: If no process is executing in its critical section and one or more processes wish to enter theirs, the selection of which process enters next cannot be postponed indefinitely. The system must enable waiting processes to make progress, so that each eventually gets a chance to access its critical section.

  3. Bounded Waiting: There must be a limit on the number of times a process can execute in its critical section after another process has requested access to its critical section but before that request is granted. This ensures fairness and prevents any process from being starved of critical section access.

Various solutions have been developed to meet these requirements and manage the Critical Section Problem. These solutions primarily use software-based locks for synchronization. Here are some common approaches:

  1. Test-and-Set: This method uses a shared boolean variable, typically called "lock," together with the atomic "test_and_set" instruction, which returns the old value of the lock and sets it to true in a single uninterruptible step. A process that gets back false knows it has acquired the lock; otherwise it retries.

  2. Compare-and-Swap: Similar to "test_and_set," this approach also uses a shared variable but employs the "compare_and_swap" instruction, which atomically sets the lock to a new value only if its current value matches an expected value, and returns the value it observed.

  3. Mutex Locks: Mutex (short for mutual exclusion) locks provide functions like "acquire()" and "release()" that execute atomically. These locks ensure that only one process can acquire the lock at a time.

  4. Semaphores: Semaphores are more advanced synchronization tools. They use "wait()" and "signal()" operations, executed atomically on a semaphore variable (typically an integer). Semaphores can manage access to resources more flexibly.

  5. Condition Variables: This approach maintains a queue of processes waiting to enter their critical sections. It ensures orderly access by managing the waiting processes based on certain conditions.

The essential principle across these solutions is to guarantee exclusive access to critical sections while allowing processes to make progress and ensuring that no process is left waiting indefinitely. The specific mechanisms and tools used may vary, but they all aim to maintain the integrity of shared resources in the system.
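As a concrete illustration, the sketch below builds a simple spin lock on top of test-and-set. Python exposes no user-level atomic test-and-set instruction, so the TestAndSet class here simulates the instruction's atomicity with an internal lock; on real hardware this would be a single machine instruction, and one would not build locks this way in production Python.

```python
import threading

class TestAndSet:
    """Simulates the atomic test_and_set instruction.

    The internal lock only stands in for hardware atomicity;
    this is a teaching sketch of the instruction's semantics.
    """
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:        # stands in for hardware atomicity
            old = self._flag
            self._flag = True
            return old           # old value: False means "we got the lock"

    def clear(self):
        with self._guard:
            self._flag = False

lock = TestAndSet()
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        while lock.test_and_set():   # entry section: spin until acquired
            pass
        counter += 1                 # critical section
        lock.clear()                 # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: every increment ran under mutual exclusion
```

Each thread spins in the entry section until test_and_set returns false, executes its critical section, then clears the flag in the exit section. Without the lock, the four threads' increments could interleave and some updates would be lost.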

Strategies for Avoiding Critical Section Problems

In computer science and operating systems, managing critical sections is a crucial aspect of ensuring concurrent programs run smoothly and without conflicts. Here are some effective strategies for avoiding critical section problems:

1. Fine-Grained Locking:

Fine-grained locking involves breaking down resources into smaller, more specific units and applying locks only to those units rather than a broad, all-encompassing lock. This allows for increased concurrency as different processes can access different parts of the resource simultaneously.
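A minimal sketch of fine-grained locking, using one lock per bank account rather than a single lock for the whole bank (the account names and amounts here are illustrative):

```python
import threading

# One lock per account: threads touching different accounts
# never contend with each other.
accounts = {"A": 100, "B": 100}
locks = {name: threading.Lock() for name in accounts}

def deposit(name, amount):
    with locks[name]:            # lock only the account being modified
        accounts[name] += amount

def worker(name):
    for _ in range(10_000):
        deposit(name, 1)

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start(); t1.join(); t2.join()
print(accounts["A"], accounts["B"])  # 10100 10100
```

With a single global lock, the two workers would serialize even though they touch disjoint data; with per-account locks they can proceed concurrently.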

2. Lock Hierarchies:

Lock hierarchies establish a specific order in which locks must be acquired. This helps prevent deadlocks, where two or more processes are unable to proceed because each is waiting for the other to release a lock. By enforcing a consistent lock acquisition order, deadlocks can be avoided.
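The following sketch shows one common way to impose a lock hierarchy: sort the locks into a fixed global order before acquiring them (here ordered by id(); any consistent total order works):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(first, second, name):
    # Lock hierarchy: always acquire locks in one fixed global order,
    # regardless of the order the caller named them in.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            results.append(name)   # critical section using both resources

# The two threads name the locks in opposite order, which would risk
# deadlock with naive nested acquisition; the enforced ordering prevents it.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']
```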

3. Read-Write Locks:

Read-write locks differentiate between operations that only read data and those that write or modify it. Multiple processes can hold the read lock simultaneously, enabling concurrent reading. However, only one process can hold the write lock, ensuring exclusive access during writes.
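Python's standard library has no built-in read-write lock, so the sketch below implements a minimal one (no writer preference) from two plain locks; note that threading.Lock permits release from a thread other than the acquirer, which the reader-counting scheme relies on:

```python
import threading

class ReadWriteLock:
    """Minimal reader-writer lock sketch (no writer preference)."""
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()   # guards the reader count
        self._write_lock = threading.Lock()     # held for the whole write

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:      # first reader blocks writers
                self._write_lock.acquire()

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:      # last reader lets writers in
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()

rw = ReadWriteLock()
data = {"x": 0}

def writer():
    for _ in range(1000):
        rw.acquire_write()
        data["x"] += 1                  # exclusive access during writes
        rw.release_write()

t = threading.Thread(target=writer)
t.start()
rw.acquire_read()                        # readers may share this lock
snapshot = data["x"]                     # consistent read; writer is excluded
rw.release_read()
t.join()
print(data["x"])  # 1000
```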

4. Optimistic Concurrency Control (OCC):

OCC is a technique used in database systems to manage concurrent access. It allows multiple processes to read data without obtaining locks. When a process attempts to write, the system checks if the data has been modified by another process. If not, the write proceeds; otherwise, it is retried.
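The retry loop at the heart of OCC can be sketched as follows. Reads take no lock; a version number is validated at commit time, and the update is retried if another writer got in first (the small commit_guard lock stands in for the atomic validate-and-commit step a database engine would provide):

```python
import threading

# Shared record with a version number for conflict detection.
record = {"value": 10_000, "version": 0}
commit_guard = threading.Lock()  # makes the validate+write step atomic

def occ_update(compute):
    while True:
        snapshot_value = record["value"]      # unlocked, optimistic read
        snapshot_version = record["version"]
        new_value = compute(snapshot_value)   # work on the snapshot
        with commit_guard:                    # validate and commit
            if record["version"] == snapshot_version:
                record["value"] = new_value
                record["version"] += 1
                return                        # commit succeeded
        # another writer committed first: retry with fresh data

def worker():
    for _ in range(1000):
        occ_update(lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(record["value"])  # 14000: no increment was lost
```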

5. Lock-Free and Wait-Free Data Structures:

Lock-free and wait-free data structures are designed to operate without traditional locks. Instead, they use atomic operations or specialized algorithms to ensure progress even in the presence of concurrent access.
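A classic lock-free pattern is the compare-and-swap retry loop shown below. Real lock-free structures rely on a hardware CAS instruction; since Python does not expose one, the AtomicInt class simulates its atomicity with an internal lock, so this is a sketch of the pattern rather than genuinely lock-free code:

```python
import threading

class AtomicInt:
    """Simulates an atomic integer with compare-and-swap.
    The internal lock only stands in for the hardware CAS instruction."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:                # stands in for hardware atomicity
            if self._value == expected:
                self._value = new
                return True
            return False

counter = AtomicInt(0)

def increment():
    while True:                          # lock-free retry loop
        old = counter.load()
        if counter.compare_and_swap(old, old + 1):
            return                       # our update won; no lock was held

def worker():
    for _ in range(5000):
        increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())  # 20000
```

If another thread updates the counter between the load and the CAS, the CAS fails and the loop simply retries with the fresh value, so some thread always makes progress.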

Implementing these strategies requires a deep understanding of the specific requirements and constraints of the system in question. By carefully selecting and combining these techniques, developers can create robust, concurrent programs that effectively avoid critical section problems.

Examples of critical sections in real-world applications

Let us consider a classic bank example; it is very similar to the example we saw above.

  • Let us consider a scenario where money is withdrawn from the bank by both the cashier(through cheque) and the ATM at the same time.
  • Consider an account having a balance of ₹10,000. Let us consider that, when a cashier withdraws the money, it takes 2 seconds for the balance to be updated in the account.
  • It is possible to withdraw ₹7000 from the cashier and within the balance update time of 2 seconds, also withdraw an amount of ₹6000 from the ATM.
  • Thus, the total money withdrawn becomes greater than the balance of the bank account.

This happens because the two withdrawals overlap in time. If checking and updating the balance is made a critical section, only one withdrawal can proceed at a time, which prevents this problem.

How the Use of Critical Sections Can Impact Scalability

It's essential to understand that the improper or excessive use of critical sections can significantly impact the scalability of a system.

Scalability is a fundamental characteristic of a system that reflects its ability to handle an increasing workload or user demand. A scalable system should be able to efficiently accommodate more users, processes, or data without a proportional decrease in performance.

Critical sections are essential for ensuring data integrity and preventing race conditions in multi-threaded or multi-process environments. However, they inherently introduce contention among processes, as only one process can enter a critical section at a time. This contention can lead to several scalability challenges:

  1. Bottlenecks: When multiple processes contend for access to a single critical section, it can create a bottleneck. The contention can limit the system's ability to scale, as numerous processes are forced to wait for their turn to access the resource.

  2. Reduced Parallelism: Excessive use of critical sections can limit parallelism in the system. Processes may spend significant time waiting for access to shared resources, reducing the overall throughput of the system.

  3. Locking Overhead: The act of acquiring and releasing locks within critical sections incurs overhead. In highly concurrent systems, this overhead can become a significant performance inhibitor, making the system less scalable.

Advantages and Disadvantages of Critical Sections in Process Synchronization

Like any synchronization mechanism, critical sections come with their own set of advantages and disadvantages.

Advantages:

  1. Data Integrity: Critical sections provide a controlled environment where shared resources can be accessed by only one process at a time. This ensures that data is modified in a consistent and predictable manner, preventing conflicts and corruption.

  2. Simplicity and Ease of Use: Implementing critical sections is often straightforward, especially with the availability of synchronization primitives provided by operating systems and programming languages. This makes them a convenient choice for managing shared resources.

  3. Predictable Execution: Critical sections allow developers to specify exactly which parts of the code need to be executed exclusively. This level of control ensures that processes do not interfere with each other, leading to more predictable program behavior.

  4. Compatibility with Legacy Code: Critical sections are a well-established synchronization mechanism and are supported by a wide range of programming languages and operating systems. This makes them compatible with existing codebases and systems.

Disadvantages:

  1. Potential for Deadlocks: If not used with care, critical sections can lead to deadlocks, where processes are unable to proceed because they are waiting for resources held by other processes. Designing proper lock acquisition order is crucial to avoid this issue.

  2. Reduced Concurrency: While a process is in a critical section, other processes that require access to the same resource must wait. This can lead to reduced parallelism and overall system throughput.

  3. Overhead of Locking and Unlocking: Acquiring and releasing locks within critical sections incurs overhead. In highly concurrent systems, this can become a performance bottleneck.

  4. Complexity in Debugging: Debugging programs with critical sections can be more challenging, as issues related to race conditions and deadlocks may not always be immediately apparent.

Conclusion

  • The critical section solves the problem of race conditions that arise when shared resources are accessed without proper synchronization.
  • The critical section represents the segment of code that can access or modify a shared resource.
  • There can be only one process in the critical section at a time.
  • There are many solutions to critical section problems such as Peterson's solution, semaphore, etc.