Process and Thread Management in Computer Systems


In modern computing systems, process and thread management play a critical role in ensuring efficient execution of tasks. Operating systems are responsible for managing these processes and threads, ensuring that computational resources are allocated optimally. This assignment explores the key concepts of process execution, process management, thread management, and synchronization, providing insights into how these elements interact within an operating system.

A process is a program in execution, consisting of program code (the text section) and its current activity, including the program counter, registers, stack, and heap (Silberschatz et al., 2018). The operating system manages multiple processes simultaneously through process scheduling. A process moves through several states during execution. In the New state, the process is created but not yet ready to run. In the Ready state, it is loaded into memory and waiting for CPU allocation. The Running state occurs when the process is actively executing on the CPU. If the process is waiting for an event, such as an I/O operation, it enters the Waiting state. Finally, in the Terminated state, the process has completed execution or was forcibly stopped.

The Process Control Block (PCB) is a crucial data structure that stores essential process information, including the Process ID (PID), process state, program counter, CPU registers, memory-management information, and scheduling information such as priority levels and the time quantum used in scheduling decisions.

Processes can be single-threaded or multi-threaded. A single-threaded model consists of a single execution path per process. It is simpler to manage but inefficient for parallel tasks, making it suitable for basic command-line programs (Silberschatz et al., 2018). A multi-threaded model, by contrast, allows multiple threads to execute within a single process, sharing resources such as memory and file handles. This model can significantly improve performance, especially on multi-core processors; web browsers, for example, run each tab on its own thread (modern browsers often go further and isolate tabs in separate processes). Multi-threading is implemented using different models. User-Level Threads (ULTs) are managed by user-level libraries, making them fast and lightweight to create and switch, though the kernel is unaware of them, so one blocking call can stall the whole process. Kernel-Level Threads (KLTs) are managed by the operating system, allowing true parallelism across cores but incurring higher overhead from kernel-mode context switching.

The critical-section problem arises when multiple threads or processes attempt to access shared resources, potentially leading to race conditions where the final outcome depends on the order of execution. To prevent such issues, solutions must ensure mutual exclusion, progress, and bounded waiting. A well-known software solution is Peterson's Algorithm, which provides mutual exclusion for two processes using two boolean flags (flag[i] and flag[j]) and a turn variable (Shankar, 2012). A process signals its intent to enter the critical section by setting flag[i] = true and then yields priority by setting turn to the other process. It waits as long as the other process also wants to enter and holds the turn. Once granted access, it executes its critical section and, upon completion, sets flag[i] = false, allowing the other process to proceed.


Link: https://drive.google.com/file/d/1ItduKFXsuAHZLuGv0VNVtnxU_cXymABw/view?usp=sharing

References

Shankar, A. U. (2012). Lock using Peterson’s algorithm. Distributed Programming, 207-212. https://doi.org/10.1007/978-1-4614-4881-5_9

Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). Operating system concepts, 10e abridged print companion. John Wiley & Sons.
