StuBS
A process is a program in execution, while threads (sometimes also called lightweight processes) are the unit of scheduling: the scheduler in the operating system only handles threads. In this course, we do not distinguish between threads and processes, because the process concept usually implies isolation between processes. This isolation will only be implemented in the subsequent course (Betriebssystemtechnik).
A thread is therefore the unit that the dispatcher brings to execution and thus the actual active object in the system. The resources consumed during execution are the processors. In "normal" general-purpose operating systems there are usually more threads than processors, so the limited processors must be multiplexed among all threads. As with most abstractions in the operating system, the user of the resource, in this case the thread, should not notice that it is granted only partial access: despite the sharing of processors, it should appear to each thread as if it had a processor to itself. A processor, on the other hand, can only execute a single instruction stream at a time (we leave out hyperthreading here for simplicity, as it makes no conceptual difference).
So we virtualize processors and introduce the concept of virtual processors: Each thread gets its own. This virtual processor executes the code of the thread in exactly the same way as the real CPU would, albeit somewhat slower and with interruptions. The virtual processors are then distributed to the processor cores.
This distribution works through time multiplexing, i.e. sharing a resource by switching over time: user A may use the resource for a period of time, then must relinquish it, and user B may use it exclusively for its time unit. In this case, the physical CPU is time-multiplexed among the virtual processors. By switching quickly between the virtual CPUs, and thus cycling through the threads, they are executed pseudo-parallel: to the end user it looks as if several activities are running simultaneously. In reality, however, each activity runs on the processor on its own and is only replaced by another from time to time. Ultimately, the instructions of two threads are interleaved on the real CPU, so that sometimes instructions of one thread are executed and then those of the other.
While threads and virtual processors are conceptually different, both concepts can technically be treated together, since exactly one thread is always assigned to a vCPU; from now on, only the term thread will be used for switching. From a technical point of view, the state of the processor must be saved when switching between threads: the register values are saved on the stack or in a dedicated memory area, and finally the stack pointer itself is saved in memory. The stack pointer of the target thread can then be loaded. Finally, the register values are restored from the target stack (or target structure) and the instruction pointer is set (using the ret instruction). Switching is to be implemented in the context_switch() function.
In pseudo code it would look like this:
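A possible sketch (register names assume x86-64 and its callee-saved registers; the exact set and layout depend on the ABI, see below):

```
context_switch(current, next):
    push callee-saved registers        ; e.g. rbx, rbp, r12-r15 on x86-64
    current->stack_pointer = rsp       ; save stack pointer of the old thread
    rsp = next->stack_pointer          ; load stack pointer of the new thread
    pop callee-saved registers         ; values saved when `next` was suspended
    ret                                ; jumps to next's saved return address
```

The caller-saved registers do not need to be saved explicitly, since the compiler already treats the call to context_switch() like any other function call.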
More information on the registers to be saved on x86 can be found in x86-ABI: Register and stack layout.
When implementing an operating system, it is important to separate policy and mechanism. The switching mechanism, i.e. the piece of code that exchanges the register values, does not care which scheduling strategy is implemented with its help.
There are two actors in thread switching: Firstly, we have the dispatcher, which implements the mechanism. It is responsible for switching two known threads, but it has no knowledge of the other threads present in the system. The scheduler on the other hand decides which thread is allowed to run next. While the dispatcher is only implemented once per architecture, there can be any number of scheduling strategies for selecting the next thread. The latter are even architecture-independent. The distinction between strategy and mechanism therefore makes it possible to port the operating system kernel to new hardware more quickly and to implement further (better) scheduling strategies independently of the architecture.
With context_switch(), a mechanism is available that can switch from one control flow to another. However, this mechanism assumes that the thread has already been running because it expects certain values on the target stack. So how should threads that have never run be handled?
Because the first-time loading of threads is a special case in the dispatcher, the dispatcher could be extended to include such a check: it would then have to test, for each thread to be dispatched, whether it has already run once. This increases complexity and also reduces performance at runtime (remember: switching happens very frequently to maintain pseudo-parallelism).
Therefore, the stack should already be created when the thread object is created and prepared in such a way that context_switch can use it to switch to the new thread. What values should then be on the stack? Specifically, the initial loading of a thread requires that the restore code of context_switch finds plausible values for the saved registers on the new stack, and that the return address consumed by ret points to the thread's entry function.
Switching from the boot code to the first thread on a CPU is a different matter and should not be confused with this: context_launch switches to the very first thread, because the context of the source coroutine is no longer needed. All further context switches (including those to new threads) are then performed with context_switch.
A thread can give back control voluntarily, e.g. by calling certain syscalls. Alternatively, control over the CPU can also be withdrawn from the thread, for example by setting a timer that causes the operating system to switch (this will be covered in the next exercise).
With cooperative switching, threads hand over control independently. Incorrectly programmed threads can monopolize the system if they never give back control or get stuck in an infinite loop. The operating system has no way of taking control of the processor away from these threads. Windows 3.1, for example, had a cooperative switching mechanism.
Preemptive switching is usually characterized by a regular timer interrupt that interrupts the currently executing thread and then causes the operating system to replace (preempt) it with another thread. The preempted thread has no decision-making power: the system retains full control, even if threads deadlock or get stuck in an infinite loop.
Depending on the operating system, one strategy or the other is selected. In real-time systems, for example, cooperative switching is quite common: threads can be trusted there, as they come from the same code base and often from the same development team, and the necessary calculations about runtime behavior are greatly simplified without the regular interrupt. Operating systems in the desktop area, on the other hand, usually use preemptive switching, as potentially malicious processes can exist, which often do not even have to be known when the system starts.