StuBS
Synchronization objects and passive waiting

The synchronization objects to be implemented are fundamentally different from the mechanisms programmed so far, such as disabling interrupts or the guard. While the latter resolve contention between ISRs and the normal control flow, or between processors, at the lower levels, synchronization objects provide mutual exclusion between user threads. The guard is nevertheless required for the implementation, as there are critical sections within the synchronization objects themselves that must be protected with the existing mechanisms.

Ultimately, we are building a new synchronization mechanism based on the existing ones, but one that is better suited to threads because it allows passive waiting.

Synchronization on different levels of the system

Passive vs. active waiting

Already implemented primitives such as the spinlock or the ticketlock, which are used for mutual exclusion between processors, actively wait in a loop for a value to change. While they wait for the critical section to be released, no other control flow can take over, as the dispatcher that would realize the control flow change is itself dependent on the spinlock/ticketlock.
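
For comparison, the core of such an actively waiting lock can be sketched in a few lines. This is a generic sketch using a compiler atomic builtin, not necessarily the StuBS implementation:

// Sketch of a spinlock: the waiting core burns cycles in the loop until
// the lock holder writes 0 back (active/busy waiting).
class SpinlockSketch {
	volatile int locked = 0;
 public:
	void lock() {
		// atomically set `locked` to 1; retry as long as it was already 1
		while (__atomic_exchange_n(&locked, 1, __ATOMIC_ACQUIRE) == 1) {
			// spin -- no other control flow can take over this core here
		}
	}
	void unlock() {
		__atomic_store_n(&locked, 0, __ATOMIC_RELEASE);
	}
};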

When waiting passively on a semaphore or a mutex, on the other hand, the waiting thread can and must be displaced by other threads. The waiting thread is not put back into scheduling until the wait condition is fulfilled. This allows the system to make more progress than if the waiting thread were actively waiting for the condition. Under cooperative scheduling, waiting threads would have to relinquish control voluntarily and must not be scheduled again until the condition is fulfilled.

Active waiting can also be useful for user applications in some cases, as passive waiting provokes a context switch, which may be more expensive than (briefly) actively waiting.

Synchronization objects

Boolean synchronization (mutex)

A very simple synchronization object is the mutex. A mutex consists of a Boolean variable that is set when the critical section is entered and reset when it is exited. Access to the Boolean variable must be protected against race conditions. If the mutex is taken, the thread must wait until the owning thread releases it again.
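
A mutex with passive waiting could be sketched roughly as follows. This is only a sketch: the queue type and the Scheduler calls are placeholder names, and both bodies would have to run under the protection of the guard.

// Sketch of a mutex with passive waiting (placeholder names, guard omitted).
class Mutex {
	bool locked = false;
	Queue<Thread> waiters;           // threads sleeping on this mutex
 public:
	void lock() {
		if (locked) {
			waiters.enqueue(Scheduler::active());
			Scheduler::block();      // leave the ready list; another thread runs
		} else {
			locked = true;
		}
	}
	void unlock() {
		Thread *next = waiters.dequeue();
		if (next != nullptr) {
			Scheduler::wakeup(next); // hand the mutex directly to a waiter,
			                         // so `locked` stays true
		} else {
			locked = false;
		}
	}
};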

Semaphores

A more general concept than the mutex is the semaphore; semaphores can also be used to express Boolean mutexes. A semaphore contains an integer counter that can be counted down (Semaphore::P(), "Prolaag") and up (Semaphore::V(), "Verhoog"). If the counter is zero when entering the critical section (P()), the calling thread must wait. When leaving the critical section (V()), the counter is incremented and a sleeping thread, if any, is woken up. The initial value of a semaphore can be chosen freely.
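
Conceptually, the two operations could look like the following sketch. The names are placeholders and the waiting-room handling as well as the guard protection are simplified; this is not the literal StuBS code.

// Conceptual sketch of the semaphore operations.
void Semaphore::P() {
	if (counter == 0) {
		// nothing available: the calling thread goes to sleep in the
		// waiting room of this semaphore until someone calls V()
		Scheduler::block(this);
	} else {
		counter--;
	}
}

void Semaphore::V() {
	Thread *waiter = dequeue();      // take a sleeping thread, if any
	if (waiter != nullptr) {
		Scheduler::wakeup(waiter);   // the counter stays at 0; the wakeup
		                             // passes the resource on directly
	} else {
		counter++;
	}
}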

There are three possible uses for semaphores:

  • Protection of a critical section
  • Signaling
  • Limiting resource requirements

Protection of a critical section (use as mutex)

If the start value of the semaphore is initialized to 1 and shared between threads, it can be used to protect critical sections, similar to a Boolean mutex.

// shared semaphore
Semaphore s(1);
s.P();
// critical section
s.V();

Signaling

A semaphore can also be used to signal the arrival of data. For example, the keyboard driver may have just placed a new character in the character buffer. It then increments the semaphore by one to indicate to the user application that there is a new character in the buffer. The user thread is added to the ready list, scheduled later, and can read the character. On one of its next runs it will block again in the P() operation if there are no more characters in the buffer.

// shared semaphore
Semaphore s(0);

// user thread
void usr_thread() {
	while (true) {
		s.P();
		// read character
	}
}

// keyboard driver
void keyboard_driver(char newChar) {
	// put character in buffer
	s.V();
}

Limiting resource requirements

A bounded buffer can be used for communication between threads. A semaphore is then pre-initialized to the size of the buffer. The writer executes P(), decrementing the counter that indicates the number of free slots remaining; if the buffer is full, the writer must wait. The reader uses V() to release buffer slots that have already been read.

#define N 42

int buffer[N];
int curReadPos = 0;
int curWritePos = 0;

// shared semaphore, counts the free slots in the buffer
Semaphore s(N);

// reader
int reader() {
	int x = buffer[curReadPos];
	curReadPos++;
	s.V();  // one slot has become free again
	return x;
}

// writer
void writer(int x) {
	s.P();  // wait until a slot is free
	buffer[curWritePos] = x;
	curWritePos++;
}

Sleeping for a given time

Threads sometimes want to do nothing for a certain period of time. This requires the Bellringer, which manages a list of Bells. Bells are waiting rooms.

The Bell is created by the thread itself, but the Bellringer must check it on every timer interrupt. Since checks are much more frequent than insertions, it makes sense to design the data structure so that the check is very fast. The Bell can be placed on the stack of the thread being put to sleep, because the Bell object is no longer needed after waking up anyway.
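
Putting a thread to sleep could then look roughly like the following sketch; the exact Bell/Bellringer interface is an assumption and may differ in StuBS.

// Hypothetical sketch: let the calling thread sleep for `ms` milliseconds.
void sleep(unsigned int ms) {
	Bell bell;                  // lives on the caller's stack -- sufficient,
	                            // since it is only needed until the wakeup
	bellringer.job(&bell, ms);  // register the Bell with the Bellringer
	bell.sleep();               // leave scheduling until the Bell rings
}                               // after waking up, the Bell goes out of scope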

Idle

If we keep putting more and more threads to sleep, it can happen that there are no threads left in the system that can be executed. However, a processor always needs something to do, or it must itself be put to sleep (hlt).

An idle thread should therefore be created for each CPU to keep the CPU busy when there is nothing more to do. Idle threads must never be placed in the ready list!

void idle() {
	while (1) {}
}

An infinite loop would work, but it kills polar bears. Therefore, we prefer to put the processor to sleep so that this core consumes less power.

Between loading the idle thread and the `hlt` instruction, a thread becomes executable

What happens, however, if a thread becomes ready between the moment the idle thread is selected and the moment the hlt instruction is executed? We would not notice the thread's change of state, which leads to a lost-wakeup situation: the processor goes to sleep even though the Bellringer has just woken up a thread.

To solve the problem, sti and hlt are executed directly after each other: sti enables interrupts only after the following instruction, so the two instructions are effectively executed atomically and no interrupt can intervene between sti and hlt. An interrupt that becomes pending in the meantime still wakes the processor from the hlt.
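
An idle loop exploiting this property could be sketched as follows. The interrupt helpers and the ready-list check are placeholder names; CPU::idle() is assumed to execute sti and hlt back to back, as in the tickless example below.

// Sketch: check for work with interrupts disabled, then halt atomically.
void idle() {
	while (true) {
		interrupts_disable();        // cli -- no wakeup can slip in from here on
		if (ready_list_empty()) {
			CPU::idle();             // sti; hlt -- an interrupt arriving in
			                         // between still terminates the hlt
		} else {
			interrupts_enable();     // sti
			Scheduler::resume();     // a thread became ready: run it
		}
	}
}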

Tickless kernel

Interrupts wake the CPU up again; this also applies to the timer interrupt. The timer in particular, however, often causes unintentional wake-ups, as the CPU only wakes up to discover that there is still nothing to do. For this reason it makes sense to implement a tickless kernel that switches the timer off in idle mode before the CPU is put to sleep.

void idle() {
	watch.block();
	CPU::idle();
	watch.unblock();
}

Unfortunately, this approach does not quite work as is, because the Bellringer also needs the timer to function correctly. This case must therefore be handled separately.
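
One possible approach is to suppress the timer only if the Bellringer currently has no pending bells. The following is a sketch; the bell_pending() query is an assumption about the Bellringer interface.

// Sketch: only switch the timer off if no Bell is waiting to be rung.
void idle() {
	if (!bellringer.bell_pending()) {  // assumed Bellringer query
		watch.block();                 // tickless: no timer interrupts while idle
		CPU::idle();                   // sti; hlt
		watch.unblock();               // re-enable the timer after waking up
	} else {
		CPU::idle();                   // a Bell is due soon: keep the timer running
	}
}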