StuBS
This is by no means a complete introduction to the C++ programming language, but rather a glimpse into certain topics which are relevant for this lecture's exercises. The examples below try to give you a deeper grasp of C++ internals by taking a look at C++ code examples, terminal output and assembly generated by the compiler.
For a more complete introduction to C++ refer to cplusplus.com.
Every variable definition reserves a piece of memory and associates a name as well as a data type with it. We can access this memory via its name. The memory's data type, on the other hand, determines how the region of memory is to be interpreted by the compiler. In order to clarify these rather abstract notions, let us take a look at the compiler's output (`g++ -m32 -fno-PIE test.c -o test`) generated from the following piece of C++ code:
First of all, we use `nm` to get a list of global objects, their addresses in memory and their sizes. Since we are only interested in the variables defined above, we indicate omitted parts of `nm`'s output by `...`.
The `B` in the program's output above indicates that both variables are part of the BSS segment. They are 4 and 8 bytes large. All variables in the BSS segment are initialized to zero at program start; this is one of the fundamental properties of the BSS segment. We see that the variable `integer` is a memory region of 4 bytes starting at address 0x2028. However, the output does not contain any information regarding its data type: the data type is merely used by the compiler to generate the correct instructions. From this example, we see that C and C++ do not ensure type safety the way other languages do. Let us take a look at the instructions generated from our example:
We can see that the function `main` starts at address 0x52d. This is not surprising, since we have already seen that address in `nm`'s output above. The first instruction adds 1 to the value stored at address 0x2028. This corresponds to the C++ line `integer += 1;`. The next three instructions increment `floatingpoint`'s value by 1. This code is more complex, as it uses the processor's floating-point unit. Afterward, we move the value `0` into `eax`; this is the function's return value.
Passing objects to functions by value can be slow and may consume a lot of resources, and it is a rather inflexible approach. We can achieve better results by introducing a small indirection: instead of the objects themselves, we pass pointers to objects to our functions.
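The function discussed below is not reproduced in this extract; a minimal sketch consistent with the text (the function takes an `int *` and increments the pointed-to value):

```cpp
// Increment the int that number points to; the caller passes an
// address, e.g. increment(&integer).
void increment(int *number) {
    *number += 1;  // dereference first, then modify
}
```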
This function receives a pointer to a memory location that holds a variable of type `int`. Every time we want to access the actual value stored at that memory location, we first have to dereference the pointer (`*number`). Let us take a look at how this function is translated to assembly:
The first thing you might have noticed is the peculiar function name. This is caused by a process called "C++ Name Mangling". The C++ compiler encodes each function's parameter types (`void` and `int *` in our case) into the function's name. The function's first instruction loads the pointer `number` from its location on the stack into the register `%eax`. The next line uses the register-indirect addressing mode (`(%eax)`) to access the value stored at the memory location that `%eax` points to. That means if we called the function like this: `increment(&integer)`, then `%eax` would contain the value 0x2020.
One large problem when working with pointers is the fact that we do not get any guarantee that the pointer actually points to a valid location in memory. The common way for functions to indicate an error state when they normally would return a pointer is to return the so-called null pointer (i.e. `NULL` in C or `nullptr` in modern C++). The C standard specifies that this special value is actually the value 0, hence its name.
To prevent these pointer-related problems, C++ allows us to address objects in a different, safer way, namely via references. A reference is an alias for a certain object, meaning that a reference always refers to a valid object. Normally, compilers implement this high-level language feature via pointers, but the C++ language specification does not restrict them to this implementation.
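The reference-based variant of the function is missing from this extract; a minimal sketch consistent with the description below:

```cpp
// The parameter is an alias for the caller's object, so no explicit
// dereference is needed inside the function.
void increment(int &number) {
    number += 1;
}
```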
As you can see, a reference is denoted by the `&` character. Unlike pointers, we do not have to dereference references, as they are aliases of their respective objects; that is, the name `number` behaves just as if it were the actual object that the reference refers to. If we take a closer look at the assembly generated by the compiler, we can see that the compiler chose to implement the reference via a pointer. It produces the exact same output as in our previous pointer-based example:
As you might already know, C++ is an object-oriented programming language. The first steps in that direction were already taken in C, which allows us to group multiple variables of varying types into composite data types. The following C++ data structure definition allows us to create and work with objects of type `foobar`.
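The original definition is not shown in this extract; a minimal sketch (the member set is an assumption, only the field `foo` is mentioned later in the text):

```cpp
// A composite data type grouping two variables:
struct foobar {
    int foo;
    int bar;
};
```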
Additionally, C++ allows us to define methods of a structure. Methods are functions that work on the constituents of the data structure. When accessing variables by their name inside a method, the compiler first checks the surrounding structure for a matching variable, before interpreting the name as a global variable.
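The example discussed below is missing from this extract; a sketch of the described pattern (`calc` declared inside the struct, defined outside — the method body is an assumption):

```cpp
struct foobar {
    int foo;
    int calc(int x);  // declaration inside the structure
};

// Definition outside the structure; the unqualified name foo refers
// to the member of the object the method is called on.
int foobar::calc(int x) {
    return foo + x;
}
```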
In this example, we can see that the method `calc` is declared inside the structure's declaration. Afterwards, it is defined (i.e. filled with life) outside of the structure. It is possible to define the method directly in the structure declaration (similar to classes in Java); however, separating a class's interface and implementation is more common in the C++ ecosystem.
The method `calc()` accesses the field `foo`, meaning the function seems to have some sort of implicit parameter that points to a specific object, and `calc()` is executed in this object's context. The function does in fact have such an implicit parameter, and we can access it using the name `this`. The parameter is a pointer to the respective object, so in the case of the method `calc()` it is of type `foobar *`. We can also explicitly tell the compiler to access the respective object's field `foo` by writing `this->foo`. Now let us take a look at the assembly generated by the compiler:
You might have noticed that the object-oriented abstraction is realized simply by passing the `this` pointer as the routine's first argument. All other parameters are shifted by the size of the `this` pointer, meaning that the method's actual argument `x` is located at `0x8(%esp)`. This is equivalent to a regular function with the signature `int calc(foobar *this, int x)`. The data structure's member `foo` is simply accessed by dereferencing the `this` pointer (`(%eax)`).
However, the coupling of code and data is not the only characteristic of object-oriented programming. Another feature that is often associated with object-oriented languages is *inheritance*. It allows a class to access methods and members of the class it inherits from, and to extend that base class's functionality by adding new methods or member variables.
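The example discussed below is not reproduced here; a sketch of the derived class (the added member and the method body are assumptions, only the class names and the public inheritance come from the text):

```cpp
struct foobar {
    int foo;
    int calc(int x) { return foo + x; }
};

// foobar_premium inherits all of foobar's members and methods:
struct foobar_premium : public foobar {
    int extra;  // a new member added by the derived class
};
```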
This example shows a class `foobar_premium`, which extends our previously defined class `foobar`. It inherits all of `foobar`'s members and methods; the keyword `public` ensures that all of `foobar`'s members and methods that are declared `public` are `public` in `foobar_premium` as well. This is similar to Java, with a few important exceptions:

- Methods are not `virtual` by default, meaning they cannot be overridden by derived classes (this is equivalent to Java's `final` keyword).
- We can use the `private` keyword to make all inherited members and methods private in our derived class.

You might wonder why we are talking about classes when the example code uses the keyword `struct`. There also exists the keyword `class` in C++, but the meaning of both keywords is almost identical. The only difference is that all members of `struct`s have `public` visibility by default, while the default visibility of `class`es is `private`.
C/C++ programs generally have three areas where data may be located:

1. Global variables: their memory is reserved before the `main()` function is called. They are filled with zeros if they have not been initialized with a certain value in the program's source.
2. Local variables: they are allocated on the stack, and their lifetime ends once their surrounding function returns.
3. Dynamically allocated memory: programs may request untyped (`void *`) memory using the library function `malloc(3)`. The library itself utilizes the system calls `brk(2)` & `mmap(2)`. When programming in C++ we usually do not call `malloc(3)` directly, but use the language's `new` operator to create new objects.

However, we cannot rely on Linux's libc implementation when building an operating system and, thus, do not have dynamic memory management at our disposal. We have to rely on memory types 1 and 2 during this lecture. We will revisit this topic and implement dynamic memory management in BST.
When working with local variables, we have to be careful about the size of our objects. The stack of our operating system is only 4096 bytes large, and one common mistake is to forget that fact and allocate an object that is too large, leading to a stack overflow:
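The original example is missing from this extract; a minimal sketch of the mistake (the function name is an assumption):

```cpp
// A local array larger than the 4096-byte stack of our operating
// system; writing to it corrupts whatever lies below the stack.
void overflowing_function() {
    char buffer[8192];  // does not fit into a 4096-byte stack
    buffer[0] = 1;
}
```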
This will not cause our operating system to crash, but it will corrupt any data structure that is located directly below the stack in memory. This in turn may lead to nondeterministic behaviour of your operating system, as it fills other global variables with arbitrary data.
Another common mistake would be to return the address of a local variable. As we have discussed earlier, the lifetime of a local variable allocated on the stack ends once its surrounding function returns. That means that once the function returns the returned address is no longer valid, which also may result in memory corruption bugs. Normally, the compiler will find these errors, but there exist cases where it does not, so be careful when working with local variables.
When implementing an operating system, we often have to manipulate the hardware's state directly by writing to its control registers. The hardware's designers often try to implement their hardware as efficiently as possible, which may lead to unrelated control bits being joined into a single byte. One example from our first exercise is the CGA graphics card. Each character on the screen (80x25 characters) is programmed via 2 bytes. The first byte contains the ASCII character that should be displayed, while the second byte contains the character's fore- and background color.
Bit | 7 | 6–4 | 3–0
---|---|---|---
Meaning | Blink | Background Color | Foreground Color
We could assemble these configuration bytes using C or C++'s bitwise operations. These are the bitwise AND (`&`), OR (`|`), NOT (`~`) and XOR (`^`), as well as both shift operations (`<<`, `>>`). Using these, we could write a wrapper function that does the job of assembling the two bytes for us, like in the example below:
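The original wrapper is not reproduced here; a sketch under the assumption that it takes the character plus the three attribute arguments (the name and exact signature are assumptions):

```cpp
#include <cstdint>

// Mask the three attribute arguments to their valid ranges, shift
// background and blink into place, and combine everything into the
// two-byte cell value (character in the low byte, attribute above it).
uint16_t cga_cell(uint8_t character, uint8_t foreground,
                  uint8_t background, uint8_t blink) {
    foreground &= 0x0f;  // foreground color: bits 3-0
    background &= 0x07;  // background color: 3 bits, shifted to 6-4
    blink      &= 0x01;  // blink: 1 bit, shifted to bit 7
    uint8_t attribute = static_cast<uint8_t>(
        (blink << 7) | (background << 4) | foreground);
    return static_cast<uint16_t>((attribute << 8) | character);
}
```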
We first use the bitwise AND with a bitmask on all three arguments to clear all bits outside of the variable's valid range. Afterward, we shift the bits of background
and blink
to their correct position and combine all three arguments into a single two-byte value using bitwise OR.
There exist common combinations of these bitwise operators for adding, removing or comparing certain bits. They often make use of a bitmask which has all bits set to 1 that one is interested in:
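The idioms referred to can be sketched as follows (the mask value is an arbitrary example):

```cpp
#include <cstdint>

// MASK has all bits set to 1 that we are interested in (here: bit 3).
constexpr uint8_t MASK = 1u << 3;

uint8_t set_bits(uint8_t value)    { return value | MASK;  }       // add
uint8_t clear_bits(uint8_t value)  { return value & ~MASK; }       // remove
uint8_t toggle_bits(uint8_t value) { return value ^ MASK;  }       // flip
bool    test_bits(uint8_t value)   { return (value & MASK) != 0; } // compare
```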
Manipulating bits like this is fine for small applications. If we use these operations on larger projects, however, they tend to make the code impossible to read since it is full of magic constant values. To solve this issue, C and C++ offer bit fields that allow us to access single bits in a data structure by a name, just like regular structure members.
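The bit field referred to below is not reproduced in this extract; a sketch for the CGA attribute byte (the member names follow the discussion; the `packed` attribute is the GCC/Clang spelling, matching the compilers used in this lecture):

```cpp
#include <cstdint>

// Each member names a range of bits instead of a whole byte.
struct Attribute {
    uint8_t foreground : 4;  // first member: least significant bits 3-0
    uint8_t background : 3;  // bits 6-4
    uint8_t blink      : 1;  // bit 7
} __attribute__((packed));   // no padding bits between the members
```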
On most compilers, the structure's first member (`foreground`) will be stored in the least significant bits (i.e. starting at bit 0). Furthermore, the additional attribute `packed` prevents the compiler from adding padding bits. Padding bits are additional bits inserted by the compiler between the structure's actual member variables so that the latter are aligned in memory in a way that lets the processor access them faster, yielding better performance. If we use our newly created data structure, the code from the previous example becomes much more legible:
The keyword `volatile` is not a magic wand that miraculously lets the compiler fix all race conditions in a program. Its meaning becomes clearer if we understand that the compiler does not read a variable's value from memory each time it is accessed. Accesses to memory are slow compared to working with the processor's registers. Therefore, the compiler normally generates code that reads the variable's value from memory into a register, performs all relevant modifications using the value in the register, and only then writes the resulting value back to memory. Here is a short example:
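The original listing is not reproduced here; a sketch of the situation (the variable name follows the text, the concrete operations are assumptions):

```cpp
// Without volatile, the compiler may keep foo's value in a register
// for all three modifications and write it back to memory only once.
int foo;

void update() {
    foo += 1;
    foo *= 4;
    foo -= 2;
}
```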
Here, the variable `foo`'s value is loaded from memory. During the next three instructions, its actual value is kept in register `eax`, and modifications done by these operations are not visible in memory. Only after the final `mov` instruction is the new value visible in memory.
The keyword `volatile` forbids the compiler from performing these optimizations. If a variable is declared `volatile`, the compiler has to read the value from memory, perform one operation, and write the updated value back to memory.
One of C++'s most controversial features, apart from templates, is operator overloading. This feature allows you to define the semantics of standard operators, like `a + b` for example, when used with objects of your classes. In order to modify the behavior of comparing two objects using the `==` operator, for example, you simply have to define a function named `operator==`:
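The original listing is missing from this extract; a sketch of the two-argument form (class name and member are assumptions). Note that this non-member form can only use `foo`'s public interface:

```cpp
class foo {
public:
    int value;
};

// Both operands are passed explicitly; no implicit this argument.
bool operator==(const foo &lhs, const foo &rhs) {
    return lhs.value == rhs.value;
}
```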
Since there exist other operator implementations, C++'s regular rules for function overloading decide which implementation is used for a certain operation. We may also overload an operator by specifying only one instead of two arguments:
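The single-argument variant is missing from this extract; a sketch (class name and member are assumptions). Defined as a method, the left-hand operand becomes the implicit `this` argument, so private members are accessible:

```cpp
class foo {
    int value;  // private
public:
    explicit foo(int v) : value(v) {}

    // The left-hand operand is *this; both private fields are
    // accessible here, unlike in a non-member implementation.
    bool operator==(const foo &other) const {
        return value == other.value;
    }
};
```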
In that case, the operator's left-hand side will be the implicit `this` argument. Since the operator is defined as a method of class `foo`, it may access all of its private fields, which would not be allowed in the previous implementation.
When it comes to I/O, the C++ standard library uses this feature rather curiously: it overloads the shift operator `<<` to send numbers and strings to an output stream:
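The original listing is not shown in this extract; a sketch using a string stream (`std::cout` behaves the same way; the helper name is an assumption):

```cpp
#include <sstream>
#include <string>

// Each operator<< call returns a reference to the stream, which is
// exactly what makes this chaining possible.
std::string format_answer(int value) {
    std::ostringstream stream;
    stream << "The answer is " << value;
    return stream.str();
}
```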
Every `operator<<` call returns a reference to the output stream.
In Java, all of a class's methods are `virtual`, which means that every call to a method is dispatched dynamically. Another way of dispatching method calls is static dispatch. In this chapter, we take a closer look at virtual and static dispatch in C++.
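The code discussed below is missing from this extract; a sketch of the situation (the factory function `get_object` and the method body are assumptions — the point is that the call site only knows the static type):

```cpp
class Object {
public:
    int method() { return 1; }
};

class DerivedObject : public Object {};

// Hypothetical factory: in a real program the dynamic type could come
// from anywhere; the call site below cannot tell which one it gets.
Object *get_object() {
    static DerivedObject instance;
    return &instance;
}

void use() {
    Object *obj = get_object();  // static type of *obj is Object
    obj->method();               // dynamic type is not visible here
}
```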
The code above first creates an object which is assignable to a variable of type `Object *`. So it is either an object of class `Object`, or its class inherits from `Object`. Whether `obj` points to an `Object` or to a `DerivedObject` depends on its dynamic type. The type of the variable that holds the reference to the object determines `obj`'s static type. This means that while `*obj` has the static type `Object`, we cannot directly discern its dynamic type from the code above.
The difference between dynamic and static dispatch is whether the static or the dynamic type of an object is used by the compiler to determine the method being called. The compiler will generate a plain `call` instruction if the method call `->method()` is dispatched statically:
First, the pointer to the object is pushed onto the stack and then the method is called. Notice that the method name has been subject to C++'s name mangling.
If the program should use dynamic dispatch instead, it first has to figure out the referenced object's type. This also means that the object's type information has to be stored somewhere. If not a single virtual method is defined in a class, classes and structs are mapped directly to memory. Here is an example of how our class `Object` might be defined:
Here, `method()` will be dispatched statically, and there is no need for type information at runtime. Therefore, C++ will use exactly 12 bytes (4 bytes for each of the 3 integers) for an object of this class. The class's attributes (`a`, `b` and `c`) are stored in memory in the order in which they appear in the class definition. Therefore, the following holds:
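The original illustration is not reproduced here; a self-contained check of the claim (`Object` is redefined with the three integer attributes named in the text; the offsets assume a 4-byte `int`):

```cpp
#include <cassert>
#include <cstdint>

class Object {
public:
    int method() { return a; }
    int a, b, c;
};

void check_layout() {
    Object obj;
    uintptr_t base = reinterpret_cast<uintptr_t>(&obj);
    assert(reinterpret_cast<uintptr_t>(&obj.a) == base + 0);  // a first
    assert(reinterpret_cast<uintptr_t>(&obj.b) == base + 4);  // then b
    assert(reinterpret_cast<uintptr_t>(&obj.c) == base + 8);  // then c
}
```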
If we understand what the compiler does, we could even cast a fitting piece of memory to an `Object` pointer. We assume that the code is compiled for a little-endian machine in the following example:
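The example itself is missing from this extract; a sketch of such a cast (`Object` redefined here; little-endian assumed, as stated above — note that this is formally undefined behaviour and serves illustration only):

```cpp
#include <cstdint>

class Object {
public:
    int method() { return a; }
    int a, b, c;
};

// 12 bytes of raw memory holding the little-endian values 1, 2 and 3:
alignas(Object) unsigned char memory[12] = {
    1, 0, 0, 0,   // a == 1
    2, 0, 0, 0,   // b == 2
    3, 0, 0, 0,   // c == 3
};

Object *fake = reinterpret_cast<Object *>(memory);
```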
In order to retrieve the dynamic type information, the compiler has to store this information at the object's memory location. Most compilers will add a pointer to the class's Virtual Function Table at the beginning of the object's memory. If two objects have the same dynamic type, then their respective pointers will point to the same table.
Let us take a look at this example using GDB at the specified breakpoint.
The compiler did in fact insert a pointer to `Object`'s vtable (`<vtable for Object+8>`) at the beginning of its memory. The vtable itself contains a pointer to `Object::method()`. The pointer to the vtable directly depends on the object's dynamic type and, therefore, this function table may be used to implement the dynamic dispatch. Here is the compiler-generated assembly for the dynamic method call (`%eax` holds the `Object *`):
Every virtual method is assigned a slot in the virtual function table. We can compile our files using `g++ -fdump-class-hierarchy` to see what information the compiler adds to these tables. Here is the compiler's output:
As you may have realized already, the vtable actually starts 8 bytes before the address pointed to by the virtual pointer stored in our objects. The additional space is used for a pointer to the class's runtime type information (RTTI). Using our newly acquired knowledge, we might be able to create a "fake" VTable:
Compiling the code as a 32-bit binary (using the compiler flag `-m32`), we get the expected output: `X: 1 2 3`. Finally, here is a quick summary of what we have learned so far:
Links:
In our previous chapters, we have already considered how compound data types (e.g. `struct`, `union` and `class`) are mapped into memory. We also saw that members of simple data structures, or "Plain Old Data Structures" (PODs) as they are sometimes called, are simply arranged in memory in their order of appearance (from top to bottom) in ascending order, i.e. from low to high addresses. Furthermore, we have learned that pointers to a class's `virtual` methods are stored in the class's virtual function table and that each object contains a pointer to this table (vptr). What we have not covered so far is inheritance. Therefore, let us compile the example below using `clang -Xclang -fdump-record-layouts`:
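The example itself is missing from this extract; a sketch of the classes discussed below (member names follow the text, their types are assumptions):

```cpp
class Base {
public:
    int a;
    int b;
};

// Single inheritance: Base's members come first in memory, followed
// by the members Derived adds itself.
class Derived : public Base {
public:
    int cc;
};
```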
From the program's output, we can see that the class `Base` is mapped into memory just the way we would expect. The left side in the text below shows the offsets from the base address of a pointer to an object of the class:
Both of its integer members (`a` and `b`) are simply mapped into memory one after the other. If we take a look at the class `Derived`, on the other hand, we notice that `Base`'s members have been prepended to the attribute `cc` defined in the derived class itself. Therefore, `cc` lies at offset 8:
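The record-layout dump is not reproduced here; a self-contained check of the stated offsets (classes redefined; 4-byte `int` assumed):

```cpp
#include <cassert>

class Base {
public:
    int a;  // offset 0
    int b;  // offset 4
};

class Derived : public Base {
public:
    int cc;  // offset 8: Base's members are laid out first
};

void check_offsets() {
    Derived d;
    char *base = reinterpret_cast<char *>(&d);
    assert(reinterpret_cast<char *>(&d.a) - base == 0);
    assert(reinterpret_cast<char *>(&d.b) - base == 4);
    assert(reinterpret_cast<char *>(&d.cc) - base == 8);
}
```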
Storing objects in this way has a huge advantage: since all member variables inherited from `Base` are at the exact same offsets as they would be in an object of class `Base`, we can treat any object of type `Derived` just like an object of type `Base`. Additionally, if we add a `virtual` method to `Base`, each `Derived` object will be able to use the vptr to point to its own virtual function table.
As long as we inherit from a single base class only, this is all you need to know about the memory layout of classes. If we inherit from multiple classes, however, the situation is a bit more complicated. Here is an example without the use of `virtual` methods:
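The example is missing from this extract; a sketch of the classes discussed below (`Base2`'s members and the derived class's own member are assumptions; the 8-byte `Base` part matches the offset mentioned later):

```cpp
class Base {
public:
    int a;
    int b;
};

class Base2 {
public:
    int c;
    int d;
};

// Multiple inheritance: the Base part comes first, then the Base2
// part, then Derived's own members.
class Derived : public Base, public Base2 {
public:
    int cc;
};
```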
This is the memory layout of the derived class when it inherits from both `Base` and `Base2`:
You might have noticed that the memory layout of its base classes is mapped one-to-one into the derived class's memory. However, we cannot use a pointer to an object of type `Derived` as a pointer of type `Base2` directly. We have to adjust the address that the `this` pointer points to in order to fix this. This is exactly what the compiler does when calling methods of the base class. Let us take a look at the function `Base2 *foo(Derived *x) { return x; }` to illustrate this:
To convert the `Derived *` `x` to a `Base2 *`, the compiler increases the pointer `x` by 8, which is exactly the offset of the `Base2` part within `Derived`. The instructions `test; cmove` handle the special case that `x` is a null pointer: in that case, the returned value should still be a null pointer and not a pointer to address 8.
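The pointer adjustment described above can be verified in plain C++ (a self-contained sketch; classes redefined here, and the offset of 8 assumes two 4-byte `int`s in `Base`):

```cpp
#include <cassert>

class Base  { public: int a; int b; };
class Base2 { public: int c; int d; };
class Derived : public Base, public Base2 { public: int cc; };

// The implicit upcast adjusts the pointer to the Base2 subobject.
Base2 *foo(Derived *x) { return x; }

void check_adjustment() {
    Derived d;
    // The Base2 part starts 8 bytes into the Derived object:
    assert(reinterpret_cast<char *>(foo(&d)) -
           reinterpret_cast<char *>(&d) == 8);
    // A null pointer stays null instead of becoming a pointer to 8:
    assert(foo(nullptr) == nullptr);
}
```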
These implicit offset changes are the reason why there exist multiple cast operations in C++. If classes inherit from multiple base classes and use virtual methods, the situation becomes even more complex. We will not cover this here, but if you are interested you can take a look at the links provided below.