
DYNAMIC MEMORY MANAGEMENT IN C++

Martin Sperens

Computer Game Programming, bachelor's level 2019

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


DYNAMIC MEMORY MANAGEMENT IN C++

Martin Sperens

Luleå University of Technology – Campus Skellefteå

September 24, 2019


Acknowledgements

I want to thank my supervisor, Patrik Holmlund, for giving me valuable feedback on this report.

-Martin Sperens


ABSTRACT

Memory allocation is an important part of program optimization as well as of computer architecture. This thesis examines some of the concepts of memory allocation and tries to implement overrides for the standard new and delete functions in the C++ library, using memory pools combined with other techniques. The overrides are tested against the standard new and delete as well as against a custom memory pool with a perfect size for the allocations. The study finds that the overrides are slightly faster on a single thread but not on multiple threads. The study also finds that the biggest performance gain comes from creating custom memory pools specific to the program's needs.

Lastly, the study also lists a number of ways that the library could be improved.

SAMMANFATTNING (SWEDISH SUMMARY)

Memory allocation is an important part of program optimization as well as of computer architecture. This report examines some of the concepts of memory allocation and tries to implement overrides for the standard new and delete functions in the C++ library, using memory pools combined with other techniques. The overrides are tested against the standard new and delete functions as well as against memory pools with a perfect size for the allocations. The study shows that the overrides are slightly faster when one thread is used, but not with several. The study also shows that the biggest performance gain comes from allocating into custom memory pools that are specific to the program's needs. The study also lists several ways in which the library could be improved.


Terms and abbreviations

• RAM - Random Access Memory, also called main memory

• Secondary memory - Computer storage memory, such as a disk

• OS - Operating System

• CPU - The computer's central processing unit

• To benchmark - To test the performance limits of software or hardware

• System call - A function handled by the operating system

• Context switch - The current process is saved and another process is started or resumed


Contents

1 Background
   1.1 Program Memory
   1.2 Dynamic Memory
      1.2.1 Virtual Memory
      1.2.2 Memory Fragmentation
   1.3 Memory Allocation
      1.3.1 Data Structures For Memory Management
      1.3.2 Overriding/Hooking
      1.3.3 Guard Bytes and Memory Alignment
   1.4 Related Work
2 Implementation
   2.1 Social, Ethical and Environmental Considerations
   2.2 Debugging Tools
   2.3 Testing
   2.4 Memory Management System
      2.4.1 Memory Pools
      2.4.2 Arena
      2.4.3 Thread Pool Manager
      2.4.4 Huge Allocations
   2.5 Memory Lifetime
3 Results
4 Discussion
5 Further work
   5.1 Allocation
   5.2 Memory Alignment
   5.3 Threads
   5.4 Safety and debugging
   5.5 Data collection
   5.6 Further Testing


1 Background

Dynamic memory allocation is an important aspect of a program. Bad memory allocation may cause the program to run slower and, in some extreme cases, slow it to a grinding stop [1]. This is because dynamic memory allocations are slower than automatic allocations, and because memory becomes fragmented. Many techniques exist that improve on the default way of allocating memory.

This thesis tries to cover the aspects of dynamic memory management and also to create a naive implementation of a memory management library which incorporates some of these aspects. The library was built so that it can be extended and improved.

1.1 Program Memory

A program has three kinds of memory allocation: static, automatic and dynamic [2]. Static memory is allocated when the program starts and is fixed in size. Automatic and dynamic allocations happen during program runtime. Automatic allocation is called stack allocation because the memory which was allocated last is the first to be freed again. The program handles allocation and freeing of this memory within a set scope: when the scope ends, the program releases the memory, and that is what makes it automatic.

Dynamic memory allocation is called heap allocation because there is no order in which allocated memory must be freed. With dynamic allocation the program asks the kernel for memory, and the kernel has to make sure there is enough memory to give. Unlike automatic allocation, the memory is not freed when its scope ends; the programmer must make sure that it is freed. Dynamic memory is the bottleneck and the important part when trying to optimize performance.

Dynamic memory allocation is very slow compared to automatic allocation, but trying to completely avoid dynamic memory is often not an option, as stack memory is limited in how much can be allocated at one time. Handling large files therefore requires dynamic memory.

The biggest reason to create a custom memory allocator is the context switch that is required when dynamic memory is requested from the OS kernel [3]. Because of this, the most important aspect of memory allocation is to allocate a lot of memory at one time and then sub-allocate inside this memory, which greatly reduces allocation and freeing time. Another important reason is memory fragmentation, which can cause page and cache misses that make the program slower.

Memory allocation in C and C++ uses the malloc() and free() functions to allocate and free memory [4].

C++ additionally has the new and delete operators, which are wrappers around malloc and free that also invoke a class's constructor and destructor [5]. Overriding new and delete is a simple way to create a custom dynamic memory management system.
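
To illustrate the difference between the two pairs of functions, a minimal sketch (the class Widget is hypothetical):

    #include <cstdlib>
    #include <new>

    struct Widget {
        Widget()  {}   // constructor
        ~Widget() {}   // destructor
    };

    int main() {
        // malloc only reserves raw bytes; no constructor runs.
        Widget* a = static_cast<Widget*>(std::malloc(sizeof(Widget)));
        new (a) Widget;   // placement new runs the constructor manually
        a->~Widget();     // the destructor must also be called manually
        std::free(a);

        // new/delete combine allocation with construction and destruction.
        Widget* b = new Widget;   // allocates and constructs
        delete b;                 // destructs and frees
    }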

1.2 Dynamic Memory

Dynamic memory takes up space somewhere in RAM. This place is called the heap. The heap has a starting point and an end point, which is called the program break. When there is not enough room between start and end for an allocation, more memory is obtained by simply moving the program break forward. This is done with the sbrk function [6], a system call to the OS. As noted earlier, this takes time because a context switch to kernel space is needed. When freeing memory it is also possible to move the program break backward, if nothing is allocated in that space.
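
A minimal sketch of the program break moving, assuming a POSIX system (sbrk is a legacy interface, and modern allocators usually obtain memory with mmap instead):

    #include <unistd.h>   // sbrk (POSIX)
    #include <cstdio>

    int main() {
        void* before = sbrk(0);      // read the current program break
        sbrk(4096);                  // move the break 4096 bytes forward
        void* after = sbrk(0);
        std::printf("break moved from %p to %p\n", before, after);
        sbrk(-4096);                 // move it back; only safe here because
                                     // nothing was allocated in that space
    }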


Figure 1: Program memory

1.2.1 Virtual Memory

RAM is a limited resource. When a program asks for more memory than is available in RAM, the computer can either crash the program or remove memory from RAM. The first option is the simpler one but can be highly inconvenient, as programs can crash unexpectedly.

To combat this, virtual memory was invented. With virtual memory, when there is no more memory to allocate, some of the allocated memory is stored in secondary memory. New memory can then be allocated, and when the stored memory is needed, it is retrieved from secondary storage. This is called swapping or paging. Memory is divided into fixed-size chunks called pages, and allocated memory is stored on these pages. When allocated memory that is not currently in RAM is needed, the kernel finds which page the memory is on and swaps the whole page in. Trying to work on memory not currently in RAM is called a page miss [7].

Figure 2: Mapping virtual to physical memory, from iFixMyStuff [8]

Virtual memory makes it possible to work with a lot more memory than what is actually available in RAM. Unfortunately, this has drawbacks. Storage and retrieval between main and secondary memory takes time, and if page misses occur frequently, the computer's performance degrades and becomes very slow. This is called thrashing [1].


Because virtual memory offers more memory than the available RAM, the memory addresses given to a program are not the actual physical addresses but virtual addresses. It is possible to map the virtual addresses to the physical ones [7].

While most systems use virtual memory in some capacity, some systems do not use paging, especially embedded ones. Most embedded systems use flash memory, which only supports a finite number of writes, so paging runs the risk of wearing out the system's memory [9].

1.2.2 Memory Fragmentation

A program processes data during its execution, and this data has to be sent from RAM to the CPU. This transfer is not instant, since the CPU runs at a higher clock speed than the memory.

To minimize the impact of transferring data from RAM to the CPU, the CPU has its own high speed memory called cache. When data to be processed is not in the cache, it needs to be retrieved from RAM. This is called a cache miss. To reduce cache misses, data that is processed sequentially should be stored sequentially in memory. If it is not, the number of cache misses will be very high, which reduces the effective calculation speed of the CPU. This is referred to as memory fragmentation. Avoiding memory fragmentation is also important because of virtual memory and paging.

1.3 Memory Allocation

1.3.1 Data Structures For Memory Management

There are many strategies for allocating memory efficiently, and many of them can be combined for different tasks. For example, jemalloc uses the buddy allocation method to carve big chunks of memory into smaller pieces, each at least a page (4 KB) big. These pieces are then used by memory pools [10].

1.3.1.1 Memory Pools and Free Lists

A memory pool is an allocated chunk of memory which is divided into blocks and managed with a free list, a data structure for finding the next free block. The blocks in the list can have different sizes but often have one set size. While a free list without set sizes means that allocations of all sizes fit in the same list, it also means a lot of overhead when searching for a good spot to place an allocation. Memory pools used to override the standard allocation functions are therefore often divided into pools with different set block sizes to avoid memory fragmentation. Some sort of management is then needed to allocate memory from the right-sized pool [11] [12].
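
A minimal sketch of a fixed-size pool where the free list is threaded through the unused blocks themselves, so no extra bookkeeping memory is needed (an illustration of the general technique, not the thesis's implementation):

    #include <cstddef>
    #include <cstdlib>

    class FixedPool {
        union Block { Block* next; };
        Block* freeList = nullptr;
        void*  memory   = nullptr;
    public:
        FixedPool(std::size_t blockSize, std::size_t blockCount) {
            // Round the block size up so every block can hold, and is
            // aligned for, a pointer.
            blockSize = (blockSize + sizeof(Block*) - 1)
                        / sizeof(Block*) * sizeof(Block*);
            memory = std::malloc(blockSize * blockCount);
            auto* p = static_cast<unsigned char*>(memory);
            for (std::size_t i = 0; i < blockCount; ++i) {
                auto* b = reinterpret_cast<Block*>(p + i * blockSize);
                b->next  = freeList;      // push every block on the free list
                freeList = b;
            }
        }
        ~FixedPool() { std::free(memory); }

        void* allocate() {                // O(1): pop the first free block
            if (!freeList) return nullptr;
            Block* b = freeList;
            freeList = b->next;
            return b;
        }
        void deallocate(void* p) {        // O(1): push the block back
            auto* b = static_cast<Block*>(p);
            b->next  = freeList;
            freeList = b;
        }
    };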

1.3.1.2 Buddy Allocation

Buddy allocation [13] has an initial big block of memory which can be divided into smaller pieces. When allocating, a memory block small enough for the request is searched for. If no fitting block can be found, a larger block is divided recursively into two smaller pieces until a right-sized block is available. This means that every memory block except the top one has a corresponding block which is its buddy. When freeing memory, the freed block's buddy is examined to see whether it is also free. If it is, the blocks merge (or coalesce, as it is often called) into a bigger block. The merged block in turn checks whether its buddy is free, and blocks continue to merge until a buddy is in use or the merged block is the initial memory block.
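
What makes the scheme cheap is that a block's buddy can be computed directly from its offset; a minimal sketch of the address arithmetic (offsets are relative to the start of the initial block, and block sizes are powers of two):

    #include <cstddef>
    #include <cstdio>

    // For power-of-two block sizes, a block's buddy differs from it in
    // exactly one bit of its offset: offset XOR size.
    std::size_t buddy_offset(std::size_t offset, std::size_t blockSize) {
        return offset ^ blockSize;
    }

    int main() {
        // In a 1024-byte region split into 256-byte blocks, the block at
        // offset 256 and the block at offset 0 are buddies, and can merge
        // back into the 512-byte block at offset 0 when both are free.
        std::printf("%zu\n", buddy_offset(256, 256)); // prints 0
        std::printf("%zu\n", buddy_offset(512, 256)); // prints 768
    }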


(a) Free list with fixed block size

(b) Free list without fixed sizes. The numbers show how much space is in each block

Figure 3: Two kinds of free lists

1.3.1.3 Bookkeeping

Bookkeeping is useful to speed up the allocation process. The bookkeeping information can be stored and handled in different ways. For example, jemalloc [14] [15] uses a header as the first part of a pool (or run, as they call it). The reason to have a header is that it is better to keep relevant data packed closely together, to reduce memory fragmentation.

1.3.1.4 Threading

Memory allocation on more than one process may cause cache contention, where two processes try to process different data that are on the same cache line. This causes one process to wait for the other one to finish [16]. To solve this, different threads have different memory spaces assigned to them. These memory spaces are often called arenas. If huge objects (>2-4 MB) are allocated, then each arena could potentially take up a lot of memory. To solve this, huge objects can be allocated outside of arenas and kept track of in some other way. For example, jemalloc uses a red-black tree to keep track of huge objects [10].

1.3.2 Overriding/Hooking

The new and delete operators are easily overridden [5]: the overriding functions just have to be included in the program or a linked library. The overrides can then be used to implement a custom memory management system.
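
A minimal sketch of such overrides; here they simply forward to malloc and free, where a real system would forward to its pools instead:

    #include <cstdlib>
    #include <new>

    // Globally replacing operator new/delete routes every new and
    // delete expression in the program through these functions.
    void* operator new(std::size_t size) {
        if (size == 0) size = 1;           // new must return a unique pointer
        if (void* p = std::malloc(size))
            return p;
        throw std::bad_alloc{};
    }

    void operator delete(void* p) noexcept {
        std::free(p);
    }

    // The array forms can be overridden in the same way.
    void* operator new[](std::size_t size) { return operator new(size); }
    void operator delete[](void* p) noexcept { operator delete(p); }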


It is also possible to override malloc/free, but it is more complex than for new/delete. One way is to link a custom malloc into the program. Another is hooking, which is a way to intercept a function and insert other code before returning to it [17]. Linux has built-in standard hooks for malloc and free which can be switched out. Hooking on Windows is more complicated, but a third-party library like Detours [18] can be used.
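
On Linux, one common interception technique (not used by mmsys, and glossing over the bootstrap problem that dlsym itself may allocate) is to preload a shared library that defines malloc and free and forwards to the real libc functions, looked up with dlsym(RTLD_NEXT, ...):

    // Build:  g++ -shared -fPIC hook.cpp -o hook.so -ldl
    // Run:    LD_PRELOAD=./hook.so ./program
    #include <dlfcn.h>
    #include <cstddef>

    extern "C" void* malloc(std::size_t size) {
        // Look up the "real" libc malloc once.
        static auto real = reinterpret_cast<void* (*)(std::size_t)>(
            dlsym(RTLD_NEXT, "malloc"));
        // ...custom bookkeeping could go here...
        return real(size);
    }

    extern "C" void free(void* p) {
        static auto real = reinterpret_cast<void (*)(void*)>(
            dlsym(RTLD_NEXT, "free"));
        real(p);
    }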

1.3.3 Guard Bytes and Memory Alignment

When allocating memory, the system allocates a little more memory than is required. Some of these extra bytes may be padding to align the memory, and some are guard bytes with specific known values. The guard values are used for debugging: they can be checked on deallocation, and if they are not intact then something has gone wrong [19].
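
A minimal sketch of the idea; a real allocator would store the allocation size in a header instead of requiring the caller to pass it:

    #include <cstdlib>
    #include <cstring>
    #include <cassert>

    // Surround each allocation with known sentinel values and verify
    // them on free. A trashed sentinel means something wrote outside
    // its allocation.
    constexpr unsigned char GUARD = 0xFD;   // value is arbitrary
    constexpr std::size_t   PAD   = 4;      // guard bytes on each side

    void* guarded_alloc(std::size_t size) {
        auto* raw = static_cast<unsigned char*>(std::malloc(size + 2 * PAD));
        std::memset(raw, GUARD, PAD);                // front guard
        std::memset(raw + PAD + size, GUARD, PAD);   // back guard
        return raw + PAD;
    }

    void guarded_free(void* p, std::size_t size) {
        auto* raw = static_cast<unsigned char*>(p) - PAD;
        for (std::size_t i = 0; i < PAD; ++i) {
            assert(raw[i] == GUARD);                 // front intact?
            assert(raw[PAD + size + i] == GUARD);    // back intact?
        }
        std::free(raw);
    }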

1.4 Related Work

A lot has been written about memory allocation. There are several memory libraries that replace new and delete as well as malloc and free. jemalloc and tcmalloc [20] [21] are some of the better known ones. jemalloc is used by Mozilla and Facebook as well as by the free operating system FreeBSD [22], and tcmalloc is used by Google.


2 Implementation

The implementation was chosen to be simple, both for fast creation and testing and for ease of understanding. This was also because of the limited time span, as a lot of time was allocated to describing the background of memory allocation.

To make things clearer, the implemented memory management system is abbreviated to mmsys. Mmsys is not the whole library but only the implementation of the system that overrides new and delete.

2.1 Social, Ethical and Environmental Considerations

The danger of a memory library lies in bugs or unforeseen security flaws. These could lead to sensitive information, like customer passwords and other data, being leaked. Bugs might also lead to other damaging behaviour, like out-of-bounds accesses, which could crash the application; crashes hurt customer trust, which in turn can reduce revenue. It is therefore important to test the library thoroughly before using it in real applications.

2.2 Debugging Tools

A visualization program was created in tandem by Filip Salén during the development period. The memory library sends specific messages to the visualizer when the library is run in debug mode. The visualizer can measure the amount of memory used as well as fragmentation. Named pipes are used to send the messages on both Linux and Windows.

2.3 Testing

Benchmarking a memory allocator can be complex, because the performance of the allocator may depend on how the program allocates and how the data is structured [10]. There may be edge cases in which an allocator performs badly. Therefore the best way to test an allocator is to try it on real programs. The important factors are speed, how much memory the program allocates, and whether there are any great spikes in memory usage.

I have chosen to do simple tests which first allocate and then free a number of times with different set sizes. The tests are done with mmsys, a custom pool of perfect size, and the regular new/delete, to see how well they perform.

2.4 Memory Management System

Mmsys is designed so that threads do not use the same memory space, to avoid cache line conflicts. Therefore each thread has its own memory space assigned to it.

The thread manager is called whenever new or delete is called. The thread manager finds the memory space appropriate for the calling thread: first the thread's arena is located, and then the arena finds the suitable memory pool for allocation or freeing.

2.4.1 Memory Pools

The memory pools took Kenwright's memory pools as a starting point [12]. The good things about his pools are the simple implementation, low access time, and low fragmentation, as well as that the blocks do not need to be initialized up front. The memory pool uses a free list with a set block size which is built into the pool's own memory space: when a block is not in use, its space is instead used to point to the next free space in the pool. See figure 4.
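
A minimal sketch of this in-place free list with lazy initialization, based on Kenwright's description rather than the thesis's actual code (names are illustrative, and the block size is assumed to be at least 4 bytes and 4-byte aligned so an index fits in a free block):

    #include <cstdint>
    #include <cstdlib>

    class LazyPool {
        unsigned char* mem;
        std::size_t   blockSize, blockCount;
        std::uint32_t initialized = 0;  // how many blocks have been touched
        std::uint32_t freeHead    = 0;  // index of the first free block
        std::size_t   used        = 0;

        std::uint32_t* slot(std::uint32_t i) {
            return reinterpret_cast<std::uint32_t*>(mem + i * blockSize);
        }
    public:
        LazyPool(std::size_t bs, std::size_t n)
            : mem(static_cast<unsigned char*>(std::malloc(bs * n))),
              blockSize(bs), blockCount(n) {}
        ~LazyPool() { std::free(mem); }

        void* allocate() {
            if (used == blockCount) return nullptr;
            if (initialized < blockCount && freeHead == initialized) {
                // First visit to this block: lazily link it to the next one.
                *slot(initialized) = initialized + 1;
                ++initialized;
            }
            void* p = slot(freeHead);
            freeHead = *slot(freeHead); // pop: next free index stored in-place
            ++used;
            return p;
        }
        void deallocate(void* p) {
            auto i = static_cast<std::uint32_t>(
                (static_cast<unsigned char*>(p) - mem) / blockSize);
            *slot(i) = freeHead;        // push: store old head in the block
            freeHead = i;
            --used;
        }
    };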


A memory pool allocates memory with malloc through its pool creation function, not through its constructor. The number of blocks a pool has is declared with the creation function and cannot be changed until the pool is destroyed. Mmsys uses 16384 blocks regardless of block size.

To make sure that a pool cannot run out of memory, it has a pointer to another pool, which begins as NULL. When more memory is needed, a new pool is created with malloc. The new pool has the same block size and number of blocks as its parent. This creates a chain of memory pools.

While mmsys uses memory pools internally, the library also lets the user create memory pools with their own block size and block count.

Figure 4: Memory Pool with two used spaces

2.4.2 Arena

To avoid memory fragmentation, it is important to create pools with different sizes. The arena makes sure an object gets allocated in the right-sized pool. Mmsys' arenas have 19 pool sizes, ranging from 16 bytes to 4 megabytes. The arenas have an id, which the thread pool manager uses to match threads to arenas. Just like with the memory pools, it is possible for the user to create their own arenas which are not part of mmsys.
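
How an arena might route a request to the right-sized pool; a minimal sketch with a hypothetical, shortened size-class table (mmsys uses 19 classes from 16 bytes to 4 MB):

    #include <cstddef>

    // Hypothetical size classes, smallest to largest.
    constexpr std::size_t kClasses[] = { 16, 32, 64, 128, 256, 512, 1024 };
    constexpr int kNumClasses = sizeof(kClasses) / sizeof(kClasses[0]);

    // Return the index of the smallest pool whose block size fits the
    // request, or -1 if the request must go to the huge-allocation path.
    int pool_index(std::size_t size) {
        for (int i = 0; i < kNumClasses; ++i)
            if (size <= kClasses[i]) return i;
        return -1;
    }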

Figure 5: Arena with id 1


2.4.3 Thread Pool Manager

The thread pool manager handles calls from the overridden new and delete functions. It has a list of preallocated arenas which can be assigned to threads. To keep track of which arenas are currently in use, a slightly different free list is used: the thread pool manager has a complementing list of the same size as the arena list, whose objects keep track of which arenas are currently in use. When trying to find which arena a thread should allocate or free in, this complementing thread object list is iterated through until the thread's id is found. If the thread's id is not found, the thread takes the next arena in the free list, and the thread id and the new arena id are put into a new thread object at the end of the current list.

When removing a thread, the thread pool manager finds the thread's arena, frees its memory pools, and marks the arena as not alive. The last object in the thread object list takes the place of the object handling the removed arena: the last object's next variable is set to the current next, and the thread pool manager's head variable is set to the removed arena's index. This way, the thread pool manager always iterates through only as many objects as there are current threads. See figure 6.

A thread does not automatically remove the memory which it allocated, but this can be done quickly by calling the thread pool manager's remove-thread function at the end of the thread.

If an allocation is bigger than the largest pool size of an arena (4 MB), it is instead handled by a structure for huge objects which is not thread specific.

2.4.4 Huge Allocations

Allocations over 4 megabytes are not inserted into a pool. Instead they are put into a red-black binary search tree [23], a self-balancing tree which guarantees that one part of the tree can only be so much deeper than the other. The tree has a limited number of elements, with 128 pointers by default. An array list is used to contain the nodes, and the tree uses indexes into the array instead of pointers, with -1 as the NIL value.
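
A sketch of what such an array-backed node layout might look like (field names are illustrative, not the thesis's actual code):

    #include <cstddef>
    #include <cstdint>

    // Children and parent are indices into a fixed array rather than
    // pointers, with -1 as the NIL value.
    struct RBNode {
        void*        ptr;            // the huge allocation this node tracks
        std::size_t  size;
        std::int32_t left   = -1;    // index of left child, -1 = NIL
        std::int32_t right  = -1;    // index of right child
        std::int32_t parent = -1;
        bool         red    = true;  // new nodes start red
    };

    RBNode nodes[128];               // the default capacity described above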

2.5 Memory Lifetime

Because of the way that C++ is constructed, some structures such as file streams [5] do not end at the end of their scope but after the end of the main program's scope. The file stream will still handle memory which was allocated during the program's runtime and delete it afterwards. If the allocator frees its memory before the file stream's delete call arrives, the access is out of bounds and the program crashes. To avoid this, the memory must remain valid until all delete calls have been made. To make sure of this, the pool manager checks whether all pools in a pool chain are empty and only then frees all memory allocated by the pools.


(a) Before arena 0 is removed

(b) After arena 0 is removed

Figure 6: An arena is removed from the thread manager. The arena frees its memory pools and is set as not alive

Figure 7: Example of a red-black tree, from Wikipedia [24]


3 Results

The test program allocated objects of a certain size a set number of times and then deleted all the objects again. The test was made from a small population, where every combination of allocation type, size and count was tested three times. Tests were made on a single thread and on ten threads. As discussed in the next section, the results could vary a lot, especially when using more threads. Therefore only the results for the larger allocation sizes are shown for ten threads.

Tests were made on mmsys and the standard new and delete. To see how fast the allocations could optimally be, the tests were compared against a memory pool which had all the memory in one pool, meaning it had no overhead when allocating or freeing, making it the fastest way to allocate objects. To further differentiate the general allocators (mmsys and standard new/delete) from the optimal allocation process, the custom pool does not call delete for every allocation but simply releases all memory at one time.

Because the number of blocks in mmsys' memory pools is 16384, the maximum number of pools in a pool chain was 13, which means that malloc was called at most 25 times in the single-threaded tests: once for every pool object except the first one, and once for each memory space that the pools hold. On the tests with ten threads, malloc was called a maximum of 30 times: one call to create a pool plus two calls to create further pools, for every thread.

Figure 8: Memory management system (mmsys)


Figure 9: Standard new and delete

Figure 10: Custom pool


Figure 11: Memory management system (mmsys). Only smaller sizes. With trendlines

Figure 12: Standard new and delete. Only smaller sizes. With trendlines


Figure 13: Custom pool. Only smaller sizes. With trendlines

Figure 14: Mmsys using ten threads


Figure 15: Standard new and delete using ten threads

Figure 16: Standard new and delete using ten threads. Only 10000 bytes


4 Discussion

All of the allocation types had a large variance in performance, as can be seen in figures 8-16.

Sometimes more allocations gave a faster result than fewer ones. The variance could be quite extreme, especially when using threads, but even on a single thread the slower times could be as much as four times larger than the faster ones. The variance probably has something to do with how malloc finds free space in the OS and with how threads are created. For a more accurate picture of the allocation methods, more tests could be run, and the tests could be done on different OSes to see whether their malloc strategies differ and how this affects the result.

While all of the allocation types had quite a large variance, there is a pattern. On a single thread, mmsys performs only marginally better than the standard new/delete on the smaller sizes, and mmsys only became clearly faster when larger objects were allocated. A reason for this is that the standard new and delete make smaller requests for more memory, to avoid running out of memory. While mmsys is faster by making larger requests, it also ran out of memory on the last test.

On ten threads, the standard new/delete is faster than mmsys except for allocations of 100 000 bytes in size. A reason the standard new/delete might be faster is that mmsys has more overhead, because it must go through the thread object list for every allocation and free. While the standard new/delete is faster with threads, it is also not safe from several processors contending for the same cache line. To see whether mmsys' way of handling threads is better, tests should be made where the threads actively process the data, to see whether standard new/delete causes any cache line collisions and whether the cost of the collisions is worse than mmsys' overhead.

What is interesting is just how much the time for standard new/delete differs between allocating 10 000 bytes and 100 000 bytes on several threads. When allocating large sizes on several threads, the time grows very steeply. This is worth considering when creating programs with several threads.

As expected, the custom memory pool is always the fastest. The best performance is reached when using memory pools with exactly the amount of memory the program needs. Although this is the fastest method, it is in most cases not possible to know the exact amount of memory a program needs. A more feasible way might be to use a custom memory pool in sections of the program which are very memory intensive. Another improvement could be to modify the number of blocks used by mmsys, to reduce the number of pools in the pool chains as well as the number of memory requests.


5 Further work

Mmsys has a lot of features which could be added for better results and functionality. This section describes the most useful improvements for increasing performance and functionality.

5.1 Allocation

The pools use malloc when more memory is needed. To improve performance, memory should be allocated fewer times, in bigger chunks, and stored by the thread pool manager. This memory could be kept in free lists with page-sized blocks; when a pool asks for memory, it is given a span of these pages by the thread pool manager.

When not enough memory exists in these lists, a new list would be created and memory allocated for it. The size of these new lists would depend on the needs of the program: if the program handles small files, a couple of megabytes might be enough, but if the program handles a lot of bigger files, hundreds of megabytes could be used.

Another option is deciding how much memory each pool should initially take. The more memory a pool takes up, the fewer times the pool chain has to be walked to find where to free or allocate. But if pools take up too much memory in one go, it might be a waste of RAM. This is especially true if there are fewer allocations, with more variation in size, and a lot of threads. As all the memory pools have the same number of blocks, what might be a reasonable size for 16-byte blocks might be way too much for 4-megabyte blocks. Some rules should be added to make sure a good size is set. This could simply be deciding that all pools initially take up the same amount of memory. It should also be possible to customize the pool sizes so that each block size has its own default number of blocks.

When allocating from a pool, the arena always walks from the first pool to the last in the pool chain to find a free spot. This could be avoided by using something like jemalloc's bins, which point to a pool with free memory.

5.2 Memory Alignment

To reduce page misses it is important to allocate memory that is page aligned. The easiest way is to override malloc and use sbrk or mmap to get memory that is a multiple of the page size. If the default malloc is used, one has to take into account that malloc gives a little more memory than asked for, because it inserts guard bytes, so it might be a good idea to ask malloc for memory that is a multiple of the page size minus a few bytes. It is also important to remember that because malloc puts guard bytes before the allocated memory, the first page does not hold a full page of data but a page minus the guard bytes. The same goes for the last page.
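
As a simpler alternative to overriding malloc, the standard library can also hand out page-aligned memory directly; a minimal C++17 sketch (sysconf is POSIX):

    #include <cstdlib>
    #include <unistd.h>   // sysconf (POSIX)

    // aligned_alloc (C11/C++17) requires the size to be a multiple of
    // the alignment, so the request is rounded up to whole pages.
    void* page_aligned_alloc(std::size_t size) {
        const auto page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
        const std::size_t rounded = (size + page - 1) / page * page;
        return std::aligned_alloc(page, rounded);   // release with std::free
    }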

5.3 Threads

Background threads could be used to find and clean up after completed threads. Hashing could be used to find the right arena for a thread, instead of going through all the current threads.
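
A minimal sketch of the hashing idea using the standard library:

    #include <thread>
    #include <functional>

    // Map the current thread to one of N arenas by hashing its id,
    // instead of scanning a list of live threads.
    std::size_t arena_index(std::size_t numArenas) {
        return std::hash<std::thread::id>{}(std::this_thread::get_id())
               % numArenas;
    }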

5.4 Safety and debugging

No guard bytes are included in the allocator. It could help debugging and safety if these were implemented and checked when allocating and freeing, to see whether something has accessed the memory in an incorrect way.


5.5 Data collection

One very useful feature might be to log how much memory is used in each pool and how many allocations and frees were made in each pool. This could help the user modify the pools to suit the needs of their program.

5.6 Further Testing

Tests should be performed on real programs to find out how well mmsys performs and whether there are performance drops. Tests could also be performed on programs which other memory allocators have been tested against, to compare performance.


References

[1] Peter J. Denning. Thrashing: Its causes and prevention. AFIPS Conf. Proc., 33:915–922, 1968.

[2] Brian W. Kernighan and Dennis M. Ritchie. The C Programming Language. Prentice Hall Professional Technical Reference, 2nd edition, 1988.

[3] LINFO. Context switch definition. http://www.linfo.org/context_switch.html. Accessed: 2019-05-25.

[4] IEEE Std. The Open Group base specifications issue. http://pubs.opengroup.org/onlinepubs/9699919799/functions/contents.html. Accessed: 2019-05-27.

[5] Stanley B. Lippman, Jose Lajoie, and Barbara E. Moo. C++ Primer. Addison-Wesley Professional, 5th edition, 2012.

[6] The GNU C Library. Process memory concepts. https://www.gnu.org/software/libc/manual/html_node/Memory-Concepts.html. Accessed: 2019-05-27.

[7] Peter J. Denning. Virtual memory. In Encyclopedia of Computer Science, pages 1832–1835. John Wiley and Sons Ltd., Chichester, UK.

[8] iFixMyStuff. How virtual memory works. https://ifixmystuff.com/wp-content/uploads/2018/03/VirtualMemory.png. Accessed: 2019-05-27.

[9] Cypress. Endurance and data retention characterization of Cypress flash memory. https://www.cypress.com/file/369306/download. Accessed: 2019-05-27.

[10] Jason Evans. A scalable concurrent malloc(3) implementation for FreeBSD. April 2006.

[11] Doug Lea. A memory allocator. http://gee.cs.oswego.edu/dl/html/malloc.html. Accessed: 2019-05-23.

[12] Ben Kenwright. Fast efficient fixed-size memory pool: No loops and no overhead. 2012.

[13] Donald E. Knuth. The Art of Computer Programming, Volume 1. Addison-Wesley Professional, 1997.

[14] jemalloc. http://jemalloc.net/. Accessed: 2019-05-23.

[15] Patroklos Argyroudis and Chariton Karamitas. Exploiting the jemalloc memory allocator: Owning Firefox's heap. Black Hat USA, 2012.

[16] Emery D. Berger, Kathryn S. McKinley, Robert D. Blumofe, and Paul R. Wilson. Hoard: A scalable memory allocator for multithreaded applications. SIGOPS Oper. Syst. Rev., 34(5):117–128, November 2000.

[17] Microsoft Developer Network. About hooks. https://docs.microsoft.com/sv-se/windows/desktop/winmsg/about-hooks. Accessed: 2019-05-23.

[18] Microsoft Research. Detours. https://www.microsoft.com/en-us/research/project/detours/?from=http%3A%2F%2Fresearch.microsoft.com%2Fsn%2Fdetours. Accessed: 2019-05-27.

[19] Andrew Suffield. Bounds checking for C and C++. 2019.

[20] Aliaksey Kandratsenka et al. tcmalloc. https://github.com/gperftools/gperftools, 2019.

[21] Sanjay Ghemawat and Paul Menage. TCMalloc: Thread-caching malloc. http://goog-perftools.sourceforge.net/doc/tcmalloc.html. Accessed: 2019-05-29.

[22] The FreeBSD Foundation. The FreeBSD project. https://www.freebsd.org/. Accessed: 2019-05-27.

[23] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 3rd edition, 2009.

[24] Wikipedia, the free encyclopedia. An example of a red–black tree. https://en.wikipedia.org/wiki/Red%E2%80%93black_tree#/media/File:Red-black_tree_example.svg, 2006. Accessed: 2019-05-27.
