Implementing malloc() and free() — splitting large blocks

When implementing malloc(), it is important to consider the size of the allocated blocks. For example, reusing a block that is much bigger than the requested size wastes memory. How can we solve that? Let’s see in this post of our series on malloc() and free().

In the previous post of this series, we saw how the order in which we choose memory blocks to reuse can lead to greater or lesser memory consumption, and we changed our functions to avoid this waste. But we need to solve another, even more serious, problem: sometimes, a very large memory block can occupy the space that several smaller blocks could use. Consider the case below, where we allocate a large chunk of memory, deallocate it, and then allocate two much smaller blocks:

void *ptr1 = abmalloc(128);
void *ptr2 = abmalloc(8);
abfree(ptr1);
void *ptr3 = abmalloc(8);
void *ptr4 = abmalloc(8);

Here, we have a free 128-byte memory block, and when we allocate a block of just 8 bytes, all 128 bytes become unavailable. When we allocate another 8-byte block, the heap needs to grow again. This is not an efficient use of memory.

There are at least two popular solutions to this problem. One is to use bins: lists that group free blocks by size. This approach is more sophisticated and efficient, but also more complex. A simpler option is to find a large block and split it into smaller ones. We’ll follow this second approach.

But remember: simpler doesn’t exactly mean simple 😉

Initial Refactoring

Before we begin, let’s do a small refactoring. Currently, the header_new() function does two things: it allocates more memory for a new block and initializes its header, setting the metadata and pointers to the previous block. The part of initializing the header might be useful, so let’s extract it. We’ll create two new functions to improve readability:

  • The header_plug() function, which “plugs” the initialized block to the previous and next blocks.
  • The header_init() function, which sets the initial values of the block’s metadata (size and availability).

Here’s how they look:

void header_init(Header *header, size_t size, bool available) {
    header->size = size;
    header->available = available;
}

void header_plug(Header *header, Header *previous, Header *next) {
    header->previous = previous;
    if (previous != NULL) {
        previous->next = header;
    }
    header->next = next;
    if (next != NULL) {
        next->previous = header;
    }
}

Now, we just need to modify header_new() to use these new functions:

Header *header_new(Header *previous, size_t size, bool available) {
    Header *header = sbrk(sizeof(Header) + size);
    header_init(header, size, available);
    header_plug(header, previous, NULL);
    return header;
}

(Additionally, we can remove the line last->previous->next = last; from the abmalloc() function, since header_plug() now takes care of that.)

Splitting Blocks

With these tools in hand, let’s create the header_split() function. Given a header and a minimum required size, this function splits the memory block into two if the original block is large enough to contain

  • the required size,
  • a new header for the new block, and
  • a bit of extra memory.

First, we check if the block is large enough:

Header *header_split(Header *header, size_t size) {
    size_t original_size = header->size;
    if (original_size >= size + sizeof(Header)) {

If this condition is met, we split the block. First, we reduce the size of the current block by subtracting the size of a header and the space requested by abmalloc:

header->size = original_size - size - sizeof(Header);

This leaves a memory space after the current block, which we’ll use to create the new block. We calculate the pointer to this new block, casting to char * so the arithmetic is done in bytes rather than in units of sizeof(Header):

Header *new_header = (Header *) ((char *) header + sizeof(Header) + header->size);

Now that we have the pointer to the new block, we initialize its header with header_init():

header_init(new_header, size, true);

And we connect the new block to the previous and next blocks using header_plug():

header_plug(new_header, header, header->next);

If the original block was the last one, the new block will now be the last, so we update the last pointer:

if (header == last) {
    last = new_header;
}

Finally, we return the new block:

return new_header;

If the original block is not large enough, we simply return the original block:

} else {
    return header;
}
}
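
Putting the fragments together, the whole function looks roughly like this (a sketch assembled from the snippets above; it assumes the header_init(), header_plug() and last definitions shown earlier):

/* Splits the block if it can hold the requested size plus a new header;
   otherwise returns the original block unchanged. */
Header *header_split(Header *header, size_t size) {
    size_t original_size = header->size;
    if (original_size >= size + sizeof(Header)) {
        /* Shrink the current block... */
        header->size = original_size - size - sizeof(Header);
        /* ...and create the new block right after its (reduced) data area. */
        Header *new_header =
            (Header *) ((char *) header + sizeof(Header) + header->size);
        header_init(new_header, size, true);
        header_plug(new_header, header, header->next);
        if (header == last) {
            last = new_header;
        }
        return new_header;
    } else {
        return header;
    }
}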

Updating abmalloc()

Now, we just need to go back to the abmalloc() function, and in the place where we find a usable block, we invoke header_split() to try to split it:

if (header->available && (header->size >= size)) {
    header = header_split(header, size);
    header->available = false;
    return header + 1;
}

If the block can be split, the new block will be returned. Otherwise, the original block will be kept and returned as before.
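
For reference, here is a sketch of how abmalloc() looks after these changes, based on the version from the previous post, with the header_split() call added and the explicit last->previous->next assignment removed (header_plug(), called by header_new(), now takes care of that link):

void *abmalloc(size_t size) {
  if (size == 0) {
    return NULL;
  }
  if (first == NULL) {
    first = last = header_new(NULL, size, false);
    return first + 1;
  }
  /* Search from the oldest block for one we can reuse (and maybe split). */
  Header *header = first;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header = header_split(header, size);
      header->available = false;
      return header + 1;
    }
    header = header->next;
  }
  /* No reusable block found: grow the heap. */
  last = header_new(last, size, false);
  return last + 1;
}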

Note on Block Splitting

Notice that we created the new block at the end of the original block. We could have created it at the beginning, but by creating the new used block at the end, the new free block stays closer to older blocks. This way, it will be found first the next time abmalloc() is invoked.
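
With splitting in place, the scenario from the beginning of the post no longer wastes the large block. The check below is only illustrative (it assumes the allocator above is compiled in the same file, and the exact layout depends on sizeof(Header)): both small allocations should now land inside the freed 128-byte region.

#include <assert.h>

int main(void) {
    void *ptr1 = abmalloc(128);
    void *ptr2 = abmalloc(8);
    abfree(ptr1);
    void *ptr3 = abmalloc(8);   /* carved out of the freed 128-byte block */
    void *ptr4 = abmalloc(8);   /* carved out of what remains of it       */
    assert((char *) ptr3 >= (char *) ptr1 &&
           (char *) ptr3 < (char *) ptr1 + 128);
    assert((char *) ptr4 >= (char *) ptr1 &&
           (char *) ptr4 < (char *) ptr1 + 128);
    (void) ptr2;
    return 0;
}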

Splitting large memory blocks is a step forward, but there’s an opposite problem: small memory blocks can cause fragmentation, making larger requests cause the heap to grow. We’ll see how to solve this in the next post.

Implementing malloc() and free() — old memory reused first

Suppose you have a linked list of memory blocks that can be reused. Should you search for a block to reuse from the beginning or from the end of the list? In this post, we give the answer, explain why, and show how to implement it.

In the previous post in this series on implementing malloc() and free(), we showed how it is possible to reuse memory blocks and to reduce the heap by freeing newer blocks. However, the current function has a subtle issue: it prioritizes reusing newer blocks, which can lead to increased memory consumption over time. Why does this happen? Let’s break it down.

Heap reduction by reusing recent blocks

Consider the following scenario. First, we allocate four memory blocks:

void *ptr1 = abmalloc(8); 
void *ptr2 = abmalloc(8);
void *ptr3 = abmalloc(8);
void *ptr4 = abmalloc(8);

The memory structure can be visualized like this:

Linked list with four nodes. Newer nodes point to older ones. All nodes are in use, so they are all labeled In use.

Now, we release the first and third blocks…

abfree(ptr1); 
abfree(ptr3);

…resulting in the following structure:

Linked list with four nodes. The newest nodes point to the oldest ones. The second and fourth nodes in the list (i.e. the second newest and oldest of all) have been freed with free(), so they are labeled with the word Available.

Then we allocate another block of the same size:

void *ptr5 = abmalloc(8);

Since abmalloc() starts searching from the most recent block, it reuses the newest available block (the third one we allocated):

Linked list with four nodes. Newer nodes point to older ones. Now only the last node is free and labeled with the word Free

If we now release the last block…

abfree(ptr4);

…we can reduce the heap size by just one 8-byte block, since the previous block is no longer free:

Linked list with three nodes. Newer nodes point to older ones. The last node is free and labeled with the word Free.

Reuse of old blocks

Now, imagine the same scenario, but with one modification: our function starts searching for free blocks from the oldest one. The initial structure will be the same…

Linked list with four nodes. Older nodes point to newer ones. All nodes are in use and labeled with the word In use.

…and again we free the first and third memory blocks:

Linked list with four nodes. Older nodes point to newer ones. The first and third nodes (i.e., the oldest and the second newest) have been deallocated, so they are labeled with the word Available

This time, the first block will be reused:

Linked list with four nodes. Older nodes point to newer ones. Only the third node (i.e., the second newest) is still deallocated. Hence, it is labeled with the word Free

Now, when we free the last block, we will have two free blocks at the top, allowing us to reduce the heap by two 8-byte blocks:

Linked list with two nodes. We freed the two newest nodes, and since we reused the oldest one, we now have fewer nodes than in the original case.

This example illustrates how, by giving preference to newer blocks, we end up accumulating old unused blocks, wasting memory and leading to unnecessary heap growth. The solution is to modify the search strategy, prioritizing the reuse of older blocks.

Implementing preference for old blocks

To solve this problem, we will start by adding a pointer to the next block in the header. We will also create a global pointer to the first block, so we can start the search from it:

typedef struct Header {
  struct Header *previous, *next;
  size_t size;
  bool available;
} Header;

Header *first = NULL;
Header *last = NULL;

We will create memory blocks with headers in two different situations, so let’s make a small refactoring: we will extract this logic to a helper function that allocates and initializes the header (including setting the next field to NULL):

Header *header_new(Header *previous, size_t size, bool available) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = previous;
  header->next = NULL;
  header->size = size;
  header->available = available;
  return header;
}

With this new function, we can simplify the logic within abmalloc():

void *abmalloc(size_t size) {
  if (size == 0) {
    return NULL;
  }
  Header *header = last;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->previous;
  }
  last = header_new(last, size, false);
  return last + 1;
}

Now we have access to the first and last blocks, and given a block, we can find out the previous and next ones. We also know that when the pointer to the first block is null, no blocks have been allocated yet. So in this case, we will allocate the block immediately, and initialize both first and last:

void *abmalloc(size_t size) {
  if (size == 0) {
    return NULL;
  }
  if (first == NULL) {
    first = last = header_new(NULL, size, false);
    return first + 1;
  }

If first is no longer NULL, there are already allocated blocks, so we will start searching for a reusable block. We will continue using the variable header as an iterator, but instead of starting with the most recent block, the search will start from the oldest:

  Header *header = first;

At each iteration, we will advance to the next block in the sequence, instead of going backwards to the previous block:

  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->next;
  }

The logic remains the same: if we find an available block of sufficient size, it is returned. Otherwise, if no reusable block is found after we traverse the list, a new block is allocated:

  last = header_new(last, size, false);

Now, we need to adjust the block that was the last one (after the allocation, the second to last). It pointed to NULL, but now it should point to the new block. To do this, we set the previous block’s next field to the new last block:

  last->previous->next = last;
  return last + 1;
}

Adjustments in the abfree() function

The function abfree() basically maintains the same structure, but now we must handle some edge cases. When we free blocks at the top of the heap, a new block becomes the last one, as we already do in this snippet:

    last = header->previous;
    brk(header);

Here, the pointer header references the oldest block in the run of free blocks being released from the top of the heap. We have two possible scenarios:

  1. The current block has a previous block, which will become the new last block. In this case, we should set that block’s next pointer to NULL.
  2. The current block does not have a previous block (i.e., it is the first and oldest block). When it is freed, the heap becomes empty. In this case, instead of trying to update a field of a non-existent block, we simply set the variable first to NULL, indicating that there are no more allocated blocks.

Here is how we implement it:

    last = header->previous;
    if (last != NULL) {
      last->next = NULL;
    } else {
      first = NULL;
    }
    brk(header);

Conclusion

Our functions abmalloc() and abfree() now look like this:

typedef struct Header {
  struct Header *previous, *next;
  size_t size;
  bool available;
} Header;

Header *first = NULL;
Header *last = NULL;

Header *header_new(Header *previous, size_t size, bool available) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = previous;
  header->next = NULL;
  header->size = size;
  header->available = available;
  return header;
}

void *abmalloc(size_t size) {
  if (size == 0) {
    return NULL;
  }
  if (first == NULL) {
    first = last = header_new(NULL, size, false);
    return first + 1;
  }
  Header *header = first;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->next;
  }
  last = header_new(last, size, false);
  last->previous->next = last;
  return last + 1;
}

void abfree(void *ptr) {
  if (ptr == NULL) {
    return;
  }
  Header *header = (Header*) ptr - 1;
  if (header == last) {
    while ((header->previous != NULL) && header->previous->available) {
      header = header->previous;
    }
    last = header->previous;
    if (last != NULL) {
      last->next = NULL;
    } else {
      first = NULL;
    }
    brk(header);
  } else {
    header->available = true;
  }
}
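
As a quick illustrative check (not part of the implementation above; it assumes these functions are compiled in the same file), the scenario from the beginning of the post now reuses the oldest free block:

#include <assert.h>

int main(void) {
    void *ptr1 = abmalloc(8);
    void *ptr2 = abmalloc(8);
    void *ptr3 = abmalloc(8);
    void *ptr4 = abmalloc(8);
    abfree(ptr1);
    abfree(ptr3);
    void *ptr5 = abmalloc(8);
    assert(ptr5 == ptr1);   /* the oldest available block was chosen */
    (void) ptr2;
    (void) ptr4;
    return 0;
}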

This change allows us to save considerably more memory. There are, however, still problems to solve. For example, consider the following scenario: we request the allocation of an 8-byte memory block, and abmalloc() reuses a block of, say, 1024 bytes. There is clearly waste.

We will see how to solve this in the next post.

(This post is a translation of Implementando malloc() e free() — memória antiga tem preferência, first published in Suspensão de Descrença.)

Error Handling in C with goto

In higher-level languages, the standard approach to handling errors is exceptions. C, however, does not have such a feature. Even so, there is a pattern that simplifies handling scenarios where many operations have to be undone. And it uses the ever-controversial goto command…

Recently, a discussion started on the Python Brasil mailing list about the reasons for using exceptions. At one point, a notably competent participant commented on how difficult it is to handle errors through function returns, as in C.

When you have a complex algorithm, each operation that can fail requires a series of ifs to check if the operation was successful. If the operation fails, you need to revert all previous operations to exit the algorithm without altering the program’s state.

Let’s look at an example. Suppose I have the following struct to represent arrays:

typedef struct {
    int size;
    int *array;
} array_t;

Now, I’m going to write a function that reads from a text file the number of elements to be placed in one of these arrays and then reads the elements themselves. This function will also allocate the array struct and the array itself. The problem is that this function is quite prone to errors, as we might fail to:

  • Open the given file;
  • Allocate the struct;
  • Read the number of elements from the given file, either due to input/output error or end of file;
  • Allocate memory to store the elements to be read;
  • Read one of the elements, either due to input/output error or end of file.

Complicated, right? Note that if we manage to open the file but fail to allocate the struct, we have to close the file; if we manage to open the file and allocate the struct but fail to read the number of elements from the file, we have to deallocate the struct and close the file; and so on. Thus, if we check all errors and adopt the tradition of returning NULL in case of an error, our function would look something like this:

array_t *readarray(const char *filename) {
    FILE *file;
    array_t *array;
    int i;

    file = fopen(filename, "r");
    if (file == NULL) return NULL;

    array = malloc(sizeof(array_t));
    if (array == NULL) {
        fclose(file);
        return NULL;
    }

    if (fscanf(file, "%d", &(array->size)) == EOF) {
        free(array);
        fclose(file);
        return NULL;
    }

    array->array = malloc(sizeof(int) * array->size);
    if (array->array == NULL) {
        free(array);
        fclose(file);
        return NULL;
    }

    for (i = 0; i < array->size; i++) {
        if (fscanf(file, "%d", array->array + i) == EOF) {
            free(array->array);
            free(array);
            fclose(file);
            return NULL;
        }
    }
    return array;
}

Indeed, quite laborious, and with a lot of repeated code…

Note, however, that there are two situations in the code above.

  1. In one, when I have two operations to revert, I need to revert the last executed one first, and then the previous one. For example, when deallocating both the struct and the integer array, I need to deallocate the integer array first and then the struct. If I deallocate the struct first, I may not be able to deallocate the array later.
  2. In the other situation, the order doesn’t matter. For example, if I am going to deallocate the struct and close the file, it doesn’t matter in which order I do it. This implies that I can also revert the last executed operation first and then the first operation.

What’s the point of this? Well, in practice, I’ve never seen a situation where I have to revert the first executed operation first, then the second, and so on. This means that, when performing the operations a(), b(), c(), etc., the “natural” way to revert them is to call the revert functions in reverse order, something like:

a();
b();
c();
/* ... */
revert_c();
revert_b();
revert_a();

Now comes the trick. In the code above, after each operation, we’ll place an if to check if it failed or not. If it failed, a goto will be executed to the revert function of the last successful operation:

a();
if (failed_a()) goto FAILED_A;
b();
if (failed_b()) goto FAILED_B;
c();
if (failed_c()) goto FAILED_C;
/* ... */
revert_c();
FAILED_C:
revert_b();
FAILED_B:
revert_a();
FAILED_A:
return;

If a() fails, the algorithm returns; if b() fails, the algorithm goes to FAILED_B:, reverts a() and returns; if c() fails, the algorithm goes to FAILED_C, reverts b(), reverts a(), and returns. Can you see the pattern?

If we apply this pattern to our readarray() function, the result will be something like:

array_t *readarray(const char *filename) {
    FILE *file;
    array_t *array;
    int i;

    file = fopen(filename, "r");
    if (file == NULL) goto FILE_ERROR;

    array = malloc(sizeof(array_t));
    if (array == NULL) goto ARRAY_ALLOC_ERROR;

    if (fscanf(file, "%d", &(array->size)) == EOF)
        goto SIZE_READ_ERROR;

    array->array = malloc(sizeof(int) * array->size);
    if (array->array == NULL) goto ARRAY_ARRAY_ALLOC_ERROR;

    for (i = 0; i < array->size; i++) {
        if (fscanf(file, "%d", array->array + i) == EOF)
            goto ARRAY_CONTENT_READ_ERROR;
    }
    return array;

    ARRAY_CONTENT_READ_ERROR:
    free(array->array);
    ARRAY_ARRAY_ALLOC_ERROR:
    SIZE_READ_ERROR:
    free(array);
    ARRAY_ALLOC_ERROR:
    fclose(file);
    FILE_ERROR:
    return NULL;
}

What are the advantages of this pattern? Well, it reduces the repetition of operation reversal code and separates the error handling code from the function logic. In fact, although I think exceptions are the best modern error handling method, for local error handling (within the function itself), I find this method much more practical.

(This post is a translation of Tratamento de erros em C com goto, originally published in Suspensão de Descrença.)

Implementing malloc() and free() — reducing the heap even more

In our journey implementing malloc() and free(), we learned to reuse memory blocks. Today, we will make a very simple optimization: reduce the heap size as much as possible.

This post is part of a series on implementing the malloc() and free() functions. In the previous article, we learned how to reuse memory blocks. It was a significant advancement, but there’s much more room for improvement.

One example is reducing the size of the heap, as explained in the first post. When we free the last memory block, we move the top of the heap to the end of the previous block. However, this previous block might also be free, as well as others. Consider the scenario below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr1);
abfree(ptr2);

In this case, when we free the block pointed to by ptr2, we make ptr1 the last block. However, ptr1 is also free, so we could further reduce the heap size.

To achieve this, we’ll iterate over the pointers from the end of the list until there are no more free blocks. If the header of the received pointer points to the last block and the previous block is free, we move the header pointer to it. We repeat this process until we reach an available block whose previous block is in use (or NULL if it’s the first block). Then, we execute the heap reduction procedure:

if (header == last) {
  while ((header->previous != NULL) && header->previous->available) {
    header = header->previous;
  }
  last = header->previous;
  brk(header);
} else {

Now, though, we need to fix a bug in abfree(). According to the specification, the free() function should accept a null pointer and do nothing. However, if abfree() receives NULL, we will have a segmentation fault! Fortunately, it is easy to fix by adding a check at the beginning of the function:

void abfree(void *ptr) {
   if (ptr == NULL) {
     return;
   }
   Header *header = (Header*) ptr - 1;

So, here’s our abfree() function at the moment:

void abfree(void *ptr) {
   if (ptr == NULL) {
     return;
   }
   Header *header = (Header*) ptr - 1;
   if (header == last) {
     while ((header->previous != NULL) && header->previous->available) {
       header = header->previous;
     }
     last = header->previous;
     brk(header);
   } else {
     header->available = true;
   }
 }
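
As an illustrative check (assuming the abmalloc() and abfree() versions from this series are compiled in the same file), after both frees the program break should be back where it started:

#include <assert.h>
#include <unistd.h>

int main(void) {
    void *initial_break = sbrk(0);     /* current top of the heap */
    void *ptr1 = abmalloc(8);
    void *ptr2 = abmalloc(8);
    abfree(ptr1);                      /* not the last block: only marked available */
    abfree(ptr2);                      /* last block: walks back over both and calls brk() */
    assert(sbrk(0) == initial_break);  /* the heap shrank back to its initial size */
    return 0;
}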

Reducing the size of the heap is a simple optimization, but there are still challenges ahead. For example, the way we choose blocks to reuse can lead to larger heaps than necessary. We will see why, and how to solve that, in the next post.

(This post is a translation of Implementando malloc() e free() — reduzindo ainda mais o heap, first published in Suspensão de Descrença.)

Implementing malloc() and free() — reusing memory blocks

Dynamic memory allocation is of no use if we cannot reuse freed memory, right? Proceeding with our implementation, we will make our malloc() function use freed blocks of memory when possible!

This post is part of a series on how to implement the malloc() and free() functions. In a previous article, we changed our functions to free up some memory blocks. However, this only occurred if the blocks were deallocated from newest to oldest.

This wouldn’t make much difference. Dynamically allocated memory rarely behaves like a stack, where the newest block is always deallocated first. The big advantage of dynamic memory allocation, after all, is that it doesn’t work like a stack.

To understand the limitations of our implementation, consider the code below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr1);
void *ptr3 = abmalloc(8);

In the first line, we allocate eight bytes, and free them in the third line. In the last line, we allocate eight bytes again. However, we cannot reuse the freed memory. To truly save memory, we need a more sophisticated solution.

One option is to reuse free blocks. To do this, we add a Boolean field to the block header, called available, which will indicate whether the block is free. As a block can only be reused if the memory requested by abmalloc() is less than or equal to that available in the block, we also need a field in the header indicating the size of the block, which we will call size.

typedef struct Header {
  struct Header *previous;
  size_t size;
  bool available;
} Header;

When the block is allocated, the value of the available field must be false (since the block is not available). We also record the block size in the size field:

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = last;
  header->size = size;
  header->available = false;
  last = header;
  return last + 1;
}

We have the information in the header, but we are not yet reusing deallocated memory. To reuse the available blocks, we need to find them! The algorithm for this is very simple: abmalloc() will iterate over the blocks, starting from the last one until reaching the first. Since the previous pointer of the first block is always NULL, we stop when we find such a value:

void *abmalloc(size_t size) {
   Header *header = last;
   while (header != NULL) {
     header = header->previous;
   }

In each iteration, we check whether the block is available and has an acceptable size. If in the middle of this process we find an available block greater than or equal to what we need, we got lucky! Just mark the block as unavailable, and return it.

void *abmalloc(size_t size) {
  Header *header = last;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->previous;
  }

What if we don’t find a block that satisfies these conditions? In this case, the abmalloc() function increases the heap, as it used to do:

void *abmalloc(size_t size) {
  Header *header = last;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->previous;
  }
  header = sbrk(sizeof(Header) + size);
  header->previous = last;
  header->size = size;
  header->available = false;
  last = header;
  return last + 1;
}

When it comes to deallocating, we have two possible situations. If the block deallocated by abfree() is the last one, nothing changes: we move the top of the heap to the beginning of the block and update the last pointer. But what if the block is not at the top of the heap? We simply mark it as available, as can be seen in the else clause of the function below:

void abfree(void *ptr) {
   Header *header = (Header*) ptr - 1;
   if (header == last) {
     last = header->previous;
     brk(header);
   } else {
     header->available = true;
   }
 }
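
A quick illustrative check (assuming the functions above are compiled in the same file): the third allocation should land exactly in the block released by the first one.

#include <assert.h>

int main(void) {
    void *ptr1 = abmalloc(8);
    void *ptr2 = abmalloc(8);
    abfree(ptr1);
    void *ptr3 = abmalloc(8);
    assert(ptr1 == ptr3);   /* the freed 8-byte block was reused */
    (void) ptr2;
    return 0;
}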

Reusing blocks of memory is a huge advance. However, we can be even more efficient in memory usage. For example, we only reduce the heap size if we deallocate the last block. If there are more unused blocks right before it, we could free them too. We will see how to do this in the next post.

(This post is a translation of Implementando malloc() e free() — reutilizando blocos de memória, originally published in Suspensão de Descrença.)

Implementing malloc() and free() — adding metadata to the memory blocks

When malloc() reserves blocks of memory, it needs to somehow be able to unreserve them later, when free() is called. We fell short of any real solution for this in our last post. In this post, though, we take the first, most fundamental steps toward real memory efficiency in our implementations of malloc() and free()!

This post is part of a series on implementing the malloc() and free() functions. Previously, we implemented a rather simplistic approach that almost doesn’t free any memory: a pointer points to the last allocated block, enabling free() to deallocate it, but only it.

A better option is to make the last block point to the second-to-last, the second-to-last block to the third-to-last, and so on, forming a linked list. To achieve this, we create a struct that will serve as the header of the blocks, containing a pointer to the previous block:

typedef struct Header {
  struct Header *previous;
} Header;

Additionally, the pointer to the last block, which used to be void*, is now of type Header*:

Header *last = NULL;

To use these headers, abmalloc() reserves enough memory to store both the header and the requested size:

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);

In this way, we use the beginning of the block to store necessary information, such as a pointer to the last allocated block before the new one:

  header->previous = last;

Then, we update last to point to the new block:

  last = header;

Finally, we return a pointer to the memory that the user can use. Since header points to the metadata, we cannot simply return it. Otherwise, all header information would be overwritten when the user used the pointer! Instead, we return a pointer to just after the header. This pointer is easy to calculate: it is the memory address of the header plus the size of the header:

  return header + 1;
}

Note how we increment the header pointer by 1. Since the pointer type is Header*, the increment is actually the number of bytes of the Header struct, not just one byte. The type of the pointer is very relevant in pointer arithmetic.
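
Here is a tiny standalone example of this arithmetic (illustrative only; the struct below just stands in for our header):

#include <stdio.h>

typedef struct Header {
    struct Header *previous;
} Header;

int main(void) {
    Header blocks[2];
    Header *header = &blocks[0];
    /* header + 1 advances by sizeof(Header) bytes, not by one byte. */
    printf("sizeof(Header): %zu bytes\n", sizeof(Header));
    printf("header + 1 is %td bytes after header\n",
           (char *) (header + 1) - (char *) header);
    return 0;
}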

Now that our memory blocks have metadata at the beginning, we need to take this into account when deallocating. free() receives a pointer not to the start of the block but to the memory made available to the user. Therefore, we need to find the start of the block from the pointer the user passed. Nothing that a little pointer arithmetic can’t solve:

void abfree(void *ptr) {
  Header *header = (Header*) ptr - 1;

If header points to the last allocated block, the previous block will become the last. In this case, we can return memory from the heap to the operating system through brk():

  if (header == last) {
    last = header->previous;
    brk(header);
  }
}

Here are our new malloc() and free() functions:

typedef struct Header {
  struct Header *previous;
} Header;

Header *last = NULL;

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = last;
  last = header;
  return header + 1;
}

void abfree(void *ptr) {
  Header *header = (Header*) ptr - 1;
  if (header == last) {
    last = header->previous;
    brk(header);
  }
}

abmalloc() and abfree() may be slightly more memory-efficient now, but not by much. Dynamically allocated memory rarely behaves like a stack, where the oldest block is always deallocated first. In the next post, we will see how to use the memory of older blocks that are no longer in use.

(This post is a translation of Implementando malloc() e free() — adicionando metadados aos blocos de memória, from Suspensão de Descrença.)

Implementing malloc() and free() — first steps

Following the wonderful journey that is reading Crafting Interpreters, I reached the point where we implemented an interpreter in C! As always, Bob Nystrom mercilessly proposes very interesting challenges that keep us busy for long periods. For instance, in this chapter, he suggests implementing our own memory allocator, without any real need! Inevitably, I was nerdsniped.

The challenge allows us to allocate a large memory region with an existing malloc() function and manage it, but I decided to implement the malloc() from scratch. Since I use Ubuntu, it was necessary to first understand the memory layout of a process on Linux better.

Consider the diagram below, which represents the memory layout of a process.

In the memory allocated for the process, there are various sections. When the program starts its execution, the shaded part is not yet in use. Throughout its execution, the program declares local variables, causing the stack to grow backward.

On the other hand, dynamically allocated memory is obtained from the heap, which grows in the opposite direction. The popular way to expand the heap is by increasing the size of the data segment (i.e., the section that contains global and static variables) with the sbrk() system call.

Diagram representing how sbrk() works, by increasing the data segment pointer but returning the old value.

The above diagram illustrates how this system call works. sbrk() takes an integer parameter, which is added to the pointer indicating the end of the data segment. After that, sbrk() returns the value that pointer had before the increment.
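
Here is a minimal experiment with this behavior (illustrative; sbrk(0) returns the current break without moving it):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    void *old_break = sbrk(0);   /* where the data segment currently ends */
    void *block = sbrk(16);      /* grow it by 16 bytes */
    void *new_break = sbrk(0);
    printf("previous break: %p\n", old_break);
    printf("sbrk returned : %p\n", block);      /* same as the previous break */
    printf("new break     : %p\n", new_break);  /* 16 bytes further */
    return 0;
}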

In a way, the behavior of sbrk() is already sufficient for memory allocation. Our malloc() function can simply invoke sbrk() and return to the user the pointer to the beginning of the allocated memory block:

void *abmalloc(size_t size) {
   return sbrk(size);
}

In principle, free() doesn’t need to do anything: since in this implementation we always take memory from the top of the heap, there is nothing we can do to reuse older memory blocks. In that sense, free() can perfectly well be a no-op:

void abfree(void *ptr) {
}

A useful operation can be done, however, if the block to be freed is the last one allocated. In that case, it sits at the top of the heap, so we just need to move the program break back with the brk() system call. This syscall takes a pointer as a parameter and, if the pointer is a “reasonable” value (not null, does not point into the stack, does not point before the heap), it uses the pointer’s value as the new top of the heap. The result would be something like this:

void abfree(void *ptr) {
  if (ptr == last_block) {
      brk(last_block);
  }
}

This deallocation, however, is practically useless. Consider the example below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr2);
abfree(ptr1);

With the current version of abfree(), we can free the memory pointed to by ptr1, but not the one pointed to by ptr2. To be able to free ptr2, it would be necessary to know that, once ptr1 has been deallocated, the next last block is ptr2. Could we create a second_last_block variable? It wouldn’t help: we would have the same problem with the penultimate block, and so on.

We need a more powerful data structure here, and that’s what we’ll see in our next post.

(This post is a translation of Implementando malloc() e free() — primeiros passos, originally published in Suspensão de Descrença.)

Test utilities, or set-up methods considered harmful

One of the most interesting things I learned at the old SEA Tecnologia was the creation of test utilities.

Test utilities are a way to reuse code in unit tests. Usually, this is done with setUp or @Before methods, but these have some disadvantages. For example, in a test case, we can have the following initialization:

private Address address;
private AddressDAO addressDAO;

@Before
public void setUp() {
    address = new Address();
    address.setStreet("Rua fulano");
    address.setNumber("123/A");
    addressDAO = new AddressDAO();
}

This initialization works well in the test below…

@Test
public void testGetAllAddresses(){
    addressDAO.addAddress(address);

    List<Address> addresses = addressDAO.getAllAddresses();

    assertEquals(1, addresses.size());
    assertEquals("Rua fulano", addresses.get(0).getStreet());
    assertEquals("123/A", addresses.get(0).getNumber());
}

However, in the following test, the created object is wasted; it is not used at all:

@Test
public void testGetNoAddress() {
    List<Address> addresses = addressDAO.getAllAddresses();

    assertEquals(0, addresses.size());
}

In the next test, we have code redundancy. We also have to decide whether the second object should be created in the @Before method or in the test method:

@Test
public void testGetAllAddressesMoreThanOne() {
    addressDAO.addAddress(address);
    Address address2 = new Address();
    address2.setStreet("Outra rua");
    address2.setNumber("111");
    addressDAO.addAddress(address2);
    List<Address> addresses = addressDAO.getAllAddresses(); 
    assertEquals(2, addresses.size());
    assertEquals("Rua fulano", addresses.get(0).getStreet());
    assertEquals("123/A", addresses.get(0).getNumber()); 
}

These inconveniences are minor compared to the task of creating a whole network of dependencies. For example, to test, in another test case, a Person class that contains an Address, we will need a @Before similar to this:

private Person person;
private Address address;
private PersonDAO personDAO;

@Before     
public void setUp() {
    address = new Address();
    address.setStreet("Rua fulano");
    address.setNumber("123/A");
    person = new Person();
    person.setName("João");
    person.setAddress(address);
    personDAO = new PersonDAO();
}

The code for creating addresses was duplicated, and it is difficult to create the dependencies. In these examples, we see simple cases, but it is easy to see how the situation would get complicated.

We solved this problem by creating a class to build these objects. This class looked something like this:

public class TestUtil {
    public static Address utilCreateAddress(String street, String number) {
        Address address = new Address();
        address.setStreet(street);
        address.setNumber(number);
        return address;
    }

    public static Person utilCreatePerson(String name, Address address) {
        Person person = new Person();
        person.setName(name);
        person.setAddress(address);
        return person;
    }
}

Our test cases extended TestUtil, making object creation easier:

public class TestAddress2 extends TestUtil {
    private AddressDAO addressDAO = new AddressDAO();

    @Test
    public void testGetAllAddresses() {
        Address address = utilCreateAddress("Rua fulano", "123/A");
        addressDAO.addAddress(address);

        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(1, addresses.size());
        assertEquals("Rua fulano", addresses.get(0).getStreet());
        assertEquals("123/A", addresses.get(0).getNumber());
    }

    @Test
    public void testGetNoAddress() {
        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(0, addresses.size());
    }

    @Test
    public void testGetAllAddressesMoreThanOne() {
        Address address = utilCreateAddress("Rua fulano", "123/A");
        Address address2 = utilCreateAddress("Outra rua", "111");
        addressDAO.addAddress(address);
        addressDAO.addAddress(address2);

        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(2, addresses.size());
        assertEquals("Rua fulano", addresses.get(0).getStreet());
        assertEquals("123/A", addresses.get(0).getNumber());
    } 
}

As we also frequently needed some specific object with just one or two parameters defined, we created variants of these methods:

public static Address utilCreateAddress() {
    return utilCreateAddress("Qualquer", "Qualquer");
}

public static Person utilCreatePerson() {
    return utilCreatePerson("José", utilCreateAddress());
}

We learned this in a somewhat complex project, with large networks of object dependencies. The use of these test utilities made it possible to practice TDD on the system. It was exciting to discover that, to create that document that depended on seven other documents and five or six users, all you had to do was call a method.

Of course, there is more to our testing utilities than has been written here, and there may be even more that we haven’t even done. (For example, it may be interesting to write test utilities for specific classes, instead of one gigantic utility.) However, as the idea is very simple, we hope this first step motivates you to think about the topic. Until later!

(This is a translation of the post “Utilitários de Teste” from Suspensão de Descrença. It was originally posted in the old SEA Tecnologia blog. As the original post went offline but the topic remains relevant, I decided to republish it.)

No comments. Now what?

Traditionally, it is considered good practice to comment code. However, this wisdom has been revisited in recent times. At Liferay, for example, we follow a policy of not commenting code. Personally, I am an enthusiast of this philosophy. But I don’t want to present or defend this strategy here, there is a lot of good material on this subject. I want to discuss an open question.

Whoever comments wants to convey some important information. What information is this? And, most importantly, where can we register it? Let’s look at some alternatives.

What do these lines do?

Function names are excellent for explaining what the code does. If a block of code requires a comment, consider extracting it into a function or class. The name of the entity will already clarify its purpose.

Observe, for example, the lines below, taken from this test class:

Assert.assertNotNull(recurrence);
Assert.assertNull(recurrence.getUntilJCalendar());
Assert.assertEquals(0, recurrence.getCount());

These lines check if an event’s RRule has certain properties: it must exist, its untilJCalendar must be null, and its count must be zero.

The concepts are complex; even I would be confused rereading these asserts. A comment could explain them. But this commit already clarified everything by moving these lines to a method and invoking it:

assertRepeatsForever(recurrence);

Those assertions were checking if the event repeats indefinitely! No comment was needed—fortunately, as these asserts were in various tests.

What is happening here?

If a comment would explain something relevant at runtime, consider turning it into a log message! Check the example below:

if (Validator.isBlank(serviceAccountKey)) {
    // If no credentials are set for GCS Store, the library will
    // use Application Default Credentials.
    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

This comment may be relevant to someone reading the code. However, it would be crucial for someone investigating an authentication issue. Therefore, in practice, I chose to log a message:

if (Validator.isBlank(serviceAccountKey)) {
    if (_log.isInfoEnabled()) {
        _log.info(
            "No credentials set for GCS Store. Library will use " +
                "Application Default Credentials.");
    }

    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

Why are they here?

Comments to explain why certain lines are there are also common. A better place to share this information is in commit messages.

These days, for example, I was asked to help with some code I worked on years ago. Reading a JSP—remember, years ago—I found these lines:

<liferay-portlet:renderURL portletName="<%= KaleoDesignerPortletKeys.KALEO_DESIGNER %>" var="viewURL">
    <portlet:param name="mvcPath" value="/designer/view_kaleo_definition_version.jsp" />
    <portlet:param name="redirect" value="<%= currentURL %>" />
    <portlet:param name="name" value="<%= kaleoDefinitionVersion.getName() %>" />
    <portlet:param name="draftVersion" value="<%= kaleoDefinitionVersion.getVersion() %>" />
</liferay-portlet:renderURL>

This tag is generating a URL to be used elsewhere. But my trained eyes found the portletName parameter strange. This value is usually set automatically.

A git blame clarified everything when I found this commit. The message is clear:

LPS-74977 / LPS-73731 By making the render URL explicitly use the Kaleo Designer name, it will be valid when rendered in another portlet.

I get it! This code will probably be invoked by some other portlet. In this case, the value would be automatically set by the other application, and for some reason, we want to avoid that.

(By the way, that’s why I prefer small commits: they make it easier to discover the reason for very specific code snippets. It’s like every line of code has a comment! It’s not a unanimous position, though: some prefer larger commits.)

The purpose of the line was clarified. But why can it be invoked by another application? This is not usual…

Why was this change made?

Well-written code explains how something was implemented. The commit message explains the why, but in a local context. How do you explain the broader motivation behind code without resorting to comments?

Issue tracker tickets are excellent for this. Typically written to guide development, these documents are very helpful in interpreting the code. If we add the ticket key to the commit message, we can track the reasons.

Returning to the example above. We found that a line allows using the same code in multiple portlets. But this is rarely necessary. Why do we need to reuse the code in this case? Fortunately, the message mentions two tickets. I checked the older one; I arrived at LPSA-64324:

[Information Architecture] EE — As a portal admin, I would like to access all workflow portlets from the control panel section under the same tab.

The title already helps, and the text clarifies it even more. For usability reasons, three different applications started appearing in tabs of the same portlet. It makes complete sense!

The comments we like

It’s important to highlight that we are trying to avoid disorganized comments that intertwine with the code and attempt to explain difficult-to-understand sections. There are various comments, often with standardized formats, that do not hinder readability. An obvious example is the copyright header.

Another effective way to use comments is through literate programming. In this programming style, comments take the spotlight: the source code contains more prose than executable code. This is useful when explaining the algorithm is more important than reading it, as in academic research and data analysis. Not surprisingly, it is the paradigm of popular tools like Jupyter Notebook and Quarto.

Even more relevant, tools like Javadoc, JSDoc, Doxygen, etc. read comments in a specific format to generate documentation. These comments do not affect readability. On the contrary, Javadocs are great for explaining how to use these entities. Combined with tools like my dear Doctest, we even get guarantees of accuracy and correctness!

A World of Possibilities

These are just a few examples of alternatives to comments. There are many other options, such as wikis and blogs. I’ve even found explanations for code I wrote myself on Stack Overflow! We can think of even more solutions to meet different needs. The key point is that with these tools at our disposal, adding comments directly to the code becomes unnecessary.

Naturally, not commenting is just one way to write readable code. Comments are not forbidden; in fact, there are strategies that can make them effective. However, in my experience, avoiding comments often leads to better results, and these techniques help document important information that doesn’t fit directly into the code.

Are you a follower of the “no comments” strategy? If so, where else do you convey information? If not, how do you ensure effective comments? What type of comment do you not see being replaced by these approaches? I’d love to hear your opinions.

(This post is a translation of “Sem Comentários. E Agora?”, from my blog Suspensão de Descrença.)

Importing ES 6 Modules from CommonJS

Here at Liferay, a few days ago, we needed to use the p-map package. There was only one problem: our application still uses the CommonJS format, and p-map releases ES6 modules only. Even some of the best references I found (e.g. this post) made it clear that it would not be possible to import ES6 modules from CommonJS.

The good news is that this is no longer true! Using dynamic import, we can load ES6 modules from CommonJS. Let’s look at an example.

In this project, the importer.js file tries to use require() to import an ES6 module:

const pmap = require('p-map');

exports.importer = () => {
  console.log('Yes, I could import p-map:', pmap);
}

Of course, it doesn’t work:

$ node index.js 
internal/modules/cjs/loader.js:1102
      throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
      ^

Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/adam/software/es6commonjs/node_modules/p-map/index.js
require() of ES modules is not supported.
require() of /home/adam/software/es6commonjs/node_modules/p-map/index.js from /home/adam/software/es6commonjs/importer.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/adam/software/es6commonjs/node_modules/p-map/package.json.

    at new NodeError (internal/errors.js:322:7)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (internal/modules/cjs/helpers.js:101:18)
    at Object.<anonymous> (/home/adam/software/es6commonjs/importer.js:1:14)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32) {
  code: 'ERR_REQUIRE_ESM'
}

The solution is to convert the require() call into a dynamic import. But there is one detail: dynamic imports return Promises. There are many ways to deal with this; the simplest is probably to make our function asynchronous, as in this version:

exports.importer = async () => {
  const pmap = await import('p-map');
  console.log('Yes, I could import p-map:', pmap);
}

Now our little app works!

$ node index.js 
ok
Yes, I could import p-map: [Module: null prototype] {
  AbortError: [class AbortError extends Error],
  default: [AsyncFunction: pMap],
  pMapSkip: Symbol(skip)
}

Some other adjustments may be necessary. (I had to adjust the eslint settings, for example.) The important thing is that this is possible. And it’s not a kludge: Node’s own documentation recommends this approach.

So, don’t be scared by outdated information: you won’t need to rewrite your entire application as ES 6 modules, at least for now. For us, this was quite a relief!

(This post is a translation of Importando Módulos ES6 em CommonJS, first published in Suspensão da Descrença.)