Implementing malloc() and free() — reducing the heap even more

In our journey implementing malloc() and free(), we learned to reuse memory blocks. Today, we will make a very simple optimization: reduce the heap size as much as possible.

This post is part of a series on implementing the malloc() and free() functions. In the previous article, we learned how to reuse memory blocks. It was a significant advancement, but there’s much more room for improvement.

One example is reducing the size of the heap, as explained in the first post. When we free the last memory block, we move the top of the heap to the end of the previous block. However, this previous block might also be free, as well as others. Consider the scenario below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr1);
abfree(ptr2);

In this case, when we free the block pointed to by ptr2, the block pointed to by ptr1 becomes the last one. However, that block is also free, so we could reduce the heap size even further.

To achieve this, we iterate over the blocks from the end of the list while they are free. If the header of the received pointer is the last block and the previous block is free, we move the header pointer to the previous block. We repeat this process until we reach a free block whose previous block is in use (or NULL, if it is the first block). Then, we execute the heap reduction procedure:

if (header == last) {
  while ((header->previous != NULL) && header->previous->available) {
    header = header->previous;
  }
  last = header->previous;
  brk(header);
} else {

Now, though, we need to fix a bug in abfree(). According to the specification, the free() function should accept a null pointer and do nothing. However, if abfree() receives NULL, we will have a segmentation fault! Fortunately, it is easy to fix by adding a check at the beginning of the function:

void abfree(void *ptr) {
  if (ptr == NULL) {
    return;
  }
  Header *header = (Header*) ptr - 1;

So, here’s our abfree() function at the moment:

void abfree(void *ptr) {
  if (ptr == NULL) {
    return;
  }
  Header *header = (Header*) ptr - 1;
  if (header == last) {
    while ((header->previous != NULL) && header->previous->available) {
      header = header->previous;
    }
    last = header->previous;
    brk(header);
  } else {
    header->available = true;
  }
}
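
To see the reduction in action, we can compare the top of the heap before and after the deallocations. Here is a minimal sketch, assuming the abmalloc() and abfree() definitions above are in the same file (the sbrk(0) calls are just a way to read the current program break):

#include <stdio.h>
#include <unistd.h>

int main() {
  void *before = sbrk(0);  /* top of the heap before any allocation */
  void *ptr1 = abmalloc(8);
  void *ptr2 = abmalloc(8);
  abfree(ptr1);  /* not the last block: only marked as available */
  abfree(ptr2);  /* last block: the loop also walks back over ptr1's block */
  void *after = sbrk(0);
  printf("heap shrank back: %s\n", before == after ? "yes" : "no");
  return 0;
}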

Reducing the size of the heap is a simple optimization, but there are still challenges ahead. In the next post, we’ll discuss how to avoid reusing very large memory blocks for small requests.

(This post is a translation of Implementando malloc() e free() — reduzindo ainda mais o heap, first published in Suspensão de Descrença.)

Implementing malloc() and free() — reusing memory blocks

Dynamic memory allocation is of no use if we cannot reuse freed memory, right? Proceeding with our implementation, we will make our malloc() function use freed blocks of memory when possible!

This post is part of a series on how to implement the malloc() and free() functions. In a previous article, we changed our functions to free some memory blocks. However, this only worked if the blocks were freed from newest to oldest.

This doesn't make much difference in practice. Dynamically allocated memory rarely behaves like a stack, where the newest block is always deallocated first. The big advantage of dynamic memory allocation, after all, is precisely that it doesn't work like a stack.

To understand the limitations of our implementation, consider the code below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr1);
void *ptr3 = abmalloc(8);

In the first line, we allocate eight bytes, and free them in the third line. In the last line, we allocate eight bytes again. However, we cannot reuse the freed memory. To truly save memory, we need a more sophisticated solution.

One option is to reuse free blocks. To do this, we add a Boolean field to the block header, called available, which will indicate whether the block is free. As a block can only be reused if the memory requested by abmalloc() is less than or equal to that available in the block, we also need a field in the header indicating the size of the block, which we will call size.

typedef struct Header {
  struct Header *previous;
  size_t size;
  bool available;
} Header;

When the block is allocated, the value of the available field must be false (since the block is not available). We also record the block size in the size field:

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = last;
  header->size = size;
  header->available = false;
  last = header;
  return last + 1;
}

We have the information in the header, but we are not yet reusing deallocated memory. To reuse the available blocks, we need to find them! The algorithm for this is very simple: abmalloc() iterates over the blocks, from the last to the first. Since the previous pointer of the first block is always NULL, we stop when we find such a value:

void *abmalloc(size_t size) {
  Header *header = last;
  while (header != NULL) {
    header = header->previous;
  }

In each iteration, we check whether the block is available and has an acceptable size. If in the middle of this process we find an available block greater than or equal to what we need, we got lucky! Just mark the block as unavailable, and return it.

void *abmalloc(size_t size) {
  Header *header = last;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->previous;
  }

What if we don’t find a block that satisfies these conditions? In this case, the abmalloc() function increases the heap, as it used to do:

void *abmalloc(size_t size) {
  Header *header = last;
  while (header != NULL) {
    if (header->available && (header->size >= size)) {
      header->available = false;
      return header + 1;
    }
    header = header->previous;
  }
  header = sbrk(sizeof(Header) + size);
  header->previous = last;
  header->size = size;
  header->available = false;
  last = header;
  return last + 1;
}

When it comes to deallocating, we have two possible situations. If the block deallocated by abfree() is the last one, nothing changes: we update the last pointer and move the top of the heap back to the beginning of the block. But what if the block is not at the top of the heap? We simply mark it as available, as can be seen in the else clause of the function below:

void abfree(void *ptr) {
  Header *header = (Header*) ptr - 1;
  if (header == last) {
    last = header->previous;
    brk(header);
  } else {
    header->available = true;
  }
}
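
To see the reuse in action, we can revisit the example from the beginning of this post and compare the addresses. Here is a minimal sketch, assuming the abmalloc() and abfree() versions above are in the same file:

#include <stdio.h>

int main() {
  void *ptr1 = abmalloc(8);
  void *ptr2 = abmalloc(8);
  abfree(ptr1);  /* not the last block: marked as available */
  void *ptr3 = abmalloc(8);  /* finds ptr1's block available and big enough */
  printf("block reused: %s\n", ptr1 == ptr3 ? "yes" : "no");
  return 0;
}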

Reusing blocks of memory is a huge advance. However, we can be even more efficient in memory usage. For example, we only reduce the heap size if we deallocate the last block. If there are more unused blocks right before it, we could free them too. We will see how to do this in the next post.

(This post is a translation of Implementando malloc() e free() — reutilizando blocos de memória, originally published in Suspensão de Descrença.)

Implementing malloc() and free() — adding metadata to the memory blocks

When malloc() reserves blocks of memory, it somehow needs to be able to release them later, when free() is called. We fell short of any real solution for this in our last post. In this post, though, we take the first, most fundamental steps to bring real memory efficiency to our implementations of malloc() and free()!

This post is part of a series on implementing the malloc() and free() functions. Previously, we implemented a rather simplistic approach that almost doesn’t free any memory: a pointer points to the last allocated block, enabling free() to deallocate it, but only it.

A better option is to make the last block point to the second-to-last, the second-to-last block to the third-to-last, and so on, forming a linked list. To achieve this, we create a struct that will serve as the header of the blocks, containing a pointer to the previous block:

typedef struct Header {
  struct Header *previous;
} Header;

Additionally, the pointer to the last block, which used to be void*, is now of type Header*:

Header *last = NULL;

To use these headers, abmalloc() reserves enough memory to store both the header and the requested size:

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);

In this way, we use the beginning of the block to store necessary information, such as a pointer to the last allocated block before the new one:

  header->previous = last;

Then, we update last to point to the new block:

  last = header;

Finally, we return a pointer to the memory that the user can use. Since header points to the metadata, we cannot simply return it. Otherwise, all header information would be overwritten when the user used the pointer! Instead, we return a pointer to just after the header. This pointer is easy to calculate: it is the memory address of the header plus the size of the header:

  return header + 1;
}

Note how we increment the header pointer by 1. Since the pointer type is Header*, the increment actually advances the pointer by sizeof(Header) bytes, not just one byte. The type of the pointer is very relevant in pointer arithmetic.
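
We can verify this equivalence with a small check. The sketch below is just an illustration of the arithmetic (it assumes the Header struct above and is not part of the allocator):

#include <assert.h>

int main() {
  Header header;
  Header *start = &header;
  /* incrementing a Header* by 1 advances sizeof(Header) bytes */
  assert((void *) (start + 1) == (void *) ((char *) start + sizeof(Header)));
  return 0;
}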

Now that our memory blocks have metadata at the beginning, we need to take this into account when deallocating. free() receives a pointer not to the start of the block but to the memory made available to the user. Therefore, we need to find the start of the block from the pointer the user passed. Nothing that a little pointer arithmetic can’t solve:

void abfree(void *ptr) {
  Header *header = (Header*) ptr - 1;

If header points to the last allocated block, the previous block will become the last. In this case, we can return memory from the heap to the operating system through brk():

  if (header == last) {
    last = header->previous;
    brk(header);
  }
}

Here are our new malloc() and free() functions:

typedef struct Header {
  struct Header *previous;
} Header;

Header *last = NULL;

void *abmalloc(size_t size) {
  Header *header = sbrk(sizeof(Header) + size);
  header->previous = last;
  last = header;
  return header + 1;
}

void abfree(void *ptr) {
  Header *header = (Header*) ptr - 1;
  if (header == last) {
    last = header->previous;
    brk(header);
  }
}

abmalloc() and abfree() may be slightly more memory-efficient now, but not by much. Dynamically allocated memory rarely behaves like a stack, where the newest block is always deallocated first. In the next post, we will see how to use the memory of older blocks that are no longer in use.

(This post is a translation of Implementando malloc() e free() — adicionando metadados aos blocos de memória, from Suspensão de Descrença.)

Tiny Ticket Types

Tickets in Jira tend to accumulate redundant and optional fields, becoming complex and confusing. I like Jira, but I understand the frustration it causes.

A plausible solution could be inspired by software development. We programmers are used to finding massive source files, and we know that breaking them into smaller files drastically improves code comprehension. Therefore, inspired by coding best practices, I suggest creating smaller tickets.

Only three states

One way to limit the size of tickets is to simplify the workflow by restricting the number of states. For example, we can define that each type of ticket would have, at most, three states:

  • To do
  • In progress
  • Done

To represent other stages, we can create new types of tickets, such as sub-tasks.

A moderately complex ticket type

Let’s look at an example. Consider the ticket below:

Key: XYZ-1234. Status: Testing. Title: Nasal demons. Description: Calling free() on a previously deallocated pointer results in demons coming out of the nose. Technical analysis: The root cause is undefined behavior. Test results: The patch does not work; now ghosts pop out of the user’s ears. Release date: 2023-12-22

It would follow this workflow:

Open ⇨ To do ⇨ In Analysis ⇨ Doing ⇨ Testing ⇨ Release ⇨ Done

How could we reduce the number of phases?

We can start by removing the “In Analysis” stage. In its place, we create a new type of ticket called “Technical Analysis.” This way, the original task remains in progress (“Doing”) while the technical analysis is underway.

Fewer fields in a ticket

An advantage of this would be transferring fields to sub-tasks. Fields that would clutter the original ticket can appear only in tasks where they are relevant.

Consider the “Release date” field, which only makes sense in the “Release” phase. If developers, testers, etc., are not responsible for the release, this field is confusing and pollutes the original task. With a new task type called “Release,” this field would be in the most appropriate place, keeping the original ticket concise.

Repeating stages without regressing

Another advantage is that the original ticket can go through the same stage multiple times. It’s common for a ticket to have a development phase followed by quality tests, for example. However, if a problem arises in the evaluation, it’s not advisable to revert to the development phase. How to deal with this?

By working with sub-tasks, we can mark validation as completed and create a new implementation ticket. In our ticket, for example, we can remove the “Testing” phase and create a sub-task of type “Test,” as well as another one called “Development.” Every time the test fails, we close testing and open a new development task.

Result

Following this strategy, our ticket would look like this:

Key: XYZ-1234. Status: Doing. Title: Nasal demons. Description: Calling free() on a previously deallocated pointer results in demons coming out of the nose. Links: XYZ-1235 Technical analysis; XYZ-2345 Remove text in Latin; XYZ-3456 Test Latin removal; XYZ-2345 Use function medium(); XYZ-3456 Test medium() function; XYZ-4444 Release plan

And the workflow would be much simpler:

Open ⇨ To do ⇨ Doing ⇨ Done

Naturally, this strategy is flexible. In our case, for example, we haven’t removed the “To do” phase yet. Restricting the number of states to five (including backlog and validation) is another possibility. The core idea is to limit the number of stages to a small value for all tickets.

Conclusions

In programming, it’s common to encounter the so-called “God objects,” huge objects responsible for various different functions. Breaking them down is a surefire way to achieve code quality. Therefore, I suspect the same principle can apply to tickets in Jira.

I’m not a project manager, but as a programmer, I believe that limiting the size and steps of tickets can be an effective idea. I’m curious to know if anyone has tried this and how it went.

Implementing malloc() and free() — first steps

Following the wonderful journey that is reading Crafting Interpreters, I reached the point where we implemented an interpreter in C! As always, Bob Nystrom mercilessly proposes very interesting challenges that keep us busy for long periods. For instance, in this chapter, he suggests implementing our own memory allocator, without any real need! Inevitably, I was nerdsniped.

The challenge allows us to allocate a large memory region with an existing malloc() function and manage it, but I decided to implement malloc() from scratch. Since I use Ubuntu, I first needed to better understand the memory layout of a process on Linux.

Consider the diagram below, which represents the memory layout of a process.

In the memory allocated for the process, there are various sections. When the program starts its execution, the shaded part is not yet in use. Throughout its execution, the program declares local variables, causing the stack to grow backward.

On the other hand, dynamically allocated memory is obtained from the heap, which grows in the opposite direction. The popular way to expand the heap is by increasing the size of the data segment (i.e., the section that contains global and static variables) with the sbrk() system call.

Diagram representing how sbrk() works, by increasing the data segment pointer but returning the old value.

The diagram above illustrates how this system call works. sbrk() takes an integer parameter that is added to the pointer indicating the end of the data segment. After that, sbrk() returns the value the pointer had before the increment.
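
We can observe this behavior directly. Here is a minimal sketch (sbrk() is declared in <unistd.h>; the printed addresses will vary between runs):

#include <stdio.h>
#include <unistd.h>

int main() {
  void *old_top = sbrk(16);  /* grow the data segment by 16 bytes */
  void *new_top = sbrk(0);   /* sbrk(0) just returns the current top */
  /* new_top is exactly 16 bytes past old_top */
  printf("%p -> %p\n", old_top, new_top);
  return 0;
}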

In a way, the behavior of sbrk() is already sufficient for memory allocation. Our malloc() function can simply invoke sbrk() and return to the user the pointer to the beginning of the allocated memory block:

void *abmalloc(size_t size) {
  return sbrk(size);
}

In principle, free() doesn’t need to do anything: since this implementation always uses memory from the top of the heap, there is nothing we can do to reuse older memory blocks. In that sense, free() can perfectly well be a no-op:

void abfree(void *ptr) {
}

A useful operation can be done, however, if the block to be freed is the last one allocated. In that case, it is at the top of the heap, so we just need to move the program break back with the brk() system call. This syscall takes a pointer as a parameter and, if this pointer is a “reasonable” value (not null, does not point into the stack, does not point before the heap), uses the pointer’s value as the new top of the heap. The result would be something like this:

void abfree(void *ptr) {
  if (ptr == last_block) {
    brk(last_block);
  }
}

This deallocation, however, is practically useless. Consider the example below:

void *ptr1 = abmalloc(8);
void *ptr2 = abmalloc(8);
abfree(ptr2);
abfree(ptr1);

With the current version of abfree(), we can free the memory pointed to by ptr2, but not the one pointed to by ptr1. To be able to free ptr1, it would be necessary to know that, once ptr2 has been deallocated, the last block becomes the one pointed to by ptr1. Could we create a second_last_block variable? It wouldn’t help: we would have the same problem with the block before that one, and so on.

We need a more powerful data structure here, and that’s what we’ll see in our next post.

(This post is a translation of Implementando malloc() e free() — primeiros passos, originally published in Suspensão de Descrença.)

Test utilities, or set-up methods considered harmful

One of the most interesting things I learned at the old SEA Tecnologia was the creation of test utilities.

Test utilities are a way to reuse code in unit tests. Usually, this is done using setUp() or @Before methods, but this has some disadvantages. For example, in a test case, we can have the following initialization:

private Address address;
private AddressDAO addressDAO;

@Before
public void setUp() {
    address = new Address();
    address.setStreet("Rua fulano");
    address.setNumber("123/A");
    addressDAO = new AddressDAO();
}

This initialization works well in the test below…

@Test
public void testGetAllAddresses(){
    addressDAO.addAddress(address);

    List<Address> addresses = addressDAO.getAllAddresses();

    assertEquals(1, addresses.size());
    assertEquals("Rua fulano", addresses.get(0).getStreet());
    assertEquals("123/A", addresses.get(0).getNumber());
}

However, in the following test, the created object is wasted; it is not used at all:

@Test
public void testGetNoAddress() {
    List<Address> addresses = addressDAO.getAllAddresses();

    assertEquals(0, addresses.size());
}

In the next test, we have code redundancy. We also have to decide whether the other object should be created in the @Before method or in the test method:

@Test
public void testGetAllAddressesMoreThanOne() {
    addressDAO.addAddress(address);
    Address address2 = new Address();
    address2.setStreet("Outra rua");
    address2.setNumber("111");
    addressDAO.addAddress(address2);

    List<Address> addresses = addressDAO.getAllAddresses();

    assertEquals(2, addresses.size());
    assertEquals("Rua fulano", addresses.get(0).getStreet());
    assertEquals("123/A", addresses.get(0).getNumber());
}

These inconveniences are minor compared to the task of creating a network of dependencies. For example, to test a Person class that depends on an Address, we would need a @Before similar to this:

private Person person;
private Address address;
private PersonDAO personDAO;

@Before     
public void setUp() {
    address = new Address();
    address.setStreet("Rua fulano");
    address.setNumber("123/A");
    person = new Person();
    person.setName("João");
    person.setAddress(address);
    personDAO = new PersonDAO();
}

The code for creating addresses was duplicated, and it is difficult to create the dependencies. In these examples, we see simple cases, but it is easy to see how the situation would get complicated.

We solve this problem by creating a class to create these objects. This class would be something like this:

public class TestUtil {
    public static Address utilCreateAddress(String street, String number) {
        Address address = new Address();
        address.setStreet(street);
        address.setNumber(number);
        return address;
    }

    public static Person utilCreatePerson(String name, Address address) {
        Person person = new Person();
        person.setName(name);
        person.setAddress(address);
        return person;
    }
}

Our test cases extended TestUtil, making object creation easier:

public class TestAddress2 extends TestUtil {
    private AddressDAO addressDAO = new AddressDAO();

    @Test
    public void testGetAllAddresses() {
        Address address = utilCreateAddress("Rua fulano", "123/A");
        addressDAO.addAddress(address);

        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(1, addresses.size());
        assertEquals("Rua fulano", addresses.get(0).getStreet());
        assertEquals("123/A", addresses.get(0).getNumber());
    }

    @Test
    public void testGetNoAddress() {
        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(0, addresses.size());
    }

    @Test
    public void testGetAllAddressesMoreThanOne() {
        Address address = utilCreateAddress("Rua fulano", "123/A");
        Address address2 = utilCreateAddress("Outra rua", "111");
        addressDAO.addAddress(address);
        addressDAO.addAddress(address2);

        List<Address> addresses = addressDAO.getAllAddresses();

        assertEquals(2, addresses.size());
        assertEquals("Rua fulano", addresses.get(0).getStreet());
        assertEquals("123/A", addresses.get(0).getNumber());
    } 
}

As we also frequently needed some specific object with just one or two parameters defined, we created method variants:

public static Address utilCreateAddress() {
    return utilCreateAddress("Qualquer", "Qualquer");
}

public static Person utilCreatePerson() {
    return utilCreatePerson("José", utilCreateAddress());
}

We learned this in a somewhat complex project, with large networks of object dependencies. The use of these test utilities made it possible to practice TDD on the system. It was exciting to discover that, to create a document that depended on seven other documents and five or six users, all you had to do was call a method.

Of course, there is more to our test utilities than has been written here, and there may be even more that we haven’t done yet. (For example, it may be interesting to write test utilities for specific classes, instead of one gigantic utility.) However, as the idea is very simple, we hope this first step motivates you to think about the topic. Until later!

(This is a translation of the post “Utilitários de Teste” from Suspensão de Descrença. It was originally posted in the old SEA Tecnologia blog. As the original post went offline but the topic remains relevant, I decided to republish it.)

No comments. Now what?

Traditionally, it is considered good practice to comment code. However, this wisdom has been revisited in recent times. At Liferay, for example, we follow a policy of not commenting code. Personally, I am an enthusiast of this philosophy. But I don’t want to present or defend this strategy here; there is a lot of good material on this subject. I want to discuss an open question.

Whoever comments wants to convey some important information. What information is this? And, most importantly, where can we register it? Let’s look at some alternatives.

What do these lines do?

Function names are excellent for explaining what the code does. If a block of code requires a comment, consider extracting it into a function or class. The name of the entity will already clarify its purpose.

Observe, for example, the lines below, taken from this test class:

Assert.assertNotNull(recurrence);
Assert.assertNull(recurrence.getUntilJCalendar());
Assert.assertEquals(0, recurrence.getCount());

These lines check if an event’s RRule has certain properties: it must exist, have a null “until” calendar, and a count of zero.

The concepts are complex; even I would be confused rereading these asserts. A comment could explain them. But this commit already clarified everything by moving these lines to a method and invoking it:

assertRepeatsForever(recurrence);

Those assertions were checking if the event repeats indefinitely! No comment was needed—fortunately, as these asserts were in various tests.

What is happening here?

If a comment would explain something relevant at runtime, consider turning it into a log message! Check the example below:

if (Validator.isBlank(serviceAccountKey)) {
    // If no credentials are set for GCS Store, the library will
    // use Application Default Credentials.
    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

This comment may be relevant to someone reading the code. However, it would be crucial for someone investigating an authentication issue. Therefore, in practice, I chose to log a message:

if (Validator.isBlank(serviceAccountKey)) {
    if (_log.isInfoEnabled()) {
        _log.info(
            "No credentials set for GCS Store. Library will use " +
                "Application Default Credentials.");
    }

    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

Why are they here?

Comments to explain why certain lines are there are also common. A better place to share this information is in commit messages.

These days, for example, I was asked to help with some code I worked on years ago. Reading a JSP—remember, years ago—I found these lines:

<liferay-portlet:renderURL portletName="<%= KaleoDesignerPortletKeys.KALEO_DESIGNER %>" var="viewURL">
    <portlet:param name="mvcPath" value="/designer/view_kaleo_definition_version.jsp" />
    <portlet:param name="redirect" value="<%= currentURL %>" />
    <portlet:param name="name" value="<%= kaleoDefinitionVersion.getName() %>" />
    <portlet:param name="draftVersion" value="<%= kaleoDefinitionVersion.getVersion() %>" />
</liferay-portlet:renderURL>

This tag is generating a URL to be used elsewhere. But my trained eyes found the portletName parameter strange. This value is usually set automatically.

A git blame clarified everything when I found this commit. The message is clear:

LPS-74977 / LPS-73731 By making the render URL explicitly use the Kaleo Designer name, it will be valid when rendered in another portlet.

I get it! This code will probably be invoked by some other portlet. In this case, the value would be automatically set by the other application, and for some reason, we want to avoid that.

(By the way, that’s why I prefer small commits: they make it easier to discover the reason for very specific code snippets. It’s like every line of code has a comment! It’s not a unanimous position, though: some prefer larger commits.)

The purpose of the line was clarified. But why can it be invoked by another application? This is not usual…

Why was this change made?

Well-written code explains how something was implemented. The commit message explains the why, but in a local context. How do you explain the broader motivation behind code without resorting to comments?

Issue tracker tickets are excellent for this. Typically written to guide development, these documents are very helpful in interpreting the code. If we add the ticket key to the commit message, we can track the reasons.

Returning to the example above: we found that a line allows using the same code in multiple portlets. But this is rarely necessary. Why do we need to reuse the code in this case? Fortunately, the message mentions two tickets. I checked the older one and arrived at LPSA-64324:

[Information Architecture] EE — As a portal admin, I would like to access all workflow portlets from the control panel section under the same tab.

The title already helps, and the text clarifies it even more. For usability reasons, three different applications started appearing in tabs of the same portlet. It makes complete sense!

The comments we like

It’s important to highlight that we are trying to avoid disorganized comments that intertwine with the code and attempt to explain difficult-to-understand sections. There are various comments, often with standardized formats, that do not hinder readability. An obvious example is the copyright header.

Another effective way to use comments is through literate programming. In this programming style, comments take the spotlight: the source code contains more prose than executable code. This is useful when explaining the algorithm is more important than reading it, as in academic research and data analysis. Not surprisingly, it is the paradigm of popular tools like Jupyter Notebook and Quarto.

Even more relevant, tools like Javadoc, JSDoc, Doxygen, etc. read comments in a specific format to generate documentation. These comments do not affect readability. On the contrary, Javadocs are great for explaining how to use these entities. Combined with tools like my dear Doctest, we even get guarantees of accuracy and correctness!
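
As an illustration, a documentation comment in the Doxygen format could look like the sketch below (the function signature is just a hypothetical example):

/**
 * Allocates size bytes of memory from the heap.
 *
 * @param size the number of bytes to allocate
 * @return a pointer to the beginning of the allocated block
 */
void *abmalloc(size_t size);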

A World of Possibilities

These are just a few examples of alternatives to comments. There are many other options, such as wikis and blogs. I’ve even found explanations for code I wrote myself on Stack Overflow! We can think of even more solutions to meet different needs. The key point is that with these tools at our disposal, adding comments directly to the code becomes unnecessary.

Naturally, not commenting is just one way to write readable code. Comments are not forbidden; in fact, there are strategies that can make them effective. However, in my experience, avoiding comments often leads to better results, and these techniques help document important information that doesn’t fit directly into the code.

Are you a follower of the “no comments” strategy? If so, where else do you convey information? If not, how do you ensure effective comments? What type of comment do you not see being replaced by these approaches? I’d love to hear your opinions.

(This post is a translation of “Sem Comentários. E Agora?” from my blog Suspensão de Descrença.)

10 years of Liferay

The other day, we got an unexpected package. What came inside was even more surprising! What was going on?

An iPad box with a card on top of it. The card has the number "10" written in golden color.

Well, it turns out that a few months ago I completed ten years working for Liferay! This is not only a remarkable tenure, but also one that brought me a lot of growth. I lived in two cities, traveled to a few more around the world, learned to work remotely with very diverse teams, worked with numerous stacks, and watched the LATAM branch grow from a dozen people to hundreds.

A card with a header written: "Happy Liferay Anniversary."

Below, handwritten in Brazilian Portuguese:

"Adam,
It is an honor to write this card for you to celebrate your 10 years at Liferay.
Work done with commitment and dedication always bears good fruit.
I am very proud to have been part of your journey. May you continue to be an inspiration to all of us.
Happy 10 years at Liferay!! Here's to many more to come..."

These days, it’s unusual to stay in the same place for that long, especially in a tech career. But Liferay is indeed a nice place to work, and there are always new things to learn and new challenges, both technical and in teamwork and serving the customer. I surely grew a lot and, as it seems, I have room here to evolve even further!

So, thank you, people, for the gift, but more importantly, thank you for the great time, growing and challenges. And brace yourselves, as I plan to be a delightful “nuisance” among you all for many more fruitful years to come! 😄🎉

(Originally posted on LinkedIn.)

Importing ES6 Modules from CommonJS

Here at Liferay, a few days ago, we needed to use the p-map package. There was only one problem: our application still uses the CommonJS format, and p-map is published as an ES6 module only. Even some of the best references I found (e.g. this post) made it clear that it would not be possible to import ES6 modules from CommonJS.

The good news is that this is no longer true! Using dynamic import, we can load ES6 modules from CommonJS. Let’s look at an example.

In this project, the importer.js file tries to use require() to import an ES6 module:

const pmap = require('p-map');

exports.importer = () => {
  console.log('Yes, I could import p-map:', pmap);
}

Of course, it doesn’t work:

$ node index.js 
internal/modules/cjs/loader.js:1102
      throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
      ^

Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/adam/software/es6commonjs/node_modules/p-map/index.js
require() of ES modules is not supported.
require() of /home/adam/software/es6commonjs/node_modules/p-map/index.js from /home/adam/software/es6commonjs/importer.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/adam/software/es6commonjs/node_modules/p-map/package.json.

    at new NodeError (internal/errors.js:322:7)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (internal/modules/cjs/helpers.js:101:18)
    at Object.<anonymous> (/home/adam/software/es6commonjs/importer.js:1:14)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32) {
  code: 'ERR_REQUIRE_ESM'
}

The solution is to convert the require() call into a dynamic import. But there is one detail: dynamic imports return Promises. There are many ways to deal with this; the simplest one is probably to make our function asynchronous, as in this version:

exports.importer = async () => {
  const pmap = await import('p-map');
  console.log('Yes, I could import p-map:', pmap);
}

Now our little app works!

$ node index.js 
ok
Yes, I could import p-map: [Module: null prototype] {
  AbortError: [class AbortError extends Error],
  default: [AsyncFunction: pMap],
  pMapSkip: Symbol(skip)
}

Some other adjustments may be necessary. (I had to adjust the eslint settings, for example.) The important thing is that this is possible. And it’s not a kludge: Node’s own documentation recommends this approach.

So, don’t be scared by outdated information: you won’t need to rewrite your entire application as ES6 modules, at least for now. For us, this was quite a relief!

(This post is a translation of Importando Módulos ES6 em CommonJS, first published in Suspensão da Descrença.)

Billing the technical debt

I really like to work on technical debt issues only when they affect demands. But why? Well, here are some reasons…

Some time ago the man, the myth, the legend Fabrício Buzeto asked this interesting question:

Out of curiosity. Does your team keep a list of technical debt? Does it make you feel joy?

It brought back some memories. For a few years, I was responsible for the Liferay Calendar and Kaleo Designer portlets. These were complex single-page apps, built at a fast pace when the concept of SPAs was still evolving: many choices called for a review.

So I started writing JIRA tickets for technical debt. When one of those health issues made a bug fix or feature harder to implement, I’d convert that technical debt ticket into a sub-task of the demand. As I like to say, I was “billing the debt from the feature.”

I commented that and he asked me a crucial question:

Why not treat them like any other card in the backlog then?

Why, indeed?

Well, at first, we tried! I would present the debt issues in our prioritization meetings. Having the problems written down helped a lot to catch the managers’ attention, by the way.

Technical debt is a hard sell, though. People are understandably wary about buying into something whose value they cannot see. Nonetheless, changes took longer and longer to deliver, and regression bugs kept popping up. We needed to fix these health problems.

That’s why I started to work on debt as part of value-adding tasks. Working on the debt to make a demand easier was great evidence that the extra work was worth it. It was not just some random idea we worked on to postpone duties: it delivered value.

That is the first reason for handling technical debt as sub-tasks of value issues: By binding the debt to a value-adding task, it is easier to justify the extra effort to stakeholders.

At first, this debt-billing was only a communication device. But there was a cool side effect: the most glaring issues kept being solved first. That makes sense: since we worked on them when they caused problems, the ones causing more problems were solved first. Since prioritization is always a challenge (and prioritizing technical debt is even harder), it was a huge help.

We still had a pile of technical debt tasks, but many of the pending ones were no longer relevant. Some were already solved. Others were elegant ideas back then but didn’t make sense anymore. In hindsight, a good part of the “debt” was personal preferences, or assumptions that were no longer true after some product evolution.

This is the second reason for debt-billing: Working on health issues as part of demand is an effective way to prioritize which technical debt to work on.

See how great that is! Had we worked on technical debt tasks by themselves (in a task force, for example), we might have applied changes that would actually make future evolution harder. Debt-billing let us confirm which requests fit our goals. And it has a subtler, more important consequence.

We developers are an opinionated lot, and that is good. We usually try to turn these opinions into goals. But it is hard to know if a goal is right. Once we use these ideas as helpers for something more clearly relevant, that goal turns into a tool. Tools are much easier to evaluate!

This is a third reason for debt-billing: when technical debt is linked to value delivery, the creative force from the team works together with the organization’s objectives.

Our experience is that this strategy was quite effective. Everybody knew their suggestions would be evaluated: health tasks wouldn’t be a chore to prioritize anymore, but a toolset that our colleagues would look for to help with their challenges. The debt backlog was not a wishing well anymore.

The apps got better, too. When I started working on the Calendar, for example, it was usually seen as an especially problematic portlet. The first release couldn’t even schedule events! When I left that team, the Calendar had no bug of priority 3 or higher (the levels we have to fix), and we had delivered quite a good number of features, even some missing in leading competitors. Not bad for a product that was once an example of a non-working feature!

It felt right to bill the technical debt from the demands, but I had never thought deeply about why. So, thank you for asking that, Fabrício! It was a joy to think about it.

EDIT: I just recalled that Ron Jeffries wrote a great post about his approach to refactoring, which the one here resembles, although he argues against a specific point. Totally worth reading!