Tiny Ticket Types

Tickets in Jira tend to accumulate redundant and optional fields, becoming complex and confusing. I like Jira, but I understand the frustration it causes.

A plausible solution can be borrowed from software development. We programmers are used to finding massive source files, and we know that breaking them into smaller files drastically improves code comprehension. So, inspired by coding best practices, I suggest creating smaller tickets.

Only three states

One way to limit the size of tickets is to simplify the workflow by restricting the number of states. For example, we can define that each type of ticket has, at most, three states:

  • To do
  • In progress
  • Done

To represent other stages, we can create new types of tickets, such as sub-tasks.

A moderately complex ticket type

Let’s look at an example. Consider the ticket below:

Key: XYZ-1234. Status: Testing. Title: Nasal demons. Description: Calling free() on a previously deallocated pointer results in demons coming out of the nose. Technical analysis: The root cause is undefined behavior. Test results: The patch does not work; now ghosts pop out of the user’s ears. Release date: 2023-12-22

It would follow this workflow:

Open ⇨ To do ⇨ In Analysis ⇨ Doing ⇨ Testing ⇨ Release ⇨ Done

How could we reduce the number of phases?

We can start by removing the “In Analysis” stage. In its place, we create a new type of ticket called “Technical Analysis.” This way, the original task remains in progress (“Doing”) while the technical analysis is underway.

Fewer fields in a ticket

One advantage of this approach is the possibility of transferring fields to sub-tasks. Fields that would clutter the original ticket can appear only in the tasks where they are relevant.

Consider the “Release date” field, which only makes sense in the “Release” phase. If developers, testers, etc., are not responsible for the release, this field is confusing and pollutes the original task. With a new task type called “Release,” this field would be in the most appropriate place, keeping the original ticket concise.

Repeating stages without regressing

Another advantage is that the original ticket can go through the same stage multiple times. It’s common for a ticket to have a development phase followed by quality tests, for example. However, if a problem arises in the evaluation, it’s not advisable to revert to the development phase. How do we deal with this?

By working with sub-tasks, we can mark validation as completed and create a new implementation ticket. In our ticket, for example, we can remove the “Testing” phase and create a sub-task of type “Test,” as well as another one called “Development.” Every time the test fails, we close testing and open a new development task.

Result

Following this strategy, our ticket would look like this:

Key: XYZ-1234. Status: Doing. Title: Nasal demons. Description: Calling free() on a previously deallocated pointer results in demons coming out of the nose. Links: XYZ-1235 Technical analysis; XYZ-2345 Remove text in Latin; XYZ-3456 Test Latin removal; XYZ-2345 Use function medium(); XYZ-3456 Test medium() function; XYZ-4444 Release plan

And the workflow would be much simpler:

Open ⇨ To do ⇨ Doing ⇨ Done

Naturally, this strategy is flexible. In our case, for example, we haven’t removed the “To do” phase yet. Restricting the number of states to five (including backlog and validation) is another possibility. The core idea is to limit the number of stages to a small value for all tickets.

Conclusions

In programming, it’s common to encounter the so-called “God objects,” huge objects responsible for various different functions. Breaking them down is a surefire way to achieve code quality. Therefore, I suspect the same principle can apply to tickets in Jira.

I’m not a project manager, but as a programmer, I believe that limiting the size and steps of tickets can be an effective idea. I’m curious to know if anyone has tried this and how it went.

No comments. Now what?

Traditionally, it is considered good practice to comment code. However, this wisdom has been revisited in recent times. At Liferay, for example, we follow a policy of not commenting code. Personally, I am an enthusiast of this philosophy. But I don’t want to present or defend this strategy here; there is a lot of good material on the subject. I want to discuss an open question.

Whoever writes a comment wants to convey some important information. What information is that? And, most importantly, where else can we record it? Let’s look at some alternatives.

What do these lines do?

Function names are excellent for explaining what the code does. If a block of code requires a comment, consider extracting it into a function or class. The name of the entity will already clarify its purpose.

Observe, for example, the lines below, taken from this test class:

Assert.assertNotNull(recurrence);
Assert.assertNull(recurrence.getUntilJCalendar());
Assert.assertEquals(0, recurrence.getCount());

These lines check if an event’s RRule has certain properties: it must exist, have a null “until” JCalendar, and a count of zero.

The concepts are complex; even I would be confused rereading these asserts. A comment could explain them. But this commit already clarified everything by moving these lines to a method and invoking it:

assertRepeatsForever(recurrence);

Those assertions were checking if the event repeats indefinitely! No comment was needed—fortunately, as these asserts were in various tests.
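
The extracted method itself is not shown here, but a minimal sketch of it could look like the lines below, assuming the Recurrence type and JUnit’s Assert from the snippet above:

private void assertRepeatsForever(Recurrence recurrence) {

    // A recurrence repeats forever when it exists but defines
    // neither an end date ("until") nor a repetition limit ("count").
    Assert.assertNotNull(recurrence);
    Assert.assertNull(recurrence.getUntilJCalendar());
    Assert.assertEquals(0, recurrence.getCount());
}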

What is happening here?

If a comment would explain something relevant at runtime, consider turning it into a log message! Check the example below:

if (Validator.isBlank(serviceAccountKey)) {
    // If no credentials are set for GCS Store, the library will
    // use Application Default Credentials.
    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

This comment may be relevant to someone reading the code. However, it would be crucial for someone investigating an authentication issue. Therefore, in practice, I chose to log a message:

if (Validator.isBlank(serviceAccountKey)) {
    if (_log.isInfoEnabled()) {
        _log.info(
            "No credentials set for GCS Store. Library will use " +
                "Application Default Credentials.");
    }

    _googleCredentials =
        ServiceAccountCredentials.getApplicationDefault();
}
else {
    _googleCredentials = ServiceAccountCredentials.fromStream(
        new ByteArrayInputStream(serviceAccountKey.getBytes()));
}

Why are they here?

Comments to explain why certain lines are there are also common. A better place to share this information is in commit messages.

The other day, for example, I was asked to help with some code I worked on years ago. Reading a JSP (remember, years ago), I found these lines:

<liferay-portlet:renderURL portletName="<%= KaleoDesignerPortletKeys.KALEO_DESIGNER %>" var="viewURL">
    <portlet:param name="mvcPath" value="/designer/view_kaleo_definition_version.jsp" />
    <portlet:param name="redirect" value="<%= currentURL %>" />
    <portlet:param name="name" value="<%= kaleoDefinitionVersion.getName() %>" />
    <portlet:param name="draftVersion" value="<%= kaleoDefinitionVersion.getVersion() %>" />
</liferay-portlet:renderURL>

This tag is generating a URL to be used elsewhere. But my trained eyes found the portletName parameter strange. This value is usually set automatically.

A git blame clarified everything when I found this commit. The message is clear:

LPS-74977 / LPS-73731 By making the render URL explicitly use the Kaleo Designer name, it will be valid when rendered in another portlet.

I get it! This code will probably be invoked by some other portlet. In this case, the value would be automatically set by the other application, and for some reason, we want to avoid that.

(By the way, that’s why I prefer small commits: they make it easier to discover the reason for very specific code snippets. It’s like every line of code has a comment! It’s not a unanimous position, though: some prefer larger commits.)

The purpose of the line was clarified. But why can it be invoked by another application? This is not usual…

Why was this change made?

Well-written code explains how something was implemented. The commit message explains the why, but in a local context. How do you explain the broader motivation behind code without resorting to comments?

Issue tracker tickets are excellent for this. Typically written to guide development, these documents are very helpful in interpreting the code. If we add the ticket key to the commit message, we can track the reasons.

Returning to the example above: we found that a line allows using the same code in multiple portlets. But this is rarely necessary. Why did we need to reuse the code in this case? Fortunately, the message mentions two tickets. I checked the older one and arrived at LPSA-64324:

[Information Architecture] EE — As a portal admin, I would like to access all workflow portlets from the control panel section under the same tab.

The title already helps, and the text clarifies it even more. For usability reasons, three different applications started appearing in tabs of the same portlet. It makes complete sense!

The comments we like

It’s important to highlight that we are trying to avoid disorganized comments that intertwine with the code and attempt to explain difficult-to-understand sections. There are various comments, often with standardized formats, that do not hinder readability. An obvious example is the copyright header.

Another effective way to use comments is through literate programming. In this programming style, comments take the spotlight: the source code contains more prose than executable code. This is useful when explaining the algorithm is more important than reading it, as in academic research and data analysis. Not surprisingly, it is the paradigm of popular tools like Jupyter Notebook and Quarto.

Even more relevant, tools like Javadoc, JSDoc, Doxygen, etc. read comments in a specific format to generate documentation. These comments do not affect readability. On the contrary, Javadocs are great for explaining how to use these entities. Combined with tools like my dear Doctest, we even get guarantees of accuracy and correctness!

A World of Possibilities

These are just a few examples of alternatives to comments. There are many other options, such as wikis and blogs. I’ve even found explanations for code I wrote myself on Stack Overflow! We can think of even more solutions to meet different needs. The key point is that with these tools at our disposal, adding comments directly to the code becomes unnecessary.

Naturally, not commenting is just one way to write readable code. Comments are not forbidden; in fact, there are strategies that can make them effective. However, in my experience, avoiding comments often leads to better results, and these techniques help document important information that doesn’t fit directly into the code.

Are you a follower of the “no comments” strategy? If so, where else do you convey information? If not, how do you ensure effective comments? What type of comment do you not see being replaced by these approaches? I’d love to hear your opinions.

(This post is a translation of “Sem Comentários. E Agora?”, from my blog Suspensão da Descrença.)

Importing ES 6 Modules from CommonJS

Here at Liferay, a few days ago, we needed to use the p-map package. There was only one problem: our application still uses the CommonJS format, and p-map is released as an ES6 module only. Even some of the best references I found (e.g. this post) made it clear that it would not be possible to import ES6 modules from CommonJS.

The good news is that this is no longer true! Using dynamic import, we can load ES6 modules from CommonJS. Let’s look at an example.

In this project, the importer.js file tries to use require() to import an ES6 module:

const pmap = require('p-map');

exports.importer = () => {
  console.log('Yes, I could import p-map:', pmap);
}

Of course, it doesn’t work:

$ node index.js 
internal/modules/cjs/loader.js:1102
      throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
      ^

Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/adam/software/es6commonjs/node_modules/p-map/index.js
require() of ES modules is not supported.
require() of /home/adam/software/es6commonjs/node_modules/p-map/index.js from /home/adam/software/es6commonjs/importer.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/adam/software/es6commonjs/node_modules/p-map/package.json.

    at new NodeError (internal/errors.js:322:7)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (internal/modules/cjs/helpers.js:101:18)
    at Object.<anonymous> (/home/adam/software/es6commonjs/importer.js:1:14)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32) {
  code: 'ERR_REQUIRE_ESM'
}

The solution is to convert require() into a dynamic import. But there is one detail: dynamic import() returns a Promise. There are many ways to deal with this; the simplest one is probably to make our function asynchronous, as in this version:

exports.importer = async () => {
  const pmap = await import('p-map');
  console.log('Yes, I could import p-map:', pmap);
}

Now our little app works!

$ node index.js 
ok
Yes, I could import p-map: [Module: null prototype] {
  AbortError: [class AbortError extends Error],
  default: [AsyncFunction: pMap],
  pMapSkip: Symbol(skip)
}
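
By the way, the “ok” line comes from the caller. Since importer() now returns a Promise, any synchronous code in index.js runs before the module finishes loading. A hypothetical index.js consistent with this output (the original file is not shown) would be:

const {importer} = require('./importer');

// importer() returns a Promise that we do not await here, so the
// synchronous log below runs before the dynamic import resolves.
importer();

console.log('ok');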

Some other adjustments may be necessary. (I had to adjust the eslint settings, for example.) The important thing is that this is possible. And it’s not a kludge: Node’s own documentation recommends this approach.

So, don’t be scared by outdated information: you won’t need to rewrite your entire application as ES 6 modules, at least for now. For us, this was quite a relief!

(This post is a translation of Importando Módulos ES6 em CommonJS, first published in Suspensão da Descrença.)

Billing the technical debt

I really like to work on technical debt issues only when they affect demands. But why? Well, here are some reasons…

Some time ago the man, the myth, the legend Fabrício Buzeto asked this interesting question:

Out of curiosity. Does your team keep a list of technical debt? Does it make you feel joy?

It brought me some memories back. I was responsible for the Liferay Calendar and Kaleo Designer portlets for a few years. These were complex single-page apps, built at a fast pace when the concept of SPAs was still evolving: many choices called for a review.

So I started writing JIRA tickets for technical debt. When one of those health issues made a bug fix or feature harder to implement, I’d convert that technical debt ticket into a sub-task of the demand. As I like to say, I was “billing the debt from the feature.”

I commented that and he asked me a crucial question:

Why not treat them like any other card in the backlog then?

Why, indeed?

Well, at first, we tried! I would present the debt issues in our prioritization meetings. Having the problems written down helped a lot to catch the managers’ attention, by the way.

Technical debt is a hard sell, though. People are understandably wary about buying into something whose value they cannot see. Meanwhile, changes took more and more time to deliver, and regression bugs kept popping up. We needed to fix these health problems.

That’s why I started to work on debt as part of value-adding tasks. Working on the debt to make a demand easier was great evidence that the extra work was worth it. It was not just some random idea we worked on to postpone duties: it delivered value.

That is the first reason for handling technical debt as sub-tasks of value issues: By binding the debt to a value-adding task, it is easier to justify the extra effort to stakeholders.

At first, this debt-billing was only a communication device. But there was a cool side effect: the most glaring issues kept being solved first. That makes sense: since we worked on them when they caused problems, the ones causing the most problems were solved first. Since prioritization is always a challenge (and prioritizing technical debt is even harder), it was a huge help.

We still had a pile of technical debt tasks, but many of the pending tasks were not relevant anymore. Some were already solved. Others were elegant ideas back then but didn’t make sense anymore. In hindsight, a good part of the “debt” was personal preferences, or assumptions that were no longer true after some product evolution.

This is the second reason for debt-billing: Working on health issues as part of demand is an effective way to prioritize which technical debt to work on.

See how great it is! Had we worked on the technical debt tasks by themselves (for example, in a task force), we might have applied changes that could actually make future evolution harder. Debt-billing let us confirm which requests were fit for our goals. And it has a subtler, more important consequence.

We developers are an opinionated lot, and this is good. We usually try to make these opinions into a goal. But it is hard to know if a goal is right. Once we use these ideas as helpers for something more clearly relevant, that goal turns into a tool. Tools are much easier to evaluate!

This is a third reason for debt-billing: when technical debt is linked to value delivery, the creative force from the team works together with the organization’s objectives.

Our experience is that this strategy was quite effective. Everybody knew their suggestions would be evaluated: health tasks wouldn’t be a chore to prioritize anymore, but a toolset that our colleagues would look for to help with their challenges. The debt backlog was not a wishing well anymore.

The apps got better, too. When I started working on the Calendar, for example, it was usually seen as an especially problematic portlet. The first release couldn’t even schedule events! When I left that team, the Calendar had no bug of priority 3 or higher (the levels we have to fix), and we had delivered quite a good number of features, even some missing in leading competitors. Not bad for a product that was once an example of a non-working feature!

It felt right to bill the technical debt from the demands, but I never thought deeply about why it felt right. So, thank you for asking that, Fabricio! It was a joy to think about it.

EDIT: I just recalled that Ron Jeffries wrote a great post about his approach to refactoring, which is similar to the one here, although it argues against a specific point. Totally worth reading!

The surprising mainframe longevity

(I wrote this post some years ago on The Practical Dev. I found it by chance and wondered: why not put it on the blog? So here it is!)

Days ago, I read a post that said something like this:

There’s a lot of mainframe developers that are currently out of a job because they refused to look ahead. […] now, many of them are scrambling to catch up on 30 years of technology.

Well, I never worked with mainframes myself, but that sounded dubious. I have had contact with mainframe developers, and they did not seem to be in low demand at all. The thing is, the dynamics of the mainframe environment are surprising to most of us newer developers.

Sectors such as government, banking, and telecommunications still have a large infrastructure based on these machines. Those systems are decades old and still work quite well today. Sunsetting them is really expensive and, in general, they do not cause problems. For this reason, many organizations have no plans to migrate them to other platforms. As a consequence, there is always someone hiring programmers for these platforms.

85% of our typical daily transactions such as ATM withdrawals and credit card payments still go through mainframe systems. (Source)

In fact, these positions tend to compensate well. There are few mainframe developers for a steady demand, and with many of them retiring, the demand may get even higher. Indeed, labor costs used to be one of the reasons to move off mainframes.

Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old. (Source)

Anyway, these platforms did not stagnate. IBM released a new machine just some time ago. Neither are they an exclusive choice: more often than not, these systems pair with newer technologies. My bank’s Android app, for example, consumes data that comes from mainframes through many gateways. Or see this amazing story of integrating some old systems with new tech.

Because a mainframe offers reliable performance and strict security, it is often the on-premise component of a hybrid cloud environment that processes and stores an organization’s most sensitive data. (Source)

What makes mainframes less common is, I believe, their price. Their cost has a good reason: A mainframe can be as powerful as a cloud data center — indeed, some are cloud data centers. However, most companies do not start with enough money, or even the need, for such power. For many of us, it is more cost-effective to start with inexpensive platforms and grow them into distributed systems.

Of course, there are concerns in this ecosystem. The best developers are retiring. Also, much of that code is hard to maintain, made before we even knew much about software engineering.

The mainframe boxes themselves are not aging. In fact they outcompete Microsoft and Linux on features like performance, scalability, security, and reliability. It’s not the machines but applications and programmers that are aging. (Source)

However, the most experienced ones agree: the solution is not merely to rewrite it all. Some communities bring new blood to this market. Given an organizational culture shift, it is also possible to bring agility and good quality to old applications. Indeed, refactoring these applications is necessary even if you want to move off the mainframes.

It sounds weird to us because we do not follow this career path. Yet, the mainframe market is very much alive.

The Evolution of Remote Careers

How can remote workers grow in their careers? Since remote work is a recent revolution, this is a challenging question. In general, white-collar employees tend to grow more by changing companies and, in my experience, this is even more common in remote environments. Nonetheless, it’s possible to grow in the same company as a remote worker, if the company has done its homework. Since I started a community about remote work (in Portuguese), I’ve met many of those remote-first companies that worked hard to develop their collaborators, and I decided to look for a bit of their knowledge.

Careers in a blue sky

I invited my old friend from UnB, Fabricio Buzeto, co-founder of bxblue (a growing fintech here from Brasília), for a (virtual) coffee on October 5, 2020. He told me how bxblue’s career plan works: “We don’t do anything different from in-person companies. We have periodic evaluations and a promotions calendar.”

In their case, the career plan has two parts: a compensation and roles plan, to ensure recognition for growing professionals, and a competencies plan, which helps them grow even more. Each department has its own evaluation criteria. For example, customer support has closing metrics, while engineering doesn’t.

Criteria should be clear, objective and, notably, collective. “Here at bx, the metrics are the entire team’s average,” Fabricio told me. “Our customer-facing department, today, is ten times more productive than the best individual attendant from the past.” This strategy, which focuses on the team and not the individual, makes it easy to find the real concerns behind the metrics. “If the attendants are not closing, which skill is missing to close more? The software can present inviable offers. Or maybe the attendant can be too slow to call, or doesn’t complete the call and doesn’t try different channels.”

Distributed planning for distributed careers

Intrigued by bxblue’s career plan, I decided to talk to other companies. Then I recalled my dear friend Karina Varela from Red Hat (you may remember her from her brilliant tips on working from home with family, in Portuguese). She told me how, being a child of the free software movements of the ’90s and 2000s, Red Hat has always been international, distributed, and remote-first. I scheduled another coffee with her and her leader, Glauce Santos, Latin America’s acquisition manager, for October 8, 2020. Then, I asked: how is the career plan at RH?

To my surprise, they don’t have one!

Glauce explained that the career development at Red Hat is more localized. “We don’t have a career plan, as in a Big Four. We have an open culture and individualized performance evaluation with the direct manager.” In this case, the accountable persons are the collaborators themselves. “The responsibility stays in the hands of the employee,” Glauce informs. For that, the manager’s support is fundamental, as Karina tells us: “The manager helps the collaborators to get where they want to be.”

While this is a very different approach from bxblue’s, there are similarities: criteria are defined by areas and teams. “The consultant is evaluated by customer’s satisfaction, maybe by worked hours. At support, one sees how many requests were attended and how many SLAs were met. Sales teams have targets,” Karina told me. Glauce complements: “Employees are evaluated for main responsibilities, goals, targets, and objectives. And there is a development plan for each one, developed together with the manager.”

Growing sideways

One of the most interesting points from the conversation was about something also encouraged here at Liferay: exchanging roles and teams. I, for one, changed teams many times. It happens both at bxblue and Red Hat.

“We are incentivized to change teams through internal selection processes,” Karina told me. The good side is that, when there are no vacancies or budget for promotion, the employees can develop themselves by expanding their horizons. Glauce complements: “At RH, there are always opportunities. Sometimes we don’t have the budget or the ‘next step,’ but we always have more responsibilities. There are horizontal, vertical or forked careers, it is possible to change areas of expertise, become a specialist, etc.”

Are sideways moves a solution for career growth? In my opinion, they can be a good complementary tool. Naturally, though, they do not replace promotions. Both collaborators and HR departments need to be aware that such moves do not substitute for growth. On the other hand, I believe they can help a lot. By changing teams or departments, I myself have solved problems I thought demanded a promotion. I still looked for an upgrade, but the change was a breath of fresh air.

Summing up

Today, maybe even more than at the time of the interviews, companies have to make an effort to keep their collaborators. With more and more companies adopting remote-first, the challenge is yet more significant. Well-defined career plans, such as bxblue’s, are a great benefit to keep professionals. They are not mandatory, though, as Red Hat’s distributed model has proved. Team and area changes are also helpful, although, personally, I believe it is necessary to pay attention to avoid stagnation.

What do you think? Please comment below!

(This post is a translation of A Evolução da Carreira Remota.)

The pleasures of language learning

(This is the script of a speech I gave to Liferay‘s Toastmasters club. Alas, I forgot to record it, as always. Yet, it may still be worth sharing. Let’s hope I remember to record my next speech!)

I have to say, it is always a pleasure to be here, not the least because our chapter is so cosmopolitan! It is one of the things I like the most in my career now: the opportunity to converse with such a diverse set of people and cultures.

I’m sorry if I sound provincial; it is because I am a bit. We are not global citizens here where I come from. Last year, my barber couldn’t believe I had daily meetings in English. Although I’m pretty comfortable returning to the neighborhood I grew up in, I would be bored to death if I were locked here. 

Fortunately, going to the university and getting a career in IT expanded my horizons. For one thing, I had to learn English, and what a marvelous achievement it was! It opened the doors of my comprehension in ways I couldn’t even imagine before. I was lucky my university had this course, Instrumental English, to teach us how to use the language fluently. (My previous experiences with language courses were disappointing, to be honest.) It took time and practice, but by consuming content in English, I got to a point where I felt quite comfortable, even if not flawless.

I know many of you here are native English speakers. You are lucky, my friends! I can only wonder how knowing such a universal language from an early age can give you a vaster view of the world. On the other hand, it may rob you of the very satisfying pastime of learning languages.

Many people are adamant that everybody should learn a foreign language to the point of fluency. While I agree it is a good idea, I wouldn’t be so bold as to affirm every person should do this, and I surely wouldn’t disregard monolingual people. Leaving aside the fact that you do you, learning English makes more sense and is likely more straightforward than learning most other languages, for a bunch of reasons.

First of all, English is almost automatically useful. We who learned English as a second language most likely learned it because we needed it for our studies or careers. I can assure you that, here where I live, fluency in English opens a lot of doors that are already open to native English speakers. It is easier to justify learning English as a second language, to yourself and to others.

Also, English is relatively simple. Not that simple, mind you: the pronunciation is frankly bewildering, as well as the nonsensical writing. That said, it is one of the most frugal grammars I have ever seen, maybe losing only to Mandarin. Who knows, someday I can be skilled enough to compare them properly.

On top of that, the enormous cultural influence of countries such as the United States and the United Kingdom paves the way for us non-native speakers. I don’t know about your cultural context, but here we use English words and expressions a lot! Also, there is so much quality material to consume and practice with all over the place. There is, of course, a lot of quality material in other languages as well. Yet, it may be harder to find for those still learning. English, on the other hand, has so much content that it is hard not to find something interesting.

Given all that, I have this theory that non-native English speakers have a leg up on the path to polyglotism. Studying languages becomes easier and easier the more languages you know. Since we have to learn English, we have to take the first, and most challenging, step after all!

All that said, I still emphatically recommend learning languages, even if you are a native English speaker. Speaking another language drastically expands your worldview, helps your career, and makes trips abroad much more fun. There are even some studies suggesting it can help prevent memory loss and other neurological ailments. Although, I confess, you may find yourself too often forgetting how to say this or that word in your native language, a phenomenon my bilingual friends can surely relate to.

And learning languages is fun, I can attest. After studying English, I tried to learn German for years, without much success but having a lot of fun. When I started working for Liferay Latin America, the company offered all employees a Spanish course with an excellent teacher, which I took with enthusiasm. I was so lucky, not only because I had this opportunity but also because Spanish is remarkably easy for Portuguese speakers. (Which is another disadvantage for English speakers: I don’t know any language as close to English as the Romance languages are to each other.) With moderate fluency in two languages, I got a taste for lingos. My old German books came out of the archives, and I am even taking a Chinese course right now. The point is, multilingualism becomes more accessible and fun with time.

So, what about you? Do you speak more than one tongue? Would you like to? If so, give it a try. It may look scary or exhausting at first, but it doesn’t need to be. Language learning, like beers and sports, can be disconcerting at first but exhilarating once you get the taste.

Don’t Interpret Me Wrong: Improvising Tests for an Interpreter

I’m in love with the Crafting Interpreters book. In it, Bob Nystrom teaches us how to write an interpreter by implementing a little programming language called Lox. It has been a long time since I had so much fun programming! Besides being well written, the book is funny and teaches way more than I expected. But I have a problem.

The snippets in the book are written in a way that we can copy and paste them. However, the book has challenges at the end of each chapter; these challenges have no source code, and sometimes they force us to change the interpreter a lot. I do every one of these exercises, and as a result my interpreter diverges too much from the source in the book. Consequently, I often break some part of my interpreter.

How could I solve that?

Unit tests would be brittle, since the code structure changes frequently. End-to-end tests seem more practical in this case. So, for each new feature of the language, I wrote a little program. For example, my interpreter should support closures, so I copied the Lox program below (the book’s counter example) into the file counter.lox:

fun makeCounter() {
  var i = 0;
  fun count() {
    i = i + 1;
    print i;
  }
  return count;
}

var counter = makeCounter();
counter(); // "1".
counter(); // "2".

This program should print the numbers 1 and 2 on different lines. So I put these values in a file called counter.lox.out. The program should not fail either, so I created an empty file called counter.lox.err. (In some cases, it is necessary to ensure the Lox program will fail; in those cases, the .lox.err file should have content.)
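
For the record, counter.lox.out contains just these two lines:

1
2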

Well, I wrote programs and output files for various examples; now I need to compare the programs’ results to the expected outputs. I decided to use the tool that helps me the most in urgent times: shell script. I wrote a Bash script with a for loop iterating over all the examples:

for l in examples/*.lox
do
done

For each example, I executed the Lox program, redirecting the outputs to temporary files:

out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

Now, we compare the real output with the expected output through diff. When it compares two files, diff returns 0 if there is no difference, 1 if there is a difference, or 2 in case of error. Since in Bash the if conditional treats 0 as true, we just check the negation of diff’s exit code.

If the program prints something in standard output that is different from what is in its .lox.out file, we have a failure:

if ! diff $l.out $out
then
FAIL=1
fi
done

We also compare the standard error with the contents of the .lox.err file:

if ! diff $l.out $out
then
FAIL=1
fi

if ! diff $l.err $err
then
FAIL=1
fi
done

Finally, I check whether there was any failure and report the result:

if ! diff $l.out $out
then
FAIL=1
fi

if ! diff $l.err $err
then
FAIL=1
fi

if [ "$FAIL" = "1" ]
then
echo "FAIL" $l
else
echo "PASS" $l
fi
done

Not all of my Lox programs can be checked, though. For example, there is a program that times loop executions; it is impossible to anticipate the values it will print. Because of that, I added the possibility of skipping some programs: we just need to create a file with the .lox.skip extension:

# skip examples marked with a .lox.skip file
if [ -f $l.skip ]
then
echo "SKIP" $l
continue
fi

# reset the failure flag for each example
FAIL=0
out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

if ! diff $l.out $out
then
FAIL=1
fi

if ! diff $l.err $err
then
FAIL=1
fi

if [ "$FAIL" = "1" ]
then
echo "FAIL" $l
else
echo "PASS" $l
fi
done

If, however, a Lox example has neither expected output files nor a .lox.skip file, then I have a problem, and the entire script fails with a check like the one at the top of the loop below:

# fail the whole run if an example has neither expected outputs nor a .lox.skip file
if [ ! -f $l.out ] && [ ! -f $l.err ] && [ ! -f $l.skip ]
then
echo "No expected output files for" $l
exit 1
fi

# skip examples marked with a .lox.skip file
if [ -f $l.skip ]
then
echo "SKIP" $l
continue
fi

# reset the failure flag for each example
FAIL=0
out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

if ! diff $l.out $out
then
FAIL=1
fi

if ! diff $l.err $err
then
FAIL=1
fi

if [ "$FAIL" = "1" ]
then
echo "FAIL" $l
else
echo "PASS" $l
fi
done

With that, my test script is done. Let us see how it behaves:

$ ./lcheck.sh
PASS examples/attr.lox
PASS examples/bacon.lox
PASS examples/badfun.lox
PASS examples/badret.lox
PASS examples/bagel.lox
PASS examples/bostoncream.lox
PASS examples/cake.lox
PASS examples/checkuse.lox
PASS examples/circle2.lox
PASS examples/circle.lox
1d0
< 3
1c1
<
---
> [line 1] Error at ',': Expect ')' after expression.
FAIL examples/comma.lox
PASS examples/counter.lox
PASS examples/devonshinecream.lox
PASS examples/eclair.lox
PASS examples/fibonacci2.lox
PASS examples/fibonacci.lox
PASS examples/func.lox
PASS examples/funexprstmt.lox
PASS examples/hello2.lox
PASS examples/hello3.lox
PASS examples/hello.lox
PASS examples/math.lox
PASS examples/notaclass.lox
PASS examples/noteveninaclass.lox
PASS examples/point.lox
PASS examples/retthis.lox
PASS examples/scope1.lox
PASS examples/scope.lox
PASS examples/supersuper.lox
PASS examples/thisout.lox
PASS examples/thrice.lox
SKIP examples/timeit.lox
PASS examples/twovars.lox
PASS examples/usethis.lox
PASS examples/varparam.lox

Oops, apparently I removed the support for the comma operator by accident. Good thing I wrote this script, right?

I hope this post was minimally interesting! Now, I am going to fix my comma operator and keep reading this wonderful book.

(This post is a translation of Não me Interprete Mal: Improvisando Testes para um Interpretador.)