The surprising mainframe longevity

(I wrote this post some years ago for The Practical Dev. I found it by chance and wondered: why not put it on the blog? So here it is!)

Days ago, I read a post that said something like this:

There’s a lot of mainframe developers that are currently out of a job because they refused to look ahead. […] now, many of them are scrambling to catch up on 30 years of technology.

Well, I have never worked with mainframes myself, but that sounded dubious. I have had contact with mainframe developers, and they did not seem to be in low demand at all. The thing is, the dynamics of the mainframe market are surprising to most of us newer developers.

Sectors such as government, banking and telecommunications still run large infrastructures on these machines. Those systems are decades old and still work quite well today. Sunsetting them is really expensive and, in general, they do not cause problems. For this reason, many organizations have no plans to migrate them to other platforms. As a consequence, there is always someone hiring programmers for these platforms.

85% of our typical daily transactions such as ATM withdrawals and credit card payments still go through mainframe systems. (Source)

These positions also tend to compensate well. There are few mainframe developers for a steady demand and, with many of them retiring, demand may grow even higher. In fact, labor costs used to be one of the reasons to move off mainframes.

Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old. (Source)

Anyway, these platforms did not stagnate: IBM released a new machine not long ago. Nor are they an exclusive choice: more often than not, these systems pair with newer technologies. My bank's Android app, for example, consumes data that comes from mainframes through many gateways. Or see this amazing story of integrating old systems with new tech.

Because a mainframe offers reliable performance and strict security, it is often the on-premise component of a hybrid cloud environment that processes and stores an organization’s most sensitive data. (Source)

What makes mainframes less common is, I believe, their price. Their cost has a good reason: A mainframe can be as powerful as a cloud data center — indeed, some are cloud data centers. However, most companies do not start with enough money, or even the need, for such power. For many of us, it is more cost-effective to start with inexpensive platforms and grow them into distributed systems.

Of course, there are concerns in this ecosystem. The best developers are retiring. Also, much of that code is hard to maintain, made before we even knew much about software engineering.

The mainframe boxes themselves are not aging. In fact they outcompete Microsoft and Linux on features like performance, scalability, security, and reliability. It’s not the machines but applications and programmers that are aging. (Source)

However, the most experienced ones agree: the solution is not merely to rewrite it all. Some communities bring new blood to this market. Given an organizational culture shift, it is also possible to bring agility and good quality to old applications. Indeed, refactoring these applications is necessary even if you want to move off the mainframes.

It sounds weird to us because we do not follow this career path. Yet the mainframe market is very much alive.

The Evolution of Remote Careers

How can remote workers grow in their careers? Since remote work is a recent revolution, this is a challenging question. In general, white-collar employees tend to grow by changing companies and, in my experience, this is even more common in remote environments. Nonetheless, it is possible to grow within the same company as a remote worker, if the company has done its homework. Since I started a community about remote work (in Portuguese), I have met many remote-first companies that work hard to develop their collaborators, and I decided to ask them for a bit of their knowledge.

Careers in a blue sky

I invited my old friend from UnB, Fabricio Buzeto, co-founder of bxblue (a growing fintech from Brasília), for a (virtual) coffee on October 5, 2020. He told me how bxblue’s career plan works: “We don’t do anything different from in-person companies. We have periodic evaluations and a promotions calendar.”

In their case, the career plan has two parts: a compensation and roles plan, to ensure recognition for growing professionals, and a competencies plan, which helps them grow even more. Each department has its own evaluation criteria. For example, customer support has closing metrics, while engineering doesn’t.

Criteria should be clear, objective and, notably, collective. “Here at bx, the metrics are the entire team’s average,” Fabricio told me. “Our customer-facing department, today, is ten times more productive than the best individual attendant from the past.” This strategy, which focuses on the team rather than the individual, makes it easy to find the real concerns behind the metrics. “If the attendants are not closing, which skill is missing? The software may be presenting unviable offers. Or the attendant may be too slow to call, or doesn’t complete the call and doesn’t try different channels.”

Distributed planning for distributed careers

Intrigued by bxblue’s career plan, I decided to talk to other companies. Then I recalled my dear friend Karina Varela from Red Hat—you may remember her from her brilliant tips on working from home with family (in Portuguese). She told me how, born of the free software movements of the 1990s and 2000s, Red Hat has always been international, distributed and remote-first. I scheduled another coffee with her and her manager, Glauce Santos, Latin America’s acquisition manager, for October 8, 2020. Then I asked: what is the career plan like at Red Hat?

To my surprise, they don’t have one!

Glauce explained that career development at Red Hat is more localized. “We don’t have a career plan, as in a Big Four. We have an open culture and individualized performance evaluation with the direct manager.” In this case, the collaborators themselves are the ones accountable. “The responsibility stays in the hands of the employee,” Glauce says. For that, the manager’s support is fundamental, as Karina tells us: “The manager helps the collaborators to get where they want to be.”

While this is a very different approach from bxblue’s, there are similarities: criteria are defined by areas and teams. “The consultant is evaluated by customer satisfaction, maybe by worked hours. At support, one sees how many requests were handled and how many SLAs were met. Sales teams have targets,” Karina told me. Glauce adds: “Employees are evaluated on main responsibilities, goals, targets, and objectives. And there is a development plan for each one, developed together with the manager.”

Growing sideways

One of the most interesting points from the conversation was about something also encouraged here at Liferay: exchanging roles and teams. I, for one, changed teams many times. It happens both at bxblue and Red Hat.

“We are incentivized to change teams through internal selection processes,” Karina told me. The good side is that, when there is no vacancy or budget for a promotion, employees can develop themselves by expanding their horizons. Glauce adds: “At RH, there are always opportunities. Sometimes we don’t have the budget or the ‘next step,’ but we always have more responsibilities. There are horizontal, vertical or forked careers; it is possible to change areas of expertise, become a specialist, etc.”

Are sideways moves a solution for career growth? In my opinion, they can be a good complementary tool. Naturally, though, they do not replace promotions. Both collaborators and HR departments need to be aware that such moves do not substitute for growth. On the other hand, I believe they can help a lot. By changing teams or departments, I myself have solved problems I thought demanded a promotion. I still looked for an upgrade, but the change was a breath of fresh air.

Summing up

Today, maybe even more than at the time of the interviews, companies have to make an effort to keep their collaborators. With more and more companies adopting remote-first policies, the challenge is even more significant. Well-defined career plans, such as bxblue’s, are a great benefit for retaining professionals. They are not mandatory, though, as Red Hat’s distributed model proves. Team and area changes also help, although, personally, I believe one must pay attention to avoid stagnation.

What do you think? Please comment below!

(This post is a translation of A Evolução da Carreira Remota.)

The pleasures of language learning

(This is the script of a speech I gave to Liferay‘s Toastmasters club. Alas, I forgot to record it, as always. Yet, it may still be worth sharing. Let’s hope I remember to record my next speech!)

I have to say, it is always a pleasure to be here, not the least because our chapter is so cosmopolitan! It is one of the things I like the most in my career now: the opportunity to converse with such a diverse set of people and cultures.

I’m sorry if I sound provincial; it is because I am a bit. We are not global citizens here where I come from. Last year, my barber couldn’t believe I had daily meetings in English. Although I’m pretty comfortable returning to the neighborhood I grew up in, I would be bored to death if I were locked here. 

Fortunately, going to the university and getting a career in IT expanded my horizons. For one thing, I had to learn English, and what a marvelous achievement it was! It opened the doors of my comprehension in ways I couldn’t even imagine before. I was lucky my university had this course, Instrumental English, to teach us how to use the language fluently. (My previous experiences with language courses were disappointing, to be honest.) It took time and practice, but by consuming content in English, I got to a point where I felt quite comfortable, even if not flawless.

I know many of you here are native English speakers. You are lucky, my friends! I can only wonder how knowing such a universal language from an early age can give you a vaster view of the world. On the other hand, it may rob you of the very satisfying pastime of learning languages.

Many people are adamant that everybody should learn a foreign language to the point of fluency. While I agree it is a good idea, I wouldn’t be so bold as to affirm every person should do this, and I surely wouldn’t disregard monolingual people. Leaving aside the fact that you do you, learning English makes more sense, and is likely more straightforward, than learning most other languages, for a bunch of reasons.

First of all, English is almost automatically useful. Those of us who learned English as a second language most likely learned it because we needed it for our studies or careers. I can assure you that, here where I live, fluency in English opens a lot of doors that are already open to native English speakers. It is easier to justify learning English as a second language, to yourself and to others.

Also, English is relatively simple. Not that simple, mind you: the pronunciation is frankly bewildering, as well as the nonsensical writing. That said, it is one of the most frugal grammars I have ever seen, maybe losing only to Mandarin. Who knows, someday I can be skilled enough to compare them properly.

On top of that, the enormous cultural influence of countries such as the United States and the United Kingdom paves the way for us non-native speakers. I don’t know about your cultural context, but here we use English words and expressions a lot! Also, there is so much quality material to consume and practice with all over the place. There is, of course, lots of quality material in other languages as well; yet it may be harder to find for those still learning. English, on the other hand, has so much content that it is hard not to find something interesting.

Given all that, I have this theory that non-native English speakers have a leg up on the path to polyglotism. Studying languages becomes easier the more languages you know, and since we have to learn English, we have already taken the first, and most challenging, step!

All that said, I still emphatically recommend learning languages, even if you are a native English speaker. Speaking another language expands your worldview drastically, helps your career, and makes trips abroad much more fun. There are even some studies suggesting it can help prevent memory loss and other neurological ailments. Although, I confess, you may find yourself too often forgetting how to say this or that word in your native language, a phenomenon my bilingual friends can surely relate to.

And learning languages is fun, I can attest. After studying English, I tried to learn German for years, without much success but having a lot of fun. When I started working at Liferay Latin America, the company offered all employees a Spanish course with an excellent teacher, which I took up with enthusiasm. I was lucky, not only because I had this opportunity but also because Spanish is remarkably easy for Portuguese speakers. (Which is another disadvantage for English speakers: I don’t know any language as close to English as the Romance languages are to each other.) With moderate fluency in two languages, I got a taste for lingos. My old German books came out of the archives, and I am even taking a Chinese course right now. The point is, multilingualism becomes more accessible and fun with time.

So, what about you? Do you speak more than one tongue? Would you like to? If so, give it a try. It may look scary or exhausting at first, but it doesn’t need to be. Language learning, like beers and sports, can be disconcerting at first but exhilarating once you get the taste.

Don’t Interpret Me Wrong: Improvising Tests for an Interpreter

I’m in love with the Crafting Interpreters book. In it, Bob Nystrom teaches us how to write an interpreter by implementing a little programming language called Lox. It had been a long time since I had so much fun programming! Besides being well written, the book is funny and teaches way more than I expected. But I have a problem.

The snippets in the book are written so we can copy and paste them. However, the book has challenges at the end of each chapter; these challenges have no source code, and sometimes they force us to change the interpreter a lot. I do every one of these exercises, and as a result my interpreter diverges too much from the source in the book. Consequently, I often break some part of my interpreter.

How to solve that?

Unit tests would be brittle, since the code structure changes frequently. End-to-end tests seem more practical in this case. So, for each new feature of the language, I wrote a little program. For example, my interpreter should create closures; to ensure that, I copied the Lox program below into the file counter.lox:

fun makeCounter() {
  var i = 0;
  fun count() {
    i = i + 1;
    print i;
  }
  return count;
}

var counter = makeCounter();
counter(); // “1”.
counter(); // “2”.

This program should print the numbers 1 and 2 on separate lines, so I put these values in a file called counter.lox.out. The program must not fail either, so I created an empty file called counter.lox.err. (In some cases, it is necessary to ensure the Lox program does fail; in those cases, the .lox.err file should have content.)
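Creating this pair of expectation files takes only two commands (a sketch; the expected values come from the comments in counter.lox above):

```shell
# Expected standard output: one printed value per line.
printf '1\n2\n' > counter.lox.out

# Expected standard error: empty, because the program must not fail.
: > counter.lox.err

cat counter.lox.out
```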

Well, I wrote programs and output files for various examples; now I need to compare the programs’ results to the expected outputs. I decided to use the tool that helps me the most in urgent times: shell script. I wrote a Bash script with a for loop iterating over all the examples:

for l in examples/*.lox
do
    # each example is checked here
done

For each example, I executed the Lox program, redirecting the outputs to temporary files:

out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

Now, we compare the actual output with the expected output through diff. When comparing two files, diff returns 0 if there is no difference, 1 if there is a difference, and 2 in case of error. Since Bash’s if treats exit status 0 as true, we just negate diff’s exit code.
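This behavior is easy to check in isolation (a toy demonstration, separate from the test script):

```shell
# diff exits with 0 for identical files, so `! diff` inverts that
# into the shell's notion of failure.
printf 'hello\n' > expected.txt
printf 'hello\n' > actual.txt

if ! diff expected.txt actual.txt
then
    echo "outputs differ"
else
    echo "outputs match"    # this branch runs: the files are identical
fi
```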

If the program prints something in standard output that is different from what is in its .lox.out file, we have a failure:

if ! diff $l.out $out
then
    FAIL=1
fi
done

We also check the standard error against the .lox.err file:

if ! diff $l.out $out
then
    FAIL=1
fi

if ! diff $l.err $err
then
    FAIL=1
fi
done

Finally, I check if there was some failure and report the result:

FAIL=0

if ! diff $l.out $out
then
    FAIL=1
fi

if ! diff $l.err $err
then
    FAIL=1
fi

if [ "$FAIL" = "1" ]
then
    echo "FAIL" $l
else
    echo "PASS" $l
fi
done

Not all of my Lox programs can be checked, though. For example, one program times loop executions, so it is impossible to anticipate the values it will print. Because of that, I added a way to skip some programs: we just need to create a file with the .lox.skip extension:

if [ -e $l.skip ]
then
    # a .lox.skip file marks examples whose output we cannot predict
    echo "SKIP" $l
    continue
fi

FAIL=0
out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

if ! diff $l.out $out
then
    FAIL=1
fi

if ! diff $l.err $err
then
    FAIL=1
fi

if [ "$FAIL" = "1" ]
then
    echo "FAIL" $l
else
    echo "PASS" $l
fi
done

If, however, a Lox example has neither expected output files nor a .lox.skip file, then I have a problem, and the entire script fails:

if [ -e $l.skip ]
then
    echo "SKIP" $l
    continue
fi

# a missing expectation file is a problem: abort the whole run
if [ ! -e $l.out ] || [ ! -e $l.err ]
then
    echo "Missing expected output files for" $l
    exit 1
fi

FAIL=0
out=$(mktemp)
err=$(mktemp)
java -classpath target/classes/ br.com.brandizzi.adam.myjlox.Lox $l > $out 2> $err

if ! diff $l.out $out
then
    FAIL=1
fi

if ! diff $l.err $err
then
    FAIL=1
fi

if [ "$FAIL" = "1" ]
then
    echo "FAIL" $l
else
    echo "PASS" $l
fi
done

With that, my test script is done. Let us see how it behaves:

$ ./lcheck.sh
PASS examples/attr.lox
PASS examples/bacon.lox
PASS examples/badfun.lox
PASS examples/badret.lox
PASS examples/bagel.lox
PASS examples/bostoncream.lox
PASS examples/cake.lox
PASS examples/checkuse.lox
PASS examples/circle2.lox
PASS examples/circle.lox
1d0
< 3
1c1
<
---
> [line 1] Error at ',': Expect ')' after expression.
FAIL examples/comma.lox
PASS examples/counter.lox
PASS examples/devonshinecream.lox
PASS examples/eclair.lox
PASS examples/fibonacci2.lox
PASS examples/fibonacci.lox
PASS examples/func.lox
PASS examples/funexprstmt.lox
PASS examples/hello2.lox
PASS examples/hello3.lox
PASS examples/hello.lox
PASS examples/math.lox
PASS examples/notaclass.lox
PASS examples/noteveninaclass.lox
PASS examples/point.lox
PASS examples/retthis.lox
PASS examples/scope1.lox
PASS examples/scope.lox
PASS examples/supersuper.lox
PASS examples/thisout.lox
PASS examples/thrice.lox
SKIP examples/timeit.lox
PASS examples/twovars.lox
PASS examples/usethis.lox
PASS examples/varparam.lox

Oops, apparently I removed support for the comma operator by accident. Good thing I wrote this script, right?

I hope this post was minimally interesting! Now, I am going to repair my comma operator and keep reading this wonderful book.

(This post is a translation of Não me Interprete Mal: Improvisando Testes para um Interpretador.)

Exchanging World Cup’s sticker figures with the terminal

One of my hobbies during the recent World Cup was collecting stickers. Actually, I built the sticker album because my son wanted it, but I had fun too, I guess.

2018 sticker album showing France team missing three pictures.
Sadly, not completed yet

An important part of collecting stickers is exchanging the repeated ones. Through messages in WhatsApp groups, we report which repeated stickers we have and which ones we still need. As a programmer, I refused to compare the lists myself, so I wrote a little program in Python (with doctests and all) to find the intersections.

The missing laptop

Last week, a person came to my home to exchange stickers. I had the lists of repeated and needed stickers, both mine and hers, but my script was on another laptop. I did not even know where that machine was, and my guest was in a hurry.

There was no time to find the computer, or to rewrite the program. Or even to compare the lists manually.

It’s Unix time!

The list format

In general, the lists had this format:

15, 18, 26, 31, 40, 45 (2), 49, 51, 110, 115, 128, 131 (2), 143, 151, 161, 162, 183 (2), 216 (2), 221, 223, 253, 267 (3), 269, 280, 287, 296, 313, 325, 329, 333 (2), 353 (3), 355, 357, 359, 362, 365, 366, 371, 373, 384, 399, 400, 421 (2), 445, 457, 469, 470, 498 (2), 526, 536, 553, 560, 568, 570, 585, 591 (2), 604 (2), 639 (2), 660.

Basically, I needed to remove everything that was not a digit, along with the counters in parentheses, and then compare both lists. Easy, indeed.

Pre-processing with sed

First, I had to remove the counters between parentheses:

$ cat list.txt | sed 's/([^)]*)//g'
15, 18, 26, 31, [...] 591 , 604 , 639 , 660.

(I know, UUOC. Whatever.)

Then, I put each number in its own line:

$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g'

Next, I clean up every line, removing any character that is not a digit:

$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g' | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/g'

(In practice, I only call sed once, passing both expressions; here, I believe it is clearer to invoke sed multiple times.)

Finally, I sort the values:

$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g' | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/g' | sort -n > mine-needed.txt

I do it with the list of needed stickers, and also with the list of repeated stickers, getting two files.

Finding intersections with grep

Now, I need to compare them. There are many options; I chose to use grep.

In this case, I called grep with one of the files as input and the other as a list of patterns to match, through the -f option. Also, only complete line matches matter here, so we use the -x flag. Finally, I asked grep to compare strings directly (instead of treating them as regular expressions) with the -F flag.

$ grep -Fxf mine-needed.txt theirs-repeated.txt
253
269
333
470
639

Done! In a minute, I already know which stickers I want. I just need to do the same with my repeated ones.
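The reverse direction works the same way. The file names below (mine-repeated.txt and theirs-needed.txt) are hypothetical, produced by the same sed and sort pipeline shown above:

```shell
# Which of my repeated stickers does the other collector still need?
# (File names are illustrative; generate them with the sed | sort pipeline.)
grep -Fxf theirs-needed.txt mine-repeated.txt
```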

Why is this interesting?

These one-liners are not really a big deal to me today. The interesting thing is that, when I started using the terminal, they would have been incredible. Really, look how many pipes we used to pre-process the files! And this grep trick? I used to suffer to merely create a regex that worked! Actually, until solving this problem, I did not even know the -x option.

I once helped a friend process a good number of files. He had already spent more than two hours trying to do it in Java, and we solved it together in ten minutes with shell script. He then told me how much he wanted to know shell script and asked me how to learn it.

Well, little examples (like this one), as simple as they seem, taught me a lot. This is how I learned to script: trying to solve problems, knowing new commands and options in small batches. In the end, this is a valuable skill.

So, I hope this little toying enriches your day, too. It certainly enriched mine: better that than spending three times as long hunting down my Python script!

This post is a translation of Trocando figurinhas sobre o terminal.

Give Doctest a chance

Doctest is one of my favorite Python modules. With doctest, it is possible to execute code snippets from documentation. You could, for example, write something like this in your tutorial.md:

>>> f()
1

…and then execute the command python -mdoctest tutorial.md. If f() returns 1, nothing will happen. If it returns something else, though, an error message will appear, similar to this one:

**********************************************************************
File "f.txt", line 2, in f.txt
Failed example:
    f()
Expected:
    1
Got:
    2
**********************************************************************
1 items had failures:
   1 of   2 in tutorial.md
***Test Failed*** 1 failures.

It is an impressive tool, but an unpopular one. The problem is, Doctest is often used improperly. For example, it is common to try to write unit tests with doctests. A great mistake.

Nonetheless, I believe it is unfair to disregard the module due to these misunderstandings. Doctest can and should be used for what it does best: to keep your documentation alive, and even to guide your development!

Let me show an example.

When you don’t know what to do

Some days ago, I was writing a class to modify an HTML document using xml.dom.minidom. At one point, I needed a function to map CSS classes to nodes of the document. That alone would be a complicated function! I had no idea where to start.

In theory, unit tests could be useful here. They just would not be very practical: this was an internal, private function, an implementation detail. To test it, I would have to expose it. We would also need a new file just for the tests. And test cases are not that readable anyway.

Reading the documentation from the future

Instead, I documented the function first. I wrote a little paragraph describing what it would do. It alone was enough to clarify my ideas a bit:

Given an xml.dom.minidom.Node, returns a map
from every “class” attribute to a list of nodes
with this class.

Then, I thought about how to write the same thing but with a code example. In my head, this function (which I called get_css_class_dict()) would receive an xml.dom.minidom document. So, I wrote an example:

    >>> doc = xml.dom.minidom.parseString(
    ...     '''
    ...     <div class="a b">
    ...         <span class="a">
    ...         </span>
    ...     </div>''')

Given this snippet, I would expect the function to return a dict. My document has two CSS classes, “a” and “b,” so my dict would have two keys. Each key would map to a list of the nodes with that CSS class. Something like this:

    >>> d = get_css_class_dict(doc)
    >>> d['a']  # doctest: +ELLIPSIS
    [<DOM Element: div at ...>, <DOM Element: span at ...>]
    >>> d['b']  # doctest: +ELLIPSIS
    [<DOM Element: span at ...>]

I put these sketches in the docstring of get_css_class_dict(). So far, we have this function:

def get_css_class_dict(node):
    """
    Given an xml.dom.minidom.Node, returns a map from every "class" attribute
    from it to a list of nodes with this class.

    For example, for the document below:

    >>> doc = xml.dom.minidom.parseString(
    ...     '''
    ...     <div class="a b">
    ...         <span class="a">
    ...         </span>
    ...     </div>''')

    ...we will get this:

    >>> d = get_css_class_dict(doc)
    >>> d['a']  # doctest: +ELLIPSIS
    [<DOM Element: div at ...>, <DOM Element: span at ...>]
    >>> d['b']  # doctest: +ELLIPSIS
    [<DOM Element: span at ...>]
    """
    pass

I could do something similar with unit tests but there would be much more code around, polluting the documentation. Besides that, the prose graciously complements the code, giving rhythm to the reading.

I execute the doctests and this is the result:

**********************************************************************
File "vtodo/listing/filler.py", line 75, in filler.get_css_class_dict
Failed example:
    d['a']
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/doctest.py", line 1330, in __run
        compileflags, 1), test.globs)
      File "", line 1, in 
        d['a']
    TypeError: 'NoneType' object is not subscriptable
**********************************************************************
File "vtodo/listing/filler.py", line 77, in filler.get_css_class_dict
Failed example:
    d['b']
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/doctest.py", line 1330, in __run
        compileflags, 1), test.globs)
      File "<https://suspensao.blog.br/disbelief/wp-admin/edit-tags.php?taxonomy=category;doctest filler.get_css_class_dict[3]>", line 1, in 
        d['b']
    TypeError: 'NoneType' object is not subscriptable
**********************************************************************
1 items had failures:
   2 of   4 in filler.get_css_class_dict
***Test Failed*** 2 failures.

I’m following test-driven development, basically, but with executable documentation. At once, I got a readable example and a basic test.

Now, we just need to implement the function! I used some recursion and, if the code is not the most succinct ever at first…

def get_css_class_dict(node):
    """
    Given an xml.dom.minidom.Node, returns a map from every "class" attribute
    from it to a list of nodes with this class.

    For example, for the document below:

    >>> doc = xml.dom.minidom.parseString(
    ...     '''
    ...     <div class="a b">
    ...         <span class="a">
    ...         </span>
    ...     </div>''')

    ...we will get this:

    >>> d = get_css_class_dict(doc)
    >>> d['a']  # doctest: +ELLIPSIS
    [<DOM Element: div at ...>, <DOM Element: span at ...>]
    >>> d['b']  # doctest: +ELLIPSIS
    [<DOM Element: span at ...>]
    """
    css_class_dict = {}

    if node.attributes is not None and 'class' in node.attributes:
        css_classes = node.attributes['class'].value
        for css_class in css_classes.split():
            css_class_list = css_class_dict.get(css_class, [])
            css_class_list.append(node)
            css_class_dict[css_class] = css_class_list

    childNodes = getattr(node, 'childNodes', [])
    for cn in childNodes:
        ccd = get_css_class_dict(cn)
        for css_class, nodes_list in ccd.items():
            css_class_list = css_class_dict.get(css_class, [])
            css_class_list.extend(nodes_list)
            css_class_dict[css_class] = css_class_list

    return css_class_dict

…at least it works as expected:

$ python -mdoctest vtodo/listing/filler.py 
**********************************************************************
File "vtodo/listing/filler.py", line 77, in filler.get_css_class_dict
Failed example:
    d['b']  # doctest: +ELLIPSIS
Expected:
    [<DOM Element: span at ...>]
Got:
    [<DOM Element: div at ...>]
**********************************************************************
1 items had failures:
   1 of   4 in filler.get_css_class_dict
***Test Failed*** 1 failures.

Wait a minute. What was that?!

When the documentation is wrong

Well, there is a mistake in my doctest! The span element does not have the “b” class—the div element does. So, I just need to change the line

[<DOM Element: span at ...>]

to

[<DOM Element: div at ...>]

and the Doctest will pass.

Isn’t it wonderful? I found a slip in my documentation almost immediately. More than that: if my function’s behavior changes someday, the example from my docstring will fail. I’ll know exactly where the documentation will need updates.

Making doctests worth it

That is the rationale behind Doctest. Our documentation had a subtle mistake, and we found it by executing it. Doctests do not guarantee the correctness of the code; they reinforce the correctness of the documentation. It is a well-known aspect of the package, but few people seem to believe it is worth it.

I think it is! Documentation is often deemed unpleasant work, but it does not have to be. Just as TDD makes tests exciting, doctests can make documentation fun.

Besides that, in the same way TDD can point to design limitations, a hard time writing doctests can point to API problems. If it is hard to write a clear and concise usage example for your API, surrounded by explanatory text, the API is likely too complicated, right?

Give Doctest a chance

In the end, I do see doctest’s limitations. It is surely inadequate for unit tests, for example. And yet, doctest makes documenting so easy and fun! I don’t see why it is so unpopular.

Nonetheless, its greatest advantage is how doctest makes the development process easier. Some time ago, I joked that we needed to create “DocDD.”

With Doctest, it is not just a joke anymore.

This post is a translation of Dê uma chance a Doctest.