The other day, we got an unexpected package. What came inside was even more surprising! What was going on?
Well, it turns out that, a few months ago, I completed ten years working for Liferay! This is not only a remarkable tenure, but also one that brought me a lot of growth. I lived in two cities, traveled to a few more around the world, learned to work remotely with very diverse teams, worked with numerous stacks, and watched the LATAM branch grow from a dozen people to hundreds.
These days, it’s unusual to stay in the same place for that long, especially in a tech career. But Liferay is indeed a nice place to work, and there are always new things to learn and new challenges, both technical and in teamwork and customer service. I surely grew a lot and, it seems, I have room here to evolve even further!
So, thank you, people, for the gift but, more importantly, thank you for the great times, the growth, and the challenges. And brace yourselves, as I plan to be a delightful “nuisance” among you all for many more fruitful years to come!
Here at Liferay, a few days ago, we needed to use the p-map package. There was only one problem: our application still uses the CommonJS format, and p-map is published as an ES6 module only. Even some of the best references I found (e.g. this post) made it clear that it would not be possible to import ES6 modules from CommonJS.
The good news is that this is no longer true! Using dynamic import, we can load ES6 modules from CommonJS. Let’s look at an example.
In this project, the importer.js file tries to use require() to import an ES6 module:
const pmap = require('p-map');

exports.importer = () => {
    console.log('Yes, I could import p-map:', pmap);
}
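Meanwhile, index.js is just a minimal driver. Mine was something along these lines (a sketch; note the “ok” it prints, which will show up in the output later):

const { importer } = require('./importer');

// call the function, then print a marker when the synchronous part is done
importer();
console.log('ok');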
Of course, it doesn’t work:
$ node index.js
internal/modules/cjs/loader.js:1102
throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
^
Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/adam/software/es6commonjs/node_modules/p-map/index.js
require() of ES modules is not supported.
require() of /home/adam/software/es6commonjs/node_modules/p-map/index.js from /home/adam/software/es6commonjs/importer.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/adam/software/es6commonjs/node_modules/p-map/package.json.
at new NodeError (internal/errors.js:322:7)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:12)
at Module.require (internal/modules/cjs/loader.js:974:19)
at require (internal/modules/cjs/helpers.js:101:18)
at Object.<anonymous> (/home/adam/software/es6commonjs/importer.js:1:14)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
at Module.load (internal/modules/cjs/loader.js:950:32) {
code: 'ERR_REQUIRE_ESM'
}
The solution is to convert require() into a dynamic import. But there is one detail: dynamic imports return Promises. There are many ways to deal with this; the simplest one is probably to make our function asynchronous, as in this version:
exports.importer = async () => {
    const pmap = await import('p-map');
    console.log('Yes, I could import p-map:', pmap);
}
Now our little app works!
$ node index.js
ok
Yes, I could import p-map: [Module: null prototype] {
  AbortError: [class AbortError extends Error],
  default: [AsyncFunction: pMap],
  pMapSkip: Symbol(skip)
}
So, don’t be scared by outdated information: you won’t need to rewrite your entire application as ES6 modules, at least for now. For us, this was quite a relief!
It brought back some memories. For a few years, I was responsible for the Liferay Calendar and Kaleo Designer portlets. These were complex single-page apps, built at a fast pace when the concept of SPAs was still evolving, so many choices called for a review.
So I started writing JIRA tickets for technical debt. When one of those health issues made a bug fix or feature harder to implement, I’d convert that technical debt ticket into a sub-task of the demand. As I like to say, I was “billing the debt from the feature.”
Well, at first, we tried prioritizing them on their own! I would present the debt issues in our prioritization meetings. Having the problems written down helped a lot to catch the managers’ attention, by the way.
Technical debt is a hard sell, though. People are understandably wary about buying into something whose value they cannot see. Nonetheless, changes took increasingly more time to deliver, and regression bugs kept popping up. We needed to fix these health problems.
That’s why I started to work on debt as part of value-adding tasks. Working on the debt to make a demand easier was great evidence that the extra work was worth it. It was not just some random idea we worked on to postpone duties: it delivered value.
That is the first reason for handling technical debt as sub-tasks of value issues: By binding the debt to a value-adding task, it is easier to justify the extra effort to stakeholders.
At first, this debt-billing was only a communication device. But there was a cool side effect: the most glaring issues kept being solved first. That makes sense: since we worked on them when they caused problems, the ones causing the most problems were solved first. Since prioritization is always a challenge (and prioritizing technical debt is even harder), it was a huge help.
We still had a pile of technical debt tasks, but many of the pending ones were not relevant. Some were already solved. Others were elegant ideas back then, but didn’t make sense anymore. In hindsight, a good part of the “debt” was personal preferences, or assumptions that were no longer true after some product evolution.
This is the second reason for debt-billing: Working on health issues as part of demand is an effective way to prioritize which technical debt to work on.
See how great it is! Had we worked on technical debt by itself (for example, in a task force), we might have applied changes that would actually make future evolution harder. Debt-billing let us confirm which requests fit our goals. And it has a subtler, more important consequence.
We developers are an opinionated lot, and this is good. We usually try to turn these opinions into goals. But it is hard to know if a goal is right. Once we use these ideas as helpers for something more clearly relevant, the goal turns into a tool. Tools are much easier to evaluate!
This is a third reason for debt-billing: when technical debt is linked to value delivery, the creative force from the team works together with the organization’s objectives.
Our experience is that this strategy was quite effective. Everybody knew their suggestions would be evaluated: health tasks wouldn’t be a chore to prioritize anymore, but a toolset that our colleagues would look for to help with their challenges. The debt backlog was not a wishing well anymore.
The apps got better, too. When I started working on the Calendar, for example, it was usually seen as an especially problematic portlet. The first release couldn’t even schedule events! When I left that team, the Calendar had no bug of priority 3 or higher (the levels we have to fix). And we delivered quite a good number of features, even some missing in leading competitors. Not bad for a product that used to be an example of a non-working feature!
It felt right to bill the technical debt from the demands, but I never thought deeply about why it felt right. So, thank you for asking that, Fabricio! It was a joy to think about it.
EDIT: I just recalled that Ron Jeffries wrote a great post about his approach to refactoring, which is similar to the one described here, although he argues against a specific point of it. Totally worth reading!
(I wrote this post some years ago on The Practical Dev. I found it by chance and wondered: why not put it on the blog? So here it is!)
Days ago, I read a post that said something like this:
There’s a lot of mainframe developers that are currently out of a job because they refused to look ahead. […] now, many of them are scrambling to catch up on 30 years of technology.
Well, I never worked with mainframes myself, but that sounded dubious. I have had contact with mainframe developers, and they did not seem to be in low demand at all. The thing is, the dynamics of the mainframe environment are surprising to most of us newer developers.
Sectors such as government, banking, and telecommunications still have a large infrastructure based on these machines. Those systems are decades old and still work quite well today. Sunsetting them is really expensive and, in general, they do not cause problems. For this reason, many organizations have no plans to migrate them to other platforms. As a consequence, there is always someone hiring programmers for these platforms.
85% of our typical daily transactions such as ATM withdrawals and credit card payments still go through mainframe systems. (Source)
In fact, these positions tend to pay well. There are few mainframe developers for a steady demand, and with many of them retiring, the demand may get even higher. Indeed, labor costs used to be one of the reasons to move away from mainframes.
Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old. (Source)
Anyway, these platforms have not stagnated: IBM released a new machine not long ago. Nor are they an exclusive choice: more often than not, these systems pair with newer technologies. My bank’s Android app, for example, consumes data that comes from mainframes through many gateways. Or see this amazing story about integrating some old systems with new tech.
Because a mainframe offers reliable performance and strict security, it is often the on-premise component of a hybrid cloud environment that processes and stores an organization’s most sensitive data. (Source)
What makes mainframes less common is, I believe, their price. Their cost has a good reason: a mainframe can be as powerful as a cloud data center; indeed, some are cloud data centers. However, most companies do not start with enough money, or even the need, for such power. For many of us, it is more cost-effective to start with inexpensive platforms and grow them into distributed systems.
The mainframe boxes themselves are not aging. In fact they outcompete Microsoft and Linux on features like performance, scalability, security, and reliability. It’s not the machines but applications and programmers that are aging. (Source)
How can remote workers grow in their careers? Since remote work is a recent revolution, it is a challenging question. In general, white-collar employees tend to grow more by changing companies and, in my experience, this is even more common in remote environments. Nonetheless, it’s possible to grow in the same company as a remote worker, if the company did its homework. Since I started a community about remote work (in Portuguese), I’ve met many remote-first companies that worked hard to develop their collaborators, so I decided to tap into a bit of their knowledge.
Careers in a blue sky
I invited my old friend from UnB, Fabricio Buzeto, co-founder of bxblue (a growing fintech here in Brasília), for a (virtual) coffee on October 5, 2020. He told me how bxblue’s career plan works: “We don’t do anything different from in-person companies. We have periodic evaluations and a promotions calendar.”
In their case, the career plan has two parts: a compensation and roles plan, to ensure recognition for growing professionals, and a competencies plan, which helps them grow even more. Each department has its own evaluation criteria. For example, customer support has closing metrics, while engineering doesn’t.
Criteria should be clear, objective and, notably, collective. “Here at bx, the metrics are the entire team’s average,” Fabricio told me. “Our customer-facing department, today, is ten times more productive than the best individual attendant from the past.” This strategy, which focuses on the team and not the individual, makes it easier to find the real concerns behind the metrics. “If the attendants are not closing, which skill is missing to close more? The software can present unviable offers. Or maybe the attendant is too slow to call, or doesn’t complete the call and doesn’t try different channels.”
Distributed planning for distributed careers
Intrigued by bxblue’s career plan, I decided to talk to other companies. Then I recalled my dear friend Karina Varela from Red Hat (you may remember her from her brilliant tips on working from home with family, in Portuguese). She told me how, being a child of the ’90s and 2000s free software movements, Red Hat has always been international, distributed, and remote-first. I scheduled another coffee with her and her leader, Glauce Santos, Latin America’s acquisition manager, for October 8, 2020. Then, I asked: how does the career plan at RH work?
To my surprise, they donât have one!
Glauce explained that career development at Red Hat is more localized. “We don’t have a career plan, as in a Big Four. We have an open culture and individualized performance evaluation with the direct manager.” In this case, the accountable people are the collaborators themselves. “The responsibility stays in the hands of the employee,” Glauce says. For that, the manager’s support is fundamental, as Karina told us: “The manager helps the collaborators get where they want to be.”
While this is a very different approach from bxblue’s, there are similarities: criteria are defined by areas and teams. “The consultant is evaluated by customer satisfaction, maybe by worked hours. In support, one sees how many requests were attended and how many SLAs were met. Sales teams have targets,” Karina told me. Glauce adds: “Employees are evaluated on main responsibilities, goals, targets, and objectives. And there is a development plan for each one, built together with the manager.”
Growing sideways
One of the most interesting points from the conversation was about something also encouraged here at Liferay: exchanging roles and teams. I, for one, changed teams many times. It happens both at bxblue and Red Hat.
“We are incentivized to change teams through internal selection processes,” Karina told me. The good side is that, when there are no vacancies or budget for promotion, employees can develop themselves by expanding their horizons. Glauce adds: “At RH, there are always opportunities. Sometimes we don’t have the budget or the ‘next step,’ but we always have more responsibilities. There are horizontal, vertical, or forked careers; it is possible to change areas of expertise, become a specialist, etc.”
Are sideways moves a solution for career growth? In my opinion, they can be a good complementary tool. Naturally, though, they do not replace promotions, and both collaborators and HR departments need to be aware of that. On the other hand, I believe they can help a lot. By changing teams or departments, I myself have solved problems I thought demanded a promotion. I still looked for an upgrade, but the change was a breath of fresh air.
Summing up
Today, maybe even more than at the time of the interviews, companies have to make an effort to keep their collaborators. With more and more companies adopting remote-first, the challenge is even more significant. Well-defined career plans, such as bxblue’s, are a great benefit for keeping professionals. They are not mandatory, though, as Red Hat’s distributed model has proved. Team and area changes are also helpful although, personally, I believe it is necessary to pay attention to avoid stagnation.
(This is the script of a speech I gave to Liferay‘s Toastmasters club. Alas, I forgot to record it, as always. Yet, it may still be worth sharing. Let’s hope I remember to record my next speech!)
I have to say, it is always a pleasure to be here, not least because our chapter is so cosmopolitan! It is one of the things I like the most in my career now: the opportunity to converse with such a diverse set of people and cultures.
I’m sorry if I sound provincial; it is because I am a bit. We are not global citizens here where I come from. Last year, my barber couldn’t believe I had daily meetings in English. Although I’m pretty comfortable returning to the neighborhood I grew up in, I would be bored to death if I were locked here.
Fortunately, going to the university and getting a career in IT expanded my horizons. For one thing, I had to learn English, and what a marvelous achievement it was! It opened the doors of my comprehension in ways I couldn’t even imagine before. I was lucky my university had this course, Instrumental English, to teach us how to use the language fluently. (My previous experiences with language courses were disappointing, to be honest.) It took time and practice, but by consuming content in English, I got to a point where I felt quite comfortable, even if not flawless.
I know many of you here are native English speakers. You are lucky, my friends! I can only wonder how knowing such a universal language from an early age can give you a vaster view of the world. On the other hand, it may rob you of the very satisfying pastime of learning languages.
Many people are adamant that everybody should learn a foreign language to the point of fluency. While I agree it is a good idea, I wouldn’t be so bold as to affirm every person should do it, and I surely wouldn’t disregard monolingual people. Leaving aside the fact that you do you, learning English makes more sense, and is likely more straightforward, than learning most other languages, for a bunch of reasons.
First of all, English is almost automatically useful. Those of us who learned English as a second language most likely learned it because we needed it for our studies or careers. I can assure you that, here where I live, fluency in English opens a lot of doors, doors that are already open to native English speakers. It is easier to justify learning English as a second language, to yourself and to others.
Also, English is relatively simple. Not that simple, mind you: the pronunciation is frankly bewildering, and so is the nonsensical spelling. That said, it has one of the most frugal grammars I have ever seen, second maybe only to Mandarin. Who knows, someday I may be skilled enough to compare them properly.
On top of that, the enormous cultural influence of countries such as the United States and the United Kingdom paves the way for us non-native speakers. I don’t know about your cultural context, but here we use English words and expressions a lot! Also, there is so much quality material to consume and practice with all over the place. There is, of course, lots of quality material in other languages as well. Yet, it may be harder to find for those still learning. English, on the other hand, has so much content that it is hard not to find something interesting.
Given all that, I have this theory that non-native English speakers have a leg up on the path to polyglotism. Studying languages becomes easier and easier the more languages you know. Since we have to learn English, we have already taken the first, and most challenging, step!
All that said, I still emphatically recommend learning languages, even if you are a native English speaker. Speaking another language expands your worldview drastically, is helpful for your career, and makes trips abroad much more fun. There are even some studies suggesting it can help prevent memory loss and other neurological ailments. Although, I confess, you may find yourself too often forgetting how to say this or that word in your native language, a phenomenon my bilingual friends can surely relate to.
And learning languages is fun, I can attest. After studying English, I tried to learn German for years, without much success but having a lot of fun. When I started working for Liferay Latin America, the company offered all employees a Spanish course with an excellent teacher, which I took with enthusiasm. I was so lucky, not only because I had this opportunity but also because Spanish is remarkably easy for Portuguese speakers. (Which is another disadvantage for English speakers: I don’t know of any language as close to English as the Romance languages are to each other.) With moderate fluency in two languages, I got a taste for lingos. My old German books came out of the archives, and I am even taking a Chinese course right now. The point is, multilingualism becomes more accessible and fun with time.
So, what about you? Do you speak more than one tongue? Would you like to? If so, give it a try. It may look scary or exhausting at first, but it doesn’t need to be. Language learning, like beers and sports, can be disconcerting at first but exhilarating once you get the taste.
I’m in love with the Crafting Interpreters book. In it, Bob Nystrom teaches us how to write an interpreter by implementing a little programming language called Lox. It had been a long time since I had so much fun programming! Besides being well written, the book is funny and teaches way more than I expected. But I have a problem.
The snippets in the book are written so that we can copy and paste them. However, the book has challenges at the end of each chapter; these challenges have no source code, and sometimes they force us to change the interpreter a lot. I do every one of these exercises, so my interpreter diverges quite a bit from the source in the book. Consequently, I often break some part of my interpreter.
How to solve that?
Unit tests would be brittle, since the code structure changes frequently. End-to-end tests seem more practical in this case. So, for each new feature of the language, I wrote a little program. For example, my interpreter should create closures; to ensure that, I copied the Lox program below (the counter example from the book) to the file counter.lox:
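fun makeCounter() {
    var i = 0;
    fun count() {
        i = i + 1;
        print i;
    }
    return count;
}

var counter = makeCounter();
counter(); // Prints "1".
counter(); // Prints "2".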
This program’s output should be the numbers 1 and 2, printed on different lines, so I put these values in a file called counter.lox.out. The program should not fail either, so I created an empty file called counter.lox.err. (In some cases, it is necessary to ensure the Lox program will fail; in those cases, the .lox.err file has content.)
Well, I wrote programs and output files for various examples; now I needed to compare the programs’ results to the expected outputs. I decided to use the tool that helps me the most in urgent times: shell script. I wrote a Bash script with a for loop iterating over all the examples:
for l in *.lox
do
    FAIL=0
    # ... for each example: run it and compare the outputs, as shown below ...
done
For each example, I executed the Lox program (here, lox stands for however you invoke your interpreter), redirecting the outputs to temporary files:
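    # run the example, capturing standard output and standard error
    # in temporary files
    out=$(mktemp)
    err=$(mktemp)
    lox $l > $out 2> $err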
If the program prints something to standard output that differs from what is in its .lox.out file, we have a failure:
    if ! diff $l.out $out
    then
        FAIL=1
    fi
done
We also check the standard error against the .lox.err file:
    if ! diff $l.out $out
    then
        FAIL=1
    fi

    if ! diff $l.err $err
    then
        FAIL=1
    fi
done
Finally, I check whether there was any failure and report the result:
    if ! diff $l.out $out
    then
        FAIL=1
    fi

    if ! diff $l.err $err
    then
        FAIL=1
    fi

    if [ "$FAIL" = "1" ]
    then
        echo "FAIL" $l
    else
        echo "PASS" $l
    fi
done
Not all of my Lox programs can be checked, though. For example, there is a program that times loop executions; it is impossible to anticipate the value it will print. Because of that, I added the possibility of skipping some programs: we just need to create a file with the .lox.skip extension:
if [ "$FAIL" = "1" ]
then
echo "FAIL" $l
else
echo "PASS" $l
fi
done</code></pre>
<p>
If, however, a Lox example does not have the expected output files (nor a .lox.skip file), then I have a problem, and the entire script fails. The guard was something like this sketch:
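    # a sketch: right after the .lox.skip check, abort the whole
    # run if an example lacks its expected output files
    if [ ! -f $l.out ] || [ ! -f $l.err ]
    then
        echo "ERROR: no expected outputs for $l"
        exit 1
    fi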
One of my hobbies during the recent World Cup was collecting stickers. Actually, I built the sticker album because my son wanted it, but I had fun, too, I guess.
An important part of collecting stickers is exchanging the repeated ones. Through messages in WhatsApp groups, we report which repeated stickers we have and which ones we still need. As a programmer, I refused to compare the lists myself, so I wrote a little program in Python (with doctests and all) to find intersections.
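The heart of it was something like this sketch (illustrative names, not the original code):

def intersection(mine, theirs):
    """
    Lists the stickers that appear in both lists.

    >>> intersection([1, 2, 3, 5], [2, 5, 8])
    [2, 5]
    """
    return sorted(set(mine) & set(theirs))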
The missing laptop
Last week, a person came to my home to exchange stickers. I had the lists of repeated and needed stickers, both mine and hers, but my script was on another laptop. I did not even know where that machine was, and my guest was in a hurry.
There was no time to find the computer, rewrite the program, or even compare the lists manually. So, shell script to the rescue! The lists were comma-separated numbers, some with notes in parentheses. First, I stripped the parenthesized notes and split the values, one per line:
$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g'
Then, I cleaned up every line, removing any character that is not a digit:
$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g' | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/g'
(In practice, I would call sed only once, passing it all the expressions. Here, though, I believe it is clearer to invoke sed several times.)
Finally, I sort the values:
$ cat list.txt | sed 's/([^)]*)//g' | sed 's/, */\n/g' | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/g' | sort -n > mine-needed.txt
I did this with the list of needed stickers, and also with the list of repeated stickers, getting two files.
Finding intersections with grep
Now, I need to compare them. There are many options, and I chose to use grep.
In this case, I called grep with one of the files as input and the other file as a list of patterns to match, through the -f option. Also, only complete matches matter here, so I used the -x flag. Finally, I asked grep to compare strings directly (instead of treating them as regular expressions) with the -F flag.
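Putting it all together, the call was something like this (assuming her repeated stickers went through the same cleanup, into a hypothetical hers-repeated.txt):

$ grep -Fxf mine-needed.txt hers-repeated.txt

Every line it prints is a sticker that I need and she has to spare.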
Done! In a minute, I knew which stickers I wanted. Then I just did the same with my repeated ones.
Why is this interesting?
These one-liners are not really a big deal to me today. The interesting thing is that, when I started using the terminal, they would have been incredible. Really, look how many pipes we used to pre-process the files! And this grep trick? I used to struggle just to create a regex that worked! Actually, until I solved this problem, I did not even know about the -x option.
I once helped a friend to process a good number of files. He had already spent more than two hours trying to do it in Java, and together we solved it in ten minutes with shell script. He then told me how much he wished he knew shell script and asked me how to learn it.
Well, little examples (like this one), as simple as they seem, taught me a lot. This is how I learned to script: trying to solve problems, learning new commands and options in small batches. In the end, it is a valuable skill.
So, I hope this little toying around enriches your day, too. It certainly enriched mine; I just wish I had thought of it before spending three times as long on my Python script!
Doctest is one of my favorite Python modules. With doctest, it is possible to execute code snippets from documentation. You could, for example, write something like this in your tutorial.md…
>>> f()
1
…and then execute the command python -mdoctest tutorial.md. If f() returns 1, nothing will happen. If it returns something else, though, an error message will appear, similar to this one:
**********************************************************************
File "f.txt", line 2, in f.txt
Failed example:
    f()
Expected:
    1
Got:
    2
**********************************************************************
1 items had failures:
1 of 2 in f.txt
***Test Failed*** 1 failures.
It is an impressive tool, but also an unpopular one. The problem is, Doctest is often improperly used. For example, it is common to try to write unit tests with doctests. Great mistake.
Nonetheless, I believe it is unfair to disregard the module due to these misunderstandings. Doctest can and should be used for what it does best: to keep your documentation alive, and even to guide your development!
Let me show an example.
When you don’t know what to do
Some days ago, I was writing a class to modify an HTML document using xml.dom.minidom. At one point, I needed a function to map CSS classes to nodes of the document. That alone would be a complicated function! I had no idea where to start.
In theory, unit tests could be useful here. They just would not be very practical: this was an internal, private function, an implementation detail. To test it, I would have to expose it. We would also need a new file, just for the tests. And test cases are not that readable anyway.
Reading the documentation from the future
Instead, I documented the function first. I wrote a little paragraph describing what it would do. It alone was enough to clarify my ideas a bit:
Given an xml.dom.minidom.Node, returns a map
from every “class” attribute to a list of nodes
with this class.
Then, I thought about how to write the same thing, but with a code example. In my head, this function (which I called get_css_class_dict()) would receive an xml.dom.minidom document. So, I sketched an example document along these lines:
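<html>
    <body>
        <span class="a">Hello</span>
        <div class="a b">World</div>
    </body>
</html>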
Given this snippet, I would expect the function to return a dict. My document has two CSS classes, “a” and “b,” so my dict would have two keys. Each key would map to a list of the nodes with that CSS class. Something like this:
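{
    'a': [<DOM Element: span>, <DOM Element: div>],
    'b': [<DOM Element: span>]
}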
I put these sketches in the docstring of get_css_class_dict(). So far, we have this function:
def get_css_class_dict(node):
    """
    Given an xml.dom.minidom.Node, returns a map from every "class" attribute
    from it to a list of nodes with this class.

    For example, for the document below:

    >>> doc = xml.dom.minidom.parseString(
    ...     '''
    ...     <html>
    ...         <body>
    ...             <span class="a">Hello</span>
    ...             <div class="a b">World</div>
    ...         </body>
    ...     </html>
    ...     ''')

    ...we will get this:

    >>> d = get_css_class_dict(doc)
    >>> d['a'] # doctest: +ELLIPSIS
    [<DOM Element: span at ...>, <DOM Element: div at ...>]
    >>> d['b'] # doctest: +ELLIPSIS
    [<DOM Element: span at ...>]
    """
    pass
I could do something similar with unit tests, but there would be much more code around, polluting the documentation. Besides that, the prose gracefully complements the code, giving rhythm to the reading.
I executed the doctests, and this was the result:
**********************************************************************
File "vtodo/listing/filler.py", line 75, in filler.get_css_class_dict
Failed example:
    d['a']
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/doctest.py", line 1330, in __run
        compileflags, 1), test.globs)
      File "<doctest filler.get_css_class_dict[2]>", line 1, in <module>
        d['a']
    TypeError: 'NoneType' object is not subscriptable
**********************************************************************
File "vtodo/listing/filler.py", line 77, in filler.get_css_class_dict
Failed example:
    d['b']
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/doctest.py", line 1330, in __run
        compileflags, 1), test.globs)
      File "<doctest filler.get_css_class_dict[3]>", line 1, in <module>
        d['b']
    TypeError: 'NoneType' object is not subscriptable
**********************************************************************
1 items had failures:
2 of 4 in filler.get_css_class_dict
***Test Failed*** 2 failures.
I’m following test-driven development, basically, but with executable documentation. At once, I got a readable example and a basic test.
Now, we just need to implement the function! I used some recursion and, if the code is not the most succinct ever at first…
def get_css_class_dict(node):
    """
    Given an xml.dom.minidom.Node, returns a map from every "class" attribute
    from it to a list of nodes with this class.

    For example, for the document below:

    >>> doc = xml.dom.minidom.parseString(
    ...     '''
    ...     <html>
    ...         <body>
    ...             <span class="a">Hello</span>
    ...             <div class="a b">World</div>
    ...         </body>
    ...     </html>
    ...     ''')

    ...we will get this:

    >>> d = get_css_class_dict(doc)
    >>> d['a'] # doctest: +ELLIPSIS
    [<DOM Element: span at ...>, <DOM Element: div at ...>]
    >>> d['b'] # doctest: +ELLIPSIS
    [<DOM Element: span at ...>]
    """
    css_class_dict = {}

    if node.attributes is not None and 'class' in node.attributes:
        css_classes = node.attributes['class'].value
        for css_class in css_classes.split():
            css_class_list = css_class_dict.get(css_class, [])
            css_class_list.append(node)
            css_class_dict[css_class] = css_class_list

    childNodes = getattr(node, 'childNodes', [])
    for cn in childNodes:
        ccd = get_css_class_dict(cn)
        for css_class, nodes_list in ccd.items():
            css_class_list = css_class_dict.get(css_class, [])
            css_class_list.extend(nodes_list)
            css_class_dict[css_class] = css_class_list

    return css_class_dict
…at least it works as expected:
$ python -mdoctest vtodo/listing/filler.py
**********************************************************************
File "vtodo/listing/filler.py", line 77, in filler.get_css_class_dict
Failed example:
    d['b'] # doctest: +ELLIPSIS
Expected:
    [<DOM Element: span at ...>]
Got:
    [<DOM Element: div at ...>]
**********************************************************************
1 items had failures:
1 of 4 in filler.get_css_class_dict
***Test Failed*** 1 failures.
Wait a minute. What was that?!
When the documentation is wrong
Well, there is a mistake in my doctest! The span element does not have the “b” class; the div element does. So, I just need to change the line
[<DOM Element: span at ...>]
to
[<DOM Element: div at ...>]
and the Doctest will pass.
Isn’t it wonderful? I found a slip in my documentation almost immediately. More than that: if my function’s behavior changes someday, the example from my docstring will fail. I’ll know exactly where the documentation will need updates.
Making doctests worth it
That is the rationale behind Doctest. Our documentation had a subtle mistake, and we found it by executing it. Doctests do not guarantee the correctness of the code; they reinforce the correctness of the documentation. It is a well-known aspect of the package, but few people seem to believe it is worth it.
I think it is! Documentation is often deemed unpleasant work, but it does not have to be so. Just as TDD makes tests exciting, doctests can make documentation fun.
Besides that, in the same way TDD can point to design limitations, a hard time writing doctests can point to API problems. If it is hard to write a clear and concise example of how to use your API, surrounded by explanatory text, the API is likely too complicated, right?
Give Doctest a chance
In the end, I do see doctest’s limitations. It is surely inadequate for unit tests, for example. And yet, doctest makes documenting so easy and fun! I don’t see why it is so unpopular.
Nonetheless, its greatest advantage is how doctest makes the development process easier. Some time ago, I joked that we need to create DocDD:
I need to invent the documentation-driven development. I'm writing some docstrings here and uh! so many things were wrong!