Although I no longer work at Sage, some of the tools I discovered there are still a wise choice when it comes to web development. Since they did not fit into the first post of this series, I'll briefly describe them in this second one. Don't worry: I promise not to write a third part.

Having landed type safety, automatic code formatting, static analysis and spell checking in our project, we can set aside the more technical aspects of development. This second part is dedicated to ensuring code quality and repository integrity. Before you walk away, remember what somebody once said: "Quality is free, but only to those who are willing to pay heavily for it".

Homer Simpson messing up

Natural language tests

The goal of natural language tests is both to have easily understandable tests and to provide a friendly description of the software (which can double as documentation). It requires some effort to set up and get used to, but it helps make the software more scalable and easier to maintain.

Before giving it a try, it sounds unnecessarily complicated: "It will take much longer to develop a new feature", "It will increase the complexity of development", "The benefits are not that clear" and "I am already familiar with Mocha/Jest". Yep, all those sentences have a point. The transition to natural language testing doesn't happen overnight. However, let's take a second to recap the goal of having tests.

All we want when it comes to unit tests is a set of scripts that run our code and ensure the software behaves in the way we expect. As long as we fulfill that objective, any way of implementing those scripts is valid. From there on, choosing one test runner or another seems merely a matter of personal preference.

And, in fact, if you are aware of the importance of tests and you already care for them as if they were your beloved child, there is nothing wrong with sticking to Mocha, Jest or whichever test runner you are currently using. Nevertheless, let's have a look at a simple React + Jest + Enzyme example test:
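A minimal sketch of such a test, assuming a hypothetical TextComponent that renders the text it receives through its props inside a paragraph element:

```typescript
import * as React from 'react';
import { shallow } from 'enzyme';

// Hypothetical component under test: renders its "text" prop inside a <p> element
import { TextComponent } from './text-component';

describe('TextComponent', () => {
  it('displays the provided text', () => {
    // Declare and initialize the component properties
    const text = 'Hello world';

    // Render the component
    const wrapper = shallow(<TextComponent text={text} />);

    // Verify the component behaves in the expected way
    expect(wrapper.find('p').text()).toBe(text);
  });
});
```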

This kind of test structure mixes the what (the behavior we want to test) with the how (the test implementation itself). Now, I must make clear that the test implementation is essential. No magical framework will save us from declaring and initializing the component properties, rendering the component and writing an expression to verify that the component behaves in the expected way.

The advantage of using natural language is that tests will focus on describing the software behavior, leaving the implementation as a secondary aspect that only needs to be revisited when fixing a failing test or when extending the product's functionality. This is what a natural language version of the previous test could look like (e.g. text-component.feature):
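A sketch of what the feature file could contain (the wording of the sentences is illustrative):

```gherkin
Feature: Text component
  The text component displays the text provided through its properties

  Scenario: Displaying the provided text
    Given a text component with the text "Hello world"
    When the component is rendered
    Then the text "Hello world" is displayed
```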

Under the hood, the implementation of the test is very similar to the one depicted above. The only tricky part is that the test code needs to be mapped to the natural language sentences. Cucumber.js is a popular Gherkin library that helps us do so. This is how the previous sentences can be defined through cucumber (e.g. text-component.step.tsx):
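A sketch of the matching step definitions, assuming the hypothetical TextComponent and the illustrative sentences above (the cucumber package exposes Given/When/Then to register each sentence):

```typescript
import * as React from 'react';
import * as assert from 'assert';
import { Given, When, Then } from 'cucumber';
import { shallow, ShallowWrapper } from 'enzyme';

// Hypothetical component under test: renders its "text" prop inside a <p> element
import { TextComponent } from './text-component';

let text: string;
let wrapper: ShallowWrapper;

// Each expression maps a natural language sentence to its implementation;
// {string} captures the quoted parameter from the feature file
Given('a text component with the text {string}', (providedText: string) => {
  text = providedText;
});

When('the component is rendered', () => {
  wrapper = shallow(<TextComponent text={text} />);
});

Then('the text {string} is displayed', (expectedText: string) => {
  assert.strictEqual(wrapper.find('p').text(), expectedText);
});
```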

As you can see, the test logic remains the same, but wrapped inside natural language sentences that summarize its intention and can later be reused in any number of test cases. It takes some practice to learn how to split a test case into multiple sentences and how to make those sentences as reusable as possible but, once you get used to it, you will not want to test any other way 💘

Steps
  • Install cucumber
    npm install --save-dev cucumber
  • Create a cucumber.js file in the root of the project with the following content:
  • If the project is meant to run in the browser, we will need to mock the document. We can do that with jsdom and an initialization script like the following, which we will require from the test npm script (e.g. cucumber-environment.ts):
    npm install --save-dev jsdom
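A sketch of that initialization script, assuming components are rendered against a plain empty document:

```typescript
// cucumber-environment.ts: exposes browser-like globals before the tests run
import { JSDOM } from 'jsdom';

const dom = new JSDOM('<!doctype html><html><body></body></html>');

// Enzyme and React expect these globals to exist when rendering components
(global as any).window = dom.window;
(global as any).document = dom.window.document;
(global as any).navigator = dom.window.navigator;
```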
  • Finally, add a test npm script to package.json that will run cucumber, requiring the jsdom initialization file, the step definition files and the feature files. You might want to modify the require globs according to your project structure:
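A sketch of such a script (the file locations and globs are illustrative):

```json
{
  "scripts": {
    "test": "cucumber-js --require ./cucumber-environment.js --require 'src/**/*.step.js' 'src/**/*.feature'"
  }
}
```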
  • If using TypeScript, install the corresponding types and ts-node, and adapt the test npm script to require ts-node too:
    npm install --save-dev @types/cucumber @types/jsdom ts-node
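The adapted script could look like this, using cucumber's --require-module option to load ts-node (the globs are illustrative):

```json
{
  "scripts": {
    "test": "cucumber-js --require-module ts-node/register --require ./cucumber-environment.ts --require 'src/**/*.step.tsx' 'src/**/*.feature'"
  }
}
```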

Code coverage

The next natural step after adding tests to your code is making sure that you test all the parts that need to be tested. I don't advise covering 100% of the lines, but you definitely need to address the business-critical logic that might turn into critical bugs in production. There is a variety of test coverage tools that will instrument your code before running the tests and keep track of the lines that are executed during the test run.

Personally, I have only worked with istanbul.js. It is easy to run (you only need to add a new npm script) and provides helpful HTML reports, highlighting the lines that are going untested in your code files. Feel free to explore alternatives, but this is the horse I am betting on.

istanbul.js coverage report
Steps
  • Install istanbul (for some reason, they named the npm package nyc):
    npm install --save-dev nyc
  • Create a .nycrc istanbul configuration file with the following contents (you might need to adapt the configuration to your needs):
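A configuration sketch (the all flag also reports files that are never loaded by the tests; the globs are illustrative):

```json
{
  "all": true,
  "include": ["src/**/*.ts", "src/**/*.tsx"],
  "exclude": ["src/**/*.step.tsx"],
  "reporter": ["html", "text-summary"]
}
```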
  • Add a coverage npm script (that will run your already existing test npm script):
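For example, wrapping the existing test script with nyc:

```json
{
  "scripts": {
    "coverage": "nyc npm run test"
  }
}
```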
  • Add .nyc_output/ and coverage/ folders to your .gitignore

Repository integrity

The last thing you want after having defined a quality assurance pipeline (e.g. linting, formatting, testing, etc.) is to forget about it as time goes by. No matter how careful a developer you are, you will fail to remember that checklist sometimes (especially under busy deadlines), so running those verifications automatically is quite a good idea.

Git provides a way to fire off custom scripts when certain actions occur (e.g. when a commit is about to be made, when a branch is about to be pushed, etc.). In fact, when you initialize a new repository with git init, Git populates the hooks directory with a bunch of example scripts (see Git documentation for more details).

If you don't want to go that deep into Git hooks, and you certainly don't need to, there is a tool called Husky (woof!) that will deal with that complexity for you. By associating Git actions with commands in your package.json, Husky will run the corresponding commands each time a Git action occurs. This way you can make sure, for example, that you never push failing tests to your branch again.

Steps
  • Install husky:
    npm install --save-dev husky
  • Create a husky section in your package.json file specifying the commands to be executed for each Git action. For example:
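A sketch of such a section, valid for husky versions prior to v5, which read the hooks from package.json (lint and test are assumed to be existing npm scripts; newer husky versions use a .husky/ directory instead):

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "npm run lint",
      "pre-push": "npm run test"
    }
  }
}
```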

Conventional commits

Last but not least! Knowledge is key when it comes to software quality, and part of the knowledge transfer happens through the Git history. You might already be taking your time to write descriptive commit messages, and that's admirable (nothing is more useless than messages like "minor fixes"). You can, however, go a step further by enforcing a commit convention, gaining some additional benefits while keeping your Git history crystal clear.

Conventional Commits is a lightweight convention to make commit messages more descriptive and the Git history more explicit. In addition, it makes it easier to automate certain aspects of software releases (e.g., generating a changelog file for free). The convention can be automatically verified on every commit with the help of tools like commitlint and, once the standard is in place, you can take advantage of tools that rely on it (e.g. standard-version, a utility for versioning using semver and generating a CHANGELOG). Here are some example commit messages:

feat(lang): add polish language

feat: allow provided config object to extend other configs

docs: correct spelling of CHANGELOG
Steps
  • Install commitlint and the conventional commits configuration package (or any other configuration package you prefer):
    npm install --save-dev @commitlint/cli @commitlint/config-conventional
  • Create a commitlint.config.js configuration file, exporting the previously installed configuration:
  • Finally, run commitlint for every commit. Even though there are other ways, you can do that through Husky hooks:
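With husky versions prior to v5, this can be a commit-msg hook in package.json (HUSKY_GIT_PARAMS holds the path of the file containing the commit message):

```json
{
  "husky": {
    "hooks": {
      "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
    }
  }
}
```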

Conclusions

And that's the end of it! Four simple concepts that will help you make your tests more readable (they can even qualify as public documentation), spot untested lines in your code, prevent committing/pushing unfinished code and make your Git history more descriptive. Give them a try and don't hesitate to add any combination of them to your projects 💪 See you in the next post!

Wanna hear back from me?

Subscribe to my newsletter and you will get an email when I post a new entry (about once a month, spam free). Cancel the subscription at any time.
