The 100% Code Coverage Misconception

Several articles on the internet right now claim that achieving 100% coverage is a futile objective. I wholeheartedly disagree. Generally, code that is difficult to test is code that should be refactored. I understand the skepticism. I used to be terrible at testing a few years back; I assumed it was simply something that would slow me down. When I first began coding, testing was just not something people did, and when it was, it was frequently the responsibility of a separate QA team. However, a few years ago, it became a genuine hot topic.

Code coverage is a metric that indicates what percentage of your source code is exercised by your tests.

Candidates were expected to know how to write tests during interviews, and more firms pushed testing from the top down as a quality initiative. I've always strived to be at the top of my game, and I decided that walking into interviews and admitting that "testing isn't really my strong suit" was no longer acceptable, so I resolved to aim for 100% coverage in everything I wrote from then on.

At the time, I wasn't entirely certain of the benefits I'd derive from it, or even whether there were any. Honestly, I wouldn't go back now. When something goes wrong in a code base with 100% coverage, it's quite probable that your tests will tell you precisely where and how it went wrong. This is not to suggest that unit testing alone is sufficient; it isn't. However, I believe that leaving code untested is not a wise alternative. Rewind with me to a time when I, too, was skeptical of the benefits of test coverage.

Part 1: Learning the Lingo

At the time, the tools of the trade included a blend of mocha, sinon, and chai. Mocha was used to run the tests, sinon was used to build "mocks" and "spies," and chai was an assertion library that let you write assertions in a human-friendly manner. To begin, what the devil is a spy or a mock? James Bond or Ethan Hunt immediately spring to mind; that's certainly not what's being discussed here, yet it's an adequate parallel. After some research, I discovered that a spy is a function that has been wrapped by a testing framework to offer meta information about its use. It eavesdrops on the function, similar to how Apple's FaceTime bug allowed third parties to snoop on you. Thus, akin to James Bond.
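To make that concrete, here is a minimal sketch using that mocha/sinon/chai stack; the logger object is invented purely for illustration.

```js
const sinon = require('sinon');
const { expect } = require('chai');

describe('a sinon spy', () => {
  it('reports how the wrapped function was used', () => {
    const logger = { info: () => {} };
    const spy = sinon.spy(logger, 'info'); // wrap the real method so calls are recorded

    logger.info('saved', { id: 1 });

    expect(spy.calledOnce).to.equal(true);           // meta information: how many calls
    expect(spy.firstCall.args[0]).to.equal('saved'); // and with which arguments

    spy.restore(); // unwrap the original method when done
  });
});
```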

A mock is like a spy that has been modified further: it not only tracks how a specific function has been used, but also replaces its behavior to make it predictable. I also discovered that there are various types of testing, not just the three most common ones: unit testing, integration testing, and end-to-end testing. When we say "unit testing," we mean that we must be able to divide our code into individual units. Anything outside of that specific unit, such as other functions or entire modules, is a candidate for mocking. Jest is my preferred unit testing tool. Unit testing is also the only type of testing in which coverage is measured.
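Here is a small self-contained Jest sketch of the idea; fetchGreeting and its httpClient are invented for illustration. The mock replaces everything outside the unit with something predictable while also recording how it was used.

```js
// fetchGreeting is the "unit" under test; the HTTP client is a dependency outside it.
const fetchGreeting = async (httpClient, name) => {
  const { greeting } = await httpClient.get(`/greet/${name}`);
  return greeting.toUpperCase();
};

test('fetchGreeting formats whatever the client returns', async () => {
  // The mock pins down behavior (mockResolvedValue) and records usage.
  const httpClient = { get: jest.fn().mockResolvedValue({ greeting: 'hello' }) };

  await expect(fetchGreeting(httpClient, 'ana')).resolves.toBe('HELLO');
  expect(httpClient.get).toHaveBeenCalledWith('/greet/ana');
});
```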

With integration testing, we test how our software integrates with other pieces of software: for example, a test that sends a message to Kafka, which our service should consume, and then checks for the result in the database. When writing integration tests, I usually use Jest as well.
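As a hedged sketch of what such a test might look like, assuming a locally running broker, the kafkajs client, and a hypothetical findOrderById helper that queries the service's database (none of these are part of the article's project):

```js
const { Kafka } = require('kafkajs');
const { findOrderById } = require('./testDb'); // hypothetical DB helper

const kafka = new Kafka({ clientId: 'integration-tests', brokers: ['localhost:9092'] });

test('the service consumes the message and persists the order', async () => {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{ value: JSON.stringify({ id: 'order-1', total: 10 }) }],
  });
  await producer.disconnect();

  // Poll the database until the consumer has done its work, or the test times out.
  let order = null;
  for (let attempt = 0; attempt < 20 && !order; attempt += 1) {
    order = await findOrderById('order-1');
    if (!order) await new Promise((resolve) => setTimeout(resolve, 500));
  }

  expect(order).toMatchObject({ id: 'order-1', total: 10 });
}, 30000);
```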

E2E testing is similar to a bot interacting with your app. You program it to open the site in a browser, click buttons, and ensure that everything works as it should from the user's viewpoint. I spent several months ensuring that every line of code I wrote was tested. It was difficult at first, I admit. I spent a lot of time on StackOverflow searching for mocking and spying examples. By the end, I discovered that my level of confidence in my code had significantly increased.
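For reference, here is a minimal sketch of that kind of E2E test using Cypress; the URL and link text are illustrative.

```js
describe('smoke test', () => {
  it('loads the home page and navigates to About', () => {
    cy.visit('http://localhost:3000');     // open the site in a real browser
    cy.contains('About').click();          // click around like a user would
    cy.url().should('include', '/about');  // assert from the user's viewpoint
    cy.contains('h1', 'About');
  });
});
```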

Another advantage was that when something went wrong, my tests would usually tell me exactly where it went wrong. When other engineers made adjustments to code that I wrote, I was able to review it much faster. When important APIs changed, people were notified via a failing test and either quickly updated it or reconsidered their changes. More importantly, I began writing better code. I learned that if something is difficult to test or fully cover, it usually means I didn't write the code well, and it could be refactored into more maintainable and flexible APIs. To that end, striving for 100 percent coverage pushed me to convert anonymous functions into named functions, and to learn dependency injection and partial application through a variety of refactors.

I even abandoned GitFlow for trunk-based development after completing integration tests. Committing straight to master was something I thought was crazy a few years ago, but now I do it every day on a team of nearly 15 engineers.

Part 2: Lead by Example

Around the time I was becoming comfortable with my new testing stack, another tool was introduced to the market that many claimed made unit testing even easier: Jest.

Jest is a framework for automated testing developed by Facebook. It does an excellent job of condensing the previous libraries I had used into a single coherent framework that includes a test runner as well as APIs for mocking, spying, and assertions. In addition to providing a single library for all of your unit-testing needs, Jest simplifies many of these concepts and patterns with powerful, straightforward mocking.
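As a small self-contained taste of how those pieces fit together in one file (the math object is invented for illustration):

```js
// Runner (describe/it), spy, mock behavior, and assertions all come from Jest itself.
const math = { add: (a, b) => a + b };

describe('jest in one place', () => {
  it('spies on a real method and mocks its return value', () => {
    const spy = jest.spyOn(math, 'add').mockReturnValue(42);

    expect(math.add(1, 2)).toBe(42);        // behavior replaced by the mock
    expect(spy).toHaveBeenCalledWith(1, 2); // usage tracked by the spy

    spy.mockRestore(); // restore the original implementation
  });
});
```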

I have been documenting the process of building a React application with Parcel and streaming SSR, and this article picks up where the previous one left off. I reasoned that the best way to demonstrate complete coverage was to show how to get there. Along the way, we will most likely discover several places where the code can be refactored to make it more testable. Starting from where I left off, I'll complete this project's coverage, demonstrating which refactorings to make, where to use dependency injection and partial application, and what to mock when coverage is hard to obtain. The project includes a React app in the app folder as well as a server folder containing the SSR logic. Let's begin with the application tests.

Application Tests

I have a few React components that are all similarly simple. This is among the reasons why functional components are so effective: they are easier to test than classes. They lack internal state and instead rely on inputs and outputs; given input X, they produce output Y. When there is state, it can be stored outside of the component. In this regard, the new React Hooks API is useful because it encourages the creation of functional components and includes an easily mockable mechanism for providing state to the component. In terms of testing, Redux offers the same advantage. Let's begin by knocking out the remaining easy components. Basically, we just need to render them and possibly double-check that some important information is rendered.
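As a sketch of what those rendering tests look like, assuming @testing-library/react and @testing-library/jest-dom are set up and a simple Header component exists (all assumptions for illustration):

```jsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import Header from '../Header'; // hypothetical component path

test('renders the header', () => {
  render(<Header />);
  // Double-check that some important information made it into the DOM.
  expect(screen.getByRole('banner')).toBeInTheDocument();
});
```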

The following commits' tests are all very alike:

  • Test: style component renders test
  • Fix: test for pages
  • Fix: tests for components

As you can see, simply ensuring that these components render is enough to cover them completely. More comprehensive interactions are better left to E2E tests, which are beyond the scope of this article. The next component, app/App.jsx, is a little more complicated. After writing a rendering test, you'll notice that the Router still uses an unreachable anonymous function to render the About page. To reach and test it, we'll perform a small refactoring, extracting it to a named function that we can export and test.
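A hedged sketch of that refactor follows; the route path and component names are assumptions rather than the project's actual code.

```jsx
// app/App.jsx (sketch)
import React from 'react';
import { BrowserRouter, Route } from 'react-router-dom';
import About from './pages/About';

// Extracted from an inline render prop so it can be exported and tested directly.
export const renderAbout = () => <About />;

const App = () => (
  <BrowserRouter>
    <Route path="/about" render={renderAbout} />
  </BrowserRouter>
);

export default App;
```

A unit test can now import renderAbout directly and assert, for example, that renderAbout().type is About, without ever touching the Router.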

We'll leave the more specific tests for the About page where they live, because we already have a set of tests for it above; here we just need to check that it renders. With that, the only file remaining to test in our application is app/client.js, after which we can proceed to the server-side tests. The first thing that stands out is the reliance on global variables: document, process, and module. The second issue is that nothing is exported, making it difficult to run the code multiple times with different inputs.

We can fix this with a few refactorings:

  1. Put all of the logic into a function that we can export. This function will accept an options object containing all of its dependencies. This is known as dependency injection, and it will let us easily pass in mock versions of anything we like (see the sketch below).
  2. Extract the anonymous function that runs after rehydrating in production mode into a named function.

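A hedged sketch of what the refactored app/client.js might look like; the start name and its option fields are illustrative, not the project's real API.

```js
// app/client.js (sketch): every external dependency arrives through the options object.
import React from 'react';

export const start = ({ hydrate, App, document: doc, hotModule }) => {
  hydrate(React.createElement(App), doc.getElementById('root'));

  // Named and injectable rather than anonymous and global.
  if (hotModule && hotModule.hot) {
    hotModule.hot.accept();
  }
};
```

In the test, hydrate, document, and hotModule can all be plain jest.fn()-based doubles, so every branch is reachable with different inputs.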

Server Tests

In the previous article I wrote tests for one application file and one server file, so tests for server/index.js are already present. We must now test the remaining three files in server/lib. First and foremost, I've discovered a large chunk of code from a previously abandoned strategy that isn't even used in the project, from export const parseRawHTMLForData to export const clientData. I'm going to start by removing that: when code has fewer lines, there are fewer places for bugs to hide. There are also a couple of exports that I never used and that can be kept private to the module. It appears that one test should suffice for this file. However, there is a minor hiccup in the plan: this file depends on a previous build because it reads in the generated build output.

Technically, this makes sense because you would never try to render the app on the server unless you had a built app to render. Given that constraint, I'd say it's fine, and it's probably not worth the effort to refactor if we can simply ensure that our pipeline calls build before testing. If we wanted to have truly pure unit isolation, we could consider refactoring a bit more because the entire application is technically a dependency of SSR and thus could be mocked. However, using the actual build is probably more useful in any case. Throughout the test-writing process, you'll come across trade-offs like this all the time.

Next, server/lib/server.js is quite small, so let's knock it out. It appears that we basically delegate all responsibility to express. We expect express to uphold this contract, so we can simply ensure that it does; it doesn't seem to make sense to go beyond that.
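A hedged sketch of what such a test could look like, assuming purely for illustration that server.js exports a createServer(handler) helper that wires up an Express app:

```js
// server/lib/__tests__/server.test.js (sketch)
jest.mock('express', () => {
  const app = { use: jest.fn(), get: jest.fn(), listen: jest.fn() };
  const express = jest.fn(() => app);
  express.static = jest.fn(() => 'static-middleware');
  return express;
});

const express = require('express');
const { createServer } = require('../server'); // hypothetical export

test('delegates routing to express', () => {
  const handler = jest.fn();
  const app = createServer(handler);

  expect(express).toHaveBeenCalled();                 // an app was created
  expect(app.get).toHaveBeenCalledWith('*', handler); // the SSR handler was registered
});
```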

Finally, there is only one file left to test: server/lib/ssr.js. It's a bit lengthy, and there are several paths through it. I do want to make a couple of minor refactorings to help with isolation, such as moving the logic for generating the app into a separate function and using partial application to inject the application stream renderer, which makes things like redirects easy to mock. Also, because write and end are a little difficult to access, we can pull those out higher using partial application as well.
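A hedged sketch of that partial-application shape; createSsrMiddleware and renderAppStream are illustrative names, not the project's actual exports.

```js
// server/lib/ssr.js (sketch): the stream renderer is applied first and the
// (req, res) handler later, so tests can inject a fake renderer and fake write/end.
export const createSsrMiddleware = (renderAppStream) => (req, res) => {
  const write = (chunk) => res.write(chunk);
  const end = () => res.end();

  const stream = renderAppStream(req.url); // injected, so it's trivial to mock per test
  stream.on('data', write);
  stream.on('end', end);
};
```

In a test, createSsrMiddleware(jest.fn(() => fakeStream)) gives you control over every branch without rendering the real application at all.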

Let's write some tests now. If we don't set the jest-environment for this file specifically to node, the styled-components portion will not work. Because this file was more complex than the others, it took a few more tests to cover all of the branches; for clarity, each function is wrapped in its own describe block. When we run our tests now, we have complete coverage! Finally, before I go, I'm going to make a small change to my jest.config to enforce 100 percent coverage. It's much easier to keep coverage than it is to get to it the first time, and many of the modules we tested are unlikely to change in the future.
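That enforcement is plain Jest configuration: coverageThreshold fails the run if any metric drops below the numbers you set. (The node environment for the SSR test file can be pinned with a /** @jest-environment node */ docblock at the top of that file.)

```js
// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};
```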

Conclusion

My aim for this article was to demonstrate the techniques required to isolate units, refactoring your code with dependency injection and mocks so that hard-to-test code becomes easy to reach, as well as to discuss some of the benefits of achieving 100% coverage. It's even simpler if you start with TDD. I'm a firm believer that if achieving 100 percent coverage is difficult, it's because the code needs to be refactored.

In many cases, an E2E test will be a better fit for specific things. On top of unit coverage, a Cypress.io suite that loads the application and clicks around would boost our confidence even more. Working in a codebase with 100 percent coverage, in my opinion, does a great job of increasing your confidence in each release and, as a result, the velocity with which you can make and detect breaking changes.