Chapter 7

Testing Architecture Elements

In many projects I have witnessed, automated testing is a mystery. Everyone writes tests as they see fit because it's required by some dusty rule documented in a wiki, but no one can answer targeted questions about the team's testing strategy.

This chapter provides a testing strategy for hexagonal architecture. For each element of our architecture, we will discuss the type of test to cover it.

The Test Pyramid

Let's structure the discussion along the lines of the test pyramid, shown in the following figure. The test pyramid (which can be traced back to Mike Cohn's book "Succeeding with Agile" from 2009) is a metaphor that helps us decide how many tests of which type we should aim for:

Figure 7.1: According to the test pyramid, we should create many cheap tests and fewer expensive ones

The basic statement is that we should have high coverage of fine-grained tests that are cheap to build, easy to maintain, fast-running, and stable. These are unit tests verifying that a single "unit" (usually a class) works as expected.

Once tests combine multiple units and cross-unit boundaries, architectural boundaries, or even system boundaries, they tend to become more expensive to build, slower to run, and more brittle (failing due to some configuration error instead of a functional error). The pyramid tells us that the more expensive those tests become, the less we should aim for high coverage of those tests because otherwise we will spend too much time building tests instead of new functionality.

Depending on the context, the test pyramid is often shown with different layers. Let's take a look at the layers I chose for discussing how to test our hexagonal architecture. Note that the definitions of "unit test," "integration test," and "system test" vary with context; they may mean one thing in one project and something different in another. The following are the interpretations of these terms as we will use them in this chapter.

Unit tests are the base of the pyramid. A unit test usually instantiates a single class and tests its functionality through its interface. If the class under test has dependencies on other classes, those other classes are not instantiated but replaced with mocks that simulate the behavior of the real classes as needed during the test.

Integration tests form the next layer of the pyramid. These tests instantiate a network of multiple units and verify whether this network works as expected by sending some data into it through the interface of an entry class. In our interpretation, integration tests will cross the boundary between two layers, so the network of objects is not complete or must work against mocks at some point.

System tests, finally, spin up the whole network of objects that makes up our application and verify whether a certain use case works as expected through all the layers of the application.

Above the system tests, there might be a layer of end-to-end tests that include the UI of the application. We will not consider end-to-end tests here since we are only discussing backend architecture in this book.

Now that we have defined some test types, let's see which type of test best fits each of the layers of our hexagonal architecture.

Testing a Domain Entity with Unit Tests

We will start by looking at a domain entity at the center of our architecture. Let's recall the Account entity from Chapter 4, Implementing a Use Case. The state of an Account consists of a balance the account had at a certain point in the past (the baseline balance) and a list of deposits and withdrawals (activities) since then. We now want to verify that the withdraw() method works as expected:

class AccountTest {

  @Test
  void withdrawalSucceeds() {
    AccountId accountId = new AccountId(1L);
    Account account = defaultAccount()
        .withAccountId(accountId)
        .withBaselineBalance(Money.of(555L))
        .withActivityWindow(new ActivityWindow(
            defaultActivity()
                .withTargetAccount(accountId)
                .withMoney(Money.of(999L)).build(),
            defaultActivity()
                .withTargetAccount(accountId)
                .withMoney(Money.of(1L)).build()))
        .build();

    boolean success = account.withdraw(Money.of(555L), new AccountId(99L));

    assertThat(success).isTrue();
    assertThat(account.getActivityWindow().getActivities()).hasSize(3);
    assertThat(account.calculateBalance()).isEqualTo(Money.of(1000L));
  }
}

The preceding test is a plain unit test that instantiates Account in a specific state, calls its withdraw() method, and verifies that the withdrawal was successful and had the expected side effects on the state of the Account object under test.

The test is rather easy to set up, easy to understand, and it runs very quickly. Tests don't come much simpler than this. Unit tests like this are our best bet to verify the business rules encoded within our domain entities. We don't need any other type of test here, since the behavior of a domain entity has few or no dependencies on other classes.
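The failure case is just as easy to cover. The following is only a sketch, assuming that the withdraw() method refuses to overdraw the account; the exact rule may differ in your own domain:

  @Test
  void withdrawalFails() {
    AccountId accountId = new AccountId(1L);
    Account account = defaultAccount()
        .withAccountId(accountId)
        .withBaselineBalance(Money.of(555L))
        .withActivityWindow(new ActivityWindow(
            defaultActivity()
                .withTargetAccount(accountId)
                .withMoney(Money.of(999L)).build(),
            defaultActivity()
                .withTargetAccount(accountId)
                .withMoney(Money.of(1L)).build()))
        .build();

    // Trying to withdraw more than the current balance of 1,555 must fail ...
    boolean success = account.withdraw(Money.of(1556L), new AccountId(99L));

    // ... and must not leave any trace in the account's state.
    assertThat(success).isFalse();
    assertThat(account.getActivityWindow().getActivities()).hasSize(2);
    assertThat(account.calculateBalance()).isEqualTo(Money.of(1555L));
  }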

Testing a Use Case with Unit Tests

Going a layer outward, the next architecture element to test is the use cases. Let's look at a test of SendMoneyService, discussed in Chapter 4, Implementing a Use Case. The SendMoney use case locks the source Account so no other transactions can change its balance in the meantime. If we can successfully withdraw money from the source account, we lock the target account as well and deposit the money there. Finally, we unlock both accounts again.

We want to verify that everything works as expected when the transaction succeeds:

class SendMoneyServiceTest {

  // declaration of fields omitted

  @Test
  void transactionSucceeds() {

    Account sourceAccount = givenSourceAccount();
    Account targetAccount = givenTargetAccount();

    givenWithdrawalWillSucceed(sourceAccount);
    givenDepositWillSucceed(targetAccount);

    Money money = Money.of(500L);

    SendMoneyCommand command = new SendMoneyCommand(
        sourceAccount.getId(),
        targetAccount.getId(),
        money);

    boolean success = sendMoneyService.sendMoney(command);

    assertThat(success).isTrue();

    AccountId sourceAccountId = sourceAccount.getId();
    AccountId targetAccountId = targetAccount.getId();

    then(accountLock).should().lockAccount(eq(sourceAccountId));
    then(sourceAccount).should().withdraw(eq(money), eq(targetAccountId));
    then(accountLock).should().releaseAccount(eq(sourceAccountId));

    then(accountLock).should().lockAccount(eq(targetAccountId));
    then(targetAccount).should().deposit(eq(money), eq(sourceAccountId));
    then(accountLock).should().releaseAccount(eq(targetAccountId));

    thenAccountsHaveBeenUpdated(sourceAccountId, targetAccountId);
  }

  // helper methods omitted
}

To make the test a little more readable, it's structured into given/when/then sections, which are commonly used in behavior-driven development.

In the "given" section, we create the source and target Account instance and put them into the correct state with some methods whose names start with given...(). We also create an instance of SendMoneyCommand to act as input to the use case. In the "when" section, we simply call the sendMoney() method to invoke the use cases. The "then" section asserts that the transaction was successful and verifies that certain methods have been called on the source and target Account and on the AccountLock instance, which is responsible for locking and unlocking the accounts.

Under the hood, the test makes use of the Mockito library (https://site.mockito.org/) to create mock objects in the given...() methods. Mockito also provides the then() method to verify whether a certain method has been called on a mock object.
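The omitted fields and helper methods might look something like the following sketch. This is just one plausible way to set them up with Mockito's BDD-style given(), assuming the service's output port for loading accounts (loadAccountPort) is a mocked field as well; the names are illustrative, not copied from the actual implementation:

  private final LoadAccountPort loadAccountPort = Mockito.mock(LoadAccountPort.class);
  private final AccountLock accountLock = Mockito.mock(AccountLock.class);

  private Account givenSourceAccount() {
    Account account = Mockito.mock(Account.class);
    given(account.getId()).willReturn(new AccountId(41L));
    // Let the mocked load port hand out this account when the service asks for it.
    given(loadAccountPort.loadAccount(eq(account.getId()), any(LocalDateTime.class)))
        .willReturn(account);
    return account;
  }

  private void givenWithdrawalWillSucceed(Account account) {
    given(account.withdraw(any(Money.class), any(AccountId.class)))
        .willReturn(true);
  }

  // givenTargetAccount() and givenDepositWillSucceed() are analogous.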

Since the use case service under test is stateless, we cannot verify a certain state in the "then" section. Instead, the test verifies that the service interacted with certain methods on its (mocked) dependencies. This means that the test is vulnerable to changes in the structure of the code under test and not only its behavior. This, in turn, means that there is a higher chance that the test has to be modified if the code under test is refactored.

With this in mind, we should think hard about which interactions we actually want to verify in the test. It might be a good idea not to verify all interactions as we did in the preceding test but instead focus on the most important ones. Otherwise, we have to change the test with every single change to the class under test, undermining the value of the test.
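For example, we might decide that the details of the locking protocol are an implementation detail and shrink the "then" section of the preceding test to the interactions that actually move money. This is a judgment call, not a rule:

    assertThat(success).isTrue();
    then(sourceAccount).should().withdraw(eq(money), eq(targetAccountId));
    then(targetAccount).should().deposit(eq(money), eq(sourceAccountId));
    thenAccountsHaveBeenUpdated(sourceAccountId, targetAccountId);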

While this test is still a unit test, it borders on being an integration test, because we are testing the interaction of dependencies. It's easier to create and maintain than a full-blown integration test, however, because we are working with mocks and don't have to manage the real dependencies.

Testing a Web Adapter with Integration Tests

Moving outward another layer, we arrive at our adapters. Let's discuss testing a web adapter.

Recall that a web adapter takes input, for example, in the form of JSON strings, via HTTP, maybe does some validation on it, maps the input to the format a use case expects, and then passes it to that use case. It then maps the result of the use case back to JSON and returns it to the client via an HTTP response.

In the test for a web adapter, we want to make certain that all those steps work as expected:

@WebMvcTest(controllers = SendMoneyController.class)
class SendMoneyControllerTest {

  @Autowired
  private MockMvc mockMvc;

  @MockBean
  private SendMoneyUseCase sendMoneyUseCase;

  @Test
  void testSendMoney() throws Exception {
    mockMvc.perform(
        post("/accounts/send/{sourceAccountId}/{targetAccountId}/{amount}",
            41L, 42L, 500)
            .header("Content-Type", "application/json"))
        .andExpect(status().isOk());

    then(sendMoneyUseCase).should()
        .sendMoney(eq(new SendMoneyCommand(
            new AccountId(41L),
            new AccountId(42L),
            Money.of(500L))));
  }
}

The preceding test is a standard integration test for a web controller named SendMoneyController built with the Spring Boot framework. In the testSendMoney() method, we build a mock HTTP request that carries the input (the account IDs and the amount) in the request path and send it to the web controller.

With the isOk() method, we then verify that the status of the HTTP response is 200 (OK), and we verify that the mocked use case has been called with the expected command.

Most of the responsibilities of a web adapter are covered by this test.

We are not actually testing over the HTTP protocol since we are mocking that away with the MockMvc object. We trust that the framework translates everything to and from HTTP properly; there's no need to test the framework.

The whole path of mapping the incoming request into a SendMoneyCommand object is covered, however. If we built the SendMoneyCommand object as a self-validating command, as explained in Chapter 4, Implementing a Use Case, we have even made sure that this mapping produces syntactically valid input for the use case. Also, we have verified that the use case is actually called and that the HTTP response has the expected status.
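As a reminder, such a self-validating command might look like the following sketch along the lines of Chapter 4, where a SelfValidating base class triggers Bean Validation in the constructor; the details may differ from the actual implementation:

public class SendMoneyCommand extends SelfValidating<SendMoneyCommand> {

  @NotNull
  private final AccountId sourceAccountId;

  @NotNull
  private final AccountId targetAccountId;

  @NotNull
  private final Money money;

  public SendMoneyCommand(
      AccountId sourceAccountId,
      AccountId targetAccountId,
      Money money) {
    this.sourceAccountId = sourceAccountId;
    this.targetAccountId = targetAccountId;
    this.money = money;
    // Throws an exception if any constraint above is violated,
    // so an invalid command can never reach the use case.
    this.validateSelf();
  }
}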

So, why is this an integration test and not a unit test? Even though it seems that we are only testing a single web controller class in this test, there's a lot more going on under the covers. With the @WebMvcTest annotation, we tell Spring to instantiate a whole network of objects that is responsible for responding to certain request paths, the mapping between Java and JSON, validating HTTP input, and so on. And in this test, we are verifying that our web controller works as a part of this network.

Since the web controller is heavily bound to the Spring framework, it makes sense to test it integrated into this framework instead of testing it in isolation. If we tested the web controller with a plain unit test, we'd lose coverage of all the mapping and validation and HTTP stuff and we could never be sure whether it actually worked in production, where it's just a cog in the machine of the framework.

Testing a Persistence Adapter with Integration Tests

For a similar reason, it makes sense to cover persistence adapters with integration tests instead of unit tests, since we not only want to verify the logic within the adapter, but also the mapping into the database.

We want to test the persistence adapter we built in Chapter 6, Implementing a Persistence Adapter. The adapter has two methods, one for loading an Account entity from the database and another to save new account activities to the database:

@DataJpaTest
@Import({AccountPersistenceAdapter.class, AccountMapper.class})
class AccountPersistenceAdapterTest {

  @Autowired
  private AccountPersistenceAdapter adapterUnderTest;

  @Autowired
  private ActivityRepository activityRepository;

  @Test
  @Sql("AccountPersistenceAdapterTest.sql")
  void loadsAccount() {
    Account account = adapterUnderTest.loadAccount(
        new AccountId(1L),
        LocalDateTime.of(2018, 8, 10, 0, 0));

    assertThat(account.getActivityWindow().getActivities()).hasSize(2);
    assertThat(account.calculateBalance()).isEqualTo(Money.of(500));
  }

  @Test
  void updatesActivities() {
    Account account = defaultAccount()
        .withBaselineBalance(Money.of(555L))
        .withActivityWindow(new ActivityWindow(
            defaultActivity()
                .withId(null)
                .withMoney(Money.of(1L)).build()))
        .build();

    adapterUnderTest.updateActivities(account);

    assertThat(activityRepository.count()).isEqualTo(1);

    ActivityJpaEntity savedActivity = activityRepository.findAll().get(0);
    assertThat(savedActivity.getAmount()).isEqualTo(1L);
  }
}

With @DataJpaTest, we are telling Spring to instantiate the network of objects that are needed for database access, including our Spring Data repositories that connect to the database. We add an additional @Import statement to make sure that certain objects are added to that network. These objects are needed by the adapter under test to map incoming domain objects to database objects, for instance.

In the test for the loadAccount() method, we put the database into a certain state using a SQL script. Then, we simply load the account through the adapter API and verify that it has the state that we would expect it to have given the database state in the SQL script.

The test for updateActivities() goes the other way around. We create an Account object with new account activity and pass it to the adapter to persist. Then, we check whether the activity has been saved to the database through the API of ActivityRepository.

An important aspect of these tests is that we are not mocking away the database. The tests are actually hitting the database. Had we mocked the database away, the tests would still cover the same lines of code, producing the same high coverage of lines of code. But despite this high coverage, the tests would still have a rather high chance of failing in a setup with a real database due to errors in SQL statements or unexpected mapping errors between database tables and Java objects.

Note that, by default, Spring will spin up an in-memory database to use during tests. This is very practical, as we don't have to configure anything, and the tests will work out of the box.

Since this in-memory database is most probably not the database we are using in production, however, there is still a significant chance of something going wrong with the real database even when the tests worked perfectly against the in-memory database. Databases love to implement their own flavor of SQL, for instance.

For this reason, persistence adapter tests should run against the real database. Libraries such as Testcontainers (https://www.testcontainers.org/) are a great help in this regard, spinning up a Docker container with a database on demand.

Running against the real database has the added benefit that we don't have to take care of two different database systems. If we are using the in-memory database during tests, we might have to configure it in a certain way, or we might have to create separate versions of database migration scripts for each database, which is no fun at all.
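A minimal sketch of what this could look like with Testcontainers and Spring Boot follows. The PostgreSQL image and the property names are examples, not the book's actual setup; the important part is that the test no longer replaces the data source with an in-memory database:

@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import({AccountPersistenceAdapter.class, AccountMapper.class})
@Testcontainers
class AccountPersistenceAdapterWithRealDbTest {

  // Spins up a throwaway PostgreSQL instance in a Docker container for the test run.
  @Container
  static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

  // Points Spring's data source at the container instead of an in-memory database.
  @DynamicPropertySource
  static void databaseProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.datasource.url", postgres::getJdbcUrl);
    registry.add("spring.datasource.username", postgres::getUsername);
    registry.add("spring.datasource.password", postgres::getPassword);
  }

  // ... tests as in AccountPersistenceAdapterTest ...
}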

Testing Main Paths with System Tests

At the top of the pyramid are system tests. A system test starts up the whole application and runs requests against its API, verifying that all our layers work in concert.

In a system test for the "Send Money" use case, we send an HTTP request to the application and validate the response as well as the new balance of the account:

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
class SendMoneySystemTest {

  @Autowired
  private TestRestTemplate restTemplate;

  @Test
  @Sql("SendMoneySystemTest.sql")
  void sendMoney() {

    Money initialSourceBalance = sourceAccount().calculateBalance();
    Money initialTargetBalance = targetAccount().calculateBalance();

    ResponseEntity<Object> response = whenSendMoney(
        sourceAccountId(),
        targetAccountId(),
        transferredAmount());

    then(response.getStatusCode())
        .isEqualTo(HttpStatus.OK);

    then(sourceAccount().calculateBalance())
        .isEqualTo(initialSourceBalance.minus(transferredAmount()));

    then(targetAccount().calculateBalance())
        .isEqualTo(initialTargetBalance.plus(transferredAmount()));
  }

  private ResponseEntity<Object> whenSendMoney(
      AccountId sourceAccountId,
      AccountId targetAccountId,
      Money amount) {

    HttpHeaders headers = new HttpHeaders();
    headers.add("Content-Type", "application/json");
    HttpEntity<Void> request = new HttpEntity<>(null, headers);

    return restTemplate.exchange(
        "/accounts/send/{sourceAccountId}/{targetAccountId}/{amount}",
        HttpMethod.POST,
        request,
        Object.class,
        sourceAccountId.getValue(),
        targetAccountId.getValue(),
        amount.getAmount());
  }

  // some helper methods omitted
}

With @SpringBootTest, we are telling Spring to start up the whole network of objects that make up the application. We are also configuring the application to expose itself on a random port.

In the test method, we simply create a request, send it to the application, and then check the response status and the new balance of the accounts.

We are using TestRestTemplate to send the request and not MockMvc, as we did earlier in the web adapter test. This means we are doing real HTTP, bringing the test a little closer to a production environment.

Just as we are going over real HTTP, we are going through the real output adapters. In our case, this is only a persistence adapter that connects the application to a database. In an application that talks to other systems, we would have additional output adapters in place. It's not always feasible to have all those third-party systems up and running, even for a system test, so we might mock them away, after all. Our hexagonal architecture makes this as easy as it can be for us since we only have to stub out a couple of output port interfaces.

Note that I went out of my way to make the test as readable as possible. I hid every bit of ugly logic within helper methods. These methods now form a domain-specific language that we can use to verify the state of things.
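The omitted helper methods might look something like this sketch, assuming we reuse the application's LoadAccountPort to read the current account state; the names and IDs are illustrative:

  // loadAccountPort is assumed to be an @Autowired field.
  private Account sourceAccount() {
    return loadAccount(sourceAccountId());
  }

  private Account targetAccount() {
    return loadAccount(targetAccountId());
  }

  private Account loadAccount(AccountId accountId) {
    // Reads the account state directly through the application's own port.
    return loadAccountPort.loadAccount(accountId, LocalDateTime.now());
  }

  private AccountId sourceAccountId() {
    return new AccountId(1L);
  }

  private AccountId targetAccountId() {
    return new AccountId(2L);
  }

  private Money transferredAmount() {
    return Money.of(500L);
  }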

While a domain-specific language like this is a good idea in any type of test, it's even more important in system tests. System tests simulate the real users of the application much better than a unit or integration test can, so we can use them to verify the application from the viewpoint of the user. This is much easier with a suitable vocabulary at hand. This vocabulary also enables domain experts, who are best suited to embody a user of the application and who probably aren't programmers, to reason about the tests and give feedback. There are whole libraries for behavior-driven development, such as JGiven (http://jgiven.org/), that provide a framework to create a vocabulary for your tests.

If we have created unit and integration tests as described in the previous sections, the system tests will cover a lot of the same code. Do they even provide any additional benefits? Yes, they do. Usually, they flush out other types of bugs than the unit and integration tests do. Some mapping between the layers could be off, for instance, which we would not notice with the unit and integration tests alone.

System tests play out their strengths best if they combine multiple use cases to create scenarios. Each scenario represents a certain path a user might typically take through the application. If the most important scenarios are covered by passing system tests, we can assume that we haven't broken the application with our latest modifications and it is ready to ship.

How Much Testing is Enough?

A question many project teams I have been part of couldn't answer is how much testing we should do. Is it enough if our tests cover 80% of our lines of code? Should it be higher than that?

Line coverage is a bad metric to measure test success. Any goal other than 100% is completely meaningless because important parts of the codebase might not be covered at all. And even at 100%, we still can't be sure that every bug has been squashed.

I suggest measuring test success by how comfortable we feel to ship the software. If we trust the tests enough to ship after having executed them, we are good. The more often we ship, the more trust we can have in our tests. If we only ship twice a year, no one will trust the tests because they will only prove themselves twice a year.

This requires a leap of faith the first couple of times we ship, but if we make it a priority to fix and learn from bugs in production, we are on the right track.

For each production bug, we should ask the question, "Why didn't our tests catch this bug?", document the answer, and then add a test that covers it. Over time, this will make us comfortable with shipping and the documentation will even provide a metric to gauge our improvement over time.

It helps, however, to start with a strategy that defines the tests we should create. One such strategy for our hexagonal architecture is this one:

  • While implementing a domain entity, cover it with a unit test.
  • While implementing a use case, cover it with a unit test.
  • While implementing an adapter, cover it with an integration test.
  • Cover the most important paths a user can take through the application with a system test.

Note the words "while implementing": when tests are done during the development of a feature and not after, they become a development tool and no longer feel like a chore.

If we have to spend an hour fixing tests each time we add a new field, however, we are doing something wrong. Probably, our tests are too vulnerable to structural changes in the code and we should look at how to improve that. Tests lose their value if we have to modify them for each refactoring.

How Does This Help Me Build Maintainable Software?

The hexagonal architecture style cleanly separates domain logic and outward-facing adapters. This helps us to define a clear testing strategy that covers the central domain logic with unit tests and the adapters with integration tests.

The input and output ports provide very visible mocking points in tests. For each port, we can decide to mock it or to use the real implementation. If the ports are each very small and focused, mocking them is a breeze instead of a chore. The fewer methods a port interface provides, the less confusion there is about which of the methods we have to mock in a test.
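For example, an output port for loading accounts, such as the one sketched below, can be stubbed with a single line, leaving no doubt about what the code under test needs. The interface and test class shown here are modeled on earlier chapters, not copied from them:

// A small, focused output port: there is only one method we could possibly need to stub.
interface LoadAccountPort {
  Account loadAccount(AccountId accountId, LocalDateTime baselineDate);
}

class SomeDomainServiceTest {

  private final LoadAccountPort loadAccountPort = Mockito.mock(LoadAccountPort.class);

  @Test
  void usesTheMockedPort() {
    // One line of stubbing is all the setup this dependency needs.
    given(loadAccountPort.loadAccount(any(AccountId.class), any(LocalDateTime.class)))
        .willReturn(defaultAccount().build());

    // ... instantiate the service with the mocked port and exercise it ...
  }
}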

If it becomes too much of a burden to mock things away or if we don't know what kind of test we should use to cover a certain part of the code base, it's a warning sign. In this regard, our tests have the additional responsibility of acting as a canary – to warn us about flaws in the architecture and to steer us back on the path to creating a maintainable codebase.