Chapter 6 – Get Your Hands Dirty on Clean Architecture


Implementing a Persistence Adapter

In Chapter 1, What's Wrong with Layers?, I ranted about the traditional layered architecture and claimed that it supports "database-driven design" because, in the end, everything depends on the persistence layer. In this chapter, we will have a look at how to make the persistence layer a plugin to the application layer to invert this dependency.

Dependency Inversion

Instead of a persistence layer, we will talk about a persistence adapter that provides persistence functionality to the application services.

The following figure shows how we can apply the Dependency Inversion Principle to do just that:

Figure 6.1: The services from the core use ports to access the persistence adapter

Our application services call port interfaces to access persistence functionality. These ports are implemented by a persistence adapter class that does the actual persistence work and is responsible for talking to the database.

In hexagonal architecture lingo, the persistence adapter is a "driven" or "outgoing" adapter, because it's called by our application and not the other way around.

The ports are effectively a layer of indirection between the application services and the persistence code. Let's remind ourselves that we are adding this layer of indirection so that we can evolve the domain code without having to think about persistence problems, that is, without code dependencies on the persistence layer. Refactoring the persistence code will then not necessarily lead to a code change in the core.

Naturally, at runtime, we still have a dependency from our application core to the persistence adapter. If we modify code in the persistence layer and introduce a bug, for example, we may still break functionality in the application core. But as long as the contracts of the ports are fulfilled, we are free to do as we want in the persistence adapter without affecting the core.
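To make the dependency direction concrete, here is a minimal sketch. All names and signatures here are illustrative (the chapter's real port, LoadAccountPort, appears in the example later on); the point is only that the port lives in the core and the adapter implements it:

```java
import java.util.HashMap;
import java.util.Map;

// Outgoing port, defined in the application core. The name and
// signature are made up for this sketch.
interface LoadBalancePort {
    long loadBalance(long accountId);
}

// Application service: depends only on the port, never on the adapter.
class GetBalanceService {
    private final LoadBalancePort loadBalancePort;

    GetBalanceService(LoadBalancePort loadBalancePort) {
        this.loadBalancePort = loadBalancePort;
    }

    long balanceOf(long accountId) {
        return loadBalancePort.loadBalance(accountId);
    }
}

// Driven adapter: implements the port. Because the dependency points
// inward, we can replace this class (say, with a JPA-based adapter)
// without touching the core.
class InMemoryAccountAdapter implements LoadBalancePort {
    private final Map<Long, Long> balances = new HashMap<>();

    void store(long accountId, long balance) {
        balances.put(accountId, balance);
    }

    @Override
    public long loadBalance(long accountId) {
        return balances.getOrDefault(accountId, 0L);
    }
}
```

Note that the service's constructor accepts any implementation of the port, which is exactly what makes the persistence side a swappable plugin.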

The Responsibilities of a Persistence Adapter

Let's have a look at what a persistence adapter usually does:

  1. Takes input
  2. Maps input into a database format
  3. Sends input to the database
  4. Maps database output into an application format
  5. Returns output

The persistence adapter takes input through a port interface. The input model may be a domain entity, or an object dedicated to a specific database operation, as specified by the interface.

It then maps the input model to a format it can work with to modify or query the database. In Java projects, we commonly use the Java Persistence API (JPA) to talk to a database, so we might map the input into JPA entity objects that reflect the structure of the database tables. Depending on the context, mapping the input model into JPA entities may be a lot of work for little gain, so we will talk about strategies without mapping in Chapter 8, Mapping between Boundaries.

Instead of using JPA or another object-relational mapping framework, we could use any other technique to talk to the database. We might map the input model to plain SQL statements and send these statements to the database, or we might serialize incoming data into files and read them back from there.

The important part is that the input model to the persistence adapter lies within the application core and not within the persistence adapter itself so that changes in the persistence adapter don't affect the core.

Next, the persistence adapter queries the database and receives the query results.

Finally, it maps the database answer into the output model expected by the port and returns it. Again, it's important that the output model lies within the application core and not within the persistence adapter.

Aside from the fact that the input and output models lie in the application core instead of the persistence adapter itself, the responsibilities are not really different from those of a traditional persistence layer.
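The five steps listed above can be sketched as one save/load pair. Everything here is made up for illustration: a map stands in for the database and a comma-separated string stands in for a table row:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative persistence adapter; all names are invented.
class AccountPersistenceSketch {
    private final Map<Long, String> accountTable = new HashMap<>(); // stand-in database

    void save(long id, String name) {
        // (1) take input, (2) map it into a "database format"...
        String row = id + "," + name.toUpperCase();
        // (3) ...and send it to the database
        accountTable.put(id, row);
    }

    String load(long id) {
        String row = accountTable.get(id); // query the "database"
        if (row == null) {
            return null;
        }
        // (4) map the database output into an application format
        // and (5) return it
        return row.split(",")[1].toLowerCase();
    }
}
```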

But implementing a persistence adapter as described previously will inevitably raise some questions that we probably wouldn't ask when implementing a traditional persistence layer, because we are so used to the traditional way that we don't think about them.

Slicing Port Interfaces

One question that comes to mind when implementing services is how to slice the port interfaces that define the database operations available to the application core.

It's common practice to create a single repository interface that provides all database operations for a certain entity, as shown in the following figure:

Figure 6.2: Centralizing all database operations into a single outgoing port interface makes all services depend on methods they don't need

Each service that relies on database operations will then have a dependency on this single "broad" port interface, even if it uses only a single method from the interface. This means we have unnecessary dependencies in our codebase.

Dependencies on methods that we don't need in our context make the code harder to understand and to test. Imagine we are writing a unit test for RegisterAccountService from the preceding figure. Which of the methods of the AccountRepository interface do we have to mock? We first have to find out which of the AccountRepository methods the service actually calls. Mocking only part of the interface may lead to other problems, because the next person working on that test might expect the interface to be completely mocked and run into errors. So they, again, will have to do some research.

To put it in the words of Robert C. Martin:

"Depending on something that carries baggage that you don't need can cause you troubles that you didn't expect." (Clean Architecture by Robert C. Martin, page 86).

The Interface Segregation Principle provides an answer to this problem. It states that broad interfaces should be split into specific ones so that clients only know the methods they need.

If we apply this to our outgoing ports, we might get the result shown in the following figure:

Figure 6.3: Applying the Interface Segregation Principle removes unnecessary dependencies and makes the existing dependencies more visible

Each service now only depends on the methods it actually needs. What's more, the names of the ports clearly state what they are about. In a test, we no longer have to think about which methods to mock, since most of the time there is only one method per port.

Having very narrow ports like these makes coding a plug-and-play experience. When working on a service, we just "plug in" the ports we need; there's no baggage to carry around.

Of course, the "one method per port" approach may not be applicable in all circumstances. There may be groups of database operations that are so cohesive and often used together that we may want to bundle them together in a single interface.
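In code, the segregation might look as follows. The port names are taken from the figures, but the method signatures are simplified assumptions for this sketch:

```java
import java.time.LocalDateTime;

// Stand-in domain type so the sketch compiles on its own.
record Account(long id) {}

// Before ISP: one broad interface. Every service depends on all of
// these methods, even when it calls only one of them.
interface AccountRepository {
    Account findById(long id);
    void save(Account account);
    void delete(long id);
}

// After ISP: one narrow outgoing port per operation.
interface LoadAccountPort {
    Account loadAccount(long accountId, LocalDateTime baselineDate);
}

interface UpdateAccountStatePort {
    void updateActivities(Account account);
}

// A test double now only implements the single method under test; no
// research is needed into which other methods must be mocked.
class StubLoadAccountPort implements LoadAccountPort {
    @Override
    public Account loadAccount(long accountId, LocalDateTime baselineDate) {
        return new Account(accountId);
    }
}
```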

Slicing Persistence Adapters

In the preceding figures, we have seen a single persistence adapter class that implements all persistence ports. There is no rule, however, that forbids us from creating more than one class, as long as all persistence ports are implemented.

We might choose, for instance, to implement one persistence adapter per domain class for which we need persistence operations (or "aggregate" in DDD lingo), as shown in the following figure:

Figure 6.4: We can create multiple persistence adapters, one for each aggregate

This way, our persistence adapters are automatically sliced along the seams of the domain that we support with persistence functionality.

We might split our persistence adapters into even more classes, for instance, when we want to implement a couple of persistence ports using JPA or another OR mapper and some other ports using plain SQL for better performance. We might then create one JPA adapter and one plain SQL adapter, each implementing a subset of the persistence ports.

Remember that our domain code doesn't care about which class ultimately fulfills the contracts defined by the persistence ports. We are free to do as we see fit in the persistence layer, as long as all ports are implemented.

The "one persistence adapter per aggregate" approach is also a good foundation for separating the persistence needs for multiple bounded contexts in the future. Say, after a time, we identify a bounded context responsible for billing use cases. The following figure gives an overview of this scenario:

Figure 6.5: If we want to create hard boundaries between bounded contexts, each bounded context should have its own persistence adapter(s)

Each bounded context has its own persistence adapter (or potentially more than one, as described previously). The term "bounded context" implies boundaries, which means that services of the account context may not access persistence adapters of the billing context and vice versa. If one context needs something from the other context, it can access it via a dedicated incoming port.

Example with Spring Data JPA

Let's have a look at a code example that implements AccountPersistenceAdapter from the preceding figures. This adapter will have to save and load accounts to and from the database. We saw the Account entity in Chapter 4, Implementing a Use Case, but here is its skeleton again for reference:

package buckpal.domain;

@AllArgsConstructor(access = AccessLevel.PRIVATE)
public class Account {

  @Getter private final AccountId id;

  private final Money baselineBalance;

  @Getter private final ActivityWindow activityWindow;

  public static Account withoutId(
          Money baselineBalance,
          ActivityWindow activityWindow) {
    return new Account(null, baselineBalance, activityWindow);
  }

  public static Account withId(
          AccountId accountId,
          Money baselineBalance,
          ActivityWindow activityWindow) {
    return new Account(accountId, baselineBalance, activityWindow);
  }

  public Money calculateBalance() {
    // ...
  }

  public boolean withdraw(Money money, AccountId targetAccountId) {
    // ...
  }

  public boolean deposit(Money money, AccountId sourceAccountId) {
    // ...
  }

}

Note that the Account class is not a simple data class with getters and setters but instead tries to be as immutable as possible. It only provides factory methods that create an Account entity in a valid state, and all mutating methods perform some validation (such as checking the account balance before withdrawing money), so that we cannot create an invalid domain model.
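The balance check mentioned above might look roughly like this. This is a simplified sketch with a plain long balance instead of the Money and ActivityWindow types; only the validation idea is the point:

```java
// Simplified stand-in for the Account entity; the class name is invented.
class SketchAccount {
    private long balance;

    SketchAccount(long balance) {
        this.balance = balance;
    }

    // Mutation goes through a method that validates first, so the
    // entity can never enter an invalid state.
    boolean withdraw(long amount) {
        if (amount < 0 || balance - amount < 0) {
            return false; // reject: the withdrawal would overdraw the account
        }
        balance -= amount;
        return true;
    }

    long balance() {
        return balance;
    }
}
```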

We will use Spring Data JPA to talk to the database, so we also need @Entity-annotated classes representing the database state of an account:

package buckpal.adapter.persistence;

@Entity
@Table(name = "account")
@Data
@AllArgsConstructor
@NoArgsConstructor
class AccountJpaEntity {

  @Id
  @GeneratedValue
  private Long id;

}

And here is the corresponding class for the activity table:

package buckpal.adapter.persistence;

@Entity
@Table(name = "activity")
@Data
@AllArgsConstructor
@NoArgsConstructor
class ActivityJpaEntity {

  @Id
  @GeneratedValue
  private Long id;

  @Column private LocalDateTime timestamp;

  @Column private Long ownerAccountId;

  @Column private Long sourceAccountId;

  @Column private Long targetAccountId;

  @Column private Long amount;

}
The state of an account consists merely of an ID at this stage. Later, additional fields such as a user ID may be added. More interesting is ActivityJpaEntity, which contains all activities for a specific account. We could have connected ActivityJpaEntity with AccountJpaEntity via JPA's @ManyToOne or @OneToMany annotations to mark the relationship between them, but we have opted to leave this out for now, as it adds side effects to database queries. In fact, at this stage, it would probably be easier to use a simpler ORM than JPA to implement the persistence adapter, but we will use it anyway because we think we might need it in the future.

Does that sound familiar to you? You choose JPA as an OR mapper because it's the thing people use for this problem. A couple of months into development, you curse eager and lazy loading and the caching features and wish for something simpler. JPA is a great tool, but for many problems, simpler solutions may be, well, simpler.

Next, we use Spring Data to create repository interfaces that provide basic CRUD functionality out of the box, as well as custom queries to load certain activities from the database:

interface AccountRepository extends JpaRepository<AccountJpaEntity, Long> {
}

And here's the code for the ActivityRepository:

interface ActivityRepository extends JpaRepository<ActivityJpaEntity, Long> {

  @Query("select a from ActivityJpaEntity a " +
      "where a.ownerAccountId = :ownerAccountId " +
      "and a.timestamp >= :since")
  List<ActivityJpaEntity> findByOwnerSince(
      @Param("ownerAccountId") Long ownerAccountId,
      @Param("since") LocalDateTime since);

  @Query("select sum(a.amount) from ActivityJpaEntity a " +
      "where a.targetAccountId = :accountId " +
      "and a.ownerAccountId = :accountId " +
      "and a.timestamp < :until")
  Long getDepositBalanceUntil(
      @Param("accountId") Long accountId,
      @Param("until") LocalDateTime until);

  @Query("select sum(a.amount) from ActivityJpaEntity a " +
      "where a.sourceAccountId = :accountId " +
      "and a.ownerAccountId = :accountId " +
      "and a.timestamp < :until")
  Long getWithdrawalBalanceUntil(
      @Param("accountId") Long accountId,
      @Param("until") LocalDateTime until);

}


Spring Boot will automatically find these repositories, and Spring Data will work its magic to provide an implementation behind the repository interface that will actually talk to the database.

Now that we have JPA entities and repositories in place, we can implement the persistence adapter that provides the persistence functionality for our application:



@RequiredArgsConstructor
@Component
class AccountPersistenceAdapter implements
    LoadAccountPort,
    UpdateAccountStatePort {

  private final AccountRepository accountRepository;
  private final ActivityRepository activityRepository;
  private final AccountMapper accountMapper;

  @Override
  public Account loadAccount(
          AccountId accountId,
          LocalDateTime baselineDate) {

    AccountJpaEntity account =
        accountRepository.findById(accountId.getValue())
            .orElseThrow(EntityNotFoundException::new);

    List<ActivityJpaEntity> activities =
        activityRepository.findByOwnerSince(
            accountId.getValue(),
            baselineDate);

    Long withdrawalBalance = orZero(activityRepository
        .getWithdrawalBalanceUntil(
            accountId.getValue(),
            baselineDate));

    Long depositBalance = orZero(activityRepository
        .getDepositBalanceUntil(
            accountId.getValue(),
            baselineDate));

    return accountMapper.mapToDomainEntity(
        account,
        activities,
        withdrawalBalance,
        depositBalance);
  }

  private Long orZero(Long value) {
    return value == null ? 0L : value;
  }

  @Override
  public void updateActivities(Account account) {
    for (Activity activity : account.getActivityWindow().getActivities()) {
      if (activity.getId() == null) {
        activityRepository.save(accountMapper.mapToJpaEntity(activity));
      }
    }
  }

}

The persistence adapter implements two ports that are needed by the application: LoadAccountPort and UpdateAccountStatePort.

To load an account from the database, we load it from AccountRepository and then load the activities of this account for a certain time window through ActivityRepository.

To create a valid Account domain entity, we also need the balance the account had before the start of this activity window, so we get the sum of all withdrawals and deposits for this account from the database. Finally, we map all this data to an Account domain entity and return it to the caller.

To update the state of an account, we iterate all the activities of the Account entity and check whether they have IDs. If they don't, they are new activities, which we persist through ActivityRepository.

In the scenario described previously, we have a two-way mapping between the Account and Activity domain models and the AccountJpaEntity and ActivityJpaEntity database models. Why make the effort of mapping back and forth? Couldn't we just move the JPA annotations to the Account and Activity classes and directly store them as entities in the database?

Such a "no mapping" strategy may be a valid choice, as we will see in Chapter 8, Mapping between Boundaries, when we talk about mapping strategies. However, JPA would then force us to make compromises in the domain model. For instance, JPA requires entities to have a no-args constructor. Or it might be that, in the persistence layer, a @ManyToOne relationship makes sense from a performance point of view, but in the domain model, we want this relationship to be the other way around because we always only load part of the data anyway.

So, if we want to create a rich domain model without compromising the underlying persistence, we will have to map between the domain model and the persistence model.
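Such a mapper takes the loaded persistence-side objects plus the two balance sums and produces a valid domain object. The following is a simplified sketch with invented record types and plain long amounts; the chapter's real AccountMapper works with the JPA entities and the Money type instead:

```java
import java.util.List;

// Simplified stand-ins for the two models on either side of the boundary.
record ActivityJpaRow(long amount) {}   // persistence side
record DomainActivity(long amount) {}   // domain side
record DomainAccount(long id, long baselineBalance,
                     List<DomainActivity> activities) {}

class AccountMapperSketch {
    // Combine the loaded rows and the two balance sums into a valid
    // domain object: the baseline balance is deposits minus withdrawals.
    DomainAccount mapToDomainEntity(
            long accountId,
            List<ActivityJpaRow> rows,
            long withdrawalBalance,
            long depositBalance) {
        List<DomainActivity> activities = rows.stream()
                .map(row -> new DomainActivity(row.amount()))
                .toList();
        long baselineBalance = depositBalance - withdrawalBalance;
        return new DomainAccount(accountId, baselineBalance, activities);
    }
}
```

Mapping in the other direction (domain to persistence model) is the mirror image, which is why this style is called two-way mapping.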

What about Database Transactions?

We have not touched upon the topic of database transactions yet. Where do we put our transaction boundaries?

A transaction should span all write operations to the database that are performed within a certain use case so that all those operations can be rolled back together if one of them fails.

Since the persistence adapter doesn't know which other database operations are part of the same use case, it cannot decide when to open and close a transaction. We have to delegate this responsibility to the services that orchestrate the calls to the persistence adapter.

The easiest way to do this with Java and Spring is to add the @Transactional annotation to the application service classes so that Spring will wrap all public methods with a transaction:

package buckpal.application.service;

@Transactional
public class SendMoneyService implements SendMoneyUseCase {
  // ...
}

If we want our services to stay pure and not be stained with @Transactional annotations, we can use aspect-oriented programming – for example, with AspectJ – in order to weave transaction boundaries into our codebase.

How Does This Help Me Build Maintainable Software?

Building a persistence adapter that acts as a plugin to the domain code frees the domain code from persistence details so that we can build a rich domain model.

Using narrow port interfaces, we have the flexibility to implement one port this way and another port that way, perhaps even with a different persistence technology, without the application noticing. We can even switch out the complete persistence layer, as long as the port contracts are obeyed.