6. Backend patterns for micro-frontends – Building Micro-Frontends

Chapter 6. Backend patterns for micro-frontends

You may think that micro-frontends are a viable architecture only when you combine them with microservices, because that combination provides end-to-end technology autonomy.

Maybe you’re thinking that your monolith architecture would never support micro-frontends, or even that having a monolith on the API layer would mean mirroring the architecture on the frontend as well.

However, that’s not the case. There are several nuances to take into consideration, and micro-frontends can definitely be used in combination with both microservices and monoliths.

In this chapter, we review some possible integrations between the frontend and backend layers. In particular, we analyze how micro-frontends can work in combination with a monolith or modular monolith backend, with microservices, and even with the backend for frontend (BFF) pattern.

We will also discuss the best patterns for integrating with different micro-frontend implementations, such as the vertical split, the horizontal split with client-side composition, and the horizontal split with server-side composition.

Finally, we will explore how GraphQL can be a valid solution for micro-frontends as a single entry point for our APIs.

Let’s start by defining the different API approaches we may have in a web application. As shown in Figure 6-1, we focus our journey on the most used and well-known patterns.

This doesn’t mean micro-frontends work only with these implementations. You can devise the right approach for a WebSocket or hypermedia, for instance, by learning how to deal with BFF, API gateway, or service dictionary patterns.

Figure 6-1. Micro-frontends and API layers

The patterns we analyze in this chapter are:

  • Service dictionary. The service dictionary is just a list of services available for the client to consume. It’s used mainly when we are developing an API layer with a monolith or modular monolith architecture; however, it can also be implemented with a microservices architecture with an API gateway, among other architectures. A service dictionary avoids the need to create shared libraries, environment variables, or configurations injected during the CI process or to have all the endpoints hardcoded inside the frontend codebase.
    The dictionary is loaded for the first time when the micro-frontend loads, allowing the client to retrieve the URLs to consume directly from the service dictionary.

  • API gateway. Well known in the microservices community, an API gateway is a single entry point for a microservices architecture. The clients can consume the APIs developed inside microservices through one gateway.
    The API gateway also allows centralizing a set of capabilities, like token validation, API throttling, or rate-limiting.

  • BFF. The BFF is an extension of the API gateway pattern, creating a single entry point per client type. For instance, we may have a BFF for the web application, another for mobile, and a third for the Internet of Things (IoT) devices we are commercializing.
    A BFF reduces the chattiness between client and server by aggregating API responses and returning a data structure that is easy for the client to parse and render inside a user interface. This allows a great degree of freedom in shaping APIs dedicated to a client and reduces the round trips between the client and the backend layer.

These patterns are not mutually exclusive, either; they can be combined to work together.

An additional possibility worth mentioning is writing a client-side library of API endpoints. However, I discourage this practice with micro-frontends because we risk embedding an older library version in some of them; as a result, the user interface may show outdated information or even API errors when some APIs are decommissioned. Without strong governance and discipline around this library, we risk having certain micro-frontends using the wrong version of an API.

Domain-driven design (DDD) also influences architectures and infrastructure decisions. Especially with micro-architectures, we can divide an application into multiple business domains, using the right approach for each business domain.

For instance, it’s not unusual to have part of the application exposing its APIs with a BFF pattern and another part exposing them with a service dictionary.
This level of flexibility provides architects and developers with a variety of choices not possible before. At the same time, however, we need to be careful not to fragment the client-server communication too much, introducing a new pattern only when it provides a real benefit for our application.

Working with a Service Dictionary

A service dictionary is nothing more than a list of endpoints available in the API layer, provided to a micro-frontend. It allows the API to be consumed without the need to bake the endpoints inside the client-side code, inject them during a continuous integration pipeline, or distribute them via a shared library.

Usually, a service dictionary is provided via a static JSON file or an API that should be consumed as the first request for a micro-frontend (in the case of a vertical-split architecture) or an application shell (in the case of a horizontal split).

A service dictionary may also be integrated into existing configuration files or APIs to reduce the round trips to the server and optimize the client startup.
In this case, we can have a JSON object containing a list of configurations needed for our clients, where one of the elements is the service dictionary.

An example of service dictionary structure would be:

{
  "my_amazing_api": {
    "v1": "https://api.acme.com/v1/my_amazing_api",
    "v2": "https://api.acme.com/v2/my_amazing_api",
    "v3": "https://api.acme.com/v3/my_amazing_api"
  },
  "my_super_awesome_api": {
    "v1": "https://api.acme.com/v1/my_super_awesome_api"
  }
}

As you can see, we are listing all the APIs supported by the backend. Thanks to API versioning, we can handle cross-platform applications without introducing breaking changes, because each client can use the API version that suits it best.
One thing we can’t control in such scenarios is the penetration of a new version in every mobile device. When we release a new version of a mobile application, updating may take several days, if not weeks, and in some situations, it may take even longer.

Therefore, versioning the APIs is important to ensure we don’t harm our user experience.
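The dictionary above can be consumed with a small resolver on the client side. Here is a minimal sketch in plain JavaScript; the `resolveEndpoint` helper and its fallback-to-newest behavior are illustrative assumptions, not part of the pattern itself:

```javascript
// Resolve an endpoint from a service dictionary shaped like the JSON above.
function resolveEndpoint(dictionary, apiName, preferredVersion) {
  const versions = dictionary[apiName];
  if (!versions) {
    throw new Error(`Unknown API: ${apiName}`);
  }
  // Use the preferred version when the backend still supports it.
  if (versions[preferredVersion]) {
    return versions[preferredVersion];
  }
  // Otherwise fall back to the newest version still listed.
  // (Lexicographic sort is fine up to v9; a real implementation
  // would compare version numbers numerically.)
  const latest = Object.keys(versions).sort().pop();
  return versions[latest];
}

const dictionary = {
  my_amazing_api: {
    v1: "https://api.acme.com/v1/my_amazing_api",
    v2: "https://api.acme.com/v2/my_amazing_api",
    v3: "https://api.acme.com/v3/my_amazing_api",
  },
  my_super_awesome_api: {
    v1: "https://api.acme.com/v1/my_super_awesome_api",
  },
};

resolveEndpoint(dictionary, "my_amazing_api", "v2");
// → "https://api.acme.com/v2/my_amazing_api"
```

An older client pinned to a version that has been dismissed keeps working against the fallback version instead of failing outright, which is exactly the graceful degradation we want for slowly updating mobile installs.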

Reviewing the cadence at which API versions are dismissed is also important.
One of the main reasons is that stale versions may expose our platform to attacks that harm its stability.
Usually, when we upgrade an API to a new version, we improve not only the business logic but also the security. Unless a security fix can be applied to all versions of a specific API, it’s better to assess whether the older versions are still valid for legitimate users and then decide whether to dismiss support for an API.

To create a frictionless experience for our users, implementing a forced upgrade in every application released via an executable (mobile, smart TVs, or consoles) may be a solution, preventing the user from accessing older applications after drastic updates to our APIs or even to our business model.

Therefore, we must think about how to mitigate these scenarios in order to create a smooth user experience for our customers.

Endpoint discoverability is another reason to use a service dictionary.
Not all companies work with cross-functional teams; many still work with components teams, with some teams fully responsible for the frontend of an application and others for the backend.

Using a service dictionary allows every frontend team to be aware of what’s happening in other teams. If a new version of an API is available or a brand-new API is exposed in the service dictionary, the frontend team will be aware.

This is also a valid argument for cross-functional teams when we develop a cross-functional application.

In fact, it’s very unlikely that inside a two-pizza team we would be able to have all the knowledge needed for developing web, backend, mobile (iOS and Android), and maybe even smart TVs and console applications.

Using a service dictionary allows every team to have a list of available APIs in every environment just by checking the dictionary.

We often think the problem is just a communication issue that can be resolved with better communication. However, consider the number of communication links in a 12-person team: 66 of them. Forgetting to update a team regarding a new API version may happen more often than not. A service dictionary helps introduce the discussion with the team responsible for the API, especially in large organizations with distributed teams.

Last but not least, a service dictionary is also helpful for testing micro-frontends with new endpoint versions while in production.

A company that uses a testing-in-production strategy can extend it to its micro-frontends architecture thanks to the service dictionary, all without affecting the standard user experience.

We can test new endpoints in production by providing a specific header recognized by our service dictionary service. The service will interpret the header value and respond with a custom service dictionary used for testing new endpoints directly in production.
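The selection logic inside the service dictionary endpoint can be sketched as follows; the `x-dictionary-config-id` header name and the in-memory configuration store are hypothetical choices made for illustration:

```javascript
// Service-dictionary endpoint logic: if the request carries a known test
// configuration ID, serve the custom dictionary; otherwise serve the default.
const TEST_CONFIG_HEADER = "x-dictionary-config-id"; // assumed header name

function selectDictionary(headers, configStore, defaultDictionary) {
  const configId = headers[TEST_CONFIG_HEADER];
  if (configId && configStore[configId]) {
    // Employee-generated ID found: testing new endpoints in production.
    return configStore[configId];
  }
  // Regular traffic: standard dictionary, user experience unaffected.
  return defaultDictionary;
}

// Custom configurations produced via the internal dashboard:
const configStore = {
  "test-123": { search_api: { v2: "https://api.acme.com/v2-beta/search" } },
};
const defaultDictionary = { search_api: { v1: "https://api.acme.com/v1/search" } };

selectDictionary({ "x-dictionary-config-id": "test-123" }, configStore, defaultDictionary);
// → the custom dictionary stored under "test-123"
selectDictionary({}, configStore, defaultDictionary);
// → the default dictionary
```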

We would choose a header instead of a token or any other type of authentication because it covers both authenticated and unauthenticated use cases. Let’s see a high-level design of what the implementation would look like (Figure 6-2).

Figure 6-2. A high-level architecture showing how to use a service dictionary for testing in production

In Figure 6-2, we can see that the application shell consumes the service dictionary API as the first step. But this time, the application shell passes a header with an ID related to the configuration to load.

In this example, the ID was generated at runtime by the application shell.

When the service dictionary receives the call, it will check whether a header is present in the request and if so, it will try to load the associated configuration stored inside the database.

It then returns the response to the application shell with the specific service dictionary requested. The application shell is now ready to load the micro-frontends to compose the page.

Finally, the custom endpoint configuration associated with the client ID is produced via a dashboard (top right corner of the diagram) used only by the company’s employees.

In this way we may even extend this mechanism for other use cases inside our backend, providing a great level of flexibility for micro-frontends and beyond.

The service dictionary can be implemented with either a monolith or a modular monolith. The important thing to remember is to allow categorization of the endpoints list based on the micro-frontend that requests the endpoints.

For instance, we can group the endpoints related to a business subdomain or a bounded context. This is the strategic goal we should aim for.

A service dictionary makes more sense with micro-frontends composed on the client side rather than on the server side. BFFs and API gateways are better suited for the server-side composition, considering the coupling between a micro-frontend and its data layer.

Let’s now explore how to implement the service dictionary in a micro-frontend architecture.

Implementing a Service Dictionary in a Vertical-Split Architecture

The service dictionary pattern can easily be implemented in a vertical-split micro-frontends architecture, where every micro-frontend requests the dictionary related to its business domain.

However, it’s not always possible to implement a service dictionary per domain, such as when we are transitioning from an existing SPA to micro-frontends, where the SPA requires the full list of endpoints because it won’t reload the JavaScript logic till the next user session.

In this case, we may decide to implement a tactical solution, providing the full list of endpoints to the application shell instead of a business domain endpoints list to every single micro-frontend. With this tactical solution, we assume the application shell exposes or injects the list of endpoints for every micro-frontend.

When we are in a position to divide the services list by domain, minimal effort will be required to remove the logic from the application shell and move it into every micro-frontend, as displayed in Figure 6-3.

Figure 6-3. With a vertical-split architecture, we can retrieve the service dictionary directly inside a micro-frontend, dividing the endpoints list by business domain.

The service dictionary approach may also be used with a monolith backend. If we determine that our API layer will never move to microservices, we can still implement a service dictionary divided by domain per every micro-frontend, especially if we implement a modular monolith.

Taking into account Figure 6-3, we can derive a sample sequence diagram like the one in Figure 6-4. Bear in mind there may be additional steps to perform either in the application shell or in the loaded micro-frontend, depending on the context we operate in. Take the following sequence diagram just as an example.

Figure 6-4. Sequence diagram for implementing a service dictionary with a vertical-split architecture

As the first step, the application shell loads the micro-frontend requested, in this example the catalogue micro-frontend.

After mounting the micro-frontend, the catalogue initializes and consumes the service dictionary API for rendering the view. It can consume any additional APIs, as necessary.

From this moment on, the catalogue micro-frontend has access to the list of endpoints available and uses the dictionary to retrieve the endpoints to call.

In this way we are loading only the endpoints needed for a micro-frontend, reducing the payload of our configuration and maintaining control of our business domain.
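One possible way to produce that per-domain slice on the backend is to tag each API entry with its business domain and filter by it; the tagging scheme, names, and `dictionaryForDomain` helper below are hypothetical:

```javascript
// The backend's full API catalogue, with each API tagged by business domain
// (illustrative names and URLs).
const apiCatalogue = {
  catalogue_products: { domain: "catalogue", v1: "https://api.acme.com/v1/products" },
  catalogue_search:   { domain: "catalogue", v1: "https://api.acme.com/v1/search" },
  account_profile:    { domain: "account",   v1: "https://api.acme.com/v1/profile" },
};

// Return only the endpoints belonging to one domain, dropping the tag
// so the client receives a plain service dictionary.
function dictionaryForDomain(catalogueOfApis, domain) {
  return Object.fromEntries(
    Object.entries(catalogueOfApis)
      .filter(([, api]) => api.domain === domain)
      .map(([name, { domain: _ignored, ...versions }]) => [name, versions])
  );
}

dictionaryForDomain(apiCatalogue, "catalogue");
// → { catalogue_products: { v1: "https://api.acme.com/v1/products" },
//     catalogue_search:   { v1: "https://api.acme.com/v1/search" } }
```

The catalogue micro-frontend then requests only the "catalogue" slice, which keeps the payload small and the domain boundary explicit.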

Implementing a Service Dictionary in a Horizontal-Split Architecture

To implement the service dictionary pattern with a micro-frontends architecture using a horizontal split, we have to pay attention to where the service dictionary API is consumed and how to expose it for the micro-frontends inside a single view.

When the composition is managed client side, the recommended way to consume a service dictionary API is inside the application shell or host page. Because the container has visibility into every micro-frontend to load, we can perform just one round trip to the API layer to retrieve the APIs available for a given view and expose or inject the endpoints list to every loaded micro-frontend.

Consuming the service dictionary APIs from every micro-frontend would negatively impact our application’s performance, so it’s strongly recommended to keep this logic in the micro-frontends container, as shown in Figure 6-5.

Figure 6-5. The service dictionary should always be loaded from the micro-frontends container in a horizontal-split architecture

The application shell should expose the endpoints list via the window object, making it accessible to all the micro-frontends when the technical implementation allows us to do it. Another option is injecting the service dictionary, alongside other configurations, after loading every micro-frontend.
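As a sketch, exposure via the window object could look like the following; the `__SERVICE_DICTIONARY__` property name is an assumption and simply needs to be agreed as a contract between the shell and its micro-frontends:

```javascript
// Application shell side: publish the dictionary once, frozen so no
// micro-frontend can mutate the shared list at runtime.
function exposeServiceDictionary(dictionary) {
  globalThis.__SERVICE_DICTIONARY__ = Object.freeze(dictionary);
}

// Micro-frontend side: read the shared dictionary when mounting.
function getServiceDictionary() {
  const dictionary = globalThis.__SERVICE_DICTIONARY__;
  if (!dictionary) {
    throw new Error("Service dictionary not exposed by the application shell yet");
  }
  return dictionary;
}

exposeServiceDictionary({
  my_amazing_api: { v1: "https://api.acme.com/v1/my_amazing_api" },
});
getServiceDictionary().my_amazing_api.v1;
// → "https://api.acme.com/v1/my_amazing_api"
```

Freezing the object is a defensive choice: it keeps one misbehaving micro-frontend from silently rewriting the endpoints every other micro-frontend depends on.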

For example, when using Module Federation in a React application, we can share the data using React’s context API. The context API allows you to expose a context (in our case, the service dictionary) to the component tree without having to pass props down manually at every level.

The decision to inject or expose our configurations is driven by the technical implementation.

Let’s see how we can express this use case with the sequence diagram in Figure 6-6.

Figure 6-6. This sequence diagram shows how a horizontal-split architecture with client-side composition may consume the service dictionary API.

In this sequence diagram, the request from the host application, or application shell, to the service dictionary is at the very top of the diagram.

The host application then exposes the endpoints list via the window object and starts loading the micro-frontends that compose the view.

Again, we may have a more complex situation in reality. Adapt the technical implementation and business logic to your project needs accordingly.

Working with an API gateway

An API gateway pattern represents a unique entry point for the outside world to consume APIs in a microservices architecture.

Not only does an API gateway simplify access for any frontend to consume APIs by providing a unique entry point, but it’s also responsible for requests routing, API composition and validation, and other edge functions, like authentication and rate limiting.

An API gateway also allows us to keep the same communication protocol between clients and the backend, while the gateway routes a request in the background in the format requested by a microservice (see Figure 6-7).

Figure 6-7. An API gateway pattern simplifies the communication between clients and server and centralizes functionalities like authentication and authorization via edge functions.

Imagine a microservices architecture composed of services speaking HTTP and gRPC protocols. Without an API gateway, the client would need to be aware of every API and all the communication protocol details. Using the API gateway pattern, we can instead hide the communication protocols behind the gateway and leave the client’s implementation to deal with the API contracts and implement the business logic needed on the user interface.
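A toy illustration of that protocol hiding: the gateway keeps a routing table mapping public path prefixes to upstream services and their internal protocols, so the client always speaks plain HTTP to a single host. The service names, ports, and shapes below are made up:

```javascript
// Hypothetical gateway routing table: public prefix -> internal upstream.
const routes = [
  { prefix: "/catalogue", upstream: "catalogue-service:8080", protocol: "http" },
  { prefix: "/payments",  upstream: "payments-service:50051", protocol: "grpc" },
];

// Resolve where an incoming request should go. The client never sees the
// internal protocol; the gateway translates HTTP to gRPC as needed.
function routeRequest(path) {
  const route = routes.find((r) => path.startsWith(r.prefix));
  if (!route) {
    return { status: 404 };
  }
  return { status: 200, upstream: route.upstream, protocol: route.protocol };
}

routeRequest("/payments/charge");
// → { status: 200, upstream: "payments-service:50051", protocol: "grpc" }
```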

Other capabilities provided by edge functions are rate limiting, caching, metrics collection, and request logging.

Without an API gateway, all these functionalities would need to be replicated in every microservice instead of centralized, as we can do with a single entry point.

Still, the API gateway also has some downsides.

As a unique entry point, it could be a single point of failure, so we need to have a cluster of API gateways to add resilience to our application.

Another challenge is more operational. In a large organization, where we have hundreds of developers working on the same project, we may have many services behind a single API gateway. We’ll need to provide solid governance for adding or removing APIs in the API gateway to prevent the update process from becoming a bottleneck.

Finally, if we implement an additional layer between the client and the microservice to consume, we’ll add some latency to the system.

The process for updating the API gateway must be as lightweight as possible, making investing in the governance around this process a mandatory step. Otherwise, developers will be forced to wait in line to update the gateway with a new version of their endpoint.

The API gateway can work in combination with a service dictionary, adding the benefits of a service dictionary to those of the API gateway pattern.

Finally, micro-architectures open up a new scenario that may be easier to manage and control, because we are splitting our APIs by domain, for instance having multiple API gateways, each gathering a group of APIs.

One API entry point per business domain

Another opportunity to consider is creating one API entry point per business domain instead of having one entry point for all the APIs, as with an API gateway.

Multiple API gateways enable you to partition your APIs and policies by solution type and business domain.
In this way, we avoid having a single point of failure in our infrastructure. Part of the application can fail without impacting the rest of the infrastructure. Another important characteristic of this approach is that we can use the best entry-point strategy per bounded context based on the requirements, as shown in Figure 6-8.

Figure 6-8. On the left is a unique entry point for the API layer; on the right are multiple entry points, one per subdomain.

So let’s say we have a bounded context that needs to aggregate multiple APIs from different microservices and return a subset of the body response of every microservice. In this case, a BFF would be a better fit for consumption by a micro-frontend, rather than leaving the client to make multiple round trips to the server and filter the API body responses to display the final result to the user.

But in the same application, we may have a bounded context that doesn’t need a BFF.
Let’s go one step further and say that in this subdomain, we have to validate the user token in every call to the API layer to check whether the user is entitled to access the data.

In this case, using an API gateway pattern with validation at the API gateway level will allow you to fulfill the requirements in a simple way.

With infrastructure ownership, choosing different entry points for our API layer means every team is responsible for building and maintaining the entry point chosen, reducing potential external dependencies across teams, and allowing them to own end-to-end the subdomain they are responsible for.

This approach may require more work to build, but it allows fine-grained control, identifying the right tool for the job instead of trading off flexibility against functionality. It also allows the team to be truly independent end to end, with engineers able to change the frontend, backend, and infrastructure without affecting any other business domain.

A client-side composition with an API gateway and a service dictionary

Using an API gateway with a client-side micro-frontends composition (either vertical or horizontal split) is not that different from implementing the service dictionary in a monolith backend.

In fact, we can use the service dictionary to provide our micro-frontends with the endpoints to consume, with the same suggestions we provided previously.

The main difference, in this case, will be that the endpoints list will be provided by a microservice responsible for serving the service dictionary or a more generic client-side configuration, depending on our use case.

Another interesting option is that with an API gateway, authorization may happen at the API gateway level, removing the risk of introducing libraries at the API level, as we can see in Figure 6-9.

Figure 6-9. A vertical-split architecture with a client-side composition requesting data from a microservices architecture with an API gateway as the entry point.

Based on the concepts shared for the service dictionary, the backend infrastructure changes but the client-side implementation does not. As a result, the same implementations applicable to the service dictionary are also applicable in this scenario with the API gateway.

Let’s look at one more interesting use case for the API gateway.

Some applications allow us to use a micro-frontends architecture to provide different flavors of the same product to multiple customers, such as customizing certain micro-frontends on a customer-by-customer basis.

In such cases, we tend to reuse the API layer for all the customers, using part or all of the microservices based on the user entitlement. But in a shared infrastructure we can risk having some customers consuming more of our backend resources than others.

In such scenarios, using API throttling at the API gateway will mitigate this problem by assigning the right limits per customer or per product.
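Real gateways provide throttling as configuration rather than code, but a minimal fixed-window sketch keyed by customer shows the idea; the limits, window size, and customer IDs below are made-up numbers:

```javascript
// Fixed-window throttle keyed by customer, as an API gateway might apply
// at the edge of a shared infrastructure.
function createThrottle(limitsPerCustomer, windowMs) {
  const counters = new Map(); // customerId -> { windowStart, count }
  return function allow(customerId, now = Date.now()) {
    const limit = limitsPerCustomer[customerId] ?? 0;
    const entry = counters.get(customerId);
    if (!entry || now - entry.windowStart >= windowMs) {
      // New window for this customer: reset the counter.
      counters.set(customerId, { windowStart: now, count: 1 });
      return limit >= 1;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

const allow = createThrottle({ "customer-a": 2, "customer-b": 100 }, 60_000);
allow("customer-a", 0); // → true
allow("customer-a", 1); // → true
allow("customer-a", 2); // → false: over the limit, the gateway answers 429
```

On the micro-frontend side, the only work left is handling the 429-style error the gateway returns when a limit is hit.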

At the micro-frontends level we won’t need to do much more than handle the errors triggered by the API gateway for this use case.

A server-side composition with an API gateway

A microservices architecture opens up the possibility of using a micro-frontends architecture with a server-side composition.

Note

Remember that with a server-side composition, we identify our micro-frontends with a horizontal split, not a vertical one.

Figure 6-10. An example of a server-side composition with a microservices architecture

As we can see in Figure 6-10, after the browser’s request to the API gateway, the gateway handles the user authentication/authorization first, then allows the client request to be processed by the UI composition service responsible for calling the microservices needed to aggregate multiple micro-frontends, with their relative content fetched from the microservices layer.

For the microservices layer, we use a second API gateway to expose the API for internal services, in this case, used by the UI composition service.

Figure 6-11 illustrates a hypothetical implementation with a sequence diagram for this scenario.

Figure 6-11. An example of server-side composition with an API gateway

After the API gateway token validation, the client-side request lands at the UI composition service, which calls the micro-frontend to load. The micro-frontend service is then responsible for fetching the data from the API layer and the relative template for the UI and serving a fragment to the UI composition layer that will compose the final result for the user.
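The composition step itself can be sketched as follows, with fragment fetchers stubbed in place of network calls to the micro-frontend services (the function names and markup are illustrative):

```javascript
// A simplified UI composition service: every micro-frontend service returns
// an HTML fragment, and the composition layer stitches them into the page.
async function composePage(fragmentServices) {
  // Fetch all fragments in parallel, then compose them in order.
  const fragments = await Promise.all(
    fragmentServices.map((fetchFragment) => fetchFragment())
  );
  return `<main>${fragments.join("\n")}</main>`;
}

// Stubs standing in for calls to micro-frontend services:
const header = async () => "<header>ACME store</header>";
const catalogue = async () => "<section>catalogue items</section>";

// composePage([header, catalogue]) resolves to:
// "<main><header>ACME store</header>\n<section>catalogue items</section></main>"
```

In a real system each stub would be an HTTP call through the internal API gateway, and the composition layer would also handle timeouts and fallback markup for fragments that fail to arrive.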

This diagram presents an example with a micro-frontend, but it’s applicable for all the others that should be retrieved for composing a user interface.

Usually, the microservice used for fetching the data from the API layer should have a one-to-one relation with the API it consumes, which allows a team’s end-to-end ownership of a specific micro-frontend and microservice.

There are several micro-frontend frameworks with a similar implementation, such as Project Mosaic (the interface framework from Zalando), OpenComponents, and the Ara Framework, based on Airbnb’s Hypernova.

Working with the BFF pattern

Although the API gateway pattern is a very powerful solution for providing a unique entry point to our APIs, in some situations we have views that require aggregating several APIs to compose the user interface, such as a financial dashboard that may require several endpoints for gathering the data to display inside a unique view.

Sometimes, we aggregate this data on the client side, consuming multiple endpoints and interpolating data for updating our view with the diagrams, tables, and useful information that our application should display. Can we do something better than that?

Another interesting scenario where an API gateway may not be suitable is in a cross-platform application where our API layer is consumed by web and mobile applications.

Moreover, the mobile platforms often require displaying the data in a completely different way from the web application, especially taking into consideration screen size.

In this case, many visual components and relative data may be hidden on mobile in favor of providing a more general high-level overview and allowing a user to drill down to a specific metric or information that interests them instead of waiting for all the data to download.

Finally, mobile applications often require a different method for aggregating data and exposing them in a meaningful way to the user. APIs on the backend are the same for all clients, so for mobile applications, we need to consume different endpoints and compute the final result on the device instead of changing the API responses based on the device that consumes the endpoint.

In all these cases, BFF, as described by Phil Calçado (formerly of SoundCloud), comes to the rescue.

The BFF pattern develops niche backends for each user experience.

This pattern will only make sense if and when you have a significant amount of data coming from different endpoints that must be aggregated for improving the client’s performance or when you have a cross-platform application that requires different experiences for the user based on the device used.

This pattern can also help solve the challenge of introducing a layer between the API and the clients, as we can see in Figure 6-12.

Figure 6-12. On the left, a microservices architecture consumed by different clients; on the right, a BFF layer exposing only the APIs needed for a given group of devices, in this case a mobile and a web BFF.

Thanks to BFF we can create a unique entry point for a given device group, such as one for mobile and another for a web application.

This time, however, we also have the option of aggregating API responses before serving them to the client, generating less chatter between clients and the backend: the BFF aggregates the data and serves only what a client needs, with a structure reflecting the view to populate.
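A sketch of such an aggregation inside a BFF, with stubbed service calls and illustrative field names: the BFF fans out to three microservices and returns a single payload already shaped for the view:

```javascript
// BFF handler aggregating three microservices into one view model.
async function productPageViewModel(services, productId) {
  // Fan out to the microservices in parallel.
  const [product, reviews, stock] = await Promise.all([
    services.catalogue(productId),
    services.reviews(productId),
    services.inventory(productId),
  ]);
  // Return only what the view needs, already structured for rendering.
  return {
    title: product.name,
    price: product.price,
    rating: reviews.average,
    available: stock.quantity > 0,
  };
}

// Stubbed services standing in for real microservice calls:
const services = {
  catalogue: async () => ({ name: "Lamp", price: 30, sku: "X-1", weight: 2 }),
  reviews: async () => ({ average: 4.5, entries: [] }),
  inventory: async () => ({ quantity: 3 }),
};

// productPageViewModel(services, "p1") resolves to:
// { title: "Lamp", price: 30, rating: 4.5, available: true }
```

Note that fields the view doesn't need (`sku`, `weight`, the raw review entries) never leave the BFF, which is what cuts both payload size and client-side parsing work.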

Interestingly, the microservices architecture’s complexity sits behind the BFF, creating a unique entry point for the client to consume the APIs without needing to understand the complexity of a microservices architecture.

BFF can also be used when we want to migrate a monolith to microservices. In fact, thanks to the separation between clients and APIs, we can use the strangler pattern for killing the monolith in an iterative way, as illustrated in Figure 6-13. This technique is also applicable to the API gateway pattern.

Figure 6-13. The red boxes represent services extracted from the monolith and converted to microservices. The BFF layer allows the client to be unaware of the change happening in the backend, maintaining the same contract at the BFF level.

Another interesting use case for the BFF is aggregating APIs by domain, as we have seen for the API gateway.

Creating a BFF per group of devices can result in multiple BFFs calling the same microservices. When not controlled properly, this can harm platform stability. Obviously, we may decide to introduce caches at different layers to mitigate traffic spikes, but we can also mitigate this problem another way.

Following our subdomain decomposition, we can identify a unique entry point for each subdomain, grouping all the microservices for a specific domain together instead of taking into consideration the type of device that should consume the APIs.

This would allow us to have similar service-level agreements (SLAs) inside the same domain, control the response to the clients in a more cohesive way, and allow the application to fail more gracefully than having a single layer responsible for serving all the APIs, as in the previous examples.

Figure 6-14 illustrates how we can have two BFFs, one for the catalogue and one for the account section, for aggregating and exposing these APIs to different clients. In this way, we can scale the BFFs based on their traffic.

Figure 6-14. This diagram shows how to separate different domain-driven design subdomains.

Gathering all the APIs behind a single layer, however, ignores the fact that an application’s popular subdomains may require different treatment from less-accessed subdomains.

Dividing by subdomain, then, would allow us to apply the right SLA instead of generalizing one for the entire BFF layer.

Sometimes the BFF pattern raises concerns due to inherent pitfalls such as code duplication and reduced reusability.

In fact, we may need to duplicate some code to implement similar functionalities across different BFFs, especially when we create one per device family. In these cases, we need to assess whether the burden of having teams implement similar code twice is greater than that of abstracting (and maintaining) the shared code.

A client-side composition with a BFF and a service dictionary

Because a BFF is an evolution of the API gateway, many of the implementation details for an API gateway are valid for a BFF layer as well, plus we can aggregate multiple endpoints, reducing client chatter with the server.

It’s important to reiterate this capability because it can drastically improve application performance.

Yet there are some caveats when we implement either a vertical split or a horizontal one.

For instance, in Figure 6-15, we have a product details page that has to fetch the data for composing the view.

Figure 6-15. A wireframe of a product page

When we want to implement a vertical-split architecture, we may design the BFF to fetch all the data needed for composing this view, as we can see in Figure 6-16.

Figure 6-16. Sequence diagram showing the benefits of the BFF pattern used in combination with a vertical split composed on the client side

In this example, we assume the micro-frontend has already retrieved the endpoint for performing the request via a service dictionary and that it consumes the endpoints, leaving the BFF layer to compose the final response.

In this use case, we can also easily use a service dictionary to expose the endpoints available in our BFF to our micro-frontends, similar to the way we do it for the API gateway solution.
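As a concrete illustration, the service dictionary described above can be as simple as a JSON document listing named endpoints, fetched once per session. The following is a minimal sketch; the dictionary shape, the URLs, and the service names (`productDetails`, `reviews`) are hypothetical, not from the book:

```typescript
// A minimal service-dictionary sketch. The endpoint names and URLs below
// are illustrative assumptions, not a prescribed format.
interface ServiceDictionary {
  version: string;
  services: Record<string, string>;
}

// In a real app this object would be fetched once (e.g., from the BFF)
// and cached for the user session.
const dictionary: ServiceDictionary = {
  version: "1.0.0",
  services: {
    productDetails: "https://bff.example.com/v1/product/:id",
    reviews: "https://bff.example.com/v1/reviews/:id",
  },
};

// Resolve a named service, filling in path parameters, so micro-frontends
// never hardcode endpoints in their own codebase.
function getEndpoint(
  dict: ServiceDictionary,
  name: string,
  params: Record<string, string> = {}
): string {
  const template = dict.services[name];
  if (!template) throw new Error(`Unknown service: ${name}`);
  return Object.entries(params).reduce(
    (url, [key, value]) => url.replace(`:${key}`, encodeURIComponent(value)),
    template
  );
}

// Example: getEndpoint(dictionary, "productDetails", { id: "42" })
// resolves to "https://bff.example.com/v1/product/42".
```

A micro-frontend calling `getEndpoint` stays decoupled from environment-specific URLs, which is exactly what the pattern aims for.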

However, when we have a horizontal split composed on the client side, things become trickier because we need to maintain the micro-frontends’ independence, as well as keep the host page as unaware of their domains as possible.

In this case, we need to combine the APIs in a different way, delegating to each micro-frontend the consumption of its related APIs. Otherwise, we would need to make the host page responsible for fetching the data for all the micro-frontends, creating a coupling that would force us to deploy the host page together with the micro-frontends, breaking the intrinsic characteristic of independence between micro-frontends.

Taking into consideration that these micro-frontends and the host page may be developed by different teams, this setup would slow down feature development rather than leverage the benefits this architecture provides.

BFF with a horizontal split composed on the client side could create more challenges than benefits in this case. It’s wise to analyze whether this pattern’s benefits will outweigh the challenges.

A server-side composition, with a BFF and service dictionary

When we implement a horizontal-split architecture with server-side composition and we have a BFF layer, our micro-frontends implementation resembles the API gateway one.

The BFF exposes all the APIs available for every micro-frontend, so using the service dictionary pattern will allow us to retrieve the endpoints for rendering our micro-frontends ready to be composed by a UI composition layer.

Using GraphQL with micro-frontends

In a chapter about APIs and micro-frontends, we couldn’t avoid mentioning GraphQL.

GraphQL is a query language for APIs and a server-side runtime for executing queries by using a type system you define for your data.

GraphQL was created by Facebook and released in 2015. Since then it has gained a lot of traction inside the developers’ community.

Especially for frontend developers, GraphQL represents a great way to retrieve the data needed for rendering a view, decoupling the complexity of an API layer, rationalizing the API response in a graph, and allowing any client to reduce the number of round trips to the server for composing the UI.

Because GraphQL is a client-centric API, the paradigm for designing an API schema should be based on how the view we need to render looks instead of looking at the data exposed by the API layer.

This is a key distinction compared to how we design our database schemas or our REST APIs.
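To make the client-centric idea concrete, here is a hedged sketch of two queries shaped by the view being rendered rather than by backend tables. The field names and the mobile/web split are illustrative assumptions:

```typescript
// Hypothetical queries shaped by the view, not by the backend data model.
// The web product page shows more than its mobile counterpart, so each
// client type asks only for the fields its view actually renders.
const productPageQueryWeb = `
  query ProductPage($id: ID!) {
    product(id: $id) {
      title
      price
      description
      gallery { url alt }
      reviews(first: 5) { author rating text }
    }
  }`;

const productPageQueryMobile = `
  query ProductPage($id: ID!) {
    product(id: $id) {
      title
      price
      gallery(first: 1) { url }
    }
  }`;

// Pick the query matching the client type rendering the view.
function queryFor(clientType: "web" | "mobile"): string {
  return clientType === "web" ? productPageQueryWeb : productPageQueryMobile;
}
```

Both queries hit the same endpoint; only the selection set changes, which is how GraphQL reduces over-fetching per client type.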

Two projects in the GraphQL community stand out for their great support and the productivity of their open source tools: Apollo and Relay.

Both projects leverage GraphQL, adding an opinionated view on how to implement this layer inside our application, increasing our productivity thanks to the features available in one or both, like authentication, rate limiting, caching, and schema federations.

GraphQL can be used as an API gateway, acting as a proxy for specific microservices, for instance, or as a BFF, orchestrating the requests to multiple endpoints and aggregating the final response for the client.

Remember that GraphQL acts as a unique entry point for your entire API layer. By design GraphQL exposes a unique endpoint where the clients can perform queries against the GraphQL server. Because of this, we tend to not version our GraphQL entry point, although if the project requires a versioning because we don’t have full control of the clients that consume our data, we can version the GraphQL endpoint. Shopify does this by adding the date in the URL and supporting all the versions up to a certain period.

GraphQL simplifies data retrieval for the clients, allows us to query only the fields needed in a view based on client type (e.g., mobile or web), and simplifies the maintenance and evolution of the GraphQL layer compared to more complicated backend ecosystems.

The data graph is reachable via a unique endpoint. When a new microservice is added to the graph, the only change for the client to make would be at the query level, also minimizing maintenance.

The schema federation

Schema federation is a set of tools to compose multiple GraphQL schemas declaratively into a single data graph.

When we work with GraphQL in a midsize to large organization, we risk creating a bottleneck because all the teams are contributing to the same schema.

But with schema federation we can have individual teams working on their own schemas while exposing them to the client through a single entry point, just like a traditional data graph.

Apollo Server exposes a gateway with all associated schemas from other services, allowing each team to be independent and not change the way the frontend consumes the data graph.

This technique comes in handy when we work with microservices, though it comes with a caveat.

A GraphQL schema should be designed with the UI in mind, so it’s essential to avoid silos inside the organization. We must facilitate the initial analysis by engaging with multiple teams and follow all improvements in order to have the best implementation possible.

Figure 6-17 shows how a schema federation works using the gateway as an entry point for all the implementing services and providing a unique entry point and data graph to query for the clients.

Figure 6-17. A sequence diagram showing how schema federation exposes all the schemas from multiple services

Schema federation represents the evolution of schema stitching, which has been used by many large organizations for similar purposes. It wasn’t well designed, however, which led Apollo to deprecate schema stitching in favor of schema federation.

More information regarding schema federation is available on Apollo’s documentation website.

Using GraphQL with micro-frontends and client-side composition

Integrating GraphQL with micro-frontends is a trivial task, especially after reviewing the implementation of the API gateway and BFF.

With schema federations, we can have the teams who are responsible for a specific domain’s APIs create and maintain the schema for their domain and then merge all the schemas into a unique data graph for our client applications.

This approach allows the team to be independent, maintaining their schema and exposing what the clients would need to consume.

When we integrate GraphQL with a vertical split and a client-side composition, the integration resembles the others described above: the micro-frontend is responsible for consuming the GraphQL endpoint and rendering the content inside every component present in a view.

Applying such a scenario with microservices becomes easier thanks to schema federation, as shown in Figure 6-18.

Figure 6-18. A high-level architecture for composing a microservice backend with schema federation. The catalogue micro-frontend consumes the graph composed by all the schemas inside the GraphQL server.

In this case, thanks to the schema federation, we can compose the graph with all the schemas needed and expose a unique data graph for a micro-frontend to consume.

Interestingly, with this approach, every micro-frontend will be responsible for consuming the same endpoint. Optionally, we may want to split the BFF into different domains, creating a one-to-one relation with the micro-frontend. This would reduce the scope of work and make our application easier to manage, considering the domain scope is smaller than having a unique data graph for all the applications.

Applying a similar backend architecture to horizontal-split micro-frontends with a client-side composition isn’t too different from other implementations we have discussed in this chapter.

As we see in Figure 6-19, the application shell exposes or injects the GraphQL endpoint to all the micro-frontends, and each micro-frontend performs its own queries against it.

Figure 6-19. A high-level architecture of GraphQL with schema federation. When we implement it with a micro-frontends architecture with horizontal split and a client-side composition, all micro-frontends query the graph layer.

When we have multiple micro-frontends in the same or different views performing the same query, it’s wise to look at the query and response cacheability at different levels, such as at the CDN, or otherwise leverage the GraphQL server and client caches.

Caching is a very important concept that has to be applied properly; done well, it can protect your origin from bursts of traffic, so it’s worth spending time on it.

Using GraphQL with micro-frontends and a server-side composition

The last approach is using a GraphQL server with a micro-frontends architecture with horizontal split and a server-side composition.

When the UI composition layer requests multiple micro-frontends from their respective microservices, every microservice queries the graph and prepares the view for the final page composition (see Figure 6-20).

Figure 6-20. A high-level architecture for a micro-frontends architecture with a server-side composition where every micro-frontend consumes the graph exposed by the GraphQL server

In this scenario, every microservice that will query the GraphQL server requires having the unique entry point accessible, authenticating itself, and retrieving the data needed for rendering the micro-frontend requested by the UI composition layer.

This implementation overlaps quite nicely with the others we have seen so far on API gateway and BFF patterns.

Best practices

After discussing how micro-frontends can fit with multiple backend architectures, we must address some topics that are architecture-agnostic but could help with the successful integration of a micro-frontends architecture.

Multiple micro-frontends consuming the same API

When we work with a horizontal-split architecture, we may end up having similar micro-frontends in the same view consuming the same APIs.

In this case, we should challenge ourselves to determine whether maintaining separate micro-frontends brings any value to our system. Would grouping them in a unique micro-frontend be better?

Usually, such scenarios should indicate a potential architectural improvement. Don’t ignore that signal; instead, try to revisit the decision made at the beginning of the project with the information and the context available, making sure performing the same API request twice inside the same view is acceptable. If not, be prepared to review the micro-frontends boundaries.

APIs come first, then the implementation

Independent of the architecture we will implement in our projects, we should apply API-first principles to ensure all teams are working with the same understanding of the desired result.

An API-first approach means that for any given development project, your APIs are treated as “first-class citizens.”

As discussed at the beginning of this book, we need to make sure the APIs identified for communicating between micro-frontends or for client-server communication are defined up front to enable our teams to work in parallel and generate more value in a shorter time.

In fact, investing time at the beginning to analyze the API contract with the different teams will reduce the risk of developing a solution that isn’t suitable for achieving the business goals or that doesn’t integrate smoothly within the system.

Gathering all the teams involved in the creation and consumption of new APIs can save a lot of time further down the line when the integration starts.
At the end of these meetings, producing an API spec with mock data will allow teams to work in parallel.
The team that has to develop the business logic will have clarity on what to produce and can create tests to make sure they produce the expected result, while the teams that consume this API can start the integration, evolving or developing their business logic against the mocks defined during the initial meeting.
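The contract-plus-mock artifact described above can be as lightweight as a shared type and a sample payload. The following sketch is illustrative; the `ProductSummary` shape and field names are assumptions made for the example, not a real contract from the book:

```typescript
// A hypothetical API contract agreed during the design meeting. The
// producing team tests its implementation against this type; consuming
// teams develop against the mock until the real endpoint ships.
interface ProductSummary {
  id: string;
  title: string;
  priceCents: number;
  currency: "EUR" | "USD" | "GBP";
}

// Mock data agreed alongside the contract, usable by every consumer today.
const mockProductSummary: ProductSummary = {
  id: "sku-001",
  title: "Espresso machine",
  priceCents: 19_900,
  currency: "EUR",
};

// A consumer can already build and test its rendering logic on the mock.
function formatPrice(p: ProductSummary): string {
  return `${(p.priceCents / 100).toFixed(2)} ${p.currency}`;
}

// formatPrice(mockProductSummary) → "199.00 EUR"
```

When the real API lands, only the data source changes; the consumer code written against the contract keeps working.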

Moreover, when we have to introduce a breaking change in an API, sharing a request for comments (RFC) with the teams consuming the API may help to update the contract in a collaborative way. This will provide visibility on the business requirements to everyone and allow them to share their thoughts and collaborate on the solution using a standard document for gathering comments.

RFCs are very popular in the software industry. Using them for documenting API changes will allow us to scale the knowledge and reasoning behind certain decisions, especially with distributed teams where it is not always possible to schedule a face-to-face meeting in front of a whiteboard.

RFCs are also used when we want to change part of the architecture, introduce new patterns, or change part of the infrastructure.

API consistency

Another challenge we need to overcome when we work with multiple teams on the same project is creating consistent APIs, standardizing several aspects of an API, such as error handling.

API standardization allows developers to easily grasp the core concepts of new APIs, minimizes the learning curve, and makes the integration of APIs from other domains easier.

A clear example would be standardizing error handling so that every API returns a similar error code and description for common issues like wrong body requests, service not available, or API throttling.
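A standardized error envelope like the one described can be enforced with a small shared factory. This is a sketch under my own assumptions; the error codes and field names are hypothetical, not a standard from the book:

```typescript
// A hypothetical standardized error envelope shared across every team's API,
// so clients handle common failures uniformly regardless of the domain.
interface ApiError {
  code: string;      // machine-readable identifier, e.g., "VALIDATION_ERROR"
  status: number;    // the HTTP status the API responds with
  message: string;   // human-readable description
  details?: unknown; // optional domain-specific context
}

// Factory helpers so every domain returns the same shape for common issues
// such as a wrong request body, service unavailability, or throttling.
const apiError = {
  badRequest: (message: string, details?: unknown): ApiError =>
    ({ code: "VALIDATION_ERROR", status: 400, message, details }),
  unavailable: (message = "Service temporarily unavailable"): ApiError =>
    ({ code: "SERVICE_UNAVAILABLE", status: 503, message }),
  throttled: (retryAfterSeconds: number): ApiError =>
    ({ code: "RATE_LIMITED", status: 429, message: `Retry after ${retryAfterSeconds}s` }),
};
```

Publishing helpers like these in a shared package means a client written against one domain's errors already understands every other domain's.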

This is true not only for client-server communication but for micro-frontends too. Let’s think about the communication between a component and a micro-frontend, or between micro-frontends in the same view. Identifying the events schema and the operations we allow inside our system is fundamental for the consistency of our application and for speeding up the development of new features.
There are very interesting insights available online for client-server communication, some of which may also be applicable to micro-frontends. Google’s and Microsoft’s API guidelines both include well-documented sections on this topic, with many details on how to structure a consistent API inside their ecosystems.

WebSockets and micro-frontends

In some projects, we need to implement a WebSocket connection for notifying the frontend that something is happening, like a video chat application or an online game.

Using WebSockets with micro-frontends requires a bit of attention because we may be tempted to create multiple socket connections, one per micro-frontend. Instead, we should create a unique connection for the entire application and inject or make available the WebSocket instance to all the micro-frontends loaded during a user session.

When working with horizontal-split architectures, create the socket connection in the application shell and communicate any message or status change (error, exit, and so on) to the micro-frontends in the same view via an event emitter or custom events for managing their visual update.

In this way, the socket connection is managed once instead of multiple times during a user session. There are some challenges to take into consideration, however.

Imagine that some messages are communicated to the client while a micro-frontend is still loading inside the application shell. In this case, creating a message buffer may help to replay the last N messages and allow the micro-frontend to catch up once fully loaded.
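The replay buffer just described can be sketched in a few lines. This is a minimal illustration under my own assumptions (a plain class owned by the application shell, decoupled from any real WebSocket API):

```typescript
// A sketch of the replay buffer: the application shell owns the single
// socket connection, pushes every incoming message here, and keeps the
// last N so a micro-frontend that finishes loading late can catch up.
class MessageBuffer<T> {
  private messages: T[] = [];
  private listeners: Array<(msg: T) => void> = [];

  constructor(private capacity: number) {}

  // Called by the shell for every socket message received.
  push(msg: T): void {
    this.messages.push(msg);
    if (this.messages.length > this.capacity) this.messages.shift();
    this.listeners.forEach((listener) => listener(msg));
  }

  // Called by a micro-frontend once loaded: replay history, then subscribe
  // to live messages.
  subscribe(listener: (msg: T) => void): void {
    this.messages.forEach(listener);
    this.listeners.push(listener);
  }
}
```

In a real application the shell would wire `socket.onmessage` to `push` and expose `subscribe` to the micro-frontends via an event emitter or an injected API, keeping a single connection per user session.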

Finally, if only one micro-frontend has to listen to a WebSocket connection, encapsulating this logic inside the micro-frontend would not cause any harm because the connection will live naturally inside its subdomain.

For vertical-split architectures, the approach is less definitive. We may want to create the socket connection inside every micro-frontend instead of in the application shell, simplifying the lifecycle management of the connection.

The right approach for the right subdomain

Working with micro-frontends and microservices provides a level of flexibility we didn’t have before.
To leverage this new quality inside our architecture we need to identify the right approach for the job.

For instance, in some parts of an application, we may want to have some micro-frontends communicating with a BFF instead of a regular service dictionary because that specific domain requires an aggregation of data retrievable by existing microservices but the data should be aggregated in a completely different way.

With micro-architectures, these decisions are easier to embrace thanks to the architecture’s intrinsic characteristics. To grant this flexibility, we must invest time at the beginning of the project analyzing the boundaries of every business domain and then refine them every time we see complications in API implementation.

In this way, every team will be entitled to use the right approach for the job instead of following a standard approach that may not be applicable for the solution they are developing.

This is not a one-off decision; it has to be revisited at a regular cadence to support the business’s evolution.

Designing APIs for cross-platform applications

Nowadays we are developing cross-platform applications more often than not.
Mobile devices are part of our routine. They help us accomplish our daily tasks and a tablet may have already replaced our laptop for working.

When we approach a cross-platform application and we aren’t using a BFF layer to aggregate the data model for every device we target, we need to remember a simple rule: move as much configuration as you can to the API layer.

With this approach, we will be able to abstract and control certain behaviors without the need to build a new release of our mobile application and wait for the new version to penetrate the market.

For example, let’s say you need to create a polling strategy for consuming an API and react to the response every few minutes. Usually, we would just define the interval in the client application. However, in some use cases, this implementation may become risky, such as when you have very bursty traffic and you want to create a mechanism to back off your requests to the server instead of throttling or slowing down the communication between server and client.
In this case, moving the interval value into the response body of the polled API would allow you to manage situations like that without distributing a new version of the mobile application.
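A server-driven interval of this kind can be sketched in a few lines. The field name `nextPollMs` and the default/floor values are assumptions for illustration, not values from the book:

```typescript
// Sketch of a server-driven polling interval: the client honors whatever
// interval the API returns, falling back to a default when the field is
// missing and clamping to a floor so a bad value can't hammer the origin.
interface PollResponse {
  nextPollMs?: number; // hypothetical field the server adds to its payload
  // ...domain payload elided
}

const DEFAULT_INTERVAL_MS = 120_000; // used when the server omits the field
const MIN_INTERVAL_MS = 5_000;       // never poll faster than this

function nextInterval(response: PollResponse): number {
  const requested = response.nextPollMs ?? DEFAULT_INTERVAL_MS;
  return Math.max(requested, MIN_INTERVAL_MS);
}
```

During bursty traffic the backend simply returns a larger `nextPollMs`, and every deployed client backs off on its next cycle with no app-store release required.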

This also applies to micro-frontends, where we may have multiple micro-frontends that should implement similar logic. Instead of implementing it inside the client-side code, consider moving some configurations to the server and implementing the logic for reacting to the server’s response.

In this way, a simple and strategic decision will spare us many headaches that may happen in production and affect our users.

Summary

We have covered how micro-frontends can be integrated with multiple API layers.
Micro-frontends are suitable with not only microservices but also monolith architecture.
There may be strong reasons why we cannot change the monolithic architecture on the backend but we want to create a new interface with multiple teams. Micro-frontends may be the solution to this challenge.

We discussed the service dictionary approach, which can help with cross-platform applications and reduce the need for a shared client-side library that gathers all the endpoints. We also discussed how a BFF can be implemented with micro-frontends, as well as the API gateway pattern as a different twist on the BFF.

In the last part of this chapter, we reviewed how to implement GraphQL with micro-frontends, discovering that the implementation overlaps quite nicely with the one described in the API gateway and BFF patterns.

Finally, we closed the chapter with some best practices, like approaching API design with an API-first approach, leveraging DDD at the infrastructure level for using the right technical approach for a subdomain, and designing APIs for cross-platform applications by moving some logic to the backend instead of replicating it across multiple frontend applications.