Chapter 3. Architects Live in the First Derivative
In a Constantly Moving World, Your Current Position Isn’t Very Meaningful
Defining a system’s architecture is a balancing act between many, often-conflicting goals: flexible systems can be complex; high-performing systems can be difficult to understand; easy-to-maintain systems can take more effort to construct initially. Although this is what makes an architect’s work so interesting, it also makes it difficult to pin down what exactly drives architectural decisions.
Rate of Change Defines Architecture
If I had to name one primary factor that influences architecture, I’d put rate of change at the top of my list, based on reasoning about the inverse question: when does a system not need any architecture at all? Although as an architect this isn’t a natural question to ask (nor to answer), it can reveal what system property makes architecture valuable. In my mind, the only system that wouldn’t benefit from architecture is one that doesn’t change at all. If everything about a system is 100% fixed, just getting it working somehow seems good enough.
Now, reversing the logic back to the original proposition, it appears natural that the rate of change is a major driver of architecture’s value and of architectural decisions. It’s easy to see that a system that doesn’t need to change much will have a substantially different architecture from one that needs to absorb frequent changes over long periods of time. Good architects, therefore, deal with change. This means that they live in the system’s first derivative: the mathematical expression for how quickly a function’s value changes.
Once we understand the influence change has on architecture, it’s useful to consider the various forms of change affecting an IT system. The first change that comes to mind is a change in functional requirements, but there’s a lot more: changes in the volume of traffic or data to be processed, changing the runtime environment to the cloud, or changes to the business context such as using the system in different languages or by different people.
Change = Business as Unusual?
Despite the popular saying that “the only constant is change,” traditional IT organizations tend to have a somewhat uneasy relationship with change. This mindset is often revealed by a popular engine room slogan: “never touch a running system” (Chapter 12). When change can’t be avoided, IT departments neatly package it into a project. The most celebrated part of an IT project is the end, or launch, which ironically is often the first time real users actually get to use the system. The reason for celebration is that things can return to “business as usual,” that is, stable operations without any change.
Packaging change into projects reflects an organization’s belief that “no change” is the normal, desired state and “change” is the intermittent, unusual state.
Thus, many organizational systems are designed to control and prevent change: budgeting processes limit spending on change; quality gates limit changes going to production; project planning and requirements documents limit scope changes. Transforming a software delivery organization such that it embraces constant change requires adjusting these processes to support rather than prevent change without ignoring the (generally useful) motivation for setting them up in the first place. That’s not an easy task and is the reason why this book devotes an entire part to transformation (Part V).
Varying Rates of Change
Technology is a fast-moving field. We think nothing of IT products carrying a three-part version number: “well, if you’re still on 2.4.14, I can’t help you much; it’s really time to upgrade to .15.”
Luckily, not everything in IT moves fast: the most common processor architecture, the basis for Intel’s x86 processors, originates from 1978. The ARM chips that dominate today’s mobile devices are based on a design from around 1985. Both the Linux and Windows operating systems are well past their teenage years, and even Java passed the 20-year mark at version 9 some years ago, closely followed by the Spring Framework, which has surpassed a respectable 15 years.
Naturally, such low rates of change can largely be observed in lower layers of the so-called IT stack: hardware and operating systems have such a vast installed base and so many dependencies that the cost of an all-out replacement would be huge. Hence, we tend to see more evolution than revolution here. These technologies form the base of the pyramid (Chapter 28), giving us a stable foundation to build on.
On top, things move a lot faster. For example, the popular AngularJS framework was essentially replaced by the very different Angular framework just five years after its inception. Google’s Fabric framework also lived just five years before being subsumed by Firebase. And Google Mashup Editor, one of my favorites of the day, survived a mere two years.
Things are moving fast and are only getting faster. If rate of change is a driver for architecture, it looks like we’ll need more of it!
Although we’re surely sad to witness products’ early demise, the rate at which new products and tools arrive paints an even more dramatic picture. For example, a look at the Cloud Native Interactive Landscape offered by the Cloud Native Computing Foundation (CNCF) will quickly convince you that building modern applications requires a fast-growing list of ingredients.
A Software System’s First Derivative
If the first derivative is an architect’s primary concern, how does this somewhat abstract concept translate into the reality of systems architecture? We can get a hint by thinking about which part of a software system determines its rate of change. For a custom-built system, the critical element for change is the build toolchain, the part that converts source code into an executable format that is subsequently deployed onto the runtime infrastructure.
A software system’s first derivative is its build and deployment toolchain.
All changes to the software (better) go through this build and deployment toolchain. Because this toolchain is the system’s first derivative, increasing a software system’s rate of change requires a well-tuned toolchain (Chapter 13).
It’s no surprise, then, that in recent years the industry has put much attention and effort into reducing friction in software delivery: Continuous Integration (CI), Continuous Deployment (CD), and configuration automation are all aspects of increasing the first derivative of software systems and thus speeding up software delivery. Without such innovations, daily or hourly software deployments wouldn’t be possible, and companies wouldn’t be able to compete in digital markets, which thrive on constant improvement and frequent updates.
Whereas build systems previously were the proverbial shoemaker’s children, meaning they didn’t get a lot of attention, they now run on the same type of infrastructure as the production systems. Containerized, fully automated, elastic, cloud-based, on-demand build systems are quickly becoming the norm. Teams building and maintaining such sophisticated build systems clearly live in the first derivative!
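The essential behavior of such a toolchain, running a sequence of stages and stopping at the first failure, can be sketched in a few lines. The stage names below are hypothetical placeholders; a real chain would invoke compilers, test runners, and deployment tooling instead of returning constants.

```python
# Minimal sketch of a fail-fast build-and-deploy chain.
# Stage names are hypothetical; each stage reports success or failure.

from typing import Callable, List


def run_pipeline(stages: List[Callable[[], bool]]) -> bool:
    """Run stages in order; abort at the first failing stage."""
    for stage in stages:
        if not stage():
            print(f"pipeline failed at: {stage.__name__}")
            return False
        print(f"passed: {stage.__name__}")
    return True


def compile_sources() -> bool:
    return True  # placeholder: invoke the compiler here


def run_unit_tests() -> bool:
    return True  # placeholder: invoke the test runner here


def deploy_to_staging() -> bool:
    return True  # placeholder: push the build artifact here


ok = run_pipeline([compile_sources, run_unit_tests, deploy_to_staging])
```

The fail-fast ordering is the point: a change only reaches the runtime environment after every earlier stage has vouched for it, which is what makes frequent deployments safe.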
Designing for the First Derivative
Several factors determine how well a system can absorb change:

- Dependencies
Too many interdependencies between a system’s components will result in small changes needing adjustments in many places, increasing both effort and risk. Systems with fewer interdependencies—for example, because they are modular and cleanly separate responsibilities—localize changes and can therefore generally absorb a higher rate of change. The research conducted by the authors of the book Accelerate shows that decoupling system components is the biggest contributor to sustained software delivery.
- Friction
Both cost and risk of change increase with friction, generated, for example, by long lead times for infrastructure provisioning or numerous manual deployment steps. Teams that live in the first derivative therefore ensure that their software build chain is fully automated.
- Poor quality
There’s a common misbelief that good quality requires extra time and effort. The inverse is actually true: poor quality slows down software delivery. Changes to a poorly tested or poorly built system take more time and are more likely to break things.
- Fear
Often ignored, a programmer’s attitude has a major impact on the rate of change. Poor quality and low levels of automation make change a risky proposition. Developers will thus be afraid of making changes. This leads to code rot, which in turn increases the risk of change—a nasty spiral.
The list shows that an architect has several levers with which they can increase velocity, some technical in nature and others that relate to team attitude. It’s another example of how technical and organizational architecture go hand in hand.
Confidence Brings Speed
If fear slows you down, confidence should speed you up. Automated tests do just that: they give teams confidence and thus increase the rate of change. That’s why determining whether a system has sufficient test coverage shouldn’t be measured in the percentage of lines of code covered. Rather, it should be measured by whether teams can make changes confidently.
Propose to a development team that you delete 20 arbitrary lines from their source code; then they run their tests, and if the tests pass, they push the code straight into production. From their reaction, you’ll know immediately whether their source code has sufficient test coverage.
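This thought experiment is mutation testing in miniature: damage the code and see whether the tests notice. A toy sketch, with all function names hypothetical:

```python
# Toy version of the "delete a few lines, then run the tests" experiment.
# We "mutate" an implementation and check whether the test suite catches it.

def discount(price: float, rate: float) -> float:
    """Apply a fractional discount to a price (hypothetical example code)."""
    return price * (1 - rate)


def suite_passes(fn) -> bool:
    """Run a tiny test suite against the given implementation."""
    cases = [((100.0, 0.2), 80.0), ((50.0, 0.0), 50.0), ((0.0, 0.5), 0.0)]
    return all(abs(fn(*args) - expected) < 1e-9 for args, expected in cases)


# The intact implementation passes the suite.
assert suite_passes(discount)


def mutant(price: float, rate: float) -> float:
    return price  # simulates an accidental deletion of the discount logic


# A good suite must fail on the mutant, which is what gives teams confidence.
assert not suite_passes(mutant)
```

If a suite lets the mutant through, no coverage percentage will make the team confident enough to deploy a change without manual re-checking.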
Despite an abundance of tools that are supposed to speed up software delivery, the determining factor remains decidedly human. The change that’s never made out of fear cannot be accelerated by the world’s best toolchain.
Rate of Change Trade-Offs
Increasing an organization’s rate of change is not an all-or-nothing affair and involves balancing trade-offs. Borrowing one more time from the routinely overstretched analogy between IT architecture and building architecture yields useful advice on the multiple facets of designing for change. If either a large software project or a housing project is undertaken without a conscious decision about its architecture, the “default” architecture converges toward the “Big Ball of Mud,” also known by its real-world incarnation, the shantytown (Chapter 8).
A shantytown, or slum, is generally constructed using cheap materials and unskilled labor. Low cost and a broad labor pool are actually desirable properties. Additionally, local changes, such as adding a wall or even another floor, are often quick and inexpensive—in contrast to fancier high-rise buildings. However, besides not providing a very comfortable living environment, slums also lack common infrastructure, such as a well-built electrical or sewer system. The lack of such infrastructure ultimately limits their rate of growth. This is a good reminder that optimizing for local or short-term change can inhibit global or long-term change.
If a system’s rate of change influences its architecture, it would seem natural to construct a system such that components are separated by rate of change. This approach forms the basis for the popular concepts of two-speed architecture or bi-modal IT, which suggest that traditional companies looking to become competitive in a digital world should initially increase the rate of change in the interaction layer (“Systems of Engagement”) while keeping legacy systems (“Systems of Record”) stable. In doing so, rapid changes can supposedly be applied to the customer-facing systems, whereas the record-keeping systems are kept stable and reliable.
Although dividing systems by rate of change is a fair idea, this particular approach has significant shortcomings. First, it’s based on the flawed assumption that one can move faster by compromising quality (Chapter 40). Otherwise we wouldn’t need to keep a low rate of change in systems of record to maintain their reliability. Second, a company will be hard pressed to localize change into the interaction layer. For example, the addition of a simple field to the system of engagement typically also requires a change to the system of record, coupling the two systems’ rates of change: if the system of record follows a six-month release cycle, there won’t be much speed inside this two-speed architecture.
It turns out that the separation between systems of engagement and systems of record is artificial and doesn’t line up well with the overall rate of change from a business or end-user perspective. This insight is underlined by the fact that hardly any digital business follows such a setup.
Digital companies only know one speed: fast.
Separating components by rate of change along a different dimension might well be beneficial, though. For example, a company’s accounting or payroll system will likely have a lower rate of change and can therefore use a different architecture than the core business systems, which form a competitive differentiator for the organization and hence should support a higher rate of change.
The Second Derivative
If the first derivative describes a software system’s rate of change, then, following our mathematical analogy, increasing that rate requires a positive second derivative. Using the speed of a car as an analogy: a car’s speed is the first derivative of its position; it defines how much distance the car covers in a given time interval. Accelerating, that is, increasing the speed, is the second derivative of the position.
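In the notation of the analogy, with $s(t)$ as the car’s position at time $t$:

```latex
v(t) = \frac{ds}{dt} \quad \text{(speed: the first derivative)}
\qquad
a(t) = \frac{dv}{dt} = \frac{d^2 s}{dt^2} \quad \text{(acceleration: the second derivative)}
```

A transformation program, then, isn’t about reaching a particular position or even a particular speed; it’s about making $a(t)$ positive.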
Back in IT, the second derivative is the essence of most transformation programs: they aim to increase the rate of change in an organization or its IT systems. Thus, for an organization to appreciate and successfully conduct a transformation program, it first needs to appreciate the importance of the first derivative; that is, it must understand economies of speed (Chapter 35). It’s hard to sell a stronger engine and a shorter gear ratio for faster acceleration to someone who prefers to coast along on cruise control.
Rate of Change for Architects
Lastly, technical systems and organizations aren’t the only systems that need to increase their rate of change. Architects do, too, because new technologies arrive at an ever-faster pace, leaving them with the enormous challenge of staying up to date. If they don’t, they might be relegated to life in the ivory tower (Chapter 1), far away from the engine room.
How can architects expect to keep up in today’s world of rapid innovation? Trying to do so by yourself appears futile—no one can stay current on everything. Instead, architects should be part of a trusted but diverse network of experts, which can provide unbiased information.
When you sit near a large IT budget that vendors are vying for, many folks will want to update you on new technologies, or rather products (Chapter 16). Neutrality, however, is an architect’s major asset, so you’re expected to cut through the buzzword fog to discern what’s really new and what’s just clever repackaging of old concepts.
Even though living in a world that’s moving ever faster can be tiring, it’s also what keeps architects’ jobs interesting and makes architecture more valuable. So, embrace life in the first derivative!