Chapter 1. The Business Bottleneck

All businesses have one core strategy: to stay alive. They do this by offering new reasons for people to buy from them and, crucially, stay with them.1 Over the past decade, traditional businesses have been freaked out by new competitors that are systematically and sneakily stealing customers. The super-clever among these competitors innovate entirely new business models: hourly car rentals, next-day delivery, short-term insurance for jackets, paying for that jacket with your phone, banking with only your iPhone as a branch, and incorporating real-time weather information into your reinsurance risk analysis, to name a few examples.

In the majority (maybe all) of these cases, surviving and innovating are done with small software development cycles and a product-centric focus on managing the evolution of those applications. As the software evolves and you learn what works and what doesn’t, the business improves as well. You can use software not only to run your business, but also to develop it.

When you meld together software and business development, you can innovate by systematically failing weekly, over and over, until you find the thing people will buy and the best way to work with them. Look at any company called a “tech company,” the members of “FANG,” or “The Four,” and you’ll see companies that expertly use software to program their business. Tech company types introduce some confusing lexical bravado when they talk about this as “failing fast.” That of course sounds like the opposite of what you’d want. What’s really happening is that you’re using the agility of software to innovate, create more optionality, and even manage risk better, as we’ll see.

From what I can tell, people refer to all this as “digital transformation”: modernizing how they think of, build, and run their custom written software to be more like a “tech company.” This means treating their software like a product instead of a project (Figure 1-1).

Figure 1-1. Projects end, products are forever

Most IT runs on a project model. IT is given a set of features, a deadline, and a budget. The team writes the software, provisions the datacenters, creates runbooks for how to operate the software in production, and delivers it all as the completion of a project.

Melissa Perri defines a software project well:2

A project is a discrete scope of work that has a particular aim. It usually has a deadline, milestones, and specific outputs that will be delivered. When projects are complete, the aim is reached, and you move on to the next one.

A product approach focuses on the full life of the software: is the software useful, and does it help the customers and users, and thus the business? Everything is oriented around gathering customer and market feedback and changing the software accordingly, on an ongoing basis. “Products,” Perri says, “are vehicles of value. They deliver value repeatedly to customers and users, without requiring the company to build something new every time.”3 Although there may be projects to improve the product, the product is an enduring thing that can be updated and changed to meet customer demand without having to fill out tickets and write up lengthy requirements documents for a new project.

In a project approach, IT is responsible for delivering what was asked for and keeping the software up and running. In the product approach, IT shares responsibility for the business being successful.

Thinking about software as a product has been enshrined in processes like The Lean Startup (O’Reilly), Agile development, and DevOps, and informed by business theories like Jobs to Be Done and disruption. Even though these processes are known and proven, they’ve hit several bottlenecks in the organizations trying to apply them. In the past, IT created many of these bottlenecks: it took years to deliver on projects, at high cost, and with a whimper of long-ago promised features. Plus, the systems would often go down or perform poorly. Most IT organizations still run this way, but an emerging cohort of high-performing companies has perfected how IT builds and delivers software.

Organizations like the US Air Force, Comcast, Allianz, Fidelity, and many others have moved their software release cycles from years and months to just weeks and sometimes days. They’ve accelerated how they use software to create new business value and keep their organizations alive, competitive, and thriving. And, the software tends to work better as a bonus side effect!

Let’s imagine the case in which IT can now deliver and run software reliably and the process it follows results in applications that are useful and desirable for customers. As with all pipeline cleanups, what I’ve started to see in situations like this are new bottlenecks: everything up the stack from the IT department’s developers and system operators. Here, we look at the three critical bottlenecks that I’ve come across so far:

  1. Finance, which needs to shrink its budgeting window to much less than 12 months and adopt a data-driven model for managing software investments

  2. Strategy, which needs to take advantage of the rich data provided by frequently delivered software and push down product strategy decisions to the teams working on the software

  3. Leadership, which needs to understand and use new technologies and methodologies to build software, restructure their organizations, and start evolving their businesses

Before we confront those bottlenecks, let’s begin with an important conceptual point: how exactly should we think of software?

Business and Software Development Is Chaos

Software development is a chaotic, unpredictable activity. We’ve known this for decades, but we willfully ignore it like the advice to floss each day. Mark Schwartz has a clever take4 on understanding the true nature of software. He starts by pointing out that the Standish Group has been tracking software project success and failure for around 25 years. Its methodologies and survey base might change here and there over the years, but that dataset gives you a long-term view of how we’re doing with software projects.

Over much of that 25-year period, just about 30% of software projects were rated as “successful.” Thankfully, the rate of outright failure is only around 20%, with a rating of “challenged” taking up the rest of the projects. These aren’t good rates. If my oven worked only 30% of the time, I’d buy a new one. Worse, these rates have remained largely the same over that time span: we’re not getting better at projects!5 This is maddening if you think about the billions (trillions?) of dollars spent on doing software better and all the innovations in technology and process we’ve come up with in the past few decades. In this time, Agile software development was invented and reached mainstream acceptance, and virtualization was introduced, along with cloud computing and DevOps. The result has been little overall improvement.6

These constant rates of failure can be read in a different, more accurate way: it’s not that these projects failed, it’s that we had false hopes. For most people, software success means delivering on time, on budget, and with the agreed-upon set of features. This makes sense based on real-world analogies. If I ask a plumber to install a toilet, I’d like it done by the day we agreed on, at the cost we agreed on, and I’d like a toilet that has a seat and flushes, not just a bucket with a hole in it. Software is much more difficult than toilets.

What Schwartz suggests is that the failure and challenged rates in the Standish Reports actually show that software performs consistently with its true nature. What if those 70% of software projects that were failures or challenged were actually the best you could hope for with software? The “successful” projects were just anomalies; you got lucky!

What this second way of looking at software projects shows is that the time and budget it takes to get software right can’t be predicted with any useful accuracy. A further implication is that you can’t predict the correct feature set either. The only accurate prediction in software engineering is that your predictions will be wrong. Manufacturing is a common (but inaccurate) metaphor for software development: we know what the end product should look like, so we just need to figure out how to put together a factory to stamp it out. Software isn’t like that at all. With software, you’re not inventing a product once and then creating endless copies of it in a factory.

And never mind just the software. Business development is chaotic, as well. Who knows what new business idea or what exact feature will work and be valuable to customers? Business innovation is also all trial and error, constantly trying to sense and shape what people and businesses will buy and at what price. Add in competitors doing the same, suppliers gasping for air in their own chaos-quicksand, governments regulating, and culture changing people’s tastes, and it’s all a swirling cipher.

In each case, the only hope is rigorously using a system of exploration and refinement. Finding and defining the problem is as big a problem as figuring out how to solve it. And then, after you do solve the problem with a product or service, you need to constantly iterate and innovate to keep solving it correctly. Until you actually experiment by putting a product out there, seeing what demand and pricing are, and seeing how your competitors respond, you know nothing. The same is true for software.

The Small-Batch Cycle

Each domain has tools for this exploration. I’m less familiar with business development apart from software, and the only tool I trust there is Jobs to Be Done. This strategic theory “asserts that people buy products and services because they are trying to make progress in their lives,” as Clayton Christensen put it. “Once people realize they have a job to do, they reach out and ‘hire’ (or fire) a product to get that job done.” For a business, this means studying customer behavior and needs and then changing (or sustaining!) your business to profit from that knowledge.7

In software, the discovery cycle follows a simple recipe: you reduce your release cycle down to a week and use a theory-driven design process to constantly explore and react to customer preferences. You’re looking to find the best way to implement a specific feature in the software, usually to maximize revenue and customer satisfaction. That is, to achieve whatever “business value” you’re after. The process has many names and diagrams: Plan, Do, Check, Act; the Improvement Kata; the OODA loop; and more. I like to call it the small-batch cycle (Figure 1-2) to highlight not only the iterative nature, but also that you need to do small batches of code in each cycle, quickly, rather than large batches in longer, slower cycles.

Figure 1-2. Theory, experiment, observe, repeat as needed (based on a diagram from AirFrance-KLM)

By “small batches,” I mean the following:

  • Identifying the problem to solve

  • Formulating a theory of how to solve it

  • Creating a hypothesis that can prove or disprove the theory

  • Doing the smallest amount of coding necessary to test your hypothesis

  • Deploying the new code to production

  • Observing how users interact with your software to validate or invalidate your theory

  • Using those observations to improve your software

The cycle, of course, repeats itself, as Figure 1-2 shows.
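
To make the loop concrete, here’s a minimal sketch, in Python, of the shape a single pass might take. Everything in it is hypothetical: the Hypothesis record, the deploy and observe stand-ins, and the example numbers are illustrations only; in a real team, “deploy” is your delivery pipeline and “observe” is a week or so of production analytics.

    # A minimal, illustrative sketch of the small-batch cycle. All names and
    # numbers are hypothetical stand-ins: "deploy" would really be your
    # delivery pipeline, and "observe" a week or so of real user analytics.
    from dataclasses import dataclass
    import random

    @dataclass
    class Hypothesis:
        problem: str   # the problem we think we're solving
        change: str    # the smallest change that could test the theory
        metric: str    # what we'll observe in production
        target: float  # the observation that would validate the theory

    def deploy(change: str) -> None:
        """Stand-in for releasing a small batch of code to production."""
        print(f"deploying: {change}")

    def observe(metric: str) -> float:
        """Stand-in for watching real users (here, just a random number)."""
        return random.uniform(0.0, 1.0)

    def small_batch_cycle(hypothesis: Hypothesis, max_cycles: int = 6) -> bool:
        """Run the loop: deploy, observe, validate or refine, repeat."""
        for cycle in range(1, max_cycles + 1):
            deploy(hypothesis.change)
            result = observe(hypothesis.metric)
            validated = result >= hypothesis.target
            print(f"cycle {cycle}: {hypothesis.metric} = {result:.2f} -> "
                  f"{'validated' if validated else 'invalidated'}")
            if validated:
                return True
            # Invalidated: fold what we observed back into a revised change.
            hypothesis.change = "revised: " + hypothesis.change
        return False

    if __name__ == "__main__":
        small_batch_cycle(Hypothesis(
            problem="customers call support to check their data usage",
            change="show current data usage on the app's home screen",
            metric="share of users who check usage in the app",
            target=0.5,
        ))

The point isn’t the code; it’s that each pass ships only enough to test one theory, and the observation, not the original plan, decides what happens next.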

At the beginning, the app isn’t delivered to the end user; teams often need several cycles to get to a minimum viable product (MVP) that end users can begin using. However, after that MVP is delivered, this entire process should take at most a week—and hopefully just a few days. All of these small batches, of course, add up over time to large pieces of software, but in contrast to a large-batch approach, each small batch of code that survives the loop has been rigorously validated with actual users. Of course, in each cycle you’ll deliver less software than in a big batch: I’m not talking about doing more coding in a smaller amount of time. In fact, doing less coding each cycle is actually a better approach: not only do you get the types of insights I’m explaining here, you also improve performance, stability, and security because there’s less to go wrong and less to look through to find bugs. Less code, released in shorter cycles, reduces risk and drives business innovation.

For example, Orange, the French telecommunications giant, used this cycle when perfecting its customer billing app. Orange wanted to reduce traffic to call centers, thus lowering costs but also driving up customer satisfaction (who wants to call a call center?). By following a small-batch cycle, the company found that its customers wanted to see only the last two months’ worth of bills and their employees’ current data usage. These insights inspired changes to the app that drove 50% of the customer base to use it, reducing reliance on call centers, which drove down costs and improved customer satisfaction.

With this focus on rapid, validated learning, savings and customer satisfaction usually go hand in hand. Black Swan Farming’s widely cited case study of global shipping company Maersk Line documents this connection well. The small-batch loop is more efficient as well: it often tells you what not to do. For example, The Home Depot kept close to customers and “found that by testing with users early in the first two months, it could save six months of development time on features and functionality that customers wouldn’t use.” That’s four months’ time and money saved, but also functionality in the software that better matches what customers want.

As these examples show, speed is indeed nice, but the higher frequency of feedback and new discoveries is what typically results in business value. Even though “the actual pressure on the business to go faster may not exist,” Jon Osborn says, “The miss, for many organizations, is that going faster may uncover business potential that no one understood before the work started. Even if these nuggets are not uncovered, going faster probably saves money, time, and headache.”

These business and software methodologies start with the actual customers and use these people as the raw materials and lab to run experiments. The results of these experiments are used to validate, or more often invalidate, theories of what the business should be and do. Putting the small-batch loop in place is a whole other story, and the subject of my previous book, Monolithic Transformation (O’Reilly).

With this understanding of software, let’s look at the first business bottleneck most organizations encounter. It’s the bottleneck that cuts off business health and innovation before it even starts: finance.

Most software development financing should be done differently, in a way that takes advantage of the true nature of software. This will help the business tremendously. Finance seeks to be accurate, predictable, and, above all else, responsible. Now that you understand software’s true nature—as chaotic as it is magical—you should be thinking: uh-oh.

1 Being more cynical...er, pragmatic, they can also do this by using artificial means: maintaining exclusivity with patents and copyrights, establishing and holding monopolies, or deluging their industry with so many regulations that it’s too costly for any new competitor to enter. High barriers to entry for their markets work well, too. It costs a lot to dig and install all the cables needed for networks. Even Google has trouble entering that market, and they have robots! However, Netflix ruffles the feathers of cable providers quarterly without having to dig up any cable trenches. The best way to breach a barrier is to not breach it.

2 From the excellent Melissa Perri, Escaping the Build Trap (O’Reilly).

3 Ibid.

4 Mark Schwartz, War and Peace and IT: Business Leadership, Technology, and Success in the Digital Age (IT Revolution Press).

5 This is based on aggregating excerpts from their 2009 and 2015 studies.

6 There are always different ways to interpret data like this. You could say that as improvements came along, we stopped solving “easy” problems and went to focusing on more difficult ones and thus have maintained a rate of failure that’s constant. As we’ll get to, this actually might be, sort of, a good scenario?

7 There’s another theory that expands this notion to include the full scope of acquiring and doing that job, the so-called “customer journey.” I’m not yet familiar enough with this tool to speak on it, but I only hear good things, and it looks like a crafty tool.