Software architecture is among the computer science and software engineering terms that have the greatest number of definitions. This fact can be quickly verified by visiting the software architecture page on the Software Engineering Institute (SEI) Web site. It is also one of the most used and abused terms in computer science, meaning different things to different people. We would like to introduce you to the discipline of software architecture without getting into formal definitions.
In the past 40 years or so, since the term software engineering was coined at a NATO conference, the practice of software development has come a long way. We are able to build very complex, large and smart systems using large numbers of people. We have mastered many techniques and developed tools and methods to aid large-scale software development. For example, imagine the complexity of a system that makes your travel bookings based on your needs. It consults various air, hotel and car rental databases, finds the best possible deal for you, and accepts a single credit card payment for the various services it puts together from different service providers. Note that in today’s software development environment, it takes less than 90 days to put together such complex systems.
However, it is a well-known fact that a significant number of software systems are not successful. In fact, a majority of software systems are not built within time and budget. What separates successful software systems from not-so-successful ones? An interesting story comes to mind.
Netscape Communications Corporation is one of the most talked about companies for its innovation and agility. Its flagship product, Netscape Navigator or simply ‘Netscape’, was very popular in the 1990s. Netscape was a proprietary Web browser with a huge user base and was dominant in the market. It had very good features and very attractive commercial licensing schemes, combined with good attention and promotion from Internet Service Providers (ISPs) as well as the general media. Netscape was well positioned to take advantage of the consumer Internet revolution during the late 1990s.
Microsoft realized the importance of having its own Web browser. It began development of Internet Explorer (IE) in October 1994 and shipped it in August 1995. The competition was clearly one sided. Microsoft could not make any significant dent in the market share of Netscape.
Meanwhile, Netscape was experimenting with a Web-based mini operating system (codenamed ‘Constellation’) that would enable users to access, edit and share their files independent of the hardware or operating system they used. Obviously, this was a big threat for Microsoft.
Netscape released its version 4.0 bundled with other groupware tools such as an e-mail client, messenger, collaborative threaded discussions, audio and video conferencing, group calendaring and scheduling. This package was called Communicator. Communicator 4.0 had around 3 million lines of code, and around 120 developers were involved in its release. This was a period of rapid growth for Netscape.
Microsoft saw an obvious threat to its operating system, and it began a full-fledged campaign to increase its share in the browser market. This period is famous as the ‘Browser war’ era. Microsoft could not catch up with Netscape until its version 3.0. With IE 4.0, it emerged as serious competition for Netscape, and with version 5.0 Microsoft surpassed Netscape. Finally, the war was won by Microsoft.
What is interesting for us is to look again at the fact that Netscape bundled a lot of functionality into its version 4.0. The result was spaghetti code, because all the developers were focused on adding functionality on a war footing. What was earlier fast and robust became slow, buggy and crash-prone. Netscape abandoned its effort to release version 5.0 and instead threw open the 5.0 code base, for which there were no takers. Netscape decided to start all over again with Communicator 6.0 on a new code base and later released version 7.0 before its disappearance. Many industry experts feel that Microsoft is not solely responsible for the fall of Netscape; in fact, from the technical standpoint, Netscape scripted its own end. Microsoft only helped it get there faster.
The Netscape story is a classic example of the result of attempting to scale up without planning for such growth. It tells us that ad hoc software development cannot win in the long run. Most importantly, software systems that are developed without a solid and open architecture and design cannot scale up to meet the growing demands of the environment.
If you observe the trends in the software industry, especially in the services sector, it will be very clear that demands on software development teams are growing very fast. Customers expect better, faster, larger and cheaper systems all the time. We cannot achieve all these qualities by accident. There has to be something more than adding more people to the project, and that has to do with having a good blueprint of the proposed system and its possible growth before actually building it. The following are some observations on building large-scale software systems:
A major challenge of software system development is to figure out appropriate interfaces between the domain components and the software components. While developing a software system in a specific domain, most of the effort goes into modelling the domain and developing components that are specific to that domain. These domain-specific components need to be integrated into the software framework, which usually includes components such as Web servers, application servers and databases. These interfaces between domain engineering and software engineering are becoming very complex because of the distributed nature of the platforms as well as development methodologies.
Non-functional requirements such as performance and security cannot be planned for after the functional requirements are achieved. This observation is in contrast with general perceptions and common practice in the industry. One needs to plan for these so-called non-functional requirements right from the beginning, along with the functional requirements.
Complexity and modularity go hand in hand while developing software systems and are often misunderstood. The complexity of software systems is growing continuously, and one of the ways of reducing or managing complexity is to make use of the modularity principle. If this understanding is missing, modularity for the sake of modularity increases the complexity.
The ability to re-use effectively and to the maximum extent possible holds the key to faster, cheaper, better and bigger systems. If the components are built in such a way that they remain useful even after the current need, then they have the potential to contribute to the bottom-line of the company in a more definite manner. It needs to be emphasized that re-usability needs a special organizational frame of mind and support from all the stakeholders.
Requirements engineering has become less of a mystery, and clients have become more knowledgeable. Yet the gap between the engineered software requirements and the system that finally gets delivered is increasing and becoming obvious. Hence, meeting the functional and non-functional requirements has become a challenge.
Similarly, the relation between the proposed software system and the rest of the phases in the software development life cycle (such as design, coding, testing and maintenance, if you are considering the waterfall model) needs to be brought into the focus of the review at regular intervals. Otherwise, there is a potential loss of control as the software development progresses.
The biggest and most important challenge of large-scale software development is establishing proper communication channels between all the stakeholders—customers, sponsors, developers, management, users and any other parties. This can be considered as the single most important success factor.
Most of the above issues can be addressed very effectively by having a proper blueprint of the proposed software system before it is actually built. The need for such a blueprint can never be over-emphasized. This blueprint gives us guidance, sets the right goals, tells us if we are going wrong somewhere, reminds us of priorities, manages the complexity of the system for us and, most importantly, works as a communication tool. We refer to this blueprint as software architecture.
There are several definitions and descriptions of software architecture. You will find them in most of the pointers we have provided in the Further Reading section as well as in the References section. We do not wish to repeat those definitions. Instead of offering yet another definition of software architecture, we would like to introduce you to the concept in a non-formal way. In the simplest terms, software architecture has to do with two things:
Doing the right things
Doing the things right
Doing the right things involves understanding the market dynamics and figuring out what works well and what doesn’t. This means the ability to understand component and platform suppliers, system integrators and all third-party stakeholders including retailers, providers and finally consumers. In the value chain, the equations change very fast in a dynamic market. We cannot hang on to old knowledge or assumptions. Figuring out what the right things to do are is the most important function of an architect.
Similarly, doing the things right involves understanding the solution space and the technology needed to create such a solution. Most often, this know-how comes from lessons learnt in the past and the experience gained. Typically, the architect only guides the implementation, and most of the actual work is done by the designers and engineers. The architect provides them with guidelines in the form of high-level designs that are presented as multiple viewpoints.
Creating effective software architecture involves a proper understanding of the problem and creation of an appropriate solution, that is, working with the problem space as well as the solution space. The problem space includes understanding customer needs as well as the domain, and the solution space includes know-how related to the solution and technologies to make it possible to build the solution. The role of an architect is to do justice to all the stakeholders while working independently and to come up with an effective solution.
Muller (2003) puts this very nicely by breaking software architecture activities into three parts:
Understanding Why: the market dynamics, convergence, integration and diversity are the focus of study in this phase.
Describing What: creating configurable platforms and frameworks, and family architectures with various viewpoints.
Guiding How: working with the designers and engineers to realize the architecture by providing guidelines and various viewpoints of the architecture.
For naive software developers, it is not easy to understand the importance of software architecture. Even most software development managers do not really appreciate the importance of software architecture. Most of the time, they only pay lip service to it. In this section, we attempt to answer the question that often lingers in people’s minds but for some reason never gets asked: Why focus on architecture especially when there are time and budget pressures? We will re-visit this question in the next chapter when we discuss the need for software architecture with respect to a case study called Assure-Health. Here we attempt to give only a gist of some of the important benefits of software architecture.
Clements and Northrop (1996) suggested that there are three fundamental reasons why software architecture is important:
Tool for communication. A well documented software architecture helps in forming various communication channels among all the stakeholders of the system. This is a high-level representation or abstraction of the system under consideration. The software architecture is a common talking point among developers, sponsors, managers, customers and users as it helps in capturing various viewpoints of the problem space as well as the solution space.
Early design decisions. It is well known that the earlier we make right design decisions in the software development life cycle, the higher the returns are. A software architecture helps in making the right decisions early before we get into the detailed design and actual implementation phases.
Transferable abstraction of a system. A software architecture can be considered as a model or an abstraction of the actual system. This abstraction is typically independent of platform and technology, which implies that it can be re-targeted towards a new platform or some other system with similar requirements.
Besides these reasons mentioned by Clements and Northrop, there are some other important benefits of software architecture:
Abstraction. Software architecture provides a model of components and connections along with their behaviour, which in turn results in improved communication between stakeholders. The model is the abstraction of the actual proposed software system. With the help of this abstraction, we can achieve a better understanding of the proposed software system. This understanding may be achieved by reducing the system to its essence.
Architectural level of analysis. Once we have an architecture in place, it is possible to determine the degree to which a system satisfies its requirements. That means we can actually perform a lot of analysis at the level of architecture without really spending our energies in creating a very detailed design or coding it on a particular platform. This kind of analysis is performed at a platform-independent level, always keeping the big picture in mind, no matter how complex the proposed system is. This activity helps us to analyse the completeness, safety, component interaction, performance and many other important aspects of the software system. We can also perform tasks such as localizing faults, regression testing, detecting drift, re-use and reverse engineering at this level.
In addition to these important aspects, software architecture provides many other side benefits, such as the following:
Longevity. A better designed system with a solid architecture will be able to adapt to the changing environment and hence it has a long life. This is evident from our Netscape story. Many developers leave the teams, but architectures that are created by these teams have a longer tenure at the organizations.
Competitive advantage. In order to withstand the market dynamics, software systems need to be designed in such a way that any set of features can be added while the system is being used by the customer community. Only those enterprises with this capability will be able to maintain their advantage over the competition. If the turn-around time for adding new functionality is too long, there is a significant risk of losing the customer base; on the other hand, if you are able to design a system that can offer new functionality within a very short time, the chances of success increase multifold.
Stability. A well designed architecture provides stability by ensuring minimum fundamental re-working as the system is extended to provide additional functionality. This additional functionality may be achieved over multiple releases, and at the same time the development team can hold on to the foundation instead of working on something that is constantly changing. Obviously, this results in minimizing the costs.
Ensuring architecture attributes. Architectural attributes (such as adaptability, security, usability, testability, maintainability, availability, reliability, recoverability, performance and safety) impact the design and development of various parts of software. The development team needs to constantly keep evaluating the system with respect to these architectural attributes to determine if the system meets the desired goals.
Patterns. Patterns capture known solutions to recurring problems in ways that enable you to apply this knowledge in new situations. While designing the software architecture, it is pragmatic to look for documented patterns and make use of the ones that are applicable. If you find some new problem pattern and the corresponding solution, document it so that this knowledge can be re-used later.
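As an illustration, one of the most widely documented patterns is Observer, which solves the recurring problem of keeping dependent components informed of state changes without coupling them tightly. The following is a minimal sketch in Python; the class names are purely illustrative, not from any particular library:

```python
class Subject:
    """Maintains a list of observers and notifies them of events."""

    def __init__(self):
        self._observers = []

    def attach(self, observer):
        # Register an observer; it must provide an update(event) method.
        self._observers.append(observer)

    def notify(self, event):
        # Broadcast the event to every registered observer.
        for observer in self._observers:
            observer.update(event)


class LoggingObserver:
    """A concrete observer that simply records every event it receives."""

    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)
```

A typical use: `subject.attach(LoggingObserver())` followed by `subject.notify("order-placed")`. Because the subject knows only the `update` interface, new kinds of observers can be added without modifying the subject, which is exactly the re-usable knowledge the documented pattern captures.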
Design plan. Software architecture provides a design plan or a blueprint of the system so that the complexity of the system can be managed by abstracting out the details. Software architecture captures the early design decisions and provides an organizational structure to the development team. It also specifies the constraints on the lower level design and implementation.
Verification of requirements. The architectural viewpoints can be used in verification of the system requirements, be they functional or non-functional. The architecture exposes missing, incomplete, invalid and inconsistent requirements.
Making modifications. Making changes to software systems is inevitable—in fact this is the most expensive item in the software life cycle. The need to make modifications can be because the environment is changing or the technology is changing or both of them are changing. Reasoning at an architectural level can provide the insight necessary to make decisions and plans related to change. More fundamentally, however, the architecture partitions possible changes into three categories: local, non-local and architectural.
Aids in testing. Testers need to understand the overall system as well as all its sub-systems or components along with their interfaces to perform white box testing. The software architecture views also capture all key process interactions and the corresponding performance information that can be verified during the testing procedures.
Project management. Project managers make use of software architecture throughout the project life cycle. They use it to structure the development teams and identify the work to be performed by each team. Project managers also use the architecture information to identify the interface elements to be negotiated among the development teams. The software architecture can aid the project managers to make some of the most critical decisions such as reducing the functionality or moving the functionality to a later release.
Training. Various architectural views and viewpoints along with their high-level description of how the various sub-systems interact with each other to carry out the required behaviour often provide a high-level introduction to the system for new project members. Any new member who has been assigned to an existing software project can benefit from well documented software architecture to bring her or him up to speed. Even customers, managers, testers and operations people, not just the developers, can benefit from the architecture documentation.
The role of a software architect is quite varied. An architect typically performs several roles and is involved in several tasks which include the following:
Envisioning the software system
Being chief strategist, advisor and mentor to the development team
Mimicking the role of a customer or an actual user
His or her primary task is that of a problem solver. That is why we place a lot of emphasis on problem-solving activities in this book. Problem solving is required in the problem space, where the architect focuses on requirements understanding and specification, and in the solution space, where the activities include the following:
Coming up with various architecture viewpoints and design
Making sure that the design is realized correctly during the implementation stage
Making adjustments to the design to ease implementation challenges
Communicating it well among all the stakeholders of the system
The deliverables of an architect are typically various kinds of documents and communication material. These may include various reports and studies, specification documents, design documents, guidelines and best practices to help the development team in realizing the design through correct implementation.
An architect needs to be involved in a lot of communication activities—most importantly, listening to various stakeholders. You can find a good architect always engaged in activities such as talking, discussing, brainstorming, presenting, meeting, reading, writing, consolidating, reviewing, browsing, thinking, analysing, teaching and mentoring. An architect is expected to help the project manager with work breakdown analysis, scheduling the tasks, risk analysis and mitigation activities. An architect invests a lot of time in talking to customers, partners and various third party vendors.
One of the major challenges of an architect is to stay ahead of all others in the learning curve and have the ability to predict common pitfalls. In other words, an architect is a person who is supposed to have all the answers. That means he or she needs to invest a lot of time in order to stay ahead of the rest by reading about state-of-the-art technology and business developments, attending and contributing to conferences, and continuing to have hands-on experience so that pilot tasks, testing, integration and verification can be done before involving other members of the development team.
There are several professional certification courses and formal training programmes conducted by high-end training centres and institutions such as the SEI (Software Engineering Institute, Carnegie Mellon University) and Bredemeyer Consulting. In addition, most technology platforms, such as .NET, J2EE, SAP and Symbian, have their own training and certification programmes targeting designers and architects.
An architect should be able to see the big picture of the proposed system and the environment in which it will be operating at all times. If he or she loses sight of this big picture, several problems can creep in. In fact, this ability distinguishes an architect from others in the development team, where the focus typically is on one aspect of system development such as hardware, database design, functionality or project management.
This is the reason why it is highly recommended that the software architect be involved throughout the software development life cycle. Initially, at least till the implementation phase begins, the involvement and number of activities of the architect will be great—right through the requirements engineering phase, architecture and design phase. After the design phase, an architect is still required to make sure that her or his design is realized in the correct manner. That is when he or she can make minor modifications to the design if it helps in making implementation easier. Even during the integration and testing phase, the role of an architect becomes significant. During the maintenance phase, the architect can help in extending existing systems with newer functionality and in fixing errors.
Currently, in many software product companies, the role of a software architect is very well defined, but unfortunately the same cannot be said of most software services companies. There is also confusion about this role, as several titles and designation labels are used for the role of a software architect: Solution Architect, Systems Architect, Technical Architect, Senior Consultant, J2EE Architect, etc., none of which really captures the semantics. You can find many solution architects actually performing other design tasks such as systems architecture or domain modelling functions. This causes further confusion among engineers who aspire to become software architects.
The role of an architect in the services industry should be defined very clearly in the organizational hierarchy, and there should be a clearly defined technical career path for software engineers to become software designers and then software architects. This technical track should be recognized and compensated on par with the managerial track. Once this is in place, software engineers will have a motivation to choose this option. That motivation is otherwise largely missing, because the only career path software engineers see is to become team leaders and project managers after gaining a few years of experience, which may not involve a technical role at all.
In this section we introduce some terminology used in the software architecture literature. We will not attempt to cover the subject in depth, but this should help you connect better with the rest of the material in case you are not already familiar with the terms.
A component is a basic building block of software architecture: an encapsulated part of the software system that provides interfaces to make its services available. A component that does not implement all the elements of its interfaces is called an abstract component. A concrete component has all its interfaces implemented and can be instantiated. This distinction is similar to the one between abstract and concrete classes.
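The abstract/concrete distinction can be sketched in Python using abstract base classes. The component and method names below are hypothetical, chosen only to illustrate the idea:

```python
from abc import ABC, abstractmethod


class PaymentComponent(ABC):
    """Abstract component: declares an interface element but does not
    implement it, so it cannot be instantiated."""

    @abstractmethod
    def pay(self, amount):
        ...


class CardPayment(PaymentComponent):
    """Concrete component: implements every element of its interface
    and can therefore be instantiated."""

    def pay(self, amount):
        return f"charged {amount} to card"
```

Attempting `PaymentComponent()` raises a `TypeError`, whereas `CardPayment()` succeeds; clients program against the `pay` interface without caring which concrete component provides it.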
Another perspective is that software components conform to a component model and can be independently deployed and composed without making modifications to the rest of the system. However, one needs to follow a composition standard.
Some of the related terms are described in the following.
Component-based development refers to a situation where all or a large part of the software development is done using already available components and/or a new set of components.
CBSE is a sub-discipline of software engineering that is primarily concerned with developing software with already available software components. The focus of CBSE is on the ability to re-use the available components in other applications and the ability to maintain and customize these components to produce new functionality or features in these new applications.
In the literature you may find the terms CBD and CBSE used in almost the same sense.
A relationship is a static or dynamic connection between components. Static connections are pre-defined and hard coded, whereas dynamic connections are established during the run time, dealing with the interaction between components.
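The two kinds of connection can be sketched as follows. In this Python sketch (all names are hypothetical), the static connection is hard-coded at construction time, while the dynamic connection is resolved at run time through a simple registry lookup:

```python
class Logger:
    """A small collaborator component."""

    def log(self, msg):
        return f"log: {msg}"


class OrderService:
    """Static connection: the collaborator is fixed in the code."""

    def __init__(self):
        self.logger = Logger()  # pre-defined, hard-coded dependency


# Dynamic connection: components are registered and looked up at run time.
registry = {}


def lookup(name):
    """Resolve a component by name when it is actually needed."""
    return registry[name]()


registry["logger"] = Logger
```

With the dynamic approach, swapping in a different logger only requires changing the registry entry; with the static approach, `OrderService` itself must be modified.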
A framework defines the architecture for a family of systems and their sub-systems and provides basic building blocks to instantiate them. In an object-oriented paradigm, it typically means that there are a set of classes, both concrete and abstract, which can be sub-classed or composed. These classes define the framework.
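In an object-oriented setting, such a framework can be sketched as an abstract base class that fixes the overall control flow while leaving individual steps to be filled in by sub-classing. The class and method names below are illustrative assumptions, not from any real framework:

```python
from abc import ABC, abstractmethod


class Application(ABC):
    """Framework base class: defines the architecture of the family by
    fixing the sequence of steps, which sub-classes fill in."""

    def run(self):
        # The framework, not the subclass, controls the overall flow.
        return [self.load(), self.process()]

    @abstractmethod
    def load(self):
        ...

    @abstractmethod
    def process(self):
        ...


class ReportApp(Application):
    """One system in the family, instantiated by sub-classing the
    framework's building blocks."""

    def load(self):
        return "loaded data"

    def process(self):
        return "generated report"
```

Each new member of the system family supplies only its own `load` and `process` steps; the shared structure lives in the framework.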
An architectural view is a representation of a system or its sub-systems from a specific perspective of that system. For example, a proposed system, from the hardware infrastructure perspective, can be represented as a deployment view, and the same system from the perspective of the arrangement of all sub-systems and corresponding modules can be represented as a logical view. We will discuss these views further on several occasions subsequently in this book.
In this section, we introduce you to some of the important concepts of software architecture. This section will provide you with sufficient background to appreciate the rest of the book. However, each of the topics discussed in this section deserves in-depth study, and if you are interested we advise you to go through the pointers in the Further Reading section of this chapter.
We discuss the following facets of software architecture in the subsequent sub-sections:
Types of architectures
Architectural styles or architecture patterns
The term architecture can be interpreted differently based on what we are focusing on. In the literature of software architecture, the following types of architectures are discussed:
Enterprise architecture
Systems architecture
Software (or application) architecture
Technical architecture
Each of these terms is applicable at different layers of the model. Figure 1.1 illustrates this concept.
The main goal of enterprise architecture is to capture the business processes and workflows so that the big picture is clear for all stakeholders and the scope of the software and technical aspects are very clear. It is not necessary to build IT or software solutions to achieve the organizational goals—the overall solution may consist of people, processes and software (along with hardware).
Enterprise architecture encompasses all other types of architectures and makes sure that all of them are aligned with the overall goals and objectives of the enterprise. A typical example of an enterprise-level objective is to have the flexibility to build, buy or outsource IT solution components. Re-use across product families is another enterprise-level objective.
When we talk about software (or application) architecture at the enterprise level, we may only focus on the product line or product family specification rather than giving details of individual components or specific products. In that sense, enterprise architecture provides meta-architecture—that is, guidelines to aid in structuring the systems rather than the structure itself. Similarly, when we talk about technical architecture at the enterprise level, we need to emphasize the common platform on which various systems are being built. This includes frameworks, components, shared infrastructure and tools. If there is a planned re-use across a product family and from the common platform point of view, that needs to be specified at this level.
The term systems architecture is somewhat confusing because if we look at any type of architecture, it is a ‘system’ by itself. The entire enterprise is a system, and so are the software and technical systems. However, we would like to distinguish systems architecture from the rest mainly because of the following reasons.
The systems architecture defines the scope of the system we are trying to build.
It unambiguously defines the purpose for which the system is being built.
It helps us understand how this system interacts with the rest of the environment.
It helps us understand how the individual sub-systems interact with each other and contribute to the purpose of the system. In other words, it helps us understand the overall system before we attempt to develop or build a sub-component of this larger system.
The whole is greater than the sum of the parts—that is, the system has properties beyond those of its parts. In order to appreciate the benefits of systems architecture, we shall look at a few definitions of system:
The IEEE Std. 610.12-1990 specification defines it as ‘... a collection of components organized to accomplish a specific function or set of functions’.
The UML 1.3 specification defines the system as ‘a collection of connected units that are organized to accomplish a specific purpose. A system can be described by one or more models, possibly from different viewpoints’.
These definitions make a case for systems architecture and its role in comparison with other types of architectures. The earlier statement about the whole being more than the sum of its parts emphasizes that not only does the system perform a unique function, but it has unique characteristics or qualities that are inherent in the system and not just the parts (Rechtin, 1991).
Most of these definitions and discussions are influenced by a discipline called Systems Thinking, which has its roots in work by Buckminster Fuller and Russell Ackoff, among others. Many intellectual leaders such as Peter Checkland and Peter Senge (author of The Fifth Discipline) developed the ideas of systems thinking. Earlier we mentioned the work of Eberhardt Rechtin (1991). His book Systems Architecting: Creating and Building Complex Systems is one of the most influential works in this domain. We discuss systems thinking in greater detail later in this chapter.
The software architecture is the set of software components or sub-systems, the properties of these components (or sub-systems), and the relationships and interactions among them that define the structure of the software system. Bass et al. (2003) define software architecture as
The structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them.
By ‘externally visible’ properties, they are referring to assumptions other components can make of a component, such as the services it provides, performance characteristics, fault handling and shared resource usage.
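A minimal sketch of what ‘externally visible’ properties can look like in code; the payment service and its latency property are purely illustrative, not from the source:

```python
from abc import ABC, abstractmethod

# A hypothetical payment component: other components may rely only on
# this externally visible contract, never on its internals.
class PaymentService(ABC):
    # An externally visible quality property other components can assume.
    max_latency_ms: int = 200

    @abstractmethod
    def charge(self, account: str, amount: float) -> bool:
        """Service provided to other components."""

class SimplePaymentService(PaymentService):
    def charge(self, account: str, amount: float) -> bool:
        # Internal detail: callers see only success or failure.
        return amount > 0

svc = SimplePaymentService()
result = svc.charge("acct-42", 10.0)
```

Other components program against PaymentService alone; the concrete class can change freely as long as the visible contract holds.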
Conceptual architecture. Conceptual architecture directs attention to an appropriate decomposition of the system without delving into details. Its main purpose is to communicate the architecture to non-technical stakeholders such as management, investors, marketing or sales personnel, and sometimes to customers or users. The conceptual architecture consists of the major components or sub-systems and an informal component specification. It does not show or describe any interfaces among the components.
Logical architecture. Logical architecture provides more details about each of the components, including their interface specifications and collaboration diagrams that show the interactions between various components. It adds enough detail and precision that component developers and component users can work independently of each other. This view provides the detailed architecture, including the interfaces between components, and documents the discussions, decisions and rationale behind those decisions.
Execution architecture. Execution architecture typically provides the process view and the deployment view. The process view shows the mapping of components onto the processes of the physical or actual system. The deployment view shows the mapping of the physical components in the executing system onto the nodes of the physical system.
Figure 1.2 shows the most salient features of all these views.
Each of the software architecture views has a structural dimension as well as a behavioural dimension. These dimensions enhance our understanding of the architecture and help us in addressing these two aspects (that is structural and behavioural aspects) separately.
The structural view is central to decomposing the system into various components, their (visible) properties and their relationships. From the engineering point of view, this decomposition is very critical, and we are interested in making sure that we reduce the complexity of the system and understand the system using the structural view.
The behavioural view answers the most important question that arises after we decompose the system into a structure: given the components and their interfaces, how does the system work? This understanding is provided with the help of collaboration diagrams and sequence diagrams, which we discuss in more detail subsequently.
Technical architecture, also known as IT architecture, can be described as a formal description of an information technology (IT) system, organized in a way that supports reasoning about the structural properties of the system. Technical architecture formally describes components and their relationships. It also provides clear and specific guidelines on how to design, evolve and maintain the system.
The US army’s joint technical architecture (JTA) defines IT architecture as follows:
It is the minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements that together may be used to form an information system. Its purpose is to ensure that a conformant system satisfies a specific set of requirements.
Technical architecture defines the components or building blocks that make up an overall information system, and provides a plan from which products can be procured, and systems developed, that will work together to implement the overall system. The Open Group Architectural Framework (TOGAF) for the Open Group’s IT Architecture Development Method is greatly focused on technical architecture. We talk about TOGAF as one of the software architecture frameworks subsequently in this chapter.
In addition to enterprise, system, software and technical architecture, we hear about other types of architectures. We outline some of those which occur very commonly: data architecture, reference architecture and product line architecture.
Data architecture refers to specification of various databases (how many databases and what kind of databases) along with their logical and physical database design, allocation of data to these servers, backup/archival and data replication strategy, and the strategy and design of a data warehouse.
Data architecture is supposed to answer all questions related to structuring, storing and handling of data for a given system.
Reference architecture is an architecture defined for a particular application domain such as healthcare, insurance, banking or telecommunications. Reference architectures describe a set of high-level components along with their interactions in a given application domain.
These components need to be purposely at a high level or general so that they can be customized for a large number of applications in a given domain. Reference architectures provide an excellent framework for the development of several applications, saving a lot of time and effort for the software architects. Most of the savings come from not having to re-discover the common elements.
Product line architecture is used to define a set of products or a family of products in such a way that these products share many common components as well as design and implementation information among various teams developing these products.
Product line architecture helps in making all the members of a product family have not just a uniform look and feel; it also provides the consistency in the way they are designed, developed, tested and supported. There is a very high level of re-use if product line architectures are employed, which saves a lot of money for the organization.
Architectural drivers are the most important requirements, be they functional, non-functional or quality related. It is this combination of the most influential requirements that shapes the software architecture of the system.
For example, the security aspects of the electronic fund transfer functionality of an online banking system may be an architectural driver, whereas the ability to view the past year’s transaction data may not be an architectural driver for the same system.
In fact, identifying architectural drivers is the most critical step in the process of designing the software architecture. After exploring the problem space very thoroughly, the architect identifies the functional, non-functional and quality attributes of the system based on his or her past experience, making use of known guidelines and principles as well as the support offered by the methodology he or she subscribes to. This exercise is done both at a high level and, very concretely, at a fine-grained and detailed level. Once these attributes are discovered and documented, the most important among them are identified as architectural drivers. All the key stakeholders need to agree on the architectural drivers. These drivers determine the kind of architectural style we choose (we discuss architectural styles later in this chapter); the chosen style needs to satisfy all the architectural drivers.
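The selection step described above can be sketched as follows; the requirements, influence scores and threshold are all hypothetical illustrations, not a prescribed method:

```python
# Hypothetical requirements captured during problem-space exploration.
# 'influence' is an assumed 1-10 score agreed with stakeholders.
requirements = [
    {"name": "secure fund transfer", "kind": "quality", "influence": 9},
    {"name": "view last year's transactions", "kind": "functional", "influence": 3},
    {"name": "99.9% availability", "kind": "non-functional", "influence": 8},
    {"name": "export statements as PDF", "kind": "functional", "influence": 2},
]

# Architectural drivers: the requirements influential enough to shape
# the architecture (the threshold of 7 is an illustrative choice).
drivers = [r["name"] for r in requirements if r["influence"] >= 7]
```

The point of the sketch is only that drivers are a small, agreed subset of all requirements, not that driver selection is a mechanical scoring exercise.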
Wikipedia defines an architecture framework thus: ‘An architecture framework is a specification of how to organize and present an [...] architecture’.
An architectural framework is a thinking tool for capturing information about an organization and understanding how things work so that one can build information systems efficiently.
Any software architecture framework is expected to provide the following to aid in the process of designing software architecture:
An ability to find out all the important dependencies by critically looking at the models.
Support for decision making in a dynamic business environment as the architecture brings together the business aspects as well as the technical aspects of the system.
An ability to trace the impact of any changes at the organizational level or at the business level on the systems.
Use of an architecture framework increases the chance of success for your architecture initiative. From the many frameworks available, choose the best one for you, and augment it with components from other frameworks.
There are several software architecture frameworks that are popular. We shall discuss the following frameworks:
There are some other frameworks such as POSIX 1003.23, the Federal Enterprise Architecture Framework (FEAF) and the Gartner Enterprise Architecture Framework (GEAF). Though we will not discuss all these frameworks in this book, we need to be aware that no framework is complete. We need to choose the one that is best for the organization and, at the same time, be ready to augment it with components from other frameworks. In reality, organizations end up defining their own framework with ideas, methodologies, philosophies, tools, components and processes from a variety of sources. However, in an enterprise architecture initiative, it is recommended to describe the framework and process that are going to be used so that all stakeholders in the project understand the approach. It is better to keep this description simple, short and easy to read if you actually want people to read it.
John Zachman conceived a framework for enterprise architecture at IBM in the 1980s; it was later made public and became well known as the Zachman framework. It allows a highly structured and formal way of defining an enterprise’s systems architecture.
This framework, presented in the form of a table (Table 1.1), uses a grid model based around six basic questions (What, How, Where, Who, When and Why), each representing a column. The rows represent the stakeholder groups to whom these questions are asked. The five nominated stakeholder groups are Planner, Owner, Designer, Builder and Sub-contractor. In other words, the vertical axis represents multiple dimensions of the overall architecture; the horizontal axis classifies various artefacts produced based on the interests of particular stakeholder groups representing their perspectives. If we can fill in each cell in the grid, we will have a holistic view of the enterprise. There is alignment of cells with each other. Each cell must be aligned with the cells that are next to it both horizontally and vertically, that is, each cell is aligned to the cells left, right, above and below. There is no alignment between diagonal cells.
Table 1.1. Zachman table—Zachman framework for information system architecture (selected cells)

Planner (scope): What: list of things important to the business; How: list of processes the business performs; Where: list of locations in which the business operates; Who: list of organizations important to the business; When: list of events significant to the business; Why: list of business goals/strategies.

Owner (business model): What: semantic or entity–relationship model; How: business process model; Where: business logistics system.

Designer (system model): What: logical data model; Where: distributed system architecture; Who: human interface architecture; Why: business rule model.

Builder (technology model): What: physical data model.

Sub-contractor: detailed representations (out-of-context).
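The grid and its alignment rule can be sketched as a simple data structure; only a few illustrative cells are filled in here:

```python
# A minimal sketch of the Zachman grid as a nested mapping.
# Rows are stakeholder perspectives; columns are the six questions.
QUESTIONS = ["What", "How", "Where", "Who", "When", "Why"]
ROWS = ["Planner", "Owner", "Designer", "Builder", "Sub-contractor"]

grid = {
    "Planner": {
        "What": "List of things important to the business",
        "How": "List of processes the business performs",
    },
    "Owner": {
        "What": "Semantic or entity-relationship model",
        "How": "Business process model",
    },
}

def neighbours(row, col, rows=ROWS, cols=QUESTIONS):
    """Cells a given cell must align with: left, right, above, below.

    Diagonal cells are deliberately excluded, as in the framework."""
    r, c = rows.index(row), cols.index(col)
    out = []
    if c > 0:
        out.append((row, cols[c - 1]))
    if c < len(cols) - 1:
        out.append((row, cols[c + 1]))
    if r > 0:
        out.append((rows[r - 1], col))
    if r < len(rows) - 1:
        out.append((rows[r + 1], col))
    return out

n = neighbours("Owner", "What")
```

For the Owner/What cell, alignment is required with Owner/How, Planner/What and Designer/What, but not with any diagonal cell such as Planner/How.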
It is very difficult to imagine any aspect missing from this framework; any artefact is included in it in some way. That is why the Zachman framework is known as the most comprehensive framework. At the same time, it is so comprehensive that it takes an enormous amount of effort for organizations to populate all the cells. The Zachman framework generates a lot of documentation because of its completeness. Most of these details are hard to consume and digest, and sometimes their utility is also questionable. Hence it is recommended that only the relevant cells be populated.
The initial three dimensions of what, how and where (that is, the data, function and network aspects) were part of the original framework. The other dimensions of who, when and why (people, time and motivation) were added later, in 1993. These dimensions are not ordered; their sequence is completely arbitrary.
There is one major missing link in the Zachman framework: it does not cover the actual process of designing architecture. However, it is claimed that any architecture process can be used with the Zachman framework. Hence it is method-neutral and can be used as a complementary framework along with other frameworks. This framework per se does not include governance and communication aspects either, although John Zachman has written about these and many other architectural aspects.
The Zachman framework is considered the mother of all frameworks, a reference framework against which other frameworks can be mapped or positioned. It has become part of any enterprise or system architecture review exercise within IT departments. However, its popularity has not reached the developer and user communities.
The Open Group, whose members include Cap Gemini, HP, NEC, EDS and IBM, developed the Open Group Architecture Framework (TOGAF) to provide an approach for designing, planning, implementing and governing an enterprise information architecture. TOGAF is very actively updated and matured compared with other architecture frameworks; about 55–60 members are responsible for its development. It is currently in version 8.1.1. The versions up to 7.0 are known as technical versions, and those from version 8 onwards as enterprise versions, because TOGAF now focuses on enterprise architecture.
TOGAF provides a step-by-step method for understanding requirements and for building, maintaining and implementing enterprise architecture. The graphic of TOGAF (see Figure 1.3) is dynamic, unlike the rectangular grid of cells of the Zachman framework. It consists of a set of circles showing the progression cycle through various phases.
According to TOGAF, architecture is typically modelled at four levels or domains: Business, Application, Data and Technology. Unlike the Zachman framework, TOGAF is a process-driven framework. The central process is called the Architecture Development Method (ADM). Using this process, architects can develop the different aspects of enterprise architecture to meet the business and information technology needs of an organization. The ADM should be adapted to each organization’s needs and is then used in architecture planning activities.
The ADM process is depicted in Figure 1.3 (from the Open Group Web site).
A brief description of each of the main phases follows:
Architecture vision (phase A). First, one needs to decide what to do in this round of development, which includes determining the scope of the project and the stakeholders involved and obtaining the necessary approvals and support. The baseline (current) architecture and the target architecture have to be determined at a high level and documented in this phase.
Business architecture (phase B). Here one focuses on determining the business aspects in depth, which requires extensive modelling of the present and target architectures. Gap analysis is performed to determine what needs to be done to bridge the gap between the current system and the target system.
Information systems architectures (phase C). Here the data and application architectures are analysed in depth. The system is broken down into building blocks that may or may not yet exist.
Technology architecture (phase D). The business and information architectures created in phases B and C are implemented in phase D. It involves breaking down the main functionality into re-usable building blocks and describing these blocks with respect to the foundation architecture. Technology architecture has sub-phases that are similar to the main phases, as can be seen in the figure.
Opportunities and solutions (phase E). In this phase, one needs to determine which building blocks can be re-used, which ones must be replaced and which ones must be provided (a build-or-buy decision is also involved). TOGAF documentation says, ‘The most successful strategy for the Opportunities and Solutions phase is to focus on projects that will deliver short-term payoffs and so create an impetus for proceeding with longer-term projects’.
Migration planning (phase F). In this phase, decisions are made about the order in which the new system will be implemented.
Architecture change management (phase H). Once the system’s development is complete, we enter this phase to monitor and act upon change requests. If there is a sufficient need or an accumulation of change requests, we enter one more cycle of the ADM process.
The ADM is a cyclical and iterative process where each phase checks the requirements (central element). In addition to the requirements each phase takes input from the previous phase and creates the input to the next phase.
Information systems architecture (phase C) involves both data architecture and application architecture to some extent.
TOGAF focused only on phase D (technology architecture) until version 7; the other phases were added in version 8.
Although this process is meant to be a generic one and is very well articulated, it may not be suitable equally for all organizations. For some organizations it may be overly prescriptive.
One main criticism levelled at TOGAF is that it is ‘deliverable-agnostic’ as it is more concerned with the process of developing artefacts. There is no emphasis on the quality or the format of the artefacts.
A major advantage with TOGAF is that the group has developed several extensions to this framework. For example, there is a lot of documentation on governance, competency models for architects, and certification of products, individuals and services for TOGAF compliance.
In addition to the ADM, TOGAF also provides what is known as the enterprise continuum as a resource for developing enterprise architecture through re-usable building blocks. It defines two kinds of building blocks, architecture building blocks (ABBs) and solution building blocks (SBBs), and specifies how to develop architectures and solutions using ABBs and SBBs in a continuous and iterative fashion. The enterprise continuum is more a philosophy, composed of the Architecture and Solutions continuums. The Architecture Continuum is a set of guidelines and tools that provide direction and support for using the Solutions Continuum so that you can ultimately build the technology architecture. The Solutions Continuum consists of a set of solutions to realize the architecture, including commercial off-the-shelf solutions and solutions built within the organization.
In summary, TOGAF provides a set of foundation architectures to help architects understand the present and future needs of the architecture. In the enterprise version, TOGAF was expanded to cover the business, application and information aspects of enterprise architecture; however, the process for these is not as fully developed as the process for the technical architecture. Similarly, the relationships between the different aspects of architecture are not yet completely captured and documented.
RM-ODP is a reference model that provides a framework for specification of distributed computing systems. It is based on the current practices in the distributed processing community as well as on the use of formal description techniques for specification of architectures. Its foundations are in the information management and systems thinking disciplines. It combines the concepts of open systems in specifying distributed systems. The RM-ODP model is found to be very useful by the entire architecture community and is being widely adopted.
The RM-ODP framework is the origin of the famous ‘4+1 view’ model. The RM-ODP framework guides the modelling process of systems architecture by giving five viewpoints that are considered essential. It provides the following explanation for each of the viewpoints (RM-ODP, 2007).
The enterprise viewpoint is concerned with the purpose, scope and policies governing the activities of the specified system within the organization of which it is a part. This viewpoint captures enterprise policies such as permissions given, prohibitions and obligations.
The information viewpoint is concerned with the kinds of information handled by the system and constraints on the use and interpretation of that information. It captures static, invariant and dynamic schemas of the information flowing through the system.
The computational viewpoint is concerned with decomposing the system functionally into a set of objects that interact at interfaces, enabling system distribution. The computational viewpoint captures interfaces among various sub-systems and their behaviour using object encapsulation methods.
The engineering viewpoint is concerned with the infrastructure required to support system distribution.
The technology viewpoint is concerned with the choice of technology to support system distribution.
Several profiles of RM-ODP are in use today, derived from its representational aspects. The famous ‘4+1 View Model’, shown in Figure 1.4, is one of the best-known RM-ODP profiles. This framework provides a viewpoint-based approach to architecture (Kruchten, 1995).
The viewpoints given by the 4+1 View Model are the following.
Logical: The logical representation of key packages and sub-systems. This view supports mostly the functional requirements and represents the problem domain. Here the system is decomposed into several key abstractions in the form of objects and classes by exploiting the principles of abstraction, encapsulation and inheritance.
Process: The process view is concerned with non-functional requirements such as availability, performance, reliability and security. The focus is on concurrency, distribution and fault tolerance. The process view addresses questions such as how various OS threads, tasks or processes communicate with each other. It can be described at several levels of abstraction, and at each level the issues of concern and the amount of detail are different.
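The process view’s central concern, how threads communicate, can be illustrated with a tiny sketch (the task names are made up):

```python
import queue
import threading

# A minimal sketch of the process view's concern: two threads
# communicating through a queue rather than shared state.
inbox = queue.Queue()

def worker():
    # Receives one request from another thread and replies to it.
    request, reply_q = inbox.get()
    reply_q.put(f"done: {request}")

t = threading.Thread(target=worker)
t.start()

reply_q = queue.Queue()
inbox.put(("resize image", reply_q))   # illustrative task name
answer = reply_q.get()
t.join()
```

An architecture document would describe such interactions at a higher level (which processes exist, what they exchange, and how failures are handled) rather than in code.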
Implementation: The implementation or development view describes how the actual software is implemented (for example, code, directory structure, library structure). The software system is packaged as a set of sub-systems, each of which is developed by a small team of engineers. Each sub-system provides interfaces to other sub-systems, and these are typically arranged in layers, where each layer talks only to its neighbours.
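The neighbours-only layering rule can be sketched as a simple check; the layer names are illustrative:

```python
# A minimal sketch of a layered implementation view: each layer may
# call only the layer directly beneath it. Layer names are illustrative.
LAYERS = ["presentation", "business", "persistence"]

def may_call(caller: str, callee: str) -> bool:
    """Allow a call only from a layer to its immediate lower neighbour."""
    return LAYERS.index(callee) - LAYERS.index(caller) == 1

ok = may_call("presentation", "business")        # immediate neighbour
skip = may_call("presentation", "persistence")   # skips a layer
```

Real projects often enforce such rules with build tooling or static analysis rather than at run time; the point here is only that the layering constraint is checkable.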
Deployment: The deployment or physical view describes how actual processes are instantiated and deployed on physical hardware. This viewpoint is also primarily focused on non-functional requirements. This view shows the mapping of the software to different process nodes (such as a computer) and therefore it is expected that the deployment viewpoint should be very flexible in terms of its impact on the source code—that is, one should be able to decouple software packages that can be deployed on a given process node.
Use case: This view captures all important scenarios and in general the use cases of the system, which is essentially the behaviour of the system. This is the additional view (the ‘+1’) of the system, and strictly speaking this is a redundant view (and hence the name) if all other views are given. But it is useful in discovering all the architectural elements and also in validating the systems at various stages as it is being built. That is how most test cases are derived from the use cases of the system.
In summary, the use case view models enterprise objects through a set of scenarios. The logical view captures objects, classes, packages and the relationships among them. The process view models the control flow and communication among objects. The structure of the software and its various modules is captured by the implementation view, and the deployment view maps the software onto physical nodes. Thus, the 4+1 views cover all aspects of the architecture.
Within each viewpoint, the RM-ODP approach uses formal notations (or specification languages) that support architecture description.
One of the most useful notations for specifying computational viewpoints is the ODP interface definition language (ODP IDL). ODP IDL is a related international standard that is identical to CORBA IDL. ODP IDL enables the specification of object encapsulations that can be implemented on multiple infrastructures, such as CORBA, Microsoft COM and the Adaptive Communication Environment (ACE).
Since ODP IDL is programming-language independent, a single interface specification suffices to define inter-operable interfaces for C, C++, Ada95, COBOL, Smalltalk, Java and Microsoft IDL. These mappings are defined by open system standards and supported by commercial products. Another useful notation for describing architecture viewpoints is the Unified Modeling Language (UML). Throughout this book we make extensive use of UML in designing architectures.
RUP is also a variation of RM-ODP, and it makes use of the 4+1 View Model as described above as a foundation. RUP benefited from contributions of various high-profile researchers including Grady Booch, James Rumbaugh and Ivar Jacobson. The RUP framework in turn contributed to creating the Unified Modeling Language (UML).
The use case view is most important in RUP. In addition to all important scenarios and use cases, RUP recommends documenting what it calls ‘use case realization’. The purpose is to give the solution outline along with the problem description, that is, for a few important use cases or scenarios, to illustrate how the software actually works by giving the corresponding realizations. It explains how the functionality is achieved by various design model elements.
In addition to the other four views (logical, process, implementation and deployment), the RUP documentation consists of an additional and optional data view. The data view gives a description of any persistent data storage perspective of the system. For systems where the persistent data are not important, deriving the data model from the logical model is trivial. In these cases, the data model can be ignored.
The RUP documentation also recommends describing important characteristics of the software that impact the performance targets and the architecture. This description is presented along with the quality attributes and the non-functional attributes of the system.
Mary Shaw and David Garlan, in their book on software architecture (Shaw and Garlan, 1996), described several architectural styles with the following definition: ‘Architectural styles are recurring organizational patterns and idioms’. They are the established and shared understanding of common design forms. The Pattern-Oriented Software Architecture (POSA) community popularized them as architecture patterns, similar to design patterns. For POSA practitioners, an architectural style is an abstracted and re-usable characteristic of the recurring composition and interaction found across several architectures. The difficulty with architectural patterns is that although they occur regularly in systems design, they occur in ways that make it difficult for us to detect them. The main reason for this difficulty is that each domain uses different names to refer to the same elements. To address this problem, architectural patterns are being catalogued, and more such patterns are getting added to this catalogue.
Each architectural style will have some basic properties. These properties define the characteristic of the style. An architectural pattern is determined by the following.
Vocabulary of the design elements. What are the component and connector types? Which of the components are storage types and which compute a function, and so on.
A set of configuration rules. How can these components be composed? What are the topological constraints in connecting these components? Which are valid compositions and which ones are not?
A semantic representation. Every composition of design elements will have well-defined meanings. Each component and connector will have semantic constraints on what it can do and what it cannot.
Built-in analysis within the pattern. Some patterns have analyses of the system built in; for example, code generation is one such built-in analysis.
There are several benefits of architectural patterns. Besides design-level and code-level re-use, they greatly contribute to the understanding of system-level organization. For example, the moment one describes a system as having a ‘three-tier architecture’, a lot of information is conveyed. Architectural patterns also contribute to the inter-operability of components: as the styles are standardized, it becomes possible to build components that work across platforms, as can be seen in CORBA or EJB. Another large benefit of architectural patterns is that of pattern-specific analyses and visualizations of the system. These pattern-specific depictions match the mental models of engineers, so that they are able to relate to, work with and enhance new architectures quickly.
Architecture patterns can be classified into two major categories: domain independent and domain dependent. Domain-independent styles deal with architectural characteristics that are globally applicable organizational characteristics and are popularly known as idioms or simply patterns.
The domain-dependent architectural styles are also known as reference models, as they capture specific configurations for specific application areas. Reference architectures typically constitute the architectural styles that are applicable to specific domains. This does not mean that they are not applicable outside their initial domains; it is possible that they might be useful in completely different domains. RM-ODP was initially designed for distributed processing, but we saw that many of its ideas are useful in non-distributed computing environments as well. Similarly, if we develop a reference model for a specific domain such as insurance, some of its styles can be useful in a different domain such as telecommunications.
The POSA community categorized domain-independent architectural patterns into the following types.
Structural patterns. This category provides ways to sub-divide the system into sub-systems. It includes patterns such as pipes and filters, blackboard and layered architectures.
Distributed patterns. Patterns in this category help address the needs of distributed systems as opposed to centralized systems. Popular distributed patterns include peer-to-peer architectures and broker architectures. Please note that patterns such as pipes and filters and micro-kernel, which are listed under other categories, can also help in creating distributed systems (this is true of other categories as well). The broker pattern helps in structuring software systems whose decoupled components communicate through remote procedure calls coordinated by a broker.
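A minimal in-process sketch of the broker idea (not a real middleware API; the service name and handler are invented):

```python
# A toy broker: clients and servers are decoupled and communicate
# only through the broker, which locates the server and forwards
# the request, standing in for marshalling and a remote call.
class Broker:
    def __init__(self):
        self._servers = {}

    def register(self, service_name, handler):
        self._servers[service_name] = handler

    def call(self, service_name, *args):
        return self._servers[service_name](*args)

broker = Broker()
broker.register("quote", lambda symbol: {"symbol": symbol, "price": 101.5})
reply = broker.call("quote", "ACME")
```

The client never holds a direct reference to the server object; it knows only the broker and a service name, which is what allows the server to be relocated or replaced.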
Interactive patterns. Model–view–controller and presentation–abstraction–control patterns are popular interactive patterns which help in building systems that have very important user interface elements that should be largely independent of the functional core and the underlying data model.
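A toy model–view–controller sketch, assuming a simple counter application; all class names are illustrative:

```python
# Model: holds data and notifies observers; knows nothing of views.
class CounterModel:
    def __init__(self):
        self.value = 0
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

    def increment(self):
        self.value += 1
        for fn in self._observers:
            fn(self.value)

# View: renders model state; holds no business logic.
class CounterView:
    def __init__(self):
        self.rendered = ""

    def render(self, value):
        self.rendered = f"count = {value}"

# Controller: translates user input into model operations.
class CounterController:
    def __init__(self, model):
        self._model = model

    def on_click(self):
        self._model.increment()

model, view = CounterModel(), CounterView()
model.subscribe(view.render)
CounterController(model).on_click()
```

Because the model publishes changes rather than calling the view directly, the functional core stays independent of the user interface, which is exactly the separation the pattern aims for.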
Adaptable patterns. Systems evolve over time with new functionality and modifications to existing functionality. Adaptable patterns such as micro-kernel and reflection help design systems that accommodate such changes. A micro-kernel pattern separates a minimal functional core from extended functionality and customer-specific parts. The reflection pattern splits the software into a meta-level and a base level, where the meta-level makes the software self-aware and the base level includes the core functionality. Changes made to the meta-level information result in changes to the base-level behaviour.
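A minimal micro-kernel sketch, assuming a plug-in style core; the extension name is illustrative:

```python
# A toy micro-kernel: a small core dispatches to pluggable extensions,
# so customer-specific parts can be added without touching the core.
class Kernel:
    def __init__(self):
        self._extensions = {}

    def plug(self, name, extension):
        self._extensions[name] = extension

    def invoke(self, name, payload):
        return self._extensions[name](payload)

kernel = Kernel()
kernel.plug("uppercase", str.upper)   # an extension added after the fact
result = kernel.invoke("uppercase", "report")
```

The core never changes when a new extension arrives; evolution happens entirely at the plug-in boundary, which is the property the pattern is chosen for.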
We will have an opportunity to discuss interactive patterns and adaptable patterns (chapters 3, 4 and 6) later in this book. In this section we discuss the following structural and distributed patterns:
pipes and filters
layered abstract machines
distributed peer-to-peer systems (including special cases such as client–server and master–slave).
In the pipes and filters pattern, components are filters and connectors are pipes. Each filter takes input in some form and produces output in some other form, which may or may not be similar to the input form. Hopefully, each filter will add value to the output stream because of processing done inside the filter. Each filter is independent and is unaware of the up and down stream filters. Pipes are conduits of the data streams. Famous examples of the pipes and filters architectural pattern are UNIX shells, signal processing systems and distributed systems.
There are several variations within this pattern. A linear sequence of filters is known as a pipeline. Bounded pipes limit the amount of data on a pipe. Typed pipes impose strong typing constraints on the data they convey. In batch sequential pipes, data streams are not incremental.
Figure 1.5 provides an example of pipes and filters. In compiler design, typically the source code (input text or a set of strings) passes through a series of filters before the object code is generated. These filters include the preprocessor, lexical analyser (lexer), parser, semantic processor, optimizer and finally the code generator. The text preprocessor removes noise such as comments and processes macros and include files. The lexer consumes characters to compose valid tokens, and the parser builds a parse tree to check whether the input text is syntactically correct. The semantic processor checks whether the statements are meaningful. The optimizer performs code optimization, and finally the code generator produces the object code. In this diagram, each of these processors is shown as a filter, and the pipes show the intermediate data that flow between two filters.
The main advantage of the pipes and filters pattern is that the behaviour of the system is the sum of the behaviour of the individual components, and that makes the system easy to manage. It allows a high degree of reuse, replacement and addition of filters. It supports concurrent processing very naturally, as filters are independent of each other. However, it also has serious disadvantages because it is structured in the form of batch processing. It can introduce bottlenecks, because overall throughput is limited by the slowest filter and by the cost of data transmission between filters. The pipes and filters pattern is not suitable for interactive applications.
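The pipes and filters idea can be sketched in a few lines. The following is a minimal illustration, not taken from any particular system; the filter names (`strip_comments`, `tokenize`) are invented, and Python generators stand in for the pipes:

```python
# A minimal pipes-and-filters sketch using Python generators.
# Each filter consumes an input stream and yields a transformed stream;
# filters stay independent and unaware of their neighbours.

def source(lines):
    """Feed raw lines into the pipeline."""
    for line in lines:
        yield line

def strip_comments(stream):
    """Filter: drop everything after a '#' and discard blank lines."""
    for line in stream:
        text = line.split("#", 1)[0].strip()
        if text:
            yield text

def tokenize(stream):
    """Filter: split each line into individual tokens."""
    for line in stream:
        for token in line.split():
            yield token

# The 'pipes' are simply the chained generators.
pipeline = tokenize(strip_comments(source(["x = 1  # init", "", "y = x"])))
print(list(pipeline))  # ['x', '=', '1', 'y', '=', 'x']
```

Because each filter only sees a stream, any filter can be replaced or a new one inserted without touching the others, which is the reuse property described above.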
The blackboard pattern is used for problems for which there is no known deterministic solution. In the blackboard solution strategy, there are several sub-systems, each working with its own strategy to create a solution.
There are two kinds of components in the blackboard architectural pattern. One is the central data structure called the blackboard, and the other is a set of processors operating on this blackboard. Many artificial intelligence (AI) systems, integrated software environments and compiler architectures employ the blackboard architectural pattern. The system's control is entirely driven by the blackboard state.
The compiler design example that we have shown for the pipes and filters architectural pattern can be re-designed using the blackboard architectural style shown in Figure 1.6.
The advantages of blackboard architectures are that they allow experimentation, easy modification and maintainability, and provide support for fault tolerance. However, there are some disadvantages. Blackboard architectures are not efficient, they are difficult to test, there is no guarantee of a good solution, and the cost of development is high.
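As a rough illustration of this control style, here is a minimal blackboard sketch; the classes and the toy doubling problem are invented for this example, and real blackboard systems use far richer knowledge sources and control strategies:

```python
# A minimal blackboard sketch: knowledge sources inspect the shared
# blackboard and contribute partial results until a solution emerges.

class Blackboard:
    def __init__(self, data):
        self.data = data  # shared state all knowledge sources read/write

class KnowledgeSource:
    def can_act(self, bb): ...
    def act(self, bb): ...

class Doubler(KnowledgeSource):
    """Fires while the value is below a threshold."""
    def can_act(self, bb):
        return bb.data["value"] < 50
    def act(self, bb):
        bb.data["value"] *= 2

class Finisher(KnowledgeSource):
    """Declares the problem solved once the threshold is reached."""
    def can_act(self, bb):
        return bb.data["value"] >= 50 and not bb.data["done"]
    def act(self, bb):
        bb.data["done"] = True

def control_loop(bb, sources):
    # Control is driven entirely by the blackboard state: whichever
    # source can act on the current state gets to run next.
    while not bb.data["done"]:
        for ks in sources:
            if ks.can_act(bb):
                ks.act(bb)
                break

bb = Blackboard({"value": 3, "done": False})
control_loop(bb, [Doubler(), Finisher()])
print(bb.data["value"])  # 96
```

Note that no source calls another source directly; they cooperate only through the shared blackboard, which is what makes the strategy easy to modify but hard to analyse.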
While capturing a complex system's functionality, we typically divide the functionality into several layers of abstraction. Layering means grouping functionality in an ordered fashion. In other words, a layered architecture partitions the functionality into separate layers stacked vertically, each layer interacting only with the layer beneath it. Such layering is called strict layering. There are non-strict layering architectures where this restriction of ordered interfacing is not followed, often sacrificing the benefits of layering; they do, however, impose less overhead.
As shown in Figures 1.7a and 1.7b, layering is typically done by packaging application-specific functionality in the upper layers, deployment specific functionality into lower layers and the functionality that spans across the application domains in the middle layers. The number of layers and how these layers are composed is determined by the complexity of the problem and the solution.
In most layered architectures you will find the following layers:
The application layer. This layer contains the application-specific services.
The business layer. This layer captures several business-specific services or components that are common across several applications.
The middleware layer. This layer packages several functions such as GUI builders, interfaces to databases, platform-independent operating system services, reports, spreadsheets and diagram editing libraries and so on.
The database or system software layer. This bottom layer contains operating systems, databases and interfaces to special hardware components.
Depending on the complexity of the system, the business layer or the middleware layer can be split further into several layers. This helps in achieving better abstraction and clarity. However, there is generally only a single application-specific layer. If the problem space is well represented in the business layer, the solution space is well supported by middleware-layer libraries. If the software system needs to interface with several diverse hardware devices, then there is a need to organize the system software layer, which means that it will have well-developed lower layers, with perhaps several layers of middleware and system software.
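A strict layering discipline, where each layer talks only to its immediate neighbour below, can be sketched as follows; the four classes and their methods are invented stand-ins for the layers described above:

```python
# A strict-layering sketch: each layer holds a reference only to the
# layer immediately below it and exposes a narrow interface upward.

class SystemSoftwareLayer:
    def read_record(self, key):
        # Stand-in for a database or OS-level call.
        return {"key": key, "raw": key.upper()}

class MiddlewareLayer:
    def __init__(self):
        self._db = SystemSoftwareLayer()   # knows only its neighbour below
    def fetch(self, key):
        return self._db.read_record(key)["raw"]

class BusinessLayer:
    def __init__(self):
        self._mw = MiddlewareLayer()
    def customer_name(self, key):
        # Business rule: names are presented in title case.
        return self._mw.fetch(key).title()

class ApplicationLayer:
    def __init__(self):
        self._biz = BusinessLayer()
    def greet(self, key):
        return "Hello, " + self._biz.customer_name(key)

print(ApplicationLayer().greet("alice"))  # Hello, Alice
```

Because `ApplicationLayer` never touches `SystemSoftwareLayer` directly, the storage representation can change without any edits above the middleware layer, which is the modularity benefit discussed next.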
The layered architecture style is very popular among development teams in the current software development environment. There are several advantages with layered architectures which are behind its popularity. They include the following:
There are different semantics associated with the functionality of the system. With the layered approach, it is possible to define different semantics at each level separately.
The layered approach aids in making the system more modular. Modularity helps separate local problems and address them more efficiently. This helps in achieving a high degree of cohesion within each layer while keeping coupling between layers low. Low coupling with high cohesion is always a desired design principle.
Because of the modularity most layers end up embodying well-defined abstractions.
If you take a layered approach, you can scale up the application very naturally. You can always re-structure the layers by splitting a bulky layer into several smaller layers if that helps in achieving a better abstraction and efficiency.
Another major advantage with a layered structure is that you do not have to bother about interfaces of non-neighbouring layers. While designing any layer, the architect or the designer needs to look at the interfaces provided by only its neighbours.
The layered architecture style also has disadvantages. The two main disadvantages are that each layer adds additional overhead and that it is often difficult to define clear and independent layers. First, each layer needs to be constructed in such a way that it exposes proper interfaces to its neighbouring layers. As data or processing moves from one layer to another, there are many overheads, including marshalling, un-marshalling, encryption and decryption. Second, it is not always possible to define layers independently. For example, a change as small as adding one edit box in the user interface may require changes in every layer, from the presentation layer down to the database layer. That means, for any change, small or big, it may be necessary to make changes in almost all the layers, which is not a very desirable situation.
Layered architectures are often realized as N-tier architectures. However, there are some differences between these two architectures. The first major difference is that in an N-tier architecture each tier is independent of the other tiers, whereas a layer is dependent on its upstream or downstream layers. Layering is ordered, and N-tier architecture does not impose ordering of the tiers. A non-strict layering can be loosely termed as an N-tier architecture.
A two-tier architecture represents the client–server model. A typical three-tier architecture consists of a presentation layer, application/business layer and data source layer. A four-tier system consists of a presentation layer, user session layer, application/domain layer and data source layer. This kind of architecture is needed to support the advanced behaviour required, which is achieved by abstracting out partial functionality from the business layer and packaging it as a user session layer. A four-tier model is like a three-tier model where the application layer is split into two. Similarly, a typical five-tier architecture may add either a workflow layer on the client or a business rule layer on the server (or both) to the four-tier architecture (Figure 1.8).
Given all these architectural patterns, it is important to note that different architectural patterns result in architectures with highly different characteristics. It does not mean that all architectures based on the same pattern will be the same. A given architectural pattern may result in different architectures; the pattern alone does not fully determine the resulting architecture. Even after adopting a particular architectural style or pattern, there is sufficient room for individual judgement and variation brought in by the individual architect. Customer requirements and customer interaction also bring different emphases to the resulting architecture.
There are still many open issues in architectural patterns. In practice, the use of patterns is generally ad hoc. The boundary between the system and the pattern is not very clear. That is, it is difficult to delimit the system aspects that can be or should be specified by the pattern. It is often very difficult to compare the systems that are based on the different patterns and similarly compare two different patterns based on their properties. It is also not very clear how existing architectural patterns can be combined to form a new pattern though we know that most actual systems are based on combining multiple architectural patterns.
This is a generic style of which popular styles are the client–server and master–slave styles. In the distributed peer-to-peer pattern the components are independent objects or programs that offer public services or consume these services. Connectors in this style are the remote procedure calls (RPCs) over the computer networks.
The client–server pattern is a specialized case of the peer-to-peer style where there are only two peers: a server, which offers the service, and the client, which uses the service. In this pattern, the server interacts with several clients but it does not know identities or the number of its clients. Each client knows the identity of the server. In the client–server pattern, the components are clients and servers, and the connectors are the RPC-based interaction protocols.
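A client–server interaction over RPC can be sketched with the Python standard library's XML-RPC support; the `add` service is an arbitrary choice for this illustration, and a real deployment would run client and server in separate processes:

```python
# A minimal client-server sketch using the standard library's XML-RPC
# support: the server offers a service, and clients invoke it by RPC.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# The server offers a public service; it does not know in advance
# which clients, or how many, will call it.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")

# Serve requests on a background thread so the client can run below.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client knows the server's identity (its address) and calls it
# as if it were a local object; the RPC protocol is the connector.
host, port = server.server_address
client = ServerProxy(f"http://{host}:{port}")
print(client.add(2, 3))  # 5
```

Port 0 asks the OS for any free port, so the sketch runs without a fixed address; the point is only that the call `client.add(2, 3)` travels over an RPC connector rather than a direct function call.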
In the master–slave architectural style, another variation of the peer-to-peer pattern, the master component organizes work into distinct sub-tasks, and the sub-tasks are allocated to isolated slaves. Slaves report their results to the master, and the master integrates the results. Examples of the master–slave architectural style include process control systems, embedded systems and fault-tolerant systems.
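The master–slave division of work can be sketched with a standard thread pool; the summing task and the chunking scheme are invented for illustration:

```python
# A master-slave sketch: the master splits work into sub-tasks,
# slaves compute independently, and the master integrates the results.

from concurrent.futures import ThreadPoolExecutor

def slave_task(chunk):
    """Each slave works on its own chunk in isolation."""
    return sum(chunk)

def master(data, n_slaves=3):
    # The master organizes the work into distinct sub-tasks...
    chunks = [data[i::n_slaves] for i in range(n_slaves)]
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        # ...allocates them to slaves running concurrently...
        partials = list(pool.map(slave_task, chunks))
    # ...and integrates the slaves' reported results.
    return sum(partials)

print(master(list(range(1, 101))))  # 5050
```

The slaves never communicate with each other, only with the master, which is what distinguishes this style from general peer-to-peer interaction.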
The main advantages of the distributed peer-to-peer architectural style are that it enables inter-operability, re-usability, scalability and distributability. It is a natural choice for high-level heterogeneous distributed systems. This style is typically used in making the legacy systems re-usable by providing new interfaces to them. They also help in scaling by creating powerful servers that can support several clients. The main disadvantage with peer-to-peer systems is that they are very difficult to analyse and debug. This style is also criticized for being too simple in providing inter-operability mechanisms (such as legacy wrappers, data marshalling and un-marshalling, proxies and stubs for RPCs) unlike layered architectures.
Anti-patterns are counterparts to the patterns we have discussed. These are the patterns to be avoided. If we study them and are able to recognize them, then we should be able to avoid them. If we are not aware of anti-patterns, we risk repeating the mistakes others have made several times. By formally capturing the repeated mistakes, one can recognize the symptoms early and work towards avoiding the problem situation. Anti-patterns tell us how we can go from a problem to a bad solution: they look like good solutions, but when applied they backfire. Knowing bad practices is perhaps as valuable as knowing good practices. With this knowledge, we can re-factor the solution in case we are heading towards an anti-pattern. As with patterns, anti-pattern catalogues are also available. Books on anti-patterns (including Anti-Patterns: Re-factoring Software, Architectures, and Projects in Crisis, Anti-Patterns and Patterns in Software Configuration Management and Anti-Patterns in Project Management) and Web sites such as the Portland Pattern Repository's Wiki give several anti-patterns related to design, development, architecture, project management, etc. We will have the opportunity to discuss some popular anti-patterns such as Architecture by Implication, Cover the Assets, Design by Committee, Reinvent the Wheel, Spaghetti Code, Stovepipe Enterprise, Stovepipe System, Swiss Army Knife and Vendor Lock-In later in this book.
It is interesting to know about some of the popular anti-patterns relating to the way architects design architectures (not about specific architectural anti-patterns). Portland Pattern Repository’s Wiki lists some of them.
Over-generalized interfaces. This anti-pattern is found when architects become overly ambitious and try to design a system with too much flexibility. Such attempts often result in systems that are very flexible but almost impossible to maintain. A good example of this anti-pattern is designing all interfaces using text strings or XML messages. Since there are no explicit contracts between the interfaces, these systems end up being very hard to maintain.
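To make the contrast concrete, here is a hypothetical sketch: a stringly-typed `handle` function that accepts any JSON blob, next to an interface with an explicit `Order` contract. Both functions and names are invented for this illustration:

```python
# Contrasting an over-generalized interface with an explicit contract.

import json
from dataclasses import dataclass

# Anti-pattern: everything goes through one generic string interface.
# Nothing in the signature says what the message must contain, so every
# caller must guess, and a malformed message fails only at runtime.
def handle(message: str) -> str:
    data = json.loads(message)
    return json.dumps({"total": data["price"] * data["qty"]})

# Explicit contract: the interface states exactly what it needs
# and what it returns, so misuse is caught early and locally.
@dataclass
class Order:
    price: float
    qty: int

def total(order: Order) -> float:
    return order.price * order.qty

print(handle('{"price": 2.5, "qty": 4}'))  # {"total": 10.0}
print(total(Order(price=2.5, qty=4)))      # 10.0
```

Both versions compute the same result, but only the second tells its callers, in the interface itself, what the valid inputs are; that is the contract the anti-pattern throws away.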
Too many buzzwords. It is very common to find architects using the latest buzzwords (such as SOA, SaaS, MDA, .NET, XML) in their architectures even when they do not fit into their scheme of things. This can only result in the creation of marchitectures (marketing architectures; more on them in the next chapter), not architectures. It does not help in solving problems. In fact, marchitectures work against creating effective solutions.
Product architecture. It is also common to find architects who are very familiar with one platform or product and tend to create all their architectures very close to the architecture or design of that platform or product. This again results in just a marketing architecture and will be of no use in solving problems.
FUD architecture. Some architects are afraid of being wrong and worried that their architecture will be changed later. Because of this fear, they cannot commit to anything concrete and end up with a design that actually solves nothing.
We will discuss anti-patterns further in Chapter 3.
Software architecture is actually a discipline of problem solving. As a software architect progresses from being an expert in platform-specific technology know-how to an expert in platform-independent technology know-how, she or he masters several problem-solving techniques. An architect needs to be aware of the environment and recurring problems and solutions that are actually working. Once this awareness sets in, documenting, internalizing and applying problem-solving techniques readily distinguishes a good architect from others. The design and architecture patterns community encourages this kind of an approach.
In this section, we describe a problem-solving approach in general that is inspired by Polya’s famous work How to Solve It and discuss how systems thinking discipline can help you master problem-solving techniques.
George Polya was a Hungarian mathematician who spent his later days characterizing the general methods that people use to solve problems. He spent a lot of time in describing how to teach and learn problem solving. His book How to Solve It became a classic and is recommended highly not just for students of mathematics but for almost all engineering and science disciplines. His other books include Mathematics and Plausible Reasoning Volume I: Induction and Analogy in Mathematics and Mathematics and Plausible Reasoning Volume II: Patterns of Plausible Reasoning. Both these books address the subject of problem solving as well.
How to Solve It is a small volume describing problem solving. It provides general heuristics for solving problems of all kinds. The book starts with advice for teaching mathematics to the students in a classroom. It also describes a four-phase technique to solve any problem (Polya, 1956), which is the core of this book:
Understanding the problem (What is the unknown? What are the data? What are the conditions?)
Devising a plan (Have you seen the problem before? Have you seen the same problem in a slightly different form?)
Carrying out the plan (Can you see clearly that each step is correct?)
Examining the solution obtained (Looking back, how could it be better?)
If this technique fails, Polya advises, ‘If you can’t solve a problem, then there is an easier problem you can solve: find it’. A major part of this book is a dictionary (a catalogue) of heuristics. This book is a must-read for every budding architect.
If you look closely at the technique described by Polya and the process of designing architecture, they are very similar. Architects can learn a lot of techniques from Polya’s approach to problem solving.
The key in both domains is to acknowledge that there are different kinds of problems and different kinds of solutions. As you move from the problem space to the solution space, at each step you must make informed choices to match problems with solutions. You need to be careful while making decisions that impact the overall system organization.
All design decisions require a proper understanding of the problem as well as the solution. This means not just the understanding of various dimensions of the requirements but also the limitations of various design or solution alternatives. There are no silver bullets or magic recipes for good design.
One needs to understand various parts of the problem and distinguish between what is known, what is unknown, what the data are, what the process or functionality is and what the conditions are. It is recommended that suitable notations be used and figures drawn. All these techniques help architects understand and represent the problem correctly.
Problem analysis or the problem frames approach (Jackson, 2001), designed based on the ideas discussed, is used while gathering requirements and creating the specifications document. The problem frames technique enables an understanding of the problem context. It recommends drawing context diagrams, problem diagrams and various kinds of problem frames. A problem frame is similar to a problem pattern. It captures an abstraction of a class of problems which in turn guide you in finding an appropriate class of solutions.
After understanding various aspects of the problem space, we will have a better clarity regarding what kind of solutions we need to look for. Problems require a context, which is the subject matter of the solution that needs to be developed. The problem needs to capture all relevant parts of the real world, relationships between these parts and the properties of these parts. Most often, the knowledge about the problem domain exists in a very non-formal and ad hoc fashion. As the architect is learning about the domain, he or she needs to represent this ad hoc knowledge of the domain in a more formal description and ideally in a form that is similar to the problem frames because such a description brings the problem closer to possible solutions.
The solution characterizes the proposed software system or the machine that solves the problem. The solution space includes the software architecture, design patterns and programming idioms.
The most important contribution made by the problem-solving guide to the software architecture process is mapping the known problems to known solutions. That is devising a plan in Polya’s words. It encourages the architect to think about the problem in terms of a set of known problems and to check if the new problem is actually one of the old problems but in a slightly different form. It guides the architect while looking at the unknown aspect of the problem to look at all the known problems that have the same or similar unknown. If there are solutions for problems that are similar to the ones you are solving, it asks you to think if an old solution can be used. In case the entire solution is not useful as it is, it encourages you to consider if its result, or its method or even a part of this solution, is applicable in the new context. It also asks you to consider augmenting the old solution with some auxiliary element so that it can be useful. In addition it says, ‘If you cannot solve the proposed problem try to solve first some related problem’ (Polya, 1951, p. xvii).
Polya recommends transforming the proposed problem into one of the known and more accessible related problems. This can be done by:
generalizing the given problem
making the given problem a more special one
searching for a more analogous problem.
Other techniques recommended are:
solving a part of the problem instead of trying to solve it in its entirety
varying the given conditions of the problem by keeping only a part of the condition
determining the unknown by studying how it varies
deriving something useful from the data
looking for other appropriate data which may be available elsewhere to understand what is unknown in the current scenario
changing the unknown so that the data and the new unknown are closer to each other
changing the data so that the new data and the unknown are closer to each other
changing both the data and the unknown so that new data and the new unknown are closer to each other.
There are several heuristics related to the craft of software architecture that are inspired by Polya’s dictionary of problem-solving heuristics. For example, many architecture and design patterns are based on the principles of generalization, decomposing and recombining. Many successful architects document and apply their own personal library of heuristics.
Most real-world systems are very complex, and they are increasingly becoming more chaotic. There seems to be randomness in the way these systems function. In spite of making use of very sophisticated methodologies and techniques to understand the characteristics of these real-world systems, we are unable to explain the systems' behaviour in a convincing manner. Some systems are successfully implemented and some are not, both unfortunately for the wrong reasons. This is a constant feeling an architect or a system builder gets while building complex systems. There seems to be something missing.
We are trained to think in an analytical manner throughout our education. We call it rational thinking or adaptive thinking. Adaptive thinking reacts to an event and looks for linear causal relations, and thus often misses the big picture, that is, the environment in which the event occurs. This kind of thinking deals with independent variables of the system.
Systems thinking is a more scientific problem-solving approach than the rational thinking approach as it deals with inter-dependent variables and looks for non-linear or circular causal relations. Hence, systems thinking is a realistic and useful method for solving problems. Systems thinking is a way of looking at real-world systems for understanding them and possibly for improving them. However, it may be important to note that in an absolute sense systems do not exist—we use this term in a more metaphorical sense to give various perspectives on real-world problems. Systems thinking is increasingly becoming a profound problem-solving tool in software architecture.
To understand systems thinking, let us look at some of its basic concepts.
A system is a group of parts that work together to achieve a common goal. Parts are generally systems themselves and are composed of other parts, and this definition can be applied recursively. There is a concept of the connectedness between the parts of the system. Sherwood (2002) defines this connectivity in his famous book Seeing the Forest for the Trees: a Manager’s Guide to Applying Systems Thinking as follows:
If you wish to understand a system, and so be in a position to predict its behavior, it is necessary to study the system as a whole. Cutting it up into bits for study is likely to destroy the system’s connectedness, and hence the system itself.
Systems thinking is based on a popular fundamental principle which says ‘the whole is bigger than the sum of all its parts’. In general, the parts are affected by being in the system, and their behaviour may change when they leave the system. If the parts are changed or removed or added, the system changes its behaviour. The way these parts are assembled adds value that is more than the value brought in by individual parts.
Any system is defined and can be characterized in terms of the following:
Purpose. Every system exists because it has a purpose or objectives. It is impossible to characterize or understand a system whose purpose is not clear.
Input and output. Every system takes input from the environment and provides output back to the environment. The output does not necessarily and directly achieve the purpose of the system. In other words, the output of the system and the purpose of the system can be different.
Function. The function of the system transforms the input to the output.
Every system has two kinds of boundaries: an inner boundary and an outer boundary. The inner boundary separates the system from the rest of the systems or components with which it interacts. The outer boundary within which a system exists is called the system's environment. This environment influences or affects the way the system functions.
The difference between the purpose and the output is created by inside causes and outside causes. An inside cause occurs within the system, and we have the ability to control it, whereas an outside cause arises from the environment, over which we may not have any control. The result or the output of the system can be influenced by either inside causes or outside causes.
For example, consider a retail store. The purpose of the store is to conduct business and make profits. If we cannot conduct business, that situation is an output. If we cannot open the store because of bad weather, the bad weather is an outside cause, because we cannot change the weather. If we cannot open the store because we forgot the keys and left them at home, that is an inside cause, which is solvable.
Jamshid Gharajedaghi (1999) in his book Systems Thinking: Managing Chaos and Complexity: a Platform for Designing Business Architecture talks about five systems principles. These principles are important to understand because they govern the behaviour of the systems we attempt to understand or build. They are the means by which we can understand the complexity of systems in a non-linear fashion. Understanding them helps us overcome the limitations of our analytical or rational approach to understanding a system's behaviour. In this section, we only introduce these principles. We advise the reader to refer to the original material for more details.
Principle of openness. This principle states that the behaviour of any system can be understood only in the context of its environment. Therefore, we can neither understand the problem nor design a solution free of context. Most often, while designing a system, we tend to define the problem in terms of a solution. This solution is typically one that worked in other situations, independent of the current context or environment. Since it worked for a different (maybe similar) problem, we try to force it onto the current problem without understanding the environment. This results in a non-solution, and there is ample experience to prove this point. Hence the openness of systems becomes very important in designing the right solution.
Principle of purposefulness. This principle brings out the difference between the rational and systems thinking approaches very clearly. It says 'choice' is at the heart of any system. True development of a system enhances its capacity to choose, and designing is a means for enhancing choice. In the systems thinking literature, it is said that designers seek to choose rather than predict the future.
Systems do what they do as a matter of this choice. This is the purpose of the system. This choice has several dimensions including the rational, emotional and cultural ones. Rational choices are made based on self-interest (that of the decision maker), not the ones that are affected by this decision. This rational decision is not necessarily a wise choice. In the long run, it may turn out to be a lose–lose decision and the holistic thinking strives to make a win–win decision. The emotional decision is based on the excitement or beauty of that decision. The characteristics of an emotional decision include excitement and challenge, which most often determine the final choice being made. A cultural decision is based on the collective behaviour as opposed to the criteria of an individual. Culture provides certain default values (or decisions) when we do not explicitly know what to choose.
Principle of multi-dimensionality. This is arguably the most valuable principle of systems thinking. It tells us to look at two opposing tendencies of a given system as complementary, as opposed to competitive. There is a mutual dependence between the opposite tendencies of systems, typically characterized by an 'and' relationship instead of an 'or' relationship. In traditional or analytical thinking, these opposites are treated in such a way that a win for one is a loss for another, and vice versa: a win–lose equation. The principle of multi-dimensionality emphasizes that win–lose, lose–lose and win–win equations are also possible. It also states that if x is good, then more of x is not necessarily better.
Principle of counter-intuitiveness. Earlier we discussed how the complexity of open systems is beyond the reach of the analytical approach. Counter-intuitiveness states that the actual results of intended actions may not match the expected or desired results. Most often, the actual results are the opposite of the desired results. Various examples illustrate this principle, where things get better before they get worse (or vice versa). As discussed before, a system can be implemented successfully or fail, both for the wrong reasons. The principle of counter-intuitiveness tries to explain this phenomenon.
Principle of emergent properties. Emergent properties are properties of the system as a whole, not of its parts. They exist due to the interactions among the parts, and hence they cannot be deduced by analysing the parts in isolation. By nature, these properties are dynamic because they are products of the interactions between the parts. They are produced online and in real time, and they cannot be saved for future use if the parts are no longer interacting in the same way.
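A classic illustration of this principle (our own example, not from the systems literature cited here) is Conway's Game of Life. The rules govern individual cells only, yet a 'glider', a pattern that travels across the grid, emerges purely from the interactions: no single cell moves, and movement exists only at the level of the whole.

```python
from collections import Counter

def step(live):
    """Advance one generation of Life. `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has 3 neighbours,
    # or 2 neighbours and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations the same shape reappears, translated by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True: the pattern has 'moved' as a whole
```

The 'movement' here cannot be found in any part or in any rule; it is produced in real time by the interaction of the parts, which is exactly the character of an emergent property described above.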
Higher levels of understanding can be achieved by approaching the problem iteratively: first starting the enquiry or search process, then modifying and verifying initial assumptions, and finally evolving closer to a satisfactory solution.
Design as a holistic process deals with three dimensions: structure, function and process, together with the environment in which the system operates.
To reduce unnecessary complexity and produce manageable simplicity, we need to do the following: start from a basic frame of reference, focus on relevant issues and avoid an endless search for useless information.
A crucial step in systems methodology is to clearly separate the problem definition from the solution design. This idea may sound familiar, but the treatment is different. The following are the steps (often iterative in nature) in systems methodology as described by Peter Checkland (1999) in his book Systems Thinking, Systems Practice.
Understanding the problem (often referred to as a mess in the systems literature) always starts in an unstructured state. Start your investigation and find out something about the problem. Basic research into the problem area includes investigating who the key players are, how the process works, and so on.
‘A picture is worth a thousand words’. Express the problem situation through rich pictures. Rich pictures capture a lot of information relating to the problem situation. Pictures show system boundaries, structure, information flows, communication channels and human activity.
Look at the problem situation from different perspectives. ‘Root definitions’ are written to elaborate a transformation. Every well-formed root definition has six elements: Customer (the one who benefits from a system), Actor (the one who transforms inputs into outputs and performs the activities defined in the system), Transformation process (conversion of inputs to outputs), World-view (making the transformation process meaningful in context), Owner (proprietor, who has the power to start up and shut down the system) and Environmental constraints (external elements including organizational policies and legal and ethical matters).
For each root definition, build conceptual models of what the system must do. In other words, root definitions provide information on what to do, and this step begins to define how to do it.
In this step, compare the conceptual models with the real world. That is, compare the results from steps 4 and 2 to find out their differences and similarities.
In light of the results observed in step 5, identify the changes that are possible and that are needed to improve the situation.
Come up with recommendations to improve the problem situation based on changes that are identified in step 6.
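The six root-definition elements from step 3 can be captured as a simple record. The following is a minimal sketch of this idea; the class name, field names and example values are our own illustration, not part of Checkland's notation.

```python
from dataclasses import dataclass, fields

@dataclass
class RootDefinition:
    """The six CATWOE elements of a well-formed root definition."""
    customer: str        # who benefits from the system
    actor: str           # who transforms inputs into outputs
    transformation: str  # conversion of inputs to outputs
    world_view: str      # what makes the transformation meaningful in context
    owner: str           # who has the power to start up or shut down the system
    environment: str     # external constraints: policies, legal, ethical

    def is_well_formed(self):
        """Every well-formed root definition has all six elements present."""
        return all(getattr(self, f.name).strip() for f in fields(self))

# A hypothetical root definition for the travel-booking system mentioned
# earlier in the chapter (all values invented for illustration).
rd = RootDefinition(
    customer="traveller",
    actor="booking engine operators",
    transformation="travel needs -> confirmed multi-vendor itinerary",
    world_view="a single payment for many providers saves the traveller effort",
    owner="travel portal company",
    environment="payment-card regulations, airline ticketing rules",
)
print(rd.is_well_formed())  # True
```

Writing root definitions down in such a structured form makes it easy to check, per perspective, that none of the six elements has been overlooked before moving on to the conceptual models of step 4.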
In this book, systems thinking as a problem-solving activity plays a significant but invisible role. In every case study, there is an opportunity to apply systems thinking in understanding the problem and designing a solution. Sometimes systems thinking principles are applied in an obvious way in the foreground, but most often they remain in the background. In other words, we make use of systems thinking as a source of light to illuminate the problem space as well as the solution space. Since every case study is an exercise in problem solving at different levels, we urge the reader to carefully identify the boundaries of the systems they are going to create or improve and the environment in which those systems operate, and then think about the problem and solution spaces.
Please note that Figures 1.7a and 1.7b are exactly the same—they are only shown differently to illustrate the difference between a layered architecture and an N-tier architecture, which we will discuss very soon.
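The layering discipline behind such diagrams can be sketched in code. In this minimal sketch the three layer names (presentation, business, data) and all class contents are our own illustrative assumptions, not taken from Figure 1.7: each layer talks only to the layer directly below it, and in an N-tier deployment the same layers would simply run in separate processes or machines.

```python
class DataLayer:
    """Bottom layer: stand-in for a database lookup."""
    def fetch(self, key):
        return {"42": "The Hitchhiker's Guide"}.get(key)

class BusinessLayer:
    """Middle layer: depends only on the layer directly below it."""
    def __init__(self, data):
        self.data = data

    def title_for(self, key):
        title = self.data.fetch(key)
        return title.upper() if title else "NOT FOUND"

class PresentationLayer:
    """Top layer: never touches the DataLayer directly."""
    def __init__(self, business):
        self.business = business

    def render(self, key):
        return f"Result: {self.business.title_for(key)}"

ui = PresentationLayer(BusinessLayer(DataLayer()))
print(ui.render("42"))  # Result: THE HITCHHIKER'S GUIDE
```

The call discipline is identical in both cases; what distinguishes an N-tier architecture is where the layers are deployed, not how they are allowed to depend on each other.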
There are several good books to start learning about software architecture. One of the seminal books is the one written by Mary Shaw and David Garlan (Shaw and Garlan, 1996). Later, many popular books, including Software Architecture in Practice by Len Bass, Paul Clements and Rick Kazman (Bass et al., 2003), and the entire series of books on software architecture published by the Software Engineering Institute (SEI) of Carnegie Mellon University (CMU) and Addison Wesley, give various perspectives on software architecture.
One very effective way of learning about the basic concepts of software architecture is to explore the pages of software architecture community Web sites. The following Web sites are very useful.
www.softwarearchitectures.com contains useful information about various aspects of software architecture.
Similarly, www.iasahome.org, which is the Web site of the International Association of Software Architects, and www.wwisa.org (home page of Worldwide Institute for Software Architects) contain resources useful for gaining expertise in software architecture.
www.softwarearchitectureportal.org is the Web site of IFIP Working Group 2.10 on Software Architecture. This Web site provides information about Working Group 2.10, the WICSA (Working IEEE/IFIP Conference on Software Architecture) conference series, the Software Architecture Village Wiki and other resources on software architecture.
Grady Booch maintains a good resource on his Web site, a sort of handbook on software architecture. You can find it at http://www.booch.com/architecture/index.jsp.
The Gaudi Project Web site, http://www.gaudisite.nl, contains very useful information about systems architecture. This includes e-books, courses, case studies and research papers. Philips Research funded and supported the Gaudi Project for many years, and it has gradually moved to the Embedded Systems Institute.
http://www.bredemeyer.com is a very resourceful Web site from Bredemeyer Consulting, focusing on the software architecture discipline and on enterprise architecture resources, along with their workshops and other training links.
Of course, for everything else, we have Wikipedia. More specifically, start with the page on software architecture (http://en.wikipedia.org/wiki/Software_architecture) and explore all its siblings.
The UML 1.3 and 2.0 documentation gives some important definitions and concepts about software architectures. We have made use of some of them in this chapter, and it is a good idea to refer to them if you want to investigate this area more deeply. Similarly, IEEE standards on software architecture and software design (such as IEEE 610.12-1990, referred to in this chapter) are typically considered starting points for any exploration in this area. You can find them at the IEEE Xplore site, http://ieeexplore.ieee.org/.
A very useful approach to understanding the problem space in general and software requirements in particular is known as the problem frames approach. It was developed by British software consultant Michael A. Jackson. In his book Software Requirements & Specifications (Jackson, 1995) he gives an outline of this approach and describes it more fully in his later book Problem Frames: Analysing and Structuring Software Development Problems (Jackson, 2001). We have already discussed Polya’s book on problem solving.
On systems thinking, there are again several good books. Gerald Weinberg’s An Introduction to General Systems Thinking, Peter Senge’s The Fifth Discipline, Peter Checkland’s Systems Thinking, Systems Practice, Dennis Sherwood’s Seeing the Forest for the Trees: A Manager’s Guide to Applying Systems Thinking and Jamshid Gharajedaghi’s Systems Thinking: Managing Chaos and Complexity: A Platform for Designing Business Architecture are some of the most influential books. Russell Ackoff’s various works, including Re-Creating the Corporation: A Design of Organizations for the 21st Century (1999), The Democratic Corporation (1994) and Creating the Corporate Future (1981), form a very useful reading list for understanding systems thinking.