
Chapter 7. Emerging Trends in Software Architecture

Contributor: Bhudeb Chakravarti

Software Architecture Discipline—Past, Present and Future

Since the late 1980s, software architecture has become one of the most important parts of the software development life cycle. Starting from the broad organizational structure of the system, software architecture at present encompasses a wide set of notations, modelling languages, tools and analysis techniques. The increasing use of object orientation in software development provided a base for component-based architecture. The benefits of a built-in framework and of reusing components provided substantial incentive to use component-based architectures even when they were not ideal for the systems being developed.

A critical issue in design and construction of any complex software system is its architecture. A good architecture can ensure that a system will satisfy key requirements in areas such as performance, reliability, portability, scalability and interoperability. A bad architecture can be disastrous.

The first reference to software architecture occurred in 1969 at a conference on software engineering techniques organized by NATO (Kruchten et al., 2006). Some of the pioneers in the software architecture field, such as Tony Hoare, Edsger Dijkstra, Alan Perlis, Per Brinch Hansen, Friedrich Bauer and Niklaus Wirth, attended the meeting. Since then and until the 1980s, software architecture mainly dealt with system architecture, that is, the physical structure of computer systems. The concept of software architecture as a discipline started to emerge in 1990. A 1991 article by Winston W. Royce and Walker Royce was the first to position software architecture between technology and process (Royce and Royce, 1991).

Even though articles were published in the open literature in the 1990s, software architecture and architectural styles remained under a veil of mystery. Garlan and Shaw (1993) first described a set of architectural styles in their seminal paper, noting that most systems typically involve some combination of several styles. Twelve styles are explicitly mentioned by the authors, some of which were discussed extensively in Chapter 1. The original list of architectural styles mentioned by Shaw and Garlan is the following:

  • Layered

  • Distributed processes and threads

  • Pipes and filters

  • Object oriented

  • Main program/sub-routines

  • Repositories

  • Event based (implicit invocation)

  • Rule based

  • State transition based

  • Process control (feedback)

  • Domain specific

  • Heterogeneous

These styles do not describe a clear, smooth continuum; that is, there is no single set of attributes along which each style makes definitive choices that distinguish it from the others. Some styles focus on attributes that other styles do not address at all. For example, the layered style is concerned with constraining the topology of connections, whereas the repository style merely says that a central repository is at the heart of the system.

The field of software architecture started to flourish in the mid-1990s, when a number of professionals and academicians started publishing articles and books in the field. The first book on software architecture was by Witt et al. (1994). Then the Software Engineering Institute (SEI) published an article on the software architecture analysis method (SAAM), the first in the SEI series of methods (Kazman et al., 1994). Software professionals in large organizations such as IBM, Siemens, Nokia, Philips and Nortel started research in this field, and several approaches and design patterns emerged from their work. One notable approach that became very popular was the 4 + 1 view model of architecture (Kruchten, 1995).

Since then software architecture has passed a number of crossroads. Professionals working in the field collectively created the IFIP Working Group on software architecture and the Worldwide Institute of Software Architects. New methods such as BAPO and ATAM emerged and consolidated alongside SAAM (Clements et al., 2002). Architecture description languages were introduced, including XML-based ADLs that support broad sharing of architectural models. Joining forces with the reuse and product-family communities, software product lines became a sub-discipline in their own right, attracting the attention of large manufacturing companies.

Researchers are at present working on model-driven architecture with UML, which has become a de facto standard. Some researchers are also working on architectural decisions, on requirements for making architecture part of the initial business model and on testing architectural models at an early stage of the development life cycle. Studies are also being conducted to make architectural analysis a part of agile software development. Architecting for quality attributes, and the quality of architecture itself, are going to become the most important factors in the coming years.

Next-Generation Systems

In the past, software architecture dealt with well-defined systems having well-defined interfaces for each of their components. In the present scenario, due to time-to-market pressure, components are being pushed into the market without well-defined interfaces. System designers are also using a number of independent components, either developed by them for previous systems or purchased directly from the market. Systems are also becoming more complex, with diversified functions and stringent quality requirements. So it is time to replace the phrase 'time to market' with 'time to quality systems'.

At present, there is no common understanding of, or standard for, quality attributes. Only the quality requirements described in the business model and the performance-related quantitative requirements are considered when assessing the quality of a system. The quality attributes of domain-specific requirements are still not included in the evaluation of the system. Although there are methods for analysing specific quality attributes, they are typically applied in isolation. It is thus important to identify and list the qualities of a system under development and analyse the quality attributes in an integrated manner. Defining the quality attributes quantitatively, and demonstrating success in achieving them, will also be a focus area.
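
As an illustration of what such a quantitative specification might look like, the hypothetical sketch below records a quality attribute as a measurable scenario against which success can later be verified. The field names and example values are invented for illustration and are not drawn from any standard.

```java
// Hypothetical record of a measurable quality-attribute scenario, so that
// an attribute such as "performance" becomes a verifiable commitment
// rather than a vague goal. All names and values are illustrative.
public class QualityScenario {
    private final String attribute;   // e.g., "performance", "availability"
    private final String stimulus;    // e.g., "500 concurrent order submissions"
    private final String measure;     // e.g., "95th percentile response time"
    private final double threshold;   // e.g., 2.0
    private final String unit;        // e.g., "seconds"

    public QualityScenario(String attribute, String stimulus, String measure,
                           double threshold, String unit) {
        this.attribute = attribute;
        this.stimulus = stimulus;
        this.measure = measure;
        this.threshold = threshold;
        this.unit = unit;
    }

    // The scenario is satisfied when the measured value meets the threshold.
    public boolean isSatisfiedBy(double measuredValue) {
        return measuredValue <= threshold;
    }

    public static void main(String[] args) {
        QualityScenario p1 = new QualityScenario(
                "performance", "500 concurrent order submissions",
                "95th percentile response time", 2.0, "seconds");
        System.out.println(p1.isSatisfiedBy(1.7));   // true: commitment met
    }
}
```

A record of this kind turns 'the system should be fast' into a statement that can be tested against measurements, which is exactly the integrated, quantitative treatment argued for above.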

Qualities of a system can be categorized into four parts:

  • Runtime qualities

  • Non-runtime qualities

  • Architectural qualities

  • Business qualities

Runtime quality attributes emerge from the execution of a deployed system. Quality scenarios of runtime attributes relate to specific instances of runtime execution. Some of the runtime quality attributes are performance, reliability, security, availability, usability, functionality and interoperability. These quality sets are most important for the execution of the system and should be given due consideration while architecting the system.

Non-runtime quality attributes relate to the development, deployment or operation of the system. Quality scenarios that cater to non-runtime attributes are expressed in terms of system lifetime activities. Some of the non-runtime quality attributes are reusability, maintainability, testability, scalability, integrity and configurability.

Other than system quality attributes, the architect should also take care of architectural quality attributes and business quality attributes. Architectural quality attributes relate to the quality of the architecture proposed for the system. The correctness or completeness of the architecture, whether the system can be built from the architecture being proposed and whether the architecture proposes an integrated theme are some architectural quality attributes.

Business quality attributes include the cost and benefit of the system. The marketability or usefulness in the organization should be considered while architecting the system. The rollout time to develop and release the system, the lifetime of the system and the requirements of integrating with legacy systems are some business quality attributes.

Software architecture encompasses the quality attributes of the system and brings a standard way of achieving them. Traceability between architectural decisions and quality requirements helps developers take care of quality attributes in a systematic manner. Architecture also enables developers to tune the quality attributes as per the requirements and eases implementation. In the coming years, the inclusion of quality requirements in architectural design will become mandatory for complex systems.
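
A minimal sketch of what such traceability might look like in practice follows; the structure is invented for illustration rather than taken from any published method. Each architectural decision carries explicit links to the quality requirements it serves, which also enables simple impact analysis when a requirement changes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative traceability record: each architectural decision carries
// explicit links to the quality requirements it is meant to satisfy.
public class DecisionTrace {
    private final Map<String, List<String>> decisionToRequirements = new HashMap<>();

    public void record(String decision, List<String> qualityRequirements) {
        decisionToRequirements.put(decision, qualityRequirements);
    }

    // Impact analysis: which decisions must be revisited if a quality
    // requirement changes?
    public List<String> decisionsAffectedBy(String requirement) {
        return decisionToRequirements.entrySet().stream()
                .filter(e -> e.getValue().contains(requirement))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        DecisionTrace trace = new DecisionTrace();
        trace.record("Introduce a read-through cache",
                List.of("PERF-1: reads under 200 ms", "AVAIL-2: survive DB restart"));
        System.out.println(trace.decisionsAffectedBy("PERF-1: reads under 200 ms"));
    }
}
```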

Though much study has been done on eliciting requirements based on the architectural needs of the system, not much work has been done on defining the architect's role in identifying architectural requirements as part of the functional requirements. It is therefore important to identify strategies and methods for allocating functionality to architectural components. It is equally important to allocate quality attributes to architectural abstractions.

A number of research efforts are under way to create architecting methods for a smooth transition from requirements to architecture and finally to development. For example, Brandozzi and Perry (2003) have suggested a process called 'Preskriptor', based on an architectural prescription language, which systematically ensures that the requirements related to architecture are satisfied. Another method, component-bus-system-property (CBSP), was presented by Egyed et al. (2001); it expresses requirements in a form that is more closely related to architecture.

In this method, the functional requirements are categorized according to various properties so as to identify whether they should be implemented as components, buses or system properties. Liu and Easterbrook (2003) extended this method by introducing a rule-based framework that allows requirements-to-architecture mappings to be automated wherever possible. Liu and Mei (2003) viewed the mapping from requirements to architecture in terms of features, essentially a higher-level abstraction over a set of related requirements as perceived by users (or customers).
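
The spirit of this categorization can be sketched in a few lines of code. The enum and the sample requirements below are invented for illustration; the published CBSP method uses a richer, multi-step taxonomy.

```java
// Illustrative sketch in the spirit of CBSP: each requirement is tagged as
// best realized by a component, a bus (connector), the system as a whole,
// or as a property (quality) of one of these.
enum Dimension { COMPONENT, BUS, SYSTEM, PROPERTY }

class TaggedRequirement {
    final String text;
    final Dimension dimension;

    TaggedRequirement(String text, Dimension dimension) {
        this.text = text;
        this.dimension = dimension;
    }
}

public class CbspSketch {
    public static void main(String[] args) {
        TaggedRequirement[] reqs = {
            new TaggedRequirement("Render route maps", Dimension.COMPONENT),
            new TaggedRequirement("Broadcast position updates to all views", Dimension.BUS),
            new TaggedRequirement("End-to-end update latency under 2 s", Dimension.PROPERTY),
        };
        for (TaggedRequirement r : reqs) {
            System.out.println(r.dimension + ": " + r.text);
        }
    }
}
```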

Another important aspect of architectural design is to consider risk assessment.

The strategy for including architectural requirements depends on organizational culture, global factors and the negotiation capability of the architect. It is important to take a holistic approach that considers global issues such as the environment in which the system is being built, the flexibility or rigidity of requirements, external technological factors, and so on. Architectural decision points should include tactics to resolve issues related to the quality attributes of the system, and architectural patterns should be created and used to realize the tactics that address those quality issues. There might be conflicts or trade-offs between different architectural components or decision points, which should be resolved through effective negotiation.

Reusability and Reusable Services

The major challenge for a software architect today is to improve the production rate of software systems, and reusable components are one solution in this area. Though the idea of component-based development was directly inspired by hardware components, it never reached the level of reusability that electronics engineering achieved with integrated circuits. One reason is the perceived simplicity of software components: they are seen as simple units of the complete system, and developers often find it easier to build them afresh than to integrate components developed elsewhere. Even when a component is readily available, developers re-develop it, not considering it a waste of time because each individual component takes very little time to write. But if the application under development requires hundreds of thousands of such components, the total time taken becomes a deciding factor.

Reuse Maturity Levels Needed in an Organization

In his interesting book Confessions of a Used Program Salesman: Institutionalizing Software Reuse, Will Tracz (1995) discusses the reuse ratio in organizations at various levels of maturity (see Table 7.1).

Table 7.1. Reuse ratio in organizations

| Maturity level | Reuse ratio | Requirements |
| --- | --- | --- |
| No reuse | –20 to 20% | Smart people; business as usual |
| Salvaging | 10–50% | Depends on luck; maintenance problems; smart people |
| Planned reuse | 30–40% | Management support; incentives; reuse library |
| Systematic reuse | 50–70% | Reuse process; reuse metrics; education; reuse library |
| Domain-oriented reuse | 80–90% | Application generators; architecture; domain analysis |

If an organization does not plan for reuse, it can achieve a reuse ratio of around 20 per cent by accident. This typically happens because there are smart people in the organization who are able to make use of existing resources and components to come up with new systems. Notice, however, that the reuse ratio can also swing to the negative side (down to –20 per cent): when people are not lucky, they end up developing a system from scratch after investing a significant amount of time trying to make use of existing resources.

If there is a previous project similar to the current one, many organizations end up making use of what was done earlier, that is, salvaging previous work. Depending on the similarity between the projects, the reuse ratio can vary between 10 and 50 per cent. This is a wide range, which means that luck plays a major role. Again, we need smart people who can see the similarity and adapt earlier work to the new requirements. One major problem is that, since this reuse is not really planned, the projects may incur heavy maintenance costs at the end of their life cycle.

In more mature organizations, where there is some investment in planned reuse and management support, the reuse ratio is higher, varying between 30 and 40 per cent. These numbers are much more consistent than in the previous situation. Such organizations typically maintain a reuse library, and management provides incentives to those who follow the philosophy of reuse.

Systematic reuse is possible, in addition to planned reuse, if there is a process that ensures optimal reuse is achieved. Such a process includes continuously checking the marketplace for new reusable components, tracking various metrics relating to the components and running pilot implementations to evaluate them, so that the users of these components have in-house knowledge of their strengths and weaknesses. If an organization matures to the point of having systematic reuse in place, the reuse ratio can be very high: between 50 and 70 per cent. This means that more than half of the system is developed from already existing components. In addition to management support, developers need to be constantly educated about reuse.

The best reuse ratios are possible if organizations focus on building reusable domain components. State-of-the-art software development has nearly mastered reuse of software engineering assets; reuse in domain engineering is where we are headed. If we achieve it, we can reach reuse ratios of up to 90 per cent. Domain analysis, application generators and reference architectures are some of the prerequisites for achieving ratios of this kind.

How, then, can an organization move from lower levels of reuse to higher ones? Though reusability has become a punch line for every IT organization, the cultural and business barriers to reuse are large. Despite the efforts of IT managers, architects and development team members, it is a tougher job to reuse business logic than to reuse components or objects. One solution that has emerged to address this problem is reusable services. To make a reusable service successful, we should identify the services that are difficult to develop and that need expensive development effort and knowledgeable resources.

Krueger (1992) proposed a framework to evaluate software reusability with the following four aspects (see also Zhu, 2005):

  • Abstraction: A reusable component should have a specific level of abstraction that spares software developers from sifting through details.

  • Selection: A reusable component should be easy to locate, compare and select.

  • Specialization: A group of reusable components should be generalized, and the generalized form should be easy to specialize for an application.

  • Integration: There must be an integration framework to help reusable components be installed and integrated into a newly developed application.

The business logic can be published in the form of a service in such a way that it can be discovered, matched, composed, executed, verified and monitored in real time and at runtime. This brings about a new paradigm of computing called Software as a Service.

Software as a Service

Software as a Service (SaaS) has become increasingly relevant to both enterprise and SMB customers and has the potential to impact the entire IT industry. Although the market for SaaS was relatively small in 2005, at approximately US$ 6 billion as per a McKinsey analysis report, it is expected to grow by more than 20 per cent annually. Many factors are driving this paradigm shift.

Some of the primary factors favouring SaaS are listed below.

Need for frequent updates

Due to constant changes in the business model, the business environment and technology, it has become necessary to provide frequent updates to the applications supplied to customers. This is a painful process that involves considerable effort and cost, which is sometimes passed on to the customer in the form of maintenance fees. In SaaS, as there is one globally installed application and no locally installed software, companies avoid the headache of software upgrades and aging technology. When updates are required, they are delivered automatically to clients, providing seamless, hassle-free updates. By centrally upgrading one version of the application, every user gains access to the same, latest capability.

Need for a secure environment

With the large number of Internet crimes occurring, it is very important to provide users a secure environment for any application. Many companies do not have the technical skill set or operational capabilities to monitor and respond to security threats to their networks. IT companies that build and provide SaaS applications have first-hand knowledge of tackling security issues, have the necessary hardware and software components and understand the importance of maintaining appropriate control over access to client data. While architecting SaaS applications, security is always considered the primary requirement, as the applications are developed to be used over the Internet. Security measures added to an application as an afterthought are never as good as security built into the architecture from the beginning. SaaS providers often offer better data security and application reliability than in-house operations, because their data centres are kept up to date with security management and have built-in redundancy.

Drive for reduced total cost of operation

The implementation of a typical system includes data consolidation, security, a tailored interface and complex reporting needs. With a SaaS application, the system, along with its data, is stored, backed up and secured by the vendor. There is thus a possibility of reducing IT personnel costs significantly, if not eliminating the need for an in-house IT team. Additionally, as the application is maintained centrally by a team of experienced professionals, training and operational costs drop drastically. SaaS models improve efficiency by enabling vendors to collaborate interactively with clients during implementation. Incorporating client feedback and providing immediate results is far easier with a SaaS application than with other platforms.

The primary advantage of the SaaS model is that the focus of IT engineers shifts more towards the domain than towards software engineering nuances. Companies can focus on the business domain, business models, cost models and the return on their investment. They can run a very thin IT department and depend on SaaS providers for their IT needs, selecting the applications they require from those available in the market and using them as needed. The best example is salesforce.com: a number of companies worldwide use this service for their sales management. It provides all the support senior management requires for analysing sales data, analysing trends and keeping track of sales targets.

SaaS is also viewed as 'do it yourself development'. This implies that software professionals will have to become more like domain experts (in a lighter vein: other professionals are keen to become software engineers, while software engineers are going to become experts in other domains).

One of the major challenges of the SaaS model is the integration of third-party applications. A sizeable share of an organization's IT budget is spent on interfaces and integration, and whenever a system is modified due to a change in requirements or environment, the interfaces need to be rebuilt; this is a complex task when a system has a large number of interfaces. Integration has been a particularly acute concern for enterprises, and they are shifting to the more reliable integrated option, service-oriented architecture. Thus, larger companies still look at SaaS as a quick way to solve a business problem and not as a replacement for something they already have in their portfolio.

Service-Oriented Architecture

The term service-oriented architecture (SOA) was coined in a research paper by Gartner analyst Yefim V. Natis (1996) as follows:

SOA is a software architecture that starts with an interface definition and builds the entire application topology as a topology of interfaces, interface implementations and interface calls.

He also mentioned that SOA would be better named interface-oriented architecture.

Much before SOA became popular, software professionals were using distributed object technologies such as DCOM and CORBA, and had created around them a set of design principles, guidelines and best practices. These distributed objects, developed in the form of components, used to be referred to as services. It was like using object-oriented concepts in client–server architectures.

SOA has many definitions, which differ widely and are a bit confusing to the software architecture fraternity. The World Wide Web Consortium represents it as follows (Booth et al., 2004):

  • An SOA is a form of distributed systems architecture in which the provider and the requestor of a service interact through message exchange. The implementation of the service, including its internal structures and processes, is abstracted and kept hidden, and the service is described in terms of the information exposed through the service definition. Services are designed to provide only a few functions and to be accessible through a platform-neutral, standardized format (such as an XML string).

  • A service is an abstract resource that represents a capability of performing tasks that form a coherent functionality from the point of view of provider entities and requester entities. To be used, a service must be realized by a concrete provider agent.

The definition given by the W3C is very generic in nature. Another definition, more specific and inspired by Sayed Hashimi (2003), states that a service is an exposed piece of functionality that is dynamically located and invoked, is self-contained, maintains its own state, and has an interface contract that is platform independent.

SOA supports three primary functions to provide services to consumers:

  • Create an application as a service, describe the interfaces and publish the service.

  • Discover a service already published.

  • Consume the service using the interfaces.

SOA is an architectural style that emphasizes implementing components as modular services that can be discovered and used by clients. Services are defined by the messages through which they communicate, and are published in the service registry along with the details of their capabilities, interfaces, supported protocols and policies. Implementation details, such as the programming language used or the platform on which a service is hosted, are not published, as they are of no concern to clients.
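
As a minimal, concrete sketch of the publish side of this cycle: the JAX-WS API that shipped with Java SE 6 to 8 (javax.jws and javax.xml.ws, later removed from the JDK) lets a few lines expose a WSDL contract while keeping the implementation hidden. The service name, endpoint URL and logic below are invented for illustration.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// The implementation stays hidden; only the generated WSDL contract,
// the formal interface, is visible to consumers.
@WebService
public class QuoteService {
    public double getQuote(String symbol) {
        return 42.0; // stub logic for illustration
    }

    public static void main(String[] args) {
        // Publish the service; its contract becomes available at
        // http://localhost:8080/quote?wsdl for consumers to discover and bind to.
        Endpoint.publish("http://localhost:8080/quote", new QuoteService());
    }
}
```

A consumer then binds to the published contract rather than to this class, which is exactly the decoupling the registry-based publish, discover and consume cycle is meant to achieve.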

Though a service is usually used individually to support a particular function, it may be integrated with other services to provide higher-level services. This promotes reuse of services and allows the architect to use them in an orchestrated manner.

As per Adrian Trenaman (2005), SOA is an architectural style for client–server applications. Distinguishing features of SOA include the following:

  • An SOA consists of clients and servers (services) connected by a communication sub-system known as a service ‘bus’.

  • Services are defined using formal interfaces (contracts) that hide implementation details from the client.

  • The bus supports both point-to-point and messaging styles of communication and supports enterprise qualities of service such as security and transactions.

  • Services are often reused by a number of applications.

In an SOA, services exhibit the following properties:

  • Its interface is defined using a formal contract (e.g., WSDL or IDL). The service implementation details are unknown to the client; the only connection (or coupling) between the client and server is the contract.

  • The service is self-contained and can be run with or without other services present.

  • Services may be used by other services.

  • The interface is stored in a well-known repository that prospective users can browse.

In SOA, the contracts between service providers and consumers are formal, and the implementation of a service is abstracted and isolated from the consumer.

SOA is essentially a collection of self-contained, pluggable, loosely coupled services that have well-defined interfaces and functionality with few side effects. A service is thus a function that is self-contained and immune to the context or state of other services. Services can communicate with each other either through explicit messages that are descriptive rather than instructive, or via master services that coordinate or aggregate activities, for example, in a workflow.

The consumer–provider role is abstract, and the precise relationship depends on the context of the specific problem. SOA achieves loose coupling among interacting software agents by employing two architectural constraints: (a) a small set of simple and ubiquitous interfaces for all participating software agents and (b) interfaces that are universally available to all providers and consumers.

Challenges in Implementing SOA

The primary challenge of considering SOA is to get a collective agreement from the stakeholders to use SOA as the enterprise framework for delivering services. The success of implementing SOA is based on the availability of services that are useful to the users of the services. It is, thus, very important for organizations to adopt service orientation as the key principle for their IT solutions.

Another challenge of SOA implementation is integration of services available in different parts of the organization. All the services should be placed in a centralized environment so that one service can access the others easily. As the services can be modified or new services can be added at any point in time, it is necessary that the services be discovered dynamically. The services should be able to expose their definitions and allow easy access.

SOA enables an integrated environment for all the services being offered by the organization. It is, thus, important to have an efficient governance structure for the orchestrations of the services. Governance is required not only for the management of the services but also for the management of the resources, such as business processes, business intelligence, hardware and shared software components, used to provide the services.

While defining the services, it is necessary to decompose the business processes to identify common processes that can be reused by different services. Reuse of services or business processes helps the organization achieve a higher return on investment. A challenging task when implementing SOA is to identify the business processes that are used by many services; these can be designed as elementary services and implemented as shared services.

It is also important to change the mindset of the people involved in implementing SOA in any organization. To make an SOA implementation a success, it is important to motivate all the stakeholders with information on the usefulness of SOA and to create a roadmap with a proper change management plan.

Web Services

Most people are familiar with accessing the Internet through a Web browser. When a user requests a Web page, the request is handled by a remote Web server, which returns the information in hypertext markup language (HTML)—a form that allows the browser to present it using a selection of fonts, colours and pictures, all factors that make it more useful and appealing.

Web services are distributed software components that provide information to applications rather than to humans through an application-oriented interface. The information is structured using extensible markup language (XML) so that it can be parsed and processed easily rather than being formatted for display. In a Web-based retail operation, for example, Web services that may be running on widely separated servers might provide account management, inventory control, shopping cart and credit card authorization services, all of which may be invoked multiple times in the course of a single purchase.

As Web services are based on a service strategy that uses common communication protocol to facilitate interaction between the service provider and the client of the service, they do not depend on any specific platform and can support a distributed heterogeneous environment.
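
To make the contrast with browser-oriented HTML concrete, the sketch below fetches XML from a hypothetical endpoint and parses it with the standard Java XML APIs; the URL and element names are placeholders, not part of any real service.

```java
import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlConsumer {
    public static void main(String[] args) throws Exception {
        // A hypothetical service endpoint that returns XML rather than HTML.
        URL endpoint = new URL("http://localhost:8080/inventory/items");
        try (InputStream in = endpoint.openStream()) {
            // Because the payload is structured XML, a program can parse and
            // process it directly instead of rendering it for a human reader.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            System.out.println("Root element: "
                    + doc.getDocumentElement().getTagName());
        }
    }
}
```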

SOA and Web Services

We initially described SOA without mentioning Web services, and vice versa. This is because they are orthogonal: service orientation is an architectural style, while Web services are an implementation technology. The two can be used together, and they frequently are, but they are not mutually dependent.

For example, although it is widely considered to be a distributed-computing solution, SOA can be applied to advantage in a single system, where services might be individual processes with well-defined interfaces that communicate using local channels, or in a self-contained cluster, where they might communicate across a high-speed interconnect.

Similarly, while Web services are well suited as the basis for a service-oriented environment, there is nothing in their definition that requires them to embody the SOA principles. While statelessness is often held up as a key characteristic of Web services, there is no technical reason that they should be stateless—statelessness would be a design choice of the developer, which may be dictated by the architectural style of the environment in which the service is intended to participate.
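
To illustrate that statelessness is a design choice rather than a property of Web services as such, compare the two sketch interfaces below; both are hypothetical and simplified.

```java
// Stateless: every call carries all the context it needs, so any server
// replica can handle any request.
interface CartServiceStateless {
    double total(String[] itemIds);               // client sends the whole cart
}

// Stateful: the service remembers the cart between calls, which ties the
// conversation to one service instance or to shared session storage.
interface CartServiceStateful {
    void addItem(String sessionId, String itemId);
    double total(String sessionId);
}
```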

Grid Computing

Grid computing is a form of distributed computing in which disparate resources such as compute nodes, storage, applications and data, often spread across different physical locations and administrative domains, are used and optimized through virtualization and collective management.

Grids are often classified as either compute grids, which emphasize the shared use of computational resources, or data grids, which support federation, integration and mining of data resources. These distinctions mostly dictate the type of hardware infrastructure needed to support the grid; for example, nodes on a data grid may need to provide high-bandwidth network access, while a grid whose primary use is for long-running parallel applications is more in need of high-performance computational nodes. Other classifications, such as hallway, campus, enterprise and global grid, indicate the scale or other properties of the grid. In practice, grids tend to display characteristics of various functional classifications and their scale tends to increase over time, so the distinctions are often blurred.

Although grid computing was conceived in research organizations to support compute-intensive scientific applications and to share large research databases, enterprises of all types and sizes are beginning to recognize the technology as a foundation for flexible management and use of their internal IT resources, enabling them to better meet business objectives such as minimized cost and increased agility and collaboration.

Grid computing is thus the foundation for collaborative high-performance computing and data sharing for the adaptive enterprise and for the vision of utility computing, wherein computing resources will become pay-per-use commodities. While the precise definition is debated, common characteristics of grids are that they tend to be large and widely distributed, require decentralized management, comprise numerous heterogeneous resources and have a transient user population. Hence, grids exemplify the need for a highly scalable, reliable, platform-independent architecture that supports secure operation and standardized interfaces for common functions.

Service-Oriented Grids

Grids are facilitated by the use of grid middleware: software components and protocols that provide the required controlled access to resources. To date, grids have been built using, for the most part, either ad hoc public components or proprietary technologies. While various public and commercial solutions have been successful in their niche areas, each has its strengths and limitations, and they have limited potential as the basis for future-generation grids, which will need to be highly scalable and interoperable to meet the needs of global enterprises.

In recent years, however, it has become clear that there is considerable overlap between the goals of grid computing and the benefits of an SOA based on Web services; the rapid advances in Web services technology and standards have thus provided an evolutionary path from the ‘stovepipe’ architecture of current grids to the standardized, service-oriented, enterprise-class grid of the future.

As an example of how such a grid operates, consider the life cycle of a single job. The user submits a job request to the job-submit service, which locates the grid's scheduling service and passes on the request. The scheduler locates and contacts the service that represents the requested application, and requests details of the resources that are required for the job. It then queries the registry to find all suitable resources and contacts them individually via their associated services to determine availability. If sufficient resources are available, the scheduler selects the best available set and passes their details to the application service with a request to begin execution; otherwise, it queues the job and runs it when the required resources become available. Once the job has been completed, the application service reports the result to the scheduler, which notifies the job-submit service. The job-submit service, in turn, notifies the user.

Note that this example is simplified for clarity: a real enterprise-class grid would involve much more sophisticated operation than is shown here, with the net result being a high degree of automation and optimization in the use of resources across the grid’s domain.
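
The sequence just described amounts to a simple orchestration loop. The sketch below is a deliberately simplified, hypothetical rendering of the scheduler's part of it; all interfaces and names are invented for illustration and do not correspond to any real grid middleware API.

```java
import java.util.List;

// Hypothetical interfaces for the simplified grid example; real grid
// middleware is far richer than this.
interface ApplicationService {
    List<String> requiredResources(String job);
    String run(String job, List<String> resources);
}

interface ResourceRegistry {
    List<String> findAvailable(List<String> requirements);
}

class GridScheduler {
    private final ResourceRegistry registry;

    GridScheduler(ResourceRegistry registry) {
        this.registry = registry;
    }

    // Locate resources, queue the job until enough are available,
    // then hand it to the application service for execution.
    String schedule(String job, ApplicationService app) throws InterruptedException {
        List<String> needed = app.requiredResources(job);
        List<String> available = registry.findAvailable(needed);
        while (available.size() < needed.size()) {
            Thread.sleep(1_000);                       // job waits in the queue
            available = registry.findAvailable(needed);
        }
        // Select a suitable set (here simply the first ones found) and run.
        return app.run(job, available.subList(0, needed.size()));
    }
}
```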

Enterprise Service Bus

The clients and servers in an SOA communicate via a common communication framework called a 'service bus'. The term is derived from the computer bus, through which components such as the CPU, RAM and I/O devices are connected by a shared set of wires. Similarly, in SOA, each component provides an interface to the bus, which allows it to communicate with a large number of different kinds of components: one component places a message on the bus, and another component receives it and serves that request.

The term service bus is used metaphorically to describe a sub-system that connects clients and servers using the same underlying protocols and technologies. The service bus is a single communication sub-system used by all the clients and services in an SOA (Figure 7.1).

Figure 7.1. The service bus in SOA

A service bus usually includes system services that provide clients and servers with commonly required functionality:

  • Service naming and lookup—for finding services at runtime

  • Service registry—for storing service contracts

  • Security

  • Transactions

  • Service management

  • Messaging

Typically, the services are implemented using the same communication infrastructure that clients and normal servers use.

The term enterprise service bus (ESB) first emerged around 2003 and has become synonymous with SOA. It has been hijacked by marketers and vendors and liberally redefined to suit their product sets. The ‘enterprise’ in ESB stresses that the product has features such as security and transactions and is suitable for use in a large-scale enterprise. ESBs provide support for contract-based services using either direct (synchronous point-to-point) or indirect (asynchronous messaging) communication paradigms.

Though technologies such as CORBA and DCOM have provided service-bus functionality for some time now, the industry has settled on the following rule-of-thumb: ‘if it doesn’t use XML, then it’s not an ESB’. To qualify as an ESB, a product must use the following Web services technologies:

  • XSD and WSDL for contract definition

  • SOAP and XML for protocol payload

  • UDDI for service registry

As Adrian Trenaman (2005) quips, ‘Perhaps ESB is a misnomer; it should have been called XML Services Bus or Web Services Bus’.

An ESB typically provides the following functionalities:

  • Data modelling with XML schema

  • Interface modelling with WSDL

  • Client and server development tools (code generation from WSDL, libraries, IDE support, etc.)

  • Synchronous point-to-point communication using SOAP/HTTP

  • Asynchronous messaging using SOAP as a payload over a messaging protocol that supports persistence of messages

  • Message payload transformation using XSLT
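
For instance, the transformation step just listed is commonly performed with the standard javax.xml.transform API. A minimal sketch follows, with placeholder file names; in an ESB this logic runs inside the bus, invisible to both endpoints.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PayloadTransform {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet that maps the sender's message format
        // to the receiver's expected format.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("order-v1-to-v2.xsl"));
        // Transform an incoming message into the outgoing format.
        t.transform(new StreamSource("incoming-order.xml"),
                    new StreamResult("outgoing-order.xml"));
    }
}
```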

The ESB provides core infrastructure and tools for SOA. Many ESB components such as Registry and JMS are pluggable. Some of the open source ESB choices available include the following:

  • Celtix: Hosted on ObjectWeb, supported commercially by IONA Technologies

  • OpenESB: Hosted on java.net

  • Mule: Hosted on codehaus.org

  • ServiceMix: Hosted by LogicBlaze

Each of these ESBs typically provides a default JMS messaging provider. The registry system for WSDL contracts is defined by the UDDI standard, and open source implementations of the registry are available as well.

An ESB uses WSDL for interface definition, which provides a clean separation of interface from location (host name, port, etc.). Interfaces are defined in WSDL, with data types defined using XSD. An ESB typically uses SOAP as the messaging payload: SOAP 'wraps' XML messages in an XML envelope, and messages are transmitted as plain text.

ESB can support the following messaging styles:

  • One way (using SOAP/HTTP or SOAP/JMS)

  • Request response (using SOAP/HTTP or SOAP/JMS)

  • Document oriented (using SOAP/HTTP or SOAP/JMS)

  • Publish subscribe (using SOAP/JMS)
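
As an illustration of the first of these styles, the sketch below sends a SOAP envelope one way over JMS using the standard javax.jms API, with Apache ActiveMQ assumed as the broker (any JMS-compliant provider plugged into the ESB would serve equally well). The broker URL, queue name and payload are placeholders.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OneWaySender {
    public static void main(String[] args) throws Exception {
        // ActiveMQ is one possible JMS provider; the javax.jms calls below
        // are standard and provider-independent.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("orders");
        MessageProducer producer = session.createProducer(queue);

        // One-way style: the SOAP envelope travels as the message payload
        // and the sender does not wait for a reply.
        TextMessage message = session.createTextMessage(
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body><placeOrder><sku>123</sku></placeOrder></soap:Body>"
          + "</soap:Envelope>");
        producer.send(message);
        connection.close();
    }
}
```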

Thus, we observe that ESB offers great advantages in terms of technology adoption and acceptance of the underlying XML core.

Business Trends

A Gartner Research publication (Schulte et al., 2005) mentions that the introduction of Web services and the resulting popularization of SOA caused a major upheaval in this market. Prior to 2002, integration backbones were largely proprietary, and many application developers were unfamiliar with SOA and event-driven design. From 2002 to 2004, a new type of middleware, the ESB, came into the market to exploit Web services standards and the growing interest in SOA. Simultaneously, the original integration suite suppliers enhanced their products to adopt Web services standards, business process management and other features. The platform middleware suppliers also broadened their offerings, adding support for Web services, portals, message-oriented middleware, integration and other features to their application servers to create a new product category, the application platform suite (APS). In 2005, all three types of products addressed customer needs for the integration backbone, but their relative strengths varied because of their diverse backgrounds and company strategies.

Another Gartner report (Barnes et al., 2005) mentions that among global organizations across all geographies, business trends are driving an expanded focus beyond the stability and reliability of operations and processes. Organizations are increasingly focusing on the agility and flexibility of their operations while simultaneously improving overall process visibility and consistency. SOA is being eyed as one of the tools to provide the desired flexibility: a group of services, once designed and implemented, can be integrated and used to create new business processes. The emphasis of organizations thus shifts towards process definition, visibility and control.

The increased focus on SOA will create a series of changes in the current operating model of many IT organizations that are not organized to support this new model. The impact will be especially acute across the following areas of IT governance (Gaugham, 2006):

  • Investment planning: SOA's compartmentalized development model will force a change in how we build the business case for new and existing investments, how we prioritize them and how we measure benefits, since most current portfolio management processes are designed to support long-term projects and will not support the higher volumes of investment candidates.

  • Software development: SOA-based development is more like an assembly line, where the developers will be working on discrete services and assembling them with existing internal services or even third-party services. Testing models will also need to change to support distributed composite applications that may span multiple companies’ service repositories.

  • Change management: SOA will create an environment in which change is more dynamic since you can update one part of the composite application without breaking the whole.

  • Maintenance and operations: A composite application based on SOA will be distributed across organizations and even outside the firewall, with little or no control over the performance characteristics of individual services. This will change the way we manage services, and new methods based on service-level agreements (SLAs) will have to be devised to achieve the goal.

  • Security: SOA adoption will call for a new model for application security that takes care of the security aspects of the services offered as well as the integrity requirements of composite applications.

To take care of the above-mentioned impacts, organizations should evaluate the maturity of important processes that span organizations, identify best practices and improve the effectiveness of core processes. They should also define a strong enterprise architecture to ensure that they can maintain the agility and flexibility of SOA while adapting to changing business requirements.

Though the technology required for SOA is important, the cultural, procedural and organizational changes required to implement it effectively will ultimately determine the success of SOA in most organizations. Another important factor is identifying the business scenarios in which SOA can be beneficial.

Though SOA is fundamentally about business process agility, it can also address other business issues, such as the inability to respond to business demands for new application functionality, processes or information access; inconsistent or inadequate information and data quality; and inefficient IT resource utilization (Barnes et al., 2005). It also enables organizations to reuse established applications cost-effectively and gives them greater capability to modify, extend or improve individual services independently when there is no change to the underlying contract. SOA helps business owners take a more direct, active role in the design of IT systems, which enables IT systems to reflect business processes more accurately and improves their capability to adapt as processes change.

As SOA maximizes the reuse of applications, it helps save costs in developing new components and maintaining existing ones. As consumers of services are concerned with the service contract and the service delivery rather than the service implementation, the implementation can be tested independently and consistently, reducing the cost and time required for software testing.

Though SOA offers a number of benefits to the business community, there are business challenges that should be considered before adopting it. The most important is the scarcity of mature resources with SOA experience and expertise; in most cases, organizations need to spend considerable time and money to build a team with SOA expertise and to develop guidelines and processes for implementing SOA. Organizations also need to invest more in network infrastructure, as the requirements of an SOA environment are more complex.

As per a recent Gartner report (Schulte and Abrams, 2006), SOA will be used, in some part, in more than 50 per cent of new mission-critical applications and business processes designed in 2007, and in more than 80 per cent by 2010 (with a probability of 0.7).

Dimensions of Future Software Architecture

The software architecture field is at present at a crossroads, and it is important to redefine the dimensions to be considered to take it to a new peak. In the working session of the Working IEEE/IFIP Conference on Software Architecture in January 2007 (WICSA, 2007), this topic was discussed and the dimensions described below were identified.

Codification and Socialization

These are the processes by which an architect communicates architectural ideas to the stakeholder community. Codification refers to a strategy based on explicit reuse of architectural knowledge and solutions, while socialization refers to a strategy based on knowledge creation, sharing and transfer. Codification and socialization are complementary in nature and should be used in such a way that they reinforce each other.

For example, consider the 'elevator pitch' scenario: what if, by chance, you wind up in an elevator with a key person, such as the CIO, and you have 20 seconds to convince him or her that your ideas are worth exploring further? If you have a strong idea that has been proven and used, use the codification strategy and explain clearly how your idea can make a change and add value to the business. If you want to try a new idea, use the socialization strategy and ask the listener for feedback. Based on the feedback received from earlier listeners, make the idea more consistent, more innovative and simpler to understand.

Though the choice between a codification and a socialization strategy depends on the maturity of the solution and the business unit, a combination of both has been used successfully across business units in many industries.

Handling Quality Attributes

It is very important to identify quality attributes and handle them while designing the architecture of the system. Though quality attributes linked to business and engineering needs are specified, not all quality attributes can yet be expressed quantitatively. Usually, quality requirements are plugged in later, during the coding phase; it is important to consider them during architectural design. Researchers are working on understanding domain quality attributes and ways to achieve them, and the generation of quality standards and certification processes is also on the cards. Another important factor is to maintain traceability between quality attributes and architectural decision points and to provide analysis of the architectural quality needs.

Architectural Automation

There are around 50 different architecture specification languages available, but none of them allows automatic conversion into building blocks. UML achieves a part of this by providing diagrams and tools, but it is not sufficient for architects or designers. Building blocks with defined quality attributes for specific domains should be created, and an architectural language should be used for the automation.

Architectural Responsibility

It is important to define the role of an architect and specify the degree to which an architect is responsible. In most organizations, the role of the architect is consultative: architects give their ideas and suggest styles and patterns, but they are usually not held responsible if the system faces problems later; nor, if the system development succeeds, are they given the recognition. A clear definition of an architect's role and authority has to be established and adhered to.

Accidental versus Intentional Architects

It is observed that in most cases architects are selected on the basis of their development or programming experience. They usually do not have any formal training in architecting systems. They also lack a clear career path, and so tend to invest more in learning project management than in architecting systems. It is thus important to create a career path for architects, so that people choose this path intentionally and excel in their work rather than being put in the role by accident.

Domain Versatility and Adaptability

Often, the architectural design depends only on the technology and platform to be used for developing the system; the domain is not considered at all, and the architect often does not have much domain knowledge. It is thus important for the architect to have domain knowledge and the maturity to link domain information with architectural goals and techniques.

Technology Dependency

Technology is a tool that helps the architect design the system efficiently, but architecture should not be constrained by technology. Architects should specify the abstract solution without necessarily binding it to existing technology, and technology should not restrict the architect while designing the system. Indeed, the architect should be fearless enough to architect the system in a way that influences future technology.

Inter-disciplinary Architecture

At present, an architecture is typically designed within a single discipline, fully understood by the architect, who is comfortable with the efficiency of the design. It is important instead to architect the system with multiple disciplines fully integrated into a single system, so that the architecture meets the perspectives of all the stakeholders.

Critical Software Architecture Elements

In any discipline, fashions, terminology, platforms, methods and frameworks (the means to achieve the goal of building complex software systems that are usable, faster, better and cheaper) might change, but there are always some critical elements that remain important. Service orientation is based on such critical elements: serviceability, adaptability and productivity. For the software architecture field, we believe these critical elements are as important as performance, reliability and availability, but they are not as well understood.

In this section, we discuss serviceability, adaptability and productivity from the SOA point of view (that is, how software architecture can be serviced, adapted and efficiently produced). Our prediction is that these elements will become more important in the future as the complexity of the software we develop increases and the environment in which we operate changes.

Criticality of Serviceability

The importance of serviceability is growing as systems transition from backroom operations to being business critical. Here, serviceability means that all the users of a given system are appropriately assisted by the system itself in achieving their goals, and that the functionality expected of the system is provided. These users include not just the so-called end customers but also others: those who provide services to their own customers using the system, and those responsible for maintaining the system and making sure it runs appropriately. System administrators help maintain users, data and the processing of the system; this group includes those who take backups, those who add or delete users and those who watch the system during peak usage. Programmers help enhance the system as the environment in which it operates changes.

All of these users service the system so that it continues to serve end customers. In other words, the functional requirements of these users differ from those of end users. For example, a 'programmer user', whose responsibility is to enhance the system, has a requirement that the system support easy debugging and that its architecture allow new features to be added with ease. Similarly, a person who maintains user data should be able to easily add, revoke and view the privileges of all users. Most often, what are normally perceived as non-functional requirements, such as maintainability, reliability and availability, become functional requirements of these special users.

In this context, it is very important to define the term service properly. At the minimum level, each service should have the following:

  • A clearly defined set of consumers or the customers or the users

  • A clear expectation of what functionality the service provides

  • A concrete understanding of the quality of service that is guaranteed

The guaranteed quality of service needs some attention here. Every service provided by an IT system, and the IT system as a whole, should clearly specify what the customer should expect rather than hide this aspect. This is an explicit commitment a service makes to those who use it. Each service should be built in such a way that it honours its commitment, and in case it cannot, it should know how to manage that situation. IT systems should be engineered so that this serviceability aspect is built into them. This implies that the system's architecture has to support easy serviceability and that the quality of service has to be engineered into the system. Performance engineering for software development (Smith, 1990) shows how performance challenges (performance-oriented architecture) are identified and addressed; similarly, if other architectural drivers such as security (architecture for security) are identified, the challenges related to them need to be identified and addressed.
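
As a toy illustration of a service that both states its commitment and knows how to manage the situation when it cannot honour it, consider the hypothetical façade below, which enforces its own response-time commitment and falls back to a degraded answer. All names, the timeout value and the fallback policy are invented for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical façade that makes the service's quality commitment explicit
// and manages the case where the commitment cannot be honoured.
public class CommittedLookupService {
    private static final long COMMITMENT_MS = 2_000; // promised response time

    private final ExecutorService executor = Executors.newCachedThreadPool();

    public String lookup(String patientId) {
        Future<String> result = executor.submit(() -> queryBackend(patientId));
        try {
            return result.get(COMMITMENT_MS, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true);
            // Degraded-mode answer instead of silent failure: the response-time
            // commitment is kept, and the breach of freshness is explicit.
            return cachedSummary(patientId);
        } catch (Exception e) {
            return cachedSummary(patientId);
        }
    }

    private String queryBackend(String patientId) {
        return "full record";        // stands in for a slow authoritative source
    }

    private String cachedSummary(String patientId) {
        return "last known summary"; // stands in for a local cache
    }
}
```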

Enterprises today are increasingly service centric in their dealings with customers as well as with their suppliers and other partners. For example, a banking customer today more or less acts as his or her own teller in the course of an online or ATM-based transaction. An unwieldy application might be acceptable in the hands of a trained employee, but not when it is directly accessed by a customer. With the customer being a direct consumer of the IT services exposed by an enterprise, it is important to ensure that the service quality promised to the customer is maintained.

It is very important to understand that an enterprise that is service centric is service centric all the way down.

A business service consumed by the customers of an enterprise is realized by a combination of systems and processes within the enterprise. For the service-level commitments of this service to be met efficiently, they must translate into a set of appropriate, aligned commitments for every part of the enterprise involved in its delivery.

As an example, consider the service commitments one would expect from a hospital: patient records available at all times, critical systems available all the time, patient admission formalities completed within a short period of time, and so on. This is the quality level demanded of the hospital’s business systems.

These business functions are realized by a set of processes that include workflows with human, non-IT and IT system participation. These systems are in turn realized as software systems that finally run on a set of infrastructure components. For the business commitment to be maintained, the entire ecosystem that supports the business function must be aware of, and must be designed to support, that commitment. Hence, if patient records need to be available all the time, the storage system inherits an SLA demanding that it is always available, that the information inside it cannot be corrupted and that it is truly persistent. It also means that the storage system must be backed up, but the backup cannot take the storage system, as seen by the rest of the environment, offline.

Thus, each component of the ecosystem that supports this business function must be designed so that the business service commitment is delivered in the most efficient fashion. The only way this can be done is if the entire ecosystem is modelled as a set of services. The business service is composed of a set of other services. Each service has stringent service-level commitments that it must honour to its consumers.
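A small worked example (our illustration, not from the original text) shows why component commitments must be aligned with, and often stricter than, the business commitment: when a service depends on several components in series, its availability is roughly the product of the component availabilities.

    // Hypothetical components behind a 'patient records' service.
    public class SlaBudget {
        public static void main(String[] args) {
            double network = 0.999;     // 99.9% available
            double appServer = 0.999;   // 99.9% available
            double storage = 0.9995;    // storage inherits a stricter SLA

            // Serial dependency: all three must be up for the service to work.
            double endToEnd = network * appServer * storage;
            System.out.printf("End-to-end availability: %.4f%n", endToEnd);
            // Prints about 0.9975: the business service cannot promise more
            // than ~99.75% even though each component looks highly available.
        }
    }

This is the arithmetic behind the hospital example: a demanding business-level SLA forces each supporting component, the storage system included, to carry an even tighter commitment of its own.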

Criticality of Adaptability

Whatever the reason, be it changes in the requirements from the customer, changes in the environment in which the system operates or the need to create new applications and modify existing ones, the need to build adaptable systems cannot be overemphasized. Adaptability allows software to evolve naturally over a period of time.

There are various models to tackle the adaptability aspect of software systems, especially from the software architecture point of view. Unless the software architecture itself is adaptable, the final software system cannot be made adaptable. Evaluation at the software architecture level is the highest level of abstraction one can aim for. If a software system is designed in the most adaptable manner, it should be a self-managing system. In other words, adaptable programs change their behaviour automatically with changes in the environment in which they operate.

However, adaptability is a tough problem. There is a classic debate between being generic and being specific. If the system is built to be very generic, it suffers from software bloat. In other words, there will be a huge amount of code that will perhaps never be used because most of it is written to make the system more extendable or adaptable in case there are future requirements. This phenomenon not only makes the software inefficient in terms of speed and other performance aspects but also makes it very hard to maintain, which actually contradicts the very goal of making it more evolvable. On the other hand, if the software is made very specific, it enjoys the advantages of high performance and lean code but suffers from being non-adaptable to new environments. It is also important to come up with adaptability metrics so that adaptability requirements can be verified at an appropriate later point in time.

There are various categories of adaptability solutions: architecture-based solutions, component-based solutions, code-based solutions, generic algorithmic techniques, dynamic adaptation techniques and adaptation methodologies (Subramanian, 2003). Without going into the details of each of these categories, we would like to state that, among them, architecture-based techniques offer the highest level of abstraction. We will focus on the architecture-based approach to adaptability. In order to make high-level abstraction of the architecture possible, it is important to treat adaptability as one of the important non-functional requirements from the beginning.

An adaptable system should be capable of

  • Detecting any changes in the environment (environment change recognition)

  • Determining the changes to be made in the original system based on the changes detected in the environment (system change recognition)

  • Effecting these changes in the original system to create a new system (system change)

In other words, the requirement of adaptability is decomposed into several sub-goals. This approach is influenced by the non-functional requirement approach to the adaptability problem as proposed in Subramanian (2003). In this approach, decomposition is based on knowledge available (from a knowledge base) about a class of systems. Based on this knowledge about sub-goals, methods for refining these sub-goals can be developed and put back in the knowledge base. This is essentially a knowledge-based approach very similar in its spirit to QFD or the house of quality model proposed by Hauser and Clausing (1988).
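The three capabilities listed above naturally form a control loop. The following Java sketch is a minimal illustration under our own naming; EnvironmentMonitor, ChangePlanner and AdaptationLoop are hypothetical names, not taken from the cited literature.

    import java.util.List;

    interface EnvironmentMonitor {
        List<String> detectChanges();   // environment change recognition
    }

    interface ChangePlanner {
        // system change recognition: decide what must change in the system
        List<Runnable> planChanges(List<String> envChanges);
    }

    class AdaptationLoop {
        private final EnvironmentMonitor monitor;
        private final ChangePlanner planner;

        AdaptationLoop(EnvironmentMonitor monitor, ChangePlanner planner) {
            this.monitor = monitor;
            this.planner = planner;
        }

        void runOnce() {
            List<String> envChanges = monitor.detectChanges();
            if (envChanges.isEmpty()) return;
            // system change: effect the planned modifications
            for (Runnable change : planner.planChanges(envChanges)) {
                change.run();
            }
        }
    }

In practice the planner would consult the knowledge base of sub-goal refinements described above; the sketch only fixes the shape of the loop.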

Criticality of Productivity

Architectural productivity is concerned with providing tools, methods and frameworks for architects so that they can conceive systems that can be built quickly without compromising the quality of the solution.

We can see better tools in the marketplace from vendors such as IBM Rational, Borland and Telelogic. These tools are much more helpful and useful to architects than those available in the past. Of course, there is still a lot of scope for making them assist architects in more appropriate ways. As methods and other infrastructure improve, these tools automatically get enriched.

More than the tools, development platforms such as J2EE, .Net and Symbian/Series 60 offer assistance to the architecture community, and at a more rapid pace. These are, in a sense, platforms with built-in architectures. An architect needs to extend them for a given application. Of course, the disadvantage is that the architect needs to acquire platform-dependent skills and knowledge. The architect needs to have complete insight into the capabilities provided by these platforms in order to build applications on top of them. Most of these platforms have XML-based or SOAP-based communication between the various layers of the architecture.

From 1994 to 1998, there was much emphasis on architecture description languages (ADLs) among the research community in software architecture. We have seen several ADLs including Wright, ACME, Darwin, Rapide, C2, Koala and UML. Of these, only a few survived and proved to be useful. Koala and UML (note that many ADL purists do not agree that UML is an ADL) are perhaps more successful in terms of being useful in practice. ACME found more visibility among the research community. After the late 1990s, there was very little emphasis on ADLs except ADML in 2000 and UML 2.0 more recently. However, the body of knowledge created because of this activity was very useful in creating various methods and platforms within the software architecture field and indirectly aiding productivity goals.

Coming to methods in software architecture, several became very popular, including SAAM (1994), RUP (1996), ATAM (1998), ATAM 2 (2003) and ADD (2003–2004). We have discussed some of them in detail in previous chapters. These methods have changed the way architects approach a problem, making it more systematic and predictable, and thus have had a tremendous impact on the productivity goal of software architecture.

However, the best bet for increasing productivity is a systematic approach to reuse. Productivity increases as reuse increases in software development. Whether it is component-based development or service orientation, reuse in its various manifestations will reliably enhance productivity if our approach is systematic.

Software product lines or product line architectures achieve this goal in a very systematic way. Clements and Northrop (2001) defined SPL as follows:

A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.

Essentially, SPL is a strategic reuse approach. From requirements analysis to building individual components, product lines amortize the investment in the development of core assets for a given product line. The core assets are built for the various phases of software development in the following order:

  • Requirements and requirements analysis

  • Domain model

  • Software architecture and design

  • Performance engineering

  • Documentation

  • Test plans, test cases and data

  • People: their knowledge and skills

  • Processes, methods and tools

  • Budgets, schedules and work plans

  • Components

The earlier in these phases reuse is brought in, the more beneficial it is. There are three key concepts in software product lines:

  • Core assets: Product lines emphasize making use of a common asset base in the architecture phase. This is very similar to what we discussed in the component-based development life cycle. There is an emphasis on developing the core assets. Typically, there are attached processes for this phase, such as maintaining the core asset repository and adding and retiring assets over a period of time.

  • Production: The production or actual implementation part of the software is guided by the production plan, which aids developers in gluing together all the components specified by the architecture. The production plan considers what the existing assets are and what the scope of the product line is.

  • Management: To have a successful product line, managing various resources is essential. Management here includes achieving the right organizational structure, allocating resources, coordinating and supervising, providing training, rewarding employees appropriately, developing and communicating an acquisition strategy, managing external interfaces and creating and implementing a product line adoption plan.

These three concepts or activities are interrelated and highly iterative. For a successful product line, each of these is essential, as is the blending of all three. Also, it is important to note that there is no ‘first’ activity. Each of them should start at the same time and independently. However, there is a strong feedback loop between the core assets and the products. Strong management at multiple levels is needed throughout.

There are three approaches to product lines:

  • Proactive approach: In this approach, the core assets/components are developed first, with upfront investment. It might take longer for the first product to appear, but later products in the product line come to market quickly.

  • Reactive approach: Here the core assets are developed from existing products. The earlier products were perhaps developed without the product line in mind. However, by studying them, core assets that are common across all or most of these products can be pulled together. This activity has to be guided by the product line architecture. Using this approach, one can start a product line at a lower cost compared with the other approaches.

  • Incremental approach: This approach involves developing a part of the core asset base (the architecture plus a few components). Using this part, the first few products are developed and released. These initial products help in extending the core asset base, and the process continues in an iterative and incremental manner. This approach can be very beneficial if a good roadmap is created initially.

After we decide on the approach to take for the development of the SPL, we need to identify the practice areas. The book on software product lines by Clements and Northrop (2001) identified 29 practice areas, categorizing them into software engineering, technical management and organizational management. The software engineering practice areas include architecture development, architecture evaluation, COTS utilization, and so on. Technical management involves practice areas such as scoping, technical planning, tool support, configuration management, data collection, metrics and tracking. Organizational management includes business case development, customer interface management and market analysis.

For many organizations, focusing on 29 practice areas at the same time is not possible. Depending on an organization’s context, practice areas different from those listed in the above source may also be identified. Organizations need to figure out which practice areas are most important to initiate. One way to tackle this problem or resolve this conflict is to employ patterns as a way of expressing common context and problem–solution pairs.

For example, the ‘what to build’ pattern or the ‘cold start’ pattern captures the context of a typical organization facing the dilemma of how to initiate a product line process. Such a pattern also gives solutions to various issues such as organizational planning, launching and institutionalizing, structuring the organization, funding, organizational risk management, customer interface management, developing an acquisition strategy, operations and training.

Software Service Lines or Service Line Architectures

We have discussed the major aspects of product lines in order to examine the possibility of applying them to the software service industry. We would like to call them software service lines (SSLs) or service line architectures.

The emphasis here is on achieving reusability of business solutions through SSLs. This area is still in its infancy. Our purpose is to initiate the thought process in this area so that an interested reader can take up the challenge of making inroads into this very important field.

Currently, many service companies initiate most projects from scratch. There is hardly any reusability across groups. In a typical enterprise application development, business process modelling goes through several refinements and finally gets implemented using either tightly coupled components or loosely coupled services. With increasing demands, it is more viable to build up reusable assets and thereby maintain profitable margins. There should be a movement towards shifting software service companies from a labour-based model to an asset-based model.

In one sense, software reuse in a services setting might seem like an oxymoron. The services industry typically gets paid by the man hour or man month; more time spent on the project results in higher payment. Some people think that reducing development time by reusing core assets might therefore hurt the bottom line. In addition, there are the following issues:

  • How to protect core assets and prior IP when everything developed must be handed over to the customer?

  • When to focus on developing core assets when everyone is busy working for various customers?

  • How to negotiate value for prior (or background) knowledge brought into the project?

While these are very important questions, there are certainly ways in which they can be addressed. A lot depends on the customer and the organizational policy. The important goal is to bring more value to the customer. Service organizations also need to move up the value chain by offering critical and important services in a cost-effective manner. The emphasis should be on increasing per-employee productivity (and hence revenue) rather than on recruiting more and more people into the organization in order to grow.

In the services industry, software development differs from product development in various ways. For example, the way one acquires requirements is different: the sources and methods of acquisition differ, and so does the way one analyses requirements. The way a customer is involved in creating the end product also differs significantly; for a product there is no single, specific customer for whom it gets designed, whereas in services every project typically begins and ends with a single customer. So, the intended end user of the output is different in the two cases.

However, thinking from a different perspective, there is no sharp line separating products and services. Often, the difference gets blurred. In one case the product is the process, and in the other case the process is the product. In services, the speed at which the system gets developed is very important for the customer and hence for the developer. In a competitive world, cycle time is one of the more critical performance measures. Customers always demand faster, better and cheaper solutions.

As an analogy, we can consider the automobile industry to illustrate the difference between product lines and service lines. The assembly plants that manufacture different cars can be compared with product lines. Note that cars of different models belonging to the same product line can be manufactured in a single plant, but these plants are fixed. For example, Toyota’s Camry, Corolla and Avalon can come from the same plant as they belong to the same product line.

Customizable assembly plants can be compared with service lines. Here, the plant configuration can be changed depending on the type of vehicle to be manufactured: the same plant can be used even if the product line changes, albeit with a change in configuration. For example, imagine the same machinery producing the Camry, Corolla, Land Cruiser and RAV4 in the same Toyota plant.

In this dynamic situation, it is very important to measure the quality of such service lines. Treat each of the reusable assets in the service line as an engineered object within an enterprise. Their quality should then be measured in the same way as the quality of other engineered objects, that is, in terms of quality attributes such as reliability, availability, performance and supportability. An asset’s reliability is a measure of its completeness, correctness and consistency. Its performance is a measure of its responsiveness, throughput and efficiency. Its supportability is a measure of how the asset is designed to be supported, how it handles its exceptions and how the asset is maintained and upgraded with minimal impact on its users. As we discussed before, supportability also defines the reporting requirements on service-level accomplishment against commitments to the consumers of the service.
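As a minimal sketch of that last reporting requirement (our illustration; the class and field names are hypothetical), an asset can carry a report that compares measured service levels against committed ones:

    class ServiceLevelReport {
        final String asset;
        final double committedAvailability;   // e.g. 0.999
        final double measuredAvailability;    // observed over the period
        final long committedResponseMillis;   // promised worst case
        final long measuredP95ResponseMillis; // observed 95th percentile

        ServiceLevelReport(String asset, double committedAvailability,
                           double measuredAvailability,
                           long committedResponseMillis,
                           long measuredP95ResponseMillis) {
            this.asset = asset;
            this.committedAvailability = committedAvailability;
            this.measuredAvailability = measuredAvailability;
            this.committedResponseMillis = committedResponseMillis;
            this.measuredP95ResponseMillis = measuredP95ResponseMillis;
        }

        // True only when every measurement meets its commitment.
        boolean meetsCommitment() {
            return measuredAvailability >= committedAvailability
                && measuredP95ResponseMillis <= committedResponseMillis;
        }
    }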

While discussing product lines, we said that core assets are developed at various stages of the software life cycle. All of those core asset areas are also applicable to service lines. In addition to them, the following core asset areas are important for amortizing the investments in service lines:

  • Customer discovery

  • Customer engagement

  • Presales support

  • Delivery excellence

  • Vertical businesses (domain areas such as insurance and telecom)

  • Horizontal businesses (platform areas such as SAP, J2EE, .Net)

  • Support businesses (such as learning, enterprise systems)

Since product lines are relatively well studied and practised, service lines can make use of the product line body of knowledge in defining scope, variation points and evaluation aspects. Scoping will determine what will and will not be part of the core asset base. Variation points identify the locations within a service line where the different applications built using that service line are expected to vary. Evaluation of service lines will be a challenge as there are no formal models available yet. It will be interesting to see what kinds of metrics and parameters evolve for evaluating the effectiveness of service lines.
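To illustrate a variation point concretely (a sketch with hypothetical names, not from the cited literature), the service line can fix a stable interface as a core asset while each customer engagement plugs in its own variant:

    interface TaxCalculator {                  // the variation point
        double tax(double amount);
    }

    class InsuranceTaxCalculator implements TaxCalculator {  // one variant
        public double tax(double amount) { return amount * 0.05; }
    }

    class TelecomTaxCalculator implements TaxCalculator {    // another variant
        public double tax(double amount) { return amount * 0.12; }
    }

    class BillingService {                     // stable, reusable core asset
        private final TaxCalculator calculator;
        BillingService(TaxCalculator calculator) { this.calculator = calculator; }
        double invoiceTotal(double amount) { return amount + calculator.tax(amount); }
    }

Here BillingService is the reusable core asset, and the two calculators mark the point where engagements in different vertical businesses, such as insurance and telecom, are expected to vary.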

In order to adopt a service line, we need a visionary champion within the organization with sound knowledge of reuse models, service scoping and product lines. Over a period of time, we expect this to transform into a specialized discipline by itself. For each business unit within the service organization, either the proactive, the reactive or the incremental approach can be used, based on appropriateness. Another requirement for any unit ready to adopt service lines is that it should be technology ready, that is, ready to embrace the newer tools, methods and processes that aid the service line practice.

IBM has initiated and actively promoted a related area called service sciences, management and engineering (SSME). It is an interdisciplinary approach to designing and implementing complex service systems that deal with people and technologies to provide value for others. SSME is a combination of science, management and engineering applied together. Various organizations such as Berkeley, Arizona State University, CMU, MIT, Oxford, Tsing Hua, NCSU, Georgia Tech, San Jose State University and Stanford are already part of this initiative, which essentially promotes research and practice in a service-centric manner.

It will be interesting to see how this new area will shape itself in aiding the service industry.

Conclusions

Software architecture as a discipline has arrived. We have titles and roles in organizations with the keyword architect. More than 50 books and numerous articles have been written on software architecture. Many universities are teaching courses on software architecture. There are several conferences and workshops, such as WICSA and SPLC, that are regular events. Now no architect needs to start a design from scratch, because there are several development platforms offering pre-created architectures, such as J2EE, .Net and Symbian, along with communication or data interchange standards such as XML and SOAP. One needs to follow the rules already laid down by these platforms, which makes the life of an architect easier in one way and harder in another: easier because there is so much reuse of already proven platform components of the architecture, and harder because there is so much platform dependency.

Gone are the days when we designed, implemented and delivered the system to the customer and heaved a sigh of relief. Customer shipment is only the beginning of the real game in system development. Actual constraints, requirements and quality issues surface only after this stage. Hence, the important factors that give strength to a system include the openness, serviceability and extensibility of its architecture. This is about moving beyond software architecture and building a solution that will adapt itself to the changing environment and continue to be useful.

We also see a future trend that can be called lightweight software architecture, similar to the idea of lightweight or agile programming practices. This comes from the same context that gave birth to extreme programming, aspect-oriented programming and other agile methodologies. Since the market to be served is highly dynamic, and since the lessons from practice show that changes are normal, we expect that stability of the solution will be very hard to achieve. In such a dynamic market with a changing solution space, the architecture must be lightweight. However, there are many issues that need to be resolved. For example, the concept of the weight of an architecture has to be defined in a more formal and clear way; currently, the term is used informally. The benefits of lightweight architectures, and how to create them, are not yet clearly known.

In summary, we can expect tremendous growth in the field of software architecture. Service orientation is expected to take on its true meaning and will be practised at many levels in an organization. There is also a prediction that UML will continue to dominate the next decade of software architecture models and frameworks (Kruchten et al., 2006). It is also predicted that model-driven architectures will dominate software architecture research, whereas the work on ADLs will reach a plateau. Material on software architecture, including case studies and textbooks, will be created, and more tools will become available to help architects in the design process. Whatever happens, it is an important and promising area in software engineering. Budding architects need to be alert and watch out for developments.

Further Reading

David Garlan (2000) wrote a very influential article, ‘Software Architecture: A Roadmap’ (available in The Future of Software Engineering, Anthony Finkelstein (Ed.), ACM Press, 2000, ISBN 1-58113-253-0), that discusses a few important research and practice trends in software architecture. Some of the speculations made in this paper have already come true. Balancing buy-versus-build options, network-centric computing and pervasive computing are identified as major future challenges.

Philippe Kruchten, Henk Obbink and Judith Stafford were guest editors of the special issue (March/April 2006) of IEEE Software Journal on ‘The Past, Present, and Future of Software Architecture’ (Kruchten et al., 2006). This issue has a collection of very interesting articles. The editorial article lists some of the best text books, articles, conferences and events in the field of architecture.

Service-oriented architecture is now very popular, with several books and other literature being available. The following are our recommendations:

A series of three books on service-oriented architecture by Thomas Erl from Prentice Hall is very popular.

Nary Subramanian and his advisor Lawrence Chung wrote a couple of popular technical papers on software adaptability: ‘Adaptable Software Architecture Generation Using the NFR Approach’ (Subramanian, 2003) and ‘Process-Oriented Metrics for Software Architecture Adaptability’ (Subramanian and Chung, 2003). These papers present a new approach to increasing the adaptability of software architectures.

For information on Service Sciences, Management and Engineering you can refer to the IBM Web site (www.research.ibm.com/ssme).