Chapter 4. Architecture Evaluation—Developing a Futuristic Travel Search Engine

Contributors: Kirti Garg, Dilip Bhonde, Shubhangi Bhagwat

Background

What Is Architectural Evaluation?

Architecture evaluation is a quality assurance activity that ensures that all the artefacts and work products relating to the architecture are examined. These work products include the architecture itself and its architectural descriptions. If the architecture of a system is an organizational investment, then a structured review may be considered an insurance policy on that investment. Developing an architecture without feedback from a non-partisan reviewer is bad practice. Paul Clements, Rick Kazman and Mark Klein, in their book Evaluating Software Architectures: Methods and Case Studies (Clements et al., 2002), discuss various aspects of architectural evaluation. Here, we present some of them by answering the following questions:

  • What is architectural evaluation?

  • Why should architecture be evaluated and reviewed?

  • When to evaluate and review?

  • Who should evaluate and review?

  • What should be reviewed?

  • How to review architectures?

Why Should Architecture Be Evaluated and Reviewed?

Reviewing architecture offers many direct and indirect advantages. These advantages can be enjoyed by the entire range of stakeholders of the project (developer organization, client users, end users, developers, project manager, architect(s), installers, system administrators, maintainers, etc.). The reasons for evaluating and reviewing architecture include the following:

  • Huge savings in cost and effort have been observed through the early detection of errors and problems. Errors at the architectural level are usually logical errors that can prove very costly if left undetected. Early review and detection of such errors is a must to keep the cost and effort within initial estimates. Architectural review tends to find the defects that can be fixed, as well as reveal defect trends, so that the process of architecting can be improved.

  • Architectural evaluation leads to a group conversation about the functional and non-functional (quality-related) requirements of the proposed system. Stakeholders get an opportunity to realize that some of the requirements severely impact factors such as design, costs and risks. This can lead to a clarification and prioritization of requirements, hence easing the development effort. Gathering all stakeholders may seem a potential source of changed requirements and plans, but it is actually a blessing in disguise. Since it is early in the development cycle and no major decisions have been taken, this is the perfect time to change plans and requirements if needed. If the evaluation leads to changes in the design or the architecture, the cost of making those changes at this stage is much less than the cost of implementing them later.

  • Reviews ensure reduced risks of disaster projects or acquisitions through a thorough evaluation of projects, especially before acquisitions.

  • One of the most crucial aspects of development is to have a common understanding of the system among all stakeholders. A complete representation of the architecture is a prerequisite for architectural evaluation and review. Usually, standards such as the 4+1 views of UML or the IEEE 1471 standard representations can be used. This is like converting project data to a form that a stakeholder at any level can understand. Being in a form comprehensible to all, and explained to all by the architect(s), it leads to a common understanding of the system, with increased and proper documentation as a by-product.

  • A comparison of architectural options is possible when the review for system qualities (usually non-functional requirements and key functional requirements) is done early in the life cycle.

  • Reviews are excellent tools to promote organization-wide knowledge about a specific process or discuss best practices and are hence good instruments for organizational learning.

When to Evaluate and Review?

Architectural review is not necessarily a one-time job. It can be performed at different points in time, as per the need of review, available resources and, of course, fulfilment of the review prerequisites:

  • Regularly, as part of an iterative architecting process

  • Early, to validate the overall course of the project, just after finalizing the requirements specification documents and coming up with ideas for architecture or creation of a logical architecture

  • Toll-gate, as a check before major implementation starts, or in case of an acquisition, when the supplier is selected

  • (Too) late, to determine how to resolve architectural problems

The review can be a formal one, following the process thoroughly, or may be a quick informal review, done by the architecting team itself.

Reviews that are thoroughly planned are ideally considered an asset to the project and, at worst, a good chance for mid-course correction. This means that the reviews are scheduled well in advance, are built into the project’s work plans and budget and are expected to be followed up. The review is then perceived not as a challenge to the technical authority of the project’s members but as a validation of the project’s initial direction. Planned evaluations are proactive.

Unplanned reviews are reactive steps, as the evaluation is unexpected and is usually an extreme measure to salvage the previous efforts. Usually, they come into the picture when the management perceives that a project has a substantial possibility of failure, necessitating a mid-course correction. Such an unplanned evaluation is more of an ordeal for project members, consuming scarce project resources and tightening schedules. It tends to be more adversarial than constructive.

Who Should Evaluate and Review?

Can anybody and everybody review architecture? The answer is no. Depending on the purpose of the review, different parties are involved. These can be the development team itself, another group in the organization or an external reviewer, preferably non-partisan, with no interest in the outcome of the evaluation and with expertise in conducting evaluations.

To evaluate architecture without any bias, it is very important to bring in an external reviewer, as it helps in getting an outsider’s perspective on the project. However, this can sometimes cause problems because the project’s architect and other people in key positions may feel uncomfortable or even threatened. Hence, it is important to make them feel comfortable and make them realize that what is being evaluated is only the architecture and not the people responsible for it. It is important to communicate that the evaluation will be objective and is done in the interest of completing the project successfully. It is important for architects and managers to understand that having an eye from the outside provides an excellent opportunity to get feedback and find out if the design has weak spots. After all, the project team might have been working on the project for a long time without being able to clearly see certain problems (the same problem as code blindness, where a programmer cannot find his own defects).

The stakeholders are also involved in most of the reviews. It gives them an opportunity to voice their individual concerns. (For instance, if the head of customer service wants to have an account search facility that is cross-referenced by certain esoteric customer vitals, he can continue to make the case for its inclusion.) The stakeholders can also either agree or disagree if the showcased architecture meets the project goals.

The evaluation team must be assembled in alignment with the goals and purpose of review and in a way that addresses the following:

  • The team must be perceived (by members of the development project) as impartial, objective and respected. The team must be seen as being composed of people appropriate to carry out the evaluation, so that the project personnel do not regard the evaluation as a waste of time and so that the team’s conclusions carry weight.

  • The team should include people highly skilled in architecture and architectural issues and be led by someone with experience in designing and evaluating projects at the architectural level.

  • The team should include at least one system domain expert, someone who has built systems in the area being evaluated.

  • The team should be located as close as possible to the source of the artefacts it will examine. Locality will simplify logistics and enhance communication with project personnel.

Domain knowledge and unbiased views are two essential characteristics to be shown by the personnel involved in the review.

What Should Be Reviewed?

What can we review exactly when we talk about ‘architectural review’? A review process must have a well-defined goal. The deliverables and the inputs to the review process should be well defined, and it is just as essential that the qualities being looked for are well defined. Any of the following can be evaluated as per the need:

  • Functional requirements: Evaluating the architecture from the point of view of functional requirements is straightforward. The evaluation team checks whether each of the functional requirements is fulfilled by the architecture and how it is being fulfilled. If the architect does his or her work carefully, this should not cause any problem. Checking for functional requirements makes sure that none of the critical functionalities has been missed by the architect. The evaluation team goes through all the use cases and each item in the SRS (software requirements specification) document and checks whether the architecture satisfies them. Most architecture tools will help in this process, though you really do not need anything more than a simple scoring system.

  • Non-functional (quality) requirements: The non-functional requirements, or quality attributes, are more challenging to evaluate as they create many indirect effects. For example, a layered architecture is more expensive and slower to build than a fat client, but for future changes the layered architecture is more flexible and does not limit development. The evaluation team takes into account all the typically used quality attributes even if the project members have not made them a requirement (or even seen them as important). It is the responsibility of the evaluators either to come to the same conclusion (that individual quality attributes are not important enough to focus on) or to point out possible obstacles along the way (that individual quality attributes pose a problem for the project). For example, the project group and the stakeholders may not have portability in the requirements specification because they think it is not important (or do not realize its importance) for the project. The evaluation team might have had experience with similar projects (and their aftermaths), so the team may suggest that portability is a necessity and project a time frame for when it will be needed.

  • Architecture description: The architectural description documents can be reviewed for completeness, consistency and correctness. They can be reviewed against the requirements and the standard notations being used to check for discrepancies.

  • Overall architecture issues: Completeness, correctness, consistency, feasibility and testability of the architecture.

  • Future changes and impact: This can be a ‘what-if’ analysis to judge the impact of possible future changes in the architecture and changes in the environment that can lead to architectural changes. This kind of evaluation is usually done to check for ‘modifiable architectures’ that are assumed to serve for long durations.

  • Architecture process: The process of architecture itself can be reviewed to check if it has been followed. Believers of the thought that ‘in process lies the quality’ recommend and practise this kind of evaluation.

How to Review Architectures?

Let us ask the most important question. How to review? Architectural review, like any other review process, is to be carried out in three major steps—fulfilling the pre-conditions, conducting the review and generating or documenting the outputs/findings.

The pre-conditions are the set of activities that must be performed before a review can be performed successfully. These include a proper understanding of the review context, the plan of the review, obtaining proper organizational and logistic support, building the right team for review and, most importantly, getting the right and complete set of documents. These documents include the architectural representations, evaluation criteria, non-disclosure agreements from the involved parties, and so on.

Next is the actual review process itself, where the architecture is evaluated using one of the decided methods (discussed next). But simply executing the methods is not enough; a thorough record is to be prepared. All issues/concerns/comments/risks that come into the picture as a result of the review process should be elaborately recorded along with the comments from the reviewers.

These issues/concerns/risks should also be ranked after discussion among members of the review team.

As the concluding step, various outputs should be generated. These include the reports of review results, ranked issues, enhanced system documentations (as clarifications and prioritization occur) and the cost and benefit estimations for implementation of changes if any.

Postmortem

This case study exposes interesting issues relating to architecture review and evaluation. The team is executing a very ambitious project that aims to change the way the travel business runs. The challenge is to come up with an open architecture. There are many stakeholders in the project, including travel companies such as Trav’Mart and TravelPeople, travel agents, end users who plan their travel and corporate travel help-desk personnel. This architecture is like a one-time investment, for the client as well as for the developers. Clients would like to ensure that the proposed architecture is capable of fulfilling their current and future business needs. The developer organization would like to ensure that the proposed architecture will stand the test of time. The proposed system is designed to be in use for a long period and hence system administrators will ask for a system that is easy to maintain, modify and extend.

But how are they going to ensure that all these requirements will be fulfilled? Obviously, they do not want to wait for the system to be completely developed so that it can be evaluated. Hence, the option is a toll gate architectural review. Though we have discussed most of the fundamental aspects of architectural evaluation, there are still many questions that need to be answered and more clarity needs to be brought in. Is the architecture ripe enough to be reviewed? How will the evaluation be performed? Will the reviewer team have sufficient and necessary skills for review? The best way to answer these questions is to perform the evaluation process by following a proven review method that will guide us at each step.

Techniques for Evaluation and Review

The whole gamut of review techniques can be divided into two fundamental types:

  • Questioning techniques

  • Measuring techniques

The questioning techniques are relatively simple and include methods such as a general discussion based on qualitative questions, evaluation through questionnaire(s), using a predefined checklist for review or using well-structured scenario-based evaluation methods. Scenario-based methods are gaining popularity and a lot of research is being carried out in this area. Methods such as the architecture trade-off analysis method (ATAM) and the software architecture analysis method (SAAM) have shown encouraging results.

The measuring techniques are more technical in nature and usually involve quantitative means such as collecting answers to specific questions, collecting metrics and using simulation and prototypes.

A Review Method for Architectural Description and Architecting Process

The focus of such methods is to evaluate if the architect performs the right activities and if the resultant architecture (being represented by the description) is good enough for further use. Here, the description document is the artefact that is evaluated. For this, it is assumed that the document has been created using some standard. This standard can be 4+1 views of architecture or can be any other standard as long as it is approved and followed by the organization.

The review method

As with other evaluation methods, collection of the relevant set of documents is the first step. An evaluation team that may represent all the stakeholders is put together. The documents are read by the team members and commented upon.

The team uses some standard to evaluate the documents and check if the given architectural description represents the concerns of all the stakeholders. The standard being used as the checklist is usually a recommended practice such as the architecture description framework IEEE 1471.

This IEEE recommended practice, depicted in Figure 4.5, is for writing architectural descriptions in which the concerns of each stakeholder are represented adequately.

Figure 4.5. IEEE 1471-2000 recommended practice for architectural description of software-intensive system

Often, more detailed and very specific checklists can be used for certain models or documents. The reviewing team may interview the stakeholders for clarifications, confirmations and comments or to understand the process that was followed for architecting, if needed. The team then reports back its observations, perceived risks and suggested improvements.

Whatever method is applied for the review, the typical issues that are under investigation are the following:

  • Are all stakeholders considered?

  • Have all requirements been identified?

  • Have all quality issues been considered and selected appropriately?

  • Are key concerns of all stakeholders identified and their viewpoints defined?

  • Are these viewpoints rational and supported by facts?

  • Are the views documented and conformance gained from the concerned stakeholder?

  • What is the quality of the model/document that is being studied? Is the document organized, readable and consistent? Is it complete and free from ambiguities?

  • Does the architectural solution being provided appear rational?

  • Are the documents well managed?

Scenario-based Review Methods

Scenario-based methods are the most popular and advocated methods. They focus on analysing the architecture by identifying relevant scenarios (i.e., events/situations in a system’s life cycle) and reasoning their impact on the system (architecture).

As is well known, a scenario is a brief, well-described narrative of the usage of a system, typically from an end user’s point of view (and sometimes from the developer’s point of view). The system’s functionality is nothing but a collection of such scenarios.

Scenarios are also important from a completely different angle. They give very good insight into how a given architecture can meet the quality attributes. Fulfilling a scenario is a good way of measuring the readiness of the system’s architecture with respect to each of the quality attributes.

Though architecture can be adequately judged on the basis of the quality attributes, understanding and expressing the quality attributes is itself a difficult task. Hence, a reverse mechanism can be used, where a scenario represents the quality attribute. As scenarios can be thought of and expressed easily, they can be used to evaluate architectures as well.

Many scenario-based methods have come into the picture recently, but most of them have been found to be suitable only for a limited set of quality attributes, and most are enhanced or restricted versions of the first such method, the SAAM. There are many methods available for architecture review and evaluation (Losavio et al., 2003). Besides SAAM, other popular methods that have evolved over time include

  • ATAM, architecture trade-off analysis method. Though ATAM is an improvement over SAAM, ATAM is restricted to non-functional requirements only, whereas SAAM is suitable for all kinds of requirements and quality goals.

  • CBAM, cost–benefit analysis method.

  • ALMA, architecture-level modifiability analysis.

  • FAAM, family architecture analysis method.

  • ARID, active reviews for intermediate designs (specifically used for incomplete architectures).

All of these methods can be abstracted out and viewed as modified versions of a more general method that is outlined below:

  • Establish the goals of the review: This can be to assess a single quality, compare various architectures, and so on. Select an appropriate review method as per the goals of the review.

  • Fulfil preconditions of the review method: The usual preconditions include obtaining the right set of architectural documents, forming teams, circulating architecture details among the team members, and so on. The team should also set goal(s) and scope for the review process.

  • Establish the issues of review: This can come through a preliminary study of the goals of review, requirements and architectural document.

  • Establish architectural description: The architecture is described to all and clarifications provided. It is preferred to use some standard notations for architectural documents so that all those involved with the review can understand them.

  • Elicit and classify scenarios: All the reviewers brainstorm and come up with relevant scenarios that the system must satisfy or that may impact the system. This can be done iteratively, or other heuristics can be applied to obtain the most useful and representative set of scenarios.

  • Determine the impact of the scenario on the various issues under consideration: The interaction among the various scenarios can also be studied to determine the overall impact on the system. The requirements and architectural descriptions can be augmented with the new information coming into the picture.

  • Overall evaluation and summarization: The observations are summed up, trends are discovered and various issues/recommendations are finalized through consensus.

  • Report findings/recommendations/issues: Though notes are scribed throughout the process, the final observations and findings are formally written and reported. The recommendations are given and often even the process is reported for purposes of learning and preserving knowledge.

Figure 4.6 summarizes these steps.

Figure 4.6. A general perspective on scenario-based evaluation methods

Most of the scenario-based methods involve the stakeholders in the evaluation process, where each stakeholder presents the scenarios most relevant to his job description. Hence, the evaluation team relies on the stakeholders’ ability to write true, feasible and useful scenarios, though coming up with meaningful scenarios is not easy. Scenario-based methods, especially the ATAM, have become very popular in recent years and are among the most recommended practices.

Scenario-based methods are not really scientific or formal. In addition, it is very hard to make sure that all the scenarios identified for the evaluation task are complete, comprehensive, sufficient and necessary.

Scenario-based methods depend on brainstorming, which is very popular but not necessarily effective all the time. It is possible that different subject matter experts have different opinions on the characteristics of each scenario and the business logic.

In such situations, we need a better way than brainstorming for identifying correct scenarios, such as failure determination methods like anticipatory failure determination (AFD; see http://www.ideationtriz.com/AFD.asp).

Case Analysis

The travel search engine is a very ambitious product. It is clear that the team is very meticulous and wants to come up with the best possible architecture. The team has developed the basic architecture and wants to evaluate it. Hence, the team aims at validating the architecture before proceeding further.

The first important question to ask at this juncture is, ‘Is this architectural description mature enough to run the evaluation process?’ All we have is the logical architecture and its description. We know in detail the requirements against which we wish to evaluate the architecture. But is this sufficient? It may appear a difficult task, but not an impossible one.

Let us first establish the goal of this evaluation process. The team has explicit goals for the evaluation. Its aim is to validate the architecture to fulfil the system requirements. But many of the requirements, especially the properties such as response time, depend on the implementation details as well, and hence they are difficult to judge.

The best possible option in these circumstances is to use the domain or functional requirements as quality goals that are to be reviewed. The next task is to identify the non-functional requirements that are independent of the platform-specific or implementation details of the architecture, take them as quality goals and check if the proposed architecture fulfils them.

The various functional requirements that are critical are

  • Content and inventory search

  • Collaboration with agent systems and other existing systems (legacy and other competitors)

  • Pricing mechanism

  • Management of inventory

  • Personalized services and presentation of contents

  • Presentation services such as virtual tours

The various non-functional requirements that become important here are

  • Performance—Response time, number of concurrent users.

  • Availability—24 × 7 availability and planned maintenance.

  • Maintainability—The system will be used by various types of users, geographically distributed. The domain itself is evolving and the business rules change quickly. It should be easy to maintain, modify and evolve.

  • Scalability—As the number of users is bound to increase with time.

  • Security—The system will perform monetary transactions; hence, security is an important consideration.

These are the most important issues. There may be other requirements that seem critical at times. These include backup and restoration, and connection with different types of systems (portability).

Which Method to Use?

The current system is still under development; hence, it is not possible to collect various metrics about its performance and other quality attributes. The best way to go about it seems to be to use a scenario-, checklist- or questionnaire-based approach. But checklists and questionnaires are more restrictive in this case, as the opportunities for gathering the answers to questions and conducting brainstorming sessions with all stakeholders are very limited. Hence, applying scenario-based methods seems more appropriate. Since we have ample domain knowledge available, the scenarios can be easily built and evaluated.

Scenario-based evaluation methods usually bring forward many changes and open points that an architect must reconsider further. They are driven by the stakeholders’ views, as the stakeholders come up with scenarios that represent their views and the architecture is evaluated for the execution of those scenarios. Hence, the evaluation team relies on the stakeholders’ ability to write true, feasible and useful scenarios. Coming up with really useful and representative scenarios is one of the most important and difficult steps in scenario-based methods. But the scenarios are also important in the sense that they help stakeholders identify and properly express the current needs and future changes in the system. Thus, they improve the interaction between stakeholders and architects. Often, the scenario-based evaluation techniques highlight the rationale behind various architectural decisions. In fact, these methods were developed to enable software architects to reason about the system’s quality aspects earlier in the development process (Ionita et al., 2002).

Though a number of methods are available, we select a modified version of the SAAM method (Abowd et al., 1997; Ionita et al., 2002; Kazman et al., 1994). We have modified this method to suit the needs of our case and will utilize some general quality-attribute-based evaluation measures to increase the effectiveness of our method.

Software Architecture Analysis Method

SAAM is one of the earliest scenario-based software architecture analysis methods. Though it was initially developed with modifiability issues in mind, various quality claims of software architectures can be evaluated with the help of scenarios. In practice, SAAM has proven useful for quickly assessing many quality attributes such as modifiability, portability, extensibility, maintainability and functional coverage.

When analysing a single architecture, the review indicates the weak or strong points, together with the points wherein the architecture fails to meet its quality requirements. If two or more different candidate architectures providing the same functionality are compared, then the review can produce a relative ranking.

Prerequisites and inputs

System quality attributes that are going to be evaluated in the review session must be addressed in a certain context. This motivates the adoption of scenarios as the descriptive means to specify and evaluate qualities. A number of scenarios describing the interaction of a user with the system are the primary inputs to this architectural evaluation session. Besides the scenarios, the system architecture description, the reference artefact onto which the quality scenarios are mapped, must be available to all the participants.

Steps in the evaluation process

The process consists of six main steps, which typically are preceded by a short overview of the general business context and required functionality of the system (Ionita et al., 2002). These steps include

Step 1—Develop scenarios

This brainstorming exercise aims at identifying the type of activities that the system must support. All stakeholders perform this exercise and each stakeholder contributes scenarios that affect him. The scenarios should capture all major uses and users of the system, all quality attributes and the associated levels that the system must reach and, most importantly, the foreseeable future changes to the system. The scenarios gathered so far can be grouped as system scenarios.

This exercise is usually performed iteratively. After each iteration, more architectural information is shared and hence more scenarios are gathered each time. Thus, architecture description and scenario development influence each other. The recommendation is to perform these activities in parallel. For example, some scenarios that we can develop for the travel search engine are the following:

  • Time taken for completing the transaction after pressing the submit button with 54 concurrent users and a user base of 500, when it is a high volume online transaction such as a purchase order and the user is interacting through a broadband Internet connection.

  • Time taken to complete the transaction when the user fills a complex Web form such as a vendor master over a 56K modem link and the system is concurrently handling some 50 users.

  • What efforts will be needed when the product is being released to a non-English-speaking country?

  • Will there be platform independence for the business logic layer when the application is ported to different operating systems and Web servers?

  • Can a developer add functionality by adding sub-systems or components or simple modules? How much effort does one need?

  • Will the services be available in case of failures of the Web, J2EE and database servers or the application system? Can one expect 99 per cent availability, with 24 × 7 uptime, all 52 weeks?

  • What will be the authentication and access mechanism to the system for registered users? Do I need to sign into the system again and again to access various services of the system?

  • Can I connect to different resources and collect rich (including audio, video and images) content about various tourist spots and add them to my database?

  • Will I be able to extend the system to new information distribution systems such as iTVs and next-generation systems?

Step 2—Describe architecture(s)

The candidate architectures are presented in the second step. The description should follow a standard such as IEEE 1471-2000 or the 4+1 UML views for architecture. The aim is to present an easy-to-understand static and dynamic representation of the system (components, their interconnections and their relation with the environment). The following can be included in the presentation:

  • Driving architectural requirements

  • Major functions, domain elements and data flow

  • Sub-systems, layers and modules that describe the system’s decomposition of functionality

  • Processes, threads, synchronization and events

  • Hardware involved

  • Architectural approaches, styles and patterns employed (linked to attributes)

  • Use of COTS and other external components

  • Trace of one to three major use case scenarios, including a discussion of runtime resources

  • Architectural issues and risks with respect to meeting the driving requirements

Step 3—Classify and prioritize scenarios

The scenarios collected so far are classified as direct scenarios and indirect scenarios. A direct scenario is supported by the candidate architecture because it is based on the requirements that the system has evolved from. The direct scenarios are perfect candidates as a metric for the architecture’s performance or reliability.

An indirect scenario is a sequence of events whose realization or accomplishment requires minor or major changes to the architecture. The prioritization of the scenarios is based on a voting procedure.

The analogy of use cases and change cases with direct and indirect scenarios, respectively, makes their identification easy.

Step 4—Individually evaluate indirect scenarios

In the case of a direct scenario, the architect demonstrates how the scenario would be executed by the architecture. In the case of an indirect scenario, the architect describes how the architecture would need to be changed to accommodate the scenario. For each indirect scenario, identify the architectural modifications that are needed to facilitate that scenario, together with the impacted and/or new system components and the estimated cost and effort to implement the modification. Record all the scenarios and their impact on the architecture, as well as the modifications implied by the indirect scenarios.

It will be very useful to represent all the alternative architectural candidates in the form of a table (or as a matrix) to be able to easily determine which architecture candidate is better for a given scenario.
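
As an illustration, such a table can be kept as a simple data structure. The sketch below is in Python; the candidate names, scenarios, components and effort figures are purely illustrative and are not taken from the case study.

    # Hypothetical scenario-versus-candidate impact table: for each indirect
    # scenario, record which components each candidate architecture would have
    # to change and a rough effort estimate (person-days).
    impact = {
        "Port business logic to another OS/Web server": {
            "Layered candidate": (["platform adapter"], 15),
            "Fat-client candidate": (["client UI", "business logic", "data access"], 60),
        },
        "Add iTV as a new distribution channel": {
            "Layered candidate": (["presentation layer"], 20),
            "Fat-client candidate": (["client UI", "business logic"], 45),
        },
    }

    # Print the table row by row to compare the candidates per scenario.
    for scenario, candidates in impact.items():
        print(scenario)
        for candidate, (components, effort) in candidates.items():
            print(f"  {candidate}: changes {components}, ~{effort} person-days")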

Step 5—Assess scenario interaction

When two or more scenarios request changes over the same component(s) of the architecture, they are said to be interacting with each other. In such a case, usually, a trade-off results and the architecture may need some modifications. A simple way may be to modify the affected components or divide them into sub-components in order to avoid the interaction of the different scenarios.

The impact of these modifications should also be evaluated for side effects.
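
Following the same idea, scenario interaction can be detected mechanically: two indirect scenarios interact when they require changes to the same component. The sketch below is a minimal illustration in Python, with hypothetical scenario-to-component mappings.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical mapping of each indirect scenario to the components it changes.
    changes = {
        "Port business logic to another OS/Web server": {"platform adapter", "business logic"},
        "Add iTV as a new distribution channel": {"presentation layer", "business logic"},
        "Localize the product for a non-English-speaking country": {"presentation layer"},
    }

    # Group scenarios by shared components: any component touched by two or more
    # scenarios is a point of scenario interaction (and a candidate for splitting).
    interactions = defaultdict(set)
    for (s1, c1), (s2, c2) in combinations(changes.items(), 2):
        for component in c1 & c2:
            interactions[component].update({s1, s2})

    for component, scenarios in interactions.items():
        print(f"{component}: interacting scenarios -> {sorted(scenarios)}")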

Step 6—Create an overall evaluation

Finally, a weight is assigned to each scenario in terms of its relative importance to the success of the system. The weights tie the scenarios back to the business goals and/or other criteria such as costs, risks and time to market. Based on these scenario weights, an overall ranking can be proposed if multiple architectures are being compared. Alternatives for the most suitable architecture can be proposed, covering the direct scenarios and requiring the least changes to support the indirect scenarios.

A scoring system can be used to indicate the importance of requirements and how the current architecture fulfils them. Each of the requirements is given a weight that indicates the importance of the requirement. The score is the estimate of how well the candidate architecture fulfils the requirement. An example is given in Table 4.1.

Table 4.1. Sample scorecard for evaluating software architecture features

Requirement                                                      Weight   Score   Total
Book a tour                                                         5        5      25
Search for a tourist spot and routes from Hawaii to the spot       5        4      20
Change screen layout to suit PDA screens                            3        3       9
Portability                                                         4        3      12
Total                                                                                66

The scoring can also follow the usual style of +1, –1 and 0, respectively, for present, absent and not required status of an attribute/requirement. Here, the total sum will give an objective image of the current system.
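
As a rough illustration, both scoring styles can be computed with a few lines of Python. This is only a sketch; the figures are those of Table 4.1, and the helper names are not from any particular tool.

    def weighted_total(scorecard):
        """Sum of weight * score over all requirements (the 'Total' column)."""
        return sum(weight * score for _, weight, score in scorecard)

    # (requirement, weight, score) rows mirroring Table 4.1.
    scorecard = [
        ("Book a tour", 5, 5),
        ("Search for a tourist spot and routes from Hawaii to the spot", 5, 4),
        ("Change screen layout to suit PDA screens", 3, 3),
        ("Portability", 4, 3),
    ]
    print(weighted_total(scorecard))  # 25 + 20 + 9 + 12 = 66

    # The simpler style: +1 if an attribute is present, -1 if absent, 0 if not required.
    presence = {"Security": +1, "Portability": -1, "iTV support": 0}
    print(sum(presence.values()))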

Increasing the effectiveness of your review

As mentioned earlier, we suggest a number of techniques that may be used with the basic process. One of the most time-consuming and crucial steps is to form scenarios that are meaningful, representative and helpful in the evaluation. Coming up with scenarios depends a lot upon the experience and expertise of the reviewer, and getting a good starting point is often the most difficult part. The methods described below can help find meaningful scenarios.

Generate a quality attribute utility tree

The quality attributes are the driving force behind the architecture. There are four types of qualities:

  • Runtime qualities: These qualities are seen when the system is under execution, such as functionality, security and availability.

  • Non-runtime qualities: These are built-in qualities that do not depend upon the dynamic behaviour of the system. These include qualities such as modifiability, portability and testability.

  • Business qualities: These qualities directly affect the business, such as cost and schedule for development and marketability of the product.

  • Architectural qualities: These are the properties of the architecture process or the description. These qualities include conceptual integrity, correctness, completeness, and so on.

Though quality attributes can be the most direct way to evaluate the system, there are no universal or scalar measurements for quality attributes such as ‘reliability’. Rather, there are only context-dependent measures, meaningful only in the presence of specific circumstances of execution or development. And the context is represented by the scenarios. Hence, we identify scenarios instead of quality attributes directly.

To come up with a proper set of prioritized scenarios, we suggest the development of a quality attribute utility tree. Starting from a root node called ‘utility’, the quality attributes that are most important for the system and are under investigation are added as child nodes of this root. Each of these quality attributes can be further refined until we get well-defined and well-formed scenarios for each ‘sub-quality attribute’. Refinement of the scenarios can proceed using the guidance provided by the ISO 9126-1 quality model (ISO/IEC, 1998), as in Figure 4.7.

Figure 4.7. Sub-characteristics of the ISO 9126-1 quality model

Note that the sub-characteristics given by the ISO standard need to be taken only as a starting point. Typically, a modified version of the standard is adapted based on the expertise of the team, the available information and the suitability of the application. The adapted sub-characteristics of the ISO quality model are then refined several levels deep to create a utility tree for the application.

Figure 4.8 shows a sample utility tree that can be generated for the travel search engine after adapting the above-mentioned standard suitably.

Figure 4.8. Utility tree for the travel search engine

This looks like a backward chaining process, but suits our purpose well.
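
As a rough illustration, the utility tree can be captured in a small data structure. The sketch below is in Python; the node names, scenarios and priorities are merely indicative of the travel search engine’s attributes, not a definitive tree.

    class Node:
        """One node of the utility tree: an attribute, sub-attribute or leaf scenario."""
        def __init__(self, name, priority=None):
            self.name = name          # attribute, sub-attribute or scenario text
            self.priority = priority  # e.g. (importance, difficulty) for leaf scenarios
            self.children = []

        def add(self, child):
            self.children.append(child)
            return child

        def show(self, indent=0):
            tag = f" {self.priority}" if self.priority else ""
            print("  " * indent + self.name + tag)
            for child in self.children:
                child.show(indent + 1)

    utility = Node("Utility")
    performance = utility.add(Node("Performance"))
    response = performance.add(Node("Response time"))
    response.add(Node("Search completes in under 3 s with 500 concurrent users", ("H", "M")))
    security = utility.add(Node("Security"))
    security.add(Node("Payment transactions are encrypted end to end", ("H", "H")))

    utility.show()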

Use quality attribute taxonomies

Another useful way to come up with scenarios is to use quality attribute taxonomies. Each quality attribute taxonomy gives the key concerns and the factors that affect that quality attribute. Knowing the concerns, the scenarios can be refined; knowing the factors, they can be evaluated.

Some sample taxonomies (Barbacci et al., 1995, 2002) are given in Figures 4.9–4.12.

Figure 4.9. A taxonomy on security

Figure 4.10. Taxonomy on software usability

Figure 4.11. Taxonomy on performance

Figure 4.12. Taxonomy on dependability

Identify risks, sensitivity points and trade-offs

Though the ATAM method was developed specifically for this purpose, we suggest that whatever method is being used, the reviewers should try to classify their concerns as follows:

  • Risks: alternatives that might create problems in the future in some quality attributes

  • Sensitivity points: alternatives for which even a slight change makes a significant difference in a quality attribute

  • Trade-offs: decisions affecting more than one quality attribute

They can also use some other appropriate classification. The aim is to classify so that identification of trends becomes easy. A careful analysis will not only reveal the trends exhibited by the architecture but also the architectural capabilities of the architect, thereby providing opportunities for improvement.
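
As a small illustration, concerns recorded during a review can be tagged and grouped along these lines. The concern texts below are hypothetical and are not findings from the case study.

    # Hypothetical review concerns tagged with the classification above.
    concerns = [
        ("risk", "A single relational database may not sustain peak search load"),
        ("sensitivity point", "Cache expiry interval strongly affects both content freshness and response time"),
        ("trade-off", "Encrypting all agent-system traffic improves security but degrades performance"),
    ]

    # Group the concerns by kind so that trends are easy to spot in the report.
    by_kind = {}
    for kind, text in concerns:
        by_kind.setdefault(kind, []).append(text)

    for kind, items in by_kind.items():
        print(kind.upper())
        for item in items:
            print(" -", item)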

These methods improve the usefulness of the basic scenario-based evaluation method. As we did, the method should be adapted to suit particular organizational needs and available resources.

Conclusions

Evaluating architecture and finding bottlenecks and judging the suitability of the architectural components is similar to finding logical errors and other defects in programs early. The earlier we find them, the lower the costs and effort involved. Use a suitable method (a simple checklist, questionnaire, scenarios, prototyping, simulation, metrics based, etc.) to evaluate/review the architecture.

Software architecture evaluation methods expose the specific limitations of the architecture in a simple, straightforward and cost-effective way; these limitations would otherwise surface late in development and deployment and lead to disappointment, rework and increased costs. The various methods help us evaluate architectural decisions in light of quality attribute requirements and reveal how the various quality attributes interact with each other. They highlight the relationship between quality attributes and the business drivers (e.g., affordability and time to market) that motivate them. Though the methods are not inexpensive, the cost–benefit ratios are favourable. Software architecture evaluation methods give an insight into the capabilities and limitations of the system architecture that are not easily visible otherwise. Hence, their use should be encouraged.

In this chapter, we saw the rationale behind the evaluation and tried to answer the Ws (why, when, who, what) of evaluation. We also saw the basics of software architecture evaluation without binding us to just one technique. But the most essential step of this learning exercise is to apply this knowledge. This case study provides us with an opportunity to get hands-on experience that is real and represents a modern software development project scenario.

It is important to note that identifying all critical quality attributes of the application has to start from the end user’s point of view and how various sub-systems within the main system interact with each other. Identifying appropriate scenarios, thus, becomes very crucial to construct the utility tree of a given application.

In this case analysis, we have used only the scenario-based method, which makes use of intuitive approaches such as brainstorming. It does not, however, offer any assurance that all the scenarios are accurately captured or that the right utility tree is built at the end of the exercise. A more important limitation of this approach is that when we consider only a few important scenarios, there could be several failure points that are not obvious to the people sitting in the brainstorming room.

However, we are limited by the fact that the system is still under development, so we are not left with many choices. Had it been a production system with enough metrics gathered, we could have used more structured methods such as measuring techniques, which are more technical in nature. They usually involve quantitative means such as collecting quantitative answers to specific questions, collecting metrics and using simulation and prototypes.

Note that architecture evaluation is conducted at several stages in the development life cycle of a software system. As more architectural descriptions become available, evaluation can be conducted more effectively, and as knowledge about the system grows with development and, ultimately, usage, more sophisticated evaluation techniques can be applied.

In any case, remember that ‘reviewing and evaluating architecture early in the development cycle’ is one of the most effective practices in engineering software systems.

Best Practices and Key Lessons from the Case Study

  • Review the architecture if the project is of 700 staff days or longer. This recommendation is based on a survey conducted by AT&T. On average, it takes 70 staff days to conduct the evaluation and the benefits are about 10 per cent of the overall project cost.

  • The most suitable time to review architecture is when the requirements have been established and a proposed architecture is established. As time passes in any development effort, decisions are made about the system under construction. With every decision, modifications to the architecture become more constrained and, thereby, more expensive. Performing the architecture evaluation as soon as possible reduces the number of constraints on the outcome of the evaluation.

  • Try and ensure that all possible stakeholders participate in the evaluation exercise right from the client representative to developer, system end user, system administrator, tester, and so on.

  • Provide read-ahead material to the review team. This may include a description of the architecture and a discussion of the rationale behind architectural decisions.

  • Rank the quality and function requirements of the system before the evaluation begins. This will in turn guide the quality attributes to be considered on priority.

  • Reiterate the scenario-finding process using the base scenarios to come up with more and refined scenarios.

  • Use use cases or business and functional requirements as the starting point to identify various scenarios or checkpoints to evaluate the architecture.

  • The architectural drivers can form the first set of quality attributes to be analysed.

  • Industrial experience suggests that three to five is a reasonable number of quality attributes to try to accommodate.

  • It is important to check the existence and soundness of system acceptance criteria.

  • Each issue raised during the review should be documented.

  • When evaluating performance in terms of resource utilization, ensure that the workload information (consisting of the number of concurrent users, request arrival rates and performance goals for current scenario), a software performance specification (execution paths and programs, components to be executed, the probability of execution of each component, the number of repetitions of each software component and the protocol for contention resolution used by the software component) and the environmental information (system characteristics like configuration and device service rates, overhead information for ‘lower level’ services and scheduling policies) are all available.

  • Do not miss early warning signals. These include the presence of more than 25 top-level architecture components, one requirement driving the rest of the design, architecture depending upon alternatives in the operating system, use of proprietary components though standard components are available, architecture forced to match the current organization, component definitions coming from the hardware division, too much complexity (e.g., two databases, two start-up routines, two error locking procedures), exception-driven design, and so on.

  • If any industry standards are to be fulfilled by the system, incorporate them in the list of most important requirements.

Further Reading

Among all the resources that are currently available, in our opinion, the most comprehensive one is the book by Paul Clements, Rick Kazman and Mark Klein, Evaluating Software Architectures: Methods and Case Studies, published by Addison-Wesley in 2002.

The Software Engineering Institute (SEI) Web site of Carnegie Mellon University has a wealth of information in this area (www.sei.cmu.edu). In fact, SEI has its own philosophy and approach to architectural evaluation. Some interesting resources from this Web site are the following:

  • Barbacci, M. R., Ellison, R., Lattanze, A. J., Stafford, J. A., Weinstock, C. B., Wood, W. G. Quality Attribute Workshops, 2nd edition. Report CMU/SEI-2002-TR-019. Pittsburgh, Pennsylvania: Software Engineering Institute, June 2002. www.sei.cmu.edu/publications/documents/02.reports/02tr019.html.

  • Abowd, G., Bass, L., Clements, P., Kazman, R., Northrop, L., and Zaremski, A. Recommended Best Industrial Practice for Software Architecture Evaluation. Pittsburgh, Pennsylvania: Software Engineering Institute, January 1997.

  • Barbacci, M. R., Klein, M. H., Longstaff, T. A., and Weinstock, C. B. Quality Attributes. Report CMU/SEI-95-TR-021. Pittsburgh, Pennsylvania: Software Engineering Institute, December 1995. www.sei.cmu.edu/publications/documents/95.reports/95.tr.021.html

  • Kazman, R., Bass, L., Abowd, G., and Webb, M. SAAM: A method for analysing the properties of software architectures. In: Proceedings of the 16th International Conference on Software Engineering, Sorrento, Italy, May 1994, pp. 81–90.

Most of the relevant material about SEI’s own methodology on architecture evaluation, The Architecture Trade-off Analysis Method (ATAM), can be found at www.sei.cmu.edu/architecture/ata_method.html.

A good overview of scenario-based evaluation is given in the paper Scenario-Based Software Architecture Evaluation Methods: An Overview, written by Mugurel T. Ionita, Dieter K. Hammer and Henk Obbink. This paper was published in the workshop on Methods and Techniques for Software Architecture Review and Assessment at the International Conference on Software Engineering, Orlando, Florida, USA, May 2002.

The Software Architecture Review and Assessment (SARA) Working Group published a popular report called the SARA report. Version 1.0 of this report is the result of work done between 1999 and 2002; Philippe Kruchten is the main person behind this effort. The report is very useful for anyone who wants to understand architectural evaluation. It can be found at www.philippe.kruchten.com/architecture/SARAv1.pdf (earlier the same report was available at www.rational.com/media/products/rup/sara_report.pdf).

We also recommend another good article on architectural quality characteristics by Francisca Losavio et al. Quality characteristics for software architecture. Journal of Object Technology, Vol 2, no 2, March–April 2003, pp. 133–150. www.jot.fm/issues/issue_2003_03/article2.

We have mentioned earlier the limitations of the classical brainstorming methods. There are structured approaches, such as TRIZ (a Russian acronym for the Theory of the Solution of Inventive Problems) and I-TRIZ (Ideation TRIZ), that help in this situation. Anticipatory failure determination (AFD) is a sub-area within I-TRIZ. TRIZ, I-TRIZ and AFD are bodies of knowledge that help in approaching innovative solutions to problems. In particular, AFD is a method to detect, analyse and remove failure cases in systems, products or processes. Readers interested in this topic can refer to the Web site www.ideationtriz.com/ as well as the white papers available there. For example, Phadnis and Bhalla (2006) apply these techniques to the digitization of businesses.