After completing this chapter, you will be able to:
- – understand the importance of measurement;
- – understand the measurement process according to the ISO 12207 standard;
- – understand the Practical Software and Systems Measurement method;
- – understand the ISO/IEC/IEEE 15939 measurement standard;
- – understand the measurement perspective according to the CMMI® for Development;
- – understand the benefit of using surveys as a measurement tool;
- – understand how to implement a measurement program;
- – learn practical considerations for measurement;
- – understand the ISO/IEC 29110 measurement perspective;
- – learn the measurement requirements described in the IEEE 730 standard.
Software measurement has been a research topic in software engineering for over 30 years [FEN 07]. Sadly, many measurement programs can hardly report even the most basic software measurements, such as schedule, cost, size, and effort. This means that little recent and factual information is available to project teams and their management [LAN 08].
We recall here the definition of software engineering. This definition puts emphasis on the importance of measuring software activities and products.
Today's organizations that develop software, either as standalone products or as components of systems, must continue to improve their performance and their software. Consequently, they must establish a performance target for their software development and maintenance processes. This would allow for better decision making and assessments of the rate of improvement compared with the needs of clients.
Victor Basili summarizes the many problems relative to measurement [BAS 10]. Organizations developing software experience many problems when trying to implement measurement programs. As an example, they often try to collect too much data, much of which is not useful. They often do not implement a process to analyze the data in a way that helps with strategic and tactical decision making. This situation leads to many problems, such as a reduction of the benefits that can be achieved from a proper measurement program and disillusionment on the part of customers, management, and software developers. The inevitable result is the failure of the measurement initiative.
Watts S. Humphrey described the key purposes of software measurement, that is, to understand and characterize, evaluate, control, predict, and improve [HUM 89]:
- – understand and characterize: measures allow us to learn about software processes, products, and services. Measures also:
- establish baselines, standards, and business and technical objectives;
- document the software process models used;
- set improvement objectives for software processes, products, and services;
- better estimate the effort, schedule, and costs for a specific project;
- – evaluate: measures can be used to conduct cost/benefit analysis and determine if the objectives have been met;
- – control: measures can help with project control of resources, processes, products, and services by sounding alarms when control limits are surpassed, performance criteria are not met, or standards are not followed;
- – predict: when software processes are stable and under control, measures can be used to predict budgets, schedules, resources needed, risks, and even quality issues;
- – improve: measures allow us to identify the root causes of defects and other inefficiencies where improvements can be proposed.
As an example, we illustrate the use of measurement to control the quality of a software project during development. Figure 10.1 presents the defect density of software components that have been inspected. The dotted line shows the level of quality required for a software component. For example, component number 10 should be inspected again after defects of the first inspection have been corrected.
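A check like the one illustrated in Figure 10.1 can be sketched in a few lines. The component data, the defect density unit (defects per KLOC), and the 0.5 threshold below are illustrative assumptions, not values from the figure:

```python
# Hypothetical sketch: flag inspected components whose defect density
# exceeds the quality threshold (the dotted line in Figure 10.1).

def components_to_reinspect(components, max_density):
    """Return names of components whose defect density exceeds max_density.

    components: list of (name, defects_found, size_kloc) tuples.
    """
    flagged = []
    for name, defects, size_kloc in components:
        density = defects / size_kloc  # defects per KLOC
        if density > max_density:
            flagged.append(name)
    return flagged

inspected = [
    ("component-09", 2, 10.0),   # 0.2 defects/KLOC: acceptable
    ("component-10", 12, 8.0),   # 1.5 defects/KLOC: re-inspect
]
print(components_to_reinspect(inspected, max_density=0.5))  # ['component-10']
```

A component flagged this way would be re-inspected after the defects from the first inspection have been corrected.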
Measurement, when available, allows the software project manager [SEI 10a]: to better plan and objectively assess the project state as well as the tasks that have been assigned to a supplier; to track the actual project performance against approved plans and objectives; to quickly identify process and product problems in order to act on them; and to collect baseline data useful for benchmarking future projects. We will also see that measures can be used to better estimate project schedules, and to evaluate requests for proposals, suppliers' answers during the selection process, and suppliers' and competitors' offers and proposed schedules.
A measurement program is also helpful with the improvement of the quality of software acquisition, development, maintenance, and infrastructure processes. The program must use a measurement repository where data are collected and analyzed, and measurements are reported and available to all stakeholders within the organization. This measurement repository should be designed to answer questions that support decision making and performance indicators. It should also allow for coherent measurement of the software processes, improving quality and enabling efficient defect removal.
Measurement has been at the origin of all science and is partly responsible for all scientific advancements. Measurement contributes to the maturing of a concept by quantifying it. Using measurement allows software processes to move from an artisanal state to a controlled and repeatable state. Software engineers should design and use sound measures to improve the maturity of the software processes.
Measuring allows us to understand the past, better inform current activities, and try to predict the quality of developed products. This capability will benefit projects whose performance has historically been imprecise, with unclear schedules, budget overruns, and final products that include defects. In the past, measurement was considered as overhead to the project. Software development managers use measurement in their processes and set objectives. Measurement results are then used to take short term and active decisions during delivery and operation of systems. This helps identify and solve business-IT alignment issues or even with assigning work dynamically. For example, Google's rule of thumb is that a site reliability engineer must spend 50% of their time on software development/maintenance. To enforce this threshold, they measure where time is spent and use this measure to ensure that teams consistently follow the proposed ratio.
Figure 10.2 describes the software engineering management knowledge area of the SWEBOK. On the right hand side of the figure, four software engineering measurement sub-topics are presented: (1) how to establish and sustain measurement commitment; (2) how to plan the measurement process; (3) how to perform the measurement process; and (4) how to evaluate measurement.
The concepts of cost of quality and software business models were presented in a previous chapter. In terms of cost of quality, measurement is considered a preventive cost in the sense that a large part of measurement investment focuses on error prevention in all the stages of the software life cycle processes; for example, the cost of collecting, analyzing, and sharing these data. Table 10.1 presents different cost items with respect to preventive costs.
Table 10.1 Preventive Costs
| Major category | Sub-category | Definition | Typical cost item |
| --- | --- | --- | --- |
| Preventive cost | Establish quality fundamentals | Efforts to define quality measures, establish objectives, standards and thresholds, and analysis required on data. | Definition of success criteria for acceptance testing and quality standards/guidance. |
| Preventive cost | Interventions toward projects and processes | Efforts to prevent bad quality or improve process quality. | Training, process improvement, measurement collection, and analysis. |
Source: Adapted from Krasner (1998) [KRA 98].
Measures are often used in the following software business models: custom systems written on contract and mass-market software. In these business models, policies, processes, and procedures are often used and followed closely to control the development progress and minimize risks and the impact of defects.
In this chapter, the first topic described in detail is the measurement processes as described in ISO 12207 and ISO 9001. To illustrate how to implement these recommendations, we then present the Practical Software and Systems Measurement (PSM) method, which was initially developed to guide American defense software projects and later became an influential component that led to the emergence of the ISO/IEC/IEEE 15939 standard on software measurement [ISO 17c]. The ISO 15939 standard is also summarized to provide an overview of the software measurement process. After this introduction to the topic, the CMMI point of view is then presented. Next, we discuss how the survey can be an efficient measurement tool. It is another illustration of a simple measurement process. Then, the use of measurement in very small entities is presented. Finally, as with all the other chapters of the book, the last section describes the measurement requirements of IEEE 730 that should be included in the software quality assurance (SQA) plan (SQAP) of a project. We conclude with a review of how to successfully implement a software measurement program in your organization as well as suggestions on how to avoid pitfalls.
Measurement is one of the many processes described in the ISO 12207 standard. Its purpose is to collect, analyze, and report objective data and information to support effective software management and demonstrate the quality of the products, services, and processes [ISO 17].
As a result of the successful implementation of the measurement process [ISO 17]:
- information needs are identified;
- an appropriate set of measures, based on the information needs, is identified or developed;
- required data is collected, verified, and stored;
- the data is analyzed and the results interpreted;
- information items provide objective information that supports decisions.
The project shall implement the following activities and tasks in accordance with applicable organizational policies and procedures with respect to the measurement process [ISO 17]:
- – Prepare for measurement:
- define the measurement strategy;
- describe the characteristics of the organization that are relevant to measurement, such as business and technical objectives;
- identify and prioritize the information needs;
- select and specify measures that satisfy the information needs;
- define data collection, analysis, access, and reporting procedures;
- define criteria for evaluating the information items and the measurement process;
- identify and plan for the necessary enabling systems or services to be used.
- – Perform measurement:
- integrate manual or automated procedures for data generation, collection, analysis, and reporting into the relevant processes;
- collect, store, and verify data;
- analyze data and develop information items;
- record results and inform the measurement users.
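The "perform measurement" tasks above can be sketched as a small pipeline. This is an illustrative sketch only; the record fields and the verification rule (non-negative effort and defect counts) are assumptions, not part of the standard:

```python
# Illustrative sketch of the "perform measurement" tasks:
# collect, verify, store, analyze, and report information items.

def verify(record):
    """Reject obviously invalid data before it enters the repository."""
    return record["effort_hours"] >= 0 and record["defects"] >= 0

def perform_measurement(raw_records):
    # Collect, verify, and store: only valid records enter the repository.
    repository = [r for r in raw_records if verify(r)]
    total_effort = sum(r["effort_hours"] for r in repository)
    total_defects = sum(r["defects"] for r in repository)
    # Analyze: derive an information item for the measurement users.
    return {
        "records_stored": len(repository),
        "defects_per_100h": 100 * total_defects / total_effort if total_effort else None,
    }

raw = [
    {"effort_hours": 120, "defects": 3},
    {"effort_hours": -5, "defects": 1},   # invalid: rejected by verification
    {"effort_hours": 80, "defects": 1},
]
print(perform_measurement(raw))  # {'records_stored': 2, 'defects_per_100h': 2.0}
```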
To obtain an overview of the measurement process, ISO 12207 refers the reader to the ISO 15939 standard that will be presented in a later section.
ISO 9001 highlights the fact that a quality system needs a measurement component to be efficient. In addition to the typical process components, Figure 10.3 describes where performance measurement applies.
Clause 7.1.5 of ISO 9001 entitled “Monitoring and measuring resources” describes some measurement obligations [ISO 15]: “The organization shall determine and provide the resources needed to ensure valid and reliable results when monitoring or measuring is used to verify the conformity of products and services to requirements.”
Clause 9.1, entitled “Monitoring, measurement, analysis and evaluation,” states that [ISO 15]: “The organization shall determine:
- – what needs to be monitored and measured;
- – the methods for monitoring, measurement, analysis, and evaluation needed to ensure valid results;
- – when the monitoring and measuring shall be performed;
- – when the results from monitoring and measurement shall be analyzed and evaluated.”
Lastly, clause 10.3 of the ISO 9001 standard entitled “Continual improvement” describes another necessity of the use of measurement [ISO 15]: “The organization shall continually improve the suitability, adequacy and effectiveness of the quality management system.”
The PSM method was developed for the American defense industry [JON 03]. It served as a major input to the ISO 15939 standard on systems and software engineering measurement. Given that standards do not usually explain how things should be done, the PSM is useful for its practical examples.
The objective of the PSM is to provide measurement guidelines to software project managers. In it, directives, examples, lessons learned and case studies are presented. It provides a measurement framework that is ready to be used by software project managers. It also explains how to define and design a software measurement program to support the information needs of customers when they acquire software and systems from external providers.
The PSM covers three perspectives: (1) the project manager so as to provide a good understanding of the measures and how to use them to manage their project; (2) the technical staff that conducts measurement during planning and execution phases; and (3) the management team so that they can understand the measurement requirements associated with software.
The nine principles of the PSM are (adapted from [PSM 00]):
- Use issues and objectives to drive the measurement requirements.
- Define and collect measures based on the technical and management processes.
- Collect and analyze data at a level of detail sufficient to identify and isolate problems.
- Implement an independent analysis capability.
- Use a systematic analysis process to trace the measures to the decisions.
- Interpret the measurement results in the context of other project information.
- Integrate measurement into the project management process throughout the life cycle.
- Use the measurement process as a basis for objective communications.
- Focus initially on project-level analysis.
As shown in Figure 10.4, quantitative project management includes the following specialties: risk management, measurement, and financial performance management. The PSM concentrates primarily on the measurement process but also includes the interface to other specialties like risk management and financial performance management.
It has been observed that measurement is not as effective if it is used as an independent and isolated process. Measurement can be effective in describing overall project challenges and also point to issues in interrelated systems. Measurement will be more effective when included in all aspects of project management. For example, it will be more effective when integrated with risk management and financial performance management [PSM 00]. Another chapter of this book discusses risk management in detail.
The PSM is composed of the following parts [PSM 00]:
- – Part 1, The Measurement Process, describes the measurement process at a summary level and provides an overview of measurement tailoring, application, implementation, and evaluation. Part 1 explains what is required to implement the measurement process for a project.
- – Part 2, Tailor Measures, describes how to identify project issues, select appropriate measures, and define a project measurement plan.
- – Part 3, Measurement Selection and Specification Tables, provides a series of tables that help the user select the measures that best address the project's issues. These tables support the detailed tailoring guidance of Part 2.
- – Part 4, Apply Measures, describes how to collect and process data, analyze the measurement results, and use the information to make informed project decisions.
- – Part 5, Measurement Analysis and Indicator Examples, provides examples of measurement indicators and associated interpretations.
- – Part 6, Implement Process, describes the tasks necessary to establish the measurement process within an organization.
- – Part 7, Evaluate Measurement, identifies assessment and improvement tasks for the measurement program as a whole.
- – Part 8, Measurement Case Studies, provides three different case studies that illustrate many of the key points made throughout the Guide. The case studies address the implementation of a measurement process on a DoD weapons system, an information system, and a government system in the operations and maintenance life cycle phase.
- – Part 9, Supplemental Information, contains a glossary, list of acronyms, bibliography, project description, comment form, and an index.
- – Part 10, Department of Defense Implementation Guide, provides information specific to implementing the PSM guidance on Department of Defense programs. This addendum addresses implementation issues of particular concern to DoD acquisition organizations.
The PSM approach to software measurement, as illustrated in Figure 10.5, addresses the following four key measurement activities (adapted from [PSM 00]):
Tailor measures: the objective of this activity is to define the set of software and system measures that will provide the best understanding of challenges in the project at the lowest cost. A measurement plan documents the result of this first activity.
Apply measures: during this activity, measures are analyzed in order to provide the feedback necessary for effective decision making. Information on risks and financial performance can also be taken into account for decision making.
Implement process: this activity consists of three tasks:
- – obtain organizational support: including the right to measure at all organizational levels;
- – define responsibilities concerning measurement;
- – provide resources, purchase required tools, and recruit personnel for this process.
Evaluate measurement: this activity includes four tasks:
- – evaluate measures and indicators as well as their results;
- – evaluate the measurement process according to three perspectives: (1) the quantitative performance evaluation of the measurement process; (2) a conformity assessment of the process executed versus the one that was planned; and (3) the measurement capability as compared to a standard recommendation;
- – update the experience base with lessons learned;
- – identify and implement improvements.
The key roles and responsibilities associated with measuring are (adapted from [PSM 00]):
- – executive manager: typically a manager responsible for more than one project. This manager defines the expected high-level performance and the business objectives, ensures that individual projects align with the general measurement policy, and uses the measurement outputs to make decisions;
- – project or technical manager: this individual or group identifies the project challenges, reviews the measurement analysis, and acts on the information. In the case of the acquisition of complex software, the customer and the external provider will each have a dedicated project manager that will use this information to make joint decisions;
- – measurement analyst: this role can be assigned to a person or a group. The responsibilities include the design of the measurement plan, data collection and analysis, as well as the presentation of results to all the stakeholders. Typically, in large and complex software acquisition projects, both the external provider and the customer have a measurement analyst assigned to the project;
- – project team: is the team responsible for the acquisition, development, and maintenance/operation of the software and systems. This team can include government or industry organizations as part of an Integrated Product Team (IPT). The project team collects measurement data periodically and uses it to orient engineering decisions.
The PSM defines seven categories of information that should be produced for software projects (adapted from [MCG 02]):
- schedule: this measurement category aims at tracking the progress of the project at each step and milestone. A project that experiences delays will have a hard time meeting its delivery objectives. The project manager may have to make decisions such as reducing the functionality to be delivered or sacrificing its quality;
- resources and costs: this measurement category evaluates the balance between the work to be done and the availability of human resources to do this work. A project that overruns its personnel budget will have difficulty completing the work unless some functionality is dropped or the quality reduced;
- software size and stability: this measurement category addresses the stability of the progress made with respect to the delivery of functional and non-functional requirements. It uses the delivered and tested functional size to assess the delivery trend. The stability measurement considers functional change rates. Scope creep is characterized by a growing number of change requests being submitted. This situation will likely extend the schedule and increase the human resource costs;
- product quality: another dimension of a software project that needs to be controlled is the product quality. This measurement category considers the current state of the defect removal trend for both functional and non-functional requirements. When a defective product is delivered to the customer acceptance testing step, it generates a large number of defect reports. Forcing delivery in this condition will drastically impact maintenance efforts;
- process performance: this measurement category assesses the ability of the external providers to meet both the contract clauses and the requirements identified in the attachments of the contract. Weak process control or weak productivity at an external provider is an early sign of possible delivery problems;
- effectiveness of technology used: this measurement category measures the effectiveness of the technology chosen by the project to address the requirements. Relatively technical measures assess software engineering techniques, such as reuse, development methods and frameworks, and software architecture concerns. It aims at discovering the use of risky technologies or technologies that have not been mastered;
- user satisfaction: this last category assesses how the customers feel about the progress of the project and how it meets their requirements.
The next section presents the ISO/IEC/IEEE 15939 standard, which is the current international standard for software process measurement.
This section presents four key software measurement activities of ISO 15939 as well as examples of measures. ISO 15939 defines a software measurement process that applies to both the systems and software engineering disciplines for software suppliers and acquirers.
The ISO 15939 standard is aligned with the measurement requirements of ISO 9001. It elaborates the measurement process for software projects as described in ISO 15288 and ISO 12207.
In this standard, the measurement process is represented by a model that describes the activities and tasks to specify, implement, and interpret results. It does not describe how to perform these tasks nor give examples of measures. Its purpose is to describe the activities and tasks that are necessary to successfully identify, define, select, apply, and improve measurement within an overall project or organizational measurement structure. It also provides definitions for measurement terms commonly used within the system and software disciplines [ISO 17c].
As a result of the successful implementation of the measurement process, you can expect the following outcomes [ISO 17c]:
- information needs are identified;
- an appropriate set of measures, based on the information needs, is identified or developed;
- required data is collected, verified, and stored;
- the data is analyzed and the results interpreted;
- information items provide objective information that supports decisions;
- organizational commitment for measurement is sustained;
- identified measurement activities are planned;
- the measurement process and measures are evaluated;
- improvements are communicated to the measurement process owner.
Note that the first five outcomes presented above are the same as those described in ISO 15288 and ISO 12207.
A software measurement process should detail its activities and tasks to achieve its goal. Figure 10.6 presents the reference process model. It contains four activities, where each has a certain number of tasks [ISO 17c]:
- Establish and sustain measurement commitment;
- Prepare for measurement;
- Perform measurement;
- Evaluate measurement.
The model includes a feedback loop to the information technology life cycle processes and assumes that the organization has formalized them (i.e., technical and management processes). The activities are represented by an iterative cycle allowing for continuous feedback and improvement. This is an adaptation of the Plan-Do-Check-Act model widely used in process improvement.
The measurement repository described in Figure 10.6 collects data during a project iteration and stores historical data for all projects and software engineering processes.
The typical functional roles that are mentioned in the ISO 15939 standard are: stakeholder, sponsor, measurement user, measurement analyst, data provider, and measurement process owner.
The measurement process is launched by the measurement requirements, also known as the technical and management information needs of the organization. The activities and tasks are described in Figure 10.7.
Annex A of the ISO 15939 standard is only informative. It presents the model that links the information needs to the measures. This model shows what the measurement planner has to design during the planning, execution and evaluation stages. Three types of measures are presented: base measures, derived measures, and indicators. In this section, the measurement model is explained and followed by an example of its use.
Figure 10.8 presents the model and its components. Components are explained from the top to the bottom of the figure [ISO 17c]:
- – An entity is an object (e.g., a process, product, project, or resource) that is to be characterized by measuring its attributes. Typical engineering objects can be classified as products (e.g., design document, network, source code, and test case), processes (e.g., design process, testing process, and requirements analysis process), projects, and resources (e.g., the systems engineers, the software engineers, the programmers, and the testers). An entity may have one or more properties that are of interest to meet the information needs. In practice, an entity can be classified into more than one of the above categories.
- – An attribute is a property or characteristic of an entity that can be distinguished quantitatively or qualitatively by human or automated means. An entity may have many attributes, only some of which may be of interest for measurement. The first step in defining a specific instantiation of the measurement information model is to select the attributes that are most relevant to the measurement user's information needs. A given attribute may be incorporated in multiple measurement constructs supporting different information needs.
- – A measure is defined in terms of an attribute and the method for quantifying it. A measure is a variable to which a value is assigned. A base measure is functionally independent of other measures. A base measure captures information about a single attribute. Data collection involves assigning values to base measures. Specifying the expected range or type of values of a base measure helps to verify the quality of the data collected.
- – A measurement method is a logical sequence of operations, described generically, used in quantifying an attribute with respect to a specified scale. The operations may involve activities such as counting occurrences or observing the passage of time. The same measurement method may be applied to multiple attributes. However, each unique combination of an attribute and a method produces a different base measure. Some measurement methods may be implemented in multiple ways. A measurement procedure describes the specific implementation of a measurement method within a given organizational context:
- The type of measurement method depends on the nature of the operations used to quantify an attribute. Two types of method may be distinguished:
- Subjective: quantification involving human judgment.
- Objective: quantification based on numerical rules such as counting. These rules may be implemented by human or automated means.
- – A derived measure is a measure that is defined as a function of two or more values of base measures. Derived measures capture information about more than one attribute or the same attribute from multiple entities. Simple transformations of base measures (for example, taking the square root of a base measure) do not add information, thus do not produce derived measures. Normalization of data often involves converting base measures into derived measures that can be used to compare different entities.
- – A function is an algorithm or calculation performed to combine two or more base measures. The scale and unit of the derived measure depend on the scales and units of the base measures from which it is composed as well as how they are combined by the function.
- – An indicator is a measure that provides an estimate or evaluation of specified attributes derived from a model with respect to defined information needs. Indicators are the basis for analysis and decision making. These are what should be presented to measurement users. Measurement is always based on imperfect information, so quantifying the uncertainty, accuracy, or importance of indicators is an essential component of presenting the actual indicator value.
- – An information product is one or more indicators and their associated interpretations that address an information need; for example, a comparison of a measured defect rate to planned defect rate along with an assessment of whether or not the difference indicates a problem.
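The chain from base measures through a function to a derived measure and an indicator can be sketched concretely. All names and numbers below are illustrative assumptions; they simply instantiate the model components described above:

```python
# Minimal sketch of the ISO 15939 measurement information model:
# base measures feed a function that yields a derived measure, which an
# indicator interprets against decision criteria.

# Base measures: each captures a single attribute of an entity,
# quantified by a measurement method (here, counting).
requirements_implemented = 40          # counted in the requirements tool
effort_person_hours = 800              # summed from timesheets

# Function: combines two base measures into a derived measure.
def productivity(reqs, hours):
    return reqs / hours                # requirements per person-hour

derived = productivity(requirements_implemented, effort_person_hours)

# Indicator: interprets the derived measure against decision criteria
# to address the information need (is productivity on plan?).
def productivity_indicator(value, planned=0.05, tolerance=0.01):
    if value < planned - tolerance:
        return "below plan: investigate"
    if value > planned + tolerance:
        return "above plan: re-baseline estimate"
    return "within plan"

print(derived)                          # 0.05
print(productivity_indicator(derived))  # within plan
```

The indicator and its interpretation together form the information product that is presented to the measurement users.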
Figure 10.9 presents an example of a productivity measure originating from annex A of ISO 15939. The decision maker in this example needs to select a specific productivity level as the basis for project planning. The measurable concept is that productivity is related to effort expended and number of requirements implemented. Thus, effort and requirements are the measurable entities of concern.
This example assumes that the productivity can be estimated based on past performance. Thus, data for the base measures (numbered entries in the following table) need to be collected and the derived measure computed for each project in the data store.
The decision criteria are shown at the bottom of Figure 10.9. They are numeric boundaries or objectives used to assess the need for an action or for additional attention to productivity. Decision criteria help in the interpretation of a measure. They can be calculated or be based on the conceptual understanding of what is expected. Regardless of how the productivity number is arrived at, the uncertainty inherent in engineering means that there is a considerable probability that the estimated productivity will not be realized exactly. Estimating productivity based on historical data enables the computation of confidence limits that help to assess how close actual results are likely to come to the estimated value [ISO 17c].
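Estimating productivity from historical data and deriving a confidence band, as described above, can be sketched as follows. The historical values and the two-standard-deviation band (which assumes roughly normal variation) are illustrative assumptions, not values from Figure 10.9:

```python
# Hedged sketch: estimate productivity from past projects and compute a
# rough confidence band to serve as a decision criterion.
import statistics

# Derived measure (requirements per person-hour) for past projects.
historical_productivity = [0.045, 0.052, 0.048, 0.055, 0.050]

mean = statistics.mean(historical_productivity)
stdev = statistics.stdev(historical_productivity)

# Approximate 95% band, assuming roughly normal variation.
lower, upper = mean - 2 * stdev, mean + 2 * stdev
print(f"estimate: {mean:.3f}, band: [{lower:.3f}, {upper:.3f}]")

def within_expectation(actual):
    """Decision criterion: does actual productivity fall in the band?"""
    return lower <= actual <= upper
```

An actual productivity falling outside the band would trigger the need for additional attention, in line with the decision criteria of the figure.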
Informative annex A of ISO 15939 also describes a "software artifact" quality measure, followed by an example of project advancement measures. Informative annex B of the standard shows the mapping between work products and the measurement activities that created them. The measurement plan is the result of the execution of the planned activities and tasks. Informative annex F lists typical information found in a measurement plan. Figure 10.10 presents examples.
Informative annex C of the standard describes criteria for selecting a measure, whereas informative annex D presents evaluation criteria for information products. Informative annex E presents evaluation criteria for the measurement process and informative annex G describes criteria for reporting information elements.
This section will explain some of the measurement practices proposed by the staged representation of the CMMI®-DEV model. The practices appear in many process areas of the model: in the generic goals (GG), in the generic practices (GP), in the specific goals (SG), and in the specific practices (SP) of specific process areas like “project planning,” “project monitoring and control,” “organizational process definition,” and “quantitative project management.”
Measures collected by maturity level 1 organizations are often of poor reliability because at that maturity level, their processes are often chaotic and not documented. At maturity level 2, also referred to as “managed,” organizations have processes that are planned and executed. Therefore, at that level, it is possible to measure processes and software products. We recall here that one of the generic practices of maturity level 2 is “GP 2.8 Monitor and Control the Process,” and it refers to some process attributes such as the percentage of projects that use progress and performance measures and the number of outstanding open and closed corrective actions [SEI 10a].
Concerning software products developed by suppliers, the CMMI-DEV recommends that the acquirer needs to closely follow the project quality, schedule, and costs. Measurement and data analysis are key activities of project monitoring.
The ISO 15939 standard was used by the CMMI-DEV “measurement and analysis” process area. This allows both the systems engineering and software engineering communities to share the same measurement recommendations. The next text box describes the level 2 objectives and specific practices of the measurement and analysis process area.
This process area is used by many other process areas of the model. For example, for measuring project performance, the project monitoring and control process area should be consulted; for controlling software products, refer to the configuration management process area; for requirements traceability, the requirements management process area contains measurement guidelines; for organizational measurement, refer to the organizational process definition process area. To learn more about the appropriate use of statistical methods, the quantitative project management process area of CMMI-DEV provides more guidance.
Worldwide, there are a large number of Very Small Entities (VSEs) that develop and maintain software. These are organizations, companies, departments, and projects involving up to 25 people. ISO 29110 was introduced in an earlier chapter. It proposes a four-stage roadmap referred to as profiles. The profiles apply, respectively, to VSEs in start-up mode and small projects of limited duration (about six person-months); VSEs that create a single product with a single team; VSEs that have more than one project and more than one team; and VSEs that want to improve the management of their business and their competitiveness.
The activities of the ISO 29110 project management process that are related to measurement are: project planning activity, where size, effort, calendar, and resources are estimated and used in the preparation of the project plan, as well as in the project assessment and control activity, where progress is evaluated against the project plan.
The tasks of the software implementation process of ISO 29110 related to measurement are mainly those related to defects identified and corrected during reviews and testing.
Surveys are used by organizations to obtain an overview of complex questions, aid problem resolution, and support decision making. Surveys are tools that allow information to be collected quickly and anonymously. They can be administered during meetings or, more often, through an Internet survey tool that sends questionnaires or invites participants to answer a survey by clicking on a web link.
Concerning SQA, surveys can be used to obtain service satisfaction information from individuals and groups such as developers, project managers, testers, configuration management staff, and sometimes suppliers. For example, a few weeks after the deployment of a new measurement program, surveys can be prepared by SQA to assess the level of satisfaction of customers of the organization concerning its products and services.
In this section, two case studies are presented: one survey conducted by the SEI concerning measurement and one survey conducted by the ISO working group for VSEs.
What is a survey? According to Kasunic, of the SEI [KAS 05], a survey is a data collection and analysis method in which solicited individuals answer questions or comment on statements that were prepared in advance.
The SEI developed a survey process with seven steps [KAS 05]:
- identify the research objectives;
- identify and characterize the target audience;
- design the sampling plan;
- design and write the questionnaire;
- pilot test the questionnaire;
- distribute the questionnaire;
- analyze results and write a report.
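The final step, analyzing results, can be sketched with a minimal tally of Likert-scale answers. Everything here is invented for illustration: the responses, the 5-point scale, and the "4 or 5 means satisfied" rule are assumptions, not part of the SEI process.

```python
from collections import Counter

# Hypothetical 5-point Likert answers (1 = very dissatisfied,
# 5 = very satisfied) to one question of a satisfaction survey.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

counts = Counter(responses)
n = len(responses)
average = sum(responses) / n
# Share of respondents who answered 4 or 5 ("satisfied" or better).
satisfied = sum(v for k, v in counts.items() if k >= 4) / n

print(f"n={n}, average={average:.2f}, satisfied={satisfied:.0%}")
```

Reporting both the average and the distribution avoids hiding a polarized set of answers behind a bland mean.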
According to Kasunic, a good survey has to be systematic, impartial, representative, quantitative, and repeatable. Although surveys show good results compared with other data collection techniques, they have limitations [KAS 05]:
- – To generalize for a population, a survey must follow strict procedures in defining which participants are studied and how they are selected.
- – Following the rules and implementing the survey with the rigor that is necessary can be expensive with respect to cost and time.
- – Survey data are usually superficial. It is not typically possible to go into any detail—that is, we are not capable of digging deeply into people's psyches looking for fundamental explanations of their unique understandings or behaviors.
- – Surveys can be obtrusive. People are fully aware that they are the subjects of a study. They often respond differently than they might if they were unaware of the researcher's interest in them.
The SEI conducted this survey to understand the state of the practice of software measurement. The following text box describes the results.
At first glance, we might think that measurement is a quick and easy thing to do. In reality, there are many obstacles to successfully implementing a measurement program:
- – we do not know why we are collecting measures;
- – we intend to collect too many measures;
- – some measures are not collected in the same manner across projects;
- – there are no adequate tools to easily collect and analyze measurements;
- – measures are deployed without having been tested in pilot projects;
- – measurement adds to the current workload;
- – we think measures will not be used;
- – we believe measures will be used to assess our individual performance;
- – there is no commitment to measurement from the organization;
- – there is little support for the measurement program.
To overcome these potential obstacles, a seven-step implementation approach has been created, tested, and is recommended [DES 95]:
- demonstrate the value and potential of the measurement program to upper management to gain their support;
- implicate the delivery personnel early in the design of the program;
- identify the key processes to be improved where most benefits would arise;
- identify the measurement goals and objectives for these key processes;
- design and publish the measurement program for comments;
- identify the tools/processes to be used for measurement and test them;
- launch a first pilot and then extend gradually.
Senior managers do not readily see the relevance of initiating measurement programs in software engineering since they perceive them to be expensive and bureaucratic. They also mention the significant delay before the expected results are obtained, and the impact of these measurements, which is often limited to a few sub-groups within the software engineering department. Furthermore, they often get contradictory advice from experts on the strategies for initiating a measurement program.
To address the issue of the relevance of measurement with respect to management concerns, the benefits and alignment to organizational strategy must be identified for the measurement program. This first step consists of finding the necessary information that will help the managers make a decision on the relevance of implementing a measurement program within the organization. Demonstration of the benefits of a software engineering measurement program is challenging because many results are not tangible and are realized over a long period of time.
It appears that there is almost always major reluctance from staff to accept a measurement program. Project managers do not usually like control and productivity measures. On the one hand, nobody likes to be “measured.” On the other hand, when measurement programs are implemented, they are often labor-intensive data collection processes. To address these issues, we must offer useful tools to automate the data collection process. We must at the same time, find ways to help project managers control the data collection process and develop analytical skills to extract information from the data and measures available.
This step consists in finding the necessary arguments that will lead the staff involved in the data collection process to accept and support the measurement program.
This step consists in evaluating the maturity level of the software development organization. The CMMI model and assessment results provide much more information than the well-publicized single-digit maturity level. Based on this assessment of the organizational maturity level, multiple key candidates for process improvements are provided. Furthermore, the CMMI model helps with the selection of the priorities to be given to the key processes targeted for improvement programs.
The purpose of this step is to determine the goals and objectives of the measurement program. For each CMMI process area, there is one or more goals. A goal describes the purpose of what should be achieved (e.g., improve the estimation of a development project). An objective is the wording of a goal to be reached (with or without specifying the achievement conditions) through measurable behavior over time.
Organizational goals must also correspond to the ability to achieve them. The selection of key processes considering the organization's maturity level is not enough. Processes already in place are also important. From this perspective, an organization should not have too many goals, and they must be prioritized. An organization cannot achieve all of its goals within the first year if this organization is just embarking on a measurement program.
This step consists in designing a measurement program that will allow management not only to see if the objectives have been reached, but also to understand why if they have not been reached. Figure 10.11 suggests the components of a measurement program: tools, standards, definitions, and a choice of measures. The implementation of this design will vary from one organization to another.
This step consists in modeling all the measures to be collected to meet the objectives. These measures must specify measurement units and, whenever feasible, be based on standards.
This step must also define the validation process and the control reports.
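One way to model a measure, its unit, and its validation rule is a small catalog record. This is purely illustrative: the field names and the example measure are our own, not prescribed by any standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MeasureDefinition:
    """Illustrative catalog entry for one measure in the program."""
    name: str
    unit: str            # measurement unit, ideally taken from a standard
    frequency: str       # collection frequency, e.g. "weekly", "per build"
    validate: Callable[[float], bool]  # validation rule applied at collection

effort = MeasureDefinition(
    name="project effort",
    unit="staff-hour",
    frequency="weekly",
    validate=lambda v: v >= 0,  # negative effort is a data-entry error
)

assert effort.validate(37.5)
assert not effort.validate(-2)
```

Attaching the validation rule to the definition itself makes the control reports of this step easier to automate.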
This step consists in deploying a measurement program through:
- – selection of a pilot site;
- – personnel training;
- – assigning responsibilities and tasks;
- – setting up the measurement group.
The responsibilities for the measurement objectives are distributed at different levels for different types of staff: senior management, measurement program manager, experts, and developers. This is illustrated in Table 10.2.
Table 10.2 Responsibility Matrix [DES 95]
| Level/personnel | Upper management | Measurement program manager | Experts | Developers |
|---|---|---|---|---|
| Strategic | Define strategic objectives | Supply information about the measurement program; ensure objectives are consistent | Ensure the consistency of resources | |
| Tactical | Endorse and promote the measurement program; approve tactical objectives | Track/validate the coherence of objectives | Assist the leader; using the objectives, define the entities, attributes, and measures; assist delivery personnel; create reports and have them approved; define and document resources | Participate in the definition of tactical objectives |
| Operational | Supply needed resources; approve operational objectives | Manage the measurement program; implement the program and tools; provide status/recommend adjustments to the measurement program | Follow up and implement tools; conduct statistical analysis; design the measurement repository | Participate in the definition of operational objectives; participate in data collection |
The success of creating and sustaining a software measurement program relies on constant support from higher management. The visible presence of a leader, often a member of senior management, is essential to motivate other staff to contribute to measurement.
There is probably a link between process maturity and the success of such a program. More mature organizations support process improvement in a structured way by clarifying goals and objectives; when this commitment is communicated, the chances of success are much better.
Published studies show that when measurement programs are not supported by leadership, they do not last very long.
The next text box presents common errors of measurement.
This section presents base measures recommended by the SEI. In every software project, management usually wants the same type of information (adapted from Carleton et al. (1992) [CAR 92]):
- – what is the size of the product to be developed?
- – do we have enough qualified/available personnel for the task ahead?
- – can we meet the schedule?
- – what level of quality is expected of this product?
- – where are we according to plans?
- – how are we doing on costs?
The base measures proposed by the SEI are: a size measure, an effort measure, a calendar measure, and a quality measure. For size, the SEI recommends the number of lines of source code for the following reasons (adapted from [PAR 92]):
- – it is easy, simply count the end of line markers;
- – counting methods do not greatly depend on the programming language;
- – it is easy to automate the counting of physical lines of code;
- – the majority of the data used to create cost estimation models like COCOMO [BOE 00] used lines of code.
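A line-counting tool following the reasons above can be sketched as follows. This is a simplified illustration: the rule of skipping blanks and full-line comments is an assumption; a real tool must follow the counting specification adopted by the organization.

```python
def count_physical_loc(source: str, comment_prefix: str = "#") -> int:
    """Count non-blank, non-comment physical lines (end-of-line markers)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """# a comment
x = 1

y = x + 1  # a trailing comment does not disqualify the line
"""
print(count_physical_loc(sample))  # → 2
```

Note how much of the result depends on the specification (here, full-line comments and blanks are excluded) rather than on the counting code itself.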
With regards to the effort measure, the SEI recommends using the number of staff-hours. The organization should track both normal and overtime hours, whether they are paid or not. Ideally, hours dedicated to important activities like requirements, design, and testing should be tracked separately.
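Aggregating staff-hours by activity, with overtime tracked separately, can be sketched as below. The timesheet entries and activity names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical timesheet entries: (activity, staff-hours, overtime?).
entries = [
    ("requirements", 12.0, False),
    ("design", 20.0, False),
    ("testing", 8.0, True),
    ("design", 4.0, True),
]

by_activity = defaultdict(float)
overtime_total = 0.0
for activity, hours, overtime in entries:
    by_activity[activity] += hours       # hours per activity, paid or not
    if overtime:
        overtime_total += hours          # overtime tracked separately

print(dict(by_activity))
print(f"overtime: {overtime_total} staff-hours")
```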
The SEI also recommends adopting structured methods to define two important aspects with respect to a measure for schedules: the dates and the exit criteria. It is recommended to compare dates (planned and actual) associated with project milestones, reviews, and audits.
Examples of calendar measures or schedule completion criteria are: milestones, end of phase reviews, audits, and deliverables approved by the client.
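Comparing planned and actual milestone dates reduces to simple date arithmetic. The milestones and dates below are invented for illustration.

```python
from datetime import date

# Hypothetical milestones: (planned date, actual completion date).
milestones = {
    "requirements review": (date(2023, 3, 1), date(2023, 3, 8)),
    "design audit":        (date(2023, 5, 15), date(2023, 5, 15)),
}

for name, (planned, actual) in milestones.items():
    slip = (actual - planned).days
    status = "on time" if slip <= 0 else f"{slip} days late"
    print(f"{name}: {status}")
```

The exit criteria matter as much as the dates: a milestone should count as "actual" only when its completion criterion (e.g., client approval of a deliverable) is met.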
With regards to quality, the SEI recommends measuring problems and defects. It is recommended that the problems and software defects be used to help determine when products will be ready for customer delivery and to provide data for the improvement of both processes and products.
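Using defect data to judge delivery readiness can be sketched as a trend of open defects. The weekly counts and the readiness criterion here are invented; each project must define its own criterion.

```python
# Hypothetical weekly counts of newly reported and closed defects.
reported = [25, 18, 12, 7, 3]
closed = [10, 15, 14, 11, 9]

open_defects = 0
trend = []
for new, fixed in zip(reported, closed):
    open_defects += new - fixed
    trend.append(open_defects)

# An illustrative readiness criterion: few defects remain open and the
# arrival rate has clearly declined since the start of testing.
ready = trend[-1] <= 10 and reported[-1] < reported[0] / 2
print("open defects per week:", trend)
print("ready for delivery:", ready)
```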
In the 1990s, Professor Austin of the Harvard Business School [AUS 96] warned organizations about the unexpected side effects of measurement. Austin showed how a measurement program may cause dysfunctional behavior and could even affect organizational performance. Most of the literature on software measurement focuses on its technical aspects and ignores the cultural or human side of it [MCQ 04]. For example, the following text box describes how human behavior can be modified when observed and measured.
We have seen that developing a measurement program is not easy. There are many drawbacks that can lead to its failure. The first pitfall is to use the measure to evaluate the performance of a developer instead of measuring the performance of the process and the tools they are using. For example, data from an inspection process should not be used to measure the productivity of the author of the document. A second pitfall is to develop an ambitious action plan which attempts to measure everything (as described in the following text box).
Another pitfall is to develop a measurement plan without involving developers. Often when the measurement plan is shared, a wave of resistance may occur and could result in its abandonment. You will also want to avoid developing measures that are not used for decision making. In such cases, without understanding the use of the proposed measures, developers will be less motivated to collect them in a precise manner. For example, an organization may invest a large amount of time to measure the size of its software without measuring other key aspects such as effort or quality. This size measurement, in and of itself, is not sufficient to make decisions. The following text box shows other common pitfalls to measurement.
The attitude of the people involved in measurement, that is to say the definition of measures, data collection, and analysis, is an important criterion to ensure that the program is useful to the organization. Measurement can affect the behavior of the individual observed. When something is measured, it is implied that it is important.
Every staff member wants to look good and therefore would like the measures to help him look good. When developing a measurement program, think of the behaviors we want to encourage and the behaviors that we do not want to encourage.
For example, if you measure software productivity in terms of lines of code per hour, developers will target that goal to meet the objective of productivity. They will perhaps focus on their own work to the detriment of the team and the project. Worse, they can find ways to program the same functionality with additional lines of code to impact the measure.
There is only one way to avoid this behavior and it is to focus measurement on processes and products instead. Here are some suggestions to promote the implementation of effective measures (adapted from [WES 05]):
- – once measures are collected, they should be used to make decisions. One sure way to undermine the measurement program is to accumulate them in a database and ignore them for decision making;
- – given that software development is an intellectual task, it is recommended to develop a set of measures to fully capture the complexity of the task. As a minimum, quality, productivity and project schedule should be measured;
- – to gain developer commitment to this program, they must have a sense of belonging (ownership). Participation in the definition, collection, and analysis of the measures will improve this sense of belonging. People working with a process on a daily basis have an intimate knowledge of this process. They can help suggest ways to better measure the process, ensure the accuracy of valid measurements and propose ways to interpret the results to maximize their utility;
- – provide regular feedback to the team about the data collected;
- – focus on the need to collect data (when team members see the data actually used, they are more likely to view the collection activity as important);
- – if team members are kept informed of how the measures are used, they are less likely to become suspicious of the measurement program;
- – benefit from the knowledge and experiences of team members by involving them in the analysis of data for process improvement efforts.
An anecdote about the invention of the inspection method by Fagan while he was working at IBM [BRO 02] was presented earlier in the chapter on reviews. In the following text box, we continue this story with a brief description of the difficulties he encountered.
Obtaining precise, complete, and analyzed measures adds a non-negligible cost to the IT budget. For example, a measurement program for software products can cost between 2% and 3% of a project's budget. According to Grady (1992) [GRA 92], organizations that use measurement obtain a competitive edge over their competitors who do not. Conversely, the cost of not having any measures can be seen in all of the software projects that fail to meet their budget, schedule, and quality objectives. Organizations with measures have the advantage of making sound decisions that will allow them to obtain greater customer satisfaction.
Measurement, described in the IEEE 730 standard, is helpful in demonstrating that the software processes can and do produce software products that conform to requirements. This confirmation includes evaluating intermediate and final software products along with methods, practices, and workmanship. Evaluation further includes measurement and analysis of a software process as well as product problems and related causes and provides recommendations about ways to correct current problems. IEEE 730 also explains that the measurement activities and tasks can supply objective data to improve an organization's life cycle management process. Similarly, evaluating software products for compliance can identify improvement opportunities.
This standard provides a number of questions that the project team should use during planning and execution in order to ensure its conformity to the measurement requirements. For example, [IEE 14]:
- – Are requirements specific, measurable, attainable, realistic, and testable?
- – Have the information needs required to measure the effectiveness of technical and management processes been identified?
- – Has an appropriate set of measures, driven by the information needs, been identified and developed?
- – Have appropriate measurement activities been identified and planned?
- – Is the review process of the project measured and effective?
- – Have all corrective actions that were implemented proven to be effective as determined by effectiveness measures?
- – Have the measurement process and specific measures been evaluated?
- – Have improvements been communicated to the measurement process owner?
Of the 16 SQA activities recommended by this standard, activity 5.4.6 describes the measurement of software products while activity 5.5.5 describes the recommended process measurement of a software project.
Effective SQA processes identify what activities to do, how to confirm the activities are performed, how to measure and track the processes, how to learn from measures to manage and improve the processes, and how to encourage using the processes to produce software products that conform to established requirements. SQA processes are continually improved based on objective measures and actual project results. During SQA planning, the project team will define specific measurements for assessing project software quality and the project performance against project and organization quality management objectives. The following activities are recommended [IEE 14]:
- Identify applicable process requirements that may affect the selection of a software life cycle process.
- Determine whether the defined software life cycle processes selected by the project team are appropriate, given the product risk.
- Review project plans and determine whether plans are appropriate to meet the contract based on the chosen software life cycle processes and relevant contractual obligations.
- Audit software development activities periodically to determine consistency with defined software life cycle processes.
- Audit the project team periodically to determine conformance to defined project plans.
- Perform Task 1 through Task 5 above for the subcontractors' software development life cycles.
Performing these activities should provide the following outcomes [IEE 14]:
- – Documented software life cycle processes and plans are evaluated for conformance to the established process requirements.
- – Project life cycle processes and plans conform to the established process requirements.
- – Non-conformances are raised when software life cycle processes and plans do not conform to the established process requirements.
- – Non-conformances are raised when software life cycle processes and plans are not adequate, efficient, or effective.
- – Non-conformances are raised when execution of project activities does not conform to software life cycle processes and plans.
- – Subcontractor software life cycle processes and plans conform to the process requirements passed down from the acquirer.
Measurement, from a product perspective, determines whether product measurements demonstrate the quality of the products and conform to standards and procedures established by the project. This is even more important when a supplier is involved. Prior to delivery, determine the degree of confidence the supplier has that the established requirements are satisfied and that the software products and related documentation will be acceptable to the acquirer. The project will then collect measurement data sufficient to support these satisfaction and acceptability decisions. A contract may demand that the acquirer, prior to delivery, determine whether software products are acceptable. The following activities are recommended [IEE 14]:
- Identify the standards and procedures established by the project or organization.
- Determine whether proposed product measurements are consistent with standards and procedures established by the project.
- Determine whether the proposed product measurements are representative of product quality attributes.
- Analyze product measurement results to identify gaps and recommend improvements to close gaps between measurements and expectations.
- Evaluate product measurement results to determine whether improvements implemented as a result of product quality measurements are effective.
- Analyze product measurement procedures to confirm they are sufficient to satisfy the measurement requirements defined in the project's processes and plans.
- Perform Task 1 through Task 6 above for software products developed by all subcontractors.
Performing these activities should provide the following outcomes [IEE 14]:
- – Software product measurements conform to the project's processes and plans, and conform to standards and procedures established by the project or organization.
- – Software product measurements accurately represent software product quality.
- – Software product measurements are shared with project stakeholders.
- – Software product measurements are performed on software products developed by the supplier as well as all of the supplier's subcontractors.
- – Software product measurements are presented to management for review and potential corrective and preventive action.
- – Non-conformances are raised when required measurement activities are not performed as defined in project plans.
Finally, IEEE 730 refers to the ISO 15939 measurement recommendations, presented in section 10.5, as measurement recommendations to be implemented in a SQAP.
Following are the factors that help or adversely affect software quality in an organization.
You need to measure the size of a software product. Before programming a measurement tool, you will need to specify what will and what will not be measured for a specific programming language. Choose a programming language and write the specifications for this measurement tool.
In the same organization, many software products have been developed using different programming languages. You have access to size, effort, and quality measures (e.g., the number of defects). How will you proceed to compare the productivity and the quality of these products?
List criteria that would allow you to choose measures for a specific project.
What are the principal questions that a project manager should ask and for which a good measurement program will provide answers?
Write the task description to hire someone who will be responsible for the measurement analysis for your organization.