
1. The Basics

What’s Inside This Chapter

This chapter explores the fundamentals of the ROI Methodology, a process that has become fundamental to many organizations around the world. The chapter covers three key topics:

• defining return on investment (ROI)

• following the ROI process model

• putting ROI to use.

Defining ROI

What is ROI? ROI is the ultimate measure of accountability that answers the question: Is there economic value added to the organization for investing in programs, processes, initiatives, and performance improvement solutions? Organizations rely on many economic indicators. Table 1-1 summarizes the typical financial measures important to resource allocation decisions.

Table 1-1. Financial Measures

Return on Investment (ROI)
Used to evaluate the efficiency or profitability of an investment or to compare the efficiency of a number of investments.
Calculation: Compares the annual net benefits of an investment to the cost of the investment; expressed as a percentage.
ROI (%) = (Net Benefits / Costs) × 100

Return on Equity (ROE)
Measures a corporation’s profitability by revealing how much profit a company generates with the money that shareholders have invested. Used for comparing the profitability of a company to that of other firms in the same industry.
Calculation: Compares the annual net income to shareholder equity; expressed as a percentage.
ROE (%) = (Net Income / Shareholder Equity) × 100

Return on Assets (ROA)
Indicates how profitable a company is in relation to its total assets. Measures how efficient management is at using its assets to generate earnings.
Calculation: Compares annual net income (annual earnings) to total assets; expressed as a percentage.
ROA (%) = (Net Income / Total Assets) × 100

Return on Average Equity (ROAE)
Modified version of ROE referring to a company’s performance over a fiscal year.
Calculation: Same as ROE except the denominator is changed from shareholder equity to average shareholders’ equity, which is computed as the sum of the equity value at the beginning and end of the year divided by two.
ROAE = Net Income / Average Shareholder Equity

Return on Capital Employed (ROCE)
Indicates the efficiency and profitability of a company’s capital investments. ROCE should always be higher than the rate at which the company borrows; otherwise, any increase in borrowing will reduce shareholders’ earnings.
Calculation: Compares earnings before interest and tax (EBIT) to total assets minus current liabilities.
ROCE = EBIT / (Total Assets – Current Liabilities)

Present Value (PV)
Current worth of a future sum of money or stream of cash flows given a specified rate of return. Important in financial calculations including net present value, bond yields, and pension obligations.
Calculation: Divides the future sum of money (C) by one plus the interest rate (r), raised to the number of time periods (t).
PV = C / (1 + r)^t

Net Present Value (NPV)
Measures the difference between the present value of cash inflows and the present value of cash outflows; put another way, it compares the present value of future benefits with the present value of the investment.
Calculation: Compares the value of a dollar today to the value of that same dollar in the future, taking into account a specified interest rate over a specified period of time.

Internal Rate of Return (IRR)
The discount rate that makes the net present value of all cash flows from a particular project equal to zero. Used in capital budgeting; the higher the IRR, the more desirable the project.
Calculation: Follows the NPV calculation as a function of the rate of return; the rate of return for which NPV is zero is the internal rate of return.

Payback Period (PP)
Measures the length of time required to recover an investment.
Calculation: Compares the cost of a project to its annual benefits or annual cash inflows.
PP = Costs / Benefits

Benefit-Cost Ratio (BCR)
Used to evaluate the potential costs and benefits of a project that may be generated if the project is completed. Used to determine financial feasibility.
Calculation: Compares a project’s annual benefits to its cost.
BCR = Benefits / Costs
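To make the time-value notes in Table 1-1 concrete, here is a minimal Python sketch of the PV, NPV, and IRR calculations for a hypothetical stream of annual cash flows. The function names and cash-flow numbers are illustrative assumptions, not from this book.

def present_value(amount, rate, periods):
    # PV = C / (1 + r)^t
    return amount / (1 + rate) ** periods

def net_present_value(rate, cash_flows):
    # Sum of discounted cash flows; cash_flows[0] is the (negative)
    # initial investment at t = 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def internal_rate_of_return(cash_flows, low=0.0, high=1.0, tol=1e-6):
    # IRR is the rate at which NPV equals zero, found here by bisection.
    # Assumes NPV is positive at `low` and negative at `high`.
    while high - low > tol:
        mid = (low + high) / 2
        if net_present_value(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

# Hypothetical project: $30,000 invested today, $12,000 of benefits
# per year for three years.
flows = [-30_000, 12_000, 12_000, 12_000]
print(net_present_value(0.10, flows))     # about -158, so 10% is too high a rate
print(internal_rate_of_return(flows))     # about 0.097, or 9.7%

Bisection is used for the IRR because the NPV equation has no general closed-form inverse; any root-finding method would do.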

As shown, each metric compares monetary benefits to costs or investments in some way. Each one has a purpose. Not all metrics, however, are suitable for demonstrating returns gained from non-capital expenditures such as talent development. Instead, three measures can be used for most types of investment, allowing decision makers to use them to compare results across a wide spectrum of programs and projects. The measures are:

• benefit-cost ratio (BCR)

• return on investment (ROI)

• payback period (PP).

The BCR is the output of benefit-cost analysis, an economic theory grounded in welfare economics. As far back as 1667, Sir William Petty, most remembered as a political economist, found that public health expenditures to combat the plague would achieve what is now referred to as a BCR of 84 to 1 (84:1). Later, BCR grew in prominence in France with Jules Dupuit’s 1844 article on the utility of public works. Economists in the United States adopted BCR in the early 1900s, when it was used to justify projects initiated under the River and Harbor Act of 1902 and the Flood Control Act of 1936.

The concept of ROI, comparing earnings to investment, has been used in business for centuries to measure the success of a variety of investment opportunities. Harvard Business Review’s 75th anniversary edition in 1997 traced the tools used to measure results in organizations (Sibbett 1997). During the 1920s, ROI was the emerging tool to place a value on the payoff of investments. While its initial use was in evaluating the value of capital investments, it has become the leading indicator of value added as a result of human resources and talent development programs. This growth in use stems, in part, from the 1973 work of Jack J. Phillips, who began using ROI to demonstrate the value of a cooperative education program. His use of ROI grew and was formally recorded in his seminal Handbook of Training Evaluation and Measurement Methods (Gulf Publishing, 1983). Over the years, his application of ROI found its way into the broader HR community and has since been adopted across most disciplines.

ROI and BCR provide similar indicators of investment success. ROI presents the earnings (net benefits) compared with the costs and is multiplied by 100 to report it as a percentage; BCR compares gross benefits to costs. The basic formulas used to calculate the BCR and the ROI are:

BCR = Program Benefits / Program Costs

ROI (%) = (Net Program Benefits / Program Costs) × 100

What does the output of the two formulas mean? A BCR of 2:1 means that for every $1 invested, you receive $2 back. This translates into an ROI of 100 percent, which indicates that for every $1 invested, you receive $1 back after the costs are recovered (you get your investment back plus $1).

BCRs were used in the past primarily in public sector settings, while ROI was used primarily by accountants managing funds in business and industry. Both can be, and are, used in either setting, but it is important to understand the difference. In many cases the BCR and ROI are reported together.

The third measure, payback period (PP), is used to determine the point in time when program owners can expect to recover their investments. Typically, it is used to compare alternative investment opportunities; those with the shorter PP are usually more desirable. PP does not consider the time value of money, nor does it consider future benefits. But it is a simple measure that indicates the break-even point, or a BCR of 1:1, which translates to an ROI of 0 percent. The output of the PP formula is the number of months or years before the project pays back its cost. The formula for PP is:

PP = Program Costs / Program Benefits

While ROI is the ultimate measure because it demonstrates the gain over and above the costs, basic accounting practice holds that reporting the ROI metric alone is insufficient. To be truly meaningful, ROI must be reported with other performance measures.

Noted

Periodically, someone will report a BCR of 3:1 and an ROI of 300 percent. This is not possible. ROI is the net benefits (benefits minus costs) divided by the costs, so a BCR of 3:1 translates to an ROI of 200 percent; in general, ROI (%) = (BCR – 1) × 100.

ROI and the Levels of Evaluation

ROI for talent development programs is reported in the context of the five-level evaluation framework, representing program results important to various stakeholders. These levels are categories of data.

Think About This

A blended learning program reduced the number of calls escalated from the service desk to field support by an average of 20 per month. The monthly value of this reduction was $175 per call × 20 calls per month, or $3,500 per month. The first-year benefit of the program was $3,500 × 12, or $42,000. The fully loaded cost for designing, developing, implementing, and evaluating the program was approximately $30,000. Below are the calculations for the BCR, ROI, and PP. Note how, while they tell a similar story, the math and metrics are different.

BCR = Program Benefits / Program Costs
         = $42,000 / $30,000
         = 1.40 or 1.40:1

Translation: For every $1 invested in the program, the organization gained $1.40 in gross benefits.

ROI (%) = (Net Program Benefits / Program Costs) × 100
               = ([$42,000 – $30,000] / $30,000) × 100
               = ($12,000 / $30,000) × 100
               = 40 percent

Translation: For every $1 invested in the program, the organization recovered its $1 investment and gained an additional $0.40 in net benefits, a 40 percent return over and above the investment.

PP = Program Costs / Program Benefits
     = $30,000 / $42,000
     = 0.714 years × 12 = 8.57 months

Translation: Given an investment of $30,000 and benefits of $42,000, the organization should recover the program costs within 8.57 months. This suggests that all benefits beyond those gained in 8.57 months will be additional value.
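The same arithmetic can be checked in a few lines of Python; the figures below are the ones from the example above.

benefits = 175 * 20 * 12                  # $175 per call × 20 calls/month × 12 months = $42,000
costs = 30_000                            # fully loaded program cost

bcr = benefits / costs                    # 1.40
roi = (benefits - costs) / costs * 100    # 40.0 percent; note ROI (%) = (BCR - 1) × 100
payback_months = costs / benefits * 12    # about 8.57 months

print(bcr, roi, payback_months)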

Level 1: Reaction and Planned Action—Data representing participants’ reactions to the program and their planned actions are collected and analyzed. Reactions may include participants’ views of the course content, facilitation, and learning environment. This category also includes data found to be predictive of application of the acquired knowledge and skills. Such measures include those indicating the participant’s perspective of the relevance and importance of the content. Others include the amount of new information presented in the program and participants’ willingness to recommend the program to others.

Level 2: Learning—Data representing the extent to which participants acquired new knowledge and skills are collected and analyzed. This category also includes the level of confidence participants have in their ability to apply what they have learned.

Level 3: Application and Implementation—Data are collected and analyzed to determine the extent to which participants effectively apply their newly acquired knowledge and skills. This category also includes data that describes the barriers that prevent application and any supporting elements (enablers) in the knowledge transfer process.

Level 4: Impact—Also referred to as business impact, these data are collected and analyzed to determine the extent to which participants’ applications of acquired knowledge and skills positively influenced key measures that were intended to improve as a result of the program. Measures can include both hard and soft data. Hard data are measures of output, quality, cost, and time. Soft data are measures of customer satisfaction, job satisfaction, work habits, and innovation. To ensure credibility and reliability in reporting impact, a step to isolate the program’s effects on the improvement in these measures from other influences is always taken.

Level 5: Return on Investment—Impact measures are converted to monetary values and compared with the fully loaded program costs. You may see an improvement in productivity, for example, but the questions remain: What is that improvement worth? How does that value compare to the cost of the program? To calculate an ROI, you must answer these two questions. If the monetary value of productivity’s improvement exceeds the cost, your calculation results in a positive ROI.

Noted

The levels of evaluation are categories of data; timing of data collection does not necessarily define the level to which you are evaluating. Level 1 data can be collected at the end of the program (as is typical) or in a follow-up evaluation months after the program (not ideal).

Levels 4 and 5 data can be forecasted before a program is implemented or at the end of the program. The true impact is determined after the program is implemented when actual improvement in key measures can be observed. Through analysis, this improvement is isolated to the program, accounting for other factors. The basics of forecasting ROI are described in the appendix.

Each level of evaluation answers basic questions regarding the program success. Table 1-2 presents these questions along with those associated with the investment itself (Level 0: Input). Level 0 is not a category of results, which is why it is not considered in the five-level framework. But Level 0 data are important, in that they represent how much the organization is investing in talent development. Together, these categories of data represent a framework that serves as the basis for assessing, measuring, and evaluating talent development programs.

Table 1-2. Framework of Data and Key Questions

Level of Evaluation

Key Questions

Level 0: Input

• How many people attended the program?

• Who were the people attending the program?

• How much was spent on the program?

• How many hours did it take to complete the program?

Level 1: Reaction and Planned Action

• Was the program relevant to participants’ jobs and missions?

• Was the program important to participants’ jobs and mission success?

• Did the program provide new information?

• Do participants intend to use what they learned?

• Would participants recommend the program to others?

• Is there room for improvement with facilitation, materials, and the learning environment?

Level 2: Learning

• Did participants acquire the knowledge and skills presented in the program?

• Do participants know how to apply what they learned?

• Are participants confident to apply what they learned?

Level 3: Application and Implementation

• How effective are participants at applying what they learned?

• How frequently are participants applying what they learned?

• If participants are applying what they learned, what is supporting them?

• If participants are not applying what they learned, why not?

Level 4: Impact

• So what if the application is successful?

• To what extent did application of learning improve the measures the program was intended to improve?

• How did the program affect output, quality, cost, time, customer satisfaction, employee satisfaction, and other measures?

• How do you know it was the program that improved the measures?

Level 5: Return on Investment

• Do the monetary benefits of the improvement in impact measures outweigh the cost of the program?

Think About This

In 2010, ROI Institute and ATD partnered on a study to determine what measures are most compelling to senior leaders. Impact data ranked first, ROI ranked second, and awards ranked third.

The Chain of Impact

Reported together, the levels of data in the framework tell the complete story of program success or failure. Figure 1-1 presents the chain of impact that occurs as organizations invest in talent development programs: participants react positively to those programs; acquire knowledge, skills, information, or insights; and apply that newly acquired knowledge. As a consequence, the application and actions influence key business measures. The impact on those measures is known, because a step is taken to isolate the effects of the program from other influences. When these measures are converted to monetary value and compared with the fully loaded program costs, the ROI can be calculated. Because not all impact measures are converted to money, it is important to focus on and report the intangible benefits of the program. Intangible benefits are Level 4 measures not converted to money. So, they do not represent a new “category” of data. Rather, they are noted specifically to complement the ROI metric.

Figure 1-1. Chain of Impact

You may be wondering if it’s possible to calculate ROI without the other levels of data. Technically, the answer is yes. Each category of data is independent, except for Level 5, which depends on Level 4 measures to start the benefit-cost comparison process. But if you have a negative or extremely high ROI and you have not collected and analyzed data at the lower levels, how do you explain the results? Together, the data in this chain of impact tell a compelling story. Just as important, they provide the data required to improve programs when the ROI is less than desirable.

The Evaluation Puzzle

To make ROI work, five pieces of an evaluation puzzle must come together, as shown in Figure 1-2. The evaluation framework previously described is the first piece of the puzzle, providing a way to categorize and report data. The second piece is the process model.

Figure 1-2. Evaluation Puzzle

Source: Phillips (2017).

Process Model

The process model serves as a step-by-step guide to help maintain a consistent approach to evaluation. There are four phases to the process, each containing critical steps that must be taken to get to credible information. These four phases are described in more detail later in this chapter:

1.  Plan the Evaluation:

○ Align programs with the business.

○ Select the right solution.

○ Plan for results.

2.  Collect Data:

○ Design for input, reaction, and learning.

○ Design for application and impact.

3.  Analyze Data:

○ Isolate the effects of the program.

○ Convert data to monetary value.

○ Tabulate program costs.

○ Identify intangible benefits.

○ Calculate the ROI.

4.  Optimize Results:

○ Communicate results to key stakeholders.

○ Use black box thinking to increase funding.

Noted

The ROI Methodology was originally developed in 1973 by Jack J. Phillips. Jack, at the time, was an electrical engineer at Lockheed Aircraft (now Lockheed Martin) in Marietta, Georgia, who taught test pilots the electrical and avionics systems on the C-5A Galaxy. He was also charged with managing a co-operative education program designed as part of Lockheed’s engineer recruiting strategy. His senior leader told him that in order to continue funding the co-operative education program, Jack needed to demonstrate the return on Lockheed’s investment (ROI). The senior leader was not looking for an intangible measure of value, but the actual ROI.

ROI and cost-benefit analysis had been around for decades, if not centuries. But neither had been applied to this type of program. Jack did his research and ran across a concept referred to as the four steps of training evaluation, developed by industrial-organizational psychologist Raymond Katzell. Don Kirkpatrick wrote about these steps and cited Katzell in his 1956 article titled “How to Start an Objective Evaluation of Your Training Programs.” Because the concept had not been operationalized, nor did it include a financial metric describing ROI, Jack added the economic theory of cost-benefit analysis to the four-step concept and created a model and standards to ensure that reliable data, including the ROI, could be reported to his senior leadership team.

Jack’s 1983 Handbook of Training Evaluation and Measurement Methods put the five-level evaluation framework and the ROI process model on the map. As he moved up in organizations to serve as head of learning and development, senior executive VP of human resources, and president of a regional bank, he had his learning and talent development and HR teams apply this approach to major programs.

Then, in 1994, his book Measuring Return on Investment, Volume 1, published by the American Society for Training & Development (ASTD), now the Association for Talent Development (ATD), became the first book of case studies describing how organizations were using the five-level framework and his process to evaluate talent development programs.

Over the years, Jack Phillips, Patti Phillips, and their team at ROI Institute have authored more than 100 books describing the use of the ROI Methodology. The application of the process expands well beyond talent development and human resources. From humanitarian programs to chaplaincy, and even ombudsmanship, Jack’s original work has grown to be the most documented and applied approach to demonstrating value for money of all types of programs and projects.

Think About This

Is your organization large with autonomous divisions? Many organizations pursuing ROI fit this description. If competition exists between divisions, it can lead to divisions purposefully approaching evaluation (and many other things) differently. If each division approaches evaluation, including ROI, using different methodologies and different standards, doesn’t it stand to reason that you won’t be able to compare the results? Whether it is the approach presented in this book or something else, find a method, develop it, use the standards that support it, and apply it consistently.

Operating Standards and Philosophy

This puzzle piece ensures consistent decision making around the application of the model. Standards provide the guidance needed to support the process and ensure consistent, reliable practice. Following 12 guiding principles leads to consistent results:

1.  When conducting a higher-level evaluation, collect data at lower levels.

2.  When planning a higher-level evaluation, the previous level of evaluation is not required to be comprehensive.

3.  When collecting and analyzing data, use only the most credible sources.

4.  When analyzing data, select the most conservative alternative for calculations.

5.  Use at least one method to isolate the effects of a project.

6.  If no improvement data are available for a population or from a specific source, assume that little or no improvement has occurred.

7.  Adjust estimates of improvement for potential errors of estimation.

8.  Avoid use of extreme data items and unsupported claims when calculating ROI.

9.  Use only the first year of annual benefits in ROI analysis of short-term solutions.

10.  Fully load all costs of a solution, project, or program when analyzing ROI.

11.  Define intangible measures as measures that are purposely not converted to monetary values.

12.  Communicate the results of the ROI Methodology to all key stakeholders.

These guiding principles help maintain a conservative and credible approach to data collection and analysis. They serve as decision-making tools, influencing decisions on the best approach by which to collect data, the best source and timing for data collection, the most appropriate approach for isolation and data conversion, costs to include, and the stakeholders who should receive the results.
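As a simple illustration of how the conservative standard (Guiding Principle 4) plays out in the arithmetic, consider the following sketch; the estimate values are hypothetical.

# Guiding Principle 4: select the most conservative alternative.
# For benefits, that is the lowest credible estimate; for costs, the highest.
benefit_estimates = [52_000, 48_000, 61_000]   # hypothetical estimates
cost_estimates = [29_000, 30_000]

benefits = min(benefit_estimates)   # most conservative benefit
costs = max(cost_estimates)         # most conservative (highest) cost
roi = (benefits - costs) / costs * 100
print(roi)                          # 60.0 percent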

Applications and Practice

Applying the ROI Methodology while adhering to the guiding principles is not a simple task, but it does not have to be difficult. Applications and practice, the fourth piece of the evaluation puzzle, provide a deeper understanding of this comprehensive evaluation process. Case application also provides evidence of program success—because without the story, who will know? Thousands of case studies have been developed describing the application of the ROI Methodology. These case studies represent work from business and industry, healthcare, government, and even community and faith-based initiatives.

Professionals who are beginning their pursuit of ROI can learn from these case studies, as well as those found in other publications; however, the best learning comes from actual application. Conducting your own ROI study will allow you to see how the framework, process model, and operating standards come together. It also serves as the starting line for your track record of program success.

Implementation

Conducting just one study adds little value to your efforts to continuously improve and account for your talent development programs. The key is implementation—the last and most critical piece of the evaluation puzzle. Anyone can conduct one ROI study; the key is sustaining the practice. Building the philosophy into everyday decisions about your talent development process is imperative if you want to sustain a culture of results-based talent development. This requires assessing your organization’s culture for accountability; assessing your organization’s readiness for ROI; defining the purpose for pursuing this level of evaluation; building expertise and capability; creating tools, templates, and standard processes; and adopting technology that will enable optimal use of information that flows from data collection and analysis.

The ROI Process Model

To demonstrate ROI, it is important to follow a step-by-step process to ensure consistent, reliable results. Figure 1-3 presents the ROI Model.

Plan the Evaluation

Planning your ROI evaluation is not only an important first phase in the evaluation process, but an important step in the selection and development of talent development solutions. Without a plan it will be difficult for you to know where you are going, much less when you arrive. Your plan begins with clarifying the business needs for your program and ensuring the most feasible solution has been identified given the needs. Once the correct program has been identified, the next step is to develop specific, measurable objectives and design the program around those objectives. From there you develop your data collection plan. This includes defining the measures for each level of evaluation, selecting the data collection instrument, identifying the source of the data, and determining the timing of data collection. Any available baseline data for the measures you are taking should be collected during this time.

Figure 1-3. ROI Methodology Process Model

Next, develop the ROI analysis plan. This means selecting the most appropriate technique to isolate the effects of the program on impact data and the most credible method for converting data to money. Cost categories and communication targets are developed. As you develop these planning documents, you will also identify ways in which the evaluation approach can be seamlessly integrated into the program.

Collect Data

Once the planning phase is completed, data collection begins. Levels 1 and 2 data are collected during the program with common instruments, including end-of-course questionnaires, written tests and exercises, and demonstrations. Follow-up data, Levels 3 and 4, are collected sometime after the program, when application of the newly acquired knowledge and skills has become routine and enough time has passed to observe impact on key measures. Remember that if your initial analysis identified the measures that need to improve, you will measure the change in performance in those same measures during the evaluation. Your data collection methods during the evaluation may well be the same as those used during the needs analysis.

Analyze Data

Once the data are available, analysis begins using the approach chosen during the planning stage. Now it’s a matter of execution. Isolating the effects of the program on impact data is a first step in data analysis. This step is taken when collecting data at Level 4. Too often overlooked in evaluating the success of talent development programs, this step answers the critical question, “How do you know it was your program that improved the measures?” While some will say this is difficult, we argue (and have argued for years) that it doesn’t have to be. Besides, without this step, the results you report will lack credibility.

The move from Level 4 to Level 5 begins with converting Level 4: Impact measures to monetary value. Often this step instills the greatest fear in talent development professionals, but once you understand the available techniques to convert data along with the five steps of how to do it (which are covered in chapter 5), the fear usually subsides.

Fully loaded costs are developed during the data analysis phase. These costs include needs assessment (when conducted), design, delivery, and evaluation costs. The intent is to leave no associated cost unreported.

Intangible benefits are also identified during this phase. These are the Level 4 measures not converted to monetary value. They can also represent any unplanned program benefits.

The last step of the data analysis phase is the math. Using simple addition, subtraction, multiplication, and division, the ROI is calculated.

Optimize Results

This is the most important phase in the evaluation process. Evaluation without communication and communication without action are worthless endeavors. If you tell no one how the program is progressing, how can you improve the talent development process, secure additional funding, justify programs, and market programs to future participants?

There are a variety of ways to report data. There are micro reports that include the complete ROI impact study; there are macro reports for all programs that include scorecards, dashboards, and other reporting tools.

But communication must lead to action—and that action requires stepping back and analyzing what is learned from the data. Black box thinking is required if you want to get value from your program evaluation investments. The job of talent development professionals is not to “train” people, but to drive improvement in output, quality, cost, time, customer satisfaction, job satisfaction, work habits, and innovation. This occurs through the development of others; but to do it well, it means assessing, measuring, and evaluating and then taking action based on the findings. Figure 1-4 offers something to remember about the evaluation process.

Figure 1-4. Evaluation Leads to Allocation

Putting ROI to Use

The ultimate use of data generated through the ROI Methodology is to show the value of programs, specifically economic value. But there are a variety of other uses for these data, including to justify spending, improve the talent development process, and gain support.

Justify Spending

Justification of spending is becoming more commonplace in the talent development practice. Talent development managers are often required to justify investing in new programs, the continuation of existing programs, and changes or enhancements to existing programs.

New Programs

In the past, when the talent development function had “deep pockets,” new programs were brought on board every time a business book hit the New York Times bestseller list. While many of those programs were inspiring, there was no business justification for them. Today, new programs undergo a greater amount of scrutiny. At a minimum, talent development managers consider the costs and provide some esoteric justification for investing the resources.

For those who are serious about justifying investments in new programs, ROI is a valuable tool. A new program’s ROI can be forecasted using a variety of techniques, and some programs may require this pre-program justification. There are two approaches: pre-program forecasts and ROI in pilot programs. Although these approaches are beyond the scope of this book, the appendix includes basic descriptions of the forecasting techniques.

Existing Programs

Calculating ROI in existing programs is more common in practice than forecasting success for new programs, although there is an increased interest in program justification prior to launch. Typically, ROI is used to justify investments in existing programs where development and delivery have taken place, but there is concern that the value does not justify continuing.

Along with justifying the continuation of existing programs, ROI is used to determine the value of changing delivery mechanisms, such as incorporating blended learning or placing a program online with no in-person interaction. It is also used to justify investing in additional support initiatives that supplement the learning transfer process. Four approaches to ROI can assist in justifying the investment in existing programs: forecasting at Levels 1, 2, and 3, and the post-program evaluation. Post-program evaluation is the basis for this book.

Improve the Talent Development Process

The most important use of ROI is to improve the talent development process. Often talent development staff and program participants are threatened by the thought of being evaluated to such an extent. However, program evaluation is about making decisions concerning the program and the process, not about the individual performance of the people involved. ROI can improve the talent development process by helping staff set priorities, eliminate unsuccessful programs, and reinvent the talent development function.

Noted

Many people fear a negative ROI; however, more can be learned through evaluation projects that achieve a negative ROI than those achieving a high, positive ROI.

Set Priorities

In almost all organizations, the need for talent development exceeds the available resources. A comprehensive evaluation process, including ROI, can help determine which programs rank as the highest priority. Programs with the greatest impact (or the potential for greatest impact) are often top priority. Of course, this approach has to be moderated by taking a long view, ensuring that developmental efforts are in place for a long-term payoff. Also, some programs are necessary and represent commitments by the organization. Those concerns aside, the programs generating the greatest impact or potential impact should be given the highest priority when allocating resources.

Eliminate Unsuccessful Programs

You hate to think of eliminating programs—to some people this translates into the elimination of responsibility and, ultimately, the elimination of jobs. This is not necessarily true. For years, the talent development function has had limited tools to eliminate what are known to be unsuccessful, unnecessary programs. ROI provides this tool.

Think About This

Imagine that talent development staff, participants, and participant supervisors know a vendor-supplied customer service program provides little value to the organization. Participants provide evidence of this with their comments on the end-of-course questionnaire. Unfortunately, senior leaders ignore the Level 1 data, asking for stronger evidence that the program is ineffective.

With this edict, the evaluation team sets the course for implementing a comprehensive evaluation. The evaluation results show that in the first year, the program achieved a –85 percent ROI; the second-year forecast shows a slightly less negative ROI of –54 percent. Leaders immediately agree to drop the program.

Sometimes you need to speak the language of business to get your point across.

Reinvent the Talent Development Function

Implementing a comprehensive evaluation process can have many long-term payoffs, one of which is the reinvention of the talent development function. While evaluating to the ROI level is not necessary for all programs, the process itself provides valuable data that can help eliminate unsuccessful programs or reinvent those that are successful but expensive. The funds saved by making these decisions can be transferred to the front-end assessment, resulting in better, more focused programs. This allows for better alignment between talent development and the business, and perpetuates long-term alignment.

Basic Rule 1

Not every program should be evaluated to impact and ROI. ROI is reserved for those programs that are expensive, have a broad reach, drive business impact, have the attention of senior managers, or are highly visible in the organization. However, when evaluation does go to impact and ROI, results should be reported at the lower levels to ensure that the complete story is told.

Gain Support

A third use for ROI is to gain support for programs and the talent development process. A successful talent development function needs support from key executives and administrators. Showing the ROI for programs can alter managers’ and supervisors’ perceptions and enhance the respect and credibility of the learning staff.

Key Executives and Administrators

Senior executives and administrators are probably the most important group to the talent development function; they commit resources and show support for functions achieving results that positively affect the strategy of the organization. Known for their support of learning, executives and administrators often suggest training as the solution to every problem. Unfortunately, because training is not always the solution, when the problem persists after the program, executives and administrators can quickly change their thinking. That is why it is common to find talent development absent from high-level decision making.

To ensure talent development’s seat at the table, it is necessary for talent development staff and management to think like a business—focusing programs on results and organizational strategy. ROI is one way this focus can occur. Talent development can easily position the function to be a strategic player in the organization by thinking through an opportunity or financial problem that needs to be solved; translating that into the business need; assessing the job performance that needs to be applied to meet the business need; determining the skills necessary to ensure successful job performance; and, finally, deciding the best approach to deliver the knowledge needed to build the skills. ROI evaluation provides the economic justification and value of investing in the mechanism selected to solve the problem.

Managers and Supervisors

Mid-level and frontline supervisors can sometimes be talent development’s biggest antagonists. They often question the value of programs because they aren’t interested in what their employees learn; rather, they are interested in what employees do with what they learn. Talent development must take learning a step further by showing the effect of what employees do with what they learn, with particular emphasis on measures representative of output, quality, cost, and time. If talent development programs can show results linked to the business and talent development staff can speak the language of business, mid-level managers and supervisors may start to listen to them more closely.

Employees

Showing the value of programs, including ROI, can enhance the talent development function’s overall credibility. By showing employees that the programs offered are serious programs achieving serious results, the talent development function can show that training is a valuable way for employees to spend time away from their pressing duties. Also, by making adjustments in programs based on the evaluation findings, employees will see that the evaluation process is not just a superficial attempt to show value.

Getting It Done

It is easy to describe the basics and benefits of using such a comprehensive evaluation process as the ROI Methodology, but this approach is not for everyone. Given that, your first step toward making ROI work for your organization is assessing the degree to which your talent development function is results based. Complete the assessment in Exercise 1-1 to see where you stand. Then ask a client to complete the survey and compare the results.

In the next chapter, you will learn how to create a detailed plan for your evaluation.

Exercise 1-1. Talent Development Program Assessment

Instructions: For each of the following statements, circle the response that best matches the talent development function at your organization.

1.  The direction of the talent development function at your organization:

a. shifts with requests, problems, and changes as they occur

b. is determined by talent development and adjusted as needed

c. is based on a mission and a strategic plan for the function

2.  The primary mode of operation of the talent development function is to:

a. respond to requests by managers and other employees to deliver training services

b. help management react to crisis situations and reach solutions through training services

c. implement many talent development programs in collaboration with management to prevent problems and crisis situations

3.  The goals of the talent development function are:

a. set by the talent development staff based on perceived demand for programs

b. developed consistent with talent development plans and goals

c. developed to integrate with operating goals and strategic plans of the organization

4.  Most new programs are initiated:

a. by request of top management

b. when a program appears to be successful in another organization

c. after a needs analysis has indicated that the program is needed

5.  When a major organizational change is made, you:

a. decide only which programs are needed, not which skills are needed

b. occasionally assess what new skills and knowledge are needed

c. systematically evaluate what skills and knowledge are needed

6.  To define talent development plans:

a. management is asked to choose talent development programs from a list of canned, existing courses

b. employees are asked about their talent development needs

c. talent development needs are systematically derived from a thorough analysis of performance problems

7.  When determining the timing of training and the target audiences, you:

a. have lengthy, nonspecific talent development training courses for large audiences

b. tie specific talent development training needs to specific individuals and groups

c. deliver talent development training almost immediately before its use, and it is given only to those people who need it

8.  The responsibility for results from talent development:

a. rests primarily with the talent development staff to ensure that the programs are successful

b. is the responsibility of the talent development staff and line managers, who jointly ensure that results are obtained

c. is a shared responsibility of the talent development staff, participants, and managers all working together to ensure success

9.  Systematic, objective evaluation, designed to ensure that participants are performing appropriately on the job, is:

a. never accomplished; the only evaluations are during the program and they focus on how much the participants enjoyed the program

b. occasionally accomplished; participants are asked if the training was effective on the job

c. frequently and systematically pursued; performance is evaluated after training is completed

10.  New programs are developed:

a. internally, using a staff of instructional designers and specialists

b. by vendors; you usually purchase programs modified to meet the organization’s needs

c. in the most economical and practical way to meet deadlines and cost objectives, using internal staff and vendors

11.  Costs for training and talent development are accumulated:

a. on a total aggregate basis only

b. on a program-by-program basis

c. by specific process components, such as development and delivery, in addition to a specific program

12.  Management involvement in the talent development process is:

a. very low with only occasional input

b. moderate, usually by request, or on an as-needed basis

c. deliberately planned for all major talent development activities, to ensure a partnership arrangement

13.  To ensure that talent development is transferred into performance on the job, you:

a. encourage participants to apply what they have learned and report results

b. ask managers to support and reinforce training and report results

c. use a variety of training transfer strategies appropriate for each situation

14.  The talent development staff’s interaction with line management is:

a. rare; you almost never discuss issues with them

b. occasional; during activities, such as needs analysis or program coordination

c. regular; to build relationships, as well as to develop and deliver programs

15.  Talent development’s role in major change efforts is to:

a. conduct training to support the project, as required

b. provide administrative support for the program, including training

c. initiate the program, coordinate the overall effort, and measure its progress—in addition to providing training

16.  Most managers view the talent development function as:

a. a questionable function that wastes too much time of employees

b. a necessary function that probably cannot be eliminated

c. an important resource that can be used to improve the organization

17.  Talent development programs are:

a. activity oriented (all supervisors attend the Talent Development Workshop)

b. individual results based (the participants will reduce their error rate by at least 20 percent)

c. organizational results based (the cost of quality will decrease by 25 percent)

18.  The investment in talent development is measured primarily by:

a. subjective opinions

b. observations by management and reactions from participants

c. dollar return through improved productivity, cost savings, or better quality

19.  The talent development effort consists of:

a. usually one-shot, seminar-type approaches

b. a full array of courses to meet individual needs

c. a variety of talent development programs implemented to bring about change in the organization

20.  New talent development programs and projects, without some formal method of evaluation, are implemented at your organization:

a. regularly

b. seldom

c. never

21.  The results of talent development programs are communicated:

a. when requested, to those who have a need to know

b. occasionally, to members of management only

c. routinely, to a variety of selected target audiences

22.  Management involvement in talent development evaluation:

a. is minor, with no specific responsibilities and few requests

b. consists of informal responsibilities for evaluation, with some requests for formal training

c. is very specific. All managers have some responsibilities in evaluation

23.  During a business decline at your organization, the talent development function will:

a. be the first to have its staff reduced

b. be retained at the same staffing level

c. go untouched in staff reductions and possibly beefed up

24.  Budgeting for talent development is based on:

a. last year’s budget

b. whatever the training department can “sell”

c. a zero-based system

25.  The principal group that must justify talent development expenditures is:

a. the talent development department

b. the human resources or administrative function

c. line management

26.  Over the last two years, the talent development budget as a percentage of operating expenses has:

a. decreased

b. remained stable

c. increased

27.  Top management’s involvement in the implementation of talent development programs:

a. is limited to sending invitations, extending congratulations, and passing out certificates

b. includes monitoring progress, opening and closing speeches, and presentations on the outlook of the organization

c. includes participating in the program to see what’s covered, conducting major segments of the program, and requiring key executives be involved

28.  Line management involvement in conducting talent development programs is:

a. very minor; only talent development specialists conduct programs

b. limited to a few supervisors conducting programs in their area of expertise

c. significant; on the average, over half of the programs are conducted by key line managers

29.  When an employee completes a talent development program and returns to the job, their supervisor is likely to:

a. make no reference to the program

b. ask questions about the program and encourage the use of the material

c. require use of the program material and give positive rewards when the material is used successfully

30.  When an employee attends an outside seminar, upon return, they are required to:

a. do nothing

b. submit a report summarizing the program

c. evaluate the seminar, outline plans for implementing the material covered, and estimate the value of the program

Interpreting the Talent Development Program Assessment

Score the assessment instrument as follows (a short scoring sketch in code appears after the list):

• 1 point for each (a) response

• 3 points for each (b) response

• 5 points for each (c) response
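If you collect responses electronically, tallying the score is trivial. Here is a minimal sketch, assuming responses are stored as a list of 'a'/'b'/'c' strings, one per question.

POINTS = {"a": 1, "b": 3, "c": 5}

def score(responses):
    # One response for each of the 30 questions.
    assert len(responses) == 30
    return sum(POINTS[r] for r in responses)

print(score(["c"] * 30))   # 150, the maximum score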

Score Range and Analysis

120–150: Outstanding environment for achieving results with talent development. Great management support. A truly successful example of results-based talent development.
90–119: Above average in achieving results with talent development. Good management support. A solid and methodical approach to results-based talent development.
60–89: Needs improvement to achieve desired results with talent development. Management support is ineffective. Talent development programs do not usually focus on results.
30–59: Serious problems with the success and status of talent development. Management support is nonexistent. Talent development programs are not producing results.