
2

Plan Your Work

What’s Inside This Chapter

This chapter presents the basics of planning your evaluation:

• aligning programs with the business

• defining program objectives

• developing the evaluation plan.

 

 

Aligning Programs With the Business

While instructional systems design models such as ADDIE (analysis, design, development, implementation, and evaluation) position evaluation at the end, planning for evaluation needs to occur at the beginning—during the analysis phase. This ensures the right measures for evaluation are taken in the end. It leads to a more streamlined and simpler approach to evaluation, not to mention a better program. Evaluation answers the question “why?”

Talent development professionals serve the organization's purpose through their programs and initiatives. To successfully answer the "why," it is important to first align the program with the needs of the organization, and then identify a feasible solution given those needs.

The evaluation framework shown in Table 1-2 serves as the basis for program alignment. By using the five levels to categorize data captured in the analysis phase, talent development professionals can ensure they identify the ultimate needs of the organization as well as the learning needs of the individuals who participate in their programs. This framework is also the basis for defining objectives. As will be presented later in this chapter, objectives serve as the architectural blueprint for a program. Defining objectives at all levels ensures that the program is designed for the results that matter to all stakeholders.

Noted

“There is nothing so useless as doing efficiently that which should not be done at all.”

—Peter Drucker, Austrian-born American management consultant, educator, and author

Figure 2-1 presents the alignment model. As shown on the left, analysis begins with an examination of the payoff opportunities and business needs, and then moves toward the assessment of performance gaps where feasible solutions come to light. Then learning needs are defined, followed by preference needs that answer the question, “How best do we roll out and deliver the content?” Input needs represent the project plan itself (for example, who will participate, how many will participate, what is the cost of delivery). These needs lead to the objectives, which are central to program and evaluation success. They answer the questions “How will we design, develop, and implement the program?” and “How will we measure program success?” The right side of the model represents evaluation, answering the question “What results come from this program?” The measures required to answer this question depend first on clarifying the payoff needs.

Figure 2-1. Alignment Model

Payoff Needs

Every organization faces opportunities to make money, save money, avoid cost, or contribute to the greater good while making money, saving money, or avoiding cost. Identifying payoff needs is the first step in the alignment process. Payoff needs can be opportunities to pursue or problems to solve. They answer questions such as:

• Is this program worth doing?

• Is the problem worth solving?

• Is the opportunity worth pursuing?

Payoff needs come in the form of profit increases, cost reductions, and cost avoidance. These opportunities are often obvious and relatively easy to connect to monetary values. For example:

• Claim process time has increased 30 percent.

• Our system downtime is double last year’s performance.

• The company’s safety record is the worst in the industry.

• We have an excessive number of product returns—40 percent higher than last year.

On the other hand, payoff opportunities are sometimes not so obvious, such as:

• Let’s implement a team building program.

• We must develop leadership competencies for all managers.

• We need to implement a company-wide cultural transformation initiative.

• It’s time to build capability for future growth.

While these seem like admirable goals, the payoff of each is not so clear. To gain clarity, consider the following:

• Why is this an issue?

• What happens if you do nothing?

• Why is this critical?

• How is this linked to strategy?

• What business measures will improve?

• What value will this bring?

Business Needs

While considering the payoff needs, the business needs will often become apparent. Business needs are the specific organizational measures that, if improved, will help address the payoff need. Measures that represent business needs come in the form of output, quality, cost, time, customer satisfaction, job satisfaction, work habits, and innovation.

These measures represent either hard data or soft data. Hard data are objectively based and easily converted to money, for example, citizens vaccinated, graduation rate, loans approved, budget variances, length of stay, cycle time, failure rates, and incidents.

Soft data are subjectively based and, while converting them to money is possible, it requires more effort. Examples include teamwork, networking, client satisfaction, organizational commitment, employee experience, creativity, brand awareness, and readiness.

Identifying payoff needs and defining specific business measures up front will help ensure the program targets the right opportunity or addresses the right problem. It also enables easier post-program evaluation.

With payoff and business needs in hand, the next step in the alignment process is to answer the following questions:

• What needs to change to influence the business measures?

• What can enable this change?

• What is the best solution?

Answers to these questions come by assessing performance needs.

Performance Needs

Some talent development professionals are moving from order taker to value creator. These individuals are resisting the temptation to say “yes” to every request for a new program. Rather, they try to uncover the problem or opportunity and identify business measures in need of improvement. Then they identify a solution or solutions that will best influence the business need. Their role is evolving into a performance consulting role, positioning them as critical business partners to leaders throughout the organization.

Success in this movement requires the talent development professional to assess the performance gaps that, if closed, will address business needs. This means they must have a mindset of curiosity and inquiry and be willing to:

• Examine data and records.

• Initiate the discussion with the client.

• Use benchmarking from similar solutions.

• Use evaluation as a hook to secure more information.

• Involve others in the discussion.

• Discuss disasters in other places.

• Discuss consequences of not aligning solutions to the business.

They must also be willing to use, or find others who can use, the wide variety of diagnostic tools available to support this level of needs analysis (see Table 2-1 for some examples).

Table 2-1. Diagnostic Tools to Support Performance Needs Analysis

Determining which tool or tools to use depends on the scope of the problem or opportunity. An expensive problem warrants comprehensive analysis. For example, at Southeast Corridor Bank (SECOR Bank), the organization was facing customer dissatisfaction and departure, due in part to the inefficiencies in the branches. Analysis of the problem showed that inefficiencies were due to the excessive turnover of bank tellers (71 percent) and the inability of those who remained to serve customer needs. Due to the cost of this problem, leadership decided to invest in a comprehensive performance needs assessment to understand the underlying causes of turnover. One technique employed was the nominal group technique, which involved a series of focus groups with people who represented those who were leaving. SECOR Bank was able to identify 10 reasons why people were leaving the bank and rank them in order of importance (Phillips, Phillips, and Ray 2016). A less expensive problem would have called for a less expensive approach.

This level of analysis usually leads to the most feasible solution or solutions for the business need at hand, with some solutions more obvious than others. In the SECOR Bank study, the team realized it could solve the five most prominent causes of turnover with one solution: a program that would offer training and development opportunities to enable tellers to better serve customers, as well as engage them in the business and with the customers to a greater extent than they had been.

Noted

You can use collaborative analytics to discern opportunities to improve output, quality, and cost, as well as employee engagement, customer experience, and other business measures. It is also useful in determining the impact that changes in collaborative networks have on business measures. While its use is still in its infancy, it is important that talent development professionals become familiar with the opportunities it offers. A good place to begin this learning journey is a research piece authored by Rob Cross, Tom Davenport, and Peter Gray, titled "Driving Business Impact Through Collaborative Analytics" (Connected Commons, April 2019).

Learning Needs

Addressing the performance needs uncovered in the previous step typically requires a learning component to ensure all parties know what they need to do and how to do it. In some cases, a learning program becomes the solution. In other cases, nonlearning solutions such as processes, procedures, policies, and technologies are the most feasible approach to closing the performance gap that will lead to improvement in business measures. Assessing learning needs is not limited to pre- and post-knowledge assessments of program participants. Examples of other techniques include subject matter expert input, job and task analysis, observations, demonstrations, and management assessments.

It is important to go beyond technical knowledge and tactical skill assessment, especially when there is great opportunity at stake. People need to know the “how” as well as the “what,” “why,” and “when.” It is also important to remember that learning needs assessment is important for multiple stakeholders, not just program participants. Supervisors, senior leaders, and the direct reports of the target audience all play a role in ensuring programs are successful.

Preference Needs

Preference needs drive program requirements. Individuals prefer certain content, processes, schedules, or activities for the structure of a program. These preferences inform how best to roll out and deliver a program. If the program is a solution to a problem or if it is leveraging an opportunity, preference needs define how best to implement the program and how participants should perceive it for it to be successful from their perspective. Designing programs based on audience preference increases the odds that participants will commit to them and will be equipped to do what needs to be done to drive the measures that matter.

Input Needs

The last phase of analysis is the project plan, which represents the projected investment in the program. Here, needs are determined in terms of the number of offerings, who will likely participate and when, and how many people will participate in each session. The program team will also decide on in-house and external resources to leverage. Travel, food, facilities, and lodging issues are also defined at this stage. At the end of this phase, the program team will estimate the full cost of the program.

The Alignment Model in Action

Table 2-2 presents the output of a basic application of the alignment process. The event that led to this output was a discussion between a chief learning officer (CLO) and the president of operations for a large chip manufacturing company (Phillips and Phillips 2005). The president was concerned that his people spent too much time in training that did not matter. Upon questioning the president, the CLO learned that the concern was not too much training, but rather too much time in meetings in general. She also gained insight into how the meetings were being run and the extent to which follow-through on commitments made in those meetings was taking place. Together they came to an agreement that the president would actively engage in supporting learning transfer in an effort to reduce the cost of spending too much time in meetings. They also agreed that the cost reduction would exceed the cost of solution implementation, targeting an ROI of 25 percent. This would indicate to the president that for every $1 invested in the solution, the organization would gain an additional $0.25.
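For reference, ROI in this methodology is the net program benefits divided by the fully loaded program costs, expressed as a percentage. Using the chapter's own figures, the 25 percent target works out as follows for each dollar invested:

$$\text{ROI} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100 = \frac{\$1.25 - \$1.00}{\$1.00} \times 100 = 25\%$$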

Table 2-2. Output of Alignment Process


Payoff Needs

What is the economic opportunity or problem?

• Specific dollar amount is unknown. Estimate thousands in U.S. dollars due to time wasted in meetings.

Business Needs

What are the specific business needs?

• Too many meetings (frequency of meetings per month)

• Too many people attending meetings (number of people per month)

• Meetings are too long (average duration of meetings in hours)

Performance Needs

What is happening or not happening on the job that is causing this issue?

• Meetings are not planned

• Agendas are not developed prior to the meeting

• Agendas are not being followed

• Consideration of the time and cost of unnecessary meetings is lacking

• Poor facilitation of meetings

• Follow-up action resulting from the meeting is not taking place

• Conflict that occurs during meetings is not being appropriately managed

• Proper selection of meeting participants is not occurring

• Good meeting management practices are not implemented

• Consideration of cost of meetings is not taking place

Learning Needs

What knowledge, skill, or information is needed to change what is happening or not happening on the job?

• Ability to identify the extent and cost of meetings

• Ability to identify positives, negatives, and implications of meeting issues and dynamics

• Effective meeting behaviors

Preference Needs

How best can this knowledge, skill, or information be communicated so that change on the job occurs?

• Facilitator-led workshops

• Job aids and tools

• Relevant and useful information is required

Input Needs

What is the projected investment?

• 72 supervisors and team leaders who lead meetings

• Average salary $219 per day

• Break into three groups

• Two-day workshop for all 72 people

• Program fee for 72 people (includes facilitation and materials)

• Estimated travel and lodging

• Cost of facilities for six days (2 days × 3 offerings)

• Prorated cost of needs assessment

• Estimated cost of evaluation (≈5% program cost)

• Estimated cost: $125,000

Think About This

Sometimes it is important to forecast the ROI for a program prior to investing in it; for example, if the program is very expensive or when deciding between different delivery mechanisms. Pre-program forecasting is also important when deciding between two programs intended to solve the same problem. The appendix includes a basic description of pre-program forecasting as well as descriptions of how to forecast ROI with data representing the other levels of evaluation.

Defining Program Objectives

Program objectives represent the expectation for success. More importantly, they serve as the architectural blueprint that talent development professionals should follow if they want to design programs for results. They answer the question, “How?” meaning, “How will we design, develop, and implement this program?” and “How will we measure the program’s success?”

Program objectives reflect the same framework used in categorizing evaluation data. The key in writing program objectives is to be specific in identifying measures of success and to ensure that the measures align with those discovered through the needs assessment. All too often, broad program objectives are written or the measures that define those objectives are irrelevant to the need for the program. Vague and irrelevant objectives hurt the design of the program, impair the evaluation process, and lead to meaningless results.

Noted

Specificity drives results. Vague and nebulous leads to vague and nebulous.

Level 1: Reaction and Planned Action Objectives

Level 1 objectives are critical in that they describe expected immediate and long-term satisfaction with a program. They describe issues important to the success of the program, including facilitation, relevance and importance of content, logistics, and intended use of knowledge and skills. But Level 1 evaluation has drawn criticism, particularly around the use of overall satisfaction as a measure of success. Overuse of the overall satisfaction measure has led many organizations to make funding decisions based on whether participants like a program; only later do they realize the data were misleading.

Level 1 objectives should identify issues that are important and measurable rather than esoteric indicators that provide limited useful information. They should be attitude based, clearly worded, and specific. Level 1 objectives specify whether the participant has had a change in thinking or perception as a result of the program and underscore the linkage between attitude and the program's success. While Level 1 objectives represent a satisfaction index from the consumer perspective, these objectives should also have the capability to predict program success. Given these criteria, it is important that Level 1 objectives represent specific measures of success. A good predictor of the application of knowledge and skills is participants' perceived relevance of program content. So, a Level 1 objective may be:

At the end of the course, participants will perceive program content as relevant to their jobs.

A question remains, however: “How will you know you are successful with this objective?” This is where a good measure comes in. Table 2-3 compares a broad objective with a more specific measure.

Table 2-3. Compare Broad Objective With More Specific Measure

Objective: At the end of the course, participants will perceive program content as relevant to their jobs.

Measure: 80 percent of participants rate program relevance a 4.5 out of 5 on a Likert scale.

For those of you who are more research driven, you might want to take this a step further by defining (literally) what you mean by “relevance.” For example, relevance may be defined as:

• knowledge and skills that participants can immediately apply in their work

• knowledge and skills that reflect participants’ day-to-day work activity.

Now the measures of success can be even more detailed. Table 2-4 compares the broad objective to the more detailed measures. Success with these two measures can be reported individually, or you can combine the results of the two measures to create a “relevance index.”

Table 2-4. Compare a Broad Objective With More Specific and Detailed Measures

Objective: At the end of the course, participants will perceive program content as relevant to their jobs.

Measures:

• 80 percent of participants indicate that they can immediately apply the knowledge and skills in their work, as indicated by a 4.5 rating out of 5 on a Likert scale.

• 80 percent of participants view the knowledge and skills as reflective of their day-to-day work activity, as indicated by a 4.5 rating out of 5 on a Likert scale.
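To make the combined index concrete, here is a minimal sketch of how the two measures and a "relevance index" might be scored. It assumes integer responses on a 1–5 Likert scale; the function name and sample data are illustrative, not from the book.

```python
# Minimal sketch: scoring two Level 1 relevance measures and combining
# them into a "relevance index." Names and data are illustrative.

def pct_at_or_above(ratings, threshold=4.5):
    """Percentage of respondents rating at or above the threshold.
    On an integer 1-5 Likert scale, a 4.5 threshold means a rating of 5."""
    return 100.0 * sum(r >= threshold for r in ratings) / len(ratings)

# Hypothetical end-of-course responses (1-5 Likert scale)
can_apply_now   = [5, 4, 5, 5, 3, 5, 5, 5]  # "immediately apply in my work"
reflects_my_job = [5, 5, 4, 5, 5, 5, 5, 3]  # "reflects day-to-day work activity"

measure_1 = pct_at_or_above(can_apply_now)    # target: 80 percent
measure_2 = pct_at_or_above(reflects_my_job)  # target: 80 percent
relevance_index = (measure_1 + measure_2) / 2  # combined index

print(f"Measure 1: {measure_1:.0f}% | Measure 2: {measure_2:.0f}% "
      f"| Relevance index: {relevance_index:.0f}%")
```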

Breaking down objectives into multiple, specific measures provides a clearer picture of success; however, multiple measures also lengthen your Level 1 data collection instrument. The question to consider is, "Do you need a long questionnaire with many questions representing many measures to determine success with an objective?" For a program planned for ROI evaluation, no. Keep your lower-level evaluation instruments simple (yet meaningful) when planning an evaluation that includes impact and ROI. Conserve your resources for the more challenging tasks of Level 4 and Level 5 evaluation.

Think About This

Overall satisfaction is sometimes referred to as a measure of how much participants liked the program’s snacks. Recent analysis of a comprehensive Level 1 end-of-course questionnaire showed that participants viewed the program as less than relevant and not useful, and had little intention to use what they learned. Scores included:

○ Knowledge and skills presented are relevant to my job: 2.8 out of 5.

○ Knowledge and skills presented will be useful to my work: 2.6 out of 5.

○ I intend to use what I learned in this course: 2.2 out of 5.

Surprisingly, however, respondents scored the overall satisfaction measure, “I am satisfied with the program,” as 4.6 out of 5. Hmm, it must have been the cookies!

Level 2: Learning Objectives

There is ongoing interest in evaluating the acquisition of knowledge and skills. Drivers of this interest include growth in the number of learning organizations, increased emphasis on intellectual capital, and increased use of certifications as discriminators in the selection process. Given this, Level 2 objectives should be well defined.

Level 2 objectives communicate expected outcomes from instruction; they describe competent performance that should be the result of learning. The best learning objectives describe behaviors that are observable and measurable. As with Level 1 objectives, Level 2 objectives are outcome based. Clearly worded and specific, they spell out what the participant must be able to do as a result of learning.

Basic Rule 2

When conducting a higher-level evaluation, collect data at lower levels.

There are three types of learning objectives:

Awareness—participants are familiar with terms, concepts, and processes.

Knowledge—participants have a general understanding of concepts and processes.

Performance—participants are able to demonstrate the knowledge and skills acquired.

A typical learning objective may be:

At the end of the course, participants will be able to use Microsoft Word.

Sounds reasonable. But what does “successful use” look like? How will you know if you have achieved success? You need a measure, as shown in Table 2-5. Now, you can evaluate the success of learning.

Table 2-5. Compare Broad Objective With Implementation Measures

Objective: At the end of the course, participants will be able to use Microsoft Word.

Measures: Within a 10-minute time period, participants will be able to demonstrate to the facilitator the following applications of Microsoft Word with zero errors:

• File, save as, save as web page

• Format, including font, paragraph, background, and themes

• Insert tables, add columns and rows, and delete columns and rows

Level 3: Application and Implementation Objectives

Where learning objectives and their specific measures of success tell you what participants can do, Level 3 objectives tell you what participants are expected to do when they leave the learning environment. Application objectives describe the expected outputs of the talent development program, which include competent performance resulting from training, and provide the basis for evaluating on-the-job performance changes. The emphasis is placed on applying what was learned.

The best Level 3 objectives identify behaviors that are observable and measurable; they are outcome based, clearly worded, specific, and spell out what the participant has changed as a result of the learning. A typical application objective might read something like:

Participants will use effective meeting behaviors.

Again, you need specifics to evaluate success. What are effective meeting behaviors and to what degree should participants use those skills? Some examples of measures are shown in Table 2-6. With defined measures, you now know what success looks like.

An important element of Level 3 evaluation is that this is where you assess success with learning transfer. Is the system supporting learning? Here you look for barriers to application and supporting elements (enablers). It is critical to gather data around these issues so that you can take corrective action when evidence of a problem exists. You may wonder how you can influence issues outside your control—say, when participants indicate that their supervisor is preventing them from applying newly acquired knowledge. Through the evaluation process, you can collect data that arm you to engage in dialogue with supervisors. Bring the supervisors into the fold and ask them for help. Tell them there is evidence that some supervisors are not supporting learning opportunities and that you need their advice as to how to remedy the situation.

Table 2-6. Compare Application Objective With Measurable Behaviors

Objective: Participants will use effective meeting behaviors.

Measures:

• Participants will develop a detailed agenda outlining the specific topics to be covered for 100 percent of meetings.

• Participants will establish meeting ground rules at the beginning of 100 percent of meetings.

• Participants will follow up on meeting action items within three days following 100 percent of meetings.

A comprehensive assessment at Level 3 provides you with tools to begin this dialogue with all stakeholders. Through it you may find that many managers, supervisors, and colleagues in other departments don’t understand the role of talent development, nor do they have a clear understanding of the adult learning process. This is an opportunity to teach them, and thereby, increase their support.

Level 4: Impact Objectives

Success with Level 4 objectives is critical when you want to achieve a positive ROI for the talent development investment. Level 4 objectives provide the basis for measuring the consequences of application of skills and knowledge and place emphasis on achieving bottom-line results. The best impact measures are both linked to the knowledge and skills in the program and easily collected. Level 4 objectives are results based, clearly worded, and specific. They spell out what the participant has accomplished in the business unit as a result of the program. Four types of impact objectives involving hard data are output focused, quality focused, cost focused, and time focused. Three common types of impact measures involving soft data are customer service focused, work climate focused, and work habit focused.

Say you work for a large, multinational computer manufacturer that prides itself on the quality of its computer systems and the service provided when there is a problem. The company makes it easy for purchasers to get assistance by selling lucrative warranties on all its products. One particular system, the X-1350, comes with a three-year warranty that includes the "gold standard" for technical support for an additional $105.

In the past year, there has been an increase in the number of call-outs to repair contractors, particularly with regard to the X-1350. This increase is costing the company not only money, but also customer satisfaction. A new program is implemented to improve the computer's quality. A typical impact objective might read something like:

Improve the quality of the X-1350.

However, that objective is too broad. To determine whether your efforts will have their intended impact, focus on specific measures—in this example, measures of quality. Table 2-7 shows the objective and specific measures of success.

Table 2-7. Compare Broad Objective With Impact Measures

Objective: Improve the quality of the X-1350.

Measures:

• Reduce the number of warranty claims on the X-1350 by 10 percent within six months after the program.

• Improve overall customer satisfaction with the quality of the X-1350 by 10 percent as indicated by a customer satisfaction survey taken six months after the program.

• Achieve top scores on product quality measures included in industry quality survey.

Specific measures describe the meaning of success. They also serve as the basis for the questions that you ask during the evaluation.

Level 5: ROI Objectives

Level 5 objectives target the specific economic return anticipated when an investment is made in a program. This objective defines "good" in answer to the question, "What is a good ROI?" There are four options when setting the target ROI:

• Set the ROI at the level of other investments.

• Set the ROI at a higher standard.

• Set the ROI at break-even.

• Set the ROI based on client expectations.

Set ROI at the Level of Other Investments

Setting ROI at the same level as other investments is not uncommon. Many talent development groups use this approach to ensure a link with operations. To establish this target, ask your finance and accounting teams what the average return is for other investments.

Set ROI at a Higher Standard

Another approach to establishing the Level 5 objectives is to raise the bar for talent development. Set the target ROI at a higher level than the other investments. Because talent development affects so many and contributes so much to the organization, a higher than normal expected ROI is not unreasonable.

Set ROI at Break-Even

Some organizations are satisfied with a 0 percent ROI—break-even. This says that the organization got its investment back. For instance, if an organization spends $50,000 on a particular program and the monetary benefits are $50,000, there is no gain beyond the investment, but the investment came back. Many organizations, such as nonprofit, community-based, and faith-based organizations, value a break-even ROI.
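Using the same ROI formula, the $50,000 example works out to break-even:

$$\text{ROI} = \frac{\$50{,}000 - \$50{,}000}{\$50{,}000} \times 100 = 0\%$$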

Set ROI Based on Client Expectations

A final strategy to setting the Level 5 objective is to ask the client. Remember that the client is the person or group funding the program. They may be willing to invest in a program given a certain return on that investment.

Developing the Evaluation Plan

There are two basic documents that you will complete when planning your ROI impact study: the data collection plan and the ROI analysis plan. By completing these plans thoroughly, you will be well on your way to conducting an ROI study. Once they are completed, have the client sign off on your approach to the evaluation. By taking this important step, you gain buy-in and the confidence of knowing that you have support for your planned approach. But before you set off on planning the actual ROI study, it is important to clarify the purpose and feasibility of pursuing such a comprehensive evaluation. Yes, the needs assessment may align the program with key impact measures that, if improved, will lead to a payoff, but is the program really suitable for impact and ROI evaluation?

Purpose

Purpose keeps you focused on the "why" of the evaluation and provides a basis for using the data once they are generated. All too often, evaluation is done without understanding the purpose of the process; as a result, raw data sit for days, months, and sometimes years before anyone analyzes them to see what happened.

Defining the purpose of the evaluation helps determine the scope of the evaluation project. It drives the type of data to be collected as well as the type of data collection instruments to be used.

Evaluation purposes range from demonstrating the value of a particular program to boosting credibility for the entire talent development function. Typical evaluation purposes can be categorized into three overriding themes:

• making decisions about programs

• improving programs and processes

• demonstrating program value.

Making Decisions About Programs

Decisions are made every day, with and without evaluation data. But, with evaluation data, the talent development function can better influence those decisions. Evaluation data can help you make decisions about a program prior to its launch, for example, when you forecast the ROI in a pilot program. Once you know the results of the evaluation, you can decide whether to pursue the program further.

Evaluation data can help the talent development staff make decisions about internal development issues. For example, Level 1 data provide information that helps determine the extent to which facilitators need additional skill building. Level 2 data can help you decide whether an additional exercise will better emphasize a skill left undeveloped. Level 3 data not only tell supervisors the extent to which their employees are applying new skills, but also the extent to which events under their control are preventing employees from applying the skills. Data from Levels 4 and 5 help senior managers and executives decide whether they will continue investing in certain programs.

Noted

Decisions are made with or without evaluation data. By providing data, the talent development team can influence the decision-making process.

The levels of evaluation provide different types of data that influence different decisions. Table 2-8 presents a list of decisions that evaluation data, including ROI, can influence.

Table 2-8. Decisions Made With Evaluation Data

Decision (Level of Evaluation)

• Talent development staff want to decide whether they should invest in skill development for facilitators. (Level 1)

• Course designers are concerned the exercises do not cover all learning objectives and need to decide which skills need additional support. (Level 2)

• Supervisors are uncertain as to whether they want to send employees to future training programs. (Levels 3 and 4)

• The clients of the talent development team are deciding if they want to invest in expanding a pilot leadership program for the entire leadership team. (Level 5)

• Senior managers are planning next year's budget and are concerned about allocating additional funding to the talent development function. (Levels 1–5, scorecard)

• The talent development staff are deciding whether they should eliminate an expensive program that is getting bad reviews from participants, but a senior executive plays golf with the training supplier. (Level 5)

• A training supplier is trying to convince the talent development team that their leadership program will effectively solve the turnover problem. (Level 5, forecast/pilot)

• Supervisors want to implement a new initiative that will change employee behavior because they believe the talent development program did not do the job. (Level 3, focus on barriers and enablers)

Improving Programs and Processes

One of the most important purposes in generating comprehensive data using the ROI Methodology is to improve talent development programs and processes. As data are generated, the programs being evaluated can be adjusted so that future presentations are more effective. Reviewing evaluation data in the earlier stages allows the talent development function to implement additional tools and processes that can support the transfer of learning.

Evaluation data can help the talent development function improve its accountability processes. By consistently evaluating programs, the talent development function will find ways to develop data more efficiently through technology or through the use of experts within the organization. Evaluation will also cause the talent development staff to view its programs and processes in a different light, asking questions such as, “Will this prove valuable to the organization?” “Can we get the same results for less cost?” “How can we influence the supervisors to better support this training program?”

Demonstrating Program Value

A fundamental purpose of conducting comprehensive evaluation is to show the value of talent development programs—specifically, the economic value. But when considering individual programs you plan to evaluate, you often have to ask yourself “value to whom?”

Value is not simply defined. Just as learning occurs at the societal, community, team, and individual levels, value is defined from the perspective of the stakeholder:

• Is a program valuable to those involved?

• Is a program valuable to the system that supports it?

• Is a program economically valuable?

Value can be defined from three perspectives. These perspectives are put into context by comparing them to the five-level evaluation framework. Table 2-9 presents these perspectives. The consumer perspective represents the extent to which those involved in the program react positively and acquire some level of knowledge and skills as a result of participating. The system perspective represents the supporting elements within the organization that make the program work. The economic perspective represents the extent to which knowledge or skills transferred to the job positively affect key business measures; when appropriate, these measures are converted to monetary value and compared with the cost of the program to calculate an economic metric, ROI.

Table 2-9. Value Perspectives

Consumer Perspective

The consumers of talent development are those who have an immediate connection with the program. Facilitators, designers, developers, and participants represent consumers. Value to this group is represented at Levels 1 and 2. Data provide the talent development staff feedback so they can make immediate changes to the program as well as decide where developmental needs exist. These data provide a look at what the group thought about the program and how each participant fared, in terms of knowledge and skills acquisition, compared with the rest of the group. Some measures—those representing utility of knowledge gain—are often used to predict actual application of knowledge and skills.

System Perspective

The system represents those people and functions that support learning within an organization. This includes participant supervisors, participant peers and team members, executives, and support functions, such as the IT department or the talent development function. In many cases, the system is represented by the client.

Although Level 3 data provide evidence of participant application of newly acquired knowledge and skills, the greatest value in evaluating at this level is in determining the extent to which the system supports learning transfer. This is determined by the barriers and enablers identified through the Level 3 evaluation.

Economic Perspective

The economic perspective is typically that of the client—the person or group funding the program. Although the supervisor will be interested in whether the program influenced business outcomes and the ROI, it is the client—who is sometimes the supervisor, but more often senior management—who makes the financial investment in the program. Levels 4 and 5 provide data representing the economic value of the investment.

Table 2-10 presents the value perspectives compared with the frequency of use of the data provided by each level of evaluation. Although there is value at all levels, the lower levels of evaluation are implemented most frequently, even though the higher levels tend to be of greater value to clients. This is due to the feasibility of conducting evaluations at the lower levels versus the higher levels.

Table 2-10. Value Perspective Versus Use

Feasibility

Program evaluations have multiple purposes—when you evaluate at Level 5 to influence funding decisions, you still need Level 1 data to help you improve delivery and design. This is one reason the lower levels of evaluation are conducted more frequently than the higher levels. Other drivers that determine the feasibility of evaluating programs to the various levels include the program objectives, the availability of data, and the appropriateness for ROI.

Program Objectives

As described earlier, program objectives are the basis for evaluation. Program objectives drive the design and development of the program and show how to measure success. They define what the program is intended to do, and how to measure participant achievement and system support of the learning transfer process. All too often, however, minimal emphasis is placed on developing objectives and their defined measures at the higher levels of evaluation.

Availability of Data

A question to consider is “Can you get the information you need to determine if the objectives are met?” The availability of data at Levels 1 and 2 is rarely a concern. Simply ask for the opinion of the program participants, test them, or facilitate role plays and exercises to assess their overall knowledge, skills, and insight. Level 3 data are often obtained by going to participants, their supervisors, their peers, and their direct reports. The challenge is in the availability of Level 4 data. While the measures are typically monitored on a routine basis, the question is often how the talent development team can access them. The first step is to determine where they are housed, and then build a relationship such that the owners of the measures will partner with you so you can access the data you need. Occasionally, reliance on participants to provide information on the measures is the best approach. But if they are not the right audience, how will they access the data?

Program objectives and data availability are key drivers in determining the feasibility of evaluating a program to ROI; however, some programs are just inappropriate for ROI.

Appropriateness for ROI

How do you know if a program is appropriate for ROI evaluation? By answering the following questions:

• What other factors will influence improvement in the business measures?

• How will you isolate the effects of your program on improvement in the impact measures from other influences?

• Can impact measures be converted to monetary value within cost constraints, and in a way that stakeholders perceive as credible?

• Does the profile of the program meet specific criteria?

The first two questions represent the most important, yet most misunderstood, step in the ROI process—isolating the effects of the program. Talent development professionals will sometimes question the feasibility and appropriateness of this step. But if you report business impact or ROI without taking it, the information will be invalid. If you suggest that your sales program generated enough profit to overcome the costs of the program, resulting in a 50 percent ROI, someone in the organization will ask, "How do you know your sales training program was what generated that profit?"

There are a variety of ways to isolate the effects of programs, which are described in chapter 4. Control group methodology is one; however, while it’s the gold standard of techniques, it is often the least feasible. If you do not intend to take this step in the process, then don’t report business impact or ROI. You’ll set yourself up to lose credibility.

The next question is fundamental in moving from Level 4 to Level 5—converting data to monetary value. Omit this step, and you cannot report ROI. As discussed earlier, ROI is an economic indicator comparing the monetary benefits of a program to the fully loaded costs. If you cannot convert a measure to monetary value given the cost constraints under which you are working or so that it is perceived as a credible value, report the improvement as intangible (still an important benefit, just not one that will be included in the ROI calculation).

There are a variety of ways to calculate monetary value for impact measures, including standard values, historical costs, expert opinion, estimations, and previous studies. (Read chapter 5 for more information.) The key is converting the measure within your cost constraints and in a way that stakeholders perceive as credible.
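To show how these pieces fit together, here is a minimal sketch under simple, hypothetical assumptions: a control group isolates the program's effect on a single impact measure, a standard value converts the improvement to money, and the result is compared with the fully loaded program costs. The figures, the $1,500 standard value, and the group design are all illustrative; chapters 4 and 5 cover the actual techniques.

```python
# Minimal sketch: isolating program effects with a control group,
# converting the improvement to money, and computing ROI.
# All figures are hypothetical; see chapters 4 and 5 for the techniques.

# Monthly warranty claims, before and after the program
trained_before, trained_after = 50, 38   # group that attended the program
control_before, control_after = 50, 46   # similar group that did not

# Isolate the program's effect: improvement beyond the control group's
trained_improvement = trained_before - trained_after        # 12 claims
control_improvement = control_before - control_after        # 4 claims
program_effect = trained_improvement - control_improvement  # 8 claims/month

# Convert to monetary value using a (hypothetical) standard value
cost_per_claim = 1_500                                 # fully loaded cost of one claim
annual_benefit = program_effect * 12 * cost_per_claim  # 8 * 12 * $1,500

# Compare with fully loaded program costs
program_costs = 90_000
roi = (annual_benefit - program_costs) / program_costs * 100

print(f"Annual benefit: ${annual_benefit:,}  ROI: {roi:.0f}%")
# Annual benefit: $144,000  ROI: 60%
```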

Noted

Not all programs are suitable for impact and ROI evaluation; but when you do evaluate to these levels, use at least one method to isolate the effects of the program and credibly convert data to monetary value.

The last question to consider in assessing the appropriateness of a program for ROI is the program profile—does it meet specific criteria? An inexpensive program offered one time, never to be offered again, is not suitable for ROI. Why invest resources in conducting such a comprehensive evaluation on a program for which the data serve no valuable or ongoing purpose? Basic skill building is not always suitable for ROI, for example, a program for basic computer skills. Sometimes you just want to know that participants know how to do something rather than what impact their doing it has on key business measures. Induction programs are not always suitable for ROI, especially entry-level programs in which participants are just beginning their professional careers.

So what programs are suitable for ROI? Those programs that are:

• expected to have a long life cycle

• linked to organization strategy

• connected to organization objectives

• expensive, requiring resources, time, and money

• targeted to a large audience

• highly visible throughout the organization

• of interest to management

• intended to drive major change within the organization.

Table 2-11 presents targets based on a benchmarking study of users of the ROI Methodology. These targets can serve as a guide in developing your evaluation strategy.

Table 2-11. Percentage of Programs Evaluated at Each Level

Data Collection Plan

The data collection plan lays the initial groundwork for the ROI study. This plan holds the answers to the questions:

• What do you ask?

• How do you ask?

• Whom do you ask?

• When do you ask?

• Who does the asking?

What Do You Ask?

The answers to this question lie in the program objectives and their respective measures. Specific measurable objectives and measures of success serve as the basis for the questions you intend to ask. When broad objectives are developed, the measures must be clearly described so that you know when success is achieved.

How Do You Ask?

How you ask depends on a variety of issues, including resources available to collect data. Level 1 data are typically collected using the end-of-course questionnaire. To collect Level 2 data, use tests, role plays, self-assessments, demonstrations and simulations, or peer and facilitator assessments. Follow-up data collection (Levels 3 and 4) is the most challenging; however, there are a variety of options, including questionnaires, focus groups, interviews, action plans, and performance monitoring. These options provide flexibility and ensure that the lack of data collection methods is not a barrier to following up on program application and impact.

Whom Do You Ask?

Your source of data is critical. Go only to the most credible source; sometimes this includes multiple sources. The more sources providing data, the more reliable the data. The only constraint is the cost of going to those multiple sources.

When Do You Ask?

Timing of data collection is critical and getting it right can be a challenge. You want to wait long enough for new behaviors to have had time to become routine, but not so long that the participants forget how they developed the new behavior. You also want to wait long enough for impact to occur, but most executives aren’t willing to wait an extended period of time. Therefore, you have to pick a point in time at which you believe application and impact have occurred. Timing, just like the measures themselves, should be defined during the development of the program objectives and based on the needs of the organization.

Who Does the Asking?

Who will be responsible for each step in the data collection process? Typically, the facilitator collects data at Levels 1 and 2. For the higher levels of evaluation, representatives of the evaluation team are assigned specific roles. One of these roles is data collection. A person or team is assigned to the task of developing the data collection instrument and administering it. This includes developing a strategy to ensure a successful response rate.

Table 2-12 presents an example of a completed data collection plan.

ROI Analysis Plan

The second planning document is the ROI analysis plan, which requires that you identify:

• methods for isolating the effects of the program

• methods for converting data to monetary value

• cost categories

• intangible benefits

• communication targets for the final report

• other influences and issues during application

• comments.

The ROI analysis plan also includes a column for comments or any notes that you might need to take regarding the evaluation process.

Methods for Isolating the Effects of the Program

Decide which technique you plan to use to isolate the effects of the program on your Level 4 measures. The method of isolation is typically the same for all measures, but you will sometimes find that one technique works for some measures while other measures require a different one.

Methods for Converting Data to Monetary Value

Next, determine the methods you will use to convert your Level 4 measures to monetary value. In some cases, you will choose not to convert a measure to monetary value. When that is the case, just leave that space blank. Otherwise, select a technique described in chapter 5.

Cost Categories

This section includes all costs for the program. These costs include the needs assessment, program design and development, program delivery, evaluation costs, and some amount that’s representative of the overhead and administrative costs for those people and processes that support your programs. Each cost category is listed on the ROI analysis plan.

Intangible Benefits

Not all measures will be converted to monetary value. There is a four-part test in chapter 5 that helps you decide which measures to convert and which not to convert. Those measures you choose not to convert to monetary value are considered intangible benefits. Move the Level 4 measures that you don’t convert to monetary value to this column.

Communication Targets for the Final Report

In many cases, organizations will plan their communication targets in detail. Here, during the evaluation planning phase, you will identify at a minimum those audiences to whom the final report will be submitted. Four key audiences always get a copy or summary of the report: the participants, talent development staff, supervisors of the participants, and client.

Other Influences and Issues During Application

When planning, it's important to anticipate any issues that may occur during the training process that might dampen or negate improvement in your identified impact measures. Use this part of the plan to list any issues you foresee.

Comments

The final element of the ROI analysis plan is the comments. Here, you can put notes to remind yourself and your evaluation team of key issues, comments regarding potential success or failure of the program, reminders for specific tasks to be conducted by the evaluation team, and so forth.

The importance of planning your data collection and ROI analysis cannot be stressed enough. Planning in detail what you are going to ask, how you are going to ask, whom you are going to ask, when you are going to ask, and who will do the asking, along with the key steps in the ROI analysis, will help ensure successful execution. Additionally, having clients sign off on your plans will ensure support when the evaluation results are presented.

Table 2-12. Completed Data Collection Plan

Table 2-13 presents a completed ROI analysis plan.

Getting It Done

Now it is time for you to go to work. Before you go any further in this book, select a program that is suitable for ROI. If this is your first ROI study, consider selecting a program in which you are confident that success will be achieved. Success with your first study is an incentive for the next one.

Once you have identified the program, answer the questions presented in Exercise 2-1. In the next chapter, you will learn methods for collecting data and begin developing the data collection plan (Table 2-14).

Exercise 2-1. Questions to Start Thinking About Data Collection

Program:

Evaluation Team:

Expected Date of Completion:

1. What is your purpose in conducting an ROI evaluation on this program?

2. What are the broad program objectives at each level of evaluation?

    Level 1:

    Level 2:

    Level 3:

    Level 4:

    Level 5:

3. What are your measures of success for each objective?

    Level 1:

    Level 2:

    Level 3:

    Level 4:

    Level 5:

4. Transfer your answers to questions 2 and 3 to the first two columns in the data collection plan (Table 2-14).

Table 2-13. Completed ROI Analysis Plan

Table 2-14. Data Collection Plan