
7

Sustain Momentum

What’s Inside This Chapter

Now that you know the basics of developing an ROI impact study, it’s time to learn how to keep up the momentum. This includes:

• identifying resistance to implementation

• overcoming resistance to implementation

• making the ROI Methodology routine.

 

 

Identifying Resistance

Resistance to comprehensive evaluation like the ROI Methodology will be based on fear, lack of understanding, opposition to change, and the effort required to make a change successful. To identify resistance and then overcome it, start with the talent development team, go to the management team, and then conduct a gap analysis.

Start With the Talent Development Team

The biggest resistance will probably come from within the talent development team. As Pogo, the famous comic strip character, once said, “We have met the enemy, and he is us.” The staff may resist the extra efforts required to use ROI on the talent development process. The problems, concerns, or fears that arise must be uncovered.

Feedback from the talent development staff should be collected through formal meetings or questionnaires aimed at uncovering the areas of concern. What are the pressure points? What are the issues? What are the problems? They will quickly surface in this type of meeting or instrument. It is best to get all problems, concerns, and fears out in the open so that they can be addressed.

Also collect informal feedback from the individuals whose support is needed for the ROI process to work properly. Pay particular attention to those recognized as official or unofficial leaders. Formal assessment, feedback on concerns, and informal feedback expose many of the staff's issues with ROI. Table 7-1 shows typical statements of resistance.

Some of these concerns are realistic; others are not. Implementing the ROI Methodology will, no doubt, take additional effort and change the way talent development is carried out in the organization. It will require making painful changes when programs are not living up to expectations. However, the process also has many positive outcomes, which individuals may not be able to see because of their concerns or fears. For most implementations, many of the concerns are based on either a lack of understanding of ROI or a belief in the myths about it—a problem that can easily be confronted in a proper implementation process.

Table 7-1. Typical Objections to ROI

• This costs too much.

• We don’t need this.

• This takes too much time.

• Who is asking for this?

• This is not in my job duties.

• I did not have input on this.

• I do not understand this.

• Our clients will never buy this.

• What happens when the results are negative?

• How can we be consistent with this?

• The ROI process is too subjective.

• Our managers will not support this.

• ROI is too narrowly focused.

• This is not practical.

Go to the Management Team

The management team presents its own resistance challenges and will have questions about talent development that must be analyzed and addressed. The first issue to recognize is that different levels of management have different concerns about the talent development process and ROI. Are the immediate managers of participants involved? If so, then their concerns should be addressed. Sometimes, the middle level of management, those who budget for talent development and support it in a variety of ways, may be the target. At other times, the concerns may come from top management where the ultimate commitment to talent development is crystallized. These individuals decide to what extent the talent development function exists by providing the necessary resources and by supporting the process with highly visible actions.

Once the target is identified, the next step is to collect feedback. Meetings and questionnaires are also appropriate for the management team. The responses can reveal much about management’s perceptions of the success of the talent development process. The results quickly show concerns and areas where action is needed.

Others who are involved may have concerns that you should address. If there are outsourcing partners, input should be obtained from them as well. External groups, such as customer groups, involved with talent development should also be included. The important issue is to make sure that those involved in supporting and sustaining the talent development process will have opportunities to sort out their concerns. Table 7-2 shows these groups’ typical reactions to accountability issues and efforts, which may be surprising to the talent development staff.

Table 7-2. Typical Accountability Reactions

Accountability Issues

• Is all this training really needed?

• How is talent development helping our business?

• Can we do this with less cost?

• Do we have what it takes?

• Why does this take so long?

• Show me the money.

Reaction to ROI

• Is this more new jargon?

• Is this the ROI that I know?

• How can you do this?

• Why didn’t you do this earlier?

• Is this credible?

• Can we do this for every program?

Conduct a Gap Analysis

Given the concerns from the staff and various support and stakeholder groups, you should conduct a gap analysis. A gap analysis focuses on where things are compared to where they need to be—for example, where management support is versus where it needs to be for the ROI process to work. It may be helpful to conduct gap analyses in a variety of different areas as shown in Table 7-3.

Table 7-3. Typical Gap Categories

• Staff capability for ROI

• Results-based talent development

• Alignment with business needs

• Effective policies, procedures, and templates

• Appropriate environment for transfer of learning

• Effective management support

• The perception of value of talent development

One of the most important issues is to assess the staff's capability with ROI. If there is a gap between actual and needed knowledge and understanding of ROI, specific actions must be taken so that all individuals involved will be on track to use the process properly.

Another area that may need adjustment is the talent development cycle. Evaluation must be considered early and often in the cycle. Data collection may need to be built into some processes, requiring participants and others to provide data as part of the learning process.

A third area of concern is business alignment—the extent to which programs are presently aligned to the business when compared to the best possible alignment. Often you must change practices and processes so that programs are more directly linked to business needs from the very beginning.

Policies, procedures, and guidelines often have to be changed so that evaluation becomes standardized, consistent, and routine. Policies and guidelines include statements about the percentage of programs that will be taken to various levels of evaluation, the extent of up-front business alignment with programs, and other important procedures.

Another important area to assess is the gap between reality and expectation in the workplace, which has to be analyzed and often changed to support the transfer of learning. In the initial analysis, the workplace must be free of barriers to learning transfer. Supporters and enablers should be in place to assist the transfer of learning from a program to on-the-job application. You should consider learning transfer issues before, during, and after programs are designed and implemented.

Next, management support is a key issue and specific efforts may be needed to improve support on different levels. To get managers involved, make sure they have the appropriate information and show them what the talent development process is doing for them. A variety of support processes can make a difference in the success or failure of a program.

Finally, perceptions have to change—perceptions about the value of the talent development process and its contribution to the organization. Although the change may take time and require clear and wide-ranging evidence of success, it is necessary. With this gap analysis, the specific steps can be taken to narrow and close these gaps, so that you can overcome resistance to accountability efforts.

Overcoming Resistance to Implementation

Overcoming resistance requires a methodical approach, with a variety of actions to remove, minimize, or go around the barriers and problems identified in the gap analysis. When the resistance is overcome, implementation can be accomplished. Figure 7-1 shows the building blocks necessary to overcome resistance to ROI implementation. The building blocks are approached from the first actions at the bottom of the figure to the last actions at the top, so that each block is put in place before moving to the next.

Figure 7-1. Building Blocks for Overcoming Resistance

Identifying Roles and Responsibilities

A variety of roles and responsibilities are required to achieve successful implementation. An important role is the ROI champion who helps identify and delegate important responsibilities to ensure successful implementation.

Identifying a Champion

As a first step in the process, one or more individuals should be designated as the internal leader for ROI. As in most change efforts, someone must take responsibility for ensuring that the process is implemented successfully. The ROI champion is usually the one who understands the process best and sees the potential of the ROI Methodology. This leader must be willing to teach and coach others. Table 7-4 presents the various roles of the ROI champion.

Table 7-4. Roles of the ROI Champion

The ROI leader is usually a member of the talent development staff who has this responsibility full time in larger organizations or part time in smaller organizations. The typical job title for a full-time ROI leader is manager or leader of measurement and evaluation. Some organizations assign this responsibility to a team and empower them to lead the ROI effort.

Delegating Responsibilities to Ensure Success

Determining specific responsibilities is a critical issue because there can be confusion when individuals are unclear about their specific assignments in the ROI process. Responsibilities fall into two broad groups. The first group is the measurement and evaluation responsibility for the entire talent development staff. This group is involved in designing, developing, delivering, coordinating, and supporting programs; providing input on the design of instruments; planning an evaluation; collecting data; and interpreting the results. Typical responsibilities include:

• ensuring that the needs assessment includes specific business results measures

• developing specific Level 3: Application objectives and Level 4: Impact objectives for each program

• focusing the content of the program on performance improvement—ensuring that exercises, tests, case studies, and skill practices relate to the desired objectives

• keeping participants focused on application and results objectives

• communicating rationale and reasons for evaluation

• assisting in follow-up activities to capture application and impact results data

• providing assistance for data collection, data analysis, and reporting

• developing plans for data collection and analysis

• presenting evaluation data to a variety of groups

• helping with the design of instruments.

Although it may be inappropriate to have each member of the staff involved in all of these activities, each individual should have at least one or more responsibilities as part of routine job duties. This assignment of responsibility keeps the ROI process from being disjointed and separate from major talent development activities. More important, it brings accountability to those who develop, deliver, and implement the programs.

The second group is the technical support function. Depending on the size of the talent development staff, it may be helpful to have technical experts provide assistance with the ROI Methodology. These experts supplement technical expertise, not relieve others of evaluation responsibilities. Some organizations have found this approach to be effective. When this type of support is developed, responsibilities revolve around eight key areas:

• designing data collection instruments

• providing assistance for developing an evaluation strategy

• coordinating a major evaluation project

• analyzing data, including specialized statistical analyses

• interpreting results and making specific recommendations

• developing an evaluation report or case study to communicate overall results

• presenting results to critical audiences

• providing technical support in any phase of the ROI process.

Deciding who is responsible for each part of the evaluation needs attention throughout the process. It is not unusual to require others in support functions to be responsible for data collection. These responsibilities are defined when a particular evaluation strategy plan is developed and approved.

Noted

It will only take 3 to 5 percent of your talent development budget to create and integrate a robust measurement and evaluation practice. That's pennies compared to the value of the opportunities lost if you don't have one.

Preparing the Staff

Staff preparation is critical. Working with evaluation is a new endeavor for many leaders as well as talent development staff. For this reason, it is important to consider what knowledge, skills, and experiences the leaders and staff need to ensure successful implementation.

Developing the ROI Leaders

In preparation for this assignment, ROI leaders usually obtain special training to build specific skills and knowledge of the ROI Methodology. The implementation leader takes on a variety of specialized roles.

At times, the ROI implementation leader serves as technical expert, giving advice and making decisions about some of the issues involved in evaluation design, data analysis, and presentation. As an initiator, the leader identifies programs for ROI analysis and takes the lead in conducting a variety of ROI studies. When needed, the implementation leader is a cheerleader, bringing attention to the ROI process, encouraging others to become involved, and showing how value can be added to the organization. The implementation leader is also a communicator—letting others know about the process and communicating results to target audiences. All the roles can come into play at one time or another as the leader implements ROI in the organization.

Developing the Staff

A group that will often resist the ROI Methodology is the staff who must design, develop, deliver, and coordinate talent development solutions. These staff members often see evaluation as an unnecessary intrusion into their responsibilities—absorbing precious time and stifling their freedom to be creative.

You should involve the staff on each key issue in the process. As policy statements are prepared and evaluation guidelines developed, staff input is essential. It is difficult for the staff to be critical of something they helped design, develop, and plan. Using meetings, brainstorming sessions, and task forces, the staff should be involved in every phase of developing the framework and supporting documents for ROI. In an ideal situation, the staff can learn the process in a two-day workshop and, at the same time, develop guidelines, policy, and application targets. This approach is very efficient, completing several tasks at the same time.

Using ROI As a Learning Tool—Not a Performance Evaluation Tool

One reason the staff may resist the ROI Methodology is that the effectiveness of their programs will be fully exposed, placing their reputation on the line. They may have a fear of failure. To overcome this, the process should clearly be positioned as a tool for process improvement and not a tool to evaluate talent development staff performance, at least during its early years of implementation. Talent development staff will not be interested in developing a tool that will be used to expose their shortcomings and failures.

Evaluators can learn more from failures than from successes. If the program is not working, it is best to find this out quickly and understand the issues. If a program is ineffective, it will eventually be known to the clients and the management group, if they are not aware of it already. Lack of results will cause managers to become less supportive of talent development. Dwindling support appears in many forms, ranging from reducing budgets to refusing to let participants be involved in programs. If the weaknesses of programs are identified and adjustments are made quickly, not only will effective programs be developed, but also the credibility and respect for the function and the staff will be enhanced.

Revising Policies and Procedures

Another key part of implementation is revising the organization’s policy concerning measurement and evaluation, often a part of policy and practice for developing and implementing talent development programs. The policy statement contains information developed specifically for the measurement and evaluation process. It is frequently developed with the input of the talent development staff, key managers or sponsors, and the finance and accounting staff. Sometimes policy issues are addressed during internal workshops designed to build skills with measurement and evaluation. Table 7-5 shows the topics in the measurement and evaluation policy for a large organization.

Table 7-5. Results-Based Internal Talent Development Policy

1.  Purpose.

2.  Mission.

3.  Evaluate all programs, which will include the following levels:

• Level 1: Reaction and Planned Action (100%)

• Level 2: Learning (no less than 70%)

• Level 3: Application and Implementation (50%)

• Level 4: Impact (10%; usually through sampling of highly visible, expensive programs)

• Level 5: ROI (7%).

4.  Evaluation support group (corporate) will provide assistance and advice in measurement and evaluation, instrument design, data analysis, and evaluation strategy.

5.  New programs are developed following logical steps beginning with needs analysis and ending with communicating results.

6.  Evaluation instruments must be designed or selected to collect data for evaluation. They must be valid, reliable, economical, and subject to audit by the evaluation support group.

7.  Responsibility for talent development program results rests with facilitators, participants, and supervisors of participants.

8.  An adequate system for collecting and monitoring talent development costs must be in place. All direct costs should be included.

9.  At least annually, the management board will review the status and results of talent development. The review will include plans, strategies, results, costs, priorities, and concerns.

10.  Line management shares in the responsibility for program evaluation through follow-up, pre-program commitments, and overall support.

11.  Managers and supervisors must declare competence achieved through talent development programs. When not applicable, the talent development staff should evaluate.

12.  External consultants must be selected based on previous evaluation data. A central data or resource base should exist.

13.  All external programs of more than one day in duration will be subjected to evaluation procedures. In addition, participants will assess the quality of external programs.

14.  Talent development program results must be communicated to the appropriate target audience. As a minimum, this includes management (participants’ supervisors), participants, and all learning staff.

15.  Key talent development staff members should be qualified to do effective needs analysis and evaluation.

16.  A central database for program development must be in place to prevent duplication and serve as a program resource.

17.  Union involvement is necessary in the total talent development plan.

The policy statement addresses critical issues that will influence the effectiveness of the measurement and evaluation process. Typical topics include adopting the five-level evaluation framework presented in this book; requiring objectives at the higher levels at least for some, if not all, programs; and defining responsibilities for talent development.

Policy statements guide and direct the staff and others who work closely with the ROI Methodology. They keep the process clearly focused and enable the group to establish goals for evaluation. They also provide an opportunity to communicate basic requirements and fundamental issues regarding performance and accountability. More than anything else, policy statements serve as a learning tool to teach others, especially when they are developed in a collaborative and collective way. If policy statements are developed in isolation and do not have the ownership of the staff and management, they will not be effective or useful.

Guidelines and processes for measurement and evaluation are important to show how to use the tools and techniques, guide the design process, provide consistency in the ROI Methodology, ensure that appropriate methods are used, and place the proper emphasis on each of the areas. The guidelines are more technical than policy statements and often contain detailed procedures showing how the process is undertaken and developed. They often include specific forms, instruments, and tools necessary to facilitate the process.

Establishing Goals, Plans, and Timetables

As pointed out in chapter 2, planning is a critical part of the process—plan your work; work your plan. This rings true with taking steps to sustain your evaluation practice, such as setting targets and developing a project plan.

Setting Targets

Establishing specific targets for evaluation levels is an important way to make progress with measurement and evaluation. Targets enable the staff to focus on the improvements needed with specific evaluation levels. In this process, the percentage of programs planned for evaluation at each level is developed. The first step is to assess the present situation. The number of all programs, including repeated sections of a program, is tabulated along with the corresponding level of evaluation presently conducted for each course. Next, the percentage of courses using Level 1: Reaction questionnaires is calculated. The process is repeated for each level of the evaluation. The current percentages for Levels 3, 4, and 5 are usually low.

After detailing the current situation, the next step is to determine a realistic target for each level within a specific timeframe. Many organizations set annual targets for changes. This process should involve the input of the talent development staff to ensure that the targets are realistic and that the staff is committed to the process and targets. If the talent development staff does not develop ownership for this process, targets will not be met. The improvement targets must be achievable, while at the same time, challenging and motivating. Table 7-6 shows the recommended annual targets for evaluation at the five levels.

Table 7-6. Evaluation Targets

Level of Evaluation | Percentage of Programs Evaluated at This Level
Level 1: Reaction and Planned Action | 90–100%
Level 2: Learning | 60–90%
Level 3: Application and Implementation | 30–40%
Level 4: Impact | 10–20%
Level 5: ROI | 5–10%

Using this example, 90 to 100 percent of the programs are measured at Level 1, which is consistent with many other organizations. Only 60 to 90 percent of the programs are measured at Level 2 using a formal method of measurement. At this level, informal methods are not counted as a learning measure. Level 3 represents a 30 to 40 percent follow-up. Ten to 20 percent are planned for evaluation at Level 4 and half of those are planned for evaluation to Level 5. These percentages are typical and often recommended.

Target setting is a critical implementation issue. It should be completed early in the process with full support of the talent development staff. Also, if practical and feasible, the targets should have the approval of the key management staff, particularly the senior management team.
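
To illustrate the tabulation described above, the sketch below compares a hypothetical program inventory against the targets in Table 7-6. It is a minimal Python sketch; the program names, evaluation levels, and counts are invented for illustration and are not taken from this book or from any organization.

```python
# Minimal sketch: compare current evaluation coverage with annual targets.
# The inventory is hypothetical; replace it with your own program list, where
# each entry records the highest evaluation level currently conducted.

inventory = {
    "Leadership Essentials": 3,   # evaluated through Level 3
    "New Hire Orientation": 1,
    "Sales Negotiation": 5,       # full ROI study
    "Safety Refresher": 2,
    "Coaching Skills": 1,
}

# Recommended annual targets from Table 7-6 (low and high ends, in percent).
targets = {1: (90, 100), 2: (60, 90), 3: (30, 40), 4: (10, 20), 5: (5, 10)}

total = len(inventory)
for level, (low, high) in targets.items():
    # A program evaluated at level N is assumed to be counted at every lower level.
    current = sum(1 for top in inventory.values() if top >= level)
    pct = 100 * current / total
    status = "meets target" if pct >= low else "below target"
    print(f"Level {level}: {current}/{total} programs ({pct:.0f}%), "
          f"target {low}-{high}% -> {status}")
```

A spreadsheet accomplishes the same comparison; the point is simply to make the current-versus-target gap visible for each level.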

Developing a Project Plan

An important part of the planning process is to establish timetables for the complete implementation process. The timetables become a master plan for completing the different elements, beginning with assigning responsibilities and concluding with meeting the targets previously described. Figure 7-2 shows an ROI implementation project plan for a large petroleum company.

Think About This

What percentage of your programs do you evaluate at each level? How do your targets compare to the recommended targets above?

• Level 1 ______ percent

• Level 2 ______ percent

• Level 3 ______ percent

• Level 4 ______ percent

• Level 5 ______ percent

From a practical standpoint, this schedule is a project plan for the transition from the present situation to a desired future situation. The more detailed the document, the more useful it becomes. The project plan is a living, long-range document that should be reviewed frequently and adjusted as necessary. More important, it should always be familiar to those who are routinely working with the ROI Methodology.

Figure 7-2. ROI Implementation Project Plan for a Large Petroleum Company

Completing ROI Projects

The next major step is to complete the ROI projects. A small number of projects are usually initiated, perhaps two or three programs. The selected programs most often represent the functional areas of the business, such as operations, sales, finance, engineering, and information systems. It is important to select a manageable number so the projects will be completed.

Ultimately, the number of programs tackled depends on the resources available to conduct the studies, as well as the internal need for accountability. Using the profile above, for an organization with 200 programs, this means that 10 to 20 percent (20 to 40) of the programs will have Level 4: Impact and/or Level 5: ROI results studies conducted annually.

As the projects are developed and the ROI implementation is under way, status meetings should be conducted to report progress and discuss critical issues with appropriate team members. For example, if a leadership program is selected as one of the ROI projects, all the key staff involved in the program (design, development, and delivery) should meet regularly to discuss the status of the project. This keeps the project team focused on the critical issues, generates the best ideas to tackle problems and barriers, and builds a knowledge base to implement evaluation in future programs.

Noted

Not every offering of a program is evaluated to impact or ROI. This type of evaluation is typically conducted on select offerings. So, while 20 unique programs may be targeted for ROI evaluation, it is likely that only one or two offerings of each will be evaluated to those levels.

These meetings serve three major purposes: reporting progress, learning, and planning. The meeting usually begins with a status report on each ROI project, describing what has been accomplished since the previous meeting. Next, discussions take place about the specific barriers and problems encountered. During the discussions, new tactics, techniques, or tools are brought up. Also, the entire group discusses how to remove barriers to success and focuses on suggestions and recommendations for next steps, including developing specific plans. Finally, the next steps are developed, discussed, and configured. Ultimately, these projects must be completed, and the results communicated to the appropriate audiences.

Using Technology

Ensuring that measurement and evaluation are administered efficiently and effectively requires the use of technology. Throughout this book, a few types of technology that support evaluation have been mentioned. Technologies can range from simple, inexpensive software purchases to complete systems for managing large amounts of data. Five areas are often addressed when technology is considered in the context of measurement and evaluation.

First, the data collected for Level 1 and the self-assessments at Level 2 need to be managed efficiently using technology. Because of the high percentage of programs evaluated at these levels, technology must be used so that data administration and integration do not consume too many resources. A variety of tools are available, ranging from scannable documents to subscription software that processes Level 1 and 2 data on an outsourced basis. This level of data requires only simple analysis (a minimal example of such a tabulation appears at the end of this section).

The second area involves Level 2 data that goes beyond the self-assessment applications. Designing tests that are more objective and checking the validity and reliability of tests may require test design software, ranging from simple test construction software to detailed software for designing all types of tests, including simulations.

The third area is software for follow-up evaluations. This often involves the use of surveys, interviews, and focus group information. A variety of software packages are available to process data from surveys and questionnaires, including qualitative analysis for focus groups and interviews.

The fourth area of consideration is software for conducting detailed results studies. Some software packages are available to carry out experimental research designs, such as a control group analysis, while others are designed to automate ROI studies using questionnaires and action plans.

Fifth, your organization's learning management system may provide some, if not all, of the technology needed to administer the measurement and evaluation processes. Many learning management system providers have built-in evaluation tools, or links to the most common tools, to manage the data needed for Levels 1, 2, and 3, and sometimes even Levels 4 and 5.

In short, technology is an important way to ease implementation. Appropriate use of technology reduces the amount of time to collect, tabulate, analyze, and report data. When time is minimized, implementation is much easier.
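
To show how simple the Level 1 analysis mentioned in the first area can be, here is a minimal Python sketch that averages reaction ratings by question. The question names, rating scale, and responses are hypothetical and are not tied to any particular survey tool.

```python
# Minimal sketch: average Level 1 reaction ratings by question (1-5 scale).
# The responses below are hypothetical placeholders.
from statistics import mean

responses = [
    {"relevance": 4, "facilitator": 5, "planned_use": 4},
    {"relevance": 5, "facilitator": 4, "planned_use": 5},
    {"relevance": 3, "facilitator": 4, "planned_use": 4},
]

for question in responses[0]:
    avg = mean(r[question] for r in responses)
    print(f"{question}: average rating {avg:.2f} (n={len(responses)})")
```

In practice, a learning management system or survey platform produces this kind of summary automatically; the value of technology here is that no one has to produce it by hand.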

Sharing Information

Because the ROI Methodology is new to many individuals, it is helpful to have a peer group experiencing similar issues and frustrations. Tapping into an international network, joining or creating a local network, or building an internal network are all possible ways to use the resources, ideas, and support of others.

One way to integrate the information needs of talent development professionals for an effective ROI evaluation process is through an internal ROI network. The concept of a network is simplicity itself. The idea is to bring people who are interested in ROI together throughout the organization to work under the guidance of trained ROI evaluators. Typically, advocates within the department see both the need for beginning networks and the potential of ROI evaluation to change how the department does its work. Interested network members learn by designing and executing real evaluation plans. This process generates commitment for accountability as a new way of doing business for the department.

Preparing the Management Team

Several actions can be taken with the management team to ensure that they are supporting evaluation and using the data properly. In some cases, they need to understand more about ROI. Four specific efforts need to be considered.

First, present data to the management team routinely so that they understand the value of talent development, particularly Level 3: Application and Implementation, which translates directly into new skills used in the workplace, and Level 4: Impact, which relates directly to goals and key performance indicators. The management team also needs Level 5: ROI, which shows the value of learning compared with its cost (a brief worked example of this calculation appears at the end of this section). Having routine information in these areas helps them build an appreciation for the value of talent development so that their support will increase in the future.

Second, get your management team more involved in the evaluation process. In addition to reviewing data, managers may be asked to help make decisions about the fate of, or adjustments to, a program. They may be needed in collecting some of the data and supporting data collection efforts. In some cases, they may be specifying what data are needed, including assisting with the up-front business alignment. Manager input is needed throughout the accountability cycle, from the initial business alignment to setting objectives to assisting with evaluation.

Third, ensure managers get full credit for improvements. Although this is a communication and reporting issue, it is critical that managers support accountability efforts in the future. All the improvements in the workplace (which generated the ROI) should be credited to the proper individuals, with the key manager being the person responsible for it. If the talent development function takes credit for the success of the program, the relationship can sour quickly. Give the praise where it is deserved and needed.

Fourth, teach or brief managers on the ROI Methodology. Managers need to understand what the process is about and what it can do—and not do—for them. They need to understand the resources involved in conducting credible ROI studies, so they can help the talent development staff use this tool more selectively. To accomplish this, the organization might offer a special workshop, "Manager's Role in Talent Development." Varying in duration from a half day to two days, this workshop can shape critical skills and change perceptions to enhance support for the ROI Methodology. Afterward, managers will have an improved perception of the impact of learning, a clearer understanding of their role in the talent development process, and often a renewed commitment to make learning work in their organization.
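
As noted above, here is a brief worked example of the Level 5 calculation, using the benefit-cost ratio and ROI formulas as they are commonly defined in the ROI Methodology. The benefit and cost figures are entirely hypothetical.

```python
# Worked example of the standard ROI arithmetic (hypothetical figures).
program_benefits = 240_000  # monetary value of the Level 4 impact, isolated and converted
program_costs = 150_000     # fully loaded program costs

bcr = program_benefits / program_costs                            # benefit-cost ratio
roi_pct = (program_benefits - program_costs) / program_costs * 100

print(f"BCR: {bcr:.2f}")       # 1.60
print(f"ROI: {roi_pct:.0f}%")  # 60%
```

In plain terms, this hypothetical program returns $1.60 for every dollar spent, or a 60 percent return after costs are recovered, which is the kind of statement the management team can act on.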

Making the ROI Methodology Routine

After the ROI Methodology is implemented in the organization, it must be sustained; it must become routine so that it doesn’t deteriorate and fade away. Making it routine requires building it into a process perceived as necessary, essential, and almost effortless. This section reviews the key steps designed to make it routine.

For lasting value, measurement, evaluation, and ROI must not be seen as a one-time event or an add-on process. Evaluation studies must be planned and integrated into the talent development process as early as possible. The tasks, processes, and procedures of evaluation must be painless, which will increase the odds that they will be used regularly. When evaluation becomes routine, it will become an accepted and important—and sometimes required—element in the talent development cycle.

Making Planning Routine

Intuitively, most professionals realize that planning is an important way to minimize problems, reduce resources, and stay focused on the outcome. Nowhere is this more true than when planning a comprehensive evaluation. Planning minimizes the time required later, keeps the evaluation efficient and less expensive, and helps all stakeholders to become focused on tasks and processes. It also serves to gain buy-in from key clients and makes evaluation routine.

Planning is essential whenever a major evaluation study is conducted. Even if the program has been operational for some time and the evaluation is suddenly requested, planning is needed to decide how to collect, process, and report data. Ideally, the evaluation plan should be in place before the talent development program is operational so that the planning may influence its design, development, and delivery.

Implementing and communicating the evaluation plan is the next step to making planning routine. This means detailing the sequence of events as they should occur from the time that the evaluation plan is developed until all the evaluation results have been communicated.

These planning documents can be completed in a matter of hours when the various team members and stakeholders are available to provide input. The payoff is tremendous, as planning not only makes the process faster and more efficient, but also enhances the likelihood that it will become routine.

Integrating Evaluation Into Talent Development Programs

One of the most effective ways to make evaluation routine is to build it into the program. This approach changes the perception of evaluation from an add-on process to one that is an integral part of the application of learning. It means going beyond reaction and learning evaluation data, the capture of which is usually built into the talent development programs.

Built-in evaluations can be accomplished in several ways. One of the most effective is to use action plans that serve as application tools for the knowledge and skills learned in the program. The action plan is included as part of the program, and its requirement is communicated early. Appropriate agenda time is taken to explain how to develop and use the action plan and, ideally, participants are provided program time to complete it. The follow-up on the success of the action plan provides data for evaluation. In this context, the action plan becomes an application tool instead of an evaluation tool. The commitment to the participants is that the completed action plan data will be summarized for the entire sample group and returned to them so that each member can see what others have accomplished. This provides a little of “what’s in it for me” for the participants. Action plans are used to drive not only application and implementation data, but also impact data.

Another built-in technique is to integrate the follow-up questionnaire with the talent development program. Ample time should be provided to review the items on the questionnaire and secure a commitment to provide data. This step-by-step review of expectations helps clarify confusing issues and improves response rates as participants make a commitment to provide the data. This easy-to-accomplish step can be a powerful way to enhance data collection. It prevents the need for constant reminders to participants to provide data at a later follow-up date.

Using Shortcuts

One of the most significant barriers to the implementation of measurement and evaluation is the potential time and cost involved. An important tradeoff exists between additional analysis and the use of shortcut methods, including estimation. In those tradeoffs, shortcuts win almost every time. A growing body of research shows that shortcuts and estimates, when provided by those who know a process best (experts), can be even more accurate than more sophisticated, detailed analysis. Essentially, evaluators try to avoid the high cost of increasing accuracy because it just doesn't pay off.

Sometimes, the perception of excessive time and cost is only a myth; at other times, it is a reality. Most organizations can implement the evaluation methodology for about 3 percent to 5 percent of the talent development budget. Nevertheless, evaluation still commands significant time and monetary resources. A variety of approaches have commanded much attention recently and represent an important part of the implementation strategy.

Take Shortcuts at Lower Levels

When resources are a primary concern and shortcuts need to be taken, it is best to take them at lower levels in the evaluation scheme. This is a resource allocation issue. For example, if Level 4: Impact evaluation is conducted, Levels 1 to 3 do not have to be as comprehensive. This shift places most of the emphasis on the highest level of the evaluation.

Fund Measurement and Evaluation With Program Cost Savings

Almost every ROI impact study generates data from which to make improvements. Results at different levels often show how the program can be altered or completely redesigned to make it more effective and efficient. These actions can lead to cost savings. In a few cases, the program may have to be eliminated because it is not adding value and no amount of adjustment will result in program improvement. In this case, substantial cost savings can be realized as the program is eliminated. A logical argument can be made to shift a portion of these savings to fund additional measurement and evaluation. Some organizations gradually migrate to a budget target of 5 percent for measurement and evaluation spending by using the savings generated from the use of evaluation. This provides a disciplined and conservative approach to additional funding.

Think About This

As a percentage of the total talent development budget, how much do you currently spend on evaluation? What will it take to increase your funding?

Use Participants

One of the most effective cost-saving approaches is to have participants conduct major steps of the process. Participants are the primary source for understanding the degree to which learning is applied and has driven success on the job. The responsibilities for the participants should be expanded from the traditional requirement of involvement in learning activities and application of new skills. They must be asked to show the impact of those new skills and provide data about success as a routine part of the process. Consequently, the role of the participant can be expanded from learning and application to measuring the impact and communicating information.

Use Sampling

Not all programs require comprehensive evaluation, nor should all participants necessarily be evaluated in a planned follow-up. Thus, sampling can be used in two ways. First, you may select only a few programs for Levels 4 and 5 evaluation. Those programs should be selected based on the criteria described earlier in this book. Next, when a program is evaluated, in most cases, only a sample of participants should be evaluated to keep costs and time to a minimum.

Use Estimates

Estimates are an important part of the process. They are also the least expensive way to arrive at a number or value. Whether isolating the effects of the talent development program or converting data to monetary value, estimates can be a routine and credible part of the process. The important point is to make sure the estimate is credible and follows systematic, logical, and consistent steps.
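
One common way to keep an estimate credible, in line with the conservative approach the ROI Methodology favors, is to adjust each estimate by the estimator's own confidence in it. The sketch below illustrates that adjustment for participant estimates used to isolate the effects of a program; every figure is hypothetical.

```python
# Minimal sketch: confidence-adjusted participant estimates (hypothetical figures).
# Each entry: (monthly improvement in dollars, fraction attributed to the
# program, estimator's confidence in that attribution).
estimates = [
    (2_000, 0.60, 0.80),
    (1_500, 0.50, 0.70),
    (3_000, 0.40, 0.90),
]

adjusted_total = sum(value * attribution * confidence
                     for value, attribution, confidence in estimates)
print(f"Conservative, confidence-adjusted monthly value: ${adjusted_total:,.0f}")
```

Discounting each estimate this way errs on the low side, which is what makes the resulting number defensible in front of a skeptical audience.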

Use Internal Resources

An organization does not necessarily have to employ consultants to develop ROI studies and address other measurement and evaluation issues. Internal capability can be developed, eliminating the need to depend on consultants, which can reduce costs. This approach is perhaps one of the most significant timesavers. The difference in using internal resources versus external consultants can save as much as 50 to 60 percent of the costs of a specific project.

Use Standard Templates

Most organizations don't have the time or resources to customize each evaluation project. To the extent possible, develop standard instruments that can be used over and over. If customization is needed, it should be only a minor part of the instrument. For example, the reaction questionnaire should be standardized and automated to save time and to make evaluation routine. Learning measurements can be standard and built into the reaction evaluation questionnaire, unless more objective methods are needed, such as testing, simulation, and skill practices. Follow-up evaluation questionnaires can be standard, with only a part of the questionnaire being customized. Patterned interviews can be developed as standard processes. Focus group agendas can also be standard. Standardize as much as possible so that evaluation forms are not reinvented for each application. As a result, tabulation is faster and often less expensive. When this is accomplished, evaluation will be routine.

Use Streamlined Reporting

Reporting data can be one of the most time-consuming parts of evaluation, taking precious time away from collecting, processing, and analyzing data. Yet, reporting is often the most critical part of the process, because many audiences need different information. When the audience understands the evaluation methodology, they can usually digest information in a brief format. For example, it is possible to present the results of a study using a one-page format. It is, however, essential for the audience to understand the approach to evaluation and the principles and assumptions behind the methodology; otherwise, they will not understand what the data mean.

The good news is that many shortcuts can be taken to supply the data necessary for the audience and manage the process in an efficient way. All these shortcuts are important processes that can help make evaluation routine because when evaluation is expensive, time consuming, and difficult, it will never become routine.

Getting It Done

Now it is time to develop your ROI implementation plan using Exercise 7-1. Items may be added or removed so that this becomes a customized document. This plan summarizes key issues presented in the book and will help you as you move beyond the basics of ROI.

Exercise 7-1. Measurement and Evaluation Strategy and Plan

This document addresses a variety of issues that make up the complete measurement and evaluation strategy and plan. Each of the following items should be explored and decisions made regarding the specific approach or issue.

 

Purposes of Evaluation

From the list of evaluation purposes, select the ones that are relevant to your organization:

❑ Determine success in achieving program objectives.

❑ Identify strengths and weaknesses in the talent development process.

❑ Set priorities for talent development resources.

❑ Test the clarity and validity of tests, cases, and exercises.

❑ Identify the participants who were most (or least) successful with the program.

❑ Reinforce major points made during the program.

❑ Decide who should participate in future programs.

❑ Compare the benefits to the costs of a talent development program.

❑ Enhance the accountability of talent development.

❑ Assist in marketing future programs.

❑ Determine if a program was an appropriate solution.

❑ Establish a database to assist management with decision making.

 

Are there any others?

 

Overall Evaluation Purpose Statement

State the purpose for conducting an evaluation:

 

 

Stakeholder Groups

Identify specific stakeholders that are important to the success of measurement and evaluation:

 

 

 

Evaluation Targets and Goals

List the approximate percentage of programs currently evaluated at each level. List the number of programs you plan to evaluate at each level by a specific date.

 

Staffing

Indicate the philosophy of using internal or external staff for evaluation work and the number of staff involved in this process part time and full time.

• Internal versus external philosophy:

 

• Number of part-time staff:
   » Names or titles:

 

• Number of full-time staff:
   » Names or titles:

 

Responsibilities

Detail the responsibilities of different groups in talent development. Generally, specialists are involved in a leadership role in evaluation, and others are involved in providing support and assistance in different phases of the process.

Group | Responsibilities
   
   
   
   

 

Budget

The budget for measurement and evaluation in best-practice organizations is 3 to 5 percent of the learning and development budget. What is your current level of measurement and evaluation investment? What is your target?

 

 

 

Data Collection Methods

Indicate the current data collection methods used and planned for the different levels of evaluation.

Building Capability

How will staff members develop their measurement and evaluation capability?

Action | Audience | Who Conducts or Organizes?

ROI briefings (one to two hours)

   

Half-day ROI workshop

   

One-day ROI workshop

   

Two-day ROI workshop

   

ROI certification

   

Coaching

   

ROI conferences

   

Networking

   

Use of Technology

How do you use technology for data collection, integration, and scorecard reporting, including technology for conducting ROI studies? How do you plan to use technology?

Method | Current Use | Planned Use

Surveys

Tests

Other data collection

Integration

ROI

Scorecards

Communication Methods

Indicate the specific methods you currently use to communicate results. What methods do you plan to use?

Method | Current Use | Planned Use

Meetings

Interim and progress reports

Newsletters

Email and electronic media

Brochures and pamphlets

Case studies

Use of Data

Indicate how you currently use evaluation data by placing a “✓” in the appropriate box. Indicate your planned use of evaluation data by placing an “X” in the appropriate box.

Questions or Comments