Hypothesis, evidence, proof

Patti P. Phillips, Ph.D., and Jack J. Phillips, Ph.D.

Global credibility expert Mitchell Levy recently interviewed us separately for his new book, “Credibility Nation.” Mitchell was impressed with our focus on credibility in our work.

Five steps in our ROI Methodology have the words “make it credible.” The key to showing the value of your learning and talent development programs is to be credible with the data, assumptions and analysis. It’s helpful to think about the concepts you encountered (and maybe even avoided) when you studied statistics or took a basic research methods course: hypothesis, evidence and proof. Let’s connect these ideas to a typical learning program evaluation.

The hypothesis for a program is the results you think it will deliver: not just learning, or the use of that learning, but the impact of the program in workplaces and communities. Each hypothesis connects to a specific impact measure. For example, a leadership development program will decrease employee turnover, or a new technical training program will reduce errors.

Beginning with the end in mind, with clearly defined measures, sets you on the right path. Every learning program should have impact as its ultimate goal. The first step in our model is to start with why, connecting the program to business measures. Part of this hypothesis is setting objectives for impact, specifying how much improvement is expected. A clearly stated impact objective is an important, impressive, and necessary hypothesis for your new program.

The evidence comes from data collection. As the program is designed, developed, and implemented, data collection begins and continues. Measures of reaction, learning, application, and impact are all evidence of success. To make a difference with the program, you need application and impact. Measuring impact pinpoints improvement in the measure, and that improvement is evidence that you have made a difference. But other influences can affect that measure, and they are almost always present. Think about measures such as sales, productivity, retention, and waste, which are usually influenced by many factors beyond your program.

The third item is the proof that your program has made a difference. This involves isolating the effects of your program on the impact data, separating them from other influences. Isolating the effects can always be done, and it provides proof that you've made a difference. There are many techniques to accomplish this. The classic approach is an experimental-versus-control-group design, in which one group receives the program and the other does not. If the groups are well matched, the difference between them shows the impact of the program, as sketched below.
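To make the comparison concrete, here is a minimal sketch in Python with invented sales numbers; it is not the full methodology, just the arithmetic of a well-matched comparison:

```python
# Average monthly sales per rep for two well-matched groups (numbers invented).
trained = [52.1, 48.7, 55.3, 50.9, 53.4]   # group that received the program
control = [45.2, 47.8, 44.9, 46.5, 45.8]   # matched group that did not

# With well-matched groups, the difference in group means is the
# improvement attributable to the program.
effect = sum(trained) / len(trained) - sum(control) / len(control)
print(f"Impact attributed to the program: {effect:.2f} sales per rep per month")
```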

Another technique is simple trend-line analysis, in which the pre-program trend of an impact measure is projected into the post-program period. The actual post-program value is then compared with the projected value, and the difference is the impact attributed to the program if two conditions are met (a sketch of the calculation follows the conditions):

  • “Would the pre-program trend have continued in the post-period if we did nothing?” If the answer is yes, the first condition is met.
  • “Did anything else occur in the post-period that would have affected this measure?” If the answer is no, the second condition is met.
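If both answers hold, the calculation itself is simple. Here is a minimal sketch in Python, assuming monthly data for a decreasing measure such as errors; the numbers are invented, and NumPy's least-squares line fit stands in for whatever trending method you prefer:

```python
import numpy as np

# Monthly values of an impact measure, e.g., errors per 1,000 units (invented).
pre = np.array([42, 41, 41, 40, 40, 39, 39, 38, 38, 37, 37, 36], dtype=float)  # 12 pre-program months
post = np.array([33, 32, 31, 31, 30, 30], dtype=float)                         # 6 post-program months

# Fit a straight-line trend to the pre-program period.
slope, intercept = np.polyfit(np.arange(len(pre)), pre, deg=1)

# Project that trend into the post-program period.
months_post = np.arange(len(pre), len(pre) + len(post))
projected = slope * months_post + intercept

# The gap between the projection and the actual values is the improvement
# attributed to the program, valid only if both conditions above are met.
attributed = projected.mean() - post.mean()
print(f"Projected average: {projected.mean():.1f}")
print(f"Actual average:    {post.mean():.1f}")
print(f"Improvement attributed to the program: {attributed:.1f} errors per 1,000 units")
```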

There are other techniques involving mathematical modeling and estimates from the most credible source. When estimates are used to sort out the effects of the program, the estimate is always adjusted for its potential error, essentially removing that error from the process; a worked example follows the list below. Three of the 12 Guiding Principles of the ROI Methodology drive this technique:

  • Adjust estimates of improvements for the potential error of estimation.
  • When collecting and analyzing data, use only the most credible sources.
  • Use at least one method to isolate the effects of a program.
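To make the adjustment concrete, here is a minimal worked sketch, assuming the common practice of discounting a participant's estimated contribution by their stated confidence level; all figures are invented:

```python
# A participant reports a monthly improvement, estimates what share of it
# came from the program, and states their confidence in that estimate.
monthly_improvement = 20.0      # e.g., 20 fewer errors per month (invented)
estimated_contribution = 0.60   # participant credits 60% to the program
confidence = 0.80               # participant is 80% confident in that estimate

# Adjusting for the error of the estimate: multiply by the confidence level,
# which removes the potential 20% error from the claimed contribution.
adjusted = monthly_improvement * estimated_contribution * confidence
print(f"Improvement credited to the program: {adjusted:.1f} errors per month")  # 9.6
```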

For more detail on different techniques to isolate the effects of a program on the impact data, please contact us at [email protected].

The good news is that isolating the program's effects can always be done, and it is being done. To date, more than 6,000 people have earned the Certified ROI Professional (CRP) credential. To do so, they had to sort out the effects of their program on the impact data they collected. Many professionals are showing that this can be done credibly and defensibly. Senior executives and chief financial officers support this process. This step provides proof of the program's success and gives your program the ultimate credibility.

If you want to be credible, set a clear hypothesis connected to business impact, measure that impact data, and take a step to show the proof that you’ve made a difference — it’s just that easy.

This article was originally published May 27, 2021, on www.chieflearningofficer.com