We just spent two days running professional development workshops for dozens of trainers in a regional centre. At the conclusion, we thanked our client, invited the audience to applaud themselves for their engagement and participation, then packed our bags and left.

“How do we know if the training we’ve completed has had the effect we intended? Did we do a good job or not? Did the client feel as though they received value for their investment?”

Whether the training is accredited or non-accredited, there are a number of ways we can answer these questions, and they fall under the heading of evaluation methodologies.

Firstly, a few models have gained the status of industry standards for evaluation: the Kirkpatrick model, the CIPP model and the Phillips ROI methodology all fit this description. I’ll be looking at one of these models shortly, but I encourage you to look into the others yourself by searching online and reading the examples that are provided. This is a great way to see how they’re put into practice.

Secondly, there are a number of processes that can be put into place to gather the information you need to evaluate training. These include pre- and post-training assessments, surveys and questionnaires, interviews, observations, client reports, focus groups, and comparative analysis.

There are numerous ways of actioning these in order to run an effective evaluation and get the answers to those questions we asked at the start.

For this article though, I want to focus purely on selecting the ‘best’ method for evaluating the quality of training against the performance evidence required. Just to clarify, the performance evidence is the set of expected performance standards described in national units of competency and accredited modules. In other words, it represents the ability of the students to perform the given tasks at a competent level.

For the non-accredited PD workshop I mentioned at the start, the performance requirements would be set out in the initial goals agreed with the client. For example, one goal may have been to gain a clear understanding of the new national standards for training organisations, which could be measured with a written or oral assessment of some form.

We should choose one model to evaluate performance in this instance. As you’ll see in a moment, choosing the model or method first is essential. I’ll use the Kirkpatrick model here because it’s designed to evaluate the training at each phase of the program. The model has four levels:

Level 1: Reaction

  • This step involves gauging the participants’ initial reactions to the training. This can be done through feedback forms, surveys, or discussions immediately after the training session. Questions should focus on participants’ satisfaction with the training, the relevance of the content, and the quality of the delivery. So, after a workshop, distribute a survey asking participants to rate various aspects of the training, such as the usefulness of the information, the engagement of the instructor, and the applicability of the skills learned.
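To make that analysis concrete, here’s a minimal sketch of summarising Level 1 reaction data. The survey items and the 1–5 ratings below are entirely hypothetical; substitute your own feedback-form questions and scale.

```python
# Minimal sketch of summarising Level 1 (Reaction) survey data.
# The survey items and 1-5 ratings below are hypothetical.

responses = {
    "content_relevance": [5, 4, 4, 5, 3],
    "instructor_engagement": [5, 5, 4, 4, 5],
    "skill_applicability": [4, 3, 4, 4, 3],
}

# Average rating per survey item: a quick read on participant satisfaction.
averages = {item: sum(ratings) / len(ratings) for item, ratings in responses.items()}

for item, avg in averages.items():
    print(f"{item}: {avg:.1f} / 5")
```

Low-scoring items (here, the applicability question) flag where the delivery or content may need attention before you dig deeper at the later levels.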

Level 2: Learning

  • This level assesses what the participants have actually learned from the training. You can measure this through tests, quizzes, or practical demonstrations before and after the training. These assessments should align with the learning objectives set at the beginning of the program. Before the training, administer a pre-test to assess baseline knowledge. After the training, conduct a similar post-test to evaluate the increase in knowledge or skills.
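If the pre- and post-tests are scored numerically, the Level 2 comparison can be as simple as the following sketch. The participant names and scores are invented for illustration.

```python
# Hypothetical sketch of measuring learning gain (Level 2) from
# pre- and post-training test scores out of 100.

pre_scores = {"Alice": 55, "Ben": 40, "Chloe": 70}
post_scores = {"Alice": 85, "Ben": 72, "Chloe": 90}

# Gain per participant: post-test score minus pre-test baseline.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

# Average gain across the cohort, a simple indicator of knowledge uplift.
average_gain = sum(gains.values()) / len(gains)

print(gains)
print(f"Average gain: {average_gain:.1f} points")
```

The per-participant figures matter as much as the cohort average: a learner who barely moved may need follow-up support even when the overall result looks healthy.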

Level 3: Behaviour

  • This stage evaluates how well the training translates into behaviour change in the workplace. It involves observing and assessing participants over time to see if they apply the skills or knowledge learned. Follow-up surveys, interviews with participants and their supervisors, and direct observation can be used. So, a few months post-training, you could conduct interviews or surveys with trainees and their managers to assess if the trainees are applying new skills on the job. You may even observe their performance in real-life work situations if possible.

Level 4: Results

  • The final level measures the ultimate impact of the training on organisational goals. This involves analysing key performance indicators such as productivity, sales, quality, or revenue. This data is typically gathered from organisational records and might require a longer period to accurately assess. Compare sales data or customer satisfaction ratings from before and after the training period to gauge the impact of a sales training program.
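A straightforward way to express that before-and-after comparison is relative change against the pre-training baseline. The KPI names and figures below are invented purely for illustration.

```python
# Hypothetical sketch of a Level 4 (Results) comparison: organisational
# KPIs before and after a training program. All figures are invented.

kpis_before = {"monthly_sales": 120_000, "customer_satisfaction": 3.8}
kpis_after = {"monthly_sales": 138_000, "customer_satisfaction": 4.3}

def percent_change(before: float, after: float) -> float:
    """Relative change from the pre-training baseline, as a percentage."""
    return (after - before) / before * 100

for kpi in kpis_before:
    change = percent_change(kpis_before[kpi], kpis_after[kpi])
    print(f"{kpi}: {change:+.1f}%")
```

Remember that other factors (seasonality, staffing, market shifts) also move these numbers, so treat the change as evidence to investigate rather than proof of training impact on its own.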

Let’s reflect on these levels. What did you notice about each level and how it relates to the processes shared earlier? 

If you said that the processes are just parts of each level – then you’d be correct. Processes like interviews, surveys and so on are the TOOLS you use to make the model fit your situation and gather the data you need to properly evaluate the training outcomes. When the focus is performance evidence, you’d put more weight on organisational feedback and observations, as these tools directly measure performance, whereas surveys and questionnaires measure understanding and knowledge transfer.
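One way to apply that weighting when you roll the different data sources into a single quality score is a weighted average, sketched below. The source scores (out of 100) and the weights are hypothetical; you would agree on both with your client before the evaluation.

```python
# Hypothetical sketch of weighting evaluation data sources when the
# focus is performance evidence. Scores (0-100) and weights are invented.

source_scores = {
    "observations": 78,
    "supervisor_feedback": 82,
    "surveys": 90,
}

# Weight direct performance measures more heavily than self-reported data.
# The weights must sum to 1.0 for a true weighted average.
weights = {"observations": 0.4, "supervisor_feedback": 0.4, "surveys": 0.2}

weighted_score = sum(source_scores[s] * weights[s] for s in source_scores)
print(f"Overall evaluation score: {weighted_score:.1f} / 100")
```

Notice how the strong survey result counts for less here than the direct observations, reflecting the emphasis on demonstrated performance rather than self-reported understanding.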

The data you receive can then be analysed to get a clear understanding of the effectiveness of the training. This will give you information that can be fed back into the future development of the training program for the purposes of improving things for the next time it’s run. Additionally, the feedback can be applied across other programs where there are similarities between clients, content or performance requirements. 

So, a quick summary: review different evaluation methods and models, select the method/model that suits your situation, choose the most appropriate tools to gather the information you need to answer your quality-based questions, and finally, analyse the data you receive to feed improvements back into your training program/s.

Lastly, enjoy the process, as you can gain a lot of satisfaction from seeing your training improve over time. It’s not just about receiving happy sheets from your participants at the end of a training session to let you know everything went just fine – it’s about seeking genuine quality outcomes for your students and ultimately, offering the best training solutions to your clients.