Kirkpatrick’s Training Evaluation Model

Donald L. Kirkpatrick, Professor Emeritus at the University of Wisconsin, first published his ideas in 1959, in a series of articles in the Journal of the American Society of Training Directors. The articles were subsequently included in Kirkpatrick’s book Evaluating Training Programs. He was president of the American Society for Training and Development (ASTD) in 1975. Kirkpatrick’s 1994 book, Evaluating Training Programs, refined his originally published ideas of 1959 and further increased awareness of them, so that his theory has arguably become the most widely used and popular model for the evaluation of training and learning. Kirkpatrick’s training evaluation model is now considered an industry standard across the HR and training communities.

Four Levels of Kirkpatrick’s Training Evaluation Model

The basic structure of Kirkpatrick’s training evaluation model focuses on four levels: Reaction, Learning, Behavior, and Results.

  1. Reaction, or the extent to which learners were satisfied with the programme;
  2. Learning, or the extent to which learners took on board the course content;
  3. Behavior, or the extent to which learners applied their knowledge in role; and
  4. Results, or the extent to which targeted outcomes were achieved, such as cost reduction, increased quality and productivity.


Starting with Reaction: as the word implies, evaluation at this level measures how the learners react to the training. The necessary information is typically gathered with attitude questionnaires handed out at the end of the training class. The target outcome is the learner’s perception (reaction) of the course. Learners are often keenly aware of what they need to know to accomplish a task, so if the training program fails to satisfy their needs, a determination should be made as to whether the fault lies in the program’s design or in its delivery.

Reaction evaluation captures how the delegates felt about the training or learning experience and their personal reactions to it. Common ways of determining this are:

  • Did the trainees like and enjoy the training?
  • Did they consider the training relevant?
  • Was it a good use of their time?
  • Did they like the venue, the style, timing, domestics, etc?
  • How actively did they participate?
  • How easy and comfortable was the experience?
  • How much effort was required to make the most of the learning?
  • How practicable did the learning seem, and what potential did they see for applying it?

The next level, Learning, is the extent to which participants change attitudes, improve knowledge, and increase skill as a result of participating in the learning process. It addresses the question, “Did the participants learn anything?”

Learning evaluation requires some type of post-testing to ascertain what skills were learned during the training. Post-testing is only valid, however, when combined with pre-testing, so that you can differentiate between what participants already knew before the training and what they actually learned during the training program. Learning evaluation is thus the measurement of the increase in knowledge or intellectual capability from before to after the learning experience.

This can be assessed by asking:

  • Did the trainees learn what was intended to be taught?
  • Did the trainee experience what was intended for them to experience?
  • What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?
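The pre/post comparison described above can be sketched in code. This is an illustrative example, not part of Kirkpatrick’s own text: the normalised-gain measure and the scores used here are invented for the sketch.

```python
# Illustrative sketch of Level Two (Learning) evaluation: comparing a
# trainee's pre-test and post-test scores. The normalised gain expresses
# the improvement as a share of the headroom that was available.

def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalised gain: (post - pre) / (max_score - pre)."""
    if pre >= max_score:
        raise ValueError("No headroom: pre-test score is already at maximum")
    return (post - pre) / (max_score - pre)

# A trainee scored 40 before training and 70 after, out of 100:
gain = learning_gain(40, 70)
print(f"Normalised learning gain: {gain:.2f}")  # 30 of the 60 available points -> 0.50
```

Without the pre-test score of 40, the post-test score of 70 alone could not show whether the training added anything, which is the point made above about combining pre- and post-testing.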

Performance (behavior) evaluation involves testing the trainees’ ability to perform learned skills on the job, rather than in the classroom. It determines whether correct performance is now occurring by answering the question, “Do people use their newly acquired learning on the job?” Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; it can be conducted immediately after the training and again several months later, depending on the situation. To narrow down the results at this level, we ask:

  • Did the trainees put their learning into effect when back on the job?
  • Were the relevant skills and knowledge used?
  • Was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?
  • Was the change in behaviour and new level of knowledge sustained?
  • Would the trainee be able to transfer their learning to another person?
  • Is the trainee aware of their change in behaviour, knowledge, skill level?

The final level, Results, measures the training program’s effectiveness, that is, “What impact has the training achieved?” These impacts can include monetary gains, efficiency, morale, teamwork, and so on. Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee.

Measures would typically be business or organizational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance: for instance, numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, and retention.

According to Kirkpatrick, the subject of evaluation, or the level at which evaluation takes place, depends on the phase during which the evaluation occurs. In Kirkpatrick’s model, each successive evaluation level is built on information provided by the lower levels. Evaluation should therefore always begin with level one and then, as time and budget allow, move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level’s evaluation. Thus, each successive level represents a more precise measure of the effectiveness of the training program, but at the same time requires a more rigorous and time-consuming analysis.

Benefits of Kirkpatrick’s Model

Kirkpatrick’s training evaluation model is relatively simple to understand and presents a useful taxonomy for considering the impact of training programmes at different organisational levels. As discussed above, there are risks and weaknesses to using the individual levels in isolation. However, Kirkpatrick did not mean for the framework to be used in that way. Rather, each level of evaluation is intended to answer whether a fundamental requirement of the training program was met, with a view to building up a picture of the whole-business impact of the training. All levels are important, as they contain diagnostic checkpoints for their predecessors, enabling root cause analysis of any problems identified. For example, if participants did not learn (Level Two), participant reactions gathered at Level One (Reaction) may reveal barriers to learning that can be addressed in subsequent programmes. Thus, used correctly, Kirkpatrick’s training evaluation model can benefit organisations in a number of ways.

Firstly, the evaluation framework can validate training as a business tool. Training is one of many options that can improve performance and profitability; proper evaluation allows comparisons and informed selection in preference to, or in combination with, other methods. Secondly, effective evaluation can justify the costs incurred in training. When money is tight, training budgets are amongst the first to be sacrificed, and only thorough, quantitative analysis lets training departments make the case necessary to resist these cuts. Thirdly, the right measurement and feedback can help to improve the design of training. Training programmes need continuous improvement and updating to provide better value and increased benefits; without formal evaluation, the basis for change is subjective. Lastly, systematic evaluation techniques allow organisations to make informed choices about the best training methods to deliver specific results. A variety of training approaches are available at different prices and with different outcomes. By using comparative evaluation techniques, organisations can make evidence-based decisions about how to get the most value for money, and thereby minimise the risk of wasting resources on ineffective training programmes.

Criticisms of Kirkpatrick’s Model

Despite its popularity, Kirkpatrick’s training evaluation model is not without its critics. Some argue that the model is conceptually too simple and does not take into account the wide range of organisational, individual, and design and delivery factors that can influence training effectiveness before, during, or after training. Contextual factors, such as organisational learning cultures and values, workplace support for skill acquisition and behavioral change, and the adequacy of tools, equipment and supplies, can greatly influence the effectiveness of both the process and the outcomes of training. Other detractors criticize the model’s assumption of linear causality: that positive reactions lead to greater learning, which in turn increases the likelihood of better transfer and, ultimately, more positive organisational results.

Training professionals also criticize the simplicity of Kirkpatrick’s training evaluation model on a practical level. Since it offers no guidance about how to measure its levels and concepts, users often find it difficult to operationalise the model, and are obliged to make assumptions and leaps of logic that leave their cost-benefit analyses open to criticism. Most are able to gather Level 1 and Level 2 feedback and metrics with relative ease, but find that the difficulty, complexity and cost of conducting an evaluation increase as the levels advance and become more vague. Only five per cent of organisations measure ROI (and they do so for only a small percentage of their programs), and fewer than ten per cent regularly measure business impact. Paradoxically, therefore, it is precisely the elements that Heads of Learning and Development most want to measure that they end up measuring the least.

On a more fundamental level, some have taken issue with the content of Kirkpatrick’s training evaluation model. Phillips (1994), for example, adds a fifth level to the framework in order to address the recurring need for organisations to measure return on investment in training and development activity. Others argue that the model fundamentally overlooks the role of learning and development as a business support function. Whilst it is appropriate for business-critical lines to be measured according to the outputs for which they are directly accountable, e.g. revenue, profit or customer satisfaction, it is not reasonable to measure HR and Training by the same means. Since these non-revenue-generating functions exist to support strategic initiatives and to make business lines run better, their business impact needs to be measured differently. Because Kirkpatrick’s training evaluation model overlooks this, practitioners who attempt to apply it to their business activity end up spending large amounts of time and energy trying to evaluate direct business impact where they have only indirect responsibility.
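Phillips’ fifth level is conventionally expressed as ROI (%) = (net programme benefits / programme costs) × 100. The sketch below illustrates that widely cited formula; the cost and benefit figures are invented for the example, and isolating the monetary benefits attributable to training is itself the hard part in practice.

```python
# Illustrative sketch of Phillips' Level 5 ROI calculation:
# ROI (%) = ((monetary benefits - programme costs) / programme costs) * 100.

def training_roi(monetary_benefits: float, programme_costs: float) -> float:
    """Return training ROI as a percentage of programme costs."""
    if programme_costs <= 0:
        raise ValueError("Programme costs must be positive")
    net_benefits = monetary_benefits - programme_costs
    return net_benefits / programme_costs * 100.0

# A programme costing 50,000 whose isolated monetary benefits are 80,000:
print(f"ROI: {training_roi(80_000, 50_000):.0f}%")  # (30,000 / 50,000) * 100 = 60%
```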
