
kirkpatrick's four levels of training evaluation

This grid illustrates the basic Kirkpatrick structure at a glance. The second grid, beneath this one, is the same thing with more detail.

level 1: reaction (evaluation type - what is measured)

evaluation description and characteristics:

·         reaction evaluation is how the delegates felt about the training or learning experience

examples of evaluation tools and methods:

·         eg., 'happy sheets', feedback forms

·         also verbal reaction, post-training surveys or questionnaires

relevance and practicability:

·         quick and very easy to obtain

·         not expensive to gather or to analyse

level 2: learning

evaluation description and characteristics:

·         learning evaluation is the measurement of the increase in knowledge - before and after

examples of evaluation tools and methods:

·         typically assessments or tests before and after the training

·         interview or observation can also be used

relevance and practicability:

·         relatively simple to set up; clear-cut for quantifiable skills

·         less easy for complex learning

level 3: behaviour

evaluation description and characteristics:

·         behaviour evaluation is the extent of applied learning back on the job - implementation

examples of evaluation tools and methods:

·         observation and interview over time are required to assess change, relevance of change, and sustainability of change

relevance and practicability:

·         measurement of behaviour change typically requires the cooperation and skill of line-managers

level 4: results

evaluation description and characteristics:

·         results evaluation is the effect on the business or environment produced by the trainee

examples of evaluation tools and methods:

·         measures are already in place via normal management systems and reporting - the challenge is to relate them to the trainee

relevance and practicability:

·         not difficult for an individual trainee, unlike measurement across the whole organisation

·         the process must attribute clear accountabilities

 

 

kirkpatrick's four levels of training evaluation in detail

This grid shows the Kirkpatrick structure in detail, particularly the modern-day interpretation of the Kirkpatrick learning evaluation model: its usage, implications, and examples of tools and methods. It follows the same format as the grid above, but with more detail and explanation:

level 1: reaction (evaluation type - what is measured)

evaluation description and characteristics:

·         reaction evaluation is how the delegates felt, and their personal reactions to the training or learning experience, for example:

·         did the trainees like and enjoy the training?

·         did they consider the training relevant?

·         was it a good use of their time?

·         did they like the venue, the style, the timing, the domestics, etc?

·         level of participation

·         ease and comfort of experience

·         level of effort required to make the most of the learning

·         perceived practicability and potential for applying the learning

examples of evaluation tools and methods:

·         typically 'happy sheets'

·         feedback forms based on subjective personal reaction to the training experience

·         verbal reaction, which can be noted and analysed

·         post-training surveys or questionnaires

·         online evaluation or grading by delegates

·         subsequent verbal or written reports given by delegates to managers back at their jobs

relevance and practicability:

·         can be done immediately after the training ends

·         very easy to obtain reaction feedback

·         feedback is not expensive to gather or to analyse for groups

·         important to know that people were not upset or disappointed

·         important that people give a positive impression when relating their experience to others who might be deciding whether to attend the same training
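Reaction feedback is typically aggregated per question across the group. As a toy sketch only (the question names, rating scale, and scores below are invented for illustration, not part of any standard Kirkpatrick tool), 'happy sheet' ratings could be summarised as a mean score per question:

```python
# Hypothetical sketch: summarising level 1 'happy sheet' ratings.
# Each delegate rates questions (e.g. relevance, venue) on a 1-5 scale;
# all names and numbers here are invented examples.

def summarise_reactions(responses):
    """Return the mean rating per question across all delegates."""
    totals = {}
    counts = {}
    for response in responses:
        for question, rating in response.items():
            totals[question] = totals.get(question, 0) + rating
            counts[question] = counts.get(question, 0) + 1
    return {q: totals[q] / counts[q] for q in totals}

responses = [
    {"relevance": 4, "venue": 5},
    {"relevance": 5, "venue": 3},
]
print(summarise_reactions(responses))  # {'relevance': 4.5, 'venue': 4.0}
```

A low mean on any question flags the "upset or disappointed" risk noted above early, while the feedback is still cheap to act on.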

level 2: learning

evaluation description and characteristics:

·         learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:

·         did the trainees learn what was intended to be taught?

·         did the trainees experience what was intended for them to experience?

·         what is the extent of advancement or change in the trainees after the training, in the direction or area that was intended?

examples of evaluation tools and methods:

·         typically assessments or tests before and after the training

·         interview or observation can be used before and after, although this is time-consuming and can be inconsistent

·         methods of assessment need to be closely related to the aims of the learning

·         measurement and analysis are possible and easy on a group scale

·         reliable, clear scoring and measurements need to be established, so as to limit the risk of inconsistent assessment

·         hard-copy, electronic, online or interview-style assessments are all possible

relevance and practicability:

·         relatively simple to set up, but more investment and thought required than reaction evaluation

·         highly relevant and clear-cut for certain training, such as quantifiable or technical skills

·         less easy for more complex learning, such as attitudinal development, which is famously difficult to assess

·         cost escalates if systems are poorly designed, which increases the work required to measure and analyse
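The before-and-after testing described above reduces to a simple calculation once matched pre- and post-training scores exist. A minimal sketch, assuming each trainee sat the same test before and after (the scores below are invented for illustration):

```python
# Hypothetical sketch: level 2 learning evaluation from pre/post test scores.
# Assumes one pre-score and one post-score per trainee, in matched order;
# the score values are invented examples.

def learning_gains(pre, post):
    """Return per-trainee score gains and the group average gain."""
    gains = [after - before for before, after in zip(pre, post)]
    return gains, sum(gains) / len(gains)

pre_scores = [55, 60, 72]
post_scores = [70, 78, 80]
gains, average = learning_gains(pre_scores, post_scores)
print(gains)              # [15, 18, 8]
print(round(average, 1))  # 13.7
```

This works cleanly for quantifiable skills, as the grid notes; for attitudinal development there is usually no comparable numeric score to subtract, which is exactly why that kind of learning is harder to assess.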

level 3: behaviour

evaluation description and characteristics:

·         behaviour evaluation is the extent to which the trainees applied the learning and changed their behaviour; this can be assessed immediately and several months after the training, depending on the situation:

·         did the trainees put their learning into effect when back on the job?

·         were the relevant skills and knowledge used?

·         was there noticeable and measurable change in the activity and performance of the trainees when back in their roles?

·         was the change in behaviour and the new level of knowledge sustained?

·         would the trainee be able to transfer their learning to another person?

·         is the trainee aware of their change in behaviour, knowledge, or skill level?

examples of evaluation tools and methods:

·         observation and interview over time are required to assess change, relevance of change, and sustainability of change

·         arbitrary snapshot assessments are not reliable because people change in different ways at different times

·         assessments need to be subtle and ongoing, and then transferred to a suitable analysis tool

·         assessments need to be designed to reduce the subjective judgement of the observer or interviewer, which is a variable factor that can affect the reliability and consistency of measurements

·...
