Over the last few weeks we have been exploring Instructional Design (ID), the practice of maximizing the effectiveness, efficiency and accessibility of instruction and other learning experiences. The ID process can be broken down into a number of steps:
1. Determine the current state and needs of the learner.
2. Define the end goal of instruction.
3. Develop a learning intervention to assist in the acquisition of new skills, knowledge or expertise.
Once these stages have been completed, the learning materials need to be implemented and then evaluated (and improved), and that is what we are going to investigate today.
The implementation phase of the instructional design process is when the instruction, as designed and developed, is actually delivered.
Training materials are collected and compiled, the environment is arranged, and the course is prepared for delivery.
For some courses (particularly in e-learning environments) a pre-test is made available. This can determine whether the learner can already perform some of the objectives (or larger units of instruction). If that is the case, learners can skip certain modules or parts of the curriculum. A pre-test can also determine whether a learner undertakes an individualized learning path – a basic form of adaptive learning design.
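The pre-test routing described above can be sketched in a few lines of code. This is an illustrative sketch only: the module names, score format and 80% mastery threshold are my own assumptions, not part of any standard adaptive-learning specification.

```python
# Hypothetical pre-test routing for a basic adaptive learning path.
# The 0.8 mastery threshold and module names are illustrative assumptions.

MASTERY_THRESHOLD = 0.8  # assumed pass mark for skipping a module


def plan_learning_path(pretest_scores):
    """Return the modules a learner still needs to take.

    pretest_scores maps module name -> fraction of that module's
    objectives the learner met on the pre-test (0.0 to 1.0).
    Modules at or above the mastery threshold are skipped.
    """
    return [module for module, score in pretest_scores.items()
            if score < MASTERY_THRESHOLD]


scores = {"Spreadsheet basics": 0.9, "Formulas": 0.55, "Charts": 0.4}
print(plan_learning_path(scores))  # -> ['Formulas', 'Charts']
```

In a real learning-management system the threshold would likely vary per objective, but the principle is the same: the pre-test result, not the course sequence, decides which units the learner sees.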
When designing a new course or learning program, the design and materials should be tested during a pilot course. The pilot gives the instructional designer and trainers an opportunity to review and revise the course before it is fully implemented.
The instructor who actually delivers the training must be knowledgeable and skilled in the competency-based training approach used by the instructional designer when designing the course.
A common and comprehensive way of looking at improvement is in terms of the Kirkpatrick Model. Donald L. Kirkpatrick first published his ideas on evaluating learning in 1959 in a series of articles in the US Training and Development Journal. The articles were subsequently included in his book Evaluating Training Programs (originally published in 1975; I have the 2006 edition).
In this text he outlined and further developed his theories on evaluation, culminating in the Four-Level Model, arguably the most widely used and popular approach for the evaluation of training and learning. Kirkpatrick's Four-Level Model is now considered an industry standard across the HR and training communities (see Table 1).
Table 1 Kirkpatrick's Four-level Model
| Level | Name | Learning Effect |
|-------|------|-----------------|
| 1 | Reactions | Evaluate participants' satisfaction with the learning intervention. |
| 2 | Learning | What do participants know that they didn't know before? How are they using that knowledge in their jobs? |
| 3 | Personal Behaviour Benefits | What is the learning and performance effect of the intervention on the participant's behaviour? |
| 4 | Organization Benefits | Has the development of higher levels of domain knowledge improved organizational productivity? |
According to the model, evaluation should always begin with Level One and, as time and budget allow, should move sequentially through Levels Two, Three, and Four. Information from each prior level serves as a base for the next level's evaluation. Each successive level represents a more precise measure of the effectiveness of the training program; however, each level also requires a more rigorous and time-consuming analysis.
Kirkpatrick’s model aside, the course materials, objectives, delivery, test items, audience profile – all of the instructional components, in fact – need to be evaluated. Assessing these elements regularly is especially important for repeating courses or asynchronous courseware. For example, if a substantial majority of learners (say, 70% or 80%) fail a criterion test item, it would be reasonable to look again at the design of the related piece of instruction.
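That "look again at the instruction" rule of thumb is easy to automate for asynchronous courseware. The sketch below is a hypothetical example, assuming per-item pass/fail records and a 75% failure threshold (a value I have chosen from the 70–80% range mentioned above).

```python
# Illustrative sketch: flag criterion test items whose failure rate is
# high enough that the related instruction deserves a design review.
# The data format and 0.75 threshold are assumptions for this example.


def items_needing_review(results, threshold=0.75):
    """results maps item id -> list of booleans (True = learner passed).

    Returns (item, failure_rate) pairs for items whose failure rate
    meets or exceeds the threshold.
    """
    flagged = []
    for item, outcomes in results.items():
        fail_rate = outcomes.count(False) / len(outcomes)
        if fail_rate >= threshold:
            flagged.append((item, fail_rate))
    return flagged


results = {"q1": [False] * 8 + [True] * 2,   # 80% of learners failed q1
           "q2": [True] * 9 + [False]}       # only 10% failed q2
print(items_needing_review(results))  # -> [('q1', 0.8)]
```

A flagged item does not automatically mean the instruction is broken (the item itself may be badly worded), but it tells the designer exactly where to start looking.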
Kirkpatrick, D. L. & Kirkpatrick, J. D. (2006) Evaluating Training Programs. 3rd ed. San Francisco, CA: Berrett-Koehler Publishers, Inc.