Verification of training effectiveness is required by most of the frequently used management system standards. Even companies with well-established training programs struggle with how to evaluate it – and, moreover, how to realize value from their efforts.
Naturally, I see many methods of verification of training effectiveness in my audits – sometimes inspiring, sometimes – not so much. At the very least, it all makes me think. Without giving away anyone’s secrets, I thought I’d share my own thoughts.
The Requirement for Training Effectiveness
Verification of training effectiveness shows up in several of the popularly implemented standards: ISO 9001:2008 (and the DIS of the 2015 version), ISO 13485:2003 (medical devices), OHSAS 18001:2007 (safety), and ISO/IEC 27001:2013 (information security).
You may note the omission of ISO 50001:2011 (Energy management) – verification of training effectiveness isn’t there, at least not directly.
And ISO 14001:2004 (environmental management), the current version, does not require evaluating training effectiveness (old school) – but the second committee draft (CD2) of the revision does. So, if you’re in that world, best start considering how you’re going to meet that need.
I’ll use the text from 9001 except where there’s a noteworthy difference in one of the other standards:
Competence, training and awareness
The organization shall
a) determine the necessary competence for personnel performing work affecting conformity to product requirements,
b) where applicable, provide training or take other actions to achieve the necessary competence,
c) evaluate the effectiveness of the actions taken,
d) ensure that its personnel are aware of the relevance and importance of their activities and how they contribute to the achievement of the quality objectives, and
e) maintain appropriate records of education, training, skills and experience
OHSAS 18001 (Safety) is particularly succinct (note how it also addresses risk):
The organization shall identify training needs associated with its OH&S risks and its OH&S management system. It shall provide training or take other action to meet these needs, evaluate the effectiveness of the training or action taken, and retain associated records.
The main point for our discussion today is that people get trained, and the effectiveness of that training must be evaluated.
Something as simple as this: train, evaluate effectiveness, and – if the training wasn’t effective – retrain.
Ideally, though, if the training wasn’t effective one might question the training method. Another natural response, not wholly unwarranted, is to assume the person being trained is at fault; that he or she just didn’t “get it”. This is, in my experience, the typical reaction – and usually without justification.
But it need not be so. This is a larger topic, and mostly beyond the detail level of this post, but books and careers are made on the study of learning. If you’re interested in a fairly detailed work on that topic there’s How Learning Works: Seven Research-Based Principles for Smart Teaching. It’s geared more toward a university environment but it gives a fairly thorough understanding of the challenge.
For those of us juggling the job we were hired for plus the task of training, a great book choice is Design For How People Learn (Voices That Matter) by Julie Dirksen. It is a practical and useful guide to the fundamental concepts of instructional design.
Short of that, simply showing someone a PowerPoint or loading a VHS tape doesn’t guarantee an effective learning experience for everyone – even if you do make them sign a piece of paper afterwards saying their eyes were mostly open most of the time.
A better flow would question the training method as well as the trainee when the evaluation comes up short. Better still, add a “lessons learned” step that modifies the materials or methods for next time.
Essentially, what I’m saying is: before you train, evaluate what types of training would suit both the task and the individual. Then execute the training and evaluate its effectiveness. With this information in hand, go back and tweak the materials, feed any lessons back into the training methodology for next time – and provide any needed retraining.
This is simply an extension of the Plan, Do, Check, Act (PDCA) methodology that is at the core of these standards.
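The PDCA-style loop described above can be sketched in a few lines of pseudocode-like Python. Everything here is illustrative – the function names, the “assembly” task, and the 80% passing threshold are assumptions standing in for whatever your own process defines:

```python
# Sketch of the PDCA-style training loop. All bodies are illustrative
# stubs - real implementations belong to your own process.
def select_training_method(task):
    # Plan: fit the training method to the task (and the individual)
    return "hands-on demo" if task == "assembly" else "classroom"

def evaluate_effectiveness(score, passing=80):
    # Check: was the training absorbed? (threshold is an assumption)
    return score >= passing

def training_cycle(task, first_score, retrain_score):
    method = select_training_method(task)        # Plan
    score = first_score                          # Do: deliver training
    if not evaluate_effectiveness(score):        # Check
        # Act: lessons learned -> tweak materials, then retrain
        score = retrain_score
    return method, evaluate_effectiveness(score)

print(training_cycle("assembly", 60, 90))  # ('hands-on demo', True)
```

The point of the sketch is the shape of the loop, not the stubs: the “Act” step feeds back into materials and methods rather than simply blaming the trainee.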
Some Specific Methods to Evaluate Training Effectiveness
Let’s look at some common ways to evaluate training effectiveness.
Written Testing
Tried and true, and most corporations’ go-to method of evaluating training effectiveness. Written tests lend themselves well to safety-related training, to clear requirements-based content such as ITAR, or even to general policies with their “dos and don’ts”.
Just be sure that, if testing is used, a minimum passing score is defined. I’ve reviewed test results and found individuals who received fairly close to a zero and still “passed”.
What happens in this situation is that a hurdle set too low soon leads to corrections or nonconformances in the process or product. Typically the root cause of those initial nonconformances turns out to be training-related – a meaningful passing score weeds this type of defect out early.
Having said that, testing is a simple solution and, if done properly, is likely going to be in your bag of evaluation options.
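Enforcing the minimum passing score is trivial once it’s defined. A minimal sketch – the 80% threshold and the record fields are assumptions, not taken from any standard; your procedure defines the real criterion:

```python
# Minimal sketch: enforce a defined passing score when recording test
# results, so a near-zero score can never quietly "pass".
PASSING_SCORE = 80  # percent; an illustrative assumption

def record_test_result(employee, course, score):
    """Return a training record that makes pass/fail explicit."""
    return {
        "employee": employee,
        "course": course,
        "score": score,
        "passed": score >= PASSING_SCORE,
    }

result = record_test_result("J. Smith", "ITAR Awareness", 35)
print(result["passed"])  # False - a 35% score does not pass
```

A record like this also satisfies the standards’ requirement to retain evidence of the evaluation, since the pass/fail determination is part of the record itself.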
An On-going Review of Process Metrics
Essentially, the rationale is: “Hey, we have trained people, we measure the important metrics, and our trends are good – within limits, and improving where possible.”
It definitely can work. Naturally, the challenge is twofold: defining the meaningful metrics and measuring them in a consistent manner. Goals should already be established in response to the standards’ other requirements.
If this method is used, be sure to clearly define how it works and have data to back it up. Remember, one of the other requirements is to maintain records to show the verification of training effectiveness – so clearly define what the record is.
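If you go the metrics route, the check itself can be as simple as comparing trend data against predefined limits. A sketch, where the metric name and limits are purely illustrative assumptions:

```python
# Sketch: verify a process metric stayed within its defined limits after
# training. The metric and the limits below are illustrative assumptions.
def metric_within_limits(values, lower, upper):
    """True if every measured value falls inside the defined limits."""
    return all(lower <= v <= upper for v in values)

# e.g. weekly defect rates (%) following a rework-reduction training,
# against an assumed upper limit of 1.5%
post_training_defect_rates = [1.2, 0.9, 1.1, 0.8]
print(metric_within_limits(post_training_defect_rates, 0.0, 1.5))  # True
```

The output of a check like this – metric, limits, data, and the pass/fail result – is exactly the kind of record the standards ask you to maintain.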
Simply Asking
It could be as simple as asking the individual who was trained, “How do you think the training went?” – perhaps coupled with a short “trip report” (or not). This does have limitations in terms of when it can be used, however. It is well-suited to external trainings or to trainings for executive or senior technical positions, though it can likely be adapted to any situation.
A Review of an External Certification
This applies to externally performed trainings where the attendee completes a course and is given a post-course test or other credible evaluation. Again, records would be needed to show this.
At Employee Reviews
A bit tricky in practice, since these reviews often take place only once a year, but it is quite a common solution. It works particularly well for procedure- or work-instruction-based trainings.
Similar to the review of process metrics described above, this technique predetermines criteria for success, coupled with the training that has occurred over the review period. The rationale: successful completion of tasks and assignments – the day-to-day job – indicates effective training. Again, defining this process in detail, including the records, is key to success.
Note that claiming training was effective simply because no problems or defects appeared is not the same thing, especially if no one is formally looking at the metrics in the first place.
So, that’s five methods for evaluating training effectiveness to consider:
• Written testing
• An on-going review of process metrics
• Simply asking the attendee
• A review of external certification
• At employee reviews
There are certainly more, particularly software-driven solutions as part of an HRIS application (Human Resource Information Systems) – but typically these need a little coaxing to fit the need (and are generally expensive).
I’m quite interested in other methods you might know of.
In Search of Value
The key, though, is that your methods of evaluating training effectiveness, whatever they are, give something back to the organization.
The whole point is that it is a waste of resources to take up someone’s time with a training that isn’t absorbed and put to use in the position.
What is the training trying to accomplish? If it is training to a procedure or work instruction – is the document needed in the first place?
Is it actually a training? Sometimes what companies call “training” is really only a communication – there is a difference, even if subtle. Typically a training teaches, while a communication informs; the distinction is up to the company to make. It’s important to make it, because if it is training, the hoops come up (as I like to say). Training means evaluating training effectiveness; they go hand in hand.
If a process must be documented in order to ensure things go smoothly, then it is worth training people to follow that process. To ensure the training is valuable, ensure that the process is doing what it is supposed to be doing.
From the Top Down
The business is there for a reason; a policy is stated and goals and objectives support that policy.
People are hired with the needed competencies. When there is a gap between what they know coming into the organization and what they need to know to meet the goals and objectives – then training is needed.
Processes and documents are defined to realize the policy – these are often the things that must be learned. And when people know what they are doing – it all works according to plan.
It all points back to monitoring the processes, goals and objectives – this is the key to providing training that provides value. Verifying that the training does what it is supposed to do – evaluating training effectiveness – is simply protecting that investment in time and money.
Thanks for reading, I hope it helps – but don’t worry, it won’t be on the test.
Sal Coraccio is the Director of the Quality Systems Division for TUV USA.
He has been a Management System Auditor for 25 years.