
Evaluating Training Effectiveness


    Beyond Simple Likes and Dislikes: How to Really Evaluate E-Learning

    By Andrew Jackson on Wed, Mar 6, 2013

    I don't know about you, but the word evaluation can send a shiver down my spine. For many of us in learning and development it's a word that can have so many negative connotations, we sometimes fudge or avoid thinking about it completely.

    I think these negative associations arise because, typically, we take too narrow a view of the word. For most people, evaluation is about whether or not the learners liked the course or the trainer - or the chocolate biscuits served up at break time.

    This kind of evaluation gives us little more than broad, hard-to-quantify opinions. What we really need to do is start thinking about evaluation as a means to identify what's effective about a piece of learning - and what's not.

    If we adopt this broader view of evaluation, then it has a place through the entire design and development process, not just at the end. This is true for any kind of learning, but is especially true for e-learning.

    I say that because unlike classroom training, e-learning is more time-consuming and more expensive to refine once it's been created. If you are evaluating its potential effectiveness at every stage in the design process, it's much more likely to hit its target first time, avoiding the need for costly revisions.

    We can take a leaf out of the usability designers' book here. They do something very close to what I'm about to describe with website design. It's a simple, practical exercise which frequently gets overlooked or skipped in a typical e-learning design process.

    Work 1-to-1 with some typical learners
    This is something you should do while you are still at the prototype or storyboard stage of development. The only difficult parts are getting access to a learner or two and co-ordinating diaries. I say 'only' - I know those can be two major difficulties. But it's worth persisting, because the dividends this exercise pays are tremendous.

    Sit with the learner. Have them evaluate the prototype or storyboard and give you their feedback. There are various things you can look at. How clear or understandable is the content? Are the proposed interactions or activities relevant and meaningful? Can they make sense of the overall interface and the specific navigation?

    Do this with a handful of learners and you'll very quickly get a sense of what is problematic or confusing for everyone and what is just a subjective opinion held by a single individual.

    You'll need to be a good note-taker, because you'll usually get plenty of valuable comments which you won't want to forget. Better still, with the learner's permission, you might consider recording what they have to say.

    Jakob Nielsen tells a funny story about how website designers react the first time they do an activity like this. The first user is wheeled in and starts to look at the design. Some things just don't make sense. 'They must be a particularly stupid user, not to get that,' thinks the designer. Then the second user is wheeled in. Same problem with the design. Then the third. Same problem again. And so on. Until the designer 'gets it' and the penny drops: their design is the problem, not the intelligence of the users.

    And that's the beauty of carrying out an exercise like this during your e-learning development. It strips out any ego that might've found its way into the design. It forces you as the designer to see how the learners really react to it. In the end, this helps you make changes that your learners will thank you for.

    Topics: Instructional Design, Measurement and Evaluation, E-learning

    An Olympics Confession, Improving Performance and the Power of Kaizen

    By Andrew Jackson on Tue, Aug 21, 2012

    It's time to confess. In July 2005, when we learned we would be hosting the 2012 Olympic Games, I wasn't that fussed. I wasn't anti. But not being much of a sports fan, the excitement mostly passed me by.

    Little did I think that, seven years later, I would be cheering Team GB along, delighting in the fantastic achievements of the winners and empathising so much with the losers.

    In case you're wondering, I haven't suddenly become a devoted sports fan, but I couldn't help being swept up by the interest we all have in seeing truly remarkable individuals succeed. And the L&D bit of my brain couldn't help being fascinated by how this group of people had achieved so much stunning success.

    Actually, my interest started a couple of weeks before the Olympics, with Bradley Wiggins winning the Tour de France. (Another confession: I'd never heard of the bloke until about a week before the Tour de France started.)

    In the deluge of press coverage following the competition, we started to get some insights into how that fantastic win came about.

    Several things grabbed my attention. First, when Wiggo and team announced their ambition, most people thought they were bonkers. Second, not only did they prove those doubters wrong, they did so far sooner than even they had imagined they could. Finally, 2011 had been a truly abysmal year for them, and anybody looking on from the outside would probably have laughed even louder at the possibility of them achieving their stated ambition.

    So what changed? What turned things around so rapidly and so decisively?

    I can't claim to have the absolute scoop on all this, but here's what I gleaned from watching interviews on TV and reading articles in the press.

    That truly abysmal year I just mentioned was the catalyst for change and, ultimately, success. It was reaching a terrible, crushing low in their performance that forced the team to step back, re-assess and re-think their entire approach.

    They went against conventional wisdom. From what I can understand, the conventional wisdom in the cycling world is that you get better by being in lots of competitions. That seems intuitive, doesn't it? Practice makes perfect, after all.

    They decided to go for the counter-intuitive: cut back on the number of competitions and focus instead on training and preparation for the competitions they were going to enter.

    They completely re-engineered their approach to training and preparation. This involved breaking the entire process down, examining every aspect in detail and squeezing performance improvements out of every last bit of it.

    This, it turns out, is the secret of Team GB's success, too. They refer to it as 'the science of marginal gains'. Dave Brailsford sums it up nicely in a recent BBC interview:

    "The whole principle came from the idea that if you broke down everything you could think of that goes into riding a bike, and then improved it by 1%, you will get a significant increase when you put them all together. There's fitness and conditioning, of course, but there are other things that might seem on the periphery, like sleeping in the right position, having the same pillow when you are away and training in different places. They're tiny things but if you clump them together it makes a big difference."
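    To see why those 1% improvements matter so much, here's a quick back-of-the-envelope calculation (my illustration, not Brailsford's maths) showing how small gains compound:

```python
# Compounding marginal gains: if each of n factors improves performance
# by 1%, the overall multiplier is (1.01) ** n - the gains multiply
# together rather than simply adding up.
def combined_gain(n_factors, gain_per_factor=0.01):
    """Overall performance multiplier from n small multiplicative gains."""
    return (1 + gain_per_factor) ** n_factors

print(round(combined_gain(10), 3))  # ten 1% gains -> 1.105, a 10.5% improvement
print(round(combined_gain(50), 3))  # fifty 1% gains -> 1.645, a 64.5% improvement
```

    The point is simply that lots of tiny improvements multiply rather than add, which is why 'clumping them together' makes such a big difference.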

    The Japanese pioneered something very similar in the world of business - you may have heard of kaizen, the 'continuous improvement of working practices'.

    Two things strike me about all this. First, most employees in most organisations are taught to fear failure in their day-to-day work almost as much as they fear receiving a redundancy notice. In fact, for many, the two are inextricably linked: if the first happens, the second will almost certainly follow.

    Yet, as the example of Team Wiggo shows, failure is sometimes the most powerful motivator for subsequent success. Nobody wants or sets out to fail. It feels awful when it happens and it can be soul-destroying. And I'm certainly not suggesting organisations should go around encouraging their employees to fail.

    But, I'd bet a fairly large sum of money that organisations which take a grown-up view of failure are better places to work and, overall, end up being more successful.

    Second, because employees fear failure so profoundly, most follow conventional solutions. So in many organisations, everyone just chugs along in quiet desperation. Everyone knows it could be so much better, but who's going to rock the boat and suggest outrageously unconventional change? Only a brave soul, but oh boy, the ones who do are likely to reap the benefits.
    Topics: Instructional Design, Learning Psychology, Measurement and Evaluation

    Evaluating Training 2: Wear the Red Pants with Pride

    By Andrew Jackson on Fri, May 27, 2011

    Last time, I shared Jim Kirkpatrick's story of 'red pants (trousers) syndrome' to illustrate how difficult it can be to get people to change the way they do things if they are unsupported after a training event.

    The Kirkpatrick four levels are all about minimising outbreaks of 'red trousers syndrome'. They encourage you to start at the end of the learning process, identify the results you want to achieve and figure out what kind of learning needs to take place to make that happen.

    Key to all this is not taking the 'sheep dip' approach to learning. In other words, the 'figuring out' of what you need has to take account of the fact that traditional approaches to designing learning are not necessarily the most effective.

    This is borne out by some astonishing results Jim shared with us. They are from long-term research carried out by Rob Brinkerhoff, comparing the benefits of a fairly traditional approach to training (emphasis on a one-off event) with a more collaborative approach (more balance between a training event and follow up activities). Here's a summary of the results.

    In a traditional approach to training design, 90% of the time is spent on the design and development of the training event and only 10% on pre- and post-training activity. In this approach, typically the following happens to learners:
    • 15% do not try the new skills
    • 70% try to implement the learning but fail
    • 15% achieve and sustain the new learning
    In a more collaborative approach, the training designers work very closely with the client and 25% of time is devoted to pre-training prep and 50% to post-training follow-up. (Note: only 25% of the time is devoted to the training event itself). In this approach, typically the following happens to learners:
    • 5% do not try the new skills
    • 10% try to implement the learning but fail
    • 85% achieve and sustain the new learning
    This is one of the most compelling pieces of research-based evidence I have seen for a long time. It has made me realise that here at Pacific Blue we should make much greater efforts than we currently do to encourage you, our clients, to engage in this kind of collaborative approach.

    There's no question this is a more complex approach. It involves the co-operation of colleagues and managers who may not be taking part in the training event. But look at the results.

    The good news (as I've mentioned in previous emails) is that we think some of the pain of getting colleagues involved can be minimised through some aspects of mobile learning. This has the potential to provide quite personalised follow-up for learners and to enable virtual support networks and communities without taking up vast amounts of colleagues' time.

    If you are interested in discovering more about effectively evaluating your learning, check out our courses and services.
    Topics: Measurement and evaluation

    Evaluating Training 1: Would You Wear the Red Pants?

    By Andrew Jackson on Fri, May 27, 2011

    I had a great day recently at the Training Zone Live event. The highlight of the day for me was Jim Kirkpatrick's session on his (and his Dad's) four levels for evaluating the effectiveness of training.

    Jim was feeling a little jet-lagged, having just flown in from Australia the day before - but he ran an inspiring session, nevertheless.

    At one point, he explained to us that his Australian audience had introduced him to 'red pants syndrome' (that's pants in the American sense, by the way, so 'red trousers syndrome' for us Brits).

    So 'red trousers syndrome' is where you go on a training course and learn to do something in a particular way, then go back on the job and start implementing what you've learned - only to discover that no one else is much bothered.

    In other words, it's a bit like wearing a pair of red trousers to work every day, when everyone else wears black ones. You come in on the first day, feeling pretty pleased with your new look. But you quickly realise people are staring at your new trousers. Maybe over time they start to comment negatively on your appearance. Perhaps they even start avoiding you.

    In that situation, how long are you going to hold out? How long will it be before you start wearing black trousers, too?

    It's a nice metaphor to highlight the big problem that exists in many organisations: the training happens, everyone feels enthused, but within a relatively short time they all go back to doing things in the same old way.

    Jim is pretty clear on what the consequences of not addressing this problem will be: training departments as we know them will eventually become obsolete.

    But as Jim explained, if you start at the end, identify the results you want to achieve and work backwards to figure out exactly what you need to achieve those results, you can greatly minimise an outbreak of 'red trousers syndrome'.

    Next time, I'd like to share the results of some long-term research carried out by one of Jim's colleagues. This shows how avoiding 'sheep dip' training can have a massive impact on changing behaviours and embedding learning.
    If you are interested in discovering more about effectively evaluating your learning, check out our courses and services.
    Topics: Measurement and evaluation