Learning and Development – Measurement

It’s pretty much impossible to look at the topic of learning and development in organizations without looking at measurement. Unfortunately, the topic of measurement in learning and development creates levels of frustration that no other area of learning and development does.

I think a lot of this frustration, a lot of the OUCH!, is created by the typical way in which we understand organizations, which of course informs how we understand learning and development. As noted in earlier posts, the primary way learning and development occurs in organizations is through content-focused events. Even though these are seen as cost effective, they are still expensive. The real problem, though, is that they are typically seen as the only thing, the only activity, that is supposed to change behavior and thus positively affect performance in the organization.

If you have a single event with a large price tag, and that single event is supposed to be the primary variable affecting performance, it makes all kinds of sense to ask, ‘What is the return on this investment?’

I think that in many ways the frustration felt in trying to respond to this question is not so much frustration with the question itself; rather, the question surfaces the real problem with content-focused learning events.

They don’t change behavior!

We all know this, but we continue to engage in these singular events and then end up doing a whole lot of additional, non-valuable work trying to measure their impact, and it cannot be done effectively!

OUCH!

Given that the point of L&D initiatives is to change behavior, you have a real problem when the above question is asked if your primary design for learning is a content-focused event.

A lot of the OUCH! in measuring the effectiveness of learning and development disappears when the design shifts to extended-time-frame, context-focused initiatives. When we look at things like executive coaching, mentoring, and even action learning initiatives, two things tend to happen in terms of measurement:

  1. It is not a priority
  2. It takes a subjective or qualitative format

It could be argued that this happens because this type of learning design tends to be reserved for more senior people, and they have the power to legitimize these two points. You could also argue that the effectiveness of the design itself is what causes the above to occur. My guess is that it is both. But if you are in an organization or situation where measurement of L&D is a priority, the second point is very important.

The most effective way to measure the impact of learning and development is to use subjective or qualitative methods.

If you want to go deeper into the details of qualitative measurement, I have found value in the book Qualitative Research & Evaluation Methods: Integrating Theory and Practice by Michael Quinn Patton. There are other resources focusing on this area as well if you look into it a little further.

In a nutshell, however, qualitative measurement of L&D initiatives is most effective when it takes the following format (a rough sketch of how this could be formalized follows the list):

  • Collection of individual ‘stories’ of the learning initiative’s application to, and impact on, business scenarios.
  • Analysis of enough stories to extract ‘themes’ of the impact.
  • Sharing of these themes and making the actual stories available for review by others.
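To make the three steps a bit more concrete, here is a minimal sketch in Python of how the collect, theme, and share cycle might be formalized. Everything in it is hypothetical (the sample stories, the theme_keywords map, the themes_for helper), and the keyword tagging is only a stand-in for the human reading and coding that real qualitative analysis involves:

```python
# Minimal sketch of the three-step qualitative format.
# All data and names below are hypothetical, for illustration only.
from collections import Counter

# Step 1: collect individual 'stories' of application and impact.
stories = [
    {"author": "participant-01",
     "text": "Coaching helped me delegate a stalled project and it shipped on time."},
    {"author": "participant-02",
     "text": "My mentor's feedback changed how I run client negotiations."},
    {"author": "participant-03",
     "text": "I reused the action-learning approach to cut our handover delays."},
]

# Step 2: extract 'themes' of impact. Naive keyword matching here,
# standing in for the human coding a qualitative analyst would do.
theme_keywords = {
    "delegation": ["delegate", "handover"],
    "client impact": ["client", "negotiation"],
    "process improvement": ["shipped", "delays", "approach"],
}

def themes_for(text: str) -> list[str]:
    """Return every theme whose keywords appear in the story text."""
    lowered = text.lower()
    return [theme for theme, words in theme_keywords.items()
            if any(w in lowered for w in words)]

counts = Counter(t for s in stories for t in themes_for(s["text"]))

# Step 3: share the themes, and keep the raw stories available for review.
for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} story/stories")
for s in stories:
    print(f"- {s['author']}: {s['text']}")
```

The design point is the shape of the process rather than the code itself: raw stories go in, shared themes come out, and the original stories stay available for anyone who wants to review them.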

In many ways the informal process of sharing stories is what sustains the use of things like coaching and mentoring. You will often hear people (often senior people) passionately tell their stories of how valuable a coaching process has been or how much impact a mentor had on them early in their career.

As you move to extended-time-frame, context-focused designs, you are really just formalizing this story-sharing process and doing a bit more analysis of it.

This type of measurement or evaluation is a shift from the typical attempts at quantitative evaluation, so it is important to incorporate it right into the design of any initiative. Trying to add it on at the end of an initiative is typically quite difficult. People need to know how evaluation is going to happen so they are prepared and can consider their stories right from the start.

The other thing this type of evaluation does is put the primary accountability for evaluation on learners and how they are applying their learning in a business context. Most quantitative methods of evaluating learning initiatives do a very poor job of this.

If you are in an organization that is adamant about measuring the return on investment of learning, the faster you can get to qualitative evaluation the better. The causal factors affecting the value and impact of learning are very complex. Quantitative evaluation of learning will almost always force you into looking for simple causal (often one-to-one, A-to-B) factors. Since these do not exist for complex learning topics, your quantitative measures are always at risk of scrutiny from someone who wants to question them, and you will be mostly defenseless when this happens.

Moving to a qualitative evaluation process inhibits this significantly. It is very hard to refute a large number of practical stories from participants that say the learning is having a business impact. On the other hand it is also very hard to refute a large number of stories that say the learning is having little impact!

However, isn’t this exactly what we want from our evaluation of learning?

What are your learning evaluation stories?


One Response

  1. The following comment was sent to me via email by a reader and colleague who had some problems posting it here.

    Corporate training is fueled by the human perfection myth and the erroneous assumption that the reason people don’t DO is because they don’t know. Ergo: tell them and they will do. (I say tell them since so much training is can-opener or sheep-dip style instruction, misguided by the way teaching happens in schools.)

    I just spent some time with a program that was guided by Kirkpatrick theory. This, like so many other snake-oil training guides (e.g. Dale’s cone, learning styles), is just theory accepted as fact… but Kirkpatrick makes a good argument for not measuring results only at the point of training. If training is having little impact, there are usually lots of reasons that have little to do with the training selected, how it was done, or whether anyone learned anything.

    Most of the real important learning can NEVER be in a class, course, event or eLearning. I contend that the most important learning is embedded in processes of reflection and shared understanding that develop as the results (whatever is important) and the actions are compared using data (quantitative and qualitative) in an open, honest and non-politically charged manner. And it is a regular and iterative process embedded in the fabric of the work itself.

    Real learning cannot be outsourced to L&D; it is the responsibility of line management. Training can, and perhaps should, be outsourced, but not learning. If training results were better measured, training might be called out for how ineffective it is, and then be guided by learning science, not decades-old hocus pocus.
