Just had this quick interchange with Patti Shank on twitter:
This is a totally fair comment on Patti’s part — you can’t force someone to be motivated (and undoubtedly some of our disagreement stems from semantics – not that THAT ever happens on twitter). A lot of the conversation around gamification (for a heated throw down on the topic read the comments here) is about the dubious and likely counterproductive effects of extrinsic rewards as motivators. According to Alfie Kohn in his book Punished by Rewards, a big part of the problem with extrinsic motivators is that it’s about controlling the learner, not helping or supporting them.
That much I totally agree with – you can’t control your learner, or control their motivation.
But design decisions do have an impact on human behavior. For example, this chart shows the rate of people who agree to be organ donors in different European countries:
In the blue countries, choosing to be an organ donor is selected by default, and the person has to de-select it if they do not want to be a donor. In the yellow countries, the default is that the person will not be an organ donor, and the person has to actively choose to select organ donor status.
Now it could be that some people aren’t paying attention, but at least some of that difference is presumably due to people who do notice, but just roll with the default (you can read more about it here – scroll down to the Dan Ariely section).
So the way something is designed can make a difference in behavior. Of course, that’s not a training example, so let’s take a closer look at how training might come into play.
Is it a training problem?
Robert Mager used this question as a litmus test:
“If you held a gun to the person’s head, would they be able to do the task?”
He discusses this further in his book Analyzing Performance Problems, where he later uses the less graphic “could they do the task if their life depended on it?” question (Thiagi advocates for the version “Could they do it if you offered them a million dollars?” if you prefer a non-violent take).
So basically, if someone could do the behavior under extreme pressure, then they clearly know how to do it, and it’s not a knowledge or skills problem, and therefore it’s outside the domain of training (it could be down to the person’s specific motivation, it could be a workplace management issue, etc.).
Here’s where I disagree
I think the way you design learning experiences can have an impact on the likelihood of people engaging in the desired behavior, and that it is part of an instructional designer’s responsibility. I don’t think you can control people, or force the issue, but I do think the experience they have when they are learning about something can make a difference in the decisions they make later.
The Technology Acceptance Model
The technology acceptance model is an information systems model that looks at what variables affect whether or not someone adopts a new technology. It’s been fairly well researched (and isn’t without its critics), but I find it to be a useful frame. At the heart of the model are two variables:
It’s not a complicated idea – if you want someone to use something, they need to believe that it’s actually useful, and that it won’t be a major pain in the ass to use.
TAM specifically addresses technology adoption, but those variables make sense for a lot of things. You want someone to use a new method of coaching employees? Or maybe a new safety procedure? If your audience believes that it’s pointless (ie not useful), or it’s going to be a major pain (ie not easy to use), then they will probably figure out ways around it. Then it either fails to get adopted or you get into all sorts of issues around punishments, incentives, etc.
I keep TAM in mind when I design anything that requires adopting a new technology or system or practice (which is almost everything I do). Some of the questions I ask are:
- Is the new behavior genuinely useful? Sometimes it’s not useful for the learner – it’s useful to the organization, or it’s a compliance necessity. In those cases, it can be a good idea to acknowledge it and make sure the learner understands why the change is being made – that it isn’t just the organization messing with their workflow, but that it’s a necessary change for other reasons.
- If it is useful, how will the learner know that? You can use case studies, examples, people talking about how it’s helped them, or give the learner the experience of it being useful through simulations. Show, Don’t Tell becomes particularly important here. You can assert usefulness until you are blue in the face, and you won’t get nearly as much buy-in as being able to try it, or hearing positive endorsements from trusted peers.
- Is the new behavior easy to use? If it’s not, why not? Is it too complex? Is it because people are too used to their current system? People will learn to use even the most hideous system by mentally automating tasks (see these descriptions of the QWERTY keyboard and the Bloomberg Terminal), but then when you ask them to change, it’s really difficult because they can no longer use those mental shortcuts, and the new system feels uncomfortably effortful until they’ve had enough practice.
- If it’s not easy to use, is there anything that can be done to help that? Can the learners practice enough to make it easier? Can you make job aids or other performance supports? Can you roll it out in parts so they don’t have to tackle it all at once? Can you improve the process or interface to address ease-of-use issues?
Everett Rogers’ Diffusion of Innovations
Rogers identifies five characteristics of an innovation that influence how readily it gets adopted:
- Relative Advantage – the degree to which an innovation is perceived as being better than the idea it supersedes
- Compatibility – the degree to which an innovation is perceived to be consistent with the existing values, past experiences and needs of potential adopters
- Complexity – the degree to which an innovation is perceived as difficult to understand and use
- Trialability – the opportunity to experiment with the innovation on a limited basis
- Observability – the degree to which the results of an innovation are visible to others
There is obviously some crossover with TAM, but if I’m designing a learning experience for a new system, I use this as a mental checklist:
- Are the learners going to believe the new system is better?
- Are there compatibility issues that need to be addressed?
- Can we do anything to reduce complexity?
- Do the learners have a chance to see it being used?
- Do the learners have a chance to try it out themselves?
- And how can they have the opportunity for some early success with the new system?
Now, if somebody really, really doesn’t want to do something, designing instruction around these elements probably isn’t going to change their mind (Patti’s not wrong about that). And if a new system, process or idea is really sucky, or a pain in the ass to implement, then it’s going to fail no matter how many opportunities you give the learner to try it out.
But here’s the thing – I can design a training intervention that can teach a learner how to use a new system/concept/idea, which could meet the Mager requirement (they could do it if their life depended on it), but I will design a very different (and I think better) learning experience if I consider these motivation factors as well.
I don’t want to take ownership of the entire problem of motivating learners (waaaaaay too many variables outside of my scope or control), but I do believe I share in the responsibility of creating an environment where they can succeed.
And bottom line, I believe my responsibility as a learning designer is to do my best to motivate learners by creating a learning experience where my learners can kick ass, because in the words of the always-fabulous Kathy Sierra kicking ass is more fun (and better learning).
Davis, F. D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly 13(3): 319–340
Johnson, Eric J. and Goldstein, Daniel G., Do Defaults Save Lives? (Nov 21, 2003). Science, Vol. 302, pp. 1338-1339, 2003. Available at SSRN: http://ssrn.com/abstract=1324774
Mager, Robert and Pipe, Peter, Analyzing Performance Problems: Or, You Really Oughta Wanna–How to Figure out Why People Aren’t Doing What They Should Be, and What to do About It
Rogers, Everett, Diffusion of Innovations