Is learner motivation your responsibility?

Just had this quick interchange with Patti Shank on twitter:


This is a totally fair comment on Patti’s part — you can’t force someone to be motivated (and undoubtedly some of our disagreement stems from semantics – not that THAT ever happens on twitter).  A lot of the conversation around gamification (for a heated throwdown on the topic, read the comments here) is about the dubious and likely counterproductive effects of extrinsic rewards as motivators.  According to Alfie Kohn in his book Punished by Rewards, a big part of the problem with extrinsic motivators is that they are about controlling the learner, not helping or supporting them.

So that part I totally agree with – you can’t control your learner, or control their motivation.

But design decisions do have an impact on human behavior.  For example, this chart shows the rate of people who agree to be organ donors in different European countries:

In the blue countries, choosing to be an organ donor is selected by default, and the person has to de-select it if they do not want to be a donor.  In the yellow countries, the default is that the person will not be an organ donor, and the person has to actively choose to select organ donor status.

Now it could be that some people aren’t paying attention, but at least some of that difference is presumably due to people who do notice, but just roll with the default (you can read more about it here – scroll down to the Dan Ariely section).
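
To see how much leverage the default alone can have, here’s a minimal sketch in Python (the proportions are hypothetical, purely for illustration – they are not the Johnson and Goldstein figures). If some fraction of people simply keep whatever the form pre-selects, the default swamps the active choosers:

```python
# Toy model of default effects on consent rates.
# All proportions are hypothetical, not the Johnson & Goldstein data.

def consent_rate(donor_by_default: bool,
                 keeps_default: float = 0.7,    # fraction who just roll with the default
                 would_opt_in: float = 0.4) -> float:
    """Consent = people who passively keep a 'donor' default,
    plus active choosers who deliberately opt in."""
    passive = keeps_default if donor_by_default else 0.0
    active = (1 - keeps_default) * would_opt_in
    return passive + active

print(f"Opt-out country: {consent_rate(True):.0%}")   # 82% consent
print(f"Opt-in country:  {consent_rate(False):.0%}")  # 12% consent
```

Same underlying preferences, wildly different outcomes; the only thing that changed was the default.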

So the way something is designed can make a difference in behavior.  Of course, that’s not a training example, so let’s take a closer look at how training might come into play.

Is it a training problem?

Robert Mager used this question as a litmus test:

“If you held a gun to the person’s head, would they be able to do the task?”

He discusses this further in his book Analyzing Performance Problems, but later uses the less graphic “Could they do the task if their life depended on it?” (Thiagi advocates for the version “Could they do it if you offered them a million dollars?” if you prefer a non-violent take).

So basically, if someone could do the behavior under extreme pressure, then they clearly know how to do it.  It’s not a knowledge or skills problem, and it therefore falls outside the domain of training (it could come down to the person’s specific motivation, it could be a workplace management issue, etc.).

Here’s where I disagree

I think the way you design learning experiences can have an impact on the likelihood of people engaging in the desired behavior, and that it is part of an instructional designer’s responsibility.  I don’t think you can control people, or force the issue, but I do think the experience they have when they are learning about something can make a difference in the decisions they make later.

There are a couple of models that influence my thinking on this, but the two I use most often are the Technology Acceptance Model and Everett Rogers’ Diffusion of Innovations.

The Technology Acceptance Model

The technology acceptance model is an information systems model that looks at which variables affect whether or not someone adopts a new technology.  It’s been fairly well researched (and isn’t without its critics), but I find it to be a useful frame.  At the heart of the model are two variables: perceived usefulness and perceived ease of use.

It’s not a complicated idea – if you want someone to use something, they need to believe that it’s actually useful, and that it won’t be a major pain in the ass to use.

TAM specifically addresses technology adoption, but those variables make sense for a lot of things.  You want someone to use a new method of coaching employees?  Or maybe a new safety procedure?  If your audience believes that it’s pointless (i.e. not useful), or that it’s going to be a major pain (i.e. not easy to use), then they will probably figure out ways around it.  Then it either fails to get adopted, or you get into all sorts of issues around punishments, incentives, etc.
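
To make those two levers concrete, here’s a toy sketch in Python (the weights are hypothetical, purely for illustration; in Davis’s actual model the relationships are estimated from survey data, and perceived ease of use also feeds into perceived usefulness):

```python
# Toy sketch of TAM's core idea: intention to use a system rises with
# perceived usefulness (PU) and perceived ease of use (PEOU).
# The weights below are hypothetical, not Davis's estimated coefficients.

def intention_to_use(pu: float, peou: float,
                     w_pu: float = 0.6, w_peou: float = 0.4) -> float:
    """Both inputs on a 0-1 scale; higher means stronger intention."""
    return w_pu * pu + w_peou * peou

# A genuinely useful but clunky system vs. an easy but pointless one:
print(round(intention_to_use(pu=0.9, peou=0.2), 2))  # 0.62 -- usefulness helps, clunkiness drags
print(round(intention_to_use(pu=0.1, peou=0.9), 2))  # 0.42 -- easy to use, but why bother?
```

The exact numbers don’t matter; the point is that both levers are available to a designer, and neglecting either one costs you adoption.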

I keep TAM in mind when I design anything that requires adopting a new technology or system or practice (which is almost everything I do).  Some of the questions I ask are:

  • Is the new behavior genuinely useful? Sometimes it’s not useful for the learner – it’s useful to the organization, or it’s a compliance necessity. In those cases, it can be a good idea to acknowledge it and make sure the learner understands why the change is being made – that it isn’t just the organization messing with their workflow, but that it’s a necessary change for other reasons.
  • If it is useful, how will the learner know that? You can use case studies, examples, people talking about how it’s helped them, or give the learner the experience of it being useful through simulations.  Show, Don’t Tell becomes particularly important here.  You can assert usefulness until you are blue in the face, and you won’t get nearly as much buy-in as letting learners try it for themselves or hear positive endorsements from trusted peers.
  • Is the new behavior easy to use? If it’s not, why not? Is it too complex? Is it because people are too used to their current system?  People will learn to use even the most hideous system by mentally automating tasks (see these descriptions of the QWERTY keyboard and the Bloomberg Terminal), but when you then ask them to change, it’s really difficult: they can no longer use those mental shortcuts, and the new system feels uncomfortably effortful until they’ve had enough practice.
  • If it’s not easy to use, is there anything that can be done to help that? Can the learners practice enough to make it easier?  Can you make job aids or other performance supports?  Can you roll it out in parts so they don’t have to tackle it all at once?  Can you improve the process or interface to address ease-of-use issues?

Everett Rogers’ Diffusion of Innovations

The other model I find really useful is from Everett Rogers’ book Diffusion of Innovations.  If you haven’t read it, go buy it now.  Yes, NOW.  It’s actually a really entertaining read because it’s packed with intriguing case studies.
It’s loaded with useful stuff, but the part I want to focus on right now is his set of characteristics that affect whether a user adopts or rejects an innovation:
  • Relative Advantage – the degree to which an innovation is perceived as being better than the idea it supersedes
  • Compatibility – the degree to which an innovation is perceived to be consistent with the existing values, past experiences and needs of potential adopters
  • Complexity – the degree to which an innovation is perceived as difficult to use
  • Trialability – the opportunity to experiment with the innovation on a limited basis
  • Observability – the degree to which the results of an innovation are visible to others

There is obviously some crossover with TAM, but if I’m designing a learning experience for a new system, I use this as a mental checklist:

  • Are the learners going to believe the new system is better?
  • Are there compatibility issues that need to be addressed?
  • Can we do anything to reduce complexity?
  • Do the learners have a chance to see it being used?
  • Do the learners have a chance to try it out themselves?
  • and, how can we give them the opportunity to have some success with the new system?

Now, if somebody really, really doesn’t want to do something, designing instruction around these elements probably isn’t going to change their mind (Patti’s not wrong about that).  And if a new system, process or idea is really sucky, or a pain in the ass to implement, then it’s going to fail no matter how many opportunities you give the learner to try it out.

But here’s the thing – I can design a training intervention that can teach a learner how to use a new system/concept/idea, which could meet the Mager requirement (they could do it if their life depended on it), but I will design a very different (and I think better) learning experience if I consider these motivation factors as well.

I don’t want to take ownership of the entire problem of motivating learners (waaaaaay too many variables outside of my scope or control), but I do believe I share in the responsibility of creating an environment where they can succeed.

And bottom line, I believe my responsibility as a learning designer is to do my best to motivate learners by creating a learning experience where they can kick ass, because in the words of the always-fabulous Kathy Sierra, kicking ass is more fun (and better learning).

—————————————

References

Davis, F. D. (1989). “Perceived usefulness, perceived ease of use, and user acceptance of information technology.” MIS Quarterly, 13(3), 319–340.

Johnson, E. J., & Goldstein, D. G. (2003). “Do defaults save lives?” Science, 302, 1338–1339. Available at SSRN: http://ssrn.com/abstract=1324774

Mager, R. F., & Pipe, P. Analyzing Performance Problems: Or, You Really Oughta Wanna – How to Figure Out Why People Aren’t Doing What They Should Be, and What to Do About It.

Rogers, E. M. Diffusion of Innovations.

Fraught Decisions

The last post (on the pay gap in e-Learning) was a bit of a digression in terms of topics for this blog, but in some ways it wasn’t.  I don’t usually get into industry stuff, but I also think the gender pay gap is an interesting topic from a learning point of view.

The reason I say that: it’s a complicated, messy issue, and sometimes we have to create learning experiences for complicated, messy issues.  I frequently hear from clients that they really want to teach people good critical-thinking and decision-making skills.  My heart always sinks a little when they say that, because critical thinking and decision-making are skills developed over a long period of time, and not really something you can make a huge dent in during a two-hour e-Learning course.

If you are going to create a learning experience for something that is complicated and messy, it would probably be helpful to understand what exactly is making it complicated and messy.  One of the most useful things I’ve read on this is the discussion of fraught choices in the book Nudge by Richard Thaler and Cass Sunstein.

In the book, they discuss fraught choices – basically situations where people are least likely to be able to make good choices.  Thaler and Sunstein argue (for the most part rightly) that these choices are likely candidates for choice architecture, but more on that later.

Fraught Decisions

The identifying characteristics for fraught decisions are:

  • Benefits Now – Costs Later (and also Costs Now – Benefits Later)
  • Degree of Difficulty
  • Frequency
  • Feedback
  • Knowing What You Like

One of the book’s examples of a fraught decision is saving for retirement.  You pay the costs now, but don’t see the benefits for years or decades.  It’s very difficult to determine what the right amount will be, and just as difficult to wade through all the fund options, tax laws and retirement plans that make little or no sense to the lay person.  You probably only make these decisions once a year or so.  While you do get feedback in the form of account statements, that feedback is difficult to interpret (“Did the account go up because of something I did, or is it just the state of the market?”).  And unless you are a professional or a retirement-account wonk, you aren’t likely to have innate likes or dislikes to guide you (“You know, I just really like the feel of no-load mutual funds.”).

Unfraught Decisions

By contrast, an unfraught decision might be buying a sweater.  You buy a sweater, and get the benefit immediately.  In most cases it’s not a particularly difficult task (go to the store, decide you like the blue one, buy it).  You buy clothing fairly often — far more often than you choose retirement options.  You get pretty immediate feedback (“Why yes, this *is* a new sweater” or “Damn, this is itchy” or “What was I thinking?”), and you are likely to have innate preferences that require little cognitive load to manage (Appliqué Teddy Bears = No).


So, let’s go back to our salary negotiation example.

Let’s say you’ve been tasked with designing learning for “How to Handle Your Salary Negotiation.”  I think we can all agree that this meets the definition of a fraught decision:

  • Time lag between benefits and costs – Some of the benefits and costs happen immediately, but many of them will follow you for years in your career.
  • Degree of difficulty – it’s very difficult to determine what to ask for — there are all the variables of the field, your qualifications, the position itself, the employer’s situation.
  • Frequency – Unless you are changing jobs a LOT, you do not do this frequently (I’ve only done it twice in the last ten years).
  • Feedback – knowing what you should ask for is like a game of The Price Is Right (get as close as you can without going over), and you pretty much never get precise feedback (“You know, we’re glad to have you on board! And, by the way, you could have gotten another four grand and an extra week’s vacation if you’d pushed a little bit more!”).
  • Knowing what you like – You think you know what you like (“More!“), but it’s more complicated than that.

So, what are the implications for the design of instruction?

Thaler and Sunstein’s book isn’t about learning design — they talk about choice architecture.  For fraught decisions, how can you structure the choice (the options and defaults) so that people can make the best possible decisions?  Quite frankly, I would love to hear their ideas about choice architecture to address the gender pay gap.

But learning designers may or may not be able to influence that choice architecture, so what can they do instead if their subject matter is seriously fraught?  A few possibilities:

  • Time lag between benefits and costs – this one is tough, because there’s a lot of wiring that works against us.  While one of the human brain’s killer apps is specifically the ability to defer a benefit now for later gain, we still aren’t all that good at it (see this and this and this). Instructional Solutions: If you really want people to consider the now vs. later, you need instruction that speaks to the affective self (e.g. storytelling with emotional impact or scenarios with consequences), or tools that help them envision that future state (e.g. projection or modeling tools or simulations).
  • Degree of difficulty – I think this is the only one of these characteristics that standard instructional design addresses at all well.  Instructional Solutions: There are a lot of options (helping people create mental models, breaking content down into manageable chunks, job aids, etc.), but one good resource is Jeroen van Merrienboer’s book on Complex Learning, which goes into great detail on this topic.
  • Frequency – lack of frequency means that real-world trial-and-error learning is pretty much out, but you can build it into your learning design.  Instructional Solutions: Practice.  Lots of practice.  Multiple practice scenarios with as much context as you can possibly muster.  You also want to consider distributing some of that practice over time (see the Spacing Implementation Quick-Audit available here, and the scheduling sketch after this list).
  • Feedback – in a lot of situations, the difficulty with this one isn’t actually designing instruction for it — you can create scenarios with real-world consequences pretty easily.  The difficulty can be ensuring that your data is good: empirically speaking, are these actually the consequences people will encounter in the real world? The temptation is to design to the ideal outcome – the way you want things to happen.  But can you back those outcomes up through research or observation of high performers?  Instructional Solutions: Get real data about the consequences whenever possible, and use it to inform the content and design of your learning scenarios.
  • Knowing what you like – When a topic is too esoteric for there to be an innate pull in one direction or the other, how can you help people develop helpful instincts or preferences? You might ask why it matters, but well-informed liking and preference can be very useful shortcuts in fraught decision-making. Instructional Solutions: There are a number of ways to help people develop a sense of preference, such as exposure to lots and lots of examples or embedding the information in stories. Also, decision-making job aids can act as a stand-in shortcut when preference is absent.
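
For the Frequency item above, the scheduling sketch might look something like this (a minimal example; the expanding intervals are illustrative defaults, not prescriptions from any specific study):

```python
# Minimal sketch of a spaced practice schedule for low-frequency skills.
# The expanding intervals are illustrative, not from a specific study.
from datetime import date, timedelta

def practice_schedule(start, gaps_in_days=(1, 3, 7, 14, 30)):
    """Return the initial session date plus spaced follow-up dates."""
    sessions = [start]
    for gap in gaps_in_days:
        sessions.append(sessions[-1] + timedelta(days=gap))
    return sessions

for session in practice_schedule(date(2013, 1, 7)):
    print(session.isoformat())
```

Even a simple expanding schedule like this beats the one-and-done workshop for a decision people only face every few years.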

So, think about our salary negotiations training — how would you design training for that fraught decision*?

How would you approach it?  Which of the solutions above would be most helpful?  What else can you think of that I’ve missed?

Nothing above is revolutionary in terms of training solutions, but if you’ve ever had difficulty selling a scenario or practice-based approach to a stakeholder, this might be a useful wedge for why those approaches are necessary if you want to teach those pesky critical thinking or decision-making skills.

I’ve been thinking a lot lately about good matching of specific instructional techniques to specific challenges — I think that instructional design tends to wield its brush pretty broadly sometimes (“Scenario-based learning is good” is true a lot of the time, but always?), and that we need more critical thinking about matching the tools to the problem, rather than matching the solution to the available tools (I really like where BJ Fogg is going with this in behavior change, btw).

So, what experiences have you had with fraught choices?  Any strategies you would like to add?  Would love to hear from you in the comments below.

(* Don’t you like how I made this about giving you the opportunity to think it through, rather than about the fact that I’m running out of time to get this blog post done?)