New DFHPL Facebook Group – Following the Conversation

So nerdy shop talk is basically my favorite thing, and the internet is a spectacular place to geek out about whatever your passion is.  Where those conversations happen seems to shift as the internet evolves.  I used to have most of my nerd conversations on Twitter, but things seem to have shifted more toward Facebook and LinkedIn.  I like the possibilities for longer conversations that threaded discussions provide, and I’m opting for Facebook over LinkedIn for the time being.

If you are so inclined, come hang out:

https://www.facebook.com/groups/designforhowpeoplelearn/


 

Extrinsic Motivation and Games

Hey folks, this is a really excellent discussion of the issues and research around using extrinsic rewards as a way to motivate behavior. Chris Hecker is looking at the question through the lens of game design, but it really, really applies to learning design as well.

There’s a write-up at the website, and a recording of the talk if you scroll down.  It’s long-ish, but well worth the listen.


 

Found this via Amy Jo Kim on twitter: https://twitter.com/amyjokim

 

 

Is learner motivation your responsibility?

Just had this quick interchange with Patti Shank on twitter:


This is a totally fair comment on Patti’s part: you can’t force someone to be motivated (and undoubtedly some of our disagreement stems from semantics – not that THAT ever happens on twitter).  A lot of the conversation around gamification (for a heated throwdown on the topic, read the comments here) is about the dubious and likely counterproductive effects of extrinsic rewards as motivators.  According to Alfie Kohn in his book Punished by Rewards, a big part of the problem with extrinsic motivators is that they’re about controlling the learner, not helping or supporting them.

That much I totally agree with – you can’t control your learners, or control their motivation.

But design decisions do have an impact on human behavior.  For example, this chart shows the rate of people who agree to be organ donors in different European countries:

In the blue countries, being an organ donor is selected by default, and the person has to de-select it if they do not want to be a donor.  In the yellow countries, the default is that the person will not be an organ donor, and the person has to actively select organ donor status.

Now it could be that some people aren’t paying attention, but at least some of that difference is presumably due to people who do notice, but just roll with the default (you can read more about it here – scroll down to the Dan Ariely section).
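Just to make the “roll with the default” mechanism concrete, here’s a minimal sketch with made-up numbers (these are not the actual figures from the research) showing how a pre-selected option plays out when a big chunk of people simply keep whatever is already selected:

```python
# Toy model of the default effect, with hypothetical numbers: some share of
# people keep whatever option is pre-selected, and only the rest make an
# active choice.

def donor_rate(default_is_donor: bool,
               keep_default_share: float = 0.7,
               active_opt_in_share: float = 0.4) -> float:
    """Share of people who end up registered as donors (illustrative only)."""
    active_share = 1.0 - keep_default_share
    from_default = keep_default_share if default_is_donor else 0.0
    from_active_choice = active_share * active_opt_in_share
    return from_default + from_active_choice

print(f"Donor pre-selected (opt-out form): {donor_rate(True):.0%}")     # 82%
print(f"Donor not pre-selected (opt-in form): {donor_rate(False):.0%}")  # 12%
```

Same people, same choice – the only thing that changed was which box was checked when they arrived.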

So the way something is designed can make a difference in behavior.  Of course, that’s not a training example, so let’s take a closer look at how training might come into play.

Is it a training problem?

Robert Mager used this question as a litmus test:

“If you held a gun to the person’s head, would they be able to do the task?”

He discusses this further in his book Analyzing Performance Problems, but later uses the less graphic question “could they do the task if their life depended on it?” (Thiagi advocates for the version “Could they do it if you offered them a million dollars?” if you prefer a non-violent take).

So basically, if someone could do the behavior under extreme pressure, then they clearly know how to do it.  It’s not a knowledge or skills problem, and it’s therefore outside the domain of training (it could come down to the person’s specific motivation, it could be a workplace management issue, etc.).

Here’s where I disagree

I think the way you design learning experiences can have an impact on the likelihood of people engaging in the desired behavior, and that it is part of an instructional designer’s responsibility.  I don’t think you can control people, or force the issue, but I do think the experience they have when they are learning about something can make a difference in the decisions they make later.

There are a couple of models that influence my thinking on this, but the two I use most often are the Technology Acceptance Model and Everett Rogers’ Diffusion of Innovations.

The Technology Acceptance Model

The technology acceptance model (TAM) is an information systems model that looks at what variables affect whether or not someone adopts a new technology.  It’s been fairly well researched (and isn’t without its critics), but I find it to be a useful frame.  At the heart of the model are two variables:

  • Perceived usefulness – the degree to which a person believes the technology will actually help them get something done
  • Perceived ease of use – the degree to which a person believes the technology will be relatively free of effort to use

It’s not a complicated idea – if you want someone to use something, they need to believe that it’s actually useful, and that it won’t be a major pain in the ass to use.

TAM specifically addresses technology adoption, but those variables make sense for a lot of things.  You want someone to use a new method of coaching employees?  Or maybe a new safety procedure?  If your audience believes that it’s pointless (ie not useful), or it’s going to be a major pain (ie not easy to use), then they will probably figure out ways around it. Then it either fails to get adopted or you get into all sorts of issues around punishments, incentives, etc.

I keep TAM in mind when I design anything that requires adopting a new technology or system or practice (which is almost everything I do).  Some of the questions I ask are below, with a rough sketch of them as a checklist after the list:

  • Is the new behavior genuinely useful? Sometimes it’s not useful for the learner – it’s useful to the organization, or it’s a compliance necessity. In those cases, it can be a good idea to acknowledge it and make sure the learner understands why the change is being made – that it isn’t just the organization messing with their workflow, but that it’s a necessary change for other reasons.
  • If it is useful, how will the learner know that? You can use case studies, examples, or people talking about how it’s helped them, or give the learner the experience of it being useful through simulations.  Show, Don’t Tell becomes particularly important here.  You can assert usefulness until you are blue in the face, and you won’t get nearly as much buy-in as you will from letting people try it or hear positive endorsements from trusted peers.
  • Is the new behavior easy to use? If it’s not, why not? Is it too complex? Is it because people are too used to their current system?  People will learn to use even the most hideous system by mentally automating tasks (see these descriptions of the QWERTY keyboard and the Bloomberg Terminal), but when you ask them to change, it’s really difficult because they can no longer use those mental shortcuts, and the new system feels uncomfortably effortful until they’ve had enough practice.
  • If it’s not easy to use, is there anything that can be done to help that? Can the learners practice enough to make it easier?  Can you make job aids or other performance supports?  Can you roll it out in parts so they don’t have to tackle it all at once?  Can you improve the process or interface to address ease-of-use issues?
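None of this appears in the TAM literature – it’s just my own way of jotting things down – but here’s a rough sketch of those questions as a fill-in checklist for a planned change (the coaching rollout below is a hypothetical example):

```python
# A rough sketch of the questions above as a fill-in checklist for a planned
# change. The structure and the example content are invented for illustration;
# this is not a formal instrument from the TAM research.

from dataclasses import dataclass, field

@dataclass
class AdoptionCheck:
    change: str                         # the new system/process/behavior
    useful_to_learner: bool             # perceived usefulness, from the learner's side
    why_it_matters: str                 # the reason we give if it mostly benefits the org
    how_usefulness_is_shown: list[str]  # case studies, peer stories, simulations...
    easy_to_use: bool                   # perceived ease of use
    ease_supports: list[str] = field(default_factory=list)  # practice, job aids, phased rollout

# Hypothetical example: rolling out a new coaching method
coaching_rollout = AdoptionCheck(
    change="New employee coaching method",
    useful_to_learner=False,
    why_it_matters="Needed for consistency across teams (an organizational reason)",
    how_usefulness_is_shown=["peer testimonials", "short practice simulation"],
    easy_to_use=False,
    ease_supports=["one-page job aid", "roll it out in two phases"],
)
```

If either of the two yes/no answers comes back False, the fields next to it are where the design effort goes.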

Everett Rogers’ Diffusion of Innovations

The other model I find really useful is from Everett Rogers’ book Diffusion of Innovations.  If you haven’t read it, go buy it now.  Yes, NOW.  It’s actually a really entertaining read because it’s packed with intriguing case studies.
It’s loaded with useful stuff, but the part I want to focus on right now is his list of characteristics that affect whether a user adopts or rejects an innovation:
  • Relative Advantage – the degree to which an innovation is perceived as being better than the idea it supersedes
  • Compatibility – the degree to which an innovation is perceived to be consistent with the existing values, past experiences and needs of potential adopters
  • Complexity – the degree to which an innovation is perceived as difficult to use
  • Trialability – the opportunity to experiment with the innovation on a limited basis
  • Observability – the degree to which the results of an innovation are visible to others

There is obviously some crossover with TAM, but if I’m designing a learning experience for a new system, I use this as a mental checklist (there’s a rough scoring sketch after the list):

  • Are the learners going to believe the new system is better?
  • Are there compatibility issues that need to be addressed?
  • Can we do anything to reduce complexity?
  • Do the learners have a chance to see it being used?
  • Do the learners have a chance to try it out themselves?
  • And how can they have the opportunity to have some success with the new system?
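Again, nothing like this appears in Rogers’ book – it’s just a toy way of making the checklist concrete – but here’s a rough scoring sketch that rates a hypothetical rollout on the five characteristics and flags the ones that need the most design attention:

```python
# A toy scoring sketch for the five characteristics (invented for illustration,
# not something from Rogers). Scores run from 1 (weak) to 5 (strong); complexity
# is scored as "simplicity" so that higher is always better.

def weak_spots(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the characteristics scored below the threshold."""
    return [name for name, score in scores.items() if score < threshold]

# Hypothetical ratings for rolling out a new safety procedure
new_safety_procedure = {
    "relative advantage": 4,  # learners can see it prevents real incidents
    "compatibility": 2,       # clashes with how crews currently hand off shifts
    "simplicity": 3,          # a few extra steps, but nothing exotic
    "trialability": 2,        # no safe place to practice it yet
    "observability": 4,       # results show up in reports everyone sees
}

print(weak_spots(new_safety_procedure))  # ['compatibility', 'trialability']
```

In this made-up case, the design work would go into compatibility (fitting it into the existing shift handoff) and trialability (giving people a safe place to practice).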

Now, if somebody really, really doesn’t want to do something, designing instruction around these elements probably isn’t going to change their mind (Patti’s not wrong about that).  And if a new system, process or idea is really sucky, or a pain in the ass to implement, then it’s going to fail no matter how many opportunities you give the learner to try it out.

But here’s the thing – I can design a training intervention that can teach a learner how to use a new system/concept/idea, which could meet the Mager requirement (they could do it if their life depended on it), but I will design a very different (and I think better) learning experience if I consider these motivation factors as well.

I don’t want to take ownership of the entire problem of motivating learners (waaaaaay too many variables outside of my scope or control), but I do believe I share in the responsibility of creating an environment where they can succeed.

And bottom line, I believe my responsibility as a learning designer is to do my best to motivate learners by creating a learning experience where my learners can kick ass, because in the words of the always-fabulous Kathy Sierra, kicking ass is more fun (and better learning).

—————————————

References

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

Johnson, E. J., & Goldstein, D. G. (2003). Do defaults save lives? Science, 302, 1338–1339. Available at SSRN: http://ssrn.com/abstract=1324774

Mager, R. F., & Pipe, P. Analyzing Performance Problems: Or, You Really Oughta Wanna – How to Figure Out Why People Aren’t Doing What They Should Be, and What to Do About It.

Rogers, E. M. Diffusion of Innovations.

Daniel Pink and Framing the Task

So I finally got around to listening to the Daniel Pink TED talk on motivation (it had been lingering in my Google Reader for a while).  I had the same reaction I’ve had to Daniel Pink in the past, which is that he starts strong, but gets soft as he goes along (I’m following along with A, then I see how it leads to B, then C, then – wait! how did we get to K already?).  But it’s definitely (like most TED talks) worth a listen, and I’m happy he’s talking about the topic of motivation.

He does a good job of throwing ideas out to start the conversation, which is the main strength of the talk.  He talks about a study that looked at a creative problem-solving task.

You’ll get the full explanation in the video, but basically you are given a box of tacks, a candle, matches, and a wall, and you have to figure out how to attach the candle to the wall.

The answer is (spoiler alert) to dump the tacks out of the box, and use it as a holder for the candle.


This has been around for quite a while, but the really interesting aspect of it is that Daniel Pink talked about an additional study that looked at financial incentives for completing this task.  The researchers used two conditions:  people who started with tacks in the box, and people who started with the tacks out of the box (an easier task that involves less creative problem solving).

Basically, financial incentives improved performance on the easier task, but made performance worse on the harder task.

Daniel Pink goes on to talk about how incentives for knowledge workers are structured all wrong, and the notion that all you need to do to motivate performance is say “if you do X, you’ll get Y reward” is totally feeble.   Good point, and I will definitely get and read the book when it comes out.

BUT…

There’s another path here to follow, which is looking at how you frame the task.

In training/learning environments, we are (at least theoretically) all about trying to improve people’s performance, and how the task is framed makes a huge difference in how easy or difficult the task is to accomplish.  Basically, when we are creating learning experiences, are we looking for opportunities to take the tacks out of the box for our learners?

I’m not talking about dumbing it down; I’m talking about reducing unneeded complexity, and framing the task in a way that increases the likelihood of success.

There are methods out there, like job aids, that can help with this quite a bit.  But I also wanted to talk about implementation intentions.


There’s an academic/researcher named Peter Gollwitzer who spends a lot of time on this (info here and here).  Basically, he explains implementation intention as follows:

Implementation intentions are if-then plans that connect anticipated critical situations with responses that will be effective in accomplishing one’s goals.  Whereas … goals specify what one wants to do/achieve (i.e., “I intend to perform behavior X!”…), implementation intentions specify the behavior that one will perform in the service of goal achievement if the anticipated critical situation is actually encountered (i.e. “If situation Y occurs, then I will initiate goal-directed behavior Z!”).

Basically, if you are trying to quit smoking, you need more than the goal (“I’m going to stop smoking.”); you need the implementation intention of how you will actually do it.


So you could say:

If I get a craving, I will distract myself.


You have situation Y (“If I get a craving”) and behavior Z (“I will distract myself.”).  This is more effective than just the goal (“I will quit smoking.”).  But you can make it much more effective by being specific:

If I get twitchy for a cigarette, I will chew gum

If stress makes me want a cigarette, I will call my sister

When I want an afternoon smoke break, I will take a 5-minute walk outside
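Since implementation intentions are literally if-then pairs, they map neatly onto code.  Here’s a minimal sketch (my own illustration, not anything from Gollwitzer’s papers) of a quit-smoking plan as a lookup from anticipated critical situations to goal-directed behaviors:

```python
# Implementation intentions as explicit "if situation Y, then behavior Z" pairs.
# The plan below is an invented example for illustration.

quit_smoking_plan = {
    # anticipated critical situation -> goal-directed behavior
    "I get twitchy for a cigarette": "chew gum",
    "stress makes me want a cigarette": "call my sister",
    "I want an afternoon smoke break": "take a 5-minute walk outside",
}

def respond(situation: str) -> str:
    """Return the planned response, or fall back to the (much weaker) vague goal."""
    behavior = quit_smoking_plan.get(situation)
    if behavior is None:
        return "Remember the goal: quit smoking."
    return f"If {situation}, then I will {behavior}."

print(respond("stress makes me want a cigarette"))
# If stress makes me want a cigarette, then I will call my sister.
```

The dictionary is doing the same job the specific statements above do: it forces you to decide, in advance, exactly which cue triggers exactly which behavior.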

So could you use this as a tool in training classes/applications?  I’m specifically thinking about a lot of the soft-skills training that goes on – single-event training that has historically not led to much behavioral change.  How many times have you learned about a good idea/tool/concept and never done anything with it?

How about an activity where you have people identify their own anticipated critical situations, and have a specific behavioral strategy for responding (when I have problem X with my difficult employee, I will do Y)? Remember, the specificity is crucial to success.

I think the reason I was thinking about this in conjunction with the Daniel Pink material was the idea that things are different for knowledge workers (his whole-brain idea) than they are for production workers – that by taking the tacks out of the box, you fundamentally change the nature of the activity.

So what I really like about implementation intentions is that, again, they’re not about dumbing things down, and they can apply regardless of the subject.  For example, brainstorming:

Goal: Brainstorming as many different uses for a common knife as possible


How many do you think you could come up with?  How about if you frame the question like this:

If I have found one solution, I will immediately try to find another as soon as possible.

Do you think you’d come up with more responses in the second condition?  Research that Gollwitzer describes suggests that it’s more effective for group brainstorming (specifically, they were looking at the impact on social loafing, btw).

So, if you are interested in giving it a try, create your own version of this statement:

The next time I

have an instructional design problem where I am concerned about transfer of behavior,

I will

see if I can create an activity that allows the learner to develop implementation intentions.

What would your statement look like?

———————————————————-

References:

Gollwitzer, P. M. (2006). Successful goal pursuit. In Q. Jing, H. Zhang, & K. Zhang (Eds.), Psychological science around the world (Vol. 1, pp. 143-159). Philadelphia: Psychology Press.

Gollwitzer, P. M., Fujita, K., & Oettingen, G. (2004). Planning and the implementation of goals. In R. F. Baumeister & K. D. Vohs (Eds.), Handbook of self-regulation: Research, theory, and applications (pp. 211-228). New York: Guilford Press.

Endress, H. (2001). Die Wirksamkeit von Vorsätzen auf Gruppenleistungen: Eine empirische Untersuchung anhand von Brainstorming [Implementation intentions and the reduction of social loafing in a brainstorming task]. Unpublished MA thesis, University of Konstanz, Germany.