I should have posted this a few days ago, but I’m doing a webcast tomorrow (Wednesday May 15th, 1pm ET) for ASTD on using the psychology of game design for learning. Talking about some familiar stuff (flow, hyperbolic discounting) and a few new things (visceral feedback). Not sure if you need to be an ASTD member to attend, but I *think* you can just sign up:
This is excerpted and expanded from a post that I wrote for the Tin Can blog.
We’ve talked about WIIFM (What’s in it for me?) for years – it’s one of those things you always hear that you need to include in learning experiences to persuade your learners to pay attention.
I’ve started to think that’s a really unsatisfactory view of the world – most of the people I know don’t need a sales pitch to do their jobs, or to learn something to help them do that. Instead, they need to know that the thing they are learning is actually useful and necessary.
One of my favorite studies is this one from Dan Ariely, called “Man’s search for meaning: The case of Legos.”
The paper starts with a discussion of meaning and work:
“Most children think of their potential future occupations in terms of what they will be (ﬁremen, doctors, etc.), not merely what they will do for a living. Many adults also think of their job as an integral part of their identity. At least in the United States, “What do you do?” has become as common a component of an introduction as the anachronistic “How do you do?” once was, yet identity, pride, and meaning are all left out from standard models of labor supply.”
The paper goes on to explain “we view labor as meaningful to the extent that (a) it is recognized and/or (b) has some point or purpose.”
They did two actual experiments — one where they had participants do a word problem exercise, and a second where participants constructed figures out of Legos.
All the participants were paid money for their efforts, but some of the participants had their papers shredded as soon as they were done (without anyone even looking at the page), or their Lego figures immediately broken back up in front of them (I particularly love that they labeled this last instance as the “Sisyphean” condition).
You can read the details here, but essentially, people worked significantly longer, or for less money, in the condition where their work was meaningful. That shouldn’t be the case if people were primarily motivated by what they could get out of the situation (i.e. $$$). Dan Pink talks about several similar studies in his book Drive, when he talks about the importance of autonomy, mastery and purpose.
So, my issue with WIIFM is that, while it probably doesn’t hurt to let people know about the benefits of something, it’s not really a complete answer.
How about WCIDWT?
I think we should talk about WCIDWT (What can I do with that?). If I have the knowledge or skill that you are trying to teach me, what will I be able to do that I couldn’t before?
Kathy Sierra talks about this when she compares old-school marketing (“Buy this because we kick ass”) with a focus on the user (“Buy this because we want you to kick ass”). What can *you* (the end user) do to be more awesome, to know more, and to do more?
I’ve been playing around with the idea of accomplishment-based learning — using accomplishments as the fundamental organization of content and learning experiences, so that the very structure of the course is about learners accomplishing things (*real* accomplishments – not finish-the-lesson or pass-the-test accomplishments). For example, which Photoshop course would you rather take?
So, my issue with WIIFM is that it feels transactional — I’m trying to *buy* your attention by waving shiny things, when instead it should be about your goals, and what you can do. WIIFM also feels disrespectful of learners for those same reasons.
Thoughts? Opinions? Examples? Violent disagreement? Would love to hear about it in the comments below.
So, I’ve had a crazy spring so far — between a brutal travel schedule and some unexpected health stuff (all resolved now), there’s barely been time to draw breath.
There have been lots of good things, including some interesting projects in the works. A particularly good thing recently was a really nice review of the book by Clive Shepherd:
“There’s a book I’ve been meaning to write which I hoped would address the problem. I tentatively called it ‘What every L&D professional needs to know about learning’ (not so catchy, I know). But I’ve been beaten to the gun by Julie Dirksen.” – Clive Shepherd
Still giddily fanning myself a bit over that…
For local folks (Minneapolis/St. Paul area), there are a few things going on also:
On Thursday (April 12th, 2012), I’m doing the Design for Behavior Change talk for the local UPA (Usability Professionals Association) chapter. The event starts at 6:15 PM, and the talk starts at 6:45 PM. You can get details here http://www.upamn.org/events?eventId=456463&EventViewMode=EventDetails
Also, the fantastic Connie Malamed (author of Visual Language for Designers and http://theelearningcoach.com/) is in town this week, so check out her talk on Friday:
Your Brain on Graphics: Research-Inspired Design, Friday April 13th
Information here: http://www.pactweb.org/ (you can also get details about her 1/2 day workshop at that link)
Program Details: Learning through visuals opens up new pathways in the brain. You can optimize opportunities for visual learning and provide better learning experiences when you understand how people perceive and process visual information. During this presentation, you will learn how graphics can leverage the strengths and compensate for the weaknesses of our cognitive architecture. You’ll learn how to make design decisions based on research. We’ll look at lots of examples in the process. Topics include:

- How our brains are hardwired for graphics
- How to speed up your visual message
- How to make graphics cognitively efficient
- How to speak to the emotions through visuals
- How to visualize abstractions

This presentation is for anyone who selects, conceives of, designs or creates visuals, or anyone interested in visual communication.
Location: The Metropolitan, 5418 Wayzata Boulevard, Golden Valley, MN 55418 When: 8:30-11am
(She also wrote a very nice review of the book, btw)
So, still need a gift for the design geek on your holiday shopping list?*
I’ve mentioned Stephen Anderson before (I’m a big ol’ fan), but I particularly love his Mental Notes cards, which cover dozens of psychology principles that impact how we design. Need to jump-start your design process? Pull a few cards out of the deck, and talk about how you can incorporate those ideas.
You can order them here: http://getmentalnotes.com/
I particularly mention it now because (aside from the fact that these are awesome) Stephen is donating half the proceeds right now.
* Yes, I know it’s a little late to order holiday presents (story of my life), but you can print some sample cards to use as a placeholder gift until the real ones arrive.
Check out this fantastic article on decision fatigue in the New York Times. It addresses a lot of things I’ve been interested in lately, like blood sugar levels and self-control. I think this is a really useful topic for learning folks to be aware of, because we frequently ask our learners to exert self-control to stay focused and concentrate on the subject matter being taught.
Just had this quick interchange with Patti Shank on twitter:
This is a totally fair comment on Patti’s part — you can’t force someone to be motivated (and undoubtedly some of our disagreement stems from semantics – not that THAT ever happens on twitter). A lot of the conversation around gamification (for a heated throw down on the topic, read the comments here) is about the dubious and likely counterproductive effects of extrinsic rewards as motivators. According to Alfie Kohn in his book Punished by Rewards, a big part of the problem with extrinsic motivators is that they’re about controlling the learner, not helping or supporting them.
So that part I totally agree with – you can’t control your learner, or control their motivation.
But design decisions do have an impact on human behavior. For example, this chart shows the rate at which people agree to be organ donors in different European countries:
In the blue countries, being an organ donor is selected by default, and the person has to de-select it if they do not want to be a donor. In the yellow countries, the default is that the person will not be an organ donor, and the person has to actively choose to select organ donor status.
Now it could be that some people aren’t paying attention, but at least some of that difference is presumably due to people who do notice, but just roll with the default (you can read more about it here - scroll down to the Dan Ariely section).
So the way something is designed can make a difference in behavior. Of course, that’s not a training example, so let’s take a closer look at how training might come into play.
Is it a training problem?
Robert Mager used this question as a litmus test:
“If you held a gun to the person’s head, would they be able to do the task?”
He further discusses this in his book on Analyzing Performance Problems but later uses the less graphic “could they do the task if their life depended on it?” question (Thiagi advocates for the version “Could they do it if you offered them a million dollars?” if you prefer a non-violent take).
So basically, if someone could do the behavior under extreme pressure, then they clearly know how to do it, and it’s not a knowledge or skills problem, and therefore it’s outside the domain of training (it could be down to the person’s specific motivation, it could be a workplace management issue, etc.).
Here’s where I disagree
I think the way you design learning experiences can have an impact on the likelihood of people engaging in the desired behavior, and that it is part of an instructional designer’s responsibility. I don’t think you can control people, or force the issue, but I do think the experience they have when they are learning about something can make a difference in the decisions they make later.
The Technology Acceptance Model
The technology acceptance model (TAM) is an information systems model that looks at which variables affect whether or not someone adopts a new technology. It’s been fairly well researched (and isn’t without its critics), but I find it to be a useful frame. At the heart of the model are two variables:

- Perceived usefulness – the degree to which a person believes that using the system would enhance their performance
- Perceived ease of use – the degree to which a person believes that using the system would be free of effort
It’s not a complicated idea – if you want someone to use something, they need to believe that it’s actually useful, and that it won’t be a major pain in the ass to use.
TAM specifically addresses technology adoption, but those variables make sense for a lot of things. You want someone to use a new method of coaching employees? Or maybe a new safety procedure? If your audience believes that it’s pointless (i.e. not useful), or that it’s going to be a major pain (i.e. not easy to use), then they will probably figure out ways around it. Then it either fails to get adopted, or you get into all sorts of issues around punishments, incentives, etc.
I keep TAM in mind when I design anything that requires adopting a new technology or system or practice (which is almost everything I do). Some of the questions I ask are:
- Is the new behavior genuinely useful? Sometimes it’s not useful for the learner – it’s useful to the organization, or it’s a compliance necessity. In those cases, it can be a good idea to acknowledge it and make sure the learner understands why the change is being made – that it isn’t just the organization messing with their workflow, but that it’s a necessary change for other reasons.
- If it is useful, how will the learner know that? You can use case studies, examples, people talking about how it’s helped them, or give the learner the experience of it being useful through simulations. Show, Don’t Tell becomes particularly important here. You can assert usefulness until you are blue in the face, and you won’t get nearly as much buy-in as you will from letting learners try it, or from positive endorsements from trusted peers.
- Is the new behavior easy to use? If it’s not, why not? Is it too complex? Is it because people are too used to their current system? People will learn to use even the most hideous system by mentally automating tasks (see these descriptions of the QWERTY keyboard and the Bloomberg Terminal), but when you then ask them to change, it’s really difficult because they can no longer use those mental shortcuts, and the new system feels uncomfortably effortful until they’ve had enough practice.
- If it’s not easy to use, is there anything that can be done to help that? Can the learners practice enough to make it easier? Can you make job aids or other performance supports? Can you roll it out in parts so they don’t have to tackle it all at once? Can you improve the process or interface to address ease-of-use issues?
Everett Rogers’ Diffusion of Innovations
- Relative Advantage – the degree to which an innovation is perceived as being better than the idea it supersedes
- Compatibility – the degree to which an innovation is perceived to be consistent with the existing values, past experiences and needs of potential adopters
- Complexity – the degree to which an innovation is perceived as difficult to use
- Trialability – the opportunity to experiment with the innovation on a limited basis
- Observability – the degree to which the results of an innovation are visible to others
There is obviously some crossover with TAM, but if I’m designing a learning experience for a new system, I use this as a mental checklist:
- Are the learners going to believe the new system is better?
- Are there compatibility issues that need to be addressed?
- Can we do anything to reduce complexity?
- Do the learners have a chance to see it being used?
- Do the learners have a chance to try it out themselves?
- And how can they have the opportunity to have some success with the new system?
Now, if somebody really, really doesn’t want to do something, designing instruction around these elements probably isn’t going to change their mind (Patti’s not wrong about that). And if a new system, process or idea is really sucky, or a pain in the ass to implement, then it’s going to fail no matter how many opportunities you give the learner to try it out.
But here’s the thing – I can design a training intervention that can teach a learner how to use a new system/concept/idea, which could meet the Mager requirement (they could do it if their life depended on it), but I will design a very different (and I think better) learning experience if I consider these motivation factors as well.
I don’t want to take ownership of the entire problem of motivating learners (waaaaaay too many variables outside of my scope or control), but I do believe I share in the responsibility of creating an environment where they can succeed.
And bottom line, I believe my responsibility as a learning designer is to do my best to motivate learners by creating a learning experience where my learners can kick ass, because in the words of the always-fabulous Kathy Sierra kicking ass is more fun (and better learning).
Davis, F. D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly 13(3): 319–340
Johnson, Eric J. and Goldstein, Daniel G. (2003), “Do Defaults Save Lives?”, Science 302: 1338–1339. Available at SSRN: http://ssrn.com/abstract=1324774
Mager, Robert and Pipe, Peter, Analyzing Performance Problems: Or, You Really Oughta Wanna – How to Figure Out Why People Aren’t Doing What They Should Be, and What to Do About It
Rogers, Everett, Diffusion of Innovations
The last post (on the pay gap in e-Learning) was a bit of a digression in terms of topics for this blog, but it kind of wasn’t at the same time. I don’t usually get into industry stuff, but I also think that the gender pay gap is an interesting topic from a learning point of view.
The reason I say that: it’s a complicated, messy issue, and sometimes we have to create learning experiences for complicated, messy issues. I frequently hear from clients that they really want to teach people good, critical, decision-making skills. My heart always sinks a little when they say that, because critical thinking and decision-making are definitely skills developed over a long period of time, and not really something in which you can make a huge dent during a 2-hr e-Learning course.
If you are going to create a learning experience for something that is complicated and messy, it would probably be helpful to understand what exactly is making it complicated and messy. One of the most useful things I’ve read on this is the discussion of fraught choices in the book Nudge by Richard Thaler and Cass Sunstein.
In the book, they discuss fraught choices – basically situations where people are least likely to be able to make good choices. Thaler and Sunstein argue (for the most part rightly) that these choices are likely candidates for choice architecture, but more on that later.
The identifying characteristics for fraught decisions are:
- Benefits Now – Costs Later (and also Costs Now – Benefits Later)
- Degree of Difficulty
- Frequency
- Feedback
- Knowing What You Like
One of the book’s examples of a fraught decision is saving for retirement. You have costs now, but don’t see the benefits for years or decades. It’s very difficult to determine what the right amount really will be, and also difficult to wade through all the fund options, tax laws and retirement plans that make little or no sense to the lay person. You probably only make these decisions once a year or so, and while you do get feedback in the form of account statements, it’s difficult to interpret that feedback (“Did the account go up because of something I did, or is it just the state of the market?”). And unless you are a professional or a retirement account wonk, you aren’t likely to have innate likes or dislikes to guide you (“You know, I just really like the feel of no-load mutual funds.”).
By contrast, an unfraught decision might be buying a sweater. You buy a sweater, and get the benefit immediately. In most cases it’s not a particularly difficult task (Go to store, decide you like the blue one, buy it). You buy clothing fairly often — far more often than you choose retirement options. You get pretty immediate feedback (“Why yes, this *is* a new sweater” or “Damn, this is itchy” or “What was I thinking?”), and you are likely to have innate preferences that require little cognitive load to manage (Appliqué Teddy Bears = No).
So, let’s go back to our salary negotiation example.
Let’s say you’ve been tasked with designing learning for “How to Handle Your Salary Negotiation.” I think we can all agree that this meets the definition of a fraught decision:
- Time lag between benefits and costs – Some of the benefits/costs happen immediately, but much of both those benefits and costs will follow you for years in your career.
- Degree of difficulty - it’s very difficult to determine what to ask for — there are all the variables of the field, your qualifications, the position itself, the employer’s situation.
- Frequency - Unless you are changing jobs a LOT, you do not do this frequently (I’ve only done it twice in the last ten years).
- Feedback - knowing what you should ask for is like a game of The Price Is Right — get as close as you can without going over — and you pretty much never get precise feedback (“You know, we’re glad to have you on board! And, by the way, you could have gotten another four grand and an extra week’s vacation if you’d pushed a little bit more!”).
- Knowing what you like – You think you know what you like (“More!“), but it’s more complicated than that.
So, what are the implications for the design of instruction?
Thaler and Sunstein’s book isn’t about learning design — they talk about choice architecture. For fraught decisions, how can you structure the choice (the options and defaults) so that people can make the best possible decisions? Quite frankly, I would love to hear their ideas about choice architecture to address the gender pay gap.
But learning designers may or may not be able to influence that choice architecture, so what can they do instead if their subject matter is seriously fraught? A few possibilities:
- Time lag between benefits and costs – this one is tough, because there’s a lot of wiring that works against us. While one of the human brain’s killer apps is specifically the ability to defer a benefit now for later gain, we still aren’t all that good at it (see this and this and this). Instructional Solutions: If you really want people to consider the now vs. later, you need instruction that speaks to the affective self (e.g. storytelling with emotional impact or scenarios with consequences), or tools that help them envision that future state (e.g. projection or modeling tools or simulations).
- Degree of difficulty - I think this is the only one of the characteristics that standard instructional design addresses at all well. Instructional Solutions: There are a lot of options (helping people create mental models, breaking content down into manageable chunks, job aids, etc.), but one good resource is Jeroen van Merrienboer’s book on Complex Learning, which goes into great detail on this topic.
- Frequency - lack of frequency means that real-world trial and error learning is pretty much out, but you can put it into your learning design. Instructional Solutions: Practice. Lots of practice. Multiple practice scenarios with as much context as you can possibly muster. You also want to consider distributing some of that practice over time (see the Spacing Implementation Quick-Audit available here).
- Feedback - in a lot of situations, the difficulty with this one isn’t actually designing instruction for it — you can create scenarios with real-world consequences pretty easily. The difficulty here can be ensuring that your data is good — empirically speaking, are these actually the consequences people will encounter in the real world? The temptation here is to design to the ideal solution – this is how you want the outcome to happen. But can you back those outcomes up through research or observation of high performers? Instructional Solutions: Get real data about the consequences whenever possible, and use it to inform the content and design of your learning scenarios.
- Knowing what you like – When a topic is too esoteric for there to be an innate pull in one direction or the other, how can you help people develop helpful instincts or preferences? You might ask why it matters, but well-informed liking and preference can be very useful shortcuts in fraught decision-making. Instructional Solutions: There are a number of ways to help people develop a sense of preference, such as exposure to lots and lots of examples or embedding the information in stories. Also, decision-making job aids can act as a stand-in shortcut when preference is absent.
So, think about our salary negotiations training — how would you design training for that fraught decision*?
How would you approach it? Which of the solutions above would be most helpful? What else can you think of that I’ve missed?
Nothing above is revolutionary in terms of training solutions, but if you’ve ever had difficulty selling a scenario or practice-based approach to a stakeholder, this might be a useful wedge for why those approaches are necessary if you want to teach those pesky critical thinking or decision-making skills.
I’ve been thinking a lot lately about good matching of specific instructional techniques to specific challenges — I think that instructional design tends to wield its brush pretty broadly sometimes (“Scenario-based learning is good” is true a lot of the time, but always?), and that we need more critical thinking about matching the tools to the solution, rather than matching the solution to the available tools (I really like where BJ Fogg is going with this in behavior change, btw).
So, what experiences have you had with fraught choices? Any strategies you would like to add? Would love to hear from you in the comments below.

(* Don’t you like how I made this about giving you the opportunity to think it through, rather than about the fact that I’m running out of time to get this blog post done?)