Stephen Anderson – From Paths to Sandboxes

I sat in on Karl Fast and Stephen Anderson's Design for Understanding workshop at the IA Summit last week, and it was double-plus-good.

Here are Stephen’s slides from his IA Summit presentation.  Excellent stuff relating to autonomy in learning environments, and multitudes more:

Extrinsic Motivation and Games

Hey folks, this is a really excellent discussion of the issues and research around using extrinsic rewards as a way to motivate behavior. Chris Hecker is looking at the question through the lens of game design, but it really, really applies to learning design as well.

There’s a write-up at the website, and a recording of the talk if you scroll down.  It’s long-ish, but well worth the listen.


Found this via Amy Jo Kim on twitter: https://twitter.com/amyjokim

 

 

Gameful Webinar – Recording

The recording of the webinar that I did for the Gameful folks has been posted – it’s available  here:

http://gameful.org/groups/gameful-webinar-series/forum/topic/gameful-webinar-%E2%80%93-sunday-february-12-2012/

We wound up with a troll in the room towards the end, who kept posting links to -erm- unsavory sites, so be careful about clicking links in the chat (the ones in the actual presentation slides are safe).  It made for a slightly odd experience.

Slides and links can be found here: http://usablelearning.com/about/presentations/leef/

Is learner motivation your responsibility?

Just had this quick interchange with Patti Shank on twitter:


This is a totally fair comment on Patti’s part — you can’t force someone to be motivated (and undoubtedly some of our disagreement stems from semantics – not that THAT ever happens on twitter).  A lot of the conversation around gamification (for a heated throw down on the topic read the comments here) is about the dubious and likely counterproductive effects of extrinsic rewards as motivators.  According to Alfie Kohn in his book Punished by Rewards, a big part of the problem with extrinsic motivators is that it’s about controlling the learner, not helping or supporting them.

So that part I totally agree with – you can't control your learner, or control their motivation.

But design decisions do have an impact on human behavior.  For example, this chart shows the rate at which people agree to be organ donors in different European countries:

In the blue countries, choosing to be an organ donor is selected by default, and the person has to de-select it if they do not want to be a donor.  In the yellow countries, the default is that the person will not be an organ donor, and the person has to actively choose to select organ donor status.

Now it could be that some people aren’t paying attention, but at least some of that difference is presumably due to people who do notice, but just roll with the default (you can read more about it here – scroll down to the Dan Ariely section).
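To make the mechanism concrete, here's a tiny back-of-the-envelope sketch (my own illustration with made-up numbers, not figures from the actual study): if some fraction of people simply keep whatever is preset, the default largely determines the overall rate.

```python
# Illustrative sketch (hypothetical numbers): how an opt-out vs. opt-in default
# plays out if a chunk of people just accept whatever is preselected.

def donor_rate(default_is_donor: bool, engaged_fraction: float, engaged_opt_in: float) -> float:
    """Rough model: engaged people choose for themselves; everyone else keeps the default."""
    passive_fraction = 1.0 - engaged_fraction
    passive_rate = 1.0 if default_is_donor else 0.0
    return engaged_fraction * engaged_opt_in + passive_fraction * passive_rate

# Same engaged 30% opting in half the time; only the default changes:
print(donor_rate(default_is_donor=True, engaged_fraction=0.3, engaged_opt_in=0.5))   # 0.85
print(donor_rate(default_is_donor=False, engaged_fraction=0.3, engaged_opt_in=0.5))  # 0.15
```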

So the way something is designed can make a difference in behavior.  Of course, that's not a training example, so let's take a closer look at how training might come into play.

Is it a training problem?

Robert Mager used this question as a litmus test:

“If you held a gun to the person’s head, would they be able to do the task?”

He discusses this further in his book Analyzing Performance Problems, but later uses the less graphic "could they do the task if their life depended on it?" question (Thiagi advocates for the version "Could they do it if you offered them a million dollars?" if you prefer a non-violent take).

So basically, if someone could do the behavior under extreme pressure, then they clearly know how to do it. It's not a knowledge or skills problem, and it's therefore outside the domain of training (it could be down to the person's specific motivation, it could be a workplace management issue, etc.).

Here’s where I disagree

I think the way you design learning experiences can have an impact on the likelihood of people engaging in the desired behavior, and that it is part of an instructional designer’s responsibility.  I don’t think you can control people, or force the issue, but I do think the experience they have when they are learning about something can make a difference in the decisions they make later.

There are several models that influence my thinking on this, but the two I use most often are the Technology Acceptance Model and Everett Rogers' Diffusion of Innovations.

The Technology Acceptance Model

The Technology Acceptance Model (TAM) is an information systems model that looks at what variables affect whether or not someone adopts a new technology.  It's been fairly well researched (and isn't without its critics), but I find it to be a useful frame.  At the heart of the model are two variables: perceived usefulness and perceived ease of use.

It’s not a complicated idea – if you want someone to use something, they need to believe that it’s actually useful, and that it won’t be a major pain in the ass to use.

TAM specifically addresses technology adoption, but those variables make sense for a lot of things.  You want someone to use a new method of coaching employees?  Or maybe a new safety procedure?  If your audience believes that it's pointless (i.e. not useful), or that it's going to be a major pain (i.e. not easy to use), then they will probably figure out ways around it. Then it either fails to get adopted or you get into all sorts of issues around punishments, incentives, etc.

I keep TAM in mind when I design anything that requires adopting a new technology or system or practice (which is almost everything I do).  Some of the questions I ask are:

  • Is the new behavior genuinely useful? Sometimes it’s not useful for the learner – it’s useful to the organization, or it’s a compliance necessity. In those cases, it can be a good idea to acknowledge it and make sure the learner understands why the change is being made – that it isn’t just the organization messing with their workflow, but that it’s a necessary change for other reasons.
  • If it is useful, how will the learner know that? You can use case studies, examples, people talking about how it's helped them, or give the learner the experience of it being useful through simulations.  Show, Don't Tell becomes particularly important here.  You can assert usefulness until you are blue in the face, and you won't get nearly as much buy-in as you will from letting learners try it, or from positive endorsements from trusted peers.
  • Is the new behavior easy-to-use? If it's not, why not? Is it too complex? Is it because people are too used to their current system?  People will learn to use even the most hideous system by mentally automating tasks (see these descriptions of the QWERTY keyboard and the Bloomberg Terminal), but then when you ask them to change, it's really difficult because they can no longer use those mental shortcuts and the new system feels uncomfortably effortful until they've had enough practice.
  • If it’s not easy to use, is there anything that can be done to help that? Can the learners practice enough to make it easier?  Can you make job aids or other performance supports?  Can you roll it out in parts so they don’t have to tackle it all at once?  Can you improve the process or interface to address ease-of-use issues?
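Since I tend to run through those questions anyway, here's a minimal sketch of what they might look like as a quick design-review check. The structure, scale and thresholds are my own invention for illustration; they aren't part of TAM or Davis's instrument.

```python
# A rough, hypothetical TAM-flavored design check: rate the two core variables
# and get back prompts for where the learning design needs to do extra work.

from dataclasses import dataclass

@dataclass
class AdoptionCheck:
    perceived_usefulness: int   # 1 (seems pointless to the learner) .. 5 (clearly valuable)
    perceived_ease_of_use: int  # 1 (major pain to use) .. 5 (effortless)

    def design_prompts(self) -> list[str]:
        prompts = []
        if self.perceived_usefulness <= 3:
            prompts.append("Show usefulness: case studies, peer endorsements, try-it simulations.")
        if self.perceived_ease_of_use <= 3:
            prompts.append("Reduce friction: practice time, job aids, phased rollout, fix the interface.")
        return prompts or ["Adoption risk looks low on both variables."]

print(AdoptionCheck(perceived_usefulness=2, perceived_ease_of_use=4).design_prompts())
```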

Everett Rogers’ Diffusion of Innovations

The other model I find really useful is from Everett Rogers' book Diffusion of Innovations.  If you haven't read it, go buy it now.  Yes, NOW.  It's actually a really entertaining read because it's packed with intriguing case studies.
It's loaded with useful stuff, but the part I want to focus on right now is his list of characteristics that affect whether a user adopts or rejects an innovation:
  • Relative Advantage – the degree to which an innovation is perceived as being better than the idea it supersedes
  • Compatibility – the degree to which an innovation is perceived to be consistent with the existing values, past experiences and needs of potential adopters
  • Complexity – the degree to which an innovation is perceived as difficult to use
  • Trialability – the opportunity to experiment with the innovation on a limited basis
  • Observability – the degree to which the results of an innovation are visible to others

There is obviously some crossover with TAM, but if I'm designing a learning experience for a new system, I use this as a mental checklist:

  • Are the learners going to believe the new system is better?
  • Are there compatibility issues that need to be addressed?
  • Can we do anything to reduce complexity?
  • Do the learners have a chance to see it being used?
  • Do the learners have a chance to try it out themselves?
  • and, how can we give them the opportunity to have some success with the new system?

Now, if somebody really, really doesn’t want to do something, designing instruction around these elements probably isn’t going to change their mind (Patti’s not wrong about that).  And if a new system, process or idea is really sucky, or a pain in the ass to implement, then it’s going to fail no matter how many opportunities you give the learner to try it out.

But here’s the thing – I can design a training intervention that can teach a learner how to use a new system/concept/idea, which could meet the Mager requirement (they could do it if their life depended on it), but I will design a very different (and I think better) learning experience if I consider these motivation factors as well.

I don’t want to take ownership of the entire problem of motivating learners (waaaaaay too many variables outside of my scope or control), but I do believe I share in the responsibility of creating an environment where they can succeed.

And bottom line, I believe my responsibility as a learning designer is to do my best to motivate learners by creating a learning experience where my learners can kick ass, because in the words of the always-fabulous Kathy Sierra, kicking ass is more fun (and better learning).

—————————————

References

Davis, F. D. (1989). "Perceived usefulness, perceived ease of use, and user acceptance of information technology." MIS Quarterly 13(3): 319–340.

Johnson, E. J., & Goldstein, D. G. (2003). "Do defaults save lives?" Science 302: 1338–1339. Available at SSRN: http://ssrn.com/abstract=1324774

Mager, R. F., & Pipe, P. Analyzing Performance Problems: Or, You Really Oughta Wanna – How to Figure Out Why People Aren't Doing What They Should Be, and What to Do About It.

Rogers, E. M. Diffusion of Innovations.

Why “Clear and Easy to Understand” can be bad

So, as an instructional designer, part of my job is to make things clear and easy to understand, right?

Well, it turns out that’s not necessarily the best option.

Cathy Moore just put up a blog post that has her checklist for evaluating your own e-Learning design.  You rate where your learning falls on a continuum.  In particular, I noticed this item:

This isn't a new idea, but it's a particularly powerful one — use consequences instead of disembodied-voice-of-the-eLearning-gods-type feedback.

Sure, the “correct/incorrect” feedback may be easier to understand or have no possibility of misunderstanding, but it’s a disservice to your learners (and not just because it’s boring).

It's been resonating with me particularly because of something that was said in this podcast on Show, Don't Tell for fiction writers (mp3 here) from Storywonk:

“The difference is that in Telling there’s absolutely no role for the viewer or the  reader to put anything together…In Showing, the viewer has a chance to put two things together…it’s giving them the opportunity to put stuff together themselves and to actually be active in the story…”

“It’s so much more engaging as an audience member… if I am left to put stuff together myself and not have it all assembled for me and handed in front of me that this is the way it is.” 

"You need to give your…readers stuff to do.  Give them a way to be an active participant, and by allowing them to draw conclusions based on little clues that you leave, you engage them in the story and they become part of [it]…"

– Lani Diane Rich (aka Lucy March)

(emphasis and any transcription errors are mine)

I thought that was a really interesting take on the issue.  From a learning point of view, reading a text is considered one of the more passive ways to learn, but your text can be really passive (let me just hand everything to you) or it can be made more active through showing rather than telling.

I think this matches up really well with the point that Cathy is making.  It’s one thing to say “It’s really important for health care practitioners to wash their hands” and entirely another to uncover the fact that a horrible staph infection is threatening vulnerable patients in the hospital.

A little friction is necessary for learning, and making something too easy to understand can actually do your learners a disservice. I just saw this fascinating critique of the Khan Academy videos (found via the Action-Reaction Blog).  In it, Derek Muller explains that the "easier to understand" version of a science video had worse outcomes:

Learners who heard “clear and easy to understand” explanations did worse than students who were confused by discussions of misconceptions.  In fact, learners from the “clear and easy to understand” camp frequently thought they’d understood when they hadn’t (watch the video – it’s really really good).

This goes along with the incredibly interesting study that came out a few months ago that looked at this question (I paraphrase):

What’s the best way to study for a test?

a) Read the text 

b) Read the text in consecutive sections

c) Create a concept map of the material (described in the NY Times as “arrang[ing] information from the passage into a kind of diagram, writing details and ideas in hand-drawn bubbles and linking the bubbles in an organized way”).

d) Retrieval practice (a free-form essay test followed by re-reading, and a second test).

Dave Ferguson has a good write-up of this study with a link to the actual paper, but the test-taking condition beat the others hands down.  They were even better at concept-mapping the material a week later than the students who had actually been part of the concept-mapping group. The researchers speculate that it's partially because the learners were forced to confront their own knowledge gaps and reconcile them, rather than just recognizing the material and assuming they knew it.

Another interesting perspective on this is from this study: Making sense of discourse: An fMRI study of causal inferencing across sentences

Subjects were shown sentence pairs.  Some of the sentence pairs went together very easily (x obviously causes y), some required some interpretation to see the connection, and some were pretty unrelated.  For example:

Main sentence: “The next day his body was covered in bruises.”

That sentence was preceded by one of these statements:

    • “Joey’s brother punched him again and again.” (highly causally related – x obviously caused y)
    • “Joey’s brother became furiously angry with him.” (intermediately causally related – you’ve got to read between the lines a little)
    • “Joey went to a neighbor’s house to play.” (pretty much unrelated)

The subjects spent the most time on the middle sentences — they were related, but the subjects had to connect some dots to see the connection.  The study also saw a greater degree of brain activation in many areas for those sentences, and they were better remembered later.

Semi-gratuitous Pretty Brain Pictures

So, in the end, there appears to be something really beneficial about having learners wrestle with the material a bit and draw their own inferences — you need to have a certain amount of learning friction.

I'm not arguing that you should make things deliberately obtuse (there's a difference between challenging and confusing), but if learners can connect the dots too easily, they don't retain the learning as well (or, as Derek Muller points out, they may think they DO know when they really don't).

And if you can create opportunities for learners to confront their own assumptions, and help them see their own gaps, the overall results will be much better.

Whaddya all think?  And any ideas for good ways to add a little friction?

—————————————————————-

As an aside, this has interesting implications for Level 1 Evaluation (Level 1 = What was the learner reaction? Usually interpreted as "Did your learners like it?").  It suggests that a positive learner reaction ("It was clear and easy to understand!") can actually be a counterproductive measure in certain circumstances. Hmm.

References:

– Karpicke JD, Blunt JR (2011): Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping, Science 11 February 2011: 331 (6018), 772-775.

– Kuperberg GR, Lakshmanan BM, Caplan DN, Holcomb PJ (2006): Making sense of discourse: An fMRI study of causal inferencing across sentences. Neuroimage 33:343–361.

– Muller D: PhD Thesis, Designing Effective Multimedia for Physics Education, http://www.physics.usyd.edu.au/pdfs/research/super/PhD%28Muller%29.pdf

Learning Experience Digest (Come Play)

When I initially got interested in neuroscience and cognitive psychology (4-5 years ago), I was poking around the interwebs, finding articles and whatnot.  Some useful, some not, but pretty much all secondary sources.  And, let's face it, primary sources were beyond me — at the time I could no more have picked up and understood a paper written about an fMRI study than I could've one written in Latvian (mighta done better with the Latvian, actually).

Then I found Cognitive Daily [cue heavens-parting sound effect] which did a lovely job of taking peer reviewed articles and translating them into accessible write-ups that I could understand. Bliss!

Sadly, it's defunct now, but there are lots of other blogs out there in the cognitive sciences doing similar kinds of things.

There doesn't seem to be an equivalent in the learning sciences, though, and despite a love of evidence-based practice, keeping up with that evidence is more than any one person can handle.

So we are starting one.  The Learning Experience Digest.

And you should come play.  The more people involved, the  more research-y goodness for everybody.

If you are interested in either contributing or reading, hie on over to

http://hypergogue.net/2010/12/02/hit-the-stacks/

where Simon Bostock has been doing all the heavy (organizing) lifting so far.

Fraught Decisions

The last post (on the pay gap in e-Learning) was a bit of a digression in terms of topics for this blog, but it kind of wasn’t at the same time.  I don’t usually get into industry stuff, but I also think that the gender pay gap is an interesting topic from a learning point of view.

The reason I say that: it’s a complicated, messy issue, and sometimes we have to create learning experiences for complicated, messy issues.  I frequently hear from clients that they really want to teach people good, critical, decision-making skills. My heart always sinks a little when they say that, because critical thinking and decision-making are definitely skills developed over a long period of time, and not really something in which you can make a huge dent during a 2-hr e-Learning course.

If you are going to create a learning experience for something that is complicated and messy, it would probably be helpful to understand what exactly is making it complicated and messy.  One of the most useful things I’ve read on this is the discussion of fraught choices in the book Nudge by Richard Thaler and Cass Sunstein.

In the book, they discuss fraught choices – basically situations where people are least likely to be able to make good choices.  Thaler and Sunstein argue (for the most part rightly) that these choices are likely candidates for choice architecture, but more on that later.

Fraught Decisions

The identifying characteristics for fraught decisions are:

  • Benefits Now – Costs Later (and also Costs Now – Benefits Later)
  • Degree of Difficulty
  • Frequency
  • Feedback
  • Knowing What You Like

One of the book's examples of a fraught decision is saving for retirement: you have costs now, but don't see the benefits for years or decades; it's very difficult to determine what the right amount really will be, and also difficult to wade through all the fund options, tax laws and retirement plans that make little or no sense to the lay person; you probably only make these decisions once a year or so; while you do get feedback in the form of account statements, it's difficult to interpret that feedback ("Did the account go up because of something I did, or is it just the state of the market?"); and unless you are a professional or a retirement account wonk, you aren't likely to have innate likes or dislikes to guide you ("You know, I just really like the feel of no-load mutual funds.").

Unfraught Decisions

By contrast, an unfraught decision might be buying a sweater.  You buy a sweater, and get the benefit immediately.  In most cases it’s not a particularly difficult task (Go to store, decide you like the blue one, buy it).  You buy clothing fairly often — far more often than you choose retirement options.  You get pretty immediate feedback (“Why yes, this *is* a new sweater” or “Damn, this is itchy” or “What was I thinking?”), and you are likely to have innate preferences that require little cognitive load to manage (Appliqué Teddy Bears = No).

Applique Teddy Bears = No
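If it helps to see those five characteristics used as an actual checklist, here's a small illustrative sketch (my own framing, not anything from Nudge) that just flags which characteristics apply to a decision and counts them up.

```python
# Hypothetical "fraughtness" check: how many of Thaler and Sunstein's five
# characteristics apply to a given decision?

FRAUGHT_CHARACTERISTICS = [
    "benefits and costs separated in time",
    "high degree of difficulty",
    "low frequency (little trial-and-error learning)",
    "poor or ambiguous feedback",
    "hard to know what you like",
]

def fraughtness(flags: dict[str, bool]) -> int:
    """Count how many of the five characteristics apply to a decision."""
    return sum(1 for c in FRAUGHT_CHARACTERISTICS if flags.get(c, False))

retirement = {c: True for c in FRAUGHT_CHARACTERISTICS}   # all five apply
sweater = {c: False for c in FRAUGHT_CHARACTERISTICS}     # none apply

print("Saving for retirement:", fraughtness(retirement), "of 5")  # 5 of 5
print("Buying a sweater:", fraughtness(sweater), "of 5")          # 0 of 5
```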

So, let’s go back to our salary negotiation example.

Let’s say you’ve been tasked with designing learning for “How to Handle Your Salary Negotiation.”  I think we can all agree that this meets the definition of a fraught decision:

  • Time lag between benefits and costs – Some of the benefits/costs happen immediately, but much of both those benefits and costs will follow you for years in your career.
  • Degree of difficulty – it’s very difficult to determine what to ask for — there are all the variables of the field, your qualifications, the position itself, the employer’s situation.
  • Frequency – Unless you are changing jobs a LOT, you do not do this frequently (I’ve only done it twice in the last ten years).
  • Feedback – knowing what you should ask for is like The Price Is Right game — get as close as you can without going over, and you pretty much never get precise feedback ("You know, we're glad to have you on board! And, by the way, you could have gotten another four grand and an extra week's vacation if you'd pushed a little bit more!").
  • Knowing what you like – You think you know what you like (“More!“), but it’s more complicated than that.

So, what are the implications for the design of instruction?

Thaler and Sunstein’s book isn’t about learning design — they talk about choice architecture.  For fraught decisions, how can you structure the choice (the options and defaults) so that people can make the best possible decisions?  Quite frankly, I would love to hear their ideas about choice architecture to address the gender pay gap.

But learning designers may or may not be able to influence that choice architecture, so what can they do instead if their subject matter is seriously fraught?  A few possibilities:

  • Time lag between benefits and costs – this one is tough, because there’s a lot of wiring that works against us.  While one of the human brain’s killer apps is specifically the ability to defer a benefit now for later gain, we still aren’t all that good at it (see this and this and this). Instructional Solutions: If you really want people to consider the now vs. later, you need instruction that speaks to the affective self (e.g. storytelling with emotional impact or scenarios with consequences), or tools that help them envision that future state (e.g. projection or modeling tools or simulations).
  • Degree of difficulty – I think this is the only one of the characteristics that standard instructional design addresses at all well.  Instructional Solutions: There are a lot of options (helping people create mental models, breaking content down into manageable chunks, job aids, etc.), but one good resource is Jeroen van Merrienboer's book on Complex Learning, which goes into great detail on this topic.
  • Frequency – lack of frequency means that real-world trial and error learning is pretty much out, but you can put it into your learning design.  Instructional Solutions: Practice.  Lots of practice.  Multiple practice scenarios with as much context as you can possibly muster. You also want to consider distributing some of that practice over time (see the Spacing Implementation Quick-Audit available here, and the small scheduling sketch just after this list).
  • Feedback – in a lot of situations, the difficulty with this one isn’t actually designing instruction for it — you can create scenarios with real-world consequences pretty easily.  The difficulty here can be ensuring that your data is good — empirically speaking, are these actually the consequences people will encounter in the real world? The temptation here is to design to the ideal solution – this is how you want the outcome to happen. But can you back those outcomes up through research or observation of high performers?  Instructional Solutions: Get real data about the consequences whenever possible, and use it to inform the content and design of your learning scenarios.
  • Knowing what you like – When a topic is too esoteric for there to be an innate pull in one direction or the other, how can you help people develop helpful instincts or preferences? You might ask why it matters, but well-informed liking and preference can be very useful shortcuts in fraught decision-making. Instructional Solutions: There are a number of ways to help people develop a sense of preference, such as exposure to lots and lots of examples or embedding the information in stories. Also, decision-making job aids can act as a stand-in shortcut when preference is absent.
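As promised above, here's a small sketch of what distributing practice over time might look like in the simplest possible form: an expanding-interval schedule where each practice session lands a bit further from the last. This is my own illustration with arbitrary gaps, not the Quick-Audit mentioned in the list.

```python
# Hypothetical expanding-interval practice schedule: gaps roughly double each time.

from datetime import date, timedelta

def practice_schedule(start: date, sessions: int, first_gap_days: int = 2,
                      factor: float = 2.0) -> list[date]:
    """Return practice dates where each gap is `factor` times the previous one."""
    dates, gap = [start], float(first_gap_days)
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=round(gap)))
        gap *= factor
    return dates

for d in practice_schedule(date(2012, 3, 1), sessions=5):
    print(d.isoformat())  # 2012-03-01, 2012-03-03, 2012-03-07, 2012-03-15, 2012-03-31
```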

So, think about our salary negotiations training — how would you design training for that fraught decision*?

How would you approach it?  Which of the solutions above would be most helpful?  What else can you think of that I’ve missed?

Nothing above is revolutionary in terms of training solutions, but if you’ve ever had difficulty selling a scenario or practice-based approach to a stakeholder, this might be a useful wedge for why those approaches are necessary if you want to teach those pesky critical thinking or decision-making skills.

I've been thinking a lot lately about good matching of specific instructional techniques to specific challenges — I think that instructional design tends to wield its brush pretty broadly sometimes ("Scenario-based learning is good" is true a lot of the time, but always?), and that we need more critical thinking about matching the tools to the problem, rather than matching the solution to the available tools (I really like where BJ Fogg is going with this in behavior change, btw).

So, what experiences have you had with fraught choices?  Any strategies you would like to add?  Would love to hear from you in the comments below.

(* Don’t you like how I made this about giving you the opportunity to think it through, rather than about the fact that I’m running out of time to get this blog post done?)

One-Size-Fits-All-ism

Okay – this is just a quick post because I am procrastinating right now, but you know how sometimes you read a couple of random things together, and they really resonate off each other?

So let me tell you about the things I just read.

First was this slideshare by the marvelous Stephen Anderson:

It’s an interesting presentation on management theory, but the thing that caught my attention was this slide:

He then goes on to discuss how the context (the type of the organization, the structure, the leadership) determines what management strategy is most applicable.

This has long been an issue I have with Instructional Design – we say we are doing audience analysis (and frequently are), but we still talk about design strategies as if everybody is the same (I am not immune to this, btw).  I think we lack good tools and principles for this.

Then, the saved tweet right after that pointed to this blog post by Nick Shackleton-Jones (via @bfchirpy):

In it he makes several interesting points, but he explains that context is crucial for learning:

What became perfectly clear was that context rather than the content determined learning efficiency: if the organisation to which you belonged could give you a compelling reason to study (such as a life-altering test) then it hardly mattered whether they gave you content at all – let alone what format it was in. People, it turns out, are resourceful learners.

He goes on to explain his Affective Context Model.  Go read it – it's definitely worth the time.  And I think it's moving in the right direction – we need fewer recommendations that say "this is how you do Learning" (one-size-fits-all-ism) and more specific tools, ideas and strategies for addressing the specific contexts and abilities of learners.

Do People Learn Like Buildings Do?

So, I had this horrible job…

Years ago I had a fairly blechy job teaching GMAT prep classes.  The class met for an entire weekend (Friday night and all day Saturday and Sunday) to help prospective MBA students prepare to take the GMAT exam the following weekend.

It was a horrible job for a number of reasons (the pace, the last-minute info-cram format, the nasty windowless hotel meeting room locations, the scent of desperation in the room), but one of the biggest issues was whether or not we could actually help the students. The answer was mixed.

With a typical student we stood a decent chance of improving their Quantitative (Math, Logic, Problem-Solving) scores, but we usually couldn’t make much of a dent in their Verbal scores. I’ll explain why in a moment, but stop for a second and think about why that might be the case.

<Jeopardy theme music while you formulate a hypothesis>

Maybe it’s obvious…

… but it came down to the specifics of what we could teach them.  In the quantitative section, we could teach them some quickie short cuts for math problems, remind them of the geometry formulas they hadn’t seen since their sophomore year of high school, and get them used to the wacky “data sufficiency” format that shows up on the test.

These were a) information-based, b) based on activation of prior (albeit rusty) knowledge, or c) very brief skills which (in the case of the data sufficiency format) could be brought to a reasonable level of mastery in a few hours (whether they retained those skills is another matter).

In the verbal section, they needed skills like vocabulary, reading comprehension, complex analysis and reasoning.  As you might imagine, these are not skills you acquire in a weekend (try decades). There are very few quickie shortcuts that you can teach someone if the foundations of their language skills aren’t there. This was amplified by the fact that right answers in the verbal section were relative right answers (“Choose the best answer”) instead of absolute (“Choose the correct answer”) — they involved judgement calls rather than calculating to find the one correct answer.

What does this have to do with buildings?

I was thinking about all of this as I read Clark Quinn’s excellent post on Designing for an uncertain world. In it, he talks about “a pedagogy that looks at slow development over time.”

This then made me think about a presentation [ppt] that Karl Fast, an information architect friend of mine, did at the IA Summit a few years back.

He referenced Stewart Brand’s “How Buildings Learn” (links to the whole BBC series here).

The Pace Layering of Buildings by Stewart Brand

Basically, the idea is that some things change quickly (the actual contents of the room might change daily, the interior decorating might change over months to years), some things will change more slowly (the space plan and interior layout might change over years), and some things will change only very slowly (the structure and foundation might change over decades or centuries).

“The fast parts learn, propose, and absorb shocks; the slow parts remember, integrate, and constrain. The fast parts get all the attention. The slow parts have all the power.”

Stewart Brand, The Long Now Foundation

He has a similar pace layering for civilization (from Brand, S. (1999). Clock of the Long Now):

  • Fashion/art
  • Commerce
  • Infrastructure
  • Governance
  • Culture
  • Nature

So, here’s my question — what’s the pace layering of learning?

Pace Layering for Learning

Or maybe the question is what is the pace layering of knowledge?

In the GMAT course I taught, we could, at best, rearrange some furniture (and hope that it stayed rearranged until they took the test the following week).  We weren’t going to really change anything like their verbal skills – those were part of the structure and foundation.

Over the years, I've worked on a fair number of supervisory/management skills projects, and you'll bump into circumstances where someone wants a two- or three-hour course on management skills (or leadership training, which is an entity unto itself).

Okay, so the notion that you can make a significant difference in how someone manages with a 2-hour course is laughable. Of course you can't.  So what can you do?

If I think about how management skills would map to the pace layering idea (I’m not going to try to map all the levels directly):

  • Stuff (easily changeable): Specific tools, techniques, concepts & principles
  • Space Plan / Structure (moderately changeable over time): Skills and practices
  • Foundation (slow & difficult to change): Culture, core principles, people skills and personality

Which of these is really going to make the difference in how someone is going to behave as a manager or supervisor?  Remember, the slow parts have all the power.

I think there are a couple of ways this perspective could be useful:

Find a few throw pillows: What are some easy, cheap ways to make an impact?  It might be a model, a tool, a job aid, a checklist — something that is easy for your learners to implement right away, that will have an immediate impact — it won’t change their world, but it might solve a small but pesky problem. Don’t try to solve big problems with a throw pillow, though.  They may brighten the room, and be a cheap way to have an impact (and there’s nothing wrong with that), but they aren’t a substitute for the heavy lifting involved in real behavioral change.

Give them some sturdier pieces: Give them some more concrete material, but recognize that this is going to take more time – they will need to set it up, move it into place, get rid of the old piece, arrange their existing stuff around it, and get used to how it changes their current patterns. You need to make sure that you don't try to do that all at once, and recognize that there are several steps that all need to be supported, unless you want the unassembled item sitting in its box in the storage area indefinitely.

Recognize that you aren't going to change their structure:  If they have some renovations already in place, you might move them along a little, or you can help them start some planning for future changes.  This sounds easy, but actually it's really hard.  It's hard because it involves letting go of the deeply held belief that we can do major renovations in a short period of time.  We can't, and it's a waste of resources to pretend we can.  If we approach it with the longer view in mind, we can create better ways to help people, and ensure that there is a long-term plan.

Respect the Foundation: The foundation is based on bedrock like culture and personal differences.  If your structural changes aren’t going to sit well on the foundation, then you are better off changing your design, because it’s really unlikely that the foundation is going anywhere.

A couple of resources:

http://www.elearningpost.com/blog/how_buildings_learn_6_episodes_on_google_video/

http://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition

http://www.work-learning.com/learning_factors.htm#5. Spacing repetitions and practice over time

http://wiki.bath.ac.uk/display/webservices/Shearing+layers

http://en.wikipedia.org/wiki/Shearing_layers

(I’ve seen these ideas from “How Buildings Learn” show up in Information Architecture and UX circles, but haven’t seen it applied to instructional design — has anybody seen a good application of it to learning?)

Game Making for Learning

So here’s my pitch for not making learning games for students, but rather getting them to make those games for you.  It’s less work, more fun, and potentially a better learning experience for students.  Win, win, win.

First, the back story:

Okay, so I used to teach Project Management to art students.  This was a somewhat quirky undertaking.

Admittedly, these were design/visualization students who were going to go work for advertising agencies or web design houses or the like, not fine arts students seeking purer artistic truth, but it’s still a somewhat odd mix.

The nice thing about their program was that it was very practically based, and the students (almost without exception) were working on projects (websites, marketing materials, etc.) for actual (non-paying) clients at the same time that they were taking my class.

What that meant was that they did a ton of work for my class early in the semester (analysis, project plans, scope, budgets, etc.), and that by the end of the semester we were just filling in the gaps.  Also, at the end of the semester, they were furiously working to get their client projects done, having discovered (with a certain painful inevitability) the joys of scope changes, and schedule delays, and changeable clients, and so forth.

But (as a merciful gesture) I didn't want their final project to be a source of great stress. By that point, they had mostly earned their grades, and I just wanted a final project that would cause them to revisit key material, and to reflect on everything they'd learned over the semester.

So I gave them two options:

  • One: They could create a project management template that they could use on future projects, that organized all their formats, and gave everything a standard look and feel.  This required them to scrutinize all the documentation that they had done all semester, and process it into a coherent whole.
  • Two: They could create a project management game. The only requirements were that it 1) covered all 15 key topic areas from the semester, and 2) could be used to teach somebody else about project management.

On the last day of class, students would either present their template to their fellow students, or they would bring their game, and we’d play it.

And this ROCKED.

Some of the things that they came up with:

  • Project Management Chutes and Ladders (the metaphor holds surprisingly well)
  • Project Management CLUE (“I think it was the primary stakeholder in the copy room with the scope change…”)
  • A video game called “Hunt the Project Monster”
  • Any number of different varieties of board games
  • A variation on the card game Scruples that involved project management dilemmas (ethical and otherwise)
  • Project Management Jenga (again, the metaphor holds surprisingly well)
  • A Project Management race car game where remote-controlled cars would race around the track and crash into project obstacles, which they would then have to resolve
  • Project Management Twister (Seriously)
  • A Project Management memory game that involved matching problems to solutions
  • A Project Management adventure game that involved the project management issues facing trolls and elves during battle preparation.
  • Any number of trivia games, board games and team games
  • And, new this year (I no longer teach the class, but I passed it to one of my original students from several years ago and keep in touch), they have apparently added Project Management drinking games to the repertoire (although I believe the demonstrations were mercifully alcohol-free)

As I mentioned earlier, the two things that I wanted the students to achieve by doing this were:

  • Revisit key material
  • Reflect on what they’d learned over the semester

And, by and large, this assignment does those things pretty well.  Students actually have a reason to go through the material looking for key points they can use in the games, and, while I also assigned a more traditional reflection paper, I was really satisfied with the quality of reflection demonstrated by the games they built.

A couple of things about this:

  • There's a tendency towards trivia games: The project management ideas tend to show up in these games primarily as trivia questions ("If a client decides to completely change everything at the last minute, will you A) Cry B) Do a scope change or C) Set their desk on fire") – the experience would definitely be richer if the gameplay was bound up in the workings of project management itself (e.g. scope points, time points, budget points, etc. – there's a rough sketch of that idea just after this list).  EdgeofStretch was talking on twitter about concept mapping a while back, and I think that could be a useful tool for thinking about how the elements interact, and from there how to make that a game. There are also wonderful resources out there on how to conceive, design and build games.  But I think the amount of guidance and modeling needed to get the students into that mindset was beyond the scope of this particular class.  It's definitely something I would consider when using game-building as a learning tool in the future.
  • It doesn't have to be this full-blown an effort: Even if it's not practical to have students build a game (maybe you are limited by time, or maybe you are dealing with an online environment), just having your students go through the act of thinking about how they would "game-ify" a topic, and sketching out design ideas, could be a really valuable activity.  Meta-understanding, in the form of game design.
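For what it's worth, here's the kind of thing I mean by binding the gameplay to the workings of project management itself. It's a purely hypothetical sketch (not one of the student games): events drain or restore pools of scope flexibility, schedule slack and budget reserve, and the project gets into trouble when any pool runs dry.

```python
# Hypothetical sketch of project-management-as-game-mechanics, rather than trivia.

from dataclasses import dataclass

@dataclass
class ProjectState:
    scope_flexibility: int = 5   # how much scope change the project can still absorb
    schedule_slack: int = 5      # how much delay the project can still absorb
    budget_reserve: int = 5      # how much extra cost the project can still absorb

    def apply(self, event: str, scope: int = 0, schedule: int = 0, budget: int = 0) -> None:
        """Apply an event's drain (negative) or relief (positive) to each pool."""
        self.scope_flexibility += scope
        self.schedule_slack += schedule
        self.budget_reserve += budget
        print(f"{event}: scope={self.scope_flexibility}, "
              f"schedule={self.schedule_slack}, budget={self.budget_reserve}")

    def in_trouble(self) -> bool:
        return min(self.scope_flexibility, self.schedule_slack, self.budget_reserve) <= 0

game = ProjectState()
game.apply("Client changes the home page design again", scope=-2, schedule=-1)
game.apply("Team negotiates a formal scope change", scope=+1, budget=-1)
print("Project in trouble?", game.in_trouble())  # False: every pool still has slack
```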

What examples have you seen?  What could it be used for?