Brain as Prediction Machine

So I’m really interested right now in how the brain operates as a prediction machine. Basically, one of our core brain functions seems to be guessing what is going to happen next.

I think this has some really fascinating implications for behavior change.  Humans are (in many ways) bad at risk prediction.  More people seem to be afraid of flying than driving, despite data showing that the riskiest part of any flight is the drive to the airport. We are often more afraid of things that are scary than things that are likely — sedentary behavior is far more likely than bungee jumping to injure us, but we probably wouldn’t rate sitting on the couch as riskier than jumping off a bridge attached to a giant rubber band.

Classic behaviors that are difficult to change include things like diet, exercise, smoking, texting while driving.  In workplace contexts, I might look at safety procedures or sanitary food handling.  All of these activities involve some assessment of the risk involved and some prediction of outcomes, either consciously or unconsciously.

Here are some interesting things I’ve been looking at regarding this:

How your brain hallucinates your conscious reality by Anil Seth:

How our brains use embodied simulation to construct meaning:

http://www.npr.org/sections/health-shots/2013/05/02/180036711/imagine-a-flying-pig-how-words-take-shape-in-the-brain (from Benjamin Bergen’s book Louder Than Words)


How even our vision is structured around predicting the immediate future, from Mark Changizi:

This is another explanation of how vision is a constructed function (the rest of his talk covers similar ground to the Anil Seth talk):

Here’s a closer look at the image he is describing:

Here’s a good talk on Risk Literacy from Gerd Gigerenzer:

Emily Pronin et al found that people make different choices for their future selves, and that the decisions they make for their future selves are more like the decisions they might make for other people — we essentially have a “do as I say, not as I do” relationship with our future selves:

http://journals.sagepub.com/doi/abs/10.1177/0146167207310023

Similarly, seeing pictures of your aged self can impact your retirement planning:

http://newsroom.ucla.edu/stories/the-stranger-within-connecting-with-our-future-selves

Image of the scientist and his artificially aged self

While some of this is not immediately translatable into practical applications for learning and development, it does seem that the construction of reality and prediction of the future are important parts of meaning-making and decision-making, which in turn impact choices and behaviors.


Behavior Research Links

So, I was just talking to someone interested in doing user research for behavior change, and I put together a set of links for her.  I thought it was a useful list, so I’m also posting it here:

This is a nice collection of resources about UX User Research, including a list of people to follow:  http://www.uxbooth.com/articles/complete-beginners-guide-to-design-research/

Complexity and Learning

I’m kind of obsessing about complexity theory right now (Dave Snowden’s Cynefin Model mostly), and looking at simple, complicated and complex systems. I had a lot of conversations about this last weekend, and have been thinking about it a lot.

A couple of upfront disclaimers — first, I’m just learning about this, so I don’t pretend to really understand this stuff.  It’s my interpretation, but it wouldn’t surprise me at all to know I’m getting the details wrong. Second, I’m not digging into Chaotic (for now at least). Third, there’s a much longer post on this brewing, and I have more questions than answers right now.

So — let’s apply this to the question of school testing, for example:

Simple things (with explicit rule sets) are probably fine to assess via multiple choice tests. MCQs for multiplication tables? Sure! No problem!

But complicated things (e.g. the subtleties of designing a scientific experiment) and complex things (e.g. problem-solving skills) do not have explicit rule sets, and are therefore NOT appropriate topics for really reductionistic assessment methods.

School testing models try to squeeze all the ambiguity out of the system by controlling every variable. You can do that with simple systems, and possibly with complicated ones, though it’s an insane amount of work — witness the volume of procedural documentation in the air safety or nuclear power industries, in their attempt to eliminate all ambiguity. It’s usually only justifiable when people’s lives are at stake.

But you can’t (by definition) eliminate all the ambiguity in complex systems. For example, you can teach principles for problem-solving, or a process, but how it gets implemented depends on the context, which you can’t control. That’s where teachers, with their personal judgment and ability to adapt, become really important. It’s one of the limitations of computer-based instruction.

People don’t like not having control. School testing tries to exert control by pretending that everything can be put in the simple box, so it can be measured using simple, objective measures. But it just doesn’t work.

I think there’s some real value in having a good way to assess whether you are dealing with a simple, complicated, or complex situation, and adjusting not only your assessment but also your learning design accordingly. Working on this, but if you know of anything really useful, please let me know.

A couple of good resources:

Thoughts?

Addition:  This article is a pretty perfect case study of this: “Poet: I can’t answer questions on Texas standardized tests about my own poems”

Stephen Anderson – From Paths to Sandboxes

Sat in on Karl Fast and Stephen Anderson’s Design for Understanding workshop at the IA Summit last week, and it was double-plus-good.

Here are Stephen’s slides from his IA Summit presentation.  Excellent stuff relating to autonomy in learning environments, and multitudes more:

Social Norms -or- Hey, What are they doing over there?

I’m working on a change management presentation, and have been looking at some of the social norms research – especially the practice of using messages that help people understand that the majority of the group is already doing the desired behavior.

Before I close the tabs, I thought I’d collect the most interesting links here (that’s all I have time for today!).


Wikipedia entry (which defines it, and rightly points out that outcomes are uneven for this approach) – https://en.wikipedia.org/wiki/Social_norms_approach

Environmental behaviors and social norms (This is a nice summary paper of using social norms in environmental campaigns, influencing behaviors like littering) – http://195.37.26.249/ijsc/docs/artikel/03/3_03_IJSC_Research_Griskevicius.pdf

Thermostats with social feedback (This is one of the actual papers on this pretty widely known example) – http://www.carlsonschool.umn.edu/assets/118375.pdf

Social norms and teen smoking (And feet. An interesting television commercial aimed at social norms and teen smoking) – http://nudges.org/2011/06/14/new-social-norm-campaign-on-teen-smoking-in-texas/

Social norms and tax compliance (using a general appeal vs a social norm appeal to improve tax compliance) – http://www.socialnorms.org/CaseStudies/taxcompliance.php

More social norms and tax compliance (HBR article, though you need registration/subscription to see the whole thing) – http://hbr.org/2012/10/98-of-hbr-readers-love-this-article/ar/1

Social norms and binge drinking (a write up of one of the earlier studies that looked at perceived and actual norms for college students’ drinking behaviors) – http://socialnorms.org/pdf/socnormapproach.pdf


Webcast: Using the Psychology of Games for Learning

I should have posted this a few days ago, but I’m doing a webcast tomorrow (Wednesday May 15th, 1pm ET) for ASTD on using the psychology of game design for learning.  Talking about some familiar stuff (flow, hyperbolic discounting) and a few new things (visceral feedback).  Not sure if you need to be an ASTD member to attend, but I *think* you can just sign up:

http://webcasts.astd.org/webinar/731#.UZKUcU7gd84.twitter


Virtual Chainsaws (When it’s not a knowledge problem)

Just wrote a piece for the Research for Practitioners series over at Learning Solutions Magazine on some really fascinating research at the Stanford Virtual Human Interaction Lab.  It’s crazy interesting research, and it involves virtual chainsaws, behavior change and crafty research techniques. What’s not to love in that?

Go check it out here: Research for Practitioners: When It’s Not a Knowledge Problem


Problem Statements – The much shinier version

The last blog post I wrote was about starting design with a problem rather than a solution, and it came from a conversation with Stephen Anderson about a presentation he was putting together for the IA Summit.

Here’s his presentation, and (of course) it’s great stuff: