Brain as Prediction Machine

So I’m really interested right now in how the brain operates as a prediction machine. Basically, one of our core brain functions seems to be guessing what is going to happen next.

I think this has some really fascinating implications for behavior change. Humans are (in many ways) bad at risk prediction. More people seem to be afraid of flying than of driving, despite data showing that the riskiest part of any flight is the drive to the airport. We are often more afraid of things that are scary than things that are likely — sedentary behavior is far more likely than bungee jumping to injure us, but we probably wouldn’t rate sitting on the couch as riskier than jumping off a bridge attached to a giant rubber band.

Classic behaviors that are difficult to change include things like diet, exercise, smoking, texting while driving.  In workplace contexts, I might look at safety procedures or sanitary food handling.  All of these activities involve some assessment of the risk involved and some prediction of outcomes, either consciously or unconsciously.

Here are some interesting things I’ve been looking at regarding this:

How your brain hallucinates your conscious reality by Anil Seth:

How our brains use embodied simulation to construct meaning:

http://www.npr.org/sections/health-shots/2013/05/02/180036711/imagine-a-flying-pig-how-words-take-shape-in-the-brain (from Benjamin Bergen’s book Louder Than Words)


How even our vision is structured around predicting the immediate future, from Mark Changizi:

This is another explanation of how vision is a constructed function (the rest of his talk covers similar ground to the Anil Seth talk):

Here’s a closer look at the image he is describing:

Here’s a good talk on Risk Literacy from Gerd Gigerenzer:
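One of Gigerenzer’s central points is that people reason about risk far better with natural frequencies (“9 out of 1,000 people”) than with conditional probabilities (“a 0.9% chance”). A minimal sketch of the counting logic, using the approximate numbers from his well-known mammography teaching example (the numbers are illustrative, not taken from anything linked here):

```python
def posterior_natural_frequencies(population, prevalence, sensitivity, false_positive_rate):
    """P(condition | positive test), computed by counting people rather than
    multiplying conditional probabilities — the 'natural frequency' framing."""
    sick = population * prevalence                      # e.g. 10 of 1,000
    healthy = population - sick                         # e.g. 990 of 1,000
    true_positives = sick * sensitivity                 # sick people who test positive
    false_positives = healthy * false_positive_rate     # healthy people who test positive
    return true_positives / (true_positives + false_positives)

# Roughly Gigerenzer's classic numbers: 1% prevalence, 90% sensitivity,
# 9% false-positive rate.
p = posterior_natural_frequencies(
    population=1000, prevalence=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(round(p, 3))  # → 0.092 — most positive tests are false alarms
```

The punchline is the same one he makes on stage: even with a “90% accurate” test, a positive result usually means you’re fine, because the false alarms from the large healthy group swamp the true positives from the small sick group.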

Emily Pronin et al found that people make different choices for their future selves, and that the decisions they make for their future selves are more like the decisions they might make for other people — we essentially have a “do as I say, not as I do” relationship with our future selves:

http://journals.sagepub.com/doi/abs/10.1177/0146167207310023

Similarly, seeing pictures of your aged self can impact your retirement planning:

http://newsroom.ucla.edu/stories/the-stranger-within-connecting-with-our-future-selves

Image of the scientist and his artificially aged self

While some of this is not immediately translatable into practical applications for learning and development, it does seem that constructing reality and predicting the future are important parts of meaning-making and decision-making, which in turn shape choices and behaviors.


Behavior Research Links

So, I was just talking to someone interested in doing user research for behavior change, and I put together a set of links for her. I thought it was a useful list, so I’m also posting it here:

This is a nice collection of resources about UX User Research, including a list of people to follow:  http://www.uxbooth.com/articles/complete-beginners-guide-to-design-research/

A Habit-based Approach to Racial Bias

We all carry around implicit bias. It’s embedded in the culture, and it’s hideously obvious that it can lead to horrible, tragic results.

This is a study that has really been influencing my thinking about a habit-based approach to behavior change. The results actually show a reduction in people’s implicit racial bias. It’s remarkable and rare to change something so deeply ingrained.
I’ve been using this study as an example of a habit-based approach to behavior change, but it seems timely to talk about these actual strategies — not as an example, but as an actual opportunity to improve our own bias. 
Here’s the actual study:

Long-term reduction in implicit race bias: A prejudice habit-breaking intervention
Patricia G. Devine, Patrick S. Forscher, Anthony J. Austin, and William T. L. Cox
J Exp Soc Psychol. 2012 Nov; 48(6): 1267–1278.

Here’s what they found:
Students took the Black-White Implicit Association Test (IAT) to test their level of implicit racial bias.  This test is administered via Harvard University. I recommend you try it yourself here: https://implicit.harvard.edu/implicit/takeatest.html
Then participants engaged in five habit-based strategies to counteract their own implicit racial bias. This is important because participants watched for their own bias to show up and engaged in deliberately counteracting the incidents with one or more specific habit strategies. This gets at behavior rather than just intent.
Here are the specific strategies from the study:
  • Stereotype replacement
    This strategy involves replacing stereotypical responses with non-stereotypical responses. Using this strategy to address personal stereotyping involves recognizing that a response is based on stereotypes, labeling the response as stereotypical, and reflecting on why the response occurred. Next, one considers how the biased response could be avoided in the future and replaces it with an unbiased response (Monteith, 1993). A parallel process can be applied to societal (e.g., media) stereotyping.
  • Counter-stereotypic imaging
    This strategy involves imagining in detail counter-stereotypic others (Blair et al., 2001). These others can be abstract (e.g., smart Black people), famous (e.g., Barack Obama), or non-famous (e.g., a personal friend). The strategy makes positive exemplars salient and accessible when challenging a stereotype’s validity.
  • Individuation
    This strategy relies on preventing stereotypic inferences by obtaining specific information about group members (Brewer, 1988; Fiske & Neuberg, 1990). Using this strategy helps people evaluate members of the target group based on personal, rather than group-based, attributes.
  • Perspective taking
    This strategy involves taking the perspective in the first person of a member of a stereotyped group. Perspective taking increases psychological closeness to the stigmatized group, which ameliorates automatic group-based evaluations (Galinsky & Moskowitz, 2000).
  • Increasing opportunities for contact
    This strategy involves seeking opportunities to encounter and engage in positive interactions with out-group members. Increased contact can ameliorate implicit bias through a wide variety of mechanisms, including altering the cognitive representations of the group or by directly improving evaluations of the group (Pettigrew, 1998; Pettigrew & Tropp, 2006).
The results were successful in reducing implicit racial bias (as measured by the IAT) for the intervention group:
[Chart: IAT results from the study]
As I mentioned above, this is an exceptional result.  Traditional diversity classes often produce good intentions but little behavior change, and rarely address the deep level of unconscious bias.
Hope this is helpful. – Julie

Complexity and Learning

I’m kind of obsessing about complexity theory right now (Dave Snowden’s Cynefin Model mostly), and looking at simple, complicated and complex systems. I had a lot of conversations about this last weekend, and have been thinking about it a lot.

A couple of upfront disclaimers — first, I’m just learning about this, so I don’t pretend to really understand this stuff.  It’s my interpretation, but it wouldn’t surprise me at all to know I’m getting the details wrong. Second, I’m not digging into Chaotic (for now at least). Third, there’s a much longer post on this brewing, and I have more questions than answers right now.

So — let’s apply this to the question of school testing, for example:

Simple things (with explicit rule sets) are probably fine to assess via multiple choice tests. MCQs for multiplication tables? Sure! No problem!

But complicated things (e.g. the subtleties of designing a scientific experiment) and complex things (e.g. problem-solving skills) do not have explicit rule sets, and are therefore NOT appropriate topics for really reductionistic assessment methods.

School testing models are trying to squeeze all the ambiguity out of the system by trying to control every variable. You can do that with simple systems, and possibly with complicated ones (though it’s an insane amount of work — witness the amount of procedural documentation in the air safety and nuclear power industries in their attempts to eliminate all ambiguity; it’s usually only justifiable when people’s lives are at stake).

But you can’t (by definition) eliminate all the ambiguity in complex systems. E.g. you can teach principles for problem-solving, or a process, but how it gets implemented depends on the context, which you can’t control. That’s where teachers, with their personal judgment and ability to adapt, become really important. It’s one of the limitations of computer-based instruction.

People don’t like not having control. School testing is trying to exert control by pretending that everything can be put in the simple box, so it can be measured using simple, objective measures. But it just doesn’t work.

I think there’s some real value in having a good way to assess whether you are dealing with a simple, complicated, or complex situation, and adjusting not only your assessment, but also your learning design for that. Working on this, but if you know of anything really useful, please let me know.

A couple of good resources:

Thoughts?

Addition: This article is a pretty perfect case study of this: “Poet: I can’t answer questions on Texas standardized tests about my own poems”