Computers are dumb, which can make your e-Learning dumb. What can you do about it?
Be Less Helpful
Over the weekend, I watched this video of a great presentation by Dan Meyer on how to Be Less Helpful to students.
It’s an hour long and well worth viewing, even if you aren’t really interested in math education. Amongst his many good points, a few stood out for me:
- Students need to know how to approach and solve messy problems
- Good questions should have information missing, so students can learn to figure out what else they need to know
- Intuition can be an excellent tool in the problem-solving toolbox, if you can learn how to use it well
- These problems don’t necessarily have a single, tidy, correct answer.
That all sounds well and good and constructivist, and is in the neighborhood of all sorts of instructional design ideas that I’ve considered and espoused for years (Problem-based Learning, Authentic Tasks, etc.), but my summary of this really doesn’t do it justice.
It was actually pretty revelatory to me — one of those view shifts that picks your head up and sets it back down at a 45-degree angle, and you don’t see things the same afterwards. Lest you think I’m overselling it, set aside time to watch the presentation. There’s also a very good explanation of his approach to these questions on his blog here: http://blog.mrmeyer.com/?p=1928
Basically, the revelation that I had was — I like right answers. I really like tidy right answers. I usually don’t ask learners questions that I don’t have a “right” answer or answers for. Even when the task is “authentic” and “embedded in context” I want there to be a right answer. And this is wrong.
Because what Dan Meyer is teaching his students is how to approach problems that don’t have right answers, which is the way most problems in the real world work. His students are learning to be okay with that, how to ask good questions, and how to approach those problems.
How does this relate to computers being dumb?
Dan Meyer does a good job of recognizing why this approach is challenging to implement in his instructor-led math classes.
And it occurred to me that if it was challenging in an instructor-led environment, it was nearly impossible in a computer-based one.
Because computers are dumb.
At best, interactions in e-Learning usually consist of recognizing a right answer (or answers). So instead of generating answers or even recalling answers, e-Learning users recognize right answers.
That’s because computers are the enemies of ambiguity (note: I’m not talking about instructor-led webinars or the like, but traditional single-user e-Learning). Instead of handling chewy, messy ambiguity, computers recognize Right Answers and Wrong Answers.
Even dressed-up multiple choice questions are still multiple choice questions.
The other day, I was creating a quick e-Learning tutorial, and the right answer to a question was $1,244. If I wanted the computer to be at all flexible about what the user could type in, I also had to tell the computer (in tedious detail) that 1,244 / 1244 / 1,244.00 / 1244.00 / $1244 / $1244.00 and $1,244.00 were acceptable answers.
See? Computers are dumb.
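To be fair, a little scripting can take the sting out of that particular example. Here’s a minimal sketch of the kind of input normalization I wish the tool had done for me (the helper names are my own invention, not part of any real authoring tool):

```python
import re

def normalize_currency(raw):
    """Strip dollar signs, commas, and whitespace, then parse as a number.
    Returns None if what's left still isn't numeric."""
    cleaned = re.sub(r"[$,\s]", "", raw)
    try:
        return float(cleaned)
    except ValueError:
        return None

def is_correct(raw, expected=1244.00):
    """Accept any entry that normalizes to the expected amount."""
    value = normalize_currency(raw)
    return value is not None and abs(value - expected) < 0.005

# All seven variants from the anecdote now count as one right answer:
for entry in ["1,244", "1244", "1,244.00", "1244.00", "$1244", "$1244.00", "$1,244.00"]:
    assert is_correct(entry)
```

Of course, most rapid e-Learning tools won’t even let you do that much, which is rather the point.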
I know that technically, it’s the software that’s dumb, and maybe if you are funded by Massive Educational Foundation and have access to some pretty impressive technology resources, you can explore things like natural language recognition (so e-Learning students could type complex essay-question answers), but that’s not a practical solution for most e-Learning projects. Even more so if you are constrained by “rapid” e-Learning tools.
So, how do you reconcile the fact that while there can be tremendous value in ambiguous, messy problems, computers don’t even like figuring out what decimal place the learner might be rounding to?
We can dress it up in all sorts of different ways, but in the end it’s *very* difficult to insert any ambiguity into the process.
So, what can we do about it?
If you don’t have access to programming resources to make interesting, complicated learning games, here are five suggestions to at least increase the level of ambiguity in your e-Learning:
————————————————————————–
1) Have lots of options. Lots and lots:
This is a pretty blunt-force solution, but it means that your learners may need to employ strategies other than just browsing through the options until they recognize something that seems familiar or right. It also disguises the “rightest” answers (learners have years of experience answering these kinds of questions, and know tricks for guessing and short-cutting the process).
Ways to enhance this:
- Use the same lengthy set of answers for multiple scenarios – this helps turn off learners’ “guessing” behaviors (there’s a rough sketch of this idea after the list below)
- Layer sets of questions (e.g. follow “What’s the Problem?” with “What clarifying questions could you ask?”)
- Use a relevant document as the answers (e.g. “Which section/paragraph of the ethics manual addresses this issue?”)
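If you do have a little scripting at your disposal, here’s a rough sketch of what a shared answer pool with layered questions might look like (the scenario names and structure are invented placeholders, not any particular tool’s format):

```python
# One long answer pool that every scenario draws from, so learners can't
# pattern-match their way through a short, familiar-looking list of options.
SHARED_ANSWERS = [
    "Escalate the issue to a manager.",
    "Acknowledge the customer's frustration.",
    "Quote the relevant section of the policy.",
    "Offer a partial refund.",
    "Ask when the problem first appeared.",
    # ...dozens more, reused verbatim across every scenario...
]

# Layered questions: each scenario asks "What's the problem?" and then
# follows up with "What clarifying questions could you ask?"
SCENARIOS = {
    "incorrect_charge": {
        "problem": "What's the problem?",
        "follow_up": "What clarifying questions could you ask?",
        "best_choices": {1, 4},  # indexes into SHARED_ANSWERS
    },
    "locked_account": {
        "problem": "What's the problem?",
        "follow_up": "What clarifying questions could you ask?",
        "best_choices": {0, 2},
    },
}
```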
————————————————————————–
2) Don’t use wrong answers
The standard format for a multiple choice question is three wrong answers and one right answer:
- “I’m sorry ma’am, but that’s the policy.”
- “Only a manager can do that.”
- “Ma’am, I understand why you would be upset about an incorrect charge.”
- “Of course, we’ll take care of that right away!”
We’ve all taken those quizzes, right? The ones where you are pretty sure you can answer all the questions with common sense, without looking at any of the materials. Instead, try writing the question with no wrong answers at all:
- “I can’t reverse that charge myself, ma’am, but I will find out what I need to do to help you with this.”
- “I’m sure there’s a way we can help you. I just need some additional information.”
- “Ma’am, I understand why you would be upset about an incorrect charge.”
- “How frustrating that must have been for you! We’ll take care of that right away!”
Now, to answer the question correctly, you might need to know that the training was about using validation to defuse anger.
Ways to enhance this:
- Use “None of the above”:
- “I can’t reverse that charge myself, ma’am, but I will find out what I need to do to help you with this.”
- “I’m sure there’s a way we can help you. I just need some additional information.”
- “Ma’am, I understand why you would be upset about an incorrect charge.”
- “How frustrating that must have been for you! We’ll take care of that right away!”
- None of the above
“None of the above” doesn’t take the number of possible answers from 4 to 5. It takes the number of possible answers from 4 to an infinite number of options. (This is something I learned from the estimable Will Thalheimer — you can get more wisdom on designing good questions, and other things learning-related, at his site.)
————————————————————————–
3) Use Self-Evaluation
Your learners are infinitely smarter and more subtle than any computer. Let them be the judge of their own performance.
So how do you know if they got the right answer? The same way you know somebody didn’t guess on a multiple choice question: You Don’t. And that’s okay, because you have to trust your learners sometime.
Ways to enhance this:
- Have them compare their answers to a “right” answer, or even better, multiple “right answers.”
- Don’t give them a right answer, but ask complex questions, and create discussion areas to address them (here’s an example of good questions — sorry about the font size)
Okay, so far, these are all things that can be implemented in any reasonable rapid e-Learning tool, without much programming assistance. But let’s say you do have some programming resources available. That opens up your options considerably, but here are just a few things you might want to consider:
————————————————————————–
4) Points-based answers
Since we are trying to get away from Right and Wrong, if you use points as the feedback for answers, you can have answers that are a little bit right or wrong, a lot right or wrong, or completely neutral. You can also have answers that are right in some ways (lowering the customer’s frustration level), but very wrong in others (the wrong procedure eventually backfires and angers the customer even more). There’s a rough sketch of this after the list below.
- You can also base your outcomes on points, and have excellent, good, mediocre and bad outcomes based on points or points plus other variables (you made all the right procedural choices, but the customer is still unhappy and takes her business elsewhere).
- This gets you a little closer to games, which can have marvelous complexity and ambiguity.
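Here’s a minimal sketch of what points-based answers could look like in code (the axes, point values, and outcome thresholds are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    rapport: int = 0    # effect on the customer's frustration level
    procedure: int = 0  # how sound the underlying process is

@dataclass
class Scenario:
    rapport: int = 0
    procedure: int = 0

    def choose(self, answer):
        self.rapport += answer.rapport
        self.procedure += answer.procedure

    def outcome(self):
        if self.rapport > 0 and self.procedure > 0:
            return "Excellent: issue resolved, customer retained."
        if self.procedure > 0:
            return "Good: right steps, though the customer leaves lukewarm."
        if self.rapport > 0:
            return "Mediocre: customer calmed, but the wrong fix backfires later."
        return "Bad: wrong procedure and an angrier customer."

# An answer can be right in one way and very wrong in another:
soothing_but_wrong = Answer("Of course, we'll reverse that right away!", rapport=2, procedure=-2)
by_the_book = Answer("I'm sorry ma'am, but that's the policy.", rapport=-1, procedure=1)

s = Scenario()
s.choose(soothing_but_wrong)
s.choose(by_the_book)
print(s.outcome())  # rapport=1, procedure=-1: the customer is calmed, for now
```

The nice part is that no single answer is ever flagged Right or Wrong; the judgment emerges from the accumulated totals.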
————————————————————————–
5) One more thing
So there are a number of ways to insert ambiguity into e-Learning, but here’s the thing — these all kind of suck. Seriously, they are, at best, kludgey workarounds for the dumbness of computers. Many of these still have elements of recognition rather than recall or generation, and none of them get anywhere close to having the kind of complexity that Dan Meyer was talking about.
So if computers can’t really provide the kind of ambiguity and complexity that we’d like to see, what can?
Other people.
Don’t just have your learners interact with the computer — have them interact with other people. There are lots of ways to do this kind of Social Learning, and I’m not going to get into it here, as this post is already long enough, and there are lots of smart people working on this elsewhere.
So, how do you put ambiguity and/or complexity into your e-Learning? Do you have ways to make computers not-so-dumb?
(You might notice that branching scenarios are not on this list. While they are the most common format for simulations, they are also essentially strings of multiple-choice questions, so I chose not to include them here.)
————————————————————————–
This is exactly my problem with the over-extension of cognitive load theory. In order to reduce cognitive load, there has to be one right answer.
But I have the opposite problem to you. I don’t like right answers. Reactance is my biggest cognitive bias.
Donald Clark (@iOPT) was Tweeting about this last week. eLearning is most effective when it’s a mixture of the more procedural stuff you show above and social/emergent/messy learning.
I haven’t seen eLearning that deals well with ambiguity. But I have played a few thought-provoking games recently.
EchoBazaar manages to unfold a story and distribute different parts of the story to different people – stories deal with ambiguity well and I think eLearning can be a good ‘primer’/social object for later face-to-face interaction.
Every Day the Same Dream, We the Giants and Silent Conversation are also good. They work because they’re crude. Crude is less helpful; it’s half-baked. People find it easier to comment on things that are half-baked. All of these games have simple mechanics and could be adapted for the above situations (hmmm, not sure about We the Giants, but the other ones, for sure).
I guess I’m saying, you can deal with ambiguity by playing simple games. That are slightly mysterious. But that give people an incentive to get together later and discuss.
Yeah — bottom line is that I think you can inch it forward in standalone e-Learning, but can’t really address messy issues without other people involved.
Btw — I was fully prepared for you to take this post as further evidence that e-Learning is crap. Won’t be offended if you do 🙂
Have basically come to the conclusion personally that e-Learning is useful but limited, and that it can be a good part of blended solutions.
Still haven’t signed up for echobazaar, but swear it’s on my list.
The tragedy about eLearning is that if you were to take a profile of the person in the world most likely to love it and want to do great work in the field – it would be me.
But it leaves me cold. It’s just not the type of learning I’m interested in.
And every time somebody tells me that there’s good stuff out there they have to add, sheepishly, that it’s locked behind an LMS password.
Still, this is the year I get over myself (regarding eLearning). Next post, I’ll detail what my beefs are and then move on. You can’t moan about it as much as I do without showing how you would do it yourself.
I’ve posted a big reply to your post here: http://www.bfchirpy.com/2010/01/back-to-front-elearning-scaling-your.html
Great post, and thanks for some ideas. I think the reason we see so many multiple choice questions in eLearning is that it is just flat-out easier for the designer. It takes a lot of thought and hard work to add ambiguity and create engaging eLearning. Putting a rapid development tool in the hands of an SME will generally produce boring, multiple-choice-filled eLearning. This is where instructional designers need to separate themselves, by applying the methods you discuss here and other forms of branching tutorials rather than quickly producing an easy multiple choice question. Thanks again for the ideas.
Thanks for the comment Joe — I think you are right — there’s something about the promise of “rapid” e-Learning that implies that all parts of the process are rapid, when, if anything, rapid tools should require as much or more thought and effort in the design of the experience.
Great post. I agree with your thoughts on branching – the way it is usually designed is essentially a string of multiple choice questions, each with the same challenges and still leading to a right answer.
One challenge for approach 3 would be ensuring that your learners have the skill sets needed to successfully carry out the self-evaluation. Have you dealt with this before?
I thought approach 4 was neat, but as I thought about it more I started imagining the SME/stakeholder committee meetings that might be needed in some situations (“No, that’s definitely a minus 20, because…”). Subjectivity is so subjective sometimes…
I don’t usually use the self-evaluation without some kind of modeling and explanation up front. Will some of the learners still lack the skills to self-evaluate? Probably, and it definitely puts the responsibility on the learner (who could be typing “asdfsdfd” in the box for all we know). I also don’t think the self-evaluation works if the evaluation criteria are too subjective – they need to be concrete enough for the learner to be able to reasonably answer the question.
I think you could even model the evaluation process first — have them identify the issues in some canned responses using those criteria before they turn the lens on themselves.
Yes — I could easily see the points-based stuff turning into an SME/stakeholder nightmare! Use carefully…
I enjoyed this article, and I learned a lot of good ideas. I have been annoyed so many times by badly written test questions that force my ESL students to guess at which partly correct answer is the one the publisher had in mind. If one of the answers was clearly “best,” it would help, but I end up crediting students with correct answers regardless of their choice. It never occurred to me to assign points to “better” answers. Now I can grade those badly written questions with fair meaningful scores!
Personally I would prefer to receive answers in text boxes. It is relatively easy to fix capitalization, remove unexpected *$> characters and extra spaces, and even to correct double letters in misspelled entries before grading. Then check student word choices against synonyms of expected words, and finally assign points for expected words!
Thanks David – I’m glad it was helpful!
I have seen some successful uses of keyword recognition for text entries in e-Learning, but the range of answers has to be pretty narrow for that to work. I think your ideas speak to the fact that having an actual human evaluate and provide feedback is still a much better solution than having a computer do it, and that’s not likely to change terribly soon…
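For anyone who wants to experiment with David’s pipeline, here’s a rough sketch (the synonym table and scoring rule are invented for illustration):

```python
import re

# Hypothetical synonym table: each expected concept maps to the words a
# learner might plausibly use for it.
SYNONYMS = {
    "validation": {"validation", "validate", "acknowledge", "empathize", "understand"},
    "escalation": {"escalate", "escalation", "manager", "supervisor"},
}

def normalize(raw):
    """Lowercase the entry, strip unexpected characters (*$> and friends),
    and return the distinct words."""
    cleaned = re.sub(r"[^a-z\s]", " ", raw.lower())
    return set(cleaned.split())

def score(raw):
    """Award one point for each expected concept the learner mentions."""
    words = normalize(raw)
    return sum(1 for concept_words in SYNONYMS.values() if words & concept_words)

print(score("I would acknowledge the error, then escalate it to my manager."))  # -> 2
```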
Great post thx for sharing