
A little ChatGPT Adventure – Create an ID Curriculum

So a few weeks ago, someone posted a link to this video on one of my email lists. It's from a YouTuber who is using ChatGPT to learn things, and they shared the series of prompts they used to create a curriculum for learning to code in Python. It sounded downright amazing and magical.
But there were some inaccuracies and myths in the video that gave me pause. (In no way do I want to pick on this young person — I'm sincerely glad these tools are helpful for them, and I think their curiosity, ingenuity, and enthusiasm will serve them well.) When I brought up a few concerns, the person who posted the video suggested I use ChatGPT to create a curriculum for a topic that I know well.

Okay then — challenge accepted! I will let ChatGPT create an instructional design curriculum for me!

The results were…mixed.

A few disclaimers: Yes, I recognize that the technology is constantly improving and this is just a moment in time. I was using the current free version — I know the paid version is better, and that refining the prompts would help a lot. I deliberately didn’t adjust the prompts because a completely new person wouldn’t necessarily know how to optimize the prompts based on the output. I also recognize that I kind of expected mixed results and so there’s an element of confirmation bias. And I might just be a tad nitpicky about ID curriculums (okay — let’s be honest — very nitpicky).

I used their exact prompts for an instructional design curriculum. I commented up the full version here (Google Doc), but here's the short version. The prompts are paraphrased a bit below, but you can see the full versions in the doc:

Prompt 1: Pareto idea / Give me the 20% that is most useful where I can do 70-80% of the stuff

I have doubts about the intent behind this question, but gave it a shot. ChatGPT gave me six topics to focus on in a hacked-together short version of a curriculum, though it's honestly not what I would ever put together as a short-form, minimum viable curriculum. I also asked later for a list of essential ID curriculum topics, and got 14 items. This short version has six of the 14, arguably the top items, but left out things like Understanding Learning Theories, Needs Assessment, and Ethical and Legal Questions (including issues related to copyright, accessibility, and privacy). I don't think a responsible curriculum (however short) could completely ignore these topics. I do a lot of ID curriculum work for large companies, and the most consistent gap I hear about is needs assessment — making sure you understand the problem that needs to be solved, and whether you can solve it with training (or why it's not a training problem). Some of the weaknesses here are probably as much about the prompt itself as about ChatGPT's response.

If I were rating this prompt, I'd give it a 5 out of 10. It hit several of the big topics, but also had some glaring omissions.


Prompt 2: Create a study plan in an appropriate number of weeks

It parsed out the six items over 12 weeks (48 hours), but then only gave me two 1-hour activities for most of the items (~14 hours total), and I think in all of that, there were only two actual practice activities. The rest were just reading about each topic or looking at examples. This would be a pretty weak curriculum design, and none of the math around the hours adds up: six topics with roughly two hours of activities each comes to something like 12-14 hours, nowhere near the 48 hours the 12-week plan implies.

I’d rate this one a 2 out of 10.


Prompt 3: Suggest various learning resources for the above study plan (like books, videos, podcasts, interactive exercises)

The book references were pretty good overall. It recommended my book, so that's nice, along with several other good books. But the recommendations didn't quite line up with the topics, or would require the learner to really scrutinize the book contents to find the relevant material, and the books would not be readable in the time allotted in the study plan. The video links were all broken. I can't be 100% sure, but I'm pretty convinced they are all ChatGPT hallucinations. They are attributed to people or entities that do exist, but who don't do a lot on YouTube. There are videos on most of these topics, some of them by reputable folks, but ChatGPT did not point to any of those.

It did link to a good podcast, and one or two other reasonable resources, but this prompt did expose a big issue in instructional design. While the roots are similar, instructional design is practiced very differently in higher ed than in the workplace, so some of the books and resources are primarily useful in one domain or the other, not both.

I'd rate this one a 5 out of 10, mostly because it picked genuinely good books.


Prompt 4: Can you give me some beginner projects I could work on to help strengthen my workplace instructional design skills?

For this prompt, I specified workplace, and it's a pretty good list. Not because a beginner could actually work on most of these projects on their own (they would need a subject matter expert at the very least), but because it gave me a pretty good sample of the kinds of projects that somebody in workplace learning would actually work on, and it would give a new person a good idea of how they might be spending their time.

I'm rating this one high because it's useful even if it didn't really answer the question asked: 7 out of 10.


I stopped at that point (though there were a few add-on questions in the doc). Overall, it’s kind of what I expected (did I mention my potential confirmation bias?). I saw some genuinely useful stuff, some mediocre stuff and some utter nonsense.

This is absolutely a snapshot in time, and the results will likely improve as the technology gets better. It did emphasize for me that ChatGPT and the like will always be a lagging indicator for content production, and its propensity to make stuff up is probably a feature, not a bug.

It all has a focus on curating and parsing content for self-motivated learners that ignores the reality that content is the relatively easy part of education. Feedback is the hard, expensive part. The potential use of LLMs for education that actually excites me is the idea that we could use them to give feedback at scale. MOOCs could put content out to millions, but they couldn't grade a million term papers.

I’m not sure what we would need to do to feel confident about LLMs providing student feedback, but it’s an interesting avenue to explore.

Final comment: I promise I don’t hate AI/LLMs

I didn’t do this because I hate AI or because I don’t believe AI/LLMs will be transformative. I do believe they will be transformative. But I’m also seeing a lot of “You can create XYZ in seconds!!” rhetoric out in the world, and I don’t think that’s helpful either. Are these tools amazing? Yes they are. Are they fully ready for prime time in all cases? Maybe not quite yet.
