So I had an interesting exchange with Josh Cavalier the other day. Somebody was posting about AI, and I offered up my experience with the accuracy and usefulness of AI in my own work. I said that when I’ve used AI to help with things, the output has ranged from 20% to 80% useful. Sometimes I can glean a few things I can use, and sometimes I can use most of the output with tweaks.
Josh let me know he could probably get me up to 95%. I totally believe him (he’s very smart about this stuff, and my approach to prompt engineering is fairly haphazard), but what struck me about the conversation was all the places where my expertise was necessary in any of these scenarios.
The 80% scenario: I needed to vet the content and make relatively small adjustments to bring it closer to 100%.
The 20% scenario: I needed to recognize that most of the material should be discarded, identify what was useful, and revise the question I was asking to get a better result.
The 95% scenario: I would need to use my expertise to craft a prompt that gets the most accurate result, while still recognizing the parts that need revision.
At the DevLearn conference last November, I took part in a Guildmaster panel on AI, and I talked about how most AI implementation has been taking place in an interesting window: most of the people using it already know how to do their jobs.
I think this is significant because we are already moving out of that window — we will soon have people coming into jobs with AI processes in place who do not already know how to do their jobs (assuming they are not entry-level people being replaced by AI). It raises the question of whether the AI processes we are implementing right now will stand up to people who do not have the expertise I identified in the 20%-80%-95% scenarios.
I am absolutely seeing the accuracy and usefulness of the LLMs improve. I am not seeing them get to the point where we no longer need enough expertise to recognize whether the result was 80% useful or just 20%.
Diane Elkins had my favorite comment of the panel. She described how the AI results are essentially average. If someone has been performing below average (we’ve all seen it), then the AI is an amazing improvement. It’s not going to get above average, though — or at least not without the expertise we talked about.
So, yeah, AI is very helpful AND you still have to know stuff.

So absolutely true! The great news is that, with good engineering and good resources, knowing stuff often becomes much easier with AI as our learning coach. If we learn what we don’t know a little at a time and keep exposing ourselves to new things, it makes for an amazing self-directed learning environment. (And this also requires us to know stuff!)
Julie, I like the 20%–80%–95% framing. That matches my experience fairly closely. One thing I’ve noticed is that the quality of AI output often reflects the quality of the user’s mental model of the problem.
When the prompt is grounded in a clear understanding of the domain, the results improve dramatically. When it isn’t, the output tends to drift toward something that looks plausible but isn’t actually useful.
In my own work, I’ve found that pushing the conversation further – refining the question, challenging weak responses, and iterating – can turn AI into something closer to a thinking partner. But that process still depends heavily on the user’s expertise and judgment. And it takes time.
Your point about people entering roles where AI processes are already in place is the REALLY interesting one. At that point the question becomes less about generating answers and more about knowing enough to recognize when the answer isn’t quite right. That’s going to be an important SKILL for our field going forward.
I loved hearing your take in Vegas last year, both on the panel and in your personal session that was part of the 5% that didn’t involve AI or tech. We can’t leave people and our expertise out of the loop. I love AI – I use it for so much, and it keeps getting better. Some days it’s more helpful than others, but I am confident it won’t replace me. We are the sum of so much experience and knowledge that it doesn’t have. We need to realize it’s just a tool.