The Ladder Pattern: An Unexpected Discovery
Have you ever asked AI for a dad joke and ended up with a ladder? Recently, I was giving a talk to students at Tennessee Tech University. They needed a cognitive break, so I told them a dad joke (read to the end for the joke*). After the appropriate groans, I asked them for their favorite dad joke. Nobody responded. So, I told them to get out their phones and ask ChatGPT. A couple of students shared their results. One of the jokes involved a ladder**. Recognizing this as a teaching moment, I said let’s give ChatGPT some context and asked for a dad joke about Tennessee Tech business school students. Here’s the joke.
Why did the Tennessee Tech business student bring a ladder to class?
Because they heard the stock market was going up!
One of the students asked, “What’s with all the ladders?” That is an interesting question. I replied that it had to do with the training data and the way large language models work, but I didn’t go into further detail.
Back in the hotel room, I started to think about this in terms of a bigger message about generative AI and LLMs. There’s considerable talk about bias in LLMs (and therefore generative AI), but other than memes about AI’s love of the word “delve,” I’m not aware of many discussions about this form of bias, which I call pattern persistence bias. Here’s a formal definition.
Understanding Pattern Persistence Bias
Pattern persistence bias is the tendency of generative AI systems to default to statistically common linguistic structures, vocabulary, and formats from their training data, even when these patterns are contextually inappropriate or unnatural, resulting in formulaic outputs that prioritize learned statistical correlations over contextual appropriateness.
This form of bias shows up in many ways: AI’s love of em dashes, overuse of certain words and phrases (e.g., moreover, unpack, furthermore), a tendency to build frameworks with three components, and the use of phrases such as “In the context of …” and “It’s important to note that ….” Even AI’s tendencies toward certain output formatting can be traced back to pattern persistence bias.
Pattern Persistence in Higher Education
Here are some ways pattern persistence bias can manifest in higher ed (beyond giving faculty a way to identify AI writing). I worked with Claude on this and really liked its response, so I’m just pasting it verbatim here.
(Verbatim from Claude 3.7 Sonnet)
Assessment Design Rigidity: An LLM asked to generate diverse assessment types might default to multiple-choice questions regardless of the subject matter, even when essay questions, project-based assessments, or oral examinations would be more appropriate for disciplines like literature or artistic performance.
Citation Format Dominance: When asked to help with academic writing across disciplines, an LLM might persistently default to APA format citations even when MLA, Chicago, or specialized formats like IEEE would be more appropriate for the field of study.
Syllabus Templating: LLMs might generate course syllabi that follow the exact same structure and components regardless of whether it's for a hands-on studio art course, a mathematics seminar, or a community-engaged service learning class—ignoring the pedagogical differences between disciplines.
Research Methodology Bias: When asked about research approaches, an LLM might consistently suggest quantitative methods and statistical analyses even for humanities projects where qualitative or hermeneutic approaches would be more appropriate.
Feedback Patterns: LLMs might fall into "sandwich feedback" patterns (positive-negative-positive) regardless of pedagogical context, even when direct assessment or Socratic questioning would be more effective for the learning situation.
Explanation Structures: When asked to explain concepts to students, an LLM might default to definition-example-application structures even when historical development, comparative analysis, or problem-based exploration would better suit the subject matter.
Academic Prose Style: LLMs often generate text that mimics formal academic writing conventions even when communicating with undergraduate students who would benefit from more accessible language or when creating materials for community outreach.
Case Study Selection: When generating examples, LLMs might persistently draw from a narrow set of familiar case studies rather than providing diverse, culturally relevant, or discipline-specific examples that better align with the course context.
One reason I wanted to paste Claude’s response is that its formatting is itself an example of pattern persistence bias. AI loves to bold the first part of lists. Some of these examples are more serious than others. I don’t care much about citation format dominance or academic prose style (since I don’t use AI to write papers), but the others are troubling. Take case study selection, for example. AI is great at suggesting or even creating case studies. The ability to quickly create a tailored fictional case study can be a huge benefit for instructors who use active, problem-based learning. But if the case studies are selected or generated from a narrow set of cases, student learning may be similarly narrow.
Practical Strategies for Mitigation
Given these potential impacts on learning, we need practical strategies to mitigate pattern persistence bias. Fortunately, the solution largely lies in how we craft our prompts. To reduce the effects of pattern persistence bias:
Include explicit requests for varied structures in your prompts.
Customize your prompts to include phrases like “provide diverse examples from multiple cultural contexts” (or industries, groups, etc. depending on your situation). You can also ask for varied assessment methods and give examples appropriate for your discipline.
View AI-generated content as a first draft that you “remix” based on your expertise and goals. AI helps you move beyond the blank page, but you put your mark on the result as well.
Use iterative refinement: ask follow-up prompts to diversify initial responses, requesting approaches or examples different from those first provided.
Let’s look at an example that employs the first two techniques. (These are from Claude as well, although I edited them a bit.)
Original prompt: Create a case study for my abnormal psychology class about anxiety disorders.
Refined prompt: Create a case study for my abnormal psychology class about anxiety disorders. Please use a non-traditional case presentation format that breaks away from typical examples. Include cultural factors that might influence the presentation and interpretation of symptoms, and avoid using the standard young college student example. Also, structure this without using academic transition phrases like 'furthermore' or 'it is important to note that.'
Once you have the result from the second prompt, you could employ the other techniques to refine the output to meet your needs.
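For readers who build their own AI tools or reusable prompt templates, the refinement technique above can also be applied programmatically. Here’s a minimal sketch in Python; the helper name `refine_prompt` and the specific diversifying instructions are my own illustration, not part of any AI library, and you’d adapt the wording to your discipline.

```python
def refine_prompt(base_prompt, discipline=None, avoid_phrases=None):
    """Augment a base prompt with instructions that push back against
    pattern persistence bias (a hypothetical helper, for illustration)."""
    additions = [
        "Use a non-traditional structure that breaks away from typical examples.",
        "Provide diverse examples from multiple cultural contexts.",
    ]
    if discipline:
        # Steer examples and methods toward the course's field
        additions.append(f"Tailor examples and methods to {discipline}.")
    if avoid_phrases:
        # Ask the model to skip its favorite stock transitions
        quoted = ", ".join(f"'{p}'" for p in avoid_phrases)
        additions.append(f"Avoid stock phrases such as {quoted}.")
    return base_prompt.strip() + " " + " ".join(additions)

prompt = refine_prompt(
    "Create a case study for my abnormal psychology class about anxiety disorders.",
    discipline="abnormal psychology",
    avoid_phrases=["furthermore", "it is important to note that"],
)
print(prompt)
```

The point isn’t the code itself but the habit it encodes: bake your anti-pattern instructions into every prompt rather than relying on the model’s defaults.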
There’s another important point here, one that I’ve written about before. Expecting AI to do all of the work for you is usually a path to disappointment and frustration. Think of AI as an assistant to help you get started or provide feedback, not as a complete substitute for your own work and thought. I call this the 50% mindset. Just use AI to get you halfway there and you’ll get better results AND be more efficient. Anything that improves both efficiency and effectiveness is OK in my view.
Moving Forward: Balancing AI's Power with Its Limitations
Pattern persistence bias isn't just about ladders in dad jokes or LLMs' love of em dashes. It's a fundamental characteristic of how generative AI works, one that has serious implications for teaching, learning, and academic work. As AI becomes more ingrained in higher education, we need to be aware of these biases and take active steps to mitigate them. Otherwise, we risk narrowing our students' perspectives and limiting their learning opportunities.
The good news is that we can address pattern persistence bias through thoughtful prompt design and careful review of AI output. By understanding this bias, we can better leverage AI’s capabilities while avoiding its limitations. The goal isn’t to eliminate AI use; that’s not going to happen, and we don’t want it to. Instead, our goal should be to use AI in ways that enhance rather than restrict learning, thinking, and creativity.
Remember: AI is a powerful tool, but it's one that requires our guidance to overcome its inherent biases. By being mindful of pattern persistence bias and taking steps to mitigate it, we can ensure that AI serves as a catalyst for broader, deeper learning rather than a force that narrows educational experiences.
* What’s brown and sticky? A stick. (my joke)
** Why don’t ladders ever get into arguments? Because they always try to take things to a higher level!
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI to develop critical thinking skills in your courses. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.