Context Rot: The Hidden Challenge of AI Conversations
Have you ever noticed how AI sometimes seems to get "dumber" the longer you chat with it? You're not imagining things. This phenomenon, known as context rot, is one of the most significant yet least understood challenges in using large language models (LLMs). In this article, I explore what it is, why it matters, and most importantly, how to work around it.
What is Context Rot?
Context rot occurs when AI conversations gradually lose accuracy and reliability over time. Think of it like playing a game of telephone - with each pass of information, something gets lost or distorted. In AI terms, as older information moves further back in the conversation window, the model pays less attention to it, leading to simplified, distorted, or completely forgotten earlier context.
Humans do something similar. You’ve probably been in long, rambling conversations where you had trouble remembering things from early in the chat. In AI, the problem stems from the limited size of the context window, which serves as the model’s short-term memory. As a conversation grows, older information becomes less influential, which can lead to oversimplification, distortion, or outright “forgetting” of earlier details. And because AI aims to please, chatbots will often act like they remember when they really don’t, which can lead to all sorts of problems. For example, the chatbot might change a definition or contradict something it said earlier.
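To make that concrete, here’s a minimal sketch in Python of one simple way a system might trim its context. The token budget and the drop-the-oldest policy are illustrative assumptions on my part; real systems are more sophisticated, but the effect on early messages is similar.

```python
# Minimal sketch of a sliding context window (illustrative only).
# Assumptions: a fixed token budget and a drop-oldest truncation policy.

MAX_TOKENS = 1000  # hypothetical context limit


def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(message.split())


def build_context(history: list[str]) -> list[str]:
    """Keep only the most recent messages that fit in the budget."""
    context, used = [], 0
    for message in reversed(history):  # newest first
        cost = count_tokens(message)
        if used + cost > MAX_TOKENS:
            break  # everything older than this is simply gone
        context.append(message)
        used += cost
    return list(reversed(context))  # restore chronological order


history = [f"message {i}: " + "word " * 50 for i in range(40)]
context = build_context(history)
print(f"{len(history)} messages in the chat, {len(context)} still visible to the model")
```

Run this and only the most recent messages survive: a definition you gave early in the chat simply isn’t visible to the model anymore, no matter how confidently it talks about it.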
Without going into detail, AI chatbots are people pleasers because they’re built that way. Their guiding objective is to be helpful and maintain conversational flow, which leads to confident-sounding responses even when they should be uncertain. This makes context rot’s effects worse in several ways:
Confident fabrication over honest uncertainty - Instead of saying "I'm not sure what we discussed earlier about that definition," chatbots give a plausible-sounding but potentially incorrect version to keep the conversation flowing smoothly.
Gap-filling to maintain coherence - When memory gets fuzzy, the AI invents reasonable-sounding connections between ideas rather than admitting the gaps, creating "hallucinated continuities."
Avoiding user frustration - Constantly asking users to repeat earlier information would be annoying (faculty, you know what I mean!), so the training pushes toward confident responses even when confidence isn't warranted.
Context rot would be less of a problem if chatbots were designed to say, “I’m having trouble remembering what we talked about earlier; could you remind me?” But that might lead to user frustration, so the AI doesn’t ask for reminders. It just plows along, making stuff up along the way.
Why Context Rot Matters More Than You Think
The implications of context rot go far beyond minor inconveniences. For educators, researchers, and students, it can seriously impact:
Research accuracy: Long research sessions can result in inconsistent analyses.
Writing assistance: Extended writing projects may lose coherence.
Complex problem-solving: Multi-step solutions can become unreliable.
Project planning: Later stages might contradict earlier decisions.
The big problem with context rot is that it can be quite hard to detect. AI chatbots are confident liars. They will make things up with great confidence and authority. Although they’re not really lying, since they have no concept of truth, the effect is the same. Because it feels like the chatbot knows what it’s talking about, users tend to accept what it says without question, even when it’s wrong. (I’ve known people like this. They will confidently spout complete garbage, which often fools the unaware.)
When to Watch Out for Context Rot
Awareness is the key to avoiding context rot problems. A quick rule of thumb is that longer, less focused conversations are more prone to context rot. Long conversations mean more in “memory” and more to forget. Rambling conversations that cover lots of topics also lead to more context rot because there’s less inherent coherence in the flow of the conversation. (By the way, I’m not judging. I have tons of enjoyable, productive conversations that meander from topic to topic.)
The more detailed the task, the more context rot matters. If I’m brainstorming ideas and the chatbot forgets something from early in the conversation, it may not matter. But if I’m working on a research paper and it forgets a key definition, we have problems. In short, long, detail-heavy conversations are prime ground for context rot. Each new detail and piece of information competes for the model’s attention, gradually pushing earlier details to the edge of its focus.
Again, the situation closely parallels human interactions. If I’m exploring ideas with a graduate student, it may not matter if small details get lost. But if we’re working on a detailed revision to a paper, there’s a lot to keep up with, and losing track of a key detail can be disastrous.
Here are some high- and low-risk scenarios. They’re not exhaustive, but they should give you a handle on what makes a situation risky.
High-Risk Scenarios
Long research sessions working with complex academic papers
Detailed technical documentation or coding walkthroughs
Policy analysis requiring precise recall of earlier points
Multi-stage project planning where consistency is crucial
Lower-Risk Scenarios
Quick brainstorming sessions
Basic Q&A interactions
Creative writing where some drift might be acceptable
Linear tasks that don't require referencing earlier context
Recognizing Context Rot
Recognizing context rot is genuinely tricky; it’s a subtle problem. But there are some warning signs. You might notice the chatbot starting to use a term in a slightly different way or giving you a summary that feels off somehow, like it’s missing something important. If the details don’t match your recollections, the problem is likely context rot. The key is to not over-trust the AI. A skeptical eye can save you a lot of trouble.
Here are a few red flags to watch for:
Definitions that drift from earlier explanations
Inconsistent summaries of previously discussed points
Contradictions with earlier statements
Overconfident but incorrect recall of earlier information
Vague references replacing specific details
Smooth but false connections between ideas
This list falls into the “easy to say, hard to do” advice category. One way to make things more reliable is to train yourself to periodically scan earlier parts of long conversations. I try to scroll back fairly frequently to make sure things are still on track. Really, the only way to reliably recognize context rot is by knowing your own work and being skeptical. Otherwise, you’re at the mercy of the chatbot.
Practical Strategies to Combat Context Rot
The best way to deal with context rot is to avoid it in the first place; as noted above, recognizing it after the fact is hard. There are two broad approaches to avoiding it: breaking up conversations and strategic reinforcement. A third approach that works really well with some chatbots (e.g., ChatGPT and Claude) is to use projects.
Imagine trying to do everything for a complex project in a single meeting. That approach is unlikely to go well. People would get tired and start to forget important details. It’s usually better to break things up into multiple meetings, often with a brief summary of earlier meetings at the beginning of each session. The same approach works well with AI. Break long tasks into more manageable chunks with a chat session for each chunk. The key to making this work is to provide a summary of prior sessions at the beginning of each new chat session. Before you end a session, simply ask the chatbot for a summary of that session. Then you can paste the summary into the next session. This works quite well for most situations.
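If you use an LLM through an API or a script rather than a chat window, the same handoff is easy to automate. Here’s a minimal sketch of the pattern; the `ask()` function is a hypothetical stand-in for whatever call your provider’s client library actually offers, not a real API.

```python
# Sketch of the "summarize, then carry forward" pattern between sessions.
# `ask(prompt)` is a hypothetical placeholder for your LLM client call.


def ask(prompt: str) -> str:
    raise NotImplementedError("replace with your provider's chat call")


SUMMARY_PROMPT = (
    "Summarize this session for a future session: key decisions, "
    "definitions, and open questions. Be specific and concise."
)


def end_session(transcript: str) -> str:
    """Close out a session by asking the model to summarize it."""
    return ask(f"{transcript}\n\n{SUMMARY_PROMPT}")


def start_session(prior_summary: str, task: str) -> str:
    """Open the next session with the previous summary up front."""
    return ask(
        f"Summary of our earlier sessions:\n{prior_summary}\n\nToday's task: {task}"
    )
```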
A similar method provides strategic reinforcement. Periodically ask the chatbot to summarize the main points, repeat key definitions, and so on; doing so pushes these important details back up in the context window. It also gives you a nice checkpoint for making sure the AI is getting things right. So you’re not only refreshing the model’s memory; you’re also evaluating the progress of the conversation.
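In a chat window, strategic reinforcement is just a prompt you paste in every so often. In code, the same idea looks like the sketch below, where the key facts and the ten-turn interval are illustrative assumptions of mine, not a prescribed recipe.

```python
# Sketch of strategic reinforcement: every few turns, re-state key facts
# and ask the model to confirm them. The interval of 10 turns is arbitrary.

KEY_FACTS = [
    "Context rot: the gradual loss of accuracy as a conversation grows.",
    "Our working definition of 'context window': the model's short-term memory.",
]

CHECKPOINT = "Restate the key definitions we're using so I can verify them."


def maybe_reinforce(turn: int, interval: int = 10) -> str | None:
    """Return a reinforcement prompt every `interval` turns, else None."""
    if turn > 0 and turn % interval == 0:
        facts = "\n".join(f"- {fact}" for fact in KEY_FACTS)
        return f"Reminder of our key facts:\n{facts}\n\n{CHECKPOINT}"
    return None
```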
If your chatbot of choice has projects, using them can be a huge help in avoiding context rot. Project functions usually allow you to put key details into project instructions or documents. This can help keep chat sessions on track. ChatGPT recently added memory to its project function, so information from one chat session can be accessed by later sessions. I haven’t tested this yet, but it sounds great in concept. You can learn more about projects in an earlier AI Goes to College article.
As a bonus, you may find it helpful to keep your own notes. Remember, your memory is less than perfect as well. Some strategic note-taking is a great memory aid.
Conclusion
Here are a few key takeaways from the article. (This is also strategic reinforcement in case you’re losing track!)
Context rot is inevitable in current AI systems but manageable.
Think of AI as a bright but forgetful assistant that needs regular reminders.
Develop habits of documenting, chunking, and verifying information.
Use multiple shorter sessions rather than one long session.
Maintain external documentation of critical information.
You can’t totally eliminate context rot, but you can manage it effectively. By being aware of the problem and taking steps to mitigate it, you can benefit from AI while avoiding most of the effects of its limited memory.
BONUS
Check out this context rot video I created with Claude and NotebookLM. It explains context rot quite well. Contact me if you want the video file.
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It's available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.