I asked a friend once: “If I told an LLM a secret today, would it remember me tomorrow?” She laughed and said, “Only if you put the secret inside its glass window.” That joke stuck, because it’s basically true. This post tells that small truth as a short story of mirrors, goldfish, diaries, and the math that makes them tick.
Scene 1 — The Uncanny Mirror
You sit across a café table from a smart speaker. It answers like a person: it references your joke from
last week, finishes your sentence, even gets your sarcasm. You feel seen. But the mirror is only
reflecting patterns it learned, not holding a life that touches yours. It’s convincing, uncanny — like a
mirror that smiles back when you blink.
LLMs simulate memory. They don’t replay an experience; they reconstruct language patterns that best
match the prompt. That reconstruction is why they can feel human — but it’s not continuity.

Think of it like this: humans chunk, compress, and choose what to recall. LLMs compute weighted mixtures of tokens inside a sliding window. The result often looks like remembering — but it’s an echo, not an autobiographical memory.
Scene 2 — Why the Mirror Behaves That Way (Simple Math and a Sliding Window)
Imagine the model has a sliding glass pane over the conversation: whatever’s inside the pane it can “attend to”; everything outside is invisible. The transformer’s attention weight between tokens 𝑖 and 𝑗 is:

$$\alpha_{ij} = \frac{\exp\left(q_i \cdot k_j / \sqrt{d_k}\right)}{\sum_{j'} \exp\left(q_i \cdot k_{j'} / \sqrt{d_k}\right)}$$

where 𝑞ᵢ is token 𝑖’s query vector, 𝑘ⱼ is token 𝑗’s key vector, and 𝑑ₖ is their dimension. Crucially, the sum runs only over tokens inside the window: a token outside the pane gets no weight at all, no matter how important it was.
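To see the pane in code, here’s a minimal NumPy sketch (toy vectors, a hypothetical window of 3 tokens): every token outside the window is masked to minus infinity before the softmax, so its weight comes out exactly zero.

```python
import numpy as np

def attention_weights(queries, keys, window):
    """Scaled dot-product attention weights with a hard sliding window.

    queries, keys: (n_tokens, d_k) arrays; window: how many of the most
    recent tokens each position is allowed to attend to.
    """
    n, d_k = queries.shape
    scores = queries @ keys.T / np.sqrt(d_k)       # similarity of every pair
    i, j = np.indices((n, n))
    visible = (j <= i) & (j > i - window)          # the sliding glass pane
    scores = np.where(visible, scores, -np.inf)    # outside the pane: -inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    return weights / weights.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
q, k = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
print(np.round(attention_weights(q, k, window=3), 2))
# Each row mixes at most 3 tokens; everything older gets weight 0.0.
```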
Quick Question for You?
Imagine you’re chatting with an LLM. You give it this sequence inside a context window: A = 5, B = 10, A + B = ? It answers: 15. Now scroll the window forward so the “A = 5” part slides out, and ask again: A + B = ? What happens? A human still says 15 (we remember). But what about LLMs? Once “A = 5” leaves the pane, there is nothing left to attend to; the model can only guess.
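Here’s the same slide-out in miniature, with Python’s deque standing in for the window (real windows count tokens, not conversation turns):

```python
from collections import deque

# Toy context window: the model "sees" only the last 3 turns.
window = deque(maxlen=3)

for turn in ["A = 5", "B = 10", "A + B = ?"]:
    window.append(turn)
print(list(window))  # ['A = 5', 'B = 10', 'A + B = ?'] -> it can answer 15

window.append("Thanks! Once more: A + B = ?")
print(list(window))  # 'A = 5' has slid out; nothing left to attend to for A
```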
Scene 3 — Memorization vs Reasoning (A Tiny Experiment)

Suppose you train a model on 10,000 puzzle-answer pairs. It learns to output correct answers for those exact puzzles — perhaps by memorizing. Researchers measure this with a simple intuition: if the model aces the exact puzzles it trained on but stumbles on lightly perturbed versions of them, it memorized; if it solves the perturbed versions too, it is reasoning (Xie et al., 2024).
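In code, that intuition might look like the toy gap below (a sketch, not the exact metric from Xie et al., 2024; `model` is any question-to-answer function and `Puzzle` is a hypothetical container):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Puzzle:
    question: str
    answer: str

def accuracy(model: Callable[[str], str], puzzles: Sequence[Puzzle]) -> float:
    """Fraction of puzzles the model answers correctly."""
    return sum(model(p.question) == p.answer for p in puzzles) / len(puzzles)

def memorization_gap(model, seen, perturbed) -> float:
    """Toy signal: a model that aces the puzzles it trained on (`seen`)
    but stumbles on lightly perturbed copies of them (`perturbed`) is
    likely memorizing; a large positive gap is the tell."""
    return accuracy(model, seen) - accuracy(model, perturbed)
```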
Another Question for You?

On an island: Knights always tell the truth. Knaves always lie. You meet two people. Alex says: “Blair is a Knave.” Blair says: “Alex and I are both Knights.” Who’s who? Take 30 seconds to solve.

(Answer: Alex is a Knight, Blair is a Knave.)

Now imagine an LLM trained on 10,000 such puzzles. Will it answer because it reasons, or because it has already seen the puzzle?
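For contrast, here is the reasoning itself made explicit: a four-row truth-table sweep (a sketch, with booleans standing in for Knight/Knave):

```python
from itertools import product

def label(is_knight: bool) -> str:
    return "Knight" if is_knight else "Knave"

# Try every Knight/Knave assignment; keep those where each speaker's
# statement is true exactly when that speaker is a Knight.
for alex, blair in product([True, False], repeat=2):  # True = Knight
    says_alex = not blair            # Alex: "Blair is a Knave."
    says_blair = alex and blair      # Blair: "Alex and I are both Knights."
    if alex == says_alex and blair == says_blair:
        print(f"Alex: {label(alex)}, Blair: {label(blair)}")
# Prints the unique solution: Alex: Knight, Blair: Knave
```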
Scene 4 — Diaries, Librarians, and Patchy Memory
If a human has to remember something important, they write a note or keep a diary. For LLMs, we have two practical tricks:

- Retrieval (RAG) — hook the model up to an external searchable diary. When asked, it looks up the fact and quotes it (see the sketch after this list). Good when retrieval works; brittle when it doesn’t.
- Continued training / memory-conditioning — teach the model new facts by showing it the old context plus the update, so it can associate the two. This helps the model surface the right facts later, but still doesn’t guarantee consistent reasoning across all contexts (Li & Goyal, 2025).

In other words: if you want true long-term memory, you need both a diary and a librarian — storage plus a system that knows when to consult it.
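A minimal sketch of the diary-plus-librarian idea (hypothetical notes; word-overlap scoring stands in for a real vector index, and the final LLM call is left as a stub):

```python
diary = [
    "2024-03-01: My cat is named Miso.",
    "2024-03-14: I moved to Berlin.",
    "2024-04-02: My favourite tea is genmaicha.",
]

def retrieve(query: str, notes: list[str], k: int = 1) -> list[str]:
    """The librarian: return the k notes sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(notes, key=lambda n: -len(q & set(n.lower().split())))[:k]

def answer(query: str) -> str:
    """Stuff the retrieved note into the prompt; a real system would call an LLM here."""
    context = "\n".join(retrieve(query, diary))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(answer("What is my cat named?"))  # retrieves the Miso note
```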
Conclusion
To summarise: LLMs mirror patterns; they do not store lived episodes the way humans do. The context windows we build serve well as short-term memory, but cost and dilution limit them. And, to be honest, memorization is not reasoning: it’s the difference between a student who just mugs things up and one who actually thinks. But memorization often scaffolds reasoning.
Reach out to me with reviews or feedback at tathagata2403@gmail.com.
🔗 Further Reading
- Xie, Y., Xu, Z., Zhang, W., Ma, Z., Liu, S., Yu, Z., Liu, H., & Chen, B. (2024). On Memorization of Large Language Models in Logical Reasoning. arXiv:2410.23123.
- Li, J., & Goyal, N. (2025). Memorization vs. Reasoning: Updating LLMs with New Knowledge. arXiv:2504.12523.
- Kambhampati, S. (2024). Through the Uncanny Mirror: Do LLMs Remember Like the Human Mind? Towards Data Science.