Memories
When Inflection AI introduced Pi, one of its most interesting features was "memory": the ability to keep track of past interactions so that they can positively influence future ones. Generative AI companies are now taking this even further, with both Microsoft and OpenAI talking about virtually infinite memory. But in the early days of intelligent machines, all the way back in 1969, researchers grappled with what they called "the frame problem," and infinite memory can actually make this problem even harder.
John McCarthy and Patrick Hayes, wrestling with logic-driven robots, posed a seemingly trivial question: after an action, how does a machine decide what didn't change? In their 1969 paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence," they showed that one had to assert a "frame axiom" for every invariant fact ("the ceiling is still overhead, gravity is still on"). Doing so was, of course, prohibitively complex for the robots and computers available at the time (thus a "philosophical" problem).
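In the situation-calculus notation McCarthy and Hayes worked in, a single frame axiom might look like the following (a standard textbook-style illustration, not a formula taken from their paper): moving block y to location l leaves every block's color untouched.

```latex
% Frame axiom: the action move(y, l) does not change any block's color.
\forall x, c, y, l, s.\;
  \mathit{color}(x, c, s) \rightarrow
  \mathit{color}(x, c, \mathit{do}(\mathit{move}(y, l), s))
```

One such axiom is needed for every action/fact pair the action leaves alone, which is why the axioms multiply so quickly.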
Fast forward to 2025: OpenAI has switched ChatGPT's memory from per-session to per-life. Every prior conversation is embedded, stored, and surfaced whenever the system's retrieval policy thinks it might help. Usefulness, though, will hinge on a retrieval policy that can correctly decide which memories to bring forward and which to leave buried. The once philosophical problem has become a practical challenge.
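A minimal sketch of what such a retrieval policy might look like, assuming each stored conversation has already been embedded as a vector (the function names, toy vectors, and thresholds here are illustrative, not OpenAI's actual implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, memories, k=2, threshold=0.5):
    """Surface at most k stored memories whose embedding similarity
    to the current query clears the threshold; leave the rest buried."""
    scored = [(cosine(query_vec, vec), text) for text, vec in memories]
    scored.sort(reverse=True)
    return [text for score, text in scored[:k] if score >= threshold]
```

A production system would use a vector database and learned relevance signals rather than raw cosine similarity, but the core decision is the same: surface only what scores above some bar, and leave everything else alone.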
This is effectively a restatement of McCarthy and Hayes's 1969 frame problem: instead of asking whether a machine can remember the relevant, we must ask whether AI can forget the irrelevant. Without the ability to forget, we may find that infinite memory becomes a liability rather than a benefit.
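What "the ability to forget" might mean in practice is an open design question. One simple possibility is to decay each memory's relevance over time, boost it when the memory actually proves useful, and prune anything that falls below a floor (a hypothetical sketch, not any vendor's actual mechanism):

```python
import math

def relevance(age_days, times_retrieved, half_life=30.0):
    """Exponential recency decay, boosted by how often the memory
    has actually been useful in past retrievals."""
    recency = 0.5 ** (age_days / half_life)
    return recency * (1.0 + math.log1p(times_retrieved))

def forget(memories, floor=0.1):
    """Drop memories whose decayed relevance has fallen below the floor."""
    return [m for m in memories if relevance(m["age_days"], m["hits"]) >= floor]
```

Under these (arbitrary) parameters, a memory untouched for a year scores far below the floor and is pruned, while a recent, frequently retrieved one survives.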
Analysis.Tech | Analyst | CEO, Founder, Automation Den | Keynote Speaker | Thought Leader | LOWCODE | NOCODE | GenAi | Godfather of RPA | Inventor of Neuronomous | UX Guru | Investor | Podcaster
This new memory "labels" feature is quite cool if used well and succinctly. So far, it seems to strike a fair balance. Without this, it was hard to manage "out of memory" threads.
Strategic Partnerships Executive, Quest Software | Delivering Scalable Data & AI Solutions | Driving Growth Through High-Impact Alliances | Business Development Leader
Ted, I always appreciate your thought-provoking articles. This one really made me pause and reflect. Are all memories truly relevant? Take athletes, for example. After a bad performance, they often try to quickly move on—forget the game, reset mentally. But at the same time, they also analyze what went wrong so they can improve. One response is emotional and psychological, the other is rational and action-oriented. It makes me wonder: as generative AI evolves, will it ever be able to adopt this kind of dual-memory logic? One that mirrors not just our ability to remember or forget, but the deeper psychological reasons why we do so? Fascinating questions ahead as we push the boundaries of memory in machines. #Infinitefuture
AI First Sales Leader I EVP Kore AI | University Chair AI | Advisor Gen AI
Your insights on infinite memory in AI are enlightening. How about exploring proactive forgetting mechanisms to ensure relevance and efficiency in interactions? 🤔
Bridging experience and interaction design with AI
I think this is interesting in two modes: 1) memory implies the model's ability to mirror, reference, and approximate user preferences and interests, so is *relational* and 2) memory provides the ability to preserve and sustain content and substance of ongoing conversations, so is interaction design. Both are user experience design, one touching on our psychological proximity and trust in AI and the other the investment and engagement we make in AI-generated interactions. I find it useful personally but occasionally weird, such as when ChatGPT will refer to my favorite philosophers or films in order to make analogies or to draw comparisons. This raises a point of unintended consequences, which are many with GenAI: is it more or less uncanny for the AI to approximate us? Is a more personal and personable AI more or less weird to us? Is the distinction a fixed difference between humans and machines - and thus insurmountable? Or is it a threshold that moves over time as we become accustomed to and grow comfortable with new technologies? As we've never before had real *intelligence* technologies, AI is perhaps not a medium (in McLuhan's sense), though it is an "extension" of human capabilities (thought)...
Interesting comparison with the "frame problem" from 1969! I agree: as memory builds up, there's a lot to consider, both technically and ethically. Ted Shelton