Memories

When Inflection AI introduced Pi, one of its most interesting features was "memory": the ability to keep track of past interactions so that they could positively influence future ones. Generative AI companies are now taking this further, with both Microsoft and OpenAI talking about virtually infinite memory. Yet in the early days of intelligent machines, back in 1969, researchers grappled with what they called "the frame problem," and infinite memory can actually make that problem harder.

John McCarthy and Patrick Hayes, wrestling with logic-driven robots, posed a seemingly trivial question: after an action, how does a machine decide what didn't change? In their 1969 paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence," they showed that one had to assert a "frame axiom" for every invariant fact ("the ceiling is still overhead, gravity is still on"). Doing so was prohibitively complex for the robots and computers available at the time (hence a "philosophical" problem).

Fast forward to 2025: OpenAI has switched ChatGPT's memory from per-session to per-life. Every prior conversation is embedded, stored, and surfaced whenever the system's retrieval policy thinks it might help. Usefulness, though, will hinge on a retrieval policy that can correctly decide which memories to bring forward and which to leave buried. The once philosophical problem has become a practical challenge:

  1. Scale: the longer ChatGPT stores your interactions, the more stale or trivial facts can hijack its reasoning.
  2. Regulatory: complying with laws such as the EU's "right to be forgotten" becomes difficult when these memories may contain information about anyone.
  3. Variance: mis-retrieval can occur when a person's facts or preferences change.

This is effectively a restatement of McCarthy and Hayes's 1969 frame problem: instead of asking whether a machine can remember the relevant, can AI forget the irrelevant? Without the ability to forget, we may find that infinite memory becomes a liability rather than a benefit.
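
The shape of the problem shows up even in a toy memory system. Below is a minimal, hypothetical sketch in Python (not any vendor's actual implementation; all names, weights, and the decay half-life are illustrative assumptions) of an embedding-based memory store that combines similarity retrieval with two forms of forgetting: explicit deletion, for superseded facts or "right to be forgotten" requests, and recency decay, so stale memories gradually lose influence.

```python
# Hypothetical sketch of an embedding-based memory store with forgetting.
# Names, weights, and the half-life are illustrative assumptions only.
import math
import time
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    embedding: list[float]
    created_at: float = field(default_factory=time.time)


class MemoryStore:
    def __init__(self, half_life_days: float = 90.0):
        self.memories: list[Memory] = []
        self.half_life = half_life_days * 86400  # seconds

    def add(self, text: str, embedding: list[float]) -> None:
        self.memories.append(Memory(text, embedding))

    def forget(self, predicate) -> None:
        # Explicit deletion: drop memories matching a predicate, e.g. a
        # superseded preference or a "right to be forgotten" request.
        self.memories = [m for m in self.memories if not predicate(m)]

    def retrieve(self, query_embedding: list[float], k: int = 3) -> list[Memory]:
        # Rank by cosine similarity discounted by age, so stale or trivial
        # facts are less likely to hijack the current context.
        now = time.time()

        def score(m: Memory) -> float:
            sim = _cosine(query_embedding, m.embedding)
            decay = 0.5 ** ((now - m.created_at) / self.half_life)
            return sim * decay

        return sorted(self.memories, key=score, reverse=True)[:k]


def _cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Even this toy makes the trade-off visible: the half-life, the number of memories returned, and the deletion rules are all policy choices, and any such policy will sometimes bury a memory that mattered or surface one that did not. That is the frame problem restated as engineering.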


Francis Carden

Analysis.Tech | Analyst | CEO, Founder, Automation Den | Keynote Speaker | Thought Leader | LOWCODE | NOCODE | GenAi | Godfather of RPA | Inventor of Neuronomous | UX Guru | Investor | Podcaster

1w

This new memory "labels" feature is quite cool if used well and succinctly... so far, it seems to strike a fair balance. Without this, it was hard to manage "out of memory" threads.

Erwin Anderson-Smith

Strategic Partnerships Executive, Quest Software | Delivering Scalable Data & AI Solutions | Driving Growth Through High-Impact Alliances | Business Development Leader

1w

Ted, I always appreciate your thought-provoking articles. This one really made me pause and reflect. Are all memories truly relevant? Take athletes, for example. After a bad performance, they often try to quickly move on—forget the game, reset mentally. But at the same time, they also analyze what went wrong so they can improve. One response is emotional and psychological, the other is rational and action-oriented. It makes me wonder: as generative AI evolves, will it ever be able to adopt this kind of dual-memory logic? One that mirrors not just our ability to remember or forget, but the deeper psychological reasons why we do so? Fascinating questions ahead as we push the boundaries of memory in machines. #Infinitefuture

Avi Sahi

AI First Sales Leader | EVP Kore AI | University Chair AI | Advisor Gen AI

2w

Your insights on infinite memory in AI are enlightening. How about exploring proactive forgetting mechanisms to ensure relevance and efficiency in interactions? 🤔

Adrian Chan

Bridging experience and interaction design with AI

2w

I think this is interesting in two modes: 1) memory implies the model's ability to mirror, reference, and approximate user preferences and interests, so is *relational* and 2) memory provides the ability to preserve and sustain content and substance of ongoing conversations, so is interaction design. Both are user experience design, one touching on our psychological proximity and trust in AI and the other the investment and engagement we make in AI-generated interactions. I find it useful personally but occasionally weird, such as when ChatGPT will refer to my favorite philosophers or films in order to make analogies or to draw comparisons. This raises a point of unintended consequences, which are many with GenAI: is it more or less uncanny for the AI to approximate us? Is a more personal and personable AI more or less weird to us? Is the distinction a fixed difference between humans and machines - and thus insurmountable? Or is it a threshold that moves over time as we become accustomed to and grow comfortable with new technologies? As we've never before had real *intelligence* technologies, AI is perhaps not a medium (in McLuhan's sense), though it is an "extension" of human capabilities (thought)...

Interesting comparison with the "frame problem" from 1969! I agree, as memory builds up, there's a lot to consider both technically and ethically, Ted Shelton.

