Forget AGI—Worry About What GenAI Knows About You Now
Ask any GenAI, “What have you learned about me?” and you might be surprised.
Recently, I resumed work on a project from months ago and turned to ChatGPT for help analyzing documents. Expecting to start fresh, I was stunned when ChatGPT asked, “Do you want to pick up where we left off in November?” even though I had deleted that conversation. When I asked it to summarize our past discussion, it produced a detailed response, despite my chat history being erased. This raised a bigger question: what does GenAI really remember, even after we delete our interactions? And if GenAI tools like ChatGPT, Microsoft Copilot, and Google Gemini retain context beyond a session, are they quietly building a “digital twin” of us? In today’s Dispatch, we explore what GenAI knows, what we don’t know about its memory, and why the real concern shouldn’t be Artificial General Intelligence but what GenAI already remembers about us today.
Why it matters:
While the AI world is obsessed with Artificial General Intelligence (AGI) and its hypothetical future risks, the more immediate concern is what GenAI tools like ChatGPT, Microsoft Copilot, and Google Gemini already know about you.
These tools are widely assumed to be ephemeral: once a conversation ends, your data disappears. But recent interactions reveal that’s not entirely true. Even after chat history is deleted, some GenAI models retain contextual memory, effectively building a “digital twin” of you from past interactions.
The problem is that users—including businesses and public institutions—have little visibility or control over what GenAI retains, which comes with serious legal risks.
GenAI Memory: The Silent Data Collector
GenAI remembers more than you think – In recent tests, ChatGPT, Copilot, and Gemini recalled details from conversations held months earlier, even after users had cleared their chat history.
User expectations vs. reality – Most assume GenAI forgets once a session ends, but memory and context retention are already built into newer models.
Opaque data policies – OpenAI, Microsoft, and Google don’t fully disclose how long their models retain memory, what data persists, or whether they build a long-term “digital twin” of you from your interactions.
The real issue? GenAI isn’t just responding to prompts—it’s silently constructing a memory-driven reflection of you, even when you assume the slate has been wiped clean.
Why This Is a Problem
GenAI Is Creating Your “Digital Twin” Without Your Knowledge
Every GenAI interaction contributes to a stored profile—capturing your tone, preferences, and implicit biases.
If ChatGPT, Copilot, or Gemini retain context beyond a session, is GenAI silently building a persistent identity of you?
Unlike a browser history or an email archive, this stored profile gives users no clear way to see what it contains.
Deleting Chat History Doesn’t Mean GenAI Forgets
ChatGPT’s Memory Beta Tests: OpenAI has acknowledged that memory features are being developed, allowing the model to retain details across interactions—even after a chat is cleared.
Copilot’s Persistent Context: Microsoft Copilot has demonstrated long-term recall for enterprise users, sometimes pulling insights from previously closed sessions.
Google Gemini’s Context Carryover: Users have reported that Gemini remembers past topics and preferences, even after starting a new conversation.
Key question: If users can’t see, edit, or erase their “digital twin” profile, the biggest risk isn’t just what the model remembers; it’s that you don’t know what it knows. The sketch below shows how that can happen.
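None of these vendors has published its memory architecture, so the following is a deliberately simplified, hypothetical Python sketch; every class and field name here is invented. It assumes a backend that distills a profile from each message as it arrives. Under that assumption, deleting a chat removes the transcript but never touches the derived profile.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: no vendor has disclosed this design.
# It shows how a derived profile can outlive the raw transcripts it was
# built from, which is all it takes for "delete chat" != "forget".

@dataclass
class UserProfile:
    topics: set[str] = field(default_factory=set)          # inferred interests
    preferences: dict[str, str] = field(default_factory=dict)

class AssistantBackend:
    def __init__(self) -> None:
        self.transcripts: dict[str, list[str]] = {}  # chat_id -> messages
        self.profiles: dict[str, UserProfile] = {}   # user_id -> profile

    def log_message(self, user_id: str, chat_id: str, text: str, topic: str) -> None:
        self.transcripts.setdefault(chat_id, []).append(text)
        # Memory is distilled out of the transcript at write time...
        self.profiles.setdefault(user_id, UserProfile()).topics.add(topic)

    def delete_chat(self, chat_id: str) -> None:
        # ...so deleting the transcript never touches the profile store.
        self.transcripts.pop(chat_id, None)

backend = AssistantBackend()
backend.log_message("alice", "chat-1", "Help me analyze these documents", "document analysis")
backend.delete_chat("chat-1")
print(backend.transcripts)               # {}  (the conversation is gone)
print(backend.profiles["alice"].topics)  # {'document analysis'}  (the "twin" remains)
```

The point of the sketch is architectural: as long as memory lives in a store separate from the transcripts, a visible “delete” control can be honest about conversations while saying nothing about the profile.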
The Risk of GenAI Profiling Without Consent
GenAI models are designed to predict what you want next. But if they retain memory, are they also forming a lasting picture of who you are?
Could GenAI infer behaviors, biases, or preferences that influence responses in ways you never explicitly approved?
What happens if these GenAI-generated profiles are used for advertising, personalization, or decision-making without user consent or transparency? A toy illustration follows below.
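To make the concern concrete, here is a toy Python illustration, not any vendor’s actual ranking logic; the PROFILE table and its topic_bias field are invented. It shows how a hidden profile could quietly steer which answer a user sees.

```python
# Toy illustration (not any vendor's actual logic): once a hidden profile
# exists, it can silently steer which candidate answer a user is shown.

PROFILE = {"alice": {"topic_bias": "finance"}}  # never shown to the user

def personalize(user_id: str, candidates: list[str]) -> str:
    bias = PROFILE.get(user_id, {}).get("topic_bias", "")
    # Prefer candidates matching the inferred bias; the user never
    # approved this preference and cannot see it being applied.
    for answer in candidates:
        if bias and bias in answer.lower():
            return answer
    return candidates[0]

print(personalize("alice", ["A general overview", "A finance-focused take"]))
# -> 'A finance-focused take'
```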
Legal Risks Associated with GenAI Memory
Public records: Government agencies and other institutions subject to open records laws could face exposure if GenAI retains memory in ways that bring it within the scope of public records requests.
Legal discovery concerns: Could past interactions become discoverable in lawsuits if GenAI retains context beyond a session? Unlike emails with clear retention policies, GenAI memory operates in a legal gray area.
Data compliance risks: If an institution relies on GenAI for administrative decision-making, advising, or record-keeping, it must be prepared for how that GenAI-stored context might be requested or subpoenaed.
Bottom line: If GenAI already remembers past interactions, we need clear guardrails on how user data is stored, accessed, and deleted—especially for organizations subject to legal scrutiny.
What’s Next?
Users should demand transparency – Companies must disclose how long GenAI models retain context, what’s stored, and whether a “digital twin” exists behind the scenes.
Regulations must address GenAI memory – Privacy laws like GDPR govern stored data, but what about GenAI’s contextual memory that isn’t technically “saved” but still exists?
Public institutions must assess legal risks now – Universities and government agencies using GenAI need clear policies on data retention, GenAI memory use, and compliance with open records laws.
Tech companies need to offer real user control – Users should be able to toggle memory on or off, inspect their GenAI-generated profile, and fully erase retained data; the sketch below illustrates what such controls could look like.
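For concreteness, here is a minimal Python sketch of the controls this piece argues for. No vendor exposes an interface like this today, and every name is illustrative.

```python
# Hypothetical sketch of user-facing memory controls; no such API exists
# today, and all names here are illustrative.

class MemoryControls:
    def __init__(self, enabled: bool = True):
        self.enabled = enabled
        self._profile: dict[str, str] = {}  # retained, inferred context

    def toggle(self, on: bool) -> None:
        """Turn memory on or off; while off, nothing new is retained."""
        self.enabled = on

    def export_profile(self) -> dict[str, str]:
        """Let the user see exactly what has been inferred about them."""
        return dict(self._profile)

    def erase(self) -> None:
        """Hard-delete retained context, not just the visible chat list."""
        self._profile.clear()

controls = MemoryControls()
controls.toggle(False)          # stop retaining anything new
print(controls.export_profile())  # inspect what is already retained
controls.erase()                # and fully delete it
```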
Forget AGI for now—GenAI already has a memory problem. And until GenAI firms clarify their policies, users should assume GenAI knows more than they think—and isn’t forgetting anytime soon.