
The Memory Wars: When AI Remembers Everything About You, Who Controls Your Mind?


By OrdoResearch
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

Your AI assistant remembers your political views, your health anxieties, your relationship dynamics, your career ambitions, and the patterns of thought you return to when stressed. It has built a model of you that may be more comprehensive than any model you have of yourself. Now consider: this model is stored on servers controlled by a foreign corporation, subject to a foreign government's data access laws, and optimized for engagement rather than your wellbeing. The concept of cognitive sovereignty — the ability of individuals and nations to maintain autonomous thought in the age of AI memory — is emerging as the next frontier of digital rights.

Network Effect 2.0

Brcic (2025), in a paper from the University of Zagreb, introduces a framework for understanding the geopolitics of AI memory systems. He proposes "Network Effect 2.0" — a model where the value of an AI assistant scales not with the number of users (traditional network effects) but with the depth of personalized memory accumulated for each user. The deeper the AI knows you, the more useful it becomes, and the harder it is to switch to a competitor. This creates cognitive moats — lock-in effects that operate at the level of thought rather than data.
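The scaling contrast can be sketched as a toy model. The functional forms below (a Metcalfe-style quadratic for classic network effects, logarithmic returns on memory depth for Network Effect 2.0) are illustrative assumptions for this post, not specifications from Brcic's paper:

```python
import math

# Toy comparison of classic network effects with "Network Effect 2.0".
# Both functional forms are illustrative assumptions, not from Brcic (2025).

def classic_network_value(n_users: int) -> float:
    """Traditional network effect: value scales with the user count squared."""
    return float(n_users ** 2)

def memory_network_value(n_users: int, avg_memory_depth: float) -> float:
    """Network Effect 2.0: per-user value grows with accumulated memory depth.

    log1p models diminishing returns: early memories help a lot,
    the ten-thousandth somewhat less.
    """
    return n_users * math.log1p(avg_memory_depth)

# Under this model, value keeps growing even with a flat user base —
# the moat comes from depth of memory, not headcount.
flat_users = 10_000
shallow = memory_network_value(flat_users, avg_memory_depth=10)
deep = memory_network_value(flat_users, avg_memory_depth=10_000)
print(f"shallow memory: {shallow:.0f}, deep memory: {deep:.0f}")
```

The point of the sketch is the axis of growth: a classic network becomes more valuable only by adding users, while a memory network becomes more valuable for each existing user as its model of them deepens, which is exactly what makes leaving it costly.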

The psychological risks are substantial. Drawing on the extended mind thesis from philosophy of mind — the argument that cognitive processes extend beyond the brain into tools and technologies — Brcic argues that persistent AI memory systems become genuine extensions of their users' cognitive processes. The AI that remembers your reasoning patterns, your decision heuristics, and your knowledge gaps becomes part of how you think. Removing it is not like deleting an app; it is like losing a part of your cognitive infrastructure.

Cognitive Privacy

Khan et al. (2025), in Global Social Sciences Review, examine the intersection of cognitive privacy and AI-driven surveillance. When AI systems process not just behavioral data (what you do) but cognitive data (how you think, what you believe, what you fear), the privacy stakes escalate from informational privacy to cognitive privacy — the right to mental self-determination.

Current privacy frameworks, including GDPR, are designed for informational data: names, addresses, purchase histories, browsing patterns. They are not designed for cognitive models — representations of belief systems, emotional patterns, and reasoning processes that AI memory systems construct from prolonged interaction. The regulatory gap between informational privacy law and cognitive privacy needs is substantial, and no jurisdiction has yet developed a comprehensive framework for addressing it.

Neurorights

Cassinadri and Ienca (2024), in the Journal of Medical Ethics, examine an extreme case that illuminates the cognitive sovereignty stakes: non-voluntary brain-computer interface explantation. When a BCI company discontinued its products, patients who had become dependent on the technology faced the prospect of having their cognitive prosthetics removed without consent. The case demonstrates that cognitive technologies can become so integrated into a person's functioning that removing them constitutes a violation of cognitive liberty.

The parallel to AI memory systems is direct. As persistent AI assistants become integral to how people think, decide, and remember, the question of who controls these systems — and what happens when control changes — becomes a question about cognitive autonomy. A government that can compel access to its citizens' AI memory systems has a form of cognitive surveillance that goes beyond anything previous surveillance technologies could achieve. A corporation that can modify, monetize, or delete AI memory systems has power over its users' cognitive infrastructure.

Brcic proposes a policy framework centered on memory portability (the right to transfer your AI memory to any provider), transparency (the right to inspect what the AI remembers about you), sovereign cognitive infrastructure (national AI systems for critical applications), and strategic alliances between nations seeking to protect cognitive sovereignty against dominant platform providers. These proposals are preliminary, but they frame the emerging debate: in the age of AI memory, cognitive sovereignty may become as important to national security as territorial sovereignty.

The Portability Imperative

Memory portability — the right to transfer your AI memory profile to any provider — is the most practically important component of the cognitive sovereignty framework. Without portability, users face the cognitive equivalent of vendor lock-in: switching AI assistants means losing the accumulated understanding that makes the assistant useful. This lock-in deepens over time, as the AI accumulates more memories and the cost of switching increases.
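The deepening of this lock-in can be made concrete with a toy switching-cost model, in which cost grows with tenure and portability discounts it. The linear forms and the memories-per-day figure are illustrative assumptions, not numbers from the cited papers:

```python
# Toy model of cognitive lock-in: the cost of switching assistants as a
# function of accumulated memory and portability. Linear forms are
# illustrative assumptions, not from the cited papers.

def accumulated_memories(days_of_use: int, memories_per_day: float = 5.0) -> float:
    """Memories accumulated over a tenure with one provider."""
    return days_of_use * memories_per_day

def switching_cost(memories: float, portability: float = 0.0) -> float:
    """Value lost on switching: the share of memory that cannot move.

    portability = 0.0 -> proprietary silo, everything is lost;
    portability = 1.0 -> full memory portability, switching is free.
    """
    if not 0.0 <= portability <= 1.0:
        raise ValueError("portability must be in [0, 1]")
    return memories * (1.0 - portability)

# Lock-in deepens with tenure; a portability mandate flattens it.
for days in (30, 365, 5 * 365):
    locked = switching_cost(accumulated_memories(days))
    portable = switching_cost(accumulated_memories(days), portability=1.0)
    print(f"{days:>5} days: locked-in cost {locked:>8.0f}, with portability {portable:.0f}")
```

However simplified, the model captures the asymmetry in the paragraph above: without portability, switching cost is monotonically increasing in tenure, so the rational user becomes progressively less able to leave.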

The technical feasibility of memory portability depends on standardization — common formats for representing AI memory that any system can import and export. Current AI assistants use proprietary memory architectures that are not interoperable. Establishing portability standards would require either industry agreement (unlikely without regulatory pressure) or regulatory mandate (possible under data portability provisions of existing or new legislation).
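What such a standard might look like can be sketched as a provider-neutral interchange envelope. Everything here — the format name, the field names, the memory kinds — is invented for illustration; no such standard currently exists:

```python
# Sketch of a hypothetical portable-memory interchange format: a
# provider-neutral JSON envelope any assistant could export and import.
# All names and fields are invented for illustration; no standard exists.

import json

def export_memory(provider: str, user_id: str, memories: list[dict]) -> str:
    """Serialize a user's memory profile into a neutral interchange format."""
    envelope = {
        "format": "portable-ai-memory/0.1",  # hypothetical format identifier
        "source_provider": provider,
        "user_id": user_id,
        "memories": [
            {
                "kind": m.get("kind", "fact"),  # e.g. fact, preference, pattern
                "content": m["content"],
                "created_at": m.get("created_at"),
            }
            for m in memories
        ],
    }
    return json.dumps(envelope, indent=2)

def import_memory(blob: str) -> list[dict]:
    """Validate and unpack an exported profile for a new provider."""
    envelope = json.loads(blob)
    if envelope.get("format") != "portable-ai-memory/0.1":
        raise ValueError("unsupported memory format")
    return envelope["memories"]

# Round trip: export from one provider, import into another.
blob = export_memory("provider-a", "user-42",
                     [{"kind": "preference", "content": "prefers concise answers"}])
restored = import_memory(blob)
print(restored[0]["content"])
```

The hard part a real standard would face is not the envelope but the semantics: providers store memories as embeddings, graphs, or fine-tuned weights rather than discrete records, and a mandate would have to specify what must survive the round trip.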

The geopolitical dimension adds urgency. If a nation's citizens store their cognitive profiles on foreign AI platforms, the hosting nation gains a form of cognitive intelligence that goes beyond traditional surveillance. Memory portability combined with sovereign AI infrastructure — national AI systems for critical cognitive functions — provides a defense against this risk. But building sovereign AI infrastructure requires the same AI capabilities that create the dependency in the first place, creating a bootstrapping problem that smaller nations may struggle to resolve.


References

  • Brcic, M. (2025). The Memory Wars: AI Memory, Network Effects, and Cognitive Sovereignty. arXiv. arXiv:2508.05867
  • Khan, A. et al. (2025). Cognitive Privacy and AI-Driven Surveillance. Global Social Sciences Review. DOI:10.31703/gsr.2025(x-iii).13
  • Cassinadri, G. & Ienca, M. (2024). Non-voluntary BCI Explantation: Neurorights Violations. J Medical Ethics. DOI:10.1136/jme-2023-109830