This page situates communication with large language models within Elena Esposito’s theory of social forgetting and her analysis of algorithmic memory. It complements: Communication with an Artificial Partner
question
What kind of memory – and what kind of forgetting – is involved in algorithmic processing, and how does it differ from human and Zettelkasten-based memory?
claim
Algorithmic systems do not remember and do not forget. They process data without understanding and produce information by exploiting human memory, intelligence, and attribution of meaning. This makes algorithmic processing a new medium of social forgetting rather than a stronger form of memory.
support
Esposito (2002) shows that social memory is inseparable from forgetting. Remembering always requires selection, and selection requires exclusion. Forgetting is not erasure but a structural condition of meaning. In her 2017 analysis of algorithmic memory, Esposito sharpens this point: algorithms do not operate with memories at all. They operate with data (differences) and correlations; they neither remember nor forget, but calculate. Their specificity lies in their ability to bypass the classical paradox of forgetting, namely that attempts to forget something usually reinforce remembering by drawing attention to it. Algorithmic systems can instead reinforce forgetting by multiplying references, copies, and contextualizations until no stable attribution remains.
distinction
Human memory:
• remembering tied to understanding
• forgetting paradoxical (deciding to forget reinforces remembering)
• memory attributed to subjects

Zettelkasten:
• second memory with durable addresses
• forgetting through loss of connectivity
• inspectable genealogy of ideas

Algorithmic systems (LLMs):
• no memory, no forgetting
• processing without understanding
• outputs not attributable to any subject
• parasitic exploitation of human meaning-making
---
bridge to LLMs
Communication with LLMs appears as communication with an artificial partner, but this partner does not possess memory in the system-theoretical sense. What appears as “amnesia” is in fact non-mnemonic processing. What appears as “knowledge” is a communicative effect produced by the statistical recombination of differences (a minimal code sketch of this contrast appears at the end of this page). LLMs therefore radicalize the forgetting function of modern media: they generate information while dissolving stable memory traces.
---
implications
• The right to be forgotten cannot be modelled as deletion.
• Algorithmic forgetting operates through multiplication, not erasure.
• Responsibility and attribution become structurally problematic.
• Communication remains possible, but memory becomes externalized, fragmented, and institutionally contested.
---
related
Soziales Vergessen (Esposito, 2002)
Algorithmic Memory and the Right to Be Forgotten (Esposito, 2017)
Kommunikation mit Zettelkästen (Luhmann, 1981)
Communication with an Artificial Partner
Memory / Forgetting
Algorithms
pages/algorithmic-forgetting-and-the-right-to-be-forgotten
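sketch
A minimal, purely illustrative sketch of the contrast drawn above, not anything from Esposito or Luhmann: the data structures and names used here (notes, reachable, bigrams, generate) are assumptions introduced only for illustration. It contrasts a Zettelkasten as a store of durably addressed, linked notes, where forgetting appears as loss of connectivity rather than deletion, with a toy generator that retains only co-occurrence counts and produces text by statistical recombination, leaving no addressable trace that could be remembered or forgotten.

```python
from collections import defaultdict
import random

# --- Zettelkasten: durable addresses plus links between notes ----------------
# Forgetting here is not deletion: an unlinked note still exists at its address
# but drops out of use because no chain of references leads to it any more.
notes = {
    "21/3d7":  {"text": "Forgetting is a structural condition of meaning.", "links": ["21/3d7a"]},
    "21/3d7a": {"text": "Remembering requires selection; selection requires exclusion.", "links": []},
    "9/8b":    {"text": "An orphaned note: nothing links here.", "links": []},
}

def reachable(start):
    """Return the addresses reachable from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        addr = stack.pop()
        if addr not in seen:
            seen.add(addr)
            stack.extend(notes[addr]["links"])
    return seen

print(reachable("21/3d7"))  # {'21/3d7', '21/3d7a'} -- "9/8b" is "forgotten" by disconnection

# --- Statistical recombination: counts instead of addressable memories -------
# Only co-occurrence statistics are retained; output is produced by recombining
# them, so there is no stored trace of any particular utterance.
corpus = "forgetting is not erasure forgetting is a condition of meaning".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(word, n=6):
    out = [word]
    for _ in range(n):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("forgetting"))  # e.g. "forgetting is a condition of meaning"
```

The design point of the contrast: in the first structure every entry has an address that can be inspected, linked, or cut off; in the second there is nothing to address, only statistics to recalculate.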