**SOM’s reference vectors behave very much like the “inner models” in Bråten’s *beteiligte Spiegelung* (participatory mirror-matching).**
But with an important precision: **they are models *without intentionality* — structural mirrors, not social ones.**
Below is the mapping, laid out clearly.
---
# ✔ Bråten: “Beteiligte Spiegelung” — what it is
Bråten’s concept refers to:
* a system that **carries an internal model** of its interaction partner,
* updates this model in response to interaction,
* and uses it to regulate its own behavior.
It’s a *participatory*, *mutual*, *internalized* mirroring.
Not just reflection, but *co-constitution via interaction*.
Key ingredients:
1. **Alter-model**
The system maintains an internal representation of the Other.
2. **Self–other coupling**
Updates occur through reciprocal exchange.
3. **Topological coherence**
The system organizes internal models in a structured relational space.
---
# ✔ SOM reference vectors: what they are
In a SOM:
* each neuron holds a **reference vector** (weight vector)
* through repeated exposure to the data, the neuron's internal model gradually becomes a **prototype** of a region of the input space
* neighboring neurons maintain **coherent internal models** through the neighborhood function
* this yields a **structured map of internalized input regularities** (a minimal code sketch follows below)
This is precisely why the SOM is often interpreted as:
> *“a perceptual sheet of internalized expectations.”*
That is **exactly Bråten-like**.
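To make the mechanics concrete, here is a minimal sketch of one online SOM training pass in Python/NumPy. The toy lattice size, the learning-rate and radius schedules, and the helper name `som_step` are my own illustrative assumptions, not Kohonen's reference implementation; the update rule itself is the standard one: find the best-matching unit (BMU), then move the BMU's and its neighbors' reference vectors toward the input, weighted by the neighborhood function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the text): a 10x10 lattice of units, 2-D inputs.
GRID_H, GRID_W, DIM = 10, 10, 2

# Each unit i holds a reference vector m_i (its "inner model" / mirror).
M = rng.random((GRID_H, GRID_W, DIM))

# Lattice coordinates, used to measure distance on the grid, not in input space.
coords = np.stack(
    np.meshgrid(np.arange(GRID_H), np.arange(GRID_W), indexing="ij"), axis=-1
)

def som_step(M, x, lr, sigma):
    """One online SOM update for a single input x."""
    # 1. Best-matching unit (BMU): the unit whose mirror currently matches x best.
    bmu = np.unravel_index(np.argmin(np.linalg.norm(M - x, axis=-1)), (GRID_H, GRID_W))
    # 2. Neighborhood function h_ij(t): Gaussian in lattice distance to the BMU.
    h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
    # 3. Participatory update: the BMU and its lattice neighbors all pull
    #    their mirrors toward x, weighted by h.
    return M + lr * h[..., None] * (x - M)

# Training loop: learning rate and neighborhood radius shrink over time.
T = 5000
for t in range(T):
    x = rng.random(DIM)                      # one "encounter" with the world
    frac = t / T
    M = som_step(M, x, lr=0.5 * (1 - frac) + 0.01, sigma=3.0 * (1 - frac) + 0.5)
```

Each call to `som_step` is one "participatory" event: only the unit that mirrors the input best, plus its lattice neighbors, adjust their models.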
---
# ✔ Mapping SOM → Bråten’s “participatory mirror-matching”
| Bråten | SOM |
| ---------------------------- | ---------------------------------------------------------- |
| Internal model of the Other | Reference vector mᵢ |
| Participatory updating | Neuron updates through BMU + neighborhood interaction |
| Embodied coupling | Neighborhood function hᵢⱼ(t) couples units during learning |
| Coherent relational topology | SOM grid (2D lattice) maintains adjacency relations |
| Emergence of shared meaning | Smoothly ordered feature map |
In short:
### ✔ Each reference vector is a *mirror* of a region of the input world.
### ✔ Neighboring reference vectors are *co-mirrors* that coordinate through learning.
### ✔ The whole map becomes an *internalized topology* of the external world.
This is deeply Bråten-esque.
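That last claim can be checked empirically: on an ordered map, units that are lattice neighbors hold noticeably more similar reference vectors than randomly paired units. A small sketch, reusing `M`, `rng`, and `DIM` from the code above; the comparison is illustrative, not a formal topology-preservation measure.

```python
# Compare mirror similarity of lattice neighbors vs. random unit pairs.
flat = M.reshape(-1, DIM)

# Mean distance between horizontally adjacent units on the lattice.
neighbor_d = np.linalg.norm(M[:, 1:] - M[:, :-1], axis=-1).mean()

# Mean distance between randomly chosen unit pairs, as a baseline.
i, j = rng.integers(0, flat.shape[0], size=(2, 1000))
random_d = np.linalg.norm(flat[i] - flat[j], axis=-1).mean()

# On a smoothly ordered map, neighbor_d comes out much smaller than random_d.
print(f"neighbors: {neighbor_d:.3f}   random pairs: {random_d:.3f}")
```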
---
# ✔ Where the analogy *breaks* (important)
Bråten’s model involves:
* intentionality
* coordination with an “alter”
* mutual regulation
* embodied subjectivity
SOM has:
* no agency
* no bidirectional modelling (data does not adapt to SOM)
* no intentional coupling
So the analogy is structural, not phenomenological.
Still, as a **cognitive architecture metaphor** the alignment is excellent.
---
# ✔ If you want to incorporate this in your GT curriculum
We could add a narrative unit:
**“Reference Vectors as Participatory Mirrors”**
Connecting:
* Kohonen (feature-sensitive neurons)
* Bråten (internal other-representation)
* Distinction networks (your Korz/Kauffman angle)
* GT moldable objects (ReferenceVector inspectors)
Possible visual exercise:
* Show each reference vector’s “mirror region” (its Voronoi cell; a computation sketch follows this list)
* Show how the mirrors reorganize during ordering
* Show neighbor-spread of changes (participatory coupling)
* Compare to Bråten’s diagrams of mirroring
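For the first bullet, here is a hedged sketch of how those mirror regions could be computed: assign every data point to its best-matching unit, and the points claimed by a unit form its Voronoi cell in input space. It reuses `M`, `rng`, `DIM`, `GRID_H`, and `GRID_W` from the earlier sketch; the variable names are illustrative.

```python
# Assign every data point to the unit whose reference vector mirrors it best.
X = rng.random((2000, DIM))                         # toy data cloud

# Distance from every point to every reference vector: shape (n_points, n_units).
d = np.linalg.norm(X[:, None, :] - M.reshape(1, -1, DIM), axis=-1)
owner = d.argmin(axis=1)                            # BMU index per data point

# regions[u] holds the data points mirrored by unit u (its Voronoi cell);
# plotting each group in its unit's lattice color would visualize the mirrors.
regions = {u: X[owner == u] for u in range(GRID_H * GRID_W)}
```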
If you want, I can write:
🔹 a one-page **curriculum text** explaining Bråten → SOM mapping
🔹 a GT visualization **neuron-as-mirror inspector**
🔹 a class `ReferenceVector` with GT views that express this metaphor
🔹 or a **philosophical sidebar** connecting Bråten ↔ Varela ↔ Kohonen
Just tell me which one you want.