Immunology

The immune system, like the brain, involves a large number of elements “doing different things”. In the brain, there are neurons in a definite physical arrangement that interact electrically. In the (adaptive) immune system there are things like white blood cells and antibodies that basically just “float around”, occasionally interacting through molecular-scale “shape-based” physical binding. It seems pretty natural to make a multicomputational model of this, in which individual immune system elements interact through all possible binding events. One can pick an “assay” reference frame in which one “coarse grains together”, say, all antibodies or all T-cell receptors that have a particular sequence. And by aggregating the underlying token-event graph one will be able to get (at least approximately) a “summary graph” of interactions between types of antibodies, etc.

Then, much as we imagine physical space to be knitted together from atoms of space by their interactions, so also we can expect that the “shape space” of antibodies, etc. will be defined by their interactions. Maybe “interactionally near” shapes will also be near in some simple sequence-based metric, but not necessarily. And for example there’ll be some analog of a light cone that governs any kind of “spreading of immunity” associated with an antigen “at a particular position in shape space”—and it’ll be defined by the causal graph of interactions between immune elements.

When it comes to understanding the “state of the immune system” we can expect—in a typical multicomputational way—that the whole dynamic network will be important. Indeed, perhaps for example “immune memory” is maintained as a “property of the network” even though individual immune elements are continually being created and destroyed—much as particles and objects in physics persist even though their constituent atoms of space are continually changing.
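As a minimal sketch of the construction described above—individual elements (“tokens”) interacting through all possible binding events, then coarse-grained by sequence into a “summary graph”—here is a toy model. Everything concrete in it (the binary sequences, the complement-based binding rule, the mismatch threshold) is an illustrative assumption, not something specified in the text:

```python
import random
from itertools import combinations
from collections import Counter

random.seed(0)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def complement(seq):
    return "".join("1" if c == "0" else "0" for c in seq)

def binds(a, b, threshold=1):
    # Toy "shape-based" binding rule (an assumption): b binds a when b is
    # within `threshold` mismatches of a's exact complement.
    return hamming(complement(a), b) <= threshold

# Individual immune elements ("tokens"): each is an instance of a sequence
# type, and many tokens can share the same type.
sequences = ["0000", "1111", "0101", "1010", "0011"]
tokens = [(i, random.choice(sequences)) for i in range(30)]

# Enumerate all possible binding events between pairs of tokens.
# Each event, together with the tokens it involves, is an edge set in the
# underlying token-event graph.
events = [(t1, t2) for t1, t2 in combinations(tokens, 2)
          if binds(t1[1], t2[1])]

# "Assay" reference frame: coarse-grain tokens by sequence type, counting
# how many underlying events connect each pair of types. This aggregation
# is the approximate "summary graph" of interactions between types.
summary = Counter()
for (i1, s1), (i2, s2) in events:
    summary[tuple(sorted((s1, s2)))] += 1

for (s1, s2), n in sorted(summary.items()):
    print(f"{s1} -- {s2}: {n} binding events")
```

With this particular binding rule, only complementary type pairs (here "0000"/"1111" and "0101"/"1010") ever produce events, so the summary graph connects types by shape complementarity rather than by sequence similarity—a small illustration of how “interactional nearness” need not track a simple sequence-based metric.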