⇒ Physicalized Concepts in Multicomputation
From the basic definitions of multicomputation it’s hard to have much intuition about how multicomputational systems will work. But knowing how multicomputation works in our model of fundamental physics immediately gives us not only powerful intuition, but also all sorts of metaphors and language for describing multicomputation in pretty much any possible setting.

As we saw in the previous section, at the lowest level in any multicomputational system there are what we can call (in correspondence with physics) “events”—each representing an individual application of a rule, “progressing through time”. We can think of these rules as operating on “tokens” of some kind (and, yes, that’s a term from computation, not physics). And what these tokens represent will depend on what the multicomputational system is supposed to be modeling. Often (as in our Physics Project) the tokens will consist of combinations of elements—where the same elements can be shared across different tokens. (In the case of our Physics Project, we view the elements as “atoms of space”, with the tokens representing connections among them.)

The sharing of elements between tokens is one way in which the tokens can be “knitted together” to define something like space. But there is another, more robust way as well: whenever a single event produces multiple tokens it effectively defines a relation between those tokens. And the map of all such relations—which is essentially the token-event graph—defines the way that different tokens are knitted together into some kind of generalized spacetime structure.

At the level of individual events, ideas from the theory and practice of computation are useful. Events are like functions, whose “arguments” are incoming tokens, and whose output is one or more outgoing tokens. The tokens that exist in a certain “time slice” of the token-event graph together effectively represent the “data structure” on which the functions are acting.
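To make the “events as functions acting on tokens” picture concrete, here is a minimal sketch in Python (a toy, not code from the Physics Project). It assumes a simple string-rewrite system in which each event consumes one token and produces one; in general an event may consume and produce several tokens, but the bookkeeping is the same.

```python
def token_event_graph(initial, rules, steps):
    # Tokens are strings; an event is one application of a rule
    # (lhs -> rhs) at one position, consuming one token and producing one.
    tokens = {initial: 0}                # token -> generation first seen
    events = []                          # (input, rule, position, output)
    frontier = [initial]
    for gen in range(1, steps + 1):
        next_frontier = []
        for tok in frontier:
            for lhs, rhs in rules:
                pos = tok.find(lhs)
                while pos != -1:         # apply the rule at every position
                    out = tok[:pos] + rhs + tok[pos + len(lhs):]
                    events.append((tok, (lhs, rhs), pos, out))
                    if out not in tokens:
                        tokens[out] = gen
                        next_frontier.append(out)
                    pos = tok.find(lhs, pos + 1)
        frontier = next_frontier
    return tokens, events

tokens, events = token_event_graph("A", [("A", "AB"), ("B", "A")], 2)
# tokens: {"A": 0, "AB": 1, "ABB": 2, "AA": 2}; events records the graph edges
```

The `events` list is exactly the token-event graph: each entry links incoming tokens to outgoing ones, and the generation numbers give one possible foliation into time slices.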
(Unlike in basic sequential programming, however, the functions can act in parallel on different parts of the data structure.) The whole token-event graph then gives the complete “execution history” of how the functions act on the data structure. (In an ordinary computational system, this “execution history” would essentially form a single chain; a defining feature of a true multicomputational system is that this history instead forms a nontrivial graph.)

In understanding the analogy with our everyday experience of physics, we’re immediately led to ask what aspect of the token-event graph corresponds to ordinary, physical space. But as we’ve discussed, the answer is slightly complicated. As soon as we set up a foliation of the token-event graph, effectively dividing it into a sequence of time slices, we can say that the tokens on each slice correspond to a certain kind of space, “knitted together” by the entanglements of the tokens defined by their common ancestry in events. But the kind of space we get is in general something beyond ordinary physical space—effectively something we can call “multispace”.

In the specific setup of our Physics Project, however, it’s possible, at least in certain limits, to define a decomposition of this space into two components: one that corresponds to ordinary physical space, and one that corresponds to what we call branchial space, which is effectively a map of entangled possible quantum states. In multicomputational systems set up in different ways, this kind of decomposition may work differently. But given our everyday intuition—and mathematical physics knowledge—about ordinary physical space, it’s convenient to focus on this first in describing the general “physicalization” of multicomputational systems.

In our Physics Project, ordinary “geometrical” physical space emerges as a very large-scale limit of slices of the token-event graph that can be represented as “spatial hypergraphs”.
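One generic way to probe the geometry of such an emergent space is to measure how fast geodesic balls grow with radius. As a hedged sketch (using an ordinary 2D grid graph purely as a stand-in for a slice of a spatial hypergraph):

```python
import math
from collections import deque

def ball_sizes(neighbors, start, rmax):
    # |B_r| = number of nodes within graph distance r of `start`
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] == rmax:
            continue
        for nb in neighbors(node):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    sizes = [0] * (rmax + 1)
    for d in dist.values():
        sizes[d] += 1
    for r in range(1, rmax + 1):     # accumulate shells into balls
        sizes[r] += sizes[r - 1]
    return sizes

# Stand-in space: the 2D integer grid, where |B_r| grows like r^2
def grid_neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

sizes = ball_sizes(grid_neighbors, (0, 0), 20)
# Effective dimension from the doubling ratio |B_2r| / |B_r|; close to 2 here
dim = math.log(sizes[20] / sizes[10], 2)
```

For a graph whose large-scale limit is d-dimensional, the doubling ratio approaches 2^d; the same estimator can be applied to any emergent space, whether or not the answer is a tame integer.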
In the Physics Project we imagine that at least in the current universe, the effective dimension of the spatial hypergraph (measured, for example, through growth rates of geodesic balls) corresponds to the observed 3 dimensions of physical space. But it’s important to realize that the underlying structure of multicomputation doesn’t in any way require such a “tame” limiting form for space—and in other settings (even branchial space in physics) things may be much wilder, and much less amenable to present-day mathematical characterization. But the picture in our Physics Project is that even though there is all sorts of computationally irreducible—and seemingly random—underlying behavior, physical space still has an identifiable large-scale limiting structure. Needless to say, as soon as we talk about “identifiable structure” we’re implicitly assuming something about the observer who’s perceiving it. And in seeing how to leverage intuition from physics, it’s useful to discuss what we can view as the simpler case of thermodynamics and statistical mechanics. At the lowest level something like a gas consists of large numbers of discrete molecules interacting according to certain rules. And it’s almost inevitable that the detailed behavior of these molecules will show computational irreducibility—and great complexity. But to an observer who just looks at things like average densities of molecules the story will be different—and the observer will just perceive simple laws like diffusion. And in fact it’s the very complexity of the underlying behavior that leads to this apparent simplicity. Because a computationally bounded observer (like one who just looks at average densities) won’t be able to do more than just read the underlying computational irreducibility as being like “simple randomness”. 
And this means that for such an observer it’s going to be reasonable to model the overall behavior by using mathematical concepts like statistical averaging, and—at least at the level of that observer—to describe the system as showing computationally reducible behavior represented, say, by the diffusion equation. It’s interesting to note that the emergence of something like diffusion depends on the presence of certain (identifiable) underlying constraints in the system—like conservation of the number of molecules. Without such constraints, the underlying computational irreducibility would lead to “pure randomness”—and no recognizable larger-scale structure. And in the end it’s the interplay of identifiable underlying constraints with identifiable features of the observer that leads to identifiable emergent computational reducibility. And it’s very much the same kind of thing with multicomputational systems—except that the “identifiable constraints” are much more abstract ones having to do with the fundamental structure of multicomputation. But much as we can say that the detailed computationally irreducible behavior of underlying molecules leads to things like large-scale fluid mechanics at the level of practical (“coarse-grained”) observers, so also we can say that the detailed computationally irreducible behavior of the hypergraph that represents space leads to the large-scale structure of space, and things like Einstein’s equations. And the important point is that because the “constraints” in multicomputational systems are generic features of the basic abstract structure of multicomputation, the emergent laws like Einstein’s equations can also be expected to be generic, and to apply with appropriate translation to all multicomputational systems perceived by observers that operate at least somewhat like the way we operate in perceiving the physical universe. 
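The statistical-mechanics story above is easy to see in a minimal simulation (a sketch, with arbitrary parameters): each “molecule” takes steps that look like pure randomness, but a coarse-grained observer who tracks only the variance of the distribution sees the clean linear growth characteristic of the diffusion equation.

```python
import random

random.seed(0)  # arbitrary seed; any run shows the same qualitative behavior

def spread(n_walkers, n_steps):
    # Each walker takes +-1 steps; microscopically this looks random,
    # but the coarse-grained variance grows like 2*D*t (here D = 1/2).
    positions = [0] * n_walkers
    variances = []
    for _ in range(n_steps):
        positions = [x + random.choice((-1, 1)) for x in positions]
        mean = sum(positions) / n_walkers
        variances.append(sum((x - mean) ** 2 for x in positions) / n_walkers)
    return variances

var = spread(20000, 100)
# Diffusive behavior: var after t steps is close to t (var[9] ~ 10, var[99] ~ 100)
```

Note the role of the conservation constraint mentioned above: the number of walkers is fixed, which is exactly what lets the “density” have a simple large-scale law at all.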
Any system in which the same rules get applied many times must have a certain ultimate uniformity to its behavior, manifest, for example, in the “same laws” applying “all over the system”. And that’s why, for example, we’re not surprised that physical space seems to work the same throughout the physical universe. But given this uniformity, how do there come to be any identifiable features or “places” in the universe, or, for that matter, in other kinds of systems that are constructed in similar ways?

One possibility is just that the observer can choose to name things: “I’ll call this token ‘Tokie’ and then I’ll trace what happens, and describe the behavior of the universe in terms of the ‘adventures of Tokie’”. But as such, this approach will inevitably be quite limited. Because a feature of multicomputational systems is that events are continually happening, consuming existing tokens and creating new ones. In physics terms, there is nothing fundamentally constant in the universe: everything in it (including space itself) is being continually recreated.

So how come we have any perception of permanence in physics? The answer is that even though individual tokens are continually being created and destroyed, there are overall patterns that are persistent. Much like vortices in a fluid, there can for example be essentially topological phenomena whose overall structure is preserved even though their specific component parts are continually changed. In physics, those “topological phenomena” presumably correspond to things like elementary particles, with all their various elaborate symmetries. And it’s not clear how much of this structure will carry over to other multicomputational systems, but we can expect that there will be some kinds of persistent “objects”—corresponding to certain pockets of local computational reducibility.

An important idea in physics is the concept of “pure motion”: that “objects” can “move around in space” and somehow maintain their character.
And once again the possibility of this depends on the observer, and on what it means that their “character is maintained”. But we can expect that as soon as there is a concept of space in a multicomputational system there will also be a concept of motion. What can we say about motion? In physics, we can discuss how it will be perceived in different reference frames—and for example we define inertial frames that explore space and time differently precisely so as to “cancel out motion”. This leads to phenomena like time dilation, which we can view as a reflection of the fact that if an object is “using its computational resources to move in space” then it has less to devote to its evolution in time—so it will “evolve less in a certain time” than if it wasn’t moving. So if we can identify things like motion (and, to make it as simple as possible, things like inertial frames) in any multicomputational system, we can expect to see phenomena like time dilation—though potentially translated into quite different terms. What about phenomena like gravity? In physics, energy (and mass) act as a “source of gravity”. But in our models of physics, energy has a rather simple (and generic) interpretation: it is effectively just the “density of activity” in the multicomputational system—or the number of events in a certain “region of space”. Imagine that we pick a token in a multicomputational system. One question we can ask is: what is the shortest path through the token-event graph to get to some specific other token? There’ll be a “light cone” that defines “how far in space” we can get in a certain time. But in general in physics terms we can view the shortest path as defining a spacetime geodesic. And now there’s a crucial—but essentially structural—fact: the presence of activity in the token-event graph inevitably effectively “deflects” the geodesic. 
And at least with the particular setup of our Physics Project it appears that that deflection can be described (in some appropriate limit) by Einstein’s equations, or in other words, that our system shows the phenomenon of gravity. And once again, we can expect that—assuming there is any similar kind of notion of space, or similar character of the observer—a phenomenon like gravity will also show up in other multicomputational systems. Once we have gravity, what about phenomena like black holes? The concept of an event horizon is immediately something quite generic: it is just associated with disconnection in the causal graph, which can potentially occur in basically any multicomputational system. What about a spacetime singularity? In the most familiar kind of singularity in physics (a “spacelike singularity” of the kind that appears at the center of a non-rotating black hole spacetime), what fundamentally happens is that there is a piece of the token-event graph to which no rules apply—so that in essence “time ends” there. And once again, we can expect that this will be a generic phenomenon in multicomputational systems. But there’s more to say about this. In general relativity, the singularity theorems say that when there’s “enough energy or mass” it’s inevitable that a singularity will be formed. And we can expect that the same kind of thing will happen in any multicomputational system, though potentially it’ll be interpreted in very different terms. (By the way, the singularity theorems implicitly depend on assumptions about the observer and about what “states of the universe” they can prepare, and these may be different for other kinds of multicomputational systems.) It’s worth mentioning that when it comes to singularities, there’s a computational characterization that may be more familiar than the physics one (not least since, after all, we don’t have direct experience of black holes). 
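By the way, the causal-graph disconnection associated with event horizons really is something one can test completely generically. A hypothetical sketch (the token names are made up for illustration): call two tokens “causally disconnected” when their futures in the token-event graph never intersect.

```python
def descendants(edges, start):
    # All tokens causally reachable from `start` (its "future"), including start
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    seen, stack = {start}, [start]
    while stack:
        for nb in adj.get(stack.pop(), []):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

def causally_disconnected(edges, t1, t2):
    # Event-horizon analog: the futures of t1 and t2 never meet
    return not (descendants(edges, t1) & descendants(edges, t2))

# Two branches that never recombine; a "horizon" lies between them
edges = [("root", "a"), ("root", "b"), ("a", "a1"), ("b", "b1")]
# causally_disconnected(edges, "a", "b") is True
# causally_disconnected(edges, "root", "a") is False
```

Nothing here refers to physics as such: the test applies to the causal graph of any multicomputational system.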
We can think of the progress of a multicomputational system through time as being like a process of evaluation in which rules are repeatedly applied to transform whatever “input” was given. In physics, the familiar case is that this process just keeps going forever. But in practical computing, the more familiar case is that it eventually reaches a fixed point representing the “result of the computation”. And this fixed point is the direct analog of a “time ends” singularity in physics.

When we have a large multicomputational system we can expect that—like in physics—it will seem (at least to appropriate observers, etc.) like an approximate continuum of some kind. And then it’s essentially inevitable that there will be a whole collection of rather general “local” statements about the behavior that can be made. But what if we look at the multicomputational system as a whole? This is the analog of studying cosmology in physics. And many of the same concepts can be expected to apply, with, for example, the initial conditions for the multicomputational system playing the role of the Big Bang in physics.

In the history of physics over the past century or so three great theoretical frameworks have emerged: statistical mechanics, general relativity and quantum mechanics. And when we look at multicomputational systems we can expect to get intuition—and results—from all three of these.

So what about quantum mechanics? As I mentioned above, quantum mechanics is—in our model of physics—basically just like general relativity, except played out not in ordinary physical space, but instead in branchial space. And in many ways, branchial space is a more immediate kind of space to appear in multicomputational systems than physical space. But unlike physical space, it’s not something about which we have everyday experience, and instead to think about it we tend to have to rely on the somewhat elaborate traditional formalism of quantum mechanics.
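In multicomputational terms, though, branchial space has a very concrete combinatorial rudiment. Here is a minimal sketch (a toy, not the Project’s actual code), assuming one simple version of the construction: within a time slice, join tokens that were produced from a common immediate ancestor.

```python
from itertools import combinations

def branchial_edges(events, slice_tokens):
    # events: (input_token, output_token) pairs from one step of evolution.
    # Tokens in the slice that share an immediate common ancestor get an
    # edge: a simple version of a branchial graph.
    parents = {}
    for inp, out in events:
        parents.setdefault(out, set()).add(inp)
    edges = set()
    for a, b in combinations(sorted(slice_tokens), 2):
        if parents.get(a, set()) & parents.get(b, set()):
            edges.add((a, b))
    return edges

# "A" can rewrite to either "AB" or "BA"; the two results are
# "branchially adjacent" because they share the ancestor "A".
events = [("A", "AB"), ("A", "BA")]
edges = branchial_edges(events, ["AB", "BA"])   # {("AB", "BA")}
```

Distances in this graph then give a notion of “how far apart” two branches of history are, which is the starting point for coordinatizing branchial space at all.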
A key question about branchial space both in physics and in other multicomputational systems is how it can be coordinatized (and, yes, that’s inevitably a question about observers). In general the issue of how to put meaningful “numerical” coordinates on a very “non-numerical space” (where the “points of the space” are for example tokens corresponding to strings or hyperedges or whatever) is a difficult one. But the formalism of quantum mechanics suggests, for example, thinking in terms of complex numbers and phases.

The spaces that arise in multicomputational systems can be very complicated, but it’s rather typical that they can be thought of as somehow “curved”, so that, for example, “parallel” lines (i.e. geodesics) don’t stay a fixed distance apart, and “squares” drawn out of geodesics don’t close. And in our model of physics, this kind of phenomenon not only yields gravity in physical space, but also yields things like the uncertainty principle when applied to branchial space.

We might at first have imagined that a theory of physics would be specific to physics. But as soon as we imagine that physics is multicomputational, that fact alone leads to a robust and inexorable structure that should appear in any other multicomputational system. It may be complicated to know quite what the detailed translations and interpretations are for other multicomputational systems. But we can expect that the core phenomena we’ve identified in physics will somehow be reflected there. So that through the common thread of multicomputation we can leverage the tower of successes in physics to shed light on all sorts of systems in all sorts of fields.