Linguistics

Languages, like all the other kinds of things we’ve discussed, are fundamentally dynamic constructs. To make a multicomputational model of them we can, for example, imagine every instance of every word (or even every conceivable word) being a token, with events being utterances that involve multiple words. The resulting token-event graph then defines relationships between instances of words, essentially through the context of their usage; a minimal sketch of this construction appears below.

Within any given time slice these relationships will imply a certain layout of word instances in what we can interpret as “meaning space”. There’s absolutely no guarantee that meaning space will be anything like a manifold, and I expect that, like most of the emergent spaces from the token-event graphs we’ve generated, it’ll be considerably more complicated to “coordinatize”. Still, the expectation is that instances of a word with a given sense will appear nearby, as will synonymous words, while different senses of a given word will appear as separate clusters.

In this setup, the time evolution of everything is based on there being a sequence of utterances that are effectively strung together: someone hears a given word in a certain utterance, then at some later point uses that word in another utterance. What utterances are possible? Essentially all “meaningful” ones. And, yes, this is really the nub of defining the language. As a rough approximation one could, for example, use some simple grammatical rule, in which case the possible events might themselves be determined by a multiway system. But the key point is that, as in physics, we may expect there to be global laws quite independent of the “microscopic details” of what precise utterances are possible, just as a consequence of the whole multicomputational structure.

What might these “global laws” of language be like? Maybe they’ll tell us things about how languages evolve and “speciate”, with event horizons forming in the meaning space of words. Maybe they’ll tell us slightly smaller-scale things about the splitting and merging of different meanings for a single word. Maybe there’ll be an analog of gravity, in which the “geodesic” associated with the etymological path of a word is “attracted” to a region of meaning space with large amounts of activity (or “energy”): in effect, if a concept is being talked about a lot, then the meanings of words will tend to drift toward it.

By the way, picking a “reference frame” for a language is presumably about picking which utterances one has effectively chosen to have heard by any given time, and thus which utterances one can use to build up one’s sense of the meanings of words at that time. And if the selection of utterances for the reference frame is sufficiently wild, one won’t get a coherent sense of meaning for the language as a whole, making the “emergence of meaning” something that ultimately comes down to what amount to human choices.
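To make the token-event picture concrete, here is a minimal sketch in Python. The data and all names are my own invention for illustration: each utterance is an event whose outputs are fresh word-instance tokens, and whose inputs are the most recent prior instances of the words being reused, i.e. the instances the speaker “heard” before uttering those words again.

```python
from collections import defaultdict

# Toy corpus: each utterance involves multiple words (illustrative only).
utterances = [
    ("the", "bank", "holds", "money"),
    ("the", "river", "bank", "floods"),
    ("money", "sits", "in", "the", "bank"),
]

latest = {}                # word -> its most recent token instance
counts = defaultdict(int)  # word -> number of instances created so far
events = []                # (inputs, outputs): token instances per utterance

for words in utterances:
    # Inputs: for each word reused here, the instance the speaker last "heard"
    # (dict.fromkeys dedupes words repeated within one utterance).
    inputs = [latest[w] for w in dict.fromkeys(words) if w in latest]
    outputs = []
    for w in words:
        counts[w] += 1
        tok = f"{w}#{counts[w]}"   # a fresh token for every instance of a word
        outputs.append(tok)
        latest[w] = tok
    events.append((inputs, outputs))

for i, (ins, outs) in enumerate(events):
    print(f"event {i}: {ins} -> {outs}")
```

Running this strings the three utterances together exactly as described in the text: for instance, “bank#1” heard in the first utterance becomes an input to the event that produces “bank#2”.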
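Given such a token-event graph, one crude way to read off a “meaning space” for a time slice is to use graph distance between word instances as a stand-in for their distance in meaning. A sketch, building on the `events` list above; with a real corpus rather than three toy utterances, the expectation is that instances sharing a sense would come out systematically closer than instances with different senses.

```python
from collections import defaultdict, deque
from itertools import combinations

# Relation graph for the current slice: two tokens are related when some
# event (utterance) involves both -- the "context of usage".
related = defaultdict(set)
for ins, outs in events:
    for a, b in combinations(ins + outs, 2):
        related[a].add(b)
        related[b].add(a)

def meaning_distance(a, b):
    """Graph distance between two word instances, via breadth-first search."""
    frontier, seen, d = deque([a]), {a}, 0
    while frontier:
        for _ in range(len(frontier)):
            tok = frontier.popleft()
            if tok == b:
                return d
            for nxt in related[tok]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        d += 1
    return None  # no shared context: the instances are disconnected

print(meaning_distance("bank#1", "bank#2"))  # 1: one event links them directly
print(meaning_distance("bank#1", "bank#3"))  # 2: linked only indirectly
```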
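For the idea that the possible utterances might themselves be determined by a multiway system derived from a simple grammatical rule, here is a sketch with a toy grammar of my own (purely illustrative): each multiway step applies every rewrite rule at every possible position, and the fully terminal states are the admissible utterances.

```python
# Toy rewrite grammar: nonterminals map to their possible expansions.
rules = {
    "S":  [("NP", "VP")],
    "NP": [("the", "N")],
    "VP": [("V", "NP")],
    "N":  [("bank",), ("river",)],
    "V":  [("holds",), ("floods",)],
}

def multiway_step(states):
    """All states reachable by one rewrite, applied at every position."""
    nxt = set()
    for state in states:
        for i, sym in enumerate(state):
            for rhs in rules.get(sym, []):
                nxt.add(state[:i] + rhs + state[i + 1:])
    return nxt

# Evolve the multiway system until every branch reaches a terminal state.
states = {("S",)}
complete = set()
while states:
    states = multiway_step(states)
    complete |= {s for s in states if all(sym not in rules for sym in s)}
    states -= complete

for s in sorted(complete):   # the 8 "possible utterances" of this toy language
    print(" ".join(s))
```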
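And for reference frames: one can model an “observer” as a choice of which utterances have been heard by a given time. A sketch reusing the `events` list from the first snippet (the `frame` helper is hypothetical): two observers who sample different subsets of the same events build different relation graphs, and hence different senses of what a word like “bank” means.

```python
from collections import defaultdict
from itertools import combinations

def frame(events, heard):
    """Relation graph built only from the utterances this observer has heard."""
    g = defaultdict(set)
    for i in heard:
        ins, outs = events[i]
        for a, b in combinations(ins + outs, 2):
            g[a].add(b)
            g[b].add(a)
    return g

observer_a = frame(events, heard=[0, 2])  # hears the "financial" contexts
observer_b = frame(events, heard=[1])     # hears only the "river" context
print(sorted(observer_a["bank#1"]))       # bank near money, holds, ...
print(sorted(observer_b["bank#2"]))       # bank near river, floods, ...
```

A sufficiently wild choice of `heard` (say, scattered fragments with no shared tokens) leaves the relation graph disconnected, which is one way to picture failing to get a coherent sense of meaning for the language as a whole.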