# Field Autopoiesis: from IVANHOE to 'Patacriticism

Let's recapitulate the differences between book markup and TEI markup. TEI defines itself as a two-dimensional generative space mapped as (1) a set of defined "content objects" (2) organized within a nested tree structure. The formality is clearly derived from an elementary structuralist model of language (a vocabulary + a syntax, or a semantic + a syntagmatic dimension). In the SGML/TEI extrusion, both dimensions are fixed and their relation to each other is defined as arbitrary rather than co-dependent. The output of such a system is thus necessarily symmetrical with the input (cf. Curie's principle of causes and effects).

Input and output in a field of traditional textuality work differently. Even in quite restricted views, as we know, the operations of natural language and communicative exchange generate incommensurable effects. The operations exhibit behavior that topologists track as bifurcation or even generalized catastrophe, whereby an initial set of structural stabilities produces morphogenetic behaviors and conditions that are unpredictable.3 This essential feature of "natural language" – which is to say, of the discourse fields of communicative exchange – is what makes it so powerful, on one hand, and so difficult to model and formalize, on the other.

In these circumstances, models like TEI commend themselves to us because they can be classically quantified for empirical – numerable – results. But as Thom observed long ago, there is no such thing as "a quantitative theory of catastrophes of a dynamical system" like natural language. To achieve such a theory, he went on to say, "it would be necessary to have a good theory of integration on function spaces" (Thom 1975: 321), something that Thom could not conceive. That limitation of qualitative mathematical models did not prevent Thom from vigorously recommending their study and exploration.
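The two-dimensional model just described – a fixed vocabulary of content objects arranged within a nested tree, whose output is necessarily symmetrical with its input – can be sketched in a few lines of Python. The element names and document here are illustrative only, not TEI-conformant markup:

```python
# A minimal sketch (not TEI-conformant) of TEI's two dimensions:
# a closed vocabulary of "content objects" and a nested tree structure.
import xml.etree.ElementTree as ET

# Dimension 1: the defined content objects (a fixed vocabulary).
CONTENT_OBJECTS = {"text", "body", "div", "p"}

doc = "<text><body><div><p>A page of typographic text.</p></div></body></text>"

# Dimension 2: the nested tree structure.
tree = ET.fromstring(doc)

# Every element must come from the fixed vocabulary.
assert all(el.tag in CONTENT_OBJECTS for el in tree.iter())

# The output of such a system is symmetrical with the input:
# parsing and re-serializing returns exactly what went in.
assert ET.tostring(tree, encoding="unicode") == doc
```

Because both dimensions are fixed in advance, the system can only ever return what was put into it – which is precisely the property that distinguishes it from an autopoietic field.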
He particularly criticized the widespread scientific habit of "tak[ing] the main divisions of science, the[ir] taxonomy… as given a priori" rather than trying to re-theorize taxonomies as such (1975: 322). In this frame of reference we can see (1) that textualization in print technology is a qualitative (rather than a taxonomic) function of natural language, and (2) that textualization integrates function spaces through demonstrations and enactments rather than descriptions. This crucial understanding – that print textuality is not language but an operational (praxis-based) theory of language – has stared us in the face for a long time, but seeing we have not seen. It has taken the emergence of electronic textualities, and in particular operational theories of natural language like TEI, to expose the deeper truth about print and manuscript texts.

SGML and its derivatives freeze (rather than integrate) the function spaces of discourse fields by reducing the field components to abstract forms – what Coleridge called "fixities and definites." This approach will serve when the object is to mark textual fields for storage and access. Integration of dynamic functions will not emerge through such abstract reductions, however. To develop an effective model of an autopoietic system requires an analysis that is built and executed "in the same spirit that the author writ." That formulation by Alexander Pope expresses, in an older dialect, what we have called in this century "the uncertainty principle," or the co-dependent relation between measurements and phenomena. An agent defines and interprets a system from within the system itself – at what Dante Gabriel Rossetti called "an inner standing point." What we call "scientific objectivity" is in one sense a mathematical function; in another, it is a useful method for controlling variables. We use it when we study texts as if they were objective things rather than dynamic autopoietic fields.
Traditional textual conditions facilitate textual study at an inner standing point because all the activities can be carried out – can be represented – in the same field space, typically in a bibliographical field. Subject and object meet and interact in the same dimensional space – a situation that gets reified for us when we read books or write about them.

Digital operations, however, introduce a new and more abstract space of relations into the study-field of textuality. This abstract space brings the possibility of new and in certain respects greater analytic power to the study of traditional texts. On the downside, however, digitization – at least to date, and typically – situates the critical agent outside the field to be mapped and re-displayed. Or – to put this crucial point more precisely (since no measurement has anything more than a relative condition of objectivity) – digitization situates the critical agent within levels of the textual field's dimensionalities that are difficult to formalize bibliographically.

To exploit the power of those new formalizations, a digital environment has to expose its subjective status and operation. (Like all scientific formalities, digital procedures are "objective" only in relative terms.) In the present case – the digital marking of textual fields – this means that we will want to build tools that foreground the subjectivity of any measurements that are taken and displayed. Only in this way will the autopoietic character of the textual field be accurately realized. The great gain that comes with such a tool is the ability to specify – to measure, display, and eventually to compute and transform – an autopoietic structure at what would be, in effect, quantum levels. A series of related projects to develop such tools is under way at the University of Virginia's Speculative Computing Laboratory (Speclab).
The first of these, IVANHOE, is an online gamespace being built for the imaginative reconstruction of traditional texts and discourse fields. Players enter these works through a digital display space that encourages them to alter and transform the textual field. The game rules require that transformations be made as part of a discourse field that emerges dynamically through the changes made to a specified initial set of materials.4

As the IVANHOE project was going forward, a second, related project called Time Modelling was being taken up by Bethany Nowviskie and Johanna Drucker. The project was begun "to bring visualization and interface design into the early content modeling phase" of projects like IVANHOE, which pursue interpretation through transformational and even deformative interactions with the primary data. IVANHOE's computer is designed to store the game players' performative interpretational moves and then produce algorithmically generated analyses of the moves after the fact. The chief critical function thus emerges after the fact, in a set of human reflections on the differential patterns that the computerized analyses expose.

In the Time Modelling device, however, the performative and the critical actions are much more closely integrated because the human is actively involved in a deliberated set of digital transformations. The Time Modelling device gives users a set of design functions for reconstructing a given lineated timeline of events in terms that are subjective and hypothetical. The specified field of event-related data is brought forward for transformation through editing and display mechanisms that emphasize the malleability of the initial set of field relations. The project stands, conceptually, somewhere between design programs (with their sets of tools for making things) and complex websites like The Rossetti Archive (with their hypertextual datasets organized for on-the-fly search and analysis).
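IVANHOE's division of labor – performative moves recorded during play, algorithmic analyses of their differential patterns generated only after the fact – might be sketched as follows. All names here are hypothetical illustrations, not the project's actual code:

```python
# A hypothetical sketch of IVANHOE's two-phase design: the game simply
# records each player's performative move against the discourse field;
# analysis is generated afterward, from the accumulated move log.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Move:
    player: str
    action: str   # e.g. "alter", "add", "link"
    target: str   # the textual element transformed

@dataclass
class GameSession:
    moves: list = field(default_factory=list)

    def play(self, move: Move) -> None:
        self.moves.append(move)  # performative phase: record, don't judge

    def analyze(self) -> dict:
        # after-the-fact phase: expose differential patterns per player
        return {p: Counter(m.action for m in self.moves if m.player == p)
                for p in {m.player for m in self.moves}}

session = GameSession()
session.play(Move("A", "alter", "stanza 1"))
session.play(Move("B", "link", "stanza 1"))
session.play(Move("A", "alter", "stanza 2"))
print(session.analyze()["A"])  # Counter({'alter': 2})
```

The point of the sketch is the separation: the critical function lives entirely in `analyze`, which reflects on the move log only after the performative phase is complete.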
It is a set of editing and display tools that allows users to design their own hypothetical (re)formulations of a given dataset. The frankly experimental character of Time Modelling's data (re)constructions has led to an important reimagining of the original IVANHOE project. From the outset of that project we intended to situate the "interpreter" within the discourse field that was the subject of interpretive transformation. Our initial conception was toward what we called "Ultimate IVANHOE," that is, toward a playspace that would be controlled by emergent-consciousness software. With the computer an active agent in an IVANHOE session, players could measure and compare their own understandings of their actions against a set of computer-generated views. This prospect for IVANHOE's development remains, but the example of Time Modelling exposed another way to situate the human interpreter at an inner standing point of an autopoietic system.

If 'Pataphysics is, in the words of its originator, "the science of exceptions," the project here is to reconceive IVANHOE under the rubric of 'Patacriticism, or the theory of subjective interpretation. The theory is implemented through what is here called the dementianal method, a procedure for marking the autopoietic features of textual fields. The method works on the assumption that such features characterize what topologists call a field of general catastrophe. The dementianal method marks the dynamic changes in autopoietic fields much as Thom's topological models allow one to map forms of catastrophic behavior. The 'Patacritical model differs from Thom's models because the measurements of the autopoietic field's behaviors are generated from within the field itself, which only emerges as a field through the action of the person interpreting – that is to say, marking and displaying – the field's elements and sets of relations. The field arises co-dependently with the acts that mark and measure it.
In this respect we wish to characterize its structure as dementianal rather than dimensional. As the device is presently conceived, readers engage autopoietic fields along three behavior dementians: transaction, connection, resonance.

A common transaction of a page space moves diagonally down the page, with regular deviations for horizontal line transactions from left to right margin, from the top or upper left to the bottom at lower right. Readers regularly violate that pattern in indefinite numbers of ways, often being called to deviance by how the field appears marked by earlier agencies. Connections assume, in the same way, multiple forms. Indeed, the primal act of autopoietic connection is the identification or location of a textual element to be "read." In this sense, the transaction of an autopoietic field is a function of the marking of connections of various kinds, on one hand, and of resonances on the other. Resonances are signals that call attention to a textual element as having a field value – a potential for connectivity – that appears as yet unrealized.

Note that each of these behavior dementians exhibits co-dependent relations. The field is transacted as connections and resonances are marked; the connections and resonances are continually emergent functions of each other; and the marking of dementians immediately reorders the space of the field, which itself keeps re-emerging under the sign of the marked alteration of the dynamic fieldspace and its various elements. These behavioral dementians locate an autopoietic syntax, which is based in an elementary act or agenting event: G. Spencer Brown's "law of calling," which declares that a distinction can be made. From that law comes the possibility that elements and identities can be defined. They emerge with the co-dependent emergence of the textual field's control dimensions, which are the field's autopoietic semantics. (For further discussion of these matters see below, Appendix A and Appendix B.)
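The co-dependence of the three behavior dementians – every act of marking immediately producing a new state of the field, and marked elements calling the reader to deviance from the default transaction – can be illustrated schematically. All names in this sketch are hypothetical, chosen only to mirror the terms above:

```python
# An illustrative sketch (all names hypothetical) of the three behavior
# dementians. Marking a connection or resonance never mutates the field
# in place: each marking returns a new field state, so the field
# co-emerges with the acts that mark and measure it.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Field:
    elements: tuple                       # textual elements in page order
    connections: frozenset = frozenset()  # marked links between elements
    resonances: frozenset = frozenset()   # elements flagged as unrealized

    def mark_connection(self, a, b):
        # each marking re-orders the field: a new field state emerges
        return replace(self, connections=self.connections | {(a, b)})

    def mark_resonance(self, e):
        # a resonance flags an element's unrealized potential for connection
        return replace(self, resonances=self.resonances | {e})

    def transact(self):
        # the default transaction runs top to bottom, but marked elements
        # call the reader to deviance: resonant elements come forward first
        return sorted(self.elements,
                      key=lambda e: (e not in self.resonances,
                                     self.elements.index(e)))

f0 = Field(elements=("line 1", "line 2", "line 3"))
f1 = f0.mark_resonance("line 3").mark_connection("line 1", "line 3")
print(f1.transact())  # ['line 3', 'line 1', 'line 2']
```

The immutable design is the point of the analogy: `f0` and `f1` are different fields, because the marking of a dementian does not annotate a stable object but brings a reordered fieldspace into being.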