Problems of Control

Even when sophisticated information-processing techniques are brought to bear, however, many problems stubbornly resist solution. The early promises of cybernetics, and more recently of artificial intelligence, have proved harder to fulfil than expected. Some problems in pattern recognition and robotics appear to be genuinely difficult, even though humans solve such problems every day. Such problems seem to exhibit an intrinsic complexity, one that any process for finding their solutions must share.

In addition, though information processing has become so ubiquitous a part of design and control that the humblest of kitchen appliances seems to contain a microchip, success inevitably gives rise to new possibilities for failure. Highly leveraged option traders, when their hedged bets go bad, fail spectacularly. Engines regulated by microprocessors may be efficient and reliable, but they are hard to fix when they break down.

In this paper we suggest a systematic treatment of how complex adaptive systems face and solve problems of control. We provide a unified framework for the analysis of systems that get information, incorporate that information in models of their surroundings, and make decisions on the basis of those models. This framework is designed both to support what Simon calls blueprints (descriptions of state) and recipes (prescriptions for action) and to provide for their interaction: in our models, adaptation is the co-adaptation of blueprints and recipes. Systems learn about their environment by attempting to control it, and they modify their representation of the environment as a function of the results of those attempts at control. These methods are relevant to any system that processes information in order to adapt, including biological systems undergoing natural selection.
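The co-adaptation of blueprints and recipes might be sketched as a toy scalar example (a hypothetical illustration, not code from the paper): the plant parameter `a_true`, the gain `gamma`, and the gradient update rule are all assumptions chosen for the sketch. The "blueprint" is the model estimate `a_hat`; the "recipe" is the control law built from it, and each attempt at control feeds back into the model.

```python
# Toy sketch (assumed, not from the paper): a system learning about its
# environment by attempting to control it.  Plant: x[t+1] = a*x[t] + u[t],
# with a unknown to the controller.

a_true = 0.9          # unknown environment parameter
a_hat = 0.0           # blueprint: the system's current model of a
gamma = 0.5           # adaptation gain (assumed)
x = 1.0               # plant state

for t in range(200):
    u = -a_hat * x                  # recipe: control based on the model
    x_next = a_true * x + u         # environment responds
    x_pred = a_hat * x + u          # model's prediction of the response
    a_hat += gamma * (x_next - x_pred) * x   # co-adapt the blueprint
    x = x_next
```

Note the characteristic behavior of such loops: the state is driven to zero, but the parameter estimate need not converge to `a_true` once the state stops exciting the system (the "persistent excitation" issue in adaptive control).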

~

LLOYD, Seth and SLOTINE, Jean-Jacques E., 1996. Information theoretic tools for stable adaptation and learning. International Journal of Adaptive Control and Signal Processing. July 1996. Vol. 10, no. 4–5, p. 499–530.

Lyapunov design has never been systematic. In the adaptive control of complex multi-input non-linear systems, physical considerations, such as conservation of energy or entropy increase, represent one of the major tools in building Lyapunov-like functions and providing stability and performance guarantees. In this paper we show that a physically motivated Lyapunov-like function based on the concept of total information can be derived for large classes of non-linear physical systems. We study how this function may be used for designing estimation, adaptation and learning mechanisms for such systems.
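The paper's Lyapunov-like function is built from total information; for contrast, the classical energy-style construction it generalizes can be sketched for a scalar plant (a standard textbook derivation, not the paper's):

```latex
% Classical Lyapunov-based adaptation for \dot{x} = a x + u with a unknown
% (textbook construction, not the paper's information-based one).
\begin{align*}
  u &= -\hat{a}\,x - k\,x, \qquad k > 0, \qquad \tilde{a} = a - \hat{a},\\
  \dot{x} &= -k\,x + \tilde{a}\,x,\\
  V &= \tfrac{1}{2}x^{2} + \tfrac{1}{2\gamma}\tilde{a}^{2},\\
  \dot{V} &= -k\,x^{2} + \tilde{a}\,x^{2}
            - \tfrac{1}{\gamma}\,\tilde{a}\,\dot{\hat{a}}
         = -k\,x^{2} \le 0
  \quad \text{with the update law } \dot{\hat{a}} = \gamma\,x^{2}.
\end{align*}
```

Choosing the update law to cancel the indefinite cross term is exactly the non-systematic step the paper refers to; its contribution is a physically motivated candidate for $V$ that works for large classes of non-linear systems.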

In the process we revisit familiar notions such as controllability and observability from an information perspective, which in turn allows us to define ‘natural’ space-time scales at which to observe and control a given complex system. By formulating control problems in algorithmic form, we emphasize the importance of computability and computational complexity for issues of control. Generic control problems are shown to be NP-hard: each additional complication, such as the presence of noise or the absence of complete system identification, moves the control problem further up the polynomial hierarchy of computational complexity. In some cases, requirements of ‘optimality’ may be unrealistic or irrelevant, since the problem of finding the optimal control algorithm is uncomputable.
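For reference, the classical (non-information-theoretic) notion of controllability that the paper revisits is a rank condition: a linear system x[t+1] = A x[t] + B u[t] is controllable iff the Kalman matrix [B, AB, ..., A^(n-1)B] has full rank. A minimal sketch with illustrative matrices (the double-integrator example is an assumption, not from the paper):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ...] up to n-1 powers of A."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative system: a discretized double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C))   # → 2: fully controllable
```

Observability is the dual test on (A.T, C.T). The paper's point is that this yes/no algebraic test says nothing about *how much* information observation yields or control injects, which is what the information-theoretic reformulation quantifies.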