Cybernetical physics is a scientific area on the border of cybernetics and physics which studies physical systems with cybernetical methods. Cybernetical methods are understood as methods developed within control theory, information theory, systems theory and related areas: control design, estimation, identification, optimization, pattern recognition, signal processing, image processing, etc. Physical systems are also understood in a broad sense: they may be of lifeless or living nature, or of artificial (engineering) origin, and they must have reasonably well understood dynamics and models suitable for posing cybernetical problems. Research objectives in cybernetical physics are frequently formulated as the analysis of a class of possible changes of the system state under external (controlling) actions of a certain class. An auxiliary goal is to design the controlling actions required to achieve a prespecified change of a property. Typical classes of control actions include functions which are constant in time (bifurcation analysis, optimization), functions which depend only on time (vibrational mechanics, spectroscopic studies, program control), and functions whose value depends on measurements made at the same time or at previous time instants. The last class is of special interest, since these functions correspond to the analysis of a system by means of external feedback (feedback control).
Until recently there was no creative interaction between physics and control theory (cybernetics), and no control-theoretic methods were directly used for discovering new physical effects and phenomena. The situation changed dramatically in the 1990s, when two new areas emerged: control of chaos and quantum control.
In 1990 a paper [1] was published in Physical Review Letters by Edward Ott, Celso Grebogi and James Yorke from the University of Maryland, reporting that even a small feedback action can dramatically change the behavior of a nonlinear system, e.g., turn chaotic motions into periodic ones and vice versa. The idea almost immediately became popular in the physics community, and since 1990 hundreds of papers have been published demonstrating the ability of small control, with or without feedback, to significantly change the dynamics of real or model systems. By 2003 the paper by Ott, Grebogi and Yorke[1] had been cited over 1,300 times, while the total number of papers relating to control of chaos exceeded 4,000 by the beginning of the 21st century, with 300-400 papers per year appearing in peer-reviewed journals. The method proposed in [1] is now called the OGY method, after the authors' initials.
Later, a number of other methods were proposed for transforming chaotic trajectories into periodic ones, for example delayed feedback (the Pyragas method).[2] Numerous nonlinear and adaptive control methods have also been applied to the control of chaos; see the surveys.[3][4][5][6]
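The delayed feedback idea is simple enough to sketch in a few lines of code. The following Python fragment applies a Pyragas-type control u(t) = K[y(t − τ) − y(t)] to the Rössler system; the choice of the y-equation as the control input, the gain, the delay and all other parameter values are illustrative assumptions, not values taken from the references above.

```python
import numpy as np

# Pyragas (time-delayed) feedback u(t) = K * (y(t - tau) - y(t)) added to the
# y-equation of the Rossler system.  All parameter values are illustrative.
a, b, c = 0.2, 0.2, 5.7        # standard Rossler parameters
K, tau = 0.2, 5.9              # feedback gain and delay (assumed values)
dt, T = 0.01, 500.0            # integration step and total time

n_steps = int(T / dt)
n_delay = int(tau / dt)        # delay expressed in integration steps
x, y, z = 1.0, 1.0, 1.0
y_hist = np.zeros(n_delay)     # circular buffer holding past values of y
u_log = np.zeros(n_steps)

for k in range(n_steps):
    y_delayed = y_hist[k % n_delay] if k >= n_delay else y   # no control before t = tau
    u = K * (y_delayed - y)                                   # Pyragas control signal
    dx, dy, dz = -y - z, x + a * y + u, b + z * (x - c)       # controlled Rossler equations
    y_hist[k % n_delay] = y                                   # remember y(t) for later use
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz           # explicit Euler step
    u_log[k] = u

# If an orbit of period close to tau is stabilized, the control decays to a
# small residual value in the steady state.
print("mean |u| over the last tenth of the run:", np.abs(u_log[-n_steps // 10:]).mean())
```

If the delay matches the period of an unstable periodic orbit and the gain is suitable, the orbit is followed while the control signal decays to a small residual value, which is exactly the "small control" property emphasized in this field.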
Importantly, the results obtained were interpreted as the discovery of new properties of physical systems. Thousands of papers have been published that examine and predict properties of systems based on the use of control, identification and other cybernetic methods. Notably, most of those papers were published in physics journals, and their authors were affiliated with university physics departments. It has become clear that such control goals are important not only for the control of chaos, but also for the control of a broader class of oscillatory processes. This provides evidence for the existence of an emerging field of research related to both physics and control, that of "cybernetical physics".[7][8]
Molecular physics was arguably the area where ideas of control first appeared. James Clerk Maxwell introduced a hypothetical being, known as Maxwell's demon, with the ability to measure the velocities of gas molecules in a vessel and to direct the fast molecules to one part of the vessel while keeping the slow molecules in another part. This produces a temperature difference between the two parts of the vessel, which seems to contradict the Second Law of Thermodynamics. Now, after more than a century of fruitful life, this demon is even more active than in the past. Recent papers have discussed issues relating to the experimental implementation of Maxwell's demon, particularly at the quantum-mechanical level.[9]
At the end of the 1970s the first mathematical results on the control of quantum-mechanical models, based on control theory, appeared.[10] At the end of the 1980s and the beginning of the 1990s, rapid developments in the laser industry led to the appearance of ultrafast, so-called femtosecond lasers. This new generation of lasers can generate pulses with durations of a few femtoseconds or even less (1 fs = [math]\displaystyle{ 10^{-15} }[/math] sec). The duration of such a pulse is comparable with the period of a molecule's natural oscillation; therefore, a femtosecond laser can, in principle, be used as a means of controlling single molecules and atoms. A consequence of such an application is the possibility of realizing the alchemists' dream of changing the natural course of chemical reactions. A new area of chemistry emerged, femtochemistry, and new femtotechnologies were developed. Ahmed Zewail of Caltech was awarded the 1999 Nobel Prize in Chemistry for his work on femtochemistry.
Using modern control theory, new horizons may open in the study of the interaction of atoms and molecules, and new ways and possible limits may be discovered for intervening in the intimate processes of the microworld. In addition, control is an important part of many recent nanoscale applications, including nanomotors, nanowires, nanochips and nanorobots. The number of related publications in peer-reviewed journals exceeds 600 per year.
The basics of thermodynamics were stated by Sadi Carnot in 1824. He considered a heat engine which operates by drawing heat from a source at thermal equilibrium at temperature [math]\displaystyle{ T_{hot} }[/math] and delivering useful work. Carnot saw that, in order to operate continuously, the engine also requires a cold reservoir at temperature [math]\displaystyle{ T_{cold} }[/math], to which some heat can be discharged. By simple logic he established the famous Carnot principle: "No heat engine can be more efficient than a reversible one operating between the same temperatures."
In fact it was nothing but the solution to an optimal control problem: maximum work can be extracted by a reversible machine and the value of extracted work depends only on the temperatures of the source and the bath. Later, Kelvin introduced his absolute temperature scale (Kelvin scale) and accomplished the next step, evaluating Carnot's reversible efficiency [math]\displaystyle{ \eta_{Carnot}=1-\frac{T_{cold}}{T_{hot}}. }[/math] However, most work was devoted to studying stationary systems over infinite time intervals, while for practical purposes it is important to know the possibilities and limitations of the system's evolution for finite times as well as under other types of constraints caused by a finite amount of available resources.
The pioneering work on evaluating finite-time limitations for heat engines was published by I. Novikov in 1957,[11] and independently by F.L. Curzon and B. Ahlborn in 1975:[12] the efficiency at maximum power per cycle of a heat engine coupled to its surroundings through a constant heat conductor is [math]\displaystyle{ \eta_{NCA}=1-\sqrt{\frac{T_{cold}}{T_{hot}}} }[/math] (the Novikov-Curzon-Ahlborn formula). The Novikov-Curzon-Ahlborn process is also optimal in the sense of minimal dissipation; conversely, if the dissipation degree is given, the process corresponds to the maximum entropy principle. Later, these results[11][12] were extended and generalized to other criteria and to more complex situations on the basis of modern optimal control theory. As a result, a new direction in thermodynamics arose, known under the names "optimization thermodynamics", "finite-time thermodynamics", "endoreversible thermodynamics" or "control thermodynamics".[13]
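The two efficiency bounds mentioned above are easy to compare numerically. The short Python sketch below evaluates the Carnot and Novikov-Curzon-Ahlborn efficiencies for illustrative reservoir temperatures, roughly those of a large steam power plant; the temperature values are assumptions chosen for the example.

```python
import math

# Comparing the reversible (Carnot) bound with the Novikov-Curzon-Ahlborn
# efficiency at maximum power.  The reservoir temperatures are illustrative.
def eta_carnot(t_cold, t_hot):
    """Carnot (reversible) efficiency: the upper bound for any heat engine."""
    return 1.0 - t_cold / t_hot

def eta_nca(t_cold, t_hot):
    """Novikov-Curzon-Ahlborn efficiency at maximum power output."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_hot, t_cold = 565.0 + 273.15, 25.0 + 273.15   # hot and cold reservoirs, in kelvin
print(f"Carnot efficiency:           {eta_carnot(t_cold, t_hot):.3f}")
print(f"Efficiency at maximum power: {eta_nca(t_cold, t_hot):.3f}")
```

For these temperatures the reversible bound is about 64%, while the efficiency at maximum power is about 40%, which is closer to the efficiencies typically reported for real plants.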
By the end of the 1990s it had become clear that a new area of physics dealing with control methods had emerged. The term "cybernetical physics" was proposed for it.[7][14] The subject and methodology of the field have since been presented systematically.[15][16]
A description of the control problems related to cybernetical physics includes classes of controlled plant models, control objectives (goals) and admissible control algorithms. The methodology of cybernetical physics comprises typical methods used for solving problems and typical results in the field.
A formal statement of any control problem begins with a model of the system to be controlled (the plant) and a model of the control objective (the goal). Even if the plant model is not given (as is the case in many real-world applications), it has to be determined in some way. The system models used in cybernetics are similar to traditional models of physics and mechanics, with one difference: the inputs and outputs of the model should be explicitly specified. The following main classes of models are considered in the literature on the control of physical systems: continuous systems with lumped parameters, described in state space by differential equations; distributed (spatio-temporal) systems, described by partial differential equations; and discrete-time state-space models, described by difference equations.
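As a concrete illustration of the first and third model classes, the hypothetical sketch below writes a damped pendulum as a lumped-parameter state-space model with an explicit control input u and measured output y, and obtains a discrete-time difference-equation model from it by forward Euler discretization. The plant and all parameter values are assumptions chosen for illustration only.

```python
import numpy as np

# A lumped-parameter plant written in state space with an explicit input u and
# output y: a damped pendulum (all names and parameter values are illustrative).
#   d(theta)/dt = omega
#   d(omega)/dt = -(g/l)*sin(theta) - rho*omega + u
#   y = theta
g, l, rho = 9.81, 1.0, 0.1

def f(x, u):
    """Right-hand side of the continuous model dx/dt = f(x, u)."""
    theta, omega = x
    return np.array([omega, -(g / l) * np.sin(theta) - rho * omega + u])

def step(x, u, dt=1e-3):
    """Discrete-time counterpart x[k+1] = x[k] + dt * f(x[k], u[k]) (forward Euler)."""
    return x + dt * f(x, u)

def output(x):
    """Measured output y = theta: the explicitly specified model output."""
    return x[0]

x = np.array([0.5, 0.0])          # initial state (angle, angular velocity)
for _ in range(1000):             # one second of free motion (zero input)
    x = step(x, 0.0)
print("output y after 1 s:", output(x))
```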
It is natural to classify control problems by their control goals. Five kinds are listed below.
Regulation (often called stabilization or positioning) is the most common and simple control goal. Regulation is understood as driving the state vector [math]\displaystyle{ x(t) }[/math] (or the output vector [math]\displaystyle{ y(t) }[/math]) to some equilibrium state [math]\displaystyle{ x* }[/math] (respectively, [math]\displaystyle{ y* }[/math]).
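A minimal sketch of a regulation problem, assuming the simplest possible plant (a double integrator x'' = u) and a linear state-feedback law; neither the plant nor the gains are taken from the article.

```python
# Regulation of a double integrator  x'' = u  to the equilibrium x* = 0 by the
# linear state feedback u = -k1*x - k2*v (all values are illustrative).
k1, k2, dt = 4.0, 2.0, 1e-3
x, v = 1.0, 0.0                     # initial position and velocity
for _ in range(int(10.0 / dt)):     # simulate 10 seconds
    u = -k1 * x - k2 * v            # feedback control action
    x, v = x + dt * v, v + dt * u   # explicit Euler step of the closed loop
print(f"state after 10 s: x = {x:.4f}, v = {v:.4f} (close to the equilibrium x* = 0)")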
Tracking. State tracking is driving a solution [math]\displaystyle{ x(t) }[/math] to a prespecified function of time [math]\displaystyle{ x*(t) }[/math]. Similarly, output tracking is driving the output [math]\displaystyle{ y(t) }[/math] to the desired output function [math]\displaystyle{ y*(t) }[/math]. The problem is more complex if the desired equilibrium [math]\displaystyle{ x* }[/math] or trajectory [math]\displaystyle{ x*(t) }[/math] is unstable in the absence of control action; for example, a typical problem of chaos control can be formulated as tracking an unstable periodic solution (orbit). The key feature of control problems for physical systems is that the goal should be achieved by means of a sufficiently small control. A limit case is stabilization of a system by an arbitrarily small control. The solvability of this task is not obvious if the trajectory [math]\displaystyle{ x*(t) }[/math] is unstable, as it is for chaotic systems.[1]
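Tracking can be sketched with the same toy plant: the controller below combines a feedforward term (the reference acceleration) with feedback on the tracking error to drive the state along x*(t) = sin t. The plant, the reference and the gains are illustrative assumptions.

```python
import math

# Tracking of the reference trajectory x*(t) = sin(t) for the toy plant
# x'' = u, using feedforward (the reference acceleration) plus error feedback.
k1, k2, dt = 4.0, 2.0, 1e-3
x, v = 0.5, 0.0                                        # start off the reference
for k in range(int(20.0 / dt)):
    t = k * dt
    x_ref, v_ref, a_ref = math.sin(t), math.cos(t), -math.sin(t)
    u = a_ref - k1 * (x - x_ref) - k2 * (v - v_ref)    # feedforward + feedback
    x, v = x + dt * v, v + dt * u
print(f"tracking error at t = 20 s: {abs(x - math.sin(20.0)):.2e}")
```

The remaining error is small and limited mainly by the crude Euler discretization used for the sketch.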
Generation (excitation) of oscillations. The third class of control goals corresponds to problems of "excitation" or "generation" of oscillations. Here, it is assumed that the system is initially at rest, and the problem is to find out whether it can be driven into an oscillatory mode with the desired characteristics (energy, frequency, etc.). In this case the goal trajectory of the state vector [math]\displaystyle{ x*(t) }[/math] is not prespecified; moreover, the goal trajectory may be unknown, or may even be irrelevant to the achievement of the control goal. Such problems are well known in electrical engineering, radio engineering, acoustics, laser and vibrational technologies, and indeed wherever it is necessary to create an oscillatory mode in a system. This class of control goals also covers problems of dissociation and ionization of molecular systems, escape from a potential well, chaotization, and other problems related to the growth of the system's energy and its possible phase transitions. Sometimes such problems can be reduced to tracking, but the reference trajectories [math]\displaystyle{ x*(t) }[/math] in these cases are not necessarily periodic and may be unstable. Moreover, the goal trajectory [math]\displaystyle{ x*(t) }[/math] may be known only partially.
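One common way to pose the excitation problem for a pendulum is energy control: regulate the total energy H to a prescribed level H* with a feedback of speed-gradient type, u = -γ(H − H*)ω. The sketch below is illustrative; the plant, the control law and all parameter values are assumptions rather than material from the article.

```python
import math

# Excitation of oscillations by energy control: the pendulum is swung up from a
# small deflection to a prescribed energy level H* by u = -gamma*(H - H*)*omega.
g, l = 9.81, 1.0
H_star = 1.5 * g / l                 # target energy: a large-amplitude oscillation
gamma, dt = 0.5, 1e-3

def energy(theta, omega):
    """Total (kinetic + potential) energy of the unit pendulum."""
    return 0.5 * omega ** 2 + (g / l) * (1.0 - math.cos(theta))

theta, omega = 0.05, 0.0             # start near the lower equilibrium
for _ in range(int(60.0 / dt)):
    u = -gamma * (energy(theta, omega) - H_star) * omega   # small energy-pumping control
    theta += dt * omega                                     # semi-implicit Euler step
    omega += dt * (-(g / l) * math.sin(theta) + u)
print(f"final energy: {energy(theta, omega):.3f} (target H* = {H_star:.3f})")
```

Note that no reference trajectory is prescribed here: only the energy level is specified, in line with the description of this class of goals.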
Synchronization. The fourth important class of control goals corresponds to synchronization (more precisely, "controlled synchronization" as distinct from "autosynchronization" or "self-synchronization"). Generally speaking, synchronization is understood as the concurrent change of the states of two or more systems or, perhaps, the concurrent change of some quantities related to the systems, e.g., equalization of oscillation frequencies. If the required relation is established only asymptotically, one speaks of "asymptotic synchronization". If synchronization does not exist in the system without control, the problem may be posed as finding a control function which ensures synchronization in the closed-loop system; that is, synchronization may itself be a control goal. The synchronization problem differs from the model reference control problem in that some phase shifts between the processes are allowed, provided they are constant or tend to constant values. In addition, in a number of synchronization problems the links between the systems to be synchronized are bidirectional; in such cases the limit mode (synchronous mode) of the overall system is not known in advance.
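A minimal sketch of controlled synchronization, assuming a simple master-slave setup that is not taken from the article: a second pendulum is driven by feedback on the state mismatch so that its motion asymptotically follows a freely swinging "master" pendulum.

```python
import math

# Controlled synchronization (illustrative): the slave pendulum is driven by
# u = Kp*(th1 - th2) + Kd*(w1 - w2) toward the state of the master pendulum.
# Gains and parameters are assumptions chosen for the example.
g, l = 9.81, 1.0
Kp, Kd, dt = 50.0, 10.0, 1e-3

th1, w1 = 1.0, 0.0        # master pendulum (uncontrolled)
th2, w2 = -0.5, 0.0       # slave pendulum (controlled)
for _ in range(int(20.0 / dt)):
    u = Kp * (th1 - th2) + Kd * (w1 - w2)          # synchronizing feedback
    th1 += dt * w1
    w1 += dt * (-(g / l) * math.sin(th1))
    th2 += dt * w2
    w2 += dt * (-(g / l) * math.sin(th2) + u)
print(f"state mismatch after 20 s: |th1 - th2| = {abs(th1 - th2):.2e}")
```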
Modification of the limit sets (attractors) of the systems. The last class of control goals is related to the modification of some quantitative characteristics of the limit behavior of the system. It includes such specific goals as changing the type of an equilibrium (e.g., from unstable to stable), changing the type of the limit set (e.g., transforming a limit cycle into a chaotic attractor, or vice versa), and changing the position or the size of the limit set in the state space.
Investigation of the above problems started at the end of the 1980s with work on bifurcation control and continued with work on the control of chaos. Ott, Grebogi and Yorke[1] and their followers introduced a new class of control goals that do not require any quantitative characteristic of the desired motion. Instead, the desired qualitative type of the limit set (attractor) is specified; e.g., the control should provide the system with a chaotic attractor. In addition, the desired degree of chaoticity may be specified via a Lyapunov exponent, fractal dimension, entropy, etc.[4][5]
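The "degree of chaoticity" can be made concrete with a standard textbook example, used here purely for illustration: the largest Lyapunov exponent of the logistic map, with a constant parameter change playing the role of the simplest control action that switches the attractor between periodic and chaotic regimes.

```python
import math

# Quantifying the "degree of chaoticity" of the logistic map
#   x[k+1] = r * x[k] * (1 - x[k])
# by its largest Lyapunov exponent, lambda = lim (1/N) sum ln|f'(x[k])| with
# f'(x) = r*(1 - 2x).  lambda < 0 indicates a periodic attractor, lambda > 0 a
# chaotic one.  The values of r below are illustrative.
def lyapunov_exponent(r, x0=0.3, n_transient=1_000, n_steps=100_000):
    x = x0
    for _ in range(n_transient):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_steps):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_steps

for r in (3.5, 3.8, 4.0):
    print(f"r = {r}: largest Lyapunov exponent ~ {lyapunov_exponent(r):.3f}")
```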
In addition to the main control goal, some additional goals or constraints may be specified. A typical example is the "small control" requirement: the control function should have little power or should require a small expenditure of energy. Such a restriction is needed to avoid "violence" and to preserve the inherent properties of the system under control; this is important for avoiding artefacts and for an adequate study of the system. Three types of control are used in physical problems: constant control, feedforward control and feedback control. Implementation of feedback control requires additional measurement devices working in real time, which are often hard to install. Therefore, the study of a system may start with the application of weaker forms of control: first time-constant and then feedforward control. The possibilities of changing the system behavior by means of feedback control can then be studied.
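The three forms of control action, and one way of quantifying the "small control" requirement, can be made concrete with a toy example. The hypothetical sketch below applies a constant, a feedforward and a feedback law to the same first-order plant and reports the control energy ∫u² dt spent by each; the plant, the laws and the gains are all illustrative assumptions.

```python
import math

# Three forms of control action applied to the same toy plant x' = -x + u,
# together with the control "energy" (integral of u^2) that can be used to
# express the "small control" requirement.  Everything here is illustrative.
def u_constant(t, y):
    return 0.1                          # time-constant action

def u_feedforward(t, y):
    return 0.1 * math.sin(t)            # depends on time only (program control)

def u_feedback(t, y):
    return -0.1 * y                     # depends on the measured output y

dt, T = 1e-3, 10.0
for law in (u_constant, u_feedforward, u_feedback):
    x, effort = 1.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u = law(t, x)
        effort += u * u * dt            # accumulate the control effort
        x += dt * (-x + u)              # simple stable plant x' = -x + u
    print(f"{law.__name__}: final x = {x:.3f}, control energy = {effort:.4f}")
```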
The methodology of cybernetical physics is based on control theory. Typically, some parameters of physical systems are unknown and some variables are not available for measurement. From the control viewpoint this means that control design should be performed under significant uncertainty, i.e., methods of robust control or adaptive control should be used. A variety of design methods have been developed by control theorists and control engineers for both linear and nonlinear systems. Methods of partial control, control by weak signals, etc. have also been developed.
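As an illustration of control design under parametric uncertainty, the sketch below uses a textbook adaptive stabilization scheme (a generic example, not taken from the cited works): the plant parameter is unknown to the controller, and the feedback gain is adapted online until the closed loop becomes stable.

```python
# Adaptive stabilization under parametric uncertainty (a textbook scheme used
# here for illustration only): the plant x' = a*x + u has an unknown, possibly
# destabilizing parameter a; the controller uses u = -k*x and raises the gain
# according to k' = gamma*x^2 until the closed loop becomes stable.
a_true = 2.0                     # unknown to the controller (the plant is unstable)
gamma, dt = 5.0, 1e-3
x, k = 1.0, 0.0
for _ in range(int(20.0 / dt)):
    u = -k * x                   # feedback with the currently adapted gain
    x += dt * (a_true * x + u)   # plant step (uses the unknown parameter)
    k += dt * gamma * x * x      # gain adaptation driven by the measured state
print(f"after 20 s: x = {x:.2e}, adapted gain k = {k:.2f} (exceeds a = {a_true})")
```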
Currently, interest in applying control methods in physics is still growing, and a number of research directions are being actively developed.[15][16]
Among the most important applications are the control of fusion, the control of beams, and control in nano- and femtotechnologies.
In order to facilitate information exchange in the area of cybernetical physics, the International Physics and Control Society (IPACS) was created. IPACS organizes regular conferences (the Physics and Control conferences) and supports an electronic library, the IPACS Electronic Library, and an information portal, Physics and Control Resources.