Control theory

 

  • [12] Likewise, “A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these
    variables and using the difference as a means of control.”

  • These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for
    system response and design techniques for most systems of interest.

  • [11] The definition of a closed loop control system according to the British Standards Institution is “a control system possessing monitoring feedback, the deviation signal
    formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero.”

  • Closed-loop controllers have the following advantages over open-loop controllers:
    • disturbance rejection (such as hills in the cruise control example above)
    • guaranteed performance even with model uncertainties, when the model structure does not match the real process perfectly and the model parameters are not exact
    • unstable processes can be stabilized
    • reduced sensitivity to parameter variations
    • improved reference tracking performance
    • improved rectification of random fluctuations[15]
    In some systems, closed-loop and open-loop control are used simultaneously.
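
The disturbance-rejection advantage can be sketched numerically; the first-order speed model, gains, and numbers below are illustrative assumptions, not from the text:

```python
# Cruise-control sketch: speed model v' = u - drag*v + d, where d is a
# "hill" disturbance that appears halfway through the run. All values
# (drag, gain, setpoint) are illustrative assumptions.
def simulate(closed_loop, setpoint=20.0, drag=0.3, dt=0.1, steps=500):
    v = 0.0
    u_open = drag * setpoint              # open-loop throttle sized for a flat road
    for k in range(steps):
        d = -2.0 if k > steps // 2 else 0.0               # the hill disturbance
        u = 2.0 * (setpoint - v) if closed_loop else u_open  # P feedback vs fixed input
        v += dt * (u - drag * v + d)
    return v

final_open = simulate(closed_loop=False)    # drifts well below the setpoint
final_closed = simulate(closed_loop=True)   # feedback pulls speed back toward it
```

With these numbers the closed-loop run ends much nearer the setpoint than the open-loop one, which is the disturbance-rejection effect described above.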

  • Even assuming that a “complete” model is used in designing the controller, all the parameters included in these equations (called “nominal parameters”) are never known with
    absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.

  • A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis.

  • [2] Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other
    applications range far beyond this.

  • The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring
    the controlled process variable to the same value as the set point.

  • Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design.

  • Two optimal control design methods have been widely used in industrial applications, model predictive control (MPC) and linear-quadratic-Gaussian control (LQG), as it has
    been shown that they can guarantee closed-loop stability.

  • These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.

  • In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into
    account these random deviations.

  • Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov’s Theory) to ensure stability without regard to the inner dynamics of the system.

  • [16] Analysis techniques – frequency domain and time domain. Mathematical techniques for analyzing and designing control systems fall into two different categories:
    • Frequency domain – In this type the values of the state variables, the mathematical variables representing the system’s input, output and feedback, are represented as functions of frequency.
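
As an illustration of the frequency-domain view, a transfer function can be evaluated along s = jω to obtain gain and phase at each frequency; the first-order system H(s) = 1/(s + 1) below is an assumed example:

```python
import cmath
import math

# Frequency response of the illustrative first-order system H(s) = 1/(s + 1),
# obtained by evaluating the transfer function at s = j*omega.
def H(omega):
    return 1.0 / (complex(0.0, omega) + 1.0)

gain_at_corner = abs(H(1.0))                          # ~0.707 at the corner frequency
phase_at_corner = math.degrees(cmath.phase(H(1.0)))   # -45 degrees
```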

  • List of the main control techniques:
    • Optimal control is a particular control technique in which the control signal optimizes a certain “cost index”: for example, in the case of a satellite, the jet thrusts needed to bring it to a desired trajectory while consuming the least amount of fuel.

  • In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories
    permitting control of nonlinear systems.

  • For MIMO (multi-input, multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next
    section).

  • The state space representation (also known as the “time-domain approach”) provides a convenient and compact way to model and analyze systems with multiple inputs and outputs.

  • Many systems may be assumed to have a second order and single variable system response in the time domain.

  • Linear and nonlinear control theory. The field of control theory can be divided into two branches:
    • Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input.
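
Superposition is easy to check numerically for a linear system; the one-pole discrete filter below is an assumed example:

```python
# The discrete LTI filter y[k] = 0.5*y[k-1] + x[k] obeys superposition:
# the response to a + b equals the response to a plus the response to b.
def lti_response(x):
    y, out = 0.0, []
    for xk in x:
        y = 0.5 * y + xk
        out.append(y)
    return out

a = [1.0, 0.0, 0.0, 2.0]
b = [0.0, 3.0, 1.0, 0.0]
resp_of_sum = lti_response([ai + bi for ai, bi in zip(a, b)])
sum_of_resps = [ya + yb for ya, yb in zip(lti_response(a), lti_response(b))]
```

A nonlinear element (e.g. replacing `xk` with `xk**2`) breaks this equality, which is what pushes such systems into the nonlinear branch.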

  • As the general theory of feedback systems, control theory is useful wherever feedback occurs – thus control theory also has applications in life sciences, computer engineering,
    sociology and operations research.

  • Classical SISO system design. The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance
    rejection using a second input.

  • In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a
    mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations.
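
A minimal sketch of this state-space form, using an assumed two-state system and forward-Euler integration (the matrices are illustrative, not from the text):

```python
# State-space sketch: x' = A x + B u, y = C x, integrated with forward Euler.
def step_response(A, B, C, dt=0.001, steps=5000):
    n = len(A)
    x = [0.0] * n
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * 1.0  # unit step input u = 1
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return sum(C[j] * x[j] for j in range(n))

# Illustrative critically damped system: x1' = x2, x2' = -x1 - 2*x2 + u.
y_final = step_response(A=[[0.0, 1.0], [-1.0, -2.0]], B=[0.0, 1.0], C=[1.0, 0.0])
```

After 5 simulated seconds the output has settled close to the unit DC gain, with no overshoot.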

  • Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent.

  • Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred
    in most industrial applications.

  • The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot,
    or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.

  • Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal.
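
For a small linear system this can be tested with the Kalman rank condition: for two states, the pair (A, b) is controllable iff [b, Ab] is nonsingular. The values below are illustrative:

```python
# Kalman rank test sketch for a 2-state system: controllable iff
# det([b, A b]) != 0.
def controllable_2x2(A, b):
    Ab = [A[0][0] * b[0] + A[0][1] * b[1],
          A[1][0] * b[0] + A[1][1] * b[1]]
    det = b[0] * Ab[1] - b[1] * Ab[0]
    return abs(det) > 1e-12

A = [[0.0, 1.0], [-1.0, -2.0]]
ok = controllable_2x2(A, [0.0, 1.0])    # input drives the second state
bad = controllable_2x2(A, [0.0, 0.0])   # no input authority at all
```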

  • This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically.

  • In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables.

  • The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence.

  • From a geometrical point of view, looking at the states of each variable of the system to be controlled, every “bad” state of these variables must be controllable and observable
    to ensure a good behavior in the closed-loop system.

    • Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control
      systems are nonlinear.

  • Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems.

  • Constraints. A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints.

  • In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential
    equations describing the system.

  • Sometimes it is desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead
    of simply requiring that Re[λ] < 0.

  • Model identification and robustness. A control system must always have some robustness property.

  • System classifications – linear systems control. For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system
    and calculating a feedback matrix assigning poles in the desired positions.
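
In the special case of a single-input system already in controllable canonical form, the feedback matrix can be read off by matching characteristic-polynomial coefficients; the sketch below (with illustrative values) shows the idea for two states:

```python
# Pole-placement sketch. For x' = A x + B u with
# A = [[0, 1], [-a0, -a1]], B = [0, 1], the feedback u = -K x gives the
# closed-loop polynomial s^2 + (a1 + k2) s + (a0 + k1), so K follows
# directly from the desired pole locations p1, p2.
def place_poles(a0, a1, p1, p2):
    d1, d0 = -(p1 + p2), p1 * p2        # desired polynomial s^2 + d1 s + d0
    return [d0 - a0, d1 - a1]           # K = [k1, k2]

K = place_poles(a0=2.0, a1=1.0, p1=-2.0, p2=-3.0)
```

General MIMO pole placement rests on the same principle but computes the full feedback matrix numerically.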

  • Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim
    Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications.

  • Main control strategies. Every control system must first guarantee the stability of the closed-loop behavior.

  • The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions.

  • Analysis. The robustness of a SISO (single-input, single-output) control system can be analyzed in the frequency domain, considering the system’s transfer function
    and using Nyquist and Bode diagrams.
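
One point of such a frequency-domain analysis can be computed directly; the loop transfer function L(s) = 1/(s(s + 1)) below is an assumed example:

```python
import cmath
import math

# One Bode point of the illustrative loop L(s) = 1/(s*(s + 1)) at s = j*omega.
def bode_point(omega):
    s = complex(0.0, omega)
    L = 1.0 / (s * (s + 1.0))
    return 20.0 * math.log10(abs(L)), math.degrees(cmath.phase(L))

mag_db, phase_deg = bode_point(1.0)   # about -3 dB and -135 degrees
```

Sweeping `omega` over a log-spaced grid gives the full Bode plot, and tracing L(jω) in the complex plane gives the Nyquist diagram.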

  • A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems.

  • Together with PID controllers, MPC systems are the most widely used control technique in process control.

  • In open-loop control, the control action from the controller is independent of the “process output” (or “controlled process variable”); an electromechanical timer, normally
    used for control based purely on a timing sequence with no feedback from the process, is a typical example.

  • Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area.

  • If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated
    poles at the complex plane origin (i.e. poles whose real and imaginary parts are both zero, in the continuous-time case).

  • The possibility of fulfilling different specifications varies with the model considered and the control strategy chosen.

  • These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden
    it into a vast generalization of a regulator interacting with a plant.

  • When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease
    from their initial value and do not show permanent oscillations.

  • Modern MIMO system design. Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems.

  • Practically speaking, stability requires that the transfer function complex poles reside:
    • in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function;
    • inside the unit circle for discrete time, when the Z-transform is used.
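
These pole conditions translate into one-line stability tests:

```python
# Stability tests from pole locations: continuous time (Laplace) and
# discrete time (Z-transform).
def stable_continuous(poles):
    return all(p.real < 0 for p in poles)       # open left half-plane

def stable_discrete(poles):
    return all(abs(p) < 1.0 for p in poles)     # strictly inside the unit circle
```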

  • Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions.

  • This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP).

  • • A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree.

  • [19] Decentralized systems control. When the system is controlled by multiple controllers, the problem is one of decentralized control.

  • The most common controllers designed using classical control theory are PID controllers.
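
A minimal positional-form PID sketch; the gains, time step, and first-order plant are illustrative assumptions:

```python
# Positional PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive the illustrative first-order plant v' = u - v to a setpoint of 1.0;
# the integral term removes the steady-state error a pure P controller leaves.
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=dt)
v = 0.0
for _ in range(3000):
    v += dt * (pid.update(1.0, v) - v)
```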

  • The ultimate goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response.

  • Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that m ẍ(t) = −K x(t) − B ẋ(t).
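
That model can be simulated directly by splitting it into two first-order equations; the parameter values below are illustrative:

```python
# Mass-spring-damper m*x'' = -K*x - B*x', split into x' = v,
# v' = (-K*x - B*v)/m and integrated with forward Euler.
def simulate_msd(m=1.0, K=1.0, B=2.0, x0=1.0, dt=0.001, steps=10000):
    x, v = x0, 0.0
    for _ in range(steps):
        a = (-K * x - B * v) / m
        x, v = x + dt * v, v + dt * a
    return x

x_final = simulate_msd()   # critically damped: the released mass settles to rest
```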

  • The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems.

  • Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque
    of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is “fed back” as input to the process, closing the loop.

  • The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
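
A common form of such a block is conditional integration: clamp the control signal and stop accumulating the integral while the actuator is saturated. The PI controller and limits below are illustrative:

```python
# PI controller with saturation and a simple anti-windup scheme: the
# integral only accumulates while the actuator is unsaturated, and the
# output is clamped so it never exceeds the threshold u_max.
def pi_antiwindup(setpoint, pv_seq, kp=1.0, ki=0.5, dt=0.1, u_max=1.0):
    integral, outputs = 0.0, []
    for pv in pv_seq:
        error = setpoint - pv
        u = kp * error + ki * integral
        if -u_max < u < u_max:              # conditional integration
            integral += error * dt
        u = max(-u_max, min(u_max, u))      # clamp to the threshold
        outputs.append(u)
    return outputs

outs = pi_antiwindup(5.0, [0.0] * 10)   # large error keeps the actuator saturated
```

Without the conditional test, the integral would grow unboundedly during saturation and cause a large overshoot once the error changes sign.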

  • Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible.

  • System identification. The process of determining the equations that govern the model’s dynamics is called system identification.

  • The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes.

  • Mathematically, this means that for a causal linear system to be stable, all of the poles of its transfer function must have negative real parts, i.e. the real part of each
    pole must be less than zero.

  • The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes
    in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere.

  • When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.

 

Works Cited

  • Maxwell, J. C. (1868). “On Governors” (PDF). Proceedings of the Royal Society. 100. Archived (PDF) from the original on December 19, 2008.
  • Minorsky, Nicolas (1922). “Directional stability of automatically steered bodies”. Journal of the American Society of Naval Engineers. 34 (2): 280–309. doi:10.1111/j.1559-3584.1922.tb04958.x.
  • GND. “Katalog der Deutschen Nationalbibliothek (Authority control)”. portal.dnb.de. Retrieved April 26, 2020.
  • Maxwell, J. C. (1868). “On Governors”. Proceedings of the Royal Society of London. 16: 270–283. doi:10.1098/rspl.1867.0055. JSTOR 112510.
  • Fernandez-Cara, E.; Zuazua, E. “Control Theory: History, Mathematical Achievements and Perspectives”. Boletin de la Sociedad Espanola de Matematica Aplicada. CiteSeerX 10.1.1.302.5633. ISSN 1575-9822.
  • Routh, E. J.; Fuller, A. T. (1975). Stability of Motion. Taylor & Francis.
  • Routh, E. J. (1877). A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion. Macmillan and Co.
  • Hurwitz, A. (1964). “On The Conditions Under Which An Equation Has Only Roots With Negative Real Parts”. Selected Papers on Mathematical Trends in Control Theory.
  • Flugge-Lotz, Irmgard; Titus, Harold A. (October 1962). “Optimum and Quasi-Optimum Control of Third and Fourth-Order Systems” (PDF). Stanford University Technical Report (134): 8–12. Archived from the original (PDF) on April 27, 2019.
  • Hallion, Richard P. (1980). In Sicherman, Barbara; Green, Carol Hurd; Kantrov, Ilene; Walker, Harriette (eds.). Notable American Women: The Modern Period: A Biographical Dictionary. Cambridge, Mass.: Belknap Press of Harvard University Press. pp. 241–242. ISBN 9781849722704.
  • Di Steffano, J. J.; Stubberud, A. R.; Williams, I. J. (1967). Feedback and Control Systems. Schaum’s Outline Series. McGraw-Hill.
  • Mayr, Otto (1970). The Origins of Feedback Control. Clinton, MA: The Colonial Press, Inc.
  • Mayr, Otto (1969). The Origins of Feedback Control. Clinton, MA: The Colonial Press, Inc.
  • Bechhoefer, John (August 31, 2005). “Feedback for physicists: A tutorial essay on control”. Reviews of Modern Physics. 77 (3): 783–836. doi:10.1103/RevModPhys.77.783.
  • Cao, F. J.; Feito, M. (April 10, 2009). “Thermodynamics of feedback controlled systems”. Physical Review E. 79 (4): 041118. arXiv:0805.4824. doi:10.1103/PhysRevE.79.041118.
  • “trim point”.
  • Wiberg, Donald M. (1971). State Space & Linear Systems. Schaum’s Outline Series. McGraw-Hill. ISBN 978-0-07-070096-3.
  • Terrell, William (1999). “Some fundamental control theory I: Controllability, observability, and duality” and “Some fundamental control theory II: Feedback linearization of single input nonlinear systems”. American Mathematical Monthly. 106 (9): 705–719 and 812–828. doi:10.2307/2589614. JSTOR 2589614.
  • Gu, Shi; et al. (2015). “Controllability of structural brain networks (Article Number 8414)”. Nature Communications. 6 (6): 8414. arXiv:1406.5197. Bibcode:2015NatCo…6.8414G. doi:10.1038/ncomms9414. PMC 4600713. PMID 26423222. “Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure.”
  • Melby, Paul; et al. (2002). “Robustness of Adaptation in Controlled Self-Adjusting Chaotic Systems”. Fluctuation and Noise Letters. 02 (4): L285–L292. doi:10.1142/S0219477502000919.
  • Sinitsyn, N. A.; Kundu, S.; Backhaus, S. (2013). “Safe Protocols for Generating Power Pulses with Heterogeneous Populations of Thermostatically Controlled Loads”. Energy Conversion and Management. 67: 297–308. arXiv:1211.0248. doi:10.1016/j.enconman.2012.11.021. S2CID 32067734.
  • Liu, Jie; Wang, Wilson; Golnaraghi, Farid; Kubica, Eric (2010). “A novel fuzzy framework for nonlinear system control”. Fuzzy Sets and Systems. 161 (21): 2746–2759. doi:10.1016/j.fss.2010.04.009.
  • Bellman, Richard (1964). “Control Theory”. Scientific American. 211 (3): 186–200. doi:10.1038/scientificamerican0964-186.

Photo credit: https://www.flickr.com/photos/rkramer62/3912497870/