Everything You Need To Know To Find The Best control technology co

11 Mar., 2024

 

Courtesy: Peter Galan, a retired software engineer

 

Learning Objectives

  • Various control methods can be considered as part of control system design.
  • Learn control system basics of optimal control, fuzzy control, artificial neural networks and classic proportional-integral-derivative (PID) controls.
  • Understand comparisons of advanced motion controls with graphics, equations and examples.

Insights for Advanced Controls, Artificial Intelligence for Motion Control

  • Every control system designer is asked often about what control method is best. The best engineering answer is that it depends on the application. Peter Galan, a retired controls engineer, deftly explains with equations, graphs and examples, four control techniques for advanced process control applications.
  • He covers optimal control, fuzzy control and artificial neural networks (or ANN-based controls), in addition to classic proportional-integral-derivative (PID) controls. While the in-depth tutorial discussion could apply to multiple control examples, Galan looks at servo-based motion control. This is the tenth Galan Control Engineering tutorial on control system topics.

Every control system user/designer is very likely asked the same question many times: What’s the best control method? The control-method question is particularly important today, with many more options than decades before. Long ago, the “golden” (actually the only) solution was a proportional-integral-derivative (PID) controller. While there’s nothing wrong with PID control, practical limitations make other control methods preferable in many applications. What “other” control methods are available today?

This article examines optimal control, fuzzy control and artificial neural networks (or ANN-based controls), in addition to classic PID control. The limits of a classic PID controller are seen in a typical controlled system (plant) analysis. Several examples could be used, but examples below focus on a position servomechanism, widely used in industrial robots, autonomous vehicles and many other, not only industrial, applications.

Position servosystem identification

If the controlled system is a servomechanism based on a dc motor with constant excitation (with permanent magnets, for example), it can be described by a simplified transfer function, the angular displacement over the voltage in the s-domain, as:

FS(s) = Θ(s)/VIN(s) = K / (Ti·s·(1 + T·s))

where:

  • K represents the motor torque constant
  • T represents the mechanical time constant of the motor, J/B, where J is the total inertia including the load and B is the motor damping
  • Ti represents the integration constant (determined by the motor revolution and the transmission ratio).

The transfer function above represents a first-order system combined with an integrator. Theoretically, there is another first-order, low-pass filter involved, with a time constant equal to the ratio of the motor winding inductance and resistance; this time constant is significantly smaller than the dominant constant, T, so it does not need to be taken into account.

If you are a motor user, not a motor designer, it might be simpler to find the (dominant) time constants by experimental measurement of the open-loop response. For that purpose, apply a certain actuating variable (a voltage, VIN) to the servo system, for example, 50% of what would correspond to the maximum servo speed. The servo has to be fully loaded, as it would be under regular working conditions. Scan the servo position response and turn it off just before it reaches the maximum position. Plotting the measured values might produce a graph as shown in Figure 1.

From the characteristic above, and knowing the input value (the step function, VIN, which corresponds, for example, to 50% of the maximum servo speed), determine the T and Ti constants of this first-order "astatic" (an integrator combined with a first-order lag) transfer function of a typical servosystem.
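One way to extract the two constants is to fit the analytical step response of the astatic model, y(t) = (VIN/Ti)·(t − T·(1 − e^(−t/T))) with K = 1, to the recorded data. The sketch below only illustrates that idea (it is not from the article); the sample data, variable names and units are placeholders.

```python
# Hedged sketch: estimating T and Ti by curve-fitting a recorded open-loop step response.
# The synthetic "measurement" below stands in for the data captured in Figure 1.
import numpy as np
from scipy.optimize import curve_fit

def astatic_step_response(t, T, Ti, v_in=0.5):
    """Position response of K/(Ti*s*(1+T*s)), K = 1, to a step of amplitude v_in."""
    return (v_in / Ti) * (t - T * (1.0 - np.exp(-t / T)))

t = np.linspace(0.0, 2.0, 200)                       # replace with the measured time stamps [s]
position = astatic_step_response(t, 0.15, 0.5)       # replace with the measured positions
position += np.random.normal(0.0, 1e-3, t.size)      # (synthetic measurement noise)

(T_est, Ti_est), _ = curve_fit(lambda x, T, Ti: astatic_step_response(x, T, Ti),
                               t, position, p0=(0.1, 1.0))
print(f"Estimated T = {T_est:.3f} s, Ti = {Ti_est:.3f} s")
```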

Synthesis of a servosystem control system

A servomotor and servo drive motion control system can use a variety of closed-loop controls described below with equations, examples, diagrams, tables and charts.

How to apply a PID controller

PID compensation is one of the most common forms of closed-loop control. Why is it so popular? In most applications, the controlled process can be modelled by a first- or a second-order transfer function, and the PID controller can cancel, or at least significantly compensate for, exactly two poles of the transfer function.

The transfer function of the PID controller, FR(s), in the s-domain can be expressed (in its parallel form) as follows:

FR(s) = KP + KI/s + KD·s

where KP, KI and KD are the proportional, integral and derivative gains.

Now consider a controlled system that can be approximated by a first-order astatic transfer function:

FS(s) = K / (τ2·s·(1 + τ1·s))

The open-loop transfer function of the control system, FO(s) = FR(s)·FS(s), equals:

FO(s) = K·(KP + KD·s) / (τ2·s·(1 + τ1·s))

Why use PD instead of PID? First, notice that the integration part of the full PID controller is missing in the expression above. Why? Because an integrator is already present in the controlled servo system.

In an optimally tuned control system, the KD/KP ratio has to match the value of the τ1 time constant, so the open-loop transfer function reduces to just:

FO(s) = KP·K / (τ2·s)

The closed-loop transfer function of an ideally tuned PD control system, FC(s), will equal:

FC(s) = FO(s) / (1 + FO(s)) = 1 / (1 + (τ2 / (KP·K))·s)

This corresponds to a single-pole transfer function with the time constant τ = τ2 / (KP·K), where τ2 is the integration member time constant, KP is the proportional constant of the controller and K is the gain of the controlled system. However, if the time constants were found using a response characteristic as shown in Figure 1, then the gain (that is, the constant K) becomes 1.0, and the closed-loop time constant equals τ = τ2/KP. This is very good, because you can achieve a much faster time response from the closed-loop control system by making KP adequately large. In practice, "adequately large" can provide a 5 to 10 times faster response. Higher values of KP can lead to an unstable response; remember that the actual system parameters will certainly differ from the approximated ones.
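For a quick, worked illustration (τ2 here is an assumed value; KP = 5 matches the setting used in the simulation later in the article): with τ2 = 0.5 s, K = 1 and KP = 5, the closed-loop time constant becomes

τ = τ2 / (KP·K) = 0.5 s / (5 × 1) = 0.1 s

that is, a response five times faster than the dominant open-loop dynamics, consistent with the 5 to 10 times range quoted above.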

Example of why controllers need limiters: However, such an optimally tuned P(I)D controller has its shortcomings. Observe the output (actuating) variable of the P(I)D controller: at the first steps of the control process it reaches almost infinitely high values. They rapidly drop, but for a relatively long time they remain far above what such a controlled system (servomechanism) can tolerate. Imagine that a servomotor, which runs on a nominal 24 V input, would get voltages in the range of thousands of volts in the first steps and, later, hundreds of volts. No controller could drive such a high voltage, and even if one could, the motor would not survive it. What's needed? Adding a limiter to the controller guarantees the actuating variable will never exceed the maximum value acceptable by the controlled system (a dc motor in this case). A final PID control arrangement of a servomechanism is shown in Figure 2.
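A minimal discrete-time sketch of such a PD controller with an output limiter is shown below. It only illustrates the arrangement in Figure 2, not the author's implementation; the gains, sample period and limit value are placeholders.

```python
# Hedged sketch: a discrete PD controller whose output is clamped by a limiter.
def make_pd_controller(kp, kd, dt, u_max):
    prev_err = 0.0
    def step(reference, measurement):
        nonlocal prev_err
        err = reference - measurement
        u = kp * err + kd * (err - prev_err) / dt    # proportional + derivative action
        prev_err = err
        return max(-u_max, min(u_max, u))            # limiter: never exceed what the motor tolerates
    return step

# Per the tuning rule above, kd/kp is chosen to match the plant time constant τ1.
pd = make_pd_controller(kp=5.0, kd=0.75, dt=0.001, u_max=1000.0)
```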

Near the end of the article, compare PID controller performance with other types of controllers. An optimally tuned PD controller with the unlimited actuating variable (that is, an ideal controller) has the output characteristic shape shown in the yellow color (Figure 8); its practical version (with the actuating variable limited to the maximum acceptable values) is shown in the red color.

Applying time-optimal (bang-bang) control

Optimal control has been the subject of extensive research for many decades. Some basic ideas about optimal controls are worth reviewing.

Time-optimal control defined, with an example: Imagine a common electrical servomechanism with a dc motor. The task at hand is to control the servo so it reaches a new reference point in the shortest possible time. An optimally tuned PID controller would not achieve this goal if it has to apply a realistic (limited) actuating variable. The intuitive reaction would be to apply the maximum acceptable voltage to the dc motor and let the motor run full-speed forward.

Then, at a certain time, change the voltage polarity, so the motor will start to decelerate at the maximum possible rate. Later, at the moment when the motor speed is zero, turn off the voltage. If the voltage polarity is changed at the right time, then, at the moment the motor stops running, the servo will be exactly in the desired position. This is called optimal control. In this case, because time is the criterion of this optimal control, it is called time-optimal control.

Finding the shape of the switching curve

Figure 3 explains the above-described time-optimal control process in the state space, which in this case is a two-dimensional space (area); one dimension is the output variable, and the other dimension is its (time) derivative. At the moment a new reference value is applied, the output variable is shifted along the horizontal axis, so it represents the regulation error (err), which is the difference between the reference and the actual output. At the same time, t0, the maximum voltage is applied to the dc motor. The servo leaves its initial position, P0, and starts to accelerate. At time t1, the controller changes the voltage polarity, and the motor speed soon starts to decline. At time t2, just as the motor speed becomes zero and the desired position, P2, has been achieved, the actuating variable (voltage) is turned off. While this seems straightforward, knowing the shape of the switching curve is not so simple.

If the controlled system is a servomechanism with the simplified transfer function introduced earlier, the angular displacement over the voltage in the s-domain:

FS(s) = K / (Ti·s·(1 + T·s))

then a complete, time-optimal control system can have a block diagram as shown in Figure 4.

Origin of the bang-bang controller name: This control scheme is completely different from PID control. It is a non-linear control, and because the second non-linearity, N2, represents a relay providing a ±U value of the actuating variable, such control is called bang-bang control. Regarding the first non-linearity, N1, which represents the switching curve, a "sqrt" (square root) function of the regulation error, E, delivers reasonable results. In practice it is always difficult to find an exact switching curve and, as a result, the controller might keep switching the driving voltage between its maximum and minimum forever. To avoid this, increase the dead zone of the relay.
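The control law of Figure 4 can be sketched as below. This is only an illustration under the assumptions stated in the comments (the square-root approximation of the switching curve, an assumed curve gain and an assumed dead-zone width), not the author's code.

```python
# Hedged sketch: bang-bang control with a sqrt-shaped switching curve and a relay dead zone.
import math

def bang_bang(err, der, u_max=1000.0, k_curve=6.0, dead_zone=5.0):
    """Return +u_max, -u_max or 0 for regulation error `err` and its derivative `der`."""
    # Switching function: how far the state is from the sqrt-shaped curve (N1).
    s = math.copysign(k_curve * math.sqrt(abs(err)), err) - der
    if abs(s) <= dead_zone:        # inside the relay dead zone (N2): coast, to avoid chattering
        return 0.0
    return u_max if s > 0 else -u_max
```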

Near the end of the article, see how well such a time-optimal controller copes compared to other controller types. Its output variable function is shown in the bright blue color (Figure 8).

How to apply fuzzy control

Fuzzy control is another non-linear control method, which can be a very good solution for controlled systems that are difficult to analyze, or those whose dynamic behavior is unknown at the time of design.

Fuzzy control can be compared to a "sub-optimal" form of time-optimal control: it can deliver worse-than-optimal results, though they still can be very good. Learn more about fuzzy control and time-optimal control.

In this particular case with the position servomechanism control, the following control scheme (Figure 5) using a fuzzy controller may be used:

Explanation of fuzzification, defuzzification: Fuzzy control can be seen as an extension or modification of fuzzy logic. In the first phase, the fuzzy logic converts (in a process known as fuzzification) the "crisp" input variables into "fuzzy" sets. In the second phase, it processes those fuzzy sets. In the final phase, it converts (in a process known as defuzzification) the processed fuzzy sets back to a crisp output variable.

For the fuzzification of the input variables, select a set of "membership" functions of the lambda (triangular) shape. A set of five is often sufficient; to make the control process more refined, seven membership functions are used in this example. They cover the full range of the regulation error, err, and its derivative, der, state variables, which can acquire values between -1000 and +1000. They can be called:

  • High-negative (HN)
  • Medium-negative (MN)
  • Low-negative (LN)
  • Small (S)
  • Low-positive (LP)
  • Medium-positive (MP)
  • High-positive (HP).

Seven similar levels are used for the output (actuating) variable fuzzification:

  • Full-negative (FN)
  • Medium-negative (MN)
  • Low-negative (LN)
  • Zero (Z)
  • Low-positive (LP)
  • Medium-positive (MP)
  • Full-positive (FP).

Processing the fuzzy sets is the most critical phase of fuzzy control. It is “governed” by the fuzzy control knowledge base. Figure 6 shows one such suitable knowledge base.

Notice how the err and der input variables are quantified. Their distribution functions are not spread equidistantly; the err membership functions are "pushed" more towards the center (S), while the der membership functions are pushed more towards the highest values. Why is such an arrangement better? Examining the output variable distribution in the knowledge base provides the answer. Look at the zero (Z) levels. They are arranged to very closely follow the output of the square-root function of the error, err, which is the best possible emulation of the switching curve used in the time-optimal control.
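A minimal sketch of the fuzzification step with lambda-shaped (triangular) membership functions is shown below. The centre positions only illustrate the non-equidistant spacing described above; they are assumptions, not the values from Figure 6.

```python
# Hedged sketch: triangular (lambda-shaped) membership functions and fuzzification.
import numpy as np

LABELS = ["HN", "MN", "LN", "S", "LP", "MP", "HP"]
ERR_CENTRES = np.array([-1000, -400, -120, 0, 120, 400, 1000], dtype=float)   # pushed towards S
DER_CENTRES = np.array([-1000, -900, -600, 0, 600, 900, 1000], dtype=float)   # pushed towards the extremes

def fuzzify(x, centres):
    """Return the membership degree of crisp value x in each triangular set."""
    degrees = np.zeros(len(centres))
    for i, c in enumerate(centres):
        left = centres[i - 1] if i > 0 else None
        right = centres[i + 1] if i < len(centres) - 1 else None
        if x == c or (left is None and x < c) or (right is None and x > c):
            degrees[i] = 1.0                                  # saturate beyond the outermost centres
        elif left is not None and left < x < c:
            degrees[i] = (x - left) / (c - left)
        elif right is not None and c < x < right:
            degrees[i] = (right - x) / (right - c)
    return dict(zip(LABELS, degrees))

print(fuzzify(-150.0, ERR_CENTRES))   # mostly LN, with a small MN membership
```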

Next, learn how well such a fuzzy controller performs compared to other types of controllers. Shape of the output variable is shown in the orange color (Figure 8).

Applying ANN-based control

There are countless possibilities for using artificial neural networks (ANN) in control systems. Many of them use ANN-based models of the controlled systems (plants), or models of their inverse dynamics, which, combined with classic PID controllers, help create adaptive and other, more sophisticated, control systems.

Try a different approach by training such an ANN to model the switching curve of a position servosystem. As shown above, the best (fastest) servomechanism movement can be achieved by using time-optimal control. The simplest approximation of the switching curve (the most critical aspect) is a square-root function of the regulation error. Even the fuzzy controller was "tuned" to emulate such a square-root function.

However, the actual switching curve can still differ from its approximation by a sqrt() function. Is there a way to find the actual switching curve of a position servomechanism? The answer is yes. It is possible to find the actual switching curve of a position servomechanism, "train" an ANN to remember it and generate it on demand. Going one step further, it's possible to train the ANN to take over the entire bang-bang controller (see Figure 4).

A switching curve is a sequence of [err, der] pairs at which the servo motor driver commutes the polarity of the nominal (maximum) voltage that can be applied to the motor. Find those values by running the servomechanism in open loop (that is, without feedback), measuring and recording its position and its speed, der. First, prepare a series of expected der (servomechanism speed) values from the lowest to the highest value. Now apply the maximum positive actuating variable (voltage +U) to the motor and let it run until the servomechanism reaches the first expected der value in that series. Important: Log the servo position at that moment as P1 and concurrently commute the actuating variable to the -U value. At the moment the servo speed (der) drops to zero, turn off the voltage and log the current position as P2. That process provides the first pair of [err, der] coordinates (where err = P2 - P1) of the first switching curve point. Of course, the servomechanism must run fully loaded, exactly as it is intended to be used.

The sequence of [err, der] coordinates represents the switching curve points. For optimal results, take about 50 coordinate pairs evenly spread along the der axis, then train a suitable ANN to use those switching points for commuting the actuating variable to the servomechanism. It's possible to do this with even the simplest ANN, with one hidden layer and about 12 nodes. You can find more information about using ANNs in control applications.
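The capture procedure can be sketched against a simulated plant, as below. This is only an illustration (not the author's code): the plant model and its parameters are assumed, and on real hardware the two simulation lines would be replaced by reading position and speed from the drive.

```python
# Hedged sketch: capturing switching-curve points [err, der] on a simulated servo
# (a discrete first-order lag plus integrator with K = 1; parameters are illustrative).
import numpy as np

T, Ti, dt, U = 0.15, 0.5, 0.001, 1000.0

def capture_switch_point(target_der):
    """Accelerate with +U until the speed reaches target_der, then decelerate with -U."""
    speed = pos = 0.0
    u = U
    p1 = None
    while True:
        speed += dt / T * (u - speed)          # first-order lag (motor dynamics)
        pos += dt / Ti * speed                 # integrator (position)
        if p1 is None and speed >= target_der:
            p1, u = pos, -U                    # log P1 and commute the voltage to -U
        elif p1 is not None and speed <= 0.0:
            return pos - p1, target_der        # err = P2 - P1, paired with der

switch_curve = [capture_switch_point(d) for d in np.linspace(20.0, 900.0, 50)]
```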

Figure 7 shows an output values table, which can be used to train the ANN controller. After capturing the switching curve points, it makes sense to train the ANN controller offline, that is, not directly on the physical controlled system. Start from the bottom (or top) of the captured coordinate series; for every individual der input, generate several (say, 20) err values (from -max to +max) and, for each combination of [err, der] coordinates, provide a particular output value to the ANN output. All the output values left of the switching curve should correspond to the maximum negative actuating value; all output values on the right-hand side of the switching curve should be positive maximums. And, of course, at those particular [err, der] coordinates previously found as the switching curve points, apply zero values. The farther the err coordinates are from the switching curve, the more sparsely those ±U values need to be provided, as the output surface remains flat there: +U or -U.

Please notice the close resemblance of this training data table to the knowledge base of the fuzzy system (Figure 6). At first, just the upper half of the state space is used to train the ANN controller (Figure 3). When the servo has to move in the opposite direction, the ANN controller will just swap the output values. However, if the servo system does not behave identically in both movement directions, the ANN controller has to be trained on the entire state space.
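A hedged sketch of how such a training table could be built and a small one-hidden-layer network fitted to it is shown below. It reuses the switch_curve list from the previous sketch; the choice of scikit-learn's MLPRegressor, the ±1000 ranges and the scaling are assumptions, not the author's setup.

```python
# Hedged sketch: building the training table of Figure 7 and fitting a small ANN to it.
import numpy as np
from sklearn.neural_network import MLPRegressor

U = 1000.0
X, y = [], []
for err_sw, der in switch_curve:                     # switching-curve points from the previous sketch
    X.append([err_sw, der]); y.append(0.0)           # on the switching curve: zero output
    for err in np.linspace(-1000.0, 1000.0, 21):     # ~20 err samples per der value
        X.append([err, der])
        y.append(U if err > err_sw else -U)          # right of the curve: +U, left of it: -U

ann = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh", max_iter=5000)
ann.fit(np.asarray(X) / 1000.0, np.asarray(y) / U)   # scale inputs and outputs to roughly +/-1

def ann_controller(err, der):
    """ANN-based controller output: approximately +/-U, dropping towards 0 near the curve."""
    return float(ann.predict(np.asarray([[err, der]]) / 1000.0)[0]) * U
```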

How does the ANN-based control system compete with the others? Look at its output variable shape in Figure 8 (white color).

Control systems comparison

The screen shot (Figure 8), generated using Python, shows the results of the simulated servomechanism and its controllers. The transfer function of the servomechanism was approximated as described above, with the time constants expressed in numbers of samples (please reference this Control Engineering article for a better understanding). The step function (the desired servo system position) was generated going from zero to 800 (where 1000 is the maximum of the desired position and of the actuating variable) and later, after 1000 samples, it dropped from 800 to 400. The input step function is depicted in the green color.
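A hedged sketch of how such a comparison could be scripted is shown below. It is not the author's Python program; it reuses the assumed plant model and the controller sketches from earlier sections, with the reference profile described above.

```python
# Hedged sketch: driving the same simulated plant with any of the controller sketches above.
def simulate(controller, n=2000, dt=0.001, T=0.15, Ti=0.5):
    """Return the position trace for a controller called as controller(err, der)."""
    speed = pos = 0.0
    trace = []
    for k in range(n):
        ref = 800.0 if k < 1000 else 400.0    # reference profile described in the text
        err = ref - pos
        u = controller(err, speed)            # e.g. bang_bang, ann_controller, or a PD wrapper
        speed += dt / T * (u - speed)         # first-order lag (motor dynamics)
        pos += dt / Ti * speed                # integrator (position)
        trace.append(pos)
    return trace

trace_bang_bang = simulate(bang_bang)         # similarly: simulate(ann_controller), etc.
```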

Figure 8 also shows how the individual control systems performed. The first was the optimally tuned PD controller, which reacted to the step function immediately. Its KP parameter was set to 5. But this is really only an ideal PD controller. The practical, optimally tuned PD controller with the limited actuating variable behaves as shown by the red exponential curve. Compared to the other controllers, it is actually the worst (slowest) performer. However, if tuned properly, it will not overshoot, which, in certain applications, can be very important.

The best (fastest) controller was the ANN-based controller (shown in the white curve). This should not be surprising; ANN was trained to emulate a precise switching curve, so it behaves as a perfect, time-optimal control system.

The classic time-optimal controller (cyan color) simulating the switching curve by the sqrt(e) function performed a little worse (slower). Considering its very simple implementation, however, there should not be any complaints about its performance.

The fuzzy controller (orange color) in this particular case didn’t perform as well. However, it was not tuned to its best performance (only initial tunings were made to roughly simulate a switching curve), so there is room for further performance improvements. Theoretically, the fuzzy controller should not perform worse than the classic, time-optimal controller.

Final observations: Using ANN, fuzzy control for control systems

Based on this particular example, the usefulness of ANNs for industrial control applications is confirmed. However, if systems change their behavior or parameters "on the run," a controller is needed that is more tolerant towards such variations. Tolerance is where fuzzy controllers shine. Among the most surprising revelations is that the classic time-optimal controller (based on the sqrt() function), learned at university more than 50 years ago, can still perform well.

Peter Galan is a retired control software engineer. Edited by Mark T. Hoske, content manager, Control Engineering, CFE Media, mhoske@cfemedia.com.

KEYWORDS

Artificial intelligence for control systems

CONSIDER THIS

Feedback loop/engineering interaction: Please email the editor with comments, suggestions or questions.

ONLINE

Control Engineering advanced process control tutorial library from Peter Galan is available online.
