2.2. Linear Time-Invariant Control Theory

A system is linear when its output is a sum of products between variables and constant coefficients. It is time invariant when the same input always yields the same output, regardless of when the input is applied.

Note

Skimming the surface here

LTI control theory gives us a model to analyze the behavior of a control system. It is used in the study of systems with complex moving parts, such as manufacturing robots with arms that have several joints and degrees of freedom. Our coverage here is just an introduction.

We can model a controller as being linear and time-invariant (LTI). Controllers that are not linear can often be modeled as linear within a limited operating range. Analog LTI systems are modeled with differential equations. In computer-controlled, discretely sampled systems, derivatives become differences between sample values and differential equations become difference equations. The design process for discrete LTI systems uses difference equations and math tools from the field of linear algebra.

\frac{dx(t)}{dt} = \dot{x} \approx \frac{x_{k+1} - x_{k}}{\delta\! t}

The rate of change of the system state, as determined by the system plant, can be modeled as a function of the current state and the control signal.

\dot{x} = f(x, u)

Then, given a sample time of \delta\! t, the discrete state update is:

x_{k+1} = x_k + \delta\! t f(x_k, u_k)

The task of designing the controller then becomes a matter of designing the control signal (u) from the error between the reference and estimated system state.
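The forward-Euler update above can be sketched directly in code. The plant function f below is a hypothetical one-dimensional example, not a model of any particular robot:

```python
import numpy as np

def euler_step(f, x_k, u_k, dt):
    """One forward-Euler step: x[k+1] = x[k] + dt * f(x[k], u[k])."""
    return x_k + dt * f(x_k, u_k)

# Hypothetical plant: state decays toward zero, pushed by the input u.
f = lambda x, u: -0.5 * x + u

x = 1.0
for _ in range(10):
    x = euler_step(f, x, 0.0, dt=0.1)
# With u = 0, each step multiplies x by (1 - 0.1 * 0.5) = 0.95
```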

2.2.1. State Space Form

LTI controllers are represented with equations in a standardized format called State Space Form. The first step when designing a controller is to describe the desired behavior in state space form. State space form requires three matrices. Some more complex controllers also have a fourth matrix. These matrices are then used in a standardized design to implement the controller.

A key concept is to focus on modeling the state of the system. The state, x, is usually a vector containing the physical position of the system (robot) and the derivative with respect to time of each dimension of the position. For a simple mobile robot moving in a plane, the state is x = \left[ p_x, p_y, \dot{p}_x,
\dot{p}_y \right]. Some robotic controllers, such as speed control and steering angle, have a single-dimension state. Controllers for arm robots may require a state vector with three dimensions each for the position and orientation of the end effector.

2.2.2. The Point Mass Controller

To illustrate how to model a controller with state space equations, we will consider the simplest of controllers. In the point mass controller, a force is applied to a point on a line. From physics, we know that a force produces an acceleration (F = ma).

../_images/point_mass.png

\begin{array}{lcl}
        & \ddot{p} = u &       \\
x_1 = p & & \dot{x}_1 = x_2 \\
x_2 = \dot{p} & & \dot{x}_2 = u
\end{array}

The state of the system is a 2 x 1 column vector.

x = \left[ \begin{array}{c}
   x_1 \\ x_2
\end{array} \right]

The state space model consists of two equations – the derivative of the state x and the output, which is the controlled position.

\dot{x}_1 = x_2 \\
\dot{x}_2 = u

y = p = x_1

To put the above equations in state space form, we express them with a standardized notation as two equations of matrices.

\dot{x} = \left[ \begin{array}{c}
     \dot{x}_1 \\ \dot{x}_2
  \end{array} \right]
= \left[ \begin{array}{cc}
     0 & 1 \\
     0 & 0
  \end{array} \right]
  \left[ \begin{array}{c}
     x_1 \\ x_2
  \end{array} \right]
 + \left[ \begin{array}{c}
     0 \\ 1
  \end{array} \right]
  u

y = \left[ \begin{array}{ll} 1 & 0 \end{array} \right]x

The state space form is:

\dot{x} = A\,x + B\,u

y = C\,x

For the point mass controller,

A = \left[ \begin{array}{cc}
            0 & 1 \\
            0 & 0
         \end{array} \right]
B = \left[ \begin{array}{c}
            0 \\ 1
         \end{array} \right]
C = \left[ \begin{array}{ll} 1 & 0 \end{array} \right]
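These matrices translate directly to numpy arrays, and we can check that \dot{x} = Ax + Bu reproduces the scalar equations \dot{x}_1 = x_2 and \dot{x}_2 = u. The sample state and input values below are arbitrary:

```python
import numpy as np

# Characteristic matrices for the point mass controller
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Arbitrary sample state and input for the check
x = np.array([[3.0],    # x1 = p
              [2.0]])   # x2 = p_dot
u = 5.0

x_dot = A @ x + B * u   # [[2.0], [5.0]]: x1_dot = x2, x2_dot = u
y = C @ x               # [[3.0]]: the output is the position p
```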

2.2.3. The Controller Implementation

We start with the model of the system in state space form:

\dot{x} = A\,x + B\,u

y = C\,x

The matrices A, B and C are called the characteristic matrices. The A matrix is called the Dynamics Matrix because it describes the physics of the system. The B matrix is called the Control Matrix because it operates on the input. The C matrix is called the Sensor Matrix because our estimate of the system position comes from the sensors. The sensors operate on the internal state of the system to quantify its current position.

../_images/LTI.png

To design a controller, always begin by writing the equations for \dot{x} and y in the generalized state space form. These equations describe our model of the system; they are not yet the solution that implements the controller. However, there is a known solution for a controller described by the state space equations.

We will skip the majority of the math needed to derive the solution. However, we should point out a couple of results that relate to the stability of the system.

We know from differential equations that if the variables are scalars, instead of matrices, and we ignore the input term that the solution appears as follows.

\dot{x} = ax(t), \;\; x(t_0) = x_0 \\
x(t) = e^{a(t-t_0)}x_0

Note

This simple differential equation solution relates to determining whether the controller is stable.

You may recognize this equation from biology or other related fields. Equations describing growth and decay take this general form.

It turns out that we can express the differential equation solution in the same form when A is a matrix.

\dot{x} = Ax, \;\; x(t_0) = x_0 \\
x(t) = e^{A(t-t_0)}x_0

It is a bit awkward to work with equations like this since part of the exponent is a matrix. However, exponential functions of the special number e have a Taylor series expansion. Using the Taylor series expansion simplifies dealing with the matrix exponent.

e^{at} = \sum_{k=0}^{\infty} \frac{a^{k} t^{k}}{k!} \;\;\;\;
e^{At} = \sum_{k=0}^{\infty} \frac{A^{k} t^{k}}{k!}
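For the point mass Dynamics Matrix, A\,A = 0, so the series terminates after two terms and e^{At} = I + At exactly. That gives us a convenient check on a truncated numerical sum of the series:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
t = 0.5

# Sum the first terms of the Taylor series for e^{At}
eAt = np.eye(2)            # k = 0 term: the identity
term = np.eye(2)
for k in range(1, 20):
    term = term @ A * (t / k)   # builds A^k t^k / k! incrementally
    eAt = eAt + term

# For this A, A @ A = 0, so the exact answer is I + A*t
```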

The Taylor series expansion relates to the general form of the controller solution.

Note

Controller Equations for LTI discrete systems:

x[k] = A^k x[0] + \sum_{j=0}^{k-1} A^{k-j-1} Bu[j]

y[k] = C A^k x[0] + \sum_{j=0}^{k-1} C A^{k-j-1} Bu[j]

This may look difficult and too time consuming to compute for each time sample; however, there are iterative techniques that allow us to reuse previous calculations.
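The iterative and closed-form computations can be checked against each other in Python. The discrete A and B below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

# Hypothetical discrete dynamics and control matrices
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
x0 = np.array([[1.0],
               [0.0]])
u = [1.0, -1.0, 0.5]   # input samples u[0], u[1], u[2]

# Iterative form: reuse the previous state instead of recomputing the sum
x = x0.copy()
for u_k in u:
    x = A @ x + B * u_k

# Closed form: x[k] = A^k x[0] + sum over j of A^(k-j-1) B u[j]
k = len(u)
closed = np.linalg.matrix_power(A, k) @ x0
for j, u_j in enumerate(u):
    closed += np.linalg.matrix_power(A, k - j - 1) @ (B * u_j)
```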

2.2.4. Controller Stability

We discussed previously that the stability of the system is related to the solution to the differential equation \dot{x} = a\,x(t), which contains an exponential equation with the special math constant of e.

x(t) = e^{a(t-t_0)}x_0

The value of the constant a determines whether the system is stable or whether it might produce very large values that the hardware cannot satisfy, making the controller unstable.

../_images/exp1.png

With a positive exponent, the controller blows up – BOOM!

../_images/exp2.png

With a negative exponent, the controller is stable.

The basic concept is that a negative constant in the exponent is what determines stability. Unfortunately, with systems expressed in state space form, we can not simply inspect a single constant. Instead, we determine the stability of the system by evaluating the Dynamics Matrix, A. To do this we need to compute the eigenvalues of the matrix. The system is stable if the real parts of all of the eigenvalues are negative. It is critically stable if the real part of any eigenvalue is zero. It is unstable if any eigenvalue has a positive real part. If any eigenvalues have nonzero imaginary components, then the system will oscillate to a degree that depends on the value of the eigenvalue’s imaginary component.

Note

Eigenvalues come to us from the field of linear algebra. However, it is not necessary to compute them by hand. The eig( ) function in Matlab and the numpy.linalg.eig( ) function in Python will return the eigenvalues of a matrix.

Here is how the eigenvalue computation looks in Python for the point mass system. I’m using IPython in what is shown below.

In [1]: import numpy

In [2]: a = numpy.array([[0, 1],[0,0]])

In [3]: a
Out[3]:
array([[0, 1],
    [0, 0]])

In [4]: numpy.linalg.eig(a)
Out[4]:
(array([ 0.,  0.]),
array([[  1.00000000e+000,  -1.00000000e+000],
    [  0.00000000e+000,   2.00416836e-292]]))

We see here that the matrix has two eigenvalues and that the real and imaginary parts of both eigenvalues are zero. Thus, the system is only critically stable. The eig( ) function returned two arrays. The first array holds the eigenvalues. The second array holds the eigenvectors, which are related to the eigenvalues, but we don’t need them here.

2.2.5. Designing for Stability

We have not discussed the input to our system. Since we want to make use of feedback to produce a stable controller, we could make the input u be a function of y, the estimated output as measured by the sensors. In doing so, we might be able to design the controller to be strictly stable.

Considering the point mass controller, we could design u to always move the point towards the origin (zero).

u = -Ky = -KCx

We have a new variable K that we can use to tune the controller. Since u is now in terms of x, we can now write the whole state space model in terms of x.

\dot{x} = Ax + Bu = Ax - BKCx = (A - BKC)x

Now, if we call \hat{A} = A - BKC, we have a state space equation that is just like the differential equation that we looked at before.

\dot{x} = \hat{A}x

Thus to determine stability, we can compute the eigenvalues of \hat{A}.

2.2.5.1. First Try

Let’s set K = 1 to start with and see if it is stable.

\hat{A} = \left[ \begin{array}{cc}
       0 & 1 \\
       0 & 0
    \end{array} \right]
 - \left[ \begin{array}{c}
       0 \\ 1
    \end{array} \right]
  \left[ \begin{array}{ll} 1 & 1 \end{array} \right]
  \left[ \begin{array}{ll} 1 & 0 \end{array} \right]
= \left[ \begin{array}{cc}
       0 & 1 \\
      -1 & 0
    \end{array} \right]

To determine stability, we can use either Python or Matlab to find the eigenvalues of \hat{A}.

eig(\hat{A}) = \pm j

Where j is the engineering common name for the imaginary value \sqrt{-1}. Math folks mistakenly call it i, but engineers call it j.

Thus, with K = 1, the system is only critically stable and it oscillates. We can do better!
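We can verify this quickly in Python, mirroring the earlier IPython session:

```python
import numpy as np

# A_hat = A - B*K*C with K = 1, worked out above
Ahat = np.array([[0.0, 1.0],
                 [-1.0, 0.0]])

eigvals = np.linalg.eig(Ahat)[0]
# Both eigenvalues are purely imaginary (+1j and -1j):
# zero real parts, so only critically stable, and it oscillates.
```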

2.2.6. Placing Eigenvalues

We can pick the eigenvalues of the system and work backwards to find the desired coefficients.

We’ll begin by combining our previous K and C terms into a single K matrix so that there is only one matrix to compute.

The state space equations are now:

\dot{x} = A\,x + B\,u

u = -K\,x =
-\left[ \begin{array}{ll} k_1 & k_2 \end{array} \right]\,x

\dot{x} = (A - BK)\,x = \hat{A}\,x

For the point mass controller,

\hat{A} = A - BK = \left[ \begin{array}{cc}
       0 & 1 \\
       0 & 0
    \end{array} \right]
 - \left[ \begin{array}{c}
       0 \\ 1
    \end{array} \right]
  \left[ \begin{array}{ll} k_1 & k_2 \end{array} \right]
= \left[ \begin{array}{cc}
       0 & 1 \\
     -k_1 & -k_2
    \end{array} \right]

Now, we need to compute the eigenvalues of \hat{A}, but the matrix contains variables, so we can not use our software tools. Matlab contains a function called place that can place the eigenvalues and compute the needed coefficients. If we forgot to pay Matlab’s big price tag, then we’ll have to compute them by hand. But since this is a fairly small matrix, it will not be so bad.
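In Python, scipy provides a free alternative, scipy.signal.place_poles. One caveat: for a single-input system like ours it cannot place a pole repeated more times than the rank of B, so this sketch places distinct poles at -1 and -2 instead. From the characteristic polynomial \lambda^2 + k_2\lambda + k_1, matching (\lambda + 1)(\lambda + 2) = \lambda^2 + 3\lambda + 2 gives K = [2, 3]:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Place distinct poles at -1 and -2 (place_poles rejects a
# repeated pole for this rank-1 B)
result = place_poles(A, B, [-1.0, -2.0])
K = result.gain_matrix   # approximately [[2., 3.]]
```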

2.2.6.1. Computing Eigenvalues

Given a matrix M, its eigenvalues (\lambda) satisfy the equation:

det(\lambda \mathit{I} - M) = 0

Where \mathit{I} is the identity matrix.

M = \left[ \begin{array}{cc}
   m_1 & m_2 \\
   m_3 & m_4
\end{array} \right]

\lambda \mathit{I} - M = \lambda
\left[ \begin{array}{cc}
      1 & 0 \\
      0 & 1
   \end{array} \right] - M
  = \left[ \begin{array}{cc}
    \lambda - m_1 & -m_2 \\
      -m_3 & \lambda - m_4
   \end{array} \right]

For a 2x2 matrix, the determinant is a scalar given by:

det\left( \left[ \begin{array}{cc}
  a & b \\
  c & d
\end{array} \right] \right)
= ad - cb
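A quick numerical sanity check of the 2x2 determinant formula against numpy, using an arbitrary sample matrix:

```python
import numpy as np

M = np.array([[2.0, 3.0],
              [4.0, 5.0]])
a, b = M[0, 0], M[0, 1]
c, d = M[1, 0], M[1, 1]

det_formula = a * d - c * b     # 2*5 - 4*3 = -2
# numpy.linalg.det(M) agrees, up to floating point error
```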

2.2.6.2. Back to the Point Mass

det(\lambda \mathit{I} - \hat{A}) =
\left| \begin{array}{cc}
        \lambda & -1 \\
      -k_1 & \lambda + k_2
     \end{array} \right|
     = \lambda(\lambda + k_2) + k_1
     = \lambda^{2} + \lambda\,k_2 + k_1 = 0

We want both eigenvalues to have negative real parts so that the controller is stable and does not oscillate. We could set both eigenvalues to -1. Eigenvalues are also called poles, a term deriving from the evaluation of analog systems in the Laplace domain. The point is that if \lambda equals any eigenvalue, one term in a product of poles becomes zero, making the whole product zero.

We represent each eigenvalue as \lambda_i and write the following product of poles:

(\lambda - \lambda_1)(\lambda - \lambda_2) = 0

Since we want the eigenvalues at -1:

(\lambda + 1)(\lambda + 1) = 0 \\
\lambda^{2} + 2\lambda + 1 = 0

In computing the eigenvalues of the point mass controller, we had:

det(\lambda \mathit{I} - \hat{A}) =
\lambda^{2} + \lambda\,k_2 + k_1 = 0

We can line up the coefficients of the two polynomials to find our K matrix.

K = \left[ \begin{array}{ll} k_1 & k_2 \end{array} \right]
= \left[ \begin{array}{ll} 1 & 2 \end{array} \right]
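numpy can expand a product of poles from its roots, which gives a handy check on the coefficient matching (np.poly returns the polynomial coefficients for a given set of roots):

```python
import numpy as np

# Expand (lambda + 1)(lambda + 1) = lambda^2 + 2*lambda + 1 from its roots
coeffs = np.poly([-1.0, -1.0])   # [1., 2., 1.]

# Match against lambda^2 + k2*lambda + k1
k2, k1 = coeffs[1], coeffs[2]
K = np.array([[k1, k2]])         # [[1., 2.]]
```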

Our state space equations become:

\dot{x} = (A - BK)\,x = \left( \left[ \begin{array}{cc}
       0 & 1 \\
       0 & 0
    \end{array} \right]
 - \left[ \begin{array}{c}
       0 \\ 1
    \end{array} \right]
  \left[ \begin{array}{ll} 1 & 2 \end{array} \right]
  \right)\,x
= \left[ \begin{array}{cc}
       0 & 1 \\
      -1 & -2
    \end{array} \right]x

u = -Kx =
\left[ \begin{array}{ll} -1 & -2 \end{array} \right]x

Now, our equation for \hat{A} becomes:

\hat{A} =
\left[ \begin{array}{cc}
     0 & 1 \\
    -1 & -2
  \end{array} \right]

Now, we can use Python to compute the eigenvalues of \hat{A}.

In [1]: import numpy

In [2]: Ahat = numpy.array([[0, 1],[-1, -2]])

In [3]: numpy.linalg.eig(Ahat)[0]
Out[3]: array([-1., -1.])

Thus, we have verified that both eigenvalues are at -1. Our controller is stable and it does not oscillate.
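As a final check, a short simulation shows the closed-loop state decaying smoothly to the origin. This is a forward-Euler sketch with an assumed sample time of 0.01 seconds:

```python
import numpy as np

# Closed-loop dynamics A_hat = A - B*K with K = [1, 2]
Ahat = np.array([[0.0, 1.0],
                 [-1.0, -2.0]])
dt = 0.01
x = np.array([[1.0],    # start one unit from the origin
              [0.0]])   # at rest

# Forward-Euler simulation of x_dot = Ahat @ x
for _ in range(1000):   # 10 seconds of simulated time
    x = x + dt * (Ahat @ x)

# Both eigenvalues at -1, so the state decays without oscillating
```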

It may seem like we covered a lot in this section, but we really just introduced LTI controllers. We’ll leave more complete coverage to more advanced courses dealing specifically with control systems.