6.7. Systems of Linear ODEs

Note

The information in this section is included for the sake of completeness, to show how MATLAB is used to solve ODEs, but it will not be covered on exams.

See also

In this 19-minute video, MIT professor Gilbert Strang explains how eigenvalues and eigenvectors give us the solution to a system of first-order, linear ordinary differential equations.

Differential equations that come up in engineering design and analysis are usually systems of equations, rather than a single equation. Fortunately, they are often first-order linear equations. As discussed in the Symbolic Differential Equations section, the Symbolic Math Toolbox can solve many differential equations expressed as a single equation. For higher order and non-polynomial systems of ODEs, we need numerical methods, such as those discussed in the Numeric Differential Equations section. However, systems of first-order linear ODEs may be solved analytically with eigenvalues and eigenvectors, which we get from linear algebra.

Exponential functions of the special number e have the special property of being the only functions whose derivative is a scalar multiple of the function itself. Specifically,

\frac{d\,e^{a\,t}}{dt} = a\,e^{a\,t}.

Thus, it follows that ODEs of the form

\frac{dy(t)}{dt} = a\,y(t)

have the solution

y(t) = c\,e^{a\,t}.
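
As a quick check, assuming the Symbolic Math Toolbox is available, dsolve confirms that the general solution is a constant multiple of e^{a\,t}.

>> syms y(t) a
>> dsolve(diff(y,t) == a*y)   % returns a constant (C1) times exp(a*t)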

The same principle applies to systems of ODEs, except that we use vectors and matrices to describe the equations.

\spalignsys{y_1' = a_{11}\,y_1 + a_{12}\,y_2 + \cdots + a_{1n}\,y_n;
y_2' = a_{21}\,y_1 + a_{22}\,y_2 + \cdots + a_{2n}\,y_n;
 \.  \vdots;
y_n' = a_{n1}\,y_1 + a_{n2}\,y_2 + \cdots + a_{nn}\,y_n}

In matrix notation, this is

Y' = A\,Y.

The solution has the form

Y(t) = c_1e^{\lambda_1\,t}X_1 + c_2e^{\lambda_2\,t}X_2 +
\cdots + c_ne^{\lambda_n\,t}X_n

The scalar values \{\lambda_1, \lambda_2, \cdots, \lambda_n\} are called the eigenvalues of matrix A. The vectors \{X_1, X_2, \cdots, X_n\} are the eigenvectors of A.

Full coverage of the many applications of eigenvalue problems, and of how to calculate eigenvalues and eigenvectors, is beyond the scope of this course. We will only give the basic definitions here.

6.7.1. Eigenvalues and Eigenvectors

Recall our previous discussion of solutions to systems of linear equations. In matrix notation, such systems are represented as A\,x = b. Here we will consider a special case of such equations.

A\,X_i = \lambda_i\,X_i

The X_i vector is a special vector such that its product with the A matrix is equal to its product with the scalar value \lambda_i. For the n-by-n square matrix, A, there are n X_i vectors called eigenvectors and n matching \lambda_i scalars called eigenvalues.

The n scalar eigenvalues, \{\lambda_1, \lambda_2, \cdots,
\lambda_n\}, can be viewed as the shift of the matrix’s main diagonal that will make the matrix singular. Eigenvalues are found by subtracting \lambda along the main diagonal and finding the set of \lambda for which the determinant is zero.

det(A - \lambda\,I) = 0

\spaligndelims\vert\vert
\spalignmat{{a_{11} - \lambda} a_{12} {\cdots} a_{1n};
    a_{21} {a_{22} - \lambda} {\cdots} a_{2n};
    {\vdots}  {\vdots} {}  {\vdots};
    a_{n1} a_{n2} {\cdots} {a_{nn} - \lambda}} = 0

The determinant yields a degree-n polynomial in \lambda, called the characteristic polynomial, which can be factored to find the eigenvalue roots. Consider, for example, the matrix

A = \spalignmat{2 2;2 -1}

\begin{array}{rl} \spaligndelims\vert\vert
\spalignmat{(2-\lambda) 2;2 (-1-\lambda)} & = 0 \\ \\
(2 - \lambda)(-1 - \lambda) - 4 & = 0 \\
\lambda^2 - \lambda - 6 & = 0 \\
(\lambda - 3)(\lambda + 2) & = 0 \end{array}

\lambda_1 = 3, \hspace{0.5in} \lambda_2 = -2
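
MATLAB can perform the same calculation. The poly function returns the coefficients of the characteristic polynomial of a matrix, and roots factors it numerically to give the eigenvalues.

>> A = [2 2;2 -1];
>> p = poly(A)        % coefficients of lambda^2 - lambda - 6
>> roots(p)           % returns the eigenvalues 3 and -2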

The eigenvectors, X_i (one per eigenvalue), lie along the same line as their product with A because A\,X_i = \lambda_i\,X_i. Thus,

(A - \lambda_i\,I)X_i = 0.

The solution to the above equation is called the null solution because we are looking for a vector, X_i, that sets the equation to zero. Given the matrix A and the eigenvalues, the eigenvectors can be found with elimination or with MATLAB’s null function. Two things to note about the eigenvectors returned from null: First, MATLAB always normalizes the vector to unit length. Second, eigenvectors may always be multiplied by a scalar and remain eigenvectors, so they can be rescaled to simpler numbers, as shown below.

>> A = [2 2;2 -1];
>> l1 = -2; l2 = 3;
>> N1 = A - l1*eye(2)
N1 =
    4     2
    2     1
>> N2 = A - l2*eye(2)
N2 =
   -1     2
    2    -4
>> X1 = null(N1)
X1 =
   -0.4472
    0.8944
>> X1 = X1/X1(1)
X1 =
    1
   -2
>> X2 = null(N2)
X2 =
   -0.8944
   -0.4472
>> X2 = X2/X2(2)
X2 =
    2
    1
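
We can verify that X1 is an eigenvector by checking that multiplying it by A gives the same result as multiplying it by its eigenvalue.

>> A*X1
ans =
    -2
     4
>> l1*X1
ans =
    -2
     4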

MATLAB has a function called eig that calculates both the eigenvalues and eigenvectors of a matrix.

>> A = [2 2;2 -1];
>> [X, V] = eig(A)
X =
    0.4472   -0.8944
   -0.8944   -0.4472
V =
   -2     0
    0     3

The eigenvalues are on the diagonal of V. The eigenvectors are columns of X. As with the null function, the eig function always normalizes the eigenvectors (unit length).
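
Since each column of X satisfies A\,X_i = \lambda_i\,X_i, a quick check of the result is that the matrix products A*X and X*V agree to within floating point round-off.

>> norm(A*X - X*V)    % should be zero, or very nearly so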

6.7.2. Eigenvector Animation

Eigenvectors and eigenvalues can be difficult to understand, so the MATLAB code below shows an animation that will hopefully help to visualize what makes a vector an eigenvector.

function eigdemo(A)
%% Eigenvector, Eigenvalue Demo for input 2x2 matrix, A
%
%  When some special vectors are multiplied by A, the result is a
%  scalar multiple of the vector.  These vectors are called the
%  Eigenvectors of A. The scalar values (lambda) are the Eigenvalues of A.
%  It is the orientation of an Eigenvector that makes it special.
%  Multiplying by a scalar constant (C) does not change the Eigenvectors.
%  Thus, for simplicity, we can consider a set of unit vectors as candidate
%  Eigenvectors.
%
%  A*x == lambda*x,  C*A*x == C*lambda*x
%
%  This function shows an animation demonstrating that the product (A*x) is
%  in line with the vector x when, and only when, x is an Eigenvector. The A
%  (input) matrix must be a 2x2 matrix for this demonstration, but
%  Eigenvalues and Eigenvectors exist, of course, for larger matrices also.
%
%  The red rotating line shows the possible unit vectors, x.
%  The blue moving line is v = A*x.
%  The red fixed arrows show the eigenvectors (ex).
%  The green fixed arrows are the products (lambda*eigenvectors == A*ex),
%      which are scalar (lambda) multiples of the red arrows.
%
%  Notice that when the red rotating vector hits an Eigenvector,
%  the blue line matches the green arrow. The blue line is also (-1)
%  times the green arrow when the red rotating line is (-1) times
%  an Eigenvector.
%


if size(A,1) ~= 2 || size(A,2) ~= 2
    disp('Matrix A must be 2x2')
    return;
end

theta = linspace(2*pi,0,100);
xv = cos(theta);
yv = sin(theta);
v = A*[xv; yv];
[ex, lambda] = eig(A);
if ~isreal(ex) || ~isreal(lambda)
    disp('Complex Eigenvector results, use another A matrix');
    return;
end
ep = A*ex;       % same as A*ex == lambda*ev

fprintf('eigenvector 1: %f %f\n', ex(1,1), ex(2,1));
fprintf('eigenvector 2: %f %f\n', ex(1,2), ex(2,2));
fprintf('A*eigenvector 1: %f %f\n', ep(1,1), ep(2,1));
fprintf('A*eigenvector 2: %f %f\n', ep(1,2), ep(2,2));
fprintf('lambda: %f %f\n', lambda(1,1), lambda(2,2));

M = ceil(max([norm(ep(:,1)) norm(ep(:,2))]));
figure; axis([-M M -M M]);
daspect([1 1 1]);
z = [0;0];
hold on
l1 = line([0 1], [0 0], 'LineWidth', 3, 'Color', 'r');
l2 = line([0 1], [0 0], 'LineWidth', 3, 'Color', 'b');
plot_arrow([z ex(:,1)], 'r'); % eigenvector 1
plot_arrow([z ex(:,2)], 'r'); % eigenvector 2
plot_arrow([z ep(:,1)], 'g'); % lambda 1 * eigenvector 1
plot_arrow([z ep(:,2)], 'g'); % lambda 2 * eigenvector 2
hold off

for k = 1:3        % make three loops
    for i = 1:100
        l1.set('XData', [0 xv(i)], 'YData', [0 yv(i)]);
        l2.set('XData', [0 v(1,i)], 'YData', [0 v(2,i)]);
        drawnow;
        pause(0.1);
    end
end
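
For example, assuming the function is saved as eigdemo.m and that a compatible plot_arrow helper function is on the MATLAB path, the animation for the earlier example matrix can be run as follows.

>> eigdemo([2 2;2 -1])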

6.7.3. ODE Example

Consider the set of ODEs and initial conditions,

\spalignsys{y_1'(t) = -2\,y_1(t) + y_2(t);
            y_2'(t) = y_1(t) - 2\,y_2(t)}
\mbox{, \hspace{0.5in}} \spalignsys{y_1(0) = 6; y_2(0) = 2}.

In matrix notation,

Y' = \spalignmat[r]{-2 1; 1 -2}\,Y
\mbox{, \hspace{0.5in}} Y(0) = \spalignvector{6 2}.

We first use MATLAB to find the eigenvalues and eigenvectors. MATLAB always returns normalized eigenvectors, which can be multiplied by a constant to get simpler numbers.

>> A = [-2 1; 1 -2];
>> [X,lambda] = eig(A)
X =
    0.7071    0.7071
   -0.7071    0.7071
lambda =
    -3     0
     0    -1
>> X = X*2/sqrt(2)
X =
    1.0000    1.0000
   -1.0000    1.0000

The columns of the X matrix are the eigenvectors. The eigenvalues are on the diagonal of lambda. Our solution has the form

Y(t) = c_1\,e^{-3t}\spalignvector[r]{1 -1} +
c_2\,e^{-t}\spalignvector{1 1}

At the initial condition (t = 0), the exponential terms become 1.

Y(0) = c_1\,\spalignvector[r]{1 -1} +
c_2\,\spalignvector{1 1}
= \spalignmat[r]{1 1; -1 1}\spalignvector{c_1 c_2}
= X\,C

>> Y0 = [6;2];
>> C = X\Y0
C =
    2.0000
    4.0000

\spalignsys{y_1(t) = 2\,e^{-3t} + 4\,e^{-t};
y_2(t) = -2\,e^{-3t} + 4\,e^{-t}}
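
As a check on the hand calculation, and again assuming the Symbolic Math Toolbox is available, dsolve can solve the system directly from the equations and initial conditions. The expressions returned match y_1(t) and y_2(t) above.

>> syms y1(t) y2(t)
>> eqns = [diff(y1,t) == -2*y1 + y2, diff(y2,t) == y1 - 2*y2];
>> conds = [y1(0) == 6, y2(0) == 2];
>> S = dsolve(eqns, conds);
>> [S.y1, S.y2]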

Note

Some ODE systems have complex eigenvalues. When this occurs, the solution will have sine and cosine oscillating terms because of Euler’s formula, e^{j\,x} = \cos(x) + j\,\sin(x).
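
For example, the matrix below has purely imaginary eigenvalues, so the solution to Y' = A\,Y contains only oscillating terms.

>> A = [0 1;-1 0];
>> eig(A)             % returns the complex pair 1i and -1i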