10.2 Homogeneous Linear Systems
INTRODUCTION
In Example 7 of Section 10.1 we saw that the general solution of the 2 × 2 homogeneous system X′ = AX considered there is X = c1X1 + c2X2. Since both solution vectors have the form Xi = (k1, k2)T eλit, i = 1, 2, where k1, k2, λ1, and λ2 are constants, we are prompted to ask whether we can always find a solution of the form
X = (k1, k2, … , kn)T eλt = Keλt(1)
We will be dealing only with linear systems with constant coefficients.
for the general homogeneous linear first-order system
X′ = AX,(2)
where the coefficient matrix A is an n × n matrix of constants.
Eigenvalues and Eigenvectors
If (1) is to be a solution vector of the system, then X′ = Kλeλt so that (2) becomes Kλeλt = AKeλt. After dividing out eλt and rearranging, we obtain AK = λK or AK – λK = 0. Since K = IK, the last equation is the same as
(A – λI)K = 0.(3)
The matrix equation (3) is equivalent to the system of linear algebraic equations

(a11 − λ)k1 + a12k2 + ⋯ + a1nkn = 0
a21k1 + (a22 − λ)k2 + ⋯ + a2nkn = 0
⋮
an1k1 + an2k2 + ⋯ + (ann − λ)kn = 0.

Thus to find a nontrivial solution X of (2) we must first find a nontrivial solution of the foregoing system; in other words, we must find a nontrivial vector K that satisfies (3). But in order for (3) to have solutions other than the obvious trivial solution k1 = k2 = ⋯ = kn = 0, we must have
See Theorem 8.6.6 on page 419.
det(A – λI) = 0.
This polynomial equation in λ is called the characteristic equation of the matrix A; its solutions are the eigenvalues of A. A solution K ≠ 0 of (3) corresponding to an eigenvalue λ is called an eigenvector of A. A solution of the homogeneous system (2) is then X = Keλt.
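The eigenvalue computation just described is easy to check numerically. The short sketch below (an illustration only, not part of the text) uses Python's NumPy with an arbitrarily chosen 2 × 2 matrix and verifies that each computed pair λ, K satisfies (3):

```python
import numpy as np

# An arbitrary 2 x 2 coefficient matrix, chosen only for illustration.
A = np.array([[4.0, -2.0],
              [1.0,  1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    K = eigenvectors[:, i]                    # column i is an eigenvector for eigenvalues[i]
    residual = (A - lam * np.eye(2)) @ K      # (A - lambda*I)K should be the zero vector
    print(lam, K, np.allclose(residual, 0.0))
```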
In the following discussion we examine three cases: all the eigenvalues are real and distinct (that is, no two eigenvalues are equal), repeated eigenvalues, and finally, complex eigenvalues.
10.2.1 Distinct Real Eigenvalues
When the n × n matrix A possesses n distinct real eigenvalues λ1, λ2, … , λn, then a set of n linearly independent eigenvectors K1, K2, … , Kn can always be found and

X1 = K1eλ1t, X2 = K2eλ2t, … , Xn = Kneλnt

is a fundamental set of solutions of (2) on the interval (−∞, ∞).
THEOREM 10.2.1 General Solution—Homogeneous Systems
Let λ1, λ2, … , λn be n distinct real eigenvalues of the coefficient matrix A of the homogeneous system (2), and let K1, K2, … , Kn be the corresponding eigenvectors. Then the general solution of (2) on the interval (–∞, ∞) is given by
X = c1K1eλ1t + c2K2eλ2t + ⋯ + cnKneλnt.
EXAMPLE 1 Distinct Eigenvalues
Solve

dx/dt = 2x + 3y
dy/dt = 2x + y.(4)
SOLUTION
We first find the eigenvalues and eigenvectors of the matrix of coefficients.
From the characteristic equation
det(A − λI) = (2 − λ)(1 − λ) − 6 = λ2 − 3λ − 4 = (λ + 1)(λ − 4) = 0
we see that the eigenvalues are λ1 = –1 and λ2 = 4.
Now for λ1 = –1, (3) is equivalent to
3k1 + 3k2 = 0
2k1 + 2k2 = 0.
Thus k1 = −k2. When k2 = −1, the related eigenvector is

K1 = (1, −1)T.
For λ2 = 4, we have

−2k1 + 3k2 = 0
2k1 − 3k2 = 0

so that k1 = 3k2/2, and therefore with k2 = 2 the corresponding eigenvector is

K2 = (3, 2)T.
Since the matrix of coefficients A is a 2 × 2 matrix, and since we have found two linearly independent solutions of (4),

X1 = (1, −1)T e−t and X2 = (3, 2)T e4t,

we conclude that the general solution of the system is

X = c1X1 + c2X2 = c1(1, −1)T e−t + c2(3, 2)T e4t.(5)≡
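A quick numerical cross-check of Example 1 is shown below (a sketch assuming Python with NumPy is available); it simply confirms that AK1 = −K1 and AK2 = 4K2, which is all that is required for X1 and X2 to satisfy (4):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [2.0, 1.0]])      # coefficient matrix of system (4)
K1 = np.array([1.0, -1.0])      # eigenvector for lambda1 = -1
K2 = np.array([3.0,  2.0])      # eigenvector for lambda2 = 4

# X1 = K1*exp(-t) solves X' = AX exactly when A K1 = -1*K1; similarly for X2.
print(np.allclose(A @ K1, -1 * K1))   # True
print(np.allclose(A @ K2,  4 * K2))   # True
```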
Phase Portrait
For the sake of review, you should keep firmly in mind that a solution of a system of linear first-order differential equations, when written in terms of matrices, is simply an alternative to the method that we employed in Section 3.12, namely, listing the individual functions and the relationship between the constants. If we add the vectors on the right-hand side of (5) and then equate the entries with the corresponding entries in the vector on the left, we obtain the more familiar statement
x = c1e–t + 3c2e4t, y = –c1e–t + 2c2e4t.
As pointed out in Section 10.1, we can interpret these equations as parametric equations of a curve or trajectory in the xy-plane or phase plane. The three graphs shown in FIGURE 10.2.1, x(t) in the tx-plane, y(t) in the ty-plane, and the trajectory in the phase plane, correspond to the choice of constants c1 = c2 = 1 in the solution. A collection of trajectories in the phase plane as shown in FIGURE 10.2.2 is said to be a phase portrait of the given linear system. What appear to be two red lines in Figure 10.2.2 are actually four half-lines defined parametrically in the first, second, third, and fourth quadrants by the solutions X2, −X1, −X2, and X1, respectively. For example, the Cartesian equations y = (2/3)x, x > 0, and y = −x, x > 0, of the half-lines in the first and fourth quadrants were obtained by eliminating the parameter t in the solutions x = 3e4t, y = 2e4t, and x = e−t, y = −e−t, respectively. Moreover, each eigenvector can be visualized as a two-dimensional vector lying along one of these half-lines. The eigenvector K2 = (3, 2)T lies along y = (2/3)x in the first quadrant and K1 = (1, −1)T lies along y = −x in the fourth quadrant; each vector starts at the origin, with K2 terminating at the point (3, 2) and K1 terminating at (1, −1).
FIGURE 10.2.1 A particular solution of (5) gives three different curves in three different coordinate planes
FIGURE 10.2.2 A phase portrait of system (4)
The origin is not only a constant solution, x = 0, y = 0, of every 2 × 2 homogeneous linear system X′ = AX but is also an important point in the qualitative study of such systems. If we think in physical terms, the arrowheads on a trajectory in Figure 10.2.2 indicate the direction in which a particle with coordinates (x(t), y(t)) on a trajectory at time t moves as time increases. Observe that the arrowheads, with the exception of those on the half-lines in the second and fourth quadrants, indicate that a particle moves away from the origin as time t increases. If we imagine time ranging from −∞ to ∞, then inspection of the solution x = c1e−t + 3c2e4t, y = −c1e−t + 2c2e4t, c1 ≠ 0, c2 ≠ 0, shows that a trajectory, or moving particle, “starts” asymptotic to one of the half-lines defined by X1 or −X1 (since e4t is negligible for t → −∞) and “finishes” asymptotic to one of the half-lines defined by X2 and −X2 (since e−t is negligible for t → ∞).
We note in passing that Figure 10.2.2 represents a phase portrait that is typical of all 2 × 2 homogeneous linear systems X′ = AX with real eigenvalues of opposite signs. See Problem 19 in Exercises 10.2. Moreover, phase portraits in the two cases when distinct real eigenvalues have the same algebraic sign would be typical portraits of all such 2 × 2 linear systems; the only difference is that the arrowheads would indicate that a particle would move away from the origin on any trajectory as t → ∞ when both λ1 and λ2 are positive and would move toward the origin on any trajectory when both λ1 and λ2 are negative. Consequently, it is common to call the origin a repeller in the case λ1 > 0, λ2 > 0, and an attractor in the case λ1 < 0, λ2 < 0. See Problem 20 in Exercises 10.2. The origin in Figure 10.2.2 is neither a repeller nor an attractor. Investigation of the remaining case when λ = 0 is an eigenvalue of a 2 × 2 homogeneous linear system is left as an exercise. See Problem 52 in Exercises 10.2.
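A phase portrait such as Figure 10.2.2 can be generated with almost any plotting software. The following sketch (one possible approach, assuming Python with NumPy and Matplotlib) draws the direction field of system (4) as streamlines together with the four red half-line trajectories along the eigenvector directions:

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 3.0],
              [2.0, 1.0]])                      # coefficient matrix of system (4)

x, y = np.meshgrid(np.linspace(-4, 4, 30), np.linspace(-4, 4, 30))
u = A[0, 0] * x + A[0, 1] * y                   # x' = 2x + 3y
v = A[1, 0] * x + A[1, 1] * y                   # y' = 2x + y

plt.streamplot(x, y, u, v, density=1.2)
# half-line trajectories along the eigenvector directions
t = np.linspace(0, 4, 50)
plt.plot( 3 * t,  2 * t, 'r'); plt.plot(-3 * t, -2 * t, 'r')   # along K2 = (3, 2)
plt.plot( 1 * t, -1 * t, 'r'); plt.plot(-1 * t,  1 * t, 'r')   # along K1 = (1, -1)
plt.xlim(-4, 4); plt.ylim(-4, 4)
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```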
EXAMPLE 2 Distinct Eigenvalues
Solve

X′ = [ −4   1    1 ]
     [  1   5   −1 ] X.(6)
     [  0   1   −3 ]
SOLUTION
Using the cofactors of the third row, we find
det(A − λI) = | −4 − λ     1         1     |
              |    1      5 − λ     −1     | = −(λ + 3)(λ + 4)(λ − 5) = 0,
              |    0        1      −3 − λ  |
and so the eigenvalues are λ1 = −3, λ2 = −4, λ3 = 5.
For λ1 = −3, Gauss–Jordan elimination applied to the augmented matrix (A + 3I | 0) shows that k1 = k3 and k2 = 0. The choice k3 = 1 gives an eigenvector and corresponding solution vector

K1 = (1, 0, 1)T, X1 = (1, 0, 1)T e−3t.(7)
Similarly, for λ2 = −4, row reduction of (A + 4I | 0) implies k1 = 10k3 and k2 = −k3. Choosing k3 = 1, we get a second eigenvector and solution vector

K2 = (10, −1, 1)T, X2 = (10, −1, 1)T e−4t.(8)
Finally, when λ3 = 5, row reduction of the augmented matrix (A − 5I | 0) yields k1 = k3 and k2 = 8k3, so with k3 = 1 we obtain a third eigenvector and solution vector

K3 = (1, 8, 1)T, X3 = (1, 8, 1)T e5t.(9)
The general solution of (6) is a linear combination of the solution vectors in (7), (8), and (9):

X = c1(1, 0, 1)T e−3t + c2(10, −1, 1)T e−4t + c3(1, 8, 1)T e5t.≡
Use of Computers
Software packages such as MATLAB, Mathematica, and Maple can be real time savers in finding eigenvalues and eigenvectors of a matrix. For example, to find the eigenvalues and eigenvectors of the matrix of coefficients in (6) using Mathematica, we first input the definition of the matrix by rows:
m = { {−4, 1, 1}, { 1, 5, −1}, { 0, 1, −3} }.
The commands Eigenvalues[m] and Eigenvectors[m] given in sequence yield
{5, −4, −3} and { { 1, 8, 1}, { 10, −1, 1}, { 1, 0, 1} },
respectively. In Mathematica, eigenvalues and eigenvectors can also be obtained at the same time by using Eigensystem[m].
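A comparable computation can be carried out in other environments. For instance, here is a rough equivalent (a sketch assuming Python with NumPy rather than a CAS); note that NumPy returns unit-length eigenvectors, so each column of its output is a scalar multiple of the corresponding integer eigenvector listed above:

```python
import numpy as np

m = np.array([[-4.0, 1.0,  1.0],
              [ 1.0, 5.0, -1.0],
              [ 0.0, 1.0, -3.0]])

eigenvalues, eigenvectors = np.linalg.eig(m)
print(eigenvalues)     # approximately 5, -4, -3 (in some order)
print(eigenvectors)    # columns are normalized eigenvectors in the same order
```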
10.2.2 Repeated Eigenvalues
Of course, not all of the n eigenvalues λ1, λ2, … , λn of an n × n matrix A need be distinct; that is, some of the eigenvalues may be repeated. For example, the characteristic equation of the coefficient matrix in the system
X′ = [ 3   −18 ] X(10)
     [ 2    −9 ]

is readily shown to be (λ + 3)2 = 0, and therefore λ1 = λ2 = −3 is a root of multiplicity two. For this value we find the single eigenvector K1 = (3, 1)T, so that

X1 = (3, 1)T e−3t(11)
is one solution of (10). But since we are obviously interested in forming the general solution of the system, we need to pursue the question of finding a second solution.
In general, if m is a positive integer and (λ − λ1)m is a factor of the characteristic equation while (λ − λ1)m+1 is not a factor, then λ1 is said to be an eigenvalue of multiplicity m. The next three examples illustrate the following cases:
- For some n × n matrices A it may be possible to find m linearly independent eigenvectors K1, K2, … , Km corresponding to an eigenvalue λ1 of multiplicity m ≤ n. In this case the general solution of the system contains the linear combination

c1K1eλ1t + c2K2eλ1t + ⋯ + cmKmeλ1t.
- If there is only one eigenvector corresponding to the eigenvalue λ1 of multiplicity m, then m linearly independent solutions of the form

X1 = K11eλ1t
X2 = K21teλ1t + K22eλ1t
⋮
Xm = Km1 (t^(m−1)/(m − 1)!) eλ1t + Km2 (t^(m−2)/(m − 2)!) eλ1t + ⋯ + Kmmeλ1t,

where the Kij are column vectors, can always be found.
Eigenvalue of Multiplicity Two
We begin by considering eigenvalues of multiplicity two. In the first example we illustrate a matrix for which we can find two distinct eigenvectors corresponding to a repeated eigenvalue.
EXAMPLE 3 Repeated Eigenvalues
Solve

X′ = [  1   −2    2 ]
     [ −2    1   −2 ] X.
     [  2   −2    1 ]
SOLUTION
Expanding the determinant in the characteristic equation det(A − λI) = 0 yields −(λ + 1)2(λ − 5) = 0. We see that λ1 = λ2 = −1 and λ3 = 5.
For λ1 = −1, Gauss–Jordan elimination applied to (A + I | 0) immediately gives a single nonzero row, which means k1 − k2 + k3 = 0 or k1 = k2 − k3. The choices k2 = 1, k3 = 0 and k2 = 1, k3 = 1 yield, in turn, k1 = 1 and k1 = 0. Thus two eigenvectors corresponding to λ1 = −1 are

K1 = (1, 1, 0)T and K2 = (0, 1, 1)T.

Since neither eigenvector is a constant multiple of the other, we have found, corresponding to the same eigenvalue, two linearly independent solutions

X1 = (1, 1, 0)T e−t and X2 = (0, 1, 1)T e−t.
Lastly, for λ3 = 5, the reduction of (A − 5I | 0) implies k1 = k3 and k2 = −k3. Picking k3 = 1 gives k1 = 1, k2 = −1, and thus a third eigenvector is

K3 = (1, −1, 1)T.

We conclude that the general solution of the system is

X = c1(1, 1, 0)T e−t + c2(0, 1, 1)T e−t + c3(1, −1, 1)T e5t.
The matrix of coefficients A in Example 3 is a special kind of matrix known as a symmetric matrix. An n × n matrix A is said to be symmetric if its transpose AT (where the rows and columns are interchanged) is the same as A; that is, if AT = A. It can be proved that if the matrix A in the system X′ = AX is symmetric and has real entries, then we can always find n linearly independent eigenvectors K1, K2, … , Kn, and the general solution of such a system is as given in Theorem 10.2.1. As illustrated in Example 3, this result holds even when some of the eigenvalues are repeated.
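For symmetric matrices most numerical libraries provide a dedicated routine. As a sketch (assuming NumPy), numpy.linalg.eigh applied to the coefficient matrix of Example 3 returns real eigenvalues and orthonormal, hence linearly independent, eigenvectors even though λ = −1 is repeated:

```python
import numpy as np

A = np.array([[ 1.0, -2.0,  2.0],
              [-2.0,  1.0, -2.0],
              [ 2.0, -2.0,  1.0]])          # symmetric: A equals A.T

w, V = np.linalg.eigh(A)                    # eigh is tailored to symmetric matrices
print(w)                                    # approximately [-1., -1., 5.]
print(np.allclose(V.T @ V, np.eye(3)))      # True: columns are orthonormal, hence independent
```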
Second Solution
Now suppose that λ1 is an eigenvalue of multiplicity two and that there is only one eigenvector associated with this value. A second solution can be found of the form

X2 = Kteλ1t + Peλ1t,(12)

where

K = (k1, k2, … , kn)T and P = (p1, p2, … , pn)T.
To see this we substitute (12) into the system X′ = AX and simplify:

(AK − λ1K)teλ1t + (AP − λ1P − K)eλ1t = 0.

Since this last equation is to hold for all values of t, we must have
(A − λ1I)K = 0(13)
and (A − λ1I)P = K.(14)
Equation (13) simply states that K must be an eigenvector of A associated with λ1. By solving (13), we find one solution X1 = K. To find the second solution X2 we need only solve the additional system (14) for the vector P.
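Numerically, (13) and (14) can be solved in succession. One hedged sketch (assuming Python with NumPy and SciPy, and using the coefficient matrix of system (10)) finds K from the null space of A − λ1I and then one particular solution P by least squares; any particular solution of (14) will do:

```python
import numpy as np
from scipy.linalg import null_space

A   = np.array([[3.0, -18.0],
                [2.0,  -9.0]])              # coefficient matrix of (10)
lam = -3.0                                  # repeated eigenvalue
M   = A - lam * np.eye(2)                   # the singular matrix A - lambda*I

K = null_space(M)[:, 0]                     # a solution of (13): (A - lam*I)K = 0
P, *_ = np.linalg.lstsq(M, K, rcond=None)   # one particular solution of (14): (A - lam*I)P = K

print(K)                                    # a multiple of (3, 1)
print(np.allclose(M @ P, K))                # True
```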
EXAMPLE 4 Repeated Eigenvalues
Find the general solution of the system given in (10).
SOLUTION
From (11) we know that λ1 = −3 and that one solution is X1 = (3, 1)T e−3t. Identifying K = (3, 1)T and P = (p1, p2)T, we find from (14) that we must now solve

6p1 − 18p2 = 3
2p1 − 6p2 = 1.

Since this system is obviously equivalent to one equation, we have an infinite number of choices for p1 and p2. For example, by choosing p1 = 1 we find p2 = 1/6. However, for simplicity, we shall choose p1 = 1/2 so that p2 = 0. Hence P = (1/2, 0)T. Thus from (12),

X2 = (3, 1)T te−3t + (1/2, 0)T e−3t.
The general solution of (10) is then

X = c1(3, 1)T e−3t + c2[(3, 1)T te−3t + (1/2, 0)T e−3t].≡
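As a check of Example 4 (a sketch assuming NumPy), the second solution X2 = Kte−3t + Pe−3t can be differentiated by hand and compared with AX2 at a few values of t:

```python
import numpy as np

A = np.array([[3.0, -18.0],
              [2.0,  -9.0]])                # coefficient matrix of (10)
K = np.array([3.0, 1.0])
P = np.array([0.5, 0.0])

def X2(t):                                   # second solution from (12)
    return K * t * np.exp(-3 * t) + P * np.exp(-3 * t)

def dX2(t):                                  # derivative of X2, computed by hand
    return (K - 3 * K * t - 3 * P) * np.exp(-3 * t)

for t in (0.0, 0.7, 2.3):
    print(np.allclose(dX2(t), A @ X2(t)))    # True at every sampled t
```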
By assigning various values to c1 and c2 in the solution in Example 4, we can plot trajectories of the system in (10). A phase portrait of (10) is given in FIGURE 10.2.3. The solutions X1 and −X1 determine two half-lines y = x/3, x > 0, and y = x/3, x < 0, respectively, that are shown in red in Figure 10.2.3. Because the single eigenvalue is negative and e−3t → 0 as t → ∞ on every trajectory, we have (x(t), y(t)) → (0, 0) as t → ∞. This is why the arrowheads in Figure 10.2.3 indicate that a particle on any trajectory moves toward the origin as time increases and why the origin is an attractor in this case. Moreover, a moving particle on a trajectory x = 3c1e−3t + c2(3te−3t + (1/2)e−3t), y = c1e−3t + c2te−3t, c2 ≠ 0, approaches (0, 0) tangentially to one of the half-lines as t → ∞. In contrast, when the repeated eigenvalue is positive the situation is reversed and the origin is a repeller. See Problem 23 in Exercises 10.2. Analogous to Figure 10.2.2, Figure 10.2.3 is typical of all 2 × 2 homogeneous linear systems X′ = AX that have repeated negative eigenvalues. See Problem 34 in Exercises 10.2.
FIGURE 10.2.3 A phase portrait of system (10)
Eigenvalue of Multiplicity Three
When the coefficient matrix A has only one eigenvector associated with an eigenvalue λ1 of multiplicity three, we can find a solution of the form (12) and a third solution of the form
X3 = K(t^2/2)eλ1t + Pteλ1t + Qeλ1t,(15)

where

K = (k1, k2, … , kn)T, P = (p1, p2, … , pn)T, and Q = (q1, q2, … , qn)T.
By substituting (15) into the system X′ = AX, we find that the column vectors K, P, and Q must satisfy
(A − λ1I)K = 0(16)
(A − λ1I)P = K(17)
and (A − λ1I)Q = P.(18)
Of course, the solutions of (16) and (17) can be used in forming the solutions X1 and X2.
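The chain (16)–(18) can also be solved numerically in succession. The sketch below is illustrative only; it uses a hypothetical upper-triangular matrix (not one taken from the text) having the single eigenvalue 2 of multiplicity three and only one eigenvector:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical matrix, for illustration only: eigenvalue 2 of multiplicity three,
# a single eigenvector, so the chain (16)-(18) must be solved in succession.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
M = A - 2.0 * np.eye(3)

K = null_space(M)[:, 0]                      # (16): (A - 2I)K = 0
P, *_ = np.linalg.lstsq(M, K, rcond=None)    # (17): (A - 2I)P = K (one particular solution)
Q, *_ = np.linalg.lstsq(M, P, rcond=None)    # (18): (A - 2I)Q = P (one particular solution)

print(np.allclose(M @ P, K), np.allclose(M @ Q, P))   # True True
```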
EXAMPLE 5 Repeated Eigenvalues
Solve

X′ = [ 2   1   6 ]
     [ 0   2   5 ] X.
     [ 0   0   2 ]
SOLUTION
The characteristic equation (λ − 2)3 = 0 shows that λ1 = 2 is an eigenvalue of multiplicity three. By solving (A − 2I)K = 0 we find the single eigenvector

K = (1, 0, 0)T.

We next solve the systems (A − 2I)P = K and (A − 2I)Q = P in succession and find that

P = (0, 1, 0)T and Q = (0, −6/5, 1/5)T.
Using (12) and (15), we see that the general solution of the system is

X = c1(1, 0, 0)T e2t + c2[(1, 0, 0)T te2t + (0, 1, 0)T e2t] + c3[(1, 0, 0)T (t^2/2)e2t + (0, 1, 0)T te2t + (0, −6/5, 1/5)T e2t].
REMARKS
When an eigenvalue λ1 has multiplicity m, then we can either find m linearly independent eigenvectors or the number of corresponding eigenvectors is less than m. Hence the two cases listed on page 611 are not all the possibilities under which a repeated eigenvalue can occur. It could happen, say, that a 5 × 5 matrix has an eigenvalue of multiplicity 5 and there exist three corresponding linearly independent eigenvectors. See Problems 33 and 53 in Exercises 10.2.
10.2.3 Complex Eigenvalues
If λ1 = α + iβ and λ2 = α − iβ, β > 0, i2 = −1, are complex eigenvalues of the coefficient matrix A, we can then certainly expect their corresponding eigenvectors to also have complex entries.*
For example, the characteristic equation of the system

dx/dt = 6x − y
dy/dt = 5x + 4y(19)

is

det(A − λI) = λ2 − 10λ + 29 = 0.

From the quadratic formula we find λ1 = 5 + 2i, λ2 = 5 − 2i.
Now for λ1 = 5 + 2i we must solve

(1 − 2i)k1 − k2 = 0
5k1 − (1 + 2i)k2 = 0.

Since k2 = (1 − 2i)k1,† the choice k1 = 1 gives the following eigenvector and a solution vector:

K1 = (1, 1 − 2i)T, X1 = (1, 1 − 2i)T e(5+2i)t.

In like manner, for λ2 = 5 − 2i we find

K2 = (1, 1 + 2i)T, X2 = (1, 1 + 2i)T e(5−2i)t.
We can verify by means of the Wronskian that these solution vectors are linearly independent, and so the general solution of (19) is

X = c1(1, 1 − 2i)T e(5+2i)t + c2(1, 1 + 2i)T e(5−2i)t.(20)

Note that the entries in K2 corresponding to λ2 are the conjugates of the entries in K1 corresponding to λ1. The conjugate of λ1 is, of course, λ2. We write this as λ2 = λ̄1 and K2 = K̄1. We have illustrated the following general result.
THEOREM 10.2.2 Solutions Corresponding to a Complex Eigenvalue
Let A be the coefficient matrix having real entries of the homogeneous system (2), and let K1 be an eigenvector corresponding to the complex eigenvalue λ1 = α + iβ, α and β real. Then

X1 = K1eλ1t and X2 = K̄1eλ̄1t

are solutions of (2).
It is desirable and relatively easy to rewrite a solution such as (20) in terms of real functions. To this end we first use Euler’s formula to write

e(5+2i)t = e5t(cos 2t + i sin 2t) and e(5−2i)t = e5t(cos 2t − i sin 2t).

Then, multiplying out the complex numbers and collecting terms, (20) becomes

X = (c1 + c2)[(1, 1)T cos 2t − (0, −2)T sin 2t]e5t + (c1 − c2)i[(0, −2)T cos 2t + (1, 1)T sin 2t]e5t.

If we let C1 = c1 + c2 and C2 = (c1 − c2)i, then the last line can be written

X = C1X1 + C2X2,(21)

where

X1 = [(1, 1)T cos 2t − (0, −2)T sin 2t]e5t

and

X2 = [(0, −2)T cos 2t + (1, 1)T sin 2t]e5t.
It is now important to realize that the two vectors X1 and X2 in (21) are themselves linearly independent real solutions of the original system. Consequently, we are justified in ignoring the relationship between C1, C2 and c1, c2, and we can regard C1 and C2 as completely arbitrary and real. In other words, the linear combination (21) is an alternative general solution of (19).
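It is easy to confirm numerically that the real vectors X1 and X2 in (21) satisfy X′ = AX for system (19). A minimal sketch (assuming NumPy) compares a centered finite-difference derivative against AX at a few sample times:

```python
import numpy as np

A = np.array([[6.0, -1.0],
              [5.0,  4.0]])                  # coefficient matrix of (19)

def X1(t):
    return np.exp(5*t) * np.array([np.cos(2*t), np.cos(2*t) + 2*np.sin(2*t)])

def X2(t):
    return np.exp(5*t) * np.array([np.sin(2*t), np.sin(2*t) - 2*np.cos(2*t)])

def deriv(X, t, h=1e-6):                     # centered finite-difference derivative
    return (X(t + h) - X(t - h)) / (2 * h)

for t in (0.0, 0.4, 1.1):
    print(np.allclose(deriv(X1, t), A @ X1(t), atol=1e-4),
          np.allclose(deriv(X2, t), A @ X2(t), atol=1e-4))   # True True at each t
```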
The foregoing process can be generalized. Let K1 be an eigenvector of the coefficient matrix A (with real entries) corresponding to the complex eigenvalue λ1 = α + iβ. Then the two solution vectors in Theorem 10.2.2 can be written as
By the superposition principle, Theorem 10.1.2, the following vectors are also solutions:
For any complex number z = a + ib, both and
are real numbers.
Therefore, the entries in the column vectors and
are real numbers. By defining
(22)
we are led to the following theorem.
THEOREM 10.2.3 Real Solutions Corresponding to a Complex Eigenvalue
Let λ1 = α + iβ be a complex eigenvalue of the coefficient matrix A in the homogeneous system (2), and let B1 and B2 denote the column vectors defined in (22). Then
X1 = [B1 cos βt − B2 sin βt]eαt and X2 = [B2 cos βt + B1 sin βt]eαt(23)
are linearly independent solutions of (2) on (–∞, ∞).
The matrices B1 and B2 in (22) are often denoted by
B1 = Re(K1) and B2 = Im(K1) (24)
since these vectors are, respectively, the real and imaginary parts of the eigenvector K1. For example, (21) follows from (23) with

K1 = (1, 1 − 2i)T, B1 = Re(K1) = (1, 1)T, and B2 = Im(K1) = (0, −2)T.
EXAMPLE 6 Complex Eigenvalues
Solve the initial-value problem
X′ = [  2    8 ] X, X(0) = (2, −1)T.(25)
     [ −1   −2 ]
SOLUTION
First we obtain the eigenvalues from

det(A − λI) = λ2 + 4 = 0.

The eigenvalues are λ1 = 2i and λ2 = λ̄1 = −2i. For λ1 the system

(2 − 2i)k1 + 8k2 = 0
−k1 − (2 + 2i)k2 = 0

gives k1 = −(2 + 2i)k2. By choosing k2 = −1 we get

K1 = (2 + 2i, −1)T.

Now from (24) we form

B1 = Re(K1) = (2, −1)T and B2 = Im(K1) = (2, 0)T.
Since α = 0, it follows from (23) that the general solution of the system is
X = c1[(2, −1)T cos 2t − (2, 0)T sin 2t] + c2[(2, 0)T cos 2t + (2, −1)T sin 2t].(26)
Some graphs of the curves or trajectories defined by solution (26) of the system are illustrated in the phase portrait in FIGURE 10.2.4. Now the initial condition X(0) = (2, −1)T, or equivalently x(0) = 2 and y(0) = −1, yields the algebraic system 2c1 + 2c2 = 2, −c1 = −1, whose solution is c1 = 1, c2 = 0. Thus the solution of the problem is X = (2, −1)T cos 2t − (2, 0)T sin 2t. The specific trajectory defined parametrically by the particular solution x = 2 cos 2t − 2 sin 2t, y = −cos 2t is the red curve in Figure 10.2.4. Note that this curve passes through (2, −1).≡
FIGURE 10.2.4 A phase portrait of system (25) in Example 6
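Finally, the closed-form answer to the initial-value problem in Example 6 can be checked against a numerical integrator. The sketch below (assuming SciPy's solve_ivp) integrates X′ = AX from X(0) = (2, −1)T and compares the result with x = 2 cos 2t − 2 sin 2t, y = −cos 2t:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 2.0,  8.0],
              [-1.0, -2.0]])                 # coefficient matrix of (25)

sol = solve_ivp(lambda t, X: A @ X, (0.0, 3.0), [2.0, -1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 3.0, 7)
x_exact = 2*np.cos(2*t) - 2*np.sin(2*t)
y_exact = -np.cos(2*t)

print(np.allclose(sol.sol(t)[0], x_exact, atol=1e-6))   # True
print(np.allclose(sol.sol(t)[1], y_exact, atol=1e-6))   # True
```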
10.2 Exercises Answers to selected odd-numbered problems begin on page ANS-27.
10.2.1 Distinct Real Eigenvalues
In Problems 1–12, find the general solution of the given system.
dx/dt = x + 2y
dy/dt = 4x + 3y
dx/dt = 2x + 2y
dy/dt = x + 3y
= −4x + 2y
= −
x + 2y
= −
x + 2y
=
x − 2y
- X′ =
X
- X′ =
X
dx/dt = x + y − z
dy/dt = 2y
dz/dt = y − z
dx/dt = 2x − 7y
dy/dt = 5x + 10y + 4z
dz/dt = 5y + 2z
- X′ =
X
- X′ =
X
- X′ =
X
- X′ =
X
In Problems 13 and 14, solve the given initial-value problem.
- X′ =
X, X(0) =
- Consider the large mixing tanks shown in FIGURE 10.2.5. Suppose that both tanks A and B initially contain 100 gallons of brine. Liquid is pumped in and out of the tanks as indicated in the figure; the mixture pumped between and out of the tanks is assumed to be well-stirred.
- Construct a mathematical model in the form of a linear system of first-order differential equations for the number of pounds x1(t) and x2(t) of salt in tanks A and B, respectively, at time t. Write the system in matrix form. [Hint: See Section 2.9 and Problem 52, Chapter 2 in Review.]
- Use the eigenvalue method of this section to solve the linear system in part (a) subject to x1(0) = 20, x2(0) = 5.
- Use a graphing utility or CAS to plot the graphs of x1(t) and x2(t) in the same coordinate plane.
- Suppose the system of mixing tanks is to be turned off when the number of pounds of salt in tank B equals that in tank A. Use a root-finding application of a CAS or calculator to approximate that time.
FIGURE 10.2.5 Mixing tanks in Problem 15
- In Problem 26 of Exercises 3.12 you were asked to solve the following linear system
using elimination techniques. Recall, this linear system is a mathematical model for the number of pounds of salt x1(t), x2(t), and x3(t) in the connected mixing tanks A, B, and C shown in Figure 2.9.7 on page 102.
- Use the eigenvalue method of this section to solve the system subject to x1(0) = 15, x2(0) = 10, x3(0) = 5.
- What are lim t→∞ x1(t), lim t→∞ x2(t), and lim t→∞ x3(t)? Interpret this result.
Computer Lab Assignments
In Problems 17 and 18, use a CAS or linear algebra software as an aid in finding the general solution of the given system.
- X′ =
X
- X′ =
X
-
- Use computer software to obtain the phase portrait of the system in Problem 5. If possible, include the arrowheads as in Figure 10.2.2. Also, include four half-lines in your phase portrait.
- Obtain the Cartesian equations of each of the four half-lines in part (a).
- Draw the eigenvectors on your phase portrait of the system.
- Find phase portraits for the systems in Problems 2 and 4. For each system, find any half-line trajectories and include these lines in your phase portrait.
10.2.2 Repeated Eigenvalues
In Problems 21–30, find the general solution of the given system.
dx/dt = 3x − y
dy/dt = 9x − 3y
dx/dt = −6x + 5y
dy/dt = −5x + 4y
- X′ =
X
- X′ =
X
dx/dt = 3x − y − z
dy/dt = x + y − z
dz/dt = x − y + z
dx/dt = 3x + 2y + 4z
dy/dt = 2x + 2z
dz/dt = 4x + 2y + 3z
- X′ =
X
- X′ =
X
- X′ =
X
- X′ =
X
In Problems 31 and 32, solve the given initial-value problem.
- X′ =
X, X(0) =
- X′ =
X, X(0) =
- Show that the 5 × 5 matrix
has an eigenvalue λ1 of multiplicity 5. Show that three linearly independent eigenvectors corresponding to λ1 can be found.
Computer Lab Assignment
- Find phase portraits for the systems in Problems 22 and 23. For each system, find any half-line trajectories and include these lines in your phase portrait.
10.2.3 Complex Eigenvalues
In Problems 35–46, find the general solution of the given system.
dx/dt = 6x − y
dy/dt = 5x + 2y
dx/dt = x + y
dy/dt = −2x − y
dx/dt = 5x + y
dy/dt = −2x + 3y
dx/dt = 4x + 5y
dy/dt = −2x + 6y
- X′ =
X
- X′ =
X
dx/dt = z
dy/dt = −z
dz/dt = y
dx/dt = 2x + y + 2z
dy/dt = 3x + 6z
dz/dt = −4x − 3z
- X′ =
X
- X′ =
X
- X′ =
X
- X′ =
X
In Problems 47 and 48, solve the given initial-value problem.
- X′ =
X, X(0) =
- X′ =
X, X(0) =
-
- In the closed system shown in FIGURE 10.2.6 the three large tanks A, B, and C initially contain the number of gallons of brine indicated. Construct a mathematical model for the number of pounds of salt x1(t), x2(t), and x3(t) in the tanks A, B, and C at time t, respectively.
- Use the eigenvalue method of this section to solve the system in part (a) subject to x1(0) = 30, x2(0) = 20, x3(0) = 5.
FIGURE 10.2.6 Mixing tanks in Problem 49
- For the system in Problem 49:
- Show that
Interpret this result.
- What are lim t→∞ x1(t), lim t→∞ x2(t), and lim t→∞ x3(t)? Interpret this result.
- Show that
Computer Lab Assignments
- Find phase portraits for the systems in Problems 38–40.
- Solve each of the following linear systems.
- X′ =
X
- X′ =
X
Find a phase portrait of each system. What is the geometric significance of the line y = −x in each portrait?
- X′ =
Discussion Problems
- Consider the 5 × 5 matrix given in Problem 33. Solve the system X′ = AX without the aid of matrix methods, but write the general solution using the matrix notation. Use the general solution as a basis for a discussion on how the system can be solved using the matrix methods of this section. Carry out your ideas.
- Obtain a Cartesian equation of the curve defined parametrically by the solution of the linear system in Example 6. Identify the curve passing through (2, −1) in Figure 10.2.4. [Hint: Compute x2, y2, and xy.]
- Examine your phase portraits in Problem 51. Under what conditions will the phase portrait of a 2 × 2 homogeneous linear system with complex eigenvalues consist of a family of closed curves? Consist of a family of spirals? Under what conditions is the origin (0, 0) a repeller? An attractor?
- The system of linear second-order differential equations
m1x1″ = −k1x1 + k2(x2 − x1)
m2x2″ = −k2(x2 − x1)(27)
describes the motion of two coupled spring/mass systems (see Figure 3.12.1). We have already solved a special case of this system in Sections 3.12 and 4.6. In this problem we describe yet another method for solving the system.
- Show that the system in (27) can be written as the matrix equation X″ = AX, where
- If a solution is assumed of the form X = Keωt, show that X″ = AX yields
(A − λI)K = 0 where λ = ω2.
- Show that if m1 = 1, m2 = 1, k1 = 3, and k2 = 2, a solution of the system is
- Show that the solution in part (c) can be written as
* When the characteristic equation has real coefficients, complex eigenvalues always appear in conjugate pairs.
† Note that the second equation in the system is simply 1 + 2i times the first equation.