Purdue MA 26500 Spring 2022 Final Exam Solutions

Here are the solutions and analysis for the Purdue MA 26500 Spring 2022 final exam. This exam covers all topics from Chapter 1 (Linear Equations in Linear Algebra) through Chapter 7 Section 1 (Diagonalization of Symmetric Matrices).


The Purdue Department of Mathematics offers the linear algebra course MA 26500 every semester. It is mandatory for undergraduate students in almost all science and engineering majors.

Textbook and Study Guide

Disclosure: This blog site is reader-supported. When you buy through the affiliate links below, as an Amazon Associate, I earn a tiny commission from qualifying purchases. Thank you.

The MA 26500 textbook is Linear Algebra and Its Applications (6th Edition) by David C. Lay, Steven R. Lay, and Judi J. McDonald. The authors have also published a student study guide for it, which is available for purchase on Amazon as well.

Exam Information

The MA 26500 final exam covers all the topics from Chapter 1 to Chapter 7 Section 1 in the textbook. It is a two-hour comprehensive common final given during finals week, consisting of 25 multiple-choice questions.

Spring 2022 Final Exam Solutions

Problem 1

Problem 1 Solution

Start with the augmented matrix of the system and perform row reduction as shown below

\[ \left[\begin{array}{ccc|c}1&2&3&16\\2&0&-2&14\\3&2&1&3a\end{array}\right]\sim \left[\begin{array}{ccc|c}1&2&3&16\\0&-4&-8&-18\\0&-4&-8&3a-48\end{array}\right]\sim \left[\begin{array}{ccc|c}1&2&3&16\\0&-4&-8&-18\\0&0&0&3a-30\end{array}\right] \]

Clearly, this system of equations is consistent when \(a=10\). So the answer is B.
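The consistency condition can be double-checked numerically: a system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank. The NumPy sketch below is my own verification, not part of the exam.

```python
import numpy as np

# Coefficient matrix and right-hand side with a = 10, so 3a = 30
A = np.array([[1, 2, 3], [2, 0, -2], [3, 2, 1]], dtype=float)
b = np.array([16, 14, 30], dtype=float)

# Consistent iff rank(A) == rank([A | b])
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A == rank_Ab)  # True: the system is consistent when a = 10
```

Changing the last entry to anything other than 30 makes the augmented rank jump to 3, so the system is inconsistent for every other value of \(a\).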

Problem 2

Problem 2 Solution

First review the properties of determinants:

Let A be a square matrix.
a. If a multiple of one row of \(A\) is added to another row to produce a matrix \(B\), then \(\det B =\det A\).
b. If two rows of \(A\) are interchanged to produce \(B\), then \(\det B=-\det A\).
c. If one row of A is multiplied by \(k\) to produce B, then \(\det B=k\cdot\det A\).

Also since \(\det A^T=\det A\), a row operation on \(A^T\) amounts to a column operation on \(A\). The above property is true for column operations as well.

With these properties in mind, we can do the following

\[\begin{align} \begin{vmatrix}d&2a&g+d\\e&2b&h+e\\f&2c&i+f\end{vmatrix} &=2\times \begin{vmatrix}d&a&g+d\\e&b&h+e\\f&c&i+f\end{vmatrix}= 2\times \begin{vmatrix}d&a&g\\e&b&h\\f&c&i\end{vmatrix}= 2\times (-1)\times \begin{vmatrix}a&d&g\\b&e&h\\c&f&i\end{vmatrix}\\ &=(-2)\times \begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}=(-2)\times 1=-2 \end{align}\]

So the answer is A.
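The chain of determinant properties can be spot-checked numerically. The sketch below assumes, as in the problem, that \(\det\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\end{bmatrix}=1\); the concrete entries are made up to satisfy that condition.

```python
import numpy as np

# A made-up matrix [[a,b,c],[d,e,f],[g,h,i]] with determinant 1 (upper triangular)
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]

# The matrix from the problem: columns (d,e,f), 2(a,b,c), (g+d, h+e, i+f)
T = np.array([[d, 2*a, g + d],
              [e, 2*b, h + e],
              [f, 2*c, i + f]])
print(np.isclose(np.linalg.det(T), -2.0))  # True
```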

Problem 3

Problem 3 Solution

Denote \(A=BCB^{-1}\). Then it can be seen that \[\det A=\det (BCB^{-1})=\det B\det C\det B^{-1}=\det (BB^{-1})\det C=\det C\]

Thus we can directly write down the determinant calculation process like below (applying row operations) \[ \begin{vmatrix}1&2&3\\1&4&5\\-1&3&7\end{vmatrix}= \begin{vmatrix}1&2&3\\0&2&2\\0&5&10\end{vmatrix}= 1\times (-1)^{1+1}\begin{vmatrix}2&2\\5&10\end{vmatrix}= 1\times (2\times 10-2\times 5)=10 \]

So the answer is B.
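Both facts, that similar matrices share the same determinant and that the \(3\times 3\) determinant equals 10, can be verified with NumPy. The invertible matrix \(B\) below is an arbitrary choice of mine for illustration.

```python
import numpy as np

C = np.array([[1, 2, 3], [1, 4, 5], [-1, 3, 7]], dtype=float)
B = np.array([[2, 0, 1], [1, 1, 0], [0, 1, 3]], dtype=float)  # any invertible B

A = B @ C @ np.linalg.inv(B)
print(np.isclose(np.linalg.det(A), np.linalg.det(C)))  # True: similarity preserves det
print(round(np.linalg.det(C)))  # 10
```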

Problem 4

Problem 4 Solution

From the book Section 4.6 (Rank):

If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon form, the nonzero rows of B form a basis for the row space of A as well as for that of B.

Also, pay attention to the book's warning that row operations can change the column space of a matrix.

With this information, we can tell that statements (i) and (iii) are TRUE, while statement (v) is FALSE because row operations can change the column space of a matrix. For statement (iv), row reducing the augmented matrix changes the far-right column (representing \(\pmb b\)), so (iv) is FALSE as well.

For (ii), since \(A\) and \(B\) are row-equivalent matrices, their null spaces are the same, because row operations do not change the solution set of a homogeneous system. So \(A\pmb x=\pmb 0\) if and only if \(B\pmb x=\pmb 0\). Hence \(\text{Nul}\;A=\{\pmb 0\}\), so the columns of \(A\) are linearly independent. This statement is FALSE.

So the answer is E.

Note the following statement is also TRUE.

Let \(A\) be an \(n\times n\) matrix. If \(\text{Col}\;A = \text{Nul}\;A\), then \(\text{Nul}\;A^2=\mathbb R^n\): for any \(\pmb x\) in \(\mathbb R^n\), \(A\pmb x\) lies in \(\text{Col}\;A=\text{Nul}\;A\), so \(A^2\pmb x=A(A\pmb x)=\pmb 0\).

Problem 5

Problem 5 Solution

\(A\) is a \(4\times 5\) matrix for a linear transformation \(T\) from \(\mathbb R^5\) to \(\mathbb R^4\). Correspondingly, \(A^T\) is a \(5\times 4\) matrix for a linear transformation \(S\) from \(\mathbb R^4\) to \(\mathbb R^5\).

(Textbook figures: "Onto and One-to-One" vs. "Neither Onto Nor One-to-One".)

A. Note that \(A\) has an all-zero column and a pivot position in each row (rank 4), so it has only 4 linearly independent columns. For each \(\pmb b\) in \(\mathbb R^4\), the equation \(A\pmb x=\pmb b\) is consistent; in other words, the linear transformation \(T\) maps \(\mathbb R^5\) onto \(\mathbb R^4\). However, since any multiple of the coordinate vector for the all-zero column is mapped to \(\pmb 0\), \(T\) is definitely not one-to-one. This statement is FALSE.

B. Remember that "A linear transformation \(S\) is one-to-one if and only if the columns of the matrix are linearly independent, and if and only if \(S(\pmb x)=\pmb 0\) has only the trivial solution." \(A^T\) satisfies this. This statement is TRUE.

C. For \(S\) to be onto, the rank of \(A^T\) must equal its number of rows, which is 5. However, since \(A^T\) has a row of all zeros, its rank is 4. Therefore, \(S\) is not onto, and not every \(\pmb b\) has a solution.

D. \(A^T\) has rank 4 and 4 columns, so the nullity is \(4-4=0\). This statement is FALSE. Alternatively, we can see that \(A^T\pmb x=\pmb 0\) has only the trivial solution.

E. The range of \(T\) is the column space of \(A\), which is \(\mathbb R^4\). This statement is FALSE.

So the answer is B.
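The rank arguments above can be checked numerically. The matrix below is a hypothetical \(4\times 5\) example consistent with the description (an all-zero column and a pivot in every row); the actual exam matrix is not reproduced here.

```python
import numpy as np

# Hypothetical 4x5 matrix: first column zero, remaining columns the identity
A = np.zeros((4, 5))
A[:, 1:] = np.eye(4)

print(np.linalg.matrix_rank(A))    # 4: pivot in every row, so T is onto R^4
print(np.linalg.matrix_rank(A.T))  # 4 = number of columns of A^T, so S is one-to-one
```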

Problem 6

Problem 6 Solution

Note that the trace of a square matrix \(A\) is the sum of the diagonal entries of \(A\) and is denoted by \(\text{tr}\;A\).

Remember the formula for inverse matrix \[ A^{-1}=\frac{1}{\det A}\text{adj}\;A=[b_{ij}]\qquad b_{ij}=\frac{C_{ji}}{\det A}\qquad C_{ji}=(-1)^{i+j}\det A_{ji} \] Where \(\text{adj}\;A\) is the adjugate of \(A\), \(C_{ji}\) is a cofactor of \(A\), and \(A_{ji}\) denotes the submatrix of \(A\) formed by deleting row \(j\) and column \(i\).

Now we can find the answer step-by-step:

  1. Calculate the determinant of \(A\) \[ \begin{vmatrix}1&2&7\\1&3&12\\2&5&20\end{vmatrix}= \begin{vmatrix}1&2&7\\0&1&5\\0&1&6\end{vmatrix}= \begin{vmatrix}1&2&7\\0&1&5\\0&0&1\end{vmatrix}=1 \]

  2. Calculate \(b_{11}\), \(b_{22}\), and \(b_{33}\) \[ b_{11}=\frac{C_{11}}{1}=\begin{vmatrix}3&12\\5&20\end{vmatrix}=0\quad b_{22}=\frac{C_{22}}{1}=\begin{vmatrix}1&7\\2&20\end{vmatrix}=6\quad b_{33}=\frac{C_{33}}{1}=\begin{vmatrix}1&2\\1&3\end{vmatrix}=1 \]

  3. Get the trace of \(A^{-1}\) \[\text{tr}\;A^{-1}=b_{11}+b_{22}+b_{33}=0+6+1=7\]

So the answer is C.
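As a quick sanity check of the adjugate computation (my own verification sketch), NumPy confirms both the determinant and the trace of the inverse:

```python
import numpy as np

A = np.array([[1, 2, 7], [1, 3, 12], [2, 5, 20]], dtype=float)
print(round(np.linalg.det(A)))            # 1
print(round(np.trace(np.linalg.inv(A))))  # 7
```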

Problem 7

Problem 7 Solution

First do row reduction to get row echelon form of the matrix \(A\):

\[\begin{align} &\begin{bmatrix}1&2&2&10&3\\2&4&1&11&5\\3&6&2&18&1\end{bmatrix}\sim \begin{bmatrix}1&2&2&10&3\\0&0&-3&-9&-1\\0&0&-4&-12&-8\end{bmatrix}\sim \begin{bmatrix}1&2&2&10&3\\0&0&3&9&1\\0&0&1&3&2\end{bmatrix}\\ \sim&\begin{bmatrix}1&2&2&10&3\\0&0&3&9&1\\0&0&3&9&6\end{bmatrix} \sim\begin{bmatrix}\color{fuchsia}{1}&2&2&10&3\\0&0&\color{fuchsia}{3}&9&1\\0&0&0&0&\color{fuchsia}{5}\end{bmatrix} \end{align}\]

This shows that there are 3 pivot positions, and the 3 corresponding pivot columns of the original matrix \(A\) are shown below

\[\begin{Bmatrix} \begin{bmatrix}1\\2\\3\end{bmatrix}, \begin{bmatrix}2\\1\\2\end{bmatrix}, \begin{bmatrix}3\\5\\1\end{bmatrix} \end{Bmatrix}\]

These columns form a basis for \(\text{Col}\;A\). Now look at the statements A and E.

In the statement A, the first vector equals the sum of the first two pivot columns above. In the statement E, the third vector equals the sum of the last two pivot columns above. So both are TRUE.

To check the statements B, C, and D, we need to find the basis for \(\text{Nul}\;A\). From the row echelon form, it can be deduced that, with \(x_2\) and \(x_4\) as free variables, \[\begin{align} x_5&=0\\x_3&=-3x_4\\x_1&=-2x_2-2x_3-10x_4=-2x_2-4x_4 \end{align}\] This leads to \[ \begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix}= \begin{bmatrix}-2x_2-4x_4\\x_2\\-3x_4\\x_4\\0\end{bmatrix}= x_2\begin{bmatrix}-2\\1\\0\\0\\0\end{bmatrix}+x_4\begin{bmatrix}-4\\0\\-3\\1\\0\end{bmatrix} \]

So the basis of \(\text{Nul}\;A\) is \[\begin{Bmatrix} \begin{bmatrix}-2\\1\\0\\0\\0\end{bmatrix}, \begin{bmatrix}-4\\0\\-3\\1\\0\end{bmatrix} \end{Bmatrix}\]

The statement B is TRUE because its first vector is the first column above scaled by 2, and its 2nd vector is just the 2nd column above scaled by -1.

For statement D, its 1st vector is the same as the first column above, and the 2nd vector is just the sum of the two columns. It is TRUE as well.

The statement C is FALSE since its 2nd vector cannot be generated by any linear combination of the two basis vectors: the entries 3 and \(-2\) cannot appear together.

So the answer is C.
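As a NumPy sanity check (my own verification, not part of the exam), the null-space vectors found above are indeed annihilated by \(A\), and the rank matches the 3 pivot columns:

```python
import numpy as np

A = np.array([[1, 2, 2, 10, 3],
              [2, 4, 1, 11, 5],
              [3, 6, 2, 18, 1]], dtype=float)

# Null-space basis vectors derived above
n1 = np.array([-2, 1, 0, 0, 0], dtype=float)
n2 = np.array([-4, 0, -3, 1, 0], dtype=float)
print(np.allclose(A @ n1, 0), np.allclose(A @ n2, 0))  # True True
print(np.linalg.matrix_rank(A))  # 3, matching the 3 pivot columns
```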

Problem 8

Problem 8 Solution

Per the definition of Subspace in Section 4.1 "Vector Spaces and Subspaces"

A subspace of a vector space \(V\) is a subset \(H\) of \(V\) that has three properties:
a. The zero vector of \(V\) is in \(H\).
b. \(H\) is closed under vector addition. That is, for each \(\pmb u\) and \(\pmb v\) in \(H\), the sum \(\pmb u + \pmb v\) is in \(H\).
c. \(H\) is closed under multiplication by scalars. That is, for each \(\pmb u\) in \(H\) and each scalar \(c\), the vector \(c\pmb u\) is in \(H\).

(i) This is FALSE since the zero vector \(\begin{bmatrix}0\\0\end{bmatrix}\) does not satisfy the condition.

(ii) Obviously the zero vector is in the set because \(A\pmb 0=3\pmb 0\). This is the set of eigenvectors of \(A\) corresponding to the eigenvalue 3. Eigenvectors corresponding to the same eigenvalue, along with the zero vector, form a subspace. So this is a subspace of V. It is TRUE.

Besides it is easy to verify that \[\begin{align} A(c\pmb v)&=cA\pmb v=c3\pmb v=3(c\pmb v)\\ A(\pmb v_1+\pmb v_2)&=A\pmb v_1+A\pmb v_2=3\pmb v_1+3\pmb v_2=3(\pmb v_1+\pmb v_2) \end{align}\] So the set is closed under both multiplication by scalars and vector addition, hence it forms a subspace.

(iii) First, check whether the zero vector is in the set. This gives the system of equations below \[ \left\{ \begin{array}{ll} a+b+3d&=0\\ a+c&=-1\\ -c+d&=1\\ b+2c&=0 \end{array} \right. \]

Solve this with the augmented matrix, and do row reduction in the process

\[\begin{align} &\left[\begin{array}{cccc|c}1&1&0&3&0\\1&0&1&0&-1\\0&0&-1&1&1\\0&1&2&0&0\end{array}\right]\sim \left[\begin{array}{cccc|c}1&1&0&3&0\\0&-1&1&-3&-1\\0&0&-1&1&1\\0&1&2&0&0\end{array}\right]\\ \sim&\left[\begin{array}{cccc|c}1&1&0&3&0\\0&-1&1&-3&-1\\0&0&-1&1&1\\0&0&3&-3&-1\end{array}\right]\sim\left[\begin{array}{cccc|c}1&1&0&3&0\\0&-1&1&-3&-1\\0&0&-1&1&1\\0&0&0&0&2\end{array}\right] \end{align}\] This ends up with an inconsistent system without a solution. This means the zero vector is not in the set. This set does NOT form a subspace.

(iv) For \(\mathbb P_3\), we can write \(p(t)=a_0+a_1t+a_2t^2+a_3t^3\) and \(p'(t)=a_1+2a_2t+3a_3t^2\). Now since \(p'(2)=0\), we get \(a_1+4a_2+12a_3=0\). It is easy to verify that the polynomials in this set satisfy all three properties. It is TRUE that this set forms a subspace.

Now we have that (ii) and (iv) are subspaces, so the answer is D.

Problem 9

Problem 9 Solution

To find \(\text{Ker}(T)\), we need to find the set of \(p(t)\) such that \(T(p(t))=0\) \[ T(a_0+a_{1}t+a_{2}t^2)=a_{2}t^3=0 \Rightarrow a_2=0 \] Thus \(p(t)=a_0+a_{1}t\), and the basis is \(\{1,t\}\).

So the answer is A.

Problem 10

Problem 10 Solution

A. This statement is FALSE. The column space of \(A^T\) is a subspace of \(\mathbb{R}^n\), not \(\mathbb{R}^m\). Since \(A\) is an \(m \times n\) matrix, its transpose \(A^T\) is an \(n \times m\) matrix, so the columns of \(A^T\) are vectors in \(\mathbb{R}^n\).

B. This statement is FALSE. If \(A\) is an invertible \(n \times n\) matrix, then: \[\det(2A^{-1} A^T)=2^n \det(A^{-1})\det(A^T)=2^n \det(A^{-1})\det(A)=2^n\]

C. This statement is FALSE. If \(A^2 = I_n\), then \(\det A^2=(\det A)^2=1\). So \(\det A=\pm1\).

D. \(A^TA=0\iff A=0\); this statement is always TRUE. Here's the proof:

If \(A^T A = 0\), then for any vector \(\pmb x\) we have \(\pmb x^T (A^T A) \pmb x = (\pmb x^T A^T) (A\pmb x) = (A\pmb x)^T (A\pmb x) = 0\). Since \((A\pmb x)^T (A\pmb x)=\|A\pmb x\|^2\), it is zero only if \(A\pmb x = \pmb 0\). Because this holds for every \(\pmb x\), \(A\) must be the zero matrix.

A more comprehensive proof can be seen at this Stack Exchange page. Basically, it uses the fact that \(\text{trace}(A^TA)=\sum_i\sum_j(a_{ji})^2\). If this trace is zero, all entries of \(A\) are zero.

E. This statement is FALSE. The null space (or kernel) of a matrix is always a subspace of the space in which the vectors are multiplied by the matrix. Since \(A\) is an \(m \times n\) matrix, the null space of \(A\) is a subspace of \(\mathbb{R}^n\), not \(\mathbb{R}^m\).

So the answer is D.
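The trace identity used in the proof of D can be illustrated numerically (a sketch of my own; the random matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

# trace(A^T A) equals the sum of squares of all entries of A
lhs = np.trace(A.T @ A)
rhs = np.sum(A**2)
print(np.isclose(lhs, rhs))  # True
```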

Problem 11

Problem 11 Solution

The vectors in the set can be taken as the columns of a matrix, and then we can row reduce it:

Note that row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.

\[ \begin{bmatrix}1&1&1&1&1\\-1&1&2&0&-2\\1&1&1&1&3\end{bmatrix}\sim \begin{bmatrix}\color{fuchsia}{1}&1&1&1&1\\0&\color{fuchsia}{2}&3&1&-1\\0&0&0&0&\color{fuchsia}{2}\end{bmatrix} \]

So there are 3 pivot entries and the rank is 3. The pivot columns below form a basis for \(H\). \[\begin{Bmatrix} \begin{bmatrix}1\\-1\\1\end{bmatrix}, \begin{bmatrix}1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\-2\\3\end{bmatrix} \end{Bmatrix}\]

A is wrong as it has only 2 vectors, which span at most a two-dimensional subspace, while \(H\) has dimension 3.

For B, C, and D, the 3rd vector can be generated as a linear combination of the first two, so their ranks are only 2.

E is equivalent to the basis above. Its second vector can be generated like below \[ \begin{bmatrix}1\\-1\\1\end{bmatrix}+\begin{bmatrix}1\\1\\1\end{bmatrix}= \begin{bmatrix}2\\0\\2\end{bmatrix}=2\times \begin{bmatrix}1\\0\\1\end{bmatrix} \]

So the answer is E.
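NumPy confirms that \(H\) is 3-dimensional and that the pivot columns identified above are independent (my own verification sketch):

```python
import numpy as np

# The five spanning vectors as columns
V = np.array([[1, 1, 1, 1, 1],
              [-1, 1, 2, 0, -2],
              [1, 1, 1, 1, 3]], dtype=float)
print(np.linalg.matrix_rank(V))  # 3, so dim H = 3

# The pivot columns (1st, 2nd, and 5th) form a basis for H
basis = V[:, [0, 1, 4]]
print(abs(np.linalg.det(basis)) > 1e-9)  # True: they are linearly independent
```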

Problem 12

Problem 12 Solution

Note this question asks which vector is NOT in the subspace spanned by \(\pmb x\) and \(\pmb y\). A vector \(\pmb v\) is in the subspace spanned by \(\pmb x\) and \(\pmb y\) if and only if it is a linear combination of \(\pmb x\) and \(\pmb y\), which means the system with augmented matrix \([\pmb x\;\pmb y \mid \pmb v]\) is consistent.

Let's try the vector from option A. \[ \left[\begin{array}{cc|c}2&1&4\\3&2&2\\1&1&1\end{array}\right]\sim \left[\begin{array}{cc|c}2&1&4\\3&2&2\\2&2&2\end{array}\right]\sim \left[\begin{array}{cc|c}2&1&4\\1&0&0\\0&1&-2\end{array}\right]\sim \left[\begin{array}{cc|c}2&0&6\\1&0&0\\0&1&-2\end{array}\right] \] We see inconsistent results for \(x_1\); hence this vector is NOT a linear combination of \(\pmb x\) and \(\pmb y\). There is no need to check the other options.

So the answer is A.

Problem 13

Problem 13 Solution

Refer to Example 3 in the textbook section 1.9 (The Matrix of a Linear Transformation)

For 2 radians counter-clockwise rotation, the transformation matrix is written as \[A=\begin{bmatrix}\cos(2)&-\sin(2)\\\sin(2)&\cos(2)\end{bmatrix}\] To find the eigenvalues of this \(2\times 2\) matrix, we need to solve the equation \(\det (A-\lambda I)=0\) \[ \begin{vmatrix}\cos(2)-\lambda&-\sin(2)\\\sin(2)&\cos(2)-\lambda\end{vmatrix}=\lambda^2-2\cos(2)\lambda+\cos^2(2)+\sin^2(2)=\lambda^2-2\cos(2)\lambda+1 \] Applying the quadratic formula, we get the roots \[\lambda=\frac{2\cos(2)\pm\sqrt{4\cos^2(2)-4}}{2}=\cos(2)\pm i\sin(2)\]

So the answer is C.
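The eigenvalues of the rotation matrix can be verified numerically (my own check):

```python
import numpy as np

t = 2.0  # rotation angle in radians
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
vals = np.linalg.eigvals(A)

# The eigenvalues should be cos(2) +/- i*sin(2)
expected = np.array([np.cos(t) - 1j*np.sin(t), np.cos(t) + 1j*np.sin(t)])
print(np.allclose(np.sort_complex(vals), np.sort_complex(expected)))  # True
```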

📝Notes: If it is changed to clockwise rotation, the transformation matrix will become \[A=\begin{bmatrix}\cos(2)&\sin(2)\\-\sin(2)&\cos(2)\end{bmatrix}\]

Problem 14

Problem 14 Solution

(i) Matrix \(A\) and its transpose \(A^T\) have the same characteristic polynomial, hence the same eigenvalues. If one is diagonalizable, so is the other. This statement is TRUE.

(ii) This is TRUE. A \(3\times 3\) matrix has 3 eigenvalues (counted with multiplicity), and complex eigenvalues come in conjugate pairs. So if two are real, the 3rd one must be real as well.

(iii) This is FALSE. For example, the eigenvalues could be 1 and 2, with 2 having multiplicity 2.

(iv) Because \(B^T=(AA^T)^T=(A^T)^TA^T=AA^T=B\), \(B\) is a symmetric matrix, which is always diagonalizable. This is TRUE.

So (i), (ii), and (iv) are TRUE, the answer is D.

Problem 15

Problem 15 Solution

(i) Since \(A\) is a \(4\times 6\) matrix, its reduced echelon form has at most 4 pivot entries. But the statement says "at least", so it is FALSE.

(ii) Multiplying A with this vector gives \[[\pmb v_1 \pmb v_2 \pmb v_3 \pmb v_4 \pmb v_5 \pmb v_6][1\;0\;(-2)\;0\;(-4)\;0]^T=\pmb v_1-2\pmb v_3-4\pmb v_5=\pmb 0\] So indeed this vector is in \(\text{Nul}\;A\). It is TRUE.

(iii) The condition given is that \(\pmb v_1\) and \(\pmb v_4\) are linear combinations of the other vectors. It says nothing about the rest, so the set \(\{\pmb v_2, \pmb v_3, \pmb v_5, \pmb v_6\}\) could still be linearly dependent. So it is FALSE.

(iv) Because \(\pmb v_1\) and \(\pmb v_4\) are linear combinations of the other vectors, \(\text{Col}\;A=\text{Span}\{\pmb v_2, \pmb v_3, \pmb v_5, \pmb v_6\}\). On the other hand, \(\text{Span}\{\pmb v_1, \pmb v_2, \pmb v_3, \pmb v_6\}\) is equivalent to \(\text{Span}\{\pmb v_2, \pmb v_3, \pmb v_5, \pmb v_6\}\), because \(\pmb v_1\) and \(\pmb v_3\) can generate \(\pmb v_5\). So this is TRUE.

(v) We know from (i) that \(\text{rank}\;A\le 4\). Since \(\text{rank}\;A+\text{nullity}\;A=6\), it proves that \(\text{nullity}\;A\ge 2\). So this is TRUE.

Overall, (ii), (iv), and (v) are TRUE, and the answer is A.

Problem 16

Problem 16 Solution

(i) Remember Theorem 2 from the book Section 7.1 (Diagonalization of Symmetric Matrices) says

An \(n\times n\) matrix \(A\) is orthogonally diagonalizable if and only if \(A\) is a symmetric matrix.

This means a symmetric matrix is always orthogonally diagonalizable. Since this matrix is symmetric, it is diagonalizable over real numbers.

(ii) For this matrix, first find out the eigenvalues \[\begin{align} \begin{vmatrix}1-\lambda&1&1\\0&2-\lambda&2\\0&2&5-\lambda\end{vmatrix} &=(1-\lambda)\begin{vmatrix}2-\lambda&2\\2&5-\lambda\end{vmatrix}\\ &=(1-\lambda)(\lambda^2-7\lambda+10-4)=-(1-\lambda)^2(\lambda-6) \end{align}\] We get two eigenvalues 1 (with multiplicity 2) and 6.

For the eigenvalue 1, we are expecting two linearly independent eigenvectors. Let's check. \[ \begin{bmatrix}0&1&1\\0&1&2\\0&2&4\end{bmatrix}\sim\begin{bmatrix}0&1&1\\0&0&1\\0&0&0\end{bmatrix} \] This means \(x_1\) is free and \(x_2=x_3=0\), which generates only one eigenvector \(\begin{bmatrix}1\\0\\0\end{bmatrix}\). This matrix is NOT diagonalizable.

(iii) This is an upper triangular matrix, the diagonal entries are the eigenvalues. As these are distinct, this matrix is diagonalizable.

(iv) This is also an upper triangular matrix. The eigenvalues are 1 and 3, where 3 has multiplicity 2.

For \(\lambda=1\), we have \(\begin{bmatrix}0&0&0\\0&2&4\\0&0&2\end{bmatrix}\). This solves to an eigenvector \(\begin{bmatrix}1\\0\\0\end{bmatrix}\).

For \(\lambda=3\), we have \(\begin{bmatrix}-2&0&0\\0&0&4\\0&0&0\end{bmatrix}\). This solves to an eigenvector \(\begin{bmatrix}0\\1\\0\end{bmatrix}\).

However, we need 3 linearly independent eigenvectors to diagonalize a \(3\times 3\) matrix, and here there are only 2. So this matrix is NOT diagonalizable.

So only (i) and (iii) are diagonalizable, and the correct answer is D.
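The geometric-multiplicity argument for matrix (ii) can be checked numerically (my own sketch; the matrix is the one from (ii) above):

```python
import numpy as np

A = np.array([[1, 1, 1], [0, 2, 2], [0, 2, 5]], dtype=float)

# Geometric multiplicity of eigenvalue 1 = dim Nul(A - I) = n - rank(A - I)
gm = A.shape[0] - np.linalg.matrix_rank(A - np.eye(3))
print(gm)  # 1, but the algebraic multiplicity is 2, so A is not diagonalizable
```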

Problem 17

Problem 17 Solution

First calculate \(A^{-1}\) \[A^{-1}=\frac{1}{2\times 1-0\times (-1)}\begin{bmatrix}1&1\\0&2\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1&1\\0&2\end{bmatrix}\]

Then calculate the transformation \[\begin{align} T(X)&=A^{-1}XB=\frac{1}{2}\begin{bmatrix}1&1\\0&2\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}\\ &=\frac{1}{2}\begin{bmatrix}a+c&b+d\\2c&2d\end{bmatrix}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}\\ &=\frac{1}{2}\begin{bmatrix}a+c-b-d&-a-c+b+d\\2c-2d&-2c+2d\end{bmatrix} \end{align}\]

Next, write the result in parametric vector form \[\begin{align} T(X)&=a\begin{bmatrix}1/2&-1/2\\0&0\end{bmatrix}+b\begin{bmatrix}-1/2&1/2\\0&0\end{bmatrix}+c\begin{bmatrix}1/2&-1/2\\1&-1\end{bmatrix}+d\begin{bmatrix}-1/2&1/2\\-1&1\end{bmatrix}\\ &=(a-b)\begin{bmatrix}1/2&-1/2\\0&0\end{bmatrix}+(c-d)\begin{bmatrix}1/2&-1/2\\1&-1\end{bmatrix} \end{align}\] Since the two \(2\times 2\) matrices are linearly independent, they form a basis for the range of \(T\). Hence the range of \(T\) has dimension 2.

So the answer is C.
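The dimension of the range can be verified by building the \(4\times 4\) matrix of \(T\) acting on the vectorized \(X\) and computing its rank (my own verification sketch, using the \(A^{-1}\) and \(B\) from the solution above):

```python
import numpy as np

Ainv = 0.5 * np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[1.0, -1.0], [-1.0, 1.0]])

# Columns of M are T applied to each standard basis matrix of M_{2x2}
cols = []
for k in range(4):
    E = np.zeros(4)
    E[k] = 1.0
    cols.append((Ainv @ E.reshape(2, 2) @ B).flatten())
M = np.column_stack(cols)
print(np.linalg.matrix_rank(M))  # 2: the range of T is 2-dimensional
```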

Problem 18

Problem 18 Solution

Remember that Problem 6 introduced the definition of trace, which is the sum of all diagonal entries of a matrix. Denote the \(2\times 2\) matrix as \(A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\); then \(\text{tr}(A)=a+d=-2\). Since \(\det A=11\), it gives \(ad-bc=11\).

With these in mind, we can do the eigenvalue calculation below \[ \begin{vmatrix}a-\lambda&b\\c&d-\lambda\end{vmatrix}=\lambda^2-(a+d)\lambda+ad-bc=\lambda^2+2\lambda+11=0 \] Apply the quadratic formula, get the roots \[\lambda=\frac{-2\pm\sqrt{4-44}}{2}=-1\pm i\sqrt{10}\]

Refer to the following table for the mapping from \(2\times 2\) matrix eigenvalues to trajectories:

| Eigenvalues | Trajectories |
|---|---|
| \(\lambda_1>0, \lambda_2>0\) | Repeller/Source |
| \(\lambda_1<0, \lambda_2<0\) | Attractor/Sink |
| \(\lambda_1<0, \lambda_2>0\) | Saddle Point |
| \(\lambda = a\pm bi, a>0\) | Spiral (outward) Point |
| \(\lambda = a\pm bi, a<0\) | Spiral (inward) Point |
| \(\lambda = \pm bi\) | Ellipses (circles if \(b=1\)) |

So the answer is C.
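Since only the trace and determinant are given, any \(2\times 2\) matrix with \(\text{tr}(A)=-2\) and \(\det A=11\) has the same eigenvalues. The companion matrix of \(\lambda^2+2\lambda+11\) is a convenient choice for a numerical check (my own sketch):

```python
import numpy as np

# Companion matrix of lambda^2 + 2*lambda + 11: trace -2, determinant 11
A = np.array([[0.0, 1.0], [-11.0, -2.0]])
vals = np.linalg.eigvals(A)

print(np.allclose(vals.real, -1.0))                  # True: real part -1 < 0
print(np.allclose(np.abs(vals.imag), np.sqrt(10)))   # True: imaginary part +/- sqrt(10)
```

A negative real part with a nonzero imaginary part means the trajectories spiral inward, matching the table above.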

Problem 19

Problem 19 Solution

  • Method 1: matrix multiplication

    From \(P^{-1}AP=D\), we have \(AP=PD\). We can use the latter to do a simple calculation (and avoid calculating the inverse of \(P\)).

    If we start from the last option E, we can find that it satisfies the equation.

    \[\begin{align} AP=\begin{bmatrix}4&0&0\\3&2&1\\0&1&2\end{bmatrix}\begin{bmatrix}0&0&1\\-1&1&2\\1&1&1\end{bmatrix}=\begin{bmatrix}0&0&4\\-1&3&8\\1&3&4\end{bmatrix}\\ PD=\begin{bmatrix}0&0&1\\-1&1&2\\1&1&1\end{bmatrix}\begin{bmatrix}1&0&0\\0&3&0\\0&0&4\end{bmatrix}=\begin{bmatrix}0&0&4\\-1&3&8\\1&3&4\end{bmatrix} \end{align}\]

  • Method 2: eigenvalues and eigenvectors

    From \(P^{-1}AP=D\), we also have \(A=PDP^{-1}\). This means \(D\) could be a diagonalization of \(A\), and inspecting \(D\) confirms this. So the diagonal entries of \(D\) are eigenvalues of \(A\), and the columns of \(P\) are corresponding eigenvectors.

    Now take the eigenvalue 4; the matrix \(A-4I\) becomes \[ \begin{bmatrix}0&0&0\\3&-2&1\\0&1&-2\end{bmatrix}\sim \begin{bmatrix}0&0&0\\3&0&-3\\0&1&-2\end{bmatrix} \] This gives the eigenvector \(\begin{bmatrix}1\\2\\1\end{bmatrix}\). Only option E has this as its last column.

So the answer is E.
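Method 1 can be reproduced in a few lines of NumPy (my own verification of option E):

```python
import numpy as np

A = np.array([[4, 0, 0], [3, 2, 1], [0, 1, 2]], dtype=float)
P = np.array([[0, 0, 1], [-1, 1, 2], [1, 1, 1]], dtype=float)
D = np.diag([1.0, 3.0, 4.0])

print(np.allclose(A @ P, P @ D))                 # True: AP = PD
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True: P^{-1} A P = D
```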

Problem 20

Problem 20 Solution

A. This statement is TRUE. By definition, \(W^⊥\) is the set of all vectors that are orthogonal to every vector in \(W\). If \(\pmb z\) is orthogonal to \(\pmb u_1\) and \(\pmb u_2\), and \(W\) is spanned by \(\pmb u_1\) and \(\pmb u_2\), then \(\pmb z\) is orthogonal to every vector in \(W\), so \(\pmb z\) is in \(W^⊥\).

B. This statement is TRUE. If \(\pmb y\) is decomposed into a sum of \(\pmb z_1\) in \(W\) and \(\pmb z_2\) in \(W^⊥\), then \(\pmb z_1\) is the projection of \(\pmb y\) onto \(W\).

C. This statement is TRUE. If \(\pmb x\) is not in \(W\), then \(proj_W \pmb x\) is the closest point in \(W\) to \(\pmb x\). The vector \(\pmb x−proj_W \pmb x\) is orthogonal to \(W\) and represents the "error" between \(\pmb x\) and its projection onto \(W\). This error vector is non-zero if \(\pmb x\) is not in \(W\).

D. This statement is TRUE. Refer to Theorems 6 and 7 in the book Section 6.2 (Orthogonal Sets).

E. This statement is FALSE. The best approximation to \(\pmb y\) by elements of a subspace \(W\) is \(proj_W \pmb y\), the closest point in \(W\) to \(\pmb y\).

All statements except E are true. Therefore, the correct answer is E.

📝Notes: An orthogonal matrix is a square invertible matrix \(U\) such that \(U^{-1}=U^T\). Let \(U\) be an \(n\times n\) matrix with orthonormal columns, then \(U^TU=I\) and \(\det U=\det U^{-1}=±1\).

Problem 21

Problem 21 Solution

Refer to the book Section 6.3 (Orthogonal Projections).

Since \(\pmb u_1\) and \(\pmb u_2\) are orthogonal, they form an orthogonal basis of \(W\). Hence the closest point in \(W\) to \(\pmb x\) is the orthogonal projection of \(\pmb x\) onto \(W\). The formula to calculate that is \[\begin{align} \text{proj}_{W}\pmb x&=\frac{\pmb x\bullet\pmb u_1}{\pmb u_1\bullet\pmb u_1}\pmb u_1+ \frac{\pmb x\bullet\pmb u_2}{\pmb u_2\bullet\pmb u_2}\pmb u_2\\ &=\frac{1\times 4+2\times 7+(-4)\times 1}{1^2+2^2+(-4)^2}\begin{bmatrix}1\\2\\-4\end{bmatrix}+\frac{2\times 4+1\times 7+1\times 1}{2^2+1^2+1^2}\begin{bmatrix}2\\1\\1\end{bmatrix}\\ &=\frac{14}{21}\begin{bmatrix}1\\2\\-4\end{bmatrix}+\frac{16}{6}\begin{bmatrix}2\\1\\1\end{bmatrix}=\frac{2}{3}\begin{bmatrix}1\\2\\-4\end{bmatrix}+\frac{8}{3}\begin{bmatrix}2\\1\\1\end{bmatrix}=\begin{bmatrix}6\\4\\0\end{bmatrix} \end{align}\]

So the answer is A.
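The projection formula above translates directly into NumPy (my own check, using the vectors from the computation):

```python
import numpy as np

x = np.array([4.0, 7.0, 1.0])
u1 = np.array([1.0, 2.0, -4.0])
u2 = np.array([2.0, 1.0, 1.0])

# Orthogonal projection of x onto W = Span{u1, u2} (u1, u2 orthogonal)
proj = (x @ u1) / (u1 @ u1) * u1 + (x @ u2) / (u2 @ u2) * u2
print(proj)  # [6. 4. 0.]
```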

⚠️ Warning: If \(\pmb u_1\) and \(\pmb u_2\) are NOT orthogonal, we cannot use the above formula. Instead, we can use the least-squares solution to find the closest point in \(W\) to \(\pmb x\).

Problem 22

Problem 22 Solution

Refer to the following result from the book Section 6.5 (Least-Squares Problems):

Each least-squares solution of \(A\pmb x=\pmb b\) satisfies the equation \(A^TA\pmb x=A^T\pmb b\).

Following this, first calculate \(A^TA\) and \(A^T\pmb b\) separately \[\begin{align} A^TA&=\begin{bmatrix}1&-1&2\\0&1&2\end{bmatrix}\begin{bmatrix}1&0\\-1&1\\2&2\end{bmatrix} =\begin{bmatrix}6&3\\3&5\end{bmatrix}\\ A^T\pmb b&=\begin{bmatrix}1&-1&2\\0&1&2\end{bmatrix}\begin{bmatrix}-5\\0\\1\end{bmatrix} =\begin{bmatrix}-3\\2\end{bmatrix} \end{align}\] Then combine the results to form an augmented matrix and solve it by row reduction \[ \left[\begin{array}{cc|c}6&3&-3\\3&5&2\end{array}\right]\sim \left[\begin{array}{cc|c}6&3&-3\\6&10&4\end{array}\right]\sim \left[\begin{array}{cc|c}6&3&-3\\0&7&7\end{array}\right]\sim \left[\begin{array}{cc|c}2&1&-1\\0&1&1\end{array}\right] \]

This ends up with \(x_2=1\) and \(x_1=-1\).

So the answer is E.
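Both the normal-equations route and NumPy's built-in least-squares routine give the same answer (my own verification sketch):

```python
import numpy as np

A = np.array([[1, 0], [-1, 1], [2, 2]], dtype=float)
b = np.array([-5, 0, 1], dtype=float)

# Solve the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, [-1, 1]))  # True

# Same answer from the built-in least-squares routine
x2, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x2, [-1, 1]))  # True
```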

Problem 23

Problem 23 Solution

(1) First find the eigenvalues by solving \(\det(A-\lambda I)=0\) \[ \begin{vmatrix}1-\lambda &4\\2 &3-\lambda\end{vmatrix}=\lambda^2-4\lambda+3-8=(\lambda-5)(\lambda+1) \] So there are two eigenvalues, 5 and \(-1\).

  • For \(\lambda_1=5\), the matrix \(A-\lambda I\) becomes \[ \begin{bmatrix}-4 &4\\2 &-2\end{bmatrix}\sim \begin{bmatrix}-1 &1\\0 &0\end{bmatrix} \] So the eigenvector can be \(\begin{bmatrix}1\\1\end{bmatrix}\).

  • Likewise, for \(\lambda_2=-1\), the matrix \(A-\lambda I\) becomes \[ \begin{bmatrix}2 &4\\2 &4\end{bmatrix}\sim \begin{bmatrix}1 &2\\0 &0\end{bmatrix} \] So the eigenvector can be \(\begin{bmatrix}2\\-1\end{bmatrix}\).

(2) With the eigenvalues and corresponding eigenvectors known, we can apply them to the general solution formula \[\pmb x(t)=c_1\pmb{v}_1 e^{\lambda_1 t}+c_2\pmb{v}_2 e^{\lambda_2 t}\] This gives \[ \begin{bmatrix}x(t)\\y(t)\end{bmatrix}=c_1\begin{bmatrix}1\\1\end{bmatrix}e^{5t}+c_2\begin{bmatrix}2\\-1\end{bmatrix}e^{-t} \]

(3) Apply the initial values of \(x(0)=-1\) and \(y(0)=2\), here comes the following equations: \[\begin{align} c_1+2c_2&=-1\\ c_1-c_2&=2 \end{align}\] Solving this gives \(c_1=1\) and \(c_2=-1\). So \(x(2)=e^{5\times 2}-2e^{-2}=e^{10}-2e^{-2}\).

So the answer is D.
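Steps (1) through (3) can be checked numerically: solve for the constants from the initial condition and evaluate the solution at \(t=2\) (my own sketch, using the eigenvectors found above):

```python
import numpy as np

x0 = np.array([-1.0, 2.0])

# Columns are the eigenvectors v1 (for lambda=5) and v2 (for lambda=-1)
V = np.array([[1, 2], [1, -1]], dtype=float)

# Initial condition: c1*v1 + c2*v2 = x(0)
c = np.linalg.solve(V, x0)
print(np.allclose(c, [1, -1]))  # True: c1 = 1, c2 = -1

# x(2) = c1*v1*e^{10} + c2*v2*e^{-2}; its first component is e^10 - 2e^{-2}
xt2 = c[0] * V[:, 0] * np.exp(10) + c[1] * V[:, 1] * np.exp(-2)
print(np.isclose(xt2[0], np.exp(10) - 2 * np.exp(-2)))  # True
```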

Problem 24

Problem 24 Solution

Denote \(x(t)=20t^3-12t^2\) and \(u_1=1\), \(u_2=t\). The process to find the orthogonal projection of \(x(t)\) onto the subspace \(W\) spanned by \(u_1\) and \(u_2\) is shown below, applying the given inner product definition:

\[\begin{align} \text{proj}_{W}x&=\frac{\langle x,u_1\rangle}{\langle u_1,u_1\rangle}u_1+ \frac{\langle x,u_2\rangle}{\langle u_2,u_2\rangle}u_2\\ &=\frac{\langle 20t^3-12t^2,1\rangle}{\langle 1,1\rangle}1+\frac{\langle 20t^3-12t^2,t\rangle}{\langle t,t\rangle}t\\ &=\frac{\int_{-1}^1 (20t^3-12t^2)dt}{\int_{-1}^1 1\cdot 1dt}+\frac{\int_{-1}^1 (20t^4-12t^3)dt}{\int_{-1}^1 (t\cdot t)dt}\\ &=\frac{(5t^4-4t^3)\large\rvert_{-1}^1}{t\large\rvert_{-1}^1}+\frac{(4t^5-3t^4)\large\rvert_{-1}^1}{\frac{1}{3}t^3\large\rvert_{-1}^1}\\ &=\frac{(5-4)-(5+4)}{1-(-1)}+\frac{(4-3)-(-4-3)}{\frac{1}{3}-(-\frac{1}{3})}t=-4+12t \end{align}\]

So the answer is B.
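The integrals above can be reproduced symbolically with SymPy (my own verification sketch, assuming the inner product \(\langle f,g\rangle=\int_{-1}^1 f\,g\,dt\) as given in the problem):

```python
import sympy as sp

t = sp.symbols('t')
x = 20*t**3 - 12*t**2

def inner(f, g):
    # The given inner product: integral of f*g over [-1, 1]
    return sp.integrate(f * g, (t, -1, 1))

proj = inner(x, 1) / inner(1, 1) * 1 + inner(x, t) / inner(t, t) * t
print(sp.expand(proj))  # 12*t - 4
```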

Problem 25

Problem 25 Solution

Apply the normalization process:

\[ \pmb u_1=\frac{\pmb v_1}{\|\pmb v_1\|}=\frac{1}{\sqrt{\pmb v_1\bullet\pmb v_1}}\begin{bmatrix}2\\1\\2\end{bmatrix}=\frac{1}{\sqrt{2^2+1^2+2^2}}\begin{bmatrix}2\\1\\2\end{bmatrix}=\begin{bmatrix}2/3\\1/3\\2/3\end{bmatrix} \]

Only the option B matches. So the answer is B.

If needed, we can calculate the other basis vectors in the same way and normalize them one by one.
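The normalization step is a one-liner in NumPy (my own check of \(\pmb u_1\)):

```python
import numpy as np

v1 = np.array([2.0, 1.0, 2.0])
u1 = v1 / np.linalg.norm(v1)  # divide by sqrt(2^2 + 1^2 + 2^2) = 3
print(np.allclose(u1, [2/3, 1/3, 2/3]))  # True
```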

Other MA265 Final Exam Solutions

MA 265 Fall 2022 Final

MA 265 Spring 2023 Final

MA 265 Fall 2019 Final