Jordan normal form

Task number: 2665

Transform the following matrix into Jordan normal form and determine its eigenvectors and, if necessary, also generalized eigenvectors.

  • Variant

    \( \mathbf A=\begin{pmatrix} 1 & 1 & 1\\ 0 & 1 & 0\\ -1 & 0 & 3 \end{pmatrix} \)

  • Hint

    The generalized eigenvector \(\mathbf x_i\) can be obtained from the system \((\mathbf A-\lambda \mathbf I)\mathbf x_i=\mathbf x_{i-1}\).

  • Resolution

    Characteristic polynomial \(p_\mathbf A(t)= \begin{vmatrix} 1-t & 1 & 1\\ 0 & 1-t & 0\\ -1 & 0 & 3-t \end{vmatrix} = (1-t)(2-t)^2 \).
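
    Expanding the determinant along the second row gives \[ p_\mathbf A(t)=(1-t)\begin{vmatrix} 1-t & 1\\ -1 & 3-t \end{vmatrix} =(1-t)\bigl((1-t)(3-t)+1\bigr)=(1-t)(t^2-4t+4)=(1-t)(2-t)^2. \]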

    The system \((\mathbf A-2\mathbf I)\mathbf x^1=\mathbf 0\) has a solution \(\mathbf x^1=p(1, 0, 1)^T\).
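
    Explicitly, \( \mathbf A-2\mathbf I= \begin{pmatrix} -1 & 1 & 1\\ 0 & -1 & 0\\ -1 & 0 & 1 \end{pmatrix} \); the second row forces \(x_2=0\) and the remaining rows give \(x_3=x_1\), which yields the one-parameter family above.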

    The eigenvalue \(\lambda=2\) has geometric multiplicity 1 and algebraic multiplicity 2, so we shall find a generalized eigenvector. In what follows we choose \(p=1\), i.e. \(\mathbf x^1=(1, 0, 1)^T\).

    We obtain the generalized eigenvector \(\mathbf x^2\) from \((\mathbf A-2\mathbf I)\mathbf x^2=\mathbf x^1\). This system has the solution \(\mathbf x^2=q(1, 0, 1)^T+(-1, 0, 0)^T\).
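
    In augmented form this system reads \[ \begin{pmatrix} -1 & 1 & 1 & 1\\ 0 & -1 & 0 & 0\\ -1 & 0 & 1 & 1 \end{pmatrix} \] so again \(x_2=0\) and \(x_3=1+x_1\); the particular solution \((-1, 0, 0)^T\) corresponds to the choice \(x_1=-1\).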

    The system \((\mathbf A-\mathbf I)\mathbf x=\mathbf 0\) has the solution \(\mathbf x^3=r(2, -1, 1)^T\).

    By a suitable choice of parameters, \(q=1\) and \(r=1\) (so that the columns \(\mathbf x^1, \mathbf x^2, \mathbf x^3\) are linearly independent), we get the desired matrix \(\mathbf R\). We also calculate its inverse \(\mathbf R^{-1}\).

    \( \mathbf R= \begin{pmatrix} 1 & 0 & 2\\ 0 & 0 & -1\\ 1 & 1 & 1 \end{pmatrix} \qquad \mathbf R^{-1}= \begin{pmatrix} 1 & 2 & 0\\ -1 & -1 & 1\\ 0 & -1 & 0 \end{pmatrix} \)

  • Result

    The given matrix can be factorized into Jordan normal form as

    \( \mathbf A= \begin{pmatrix} 1 & 1 & 1\\ 0 & 1 & 0\\ -1 & 0 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 2\\ 0 & 0 & -1\\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 0\\ -1 & -1 & 1\\ 0 & -1 & 0 \end{pmatrix} = \mathbf R\mathbf J\mathbf R^{-1} \)
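
    The factorization can also be checked by machine; a minimal sketch (assuming the sympy library is available, not part of the original solution):

    from sympy import Matrix

    # Matrix of this variant and the factors found above.
    A = Matrix([[1, 1, 1], [0, 1, 0], [-1, 0, 3]])
    R = Matrix([[1, 0, 2], [0, 0, -1], [1, 1, 1]])
    J = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 1]])

    # Verify the factorization A = R J R^{-1}.
    assert R * J * R.inv() == A

    # sympy can also compute a Jordan form directly; its transition
    # matrix may differ from R (the Jordan basis is not unique),
    # but the Jordan blocks agree up to ordering.
    P, J2 = A.jordan_form()
    print(J2)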

  • Variant

    \( \mathbf A= \begin{pmatrix} 2 & 1 & 0 & 2\\ 0 & 2 & 0 & -2\\ 0 & 0 & 2 & 2\\ 0 & 0 & 0 & 2 \end{pmatrix} \)
  • Solution

    The matrix has a single eigenvalue \(\lambda = 2\) of algebraic multiplicity 4. Eigenvectors are the nontrivial solutions of the system of equations with matrix \[\mathbf A-\lambda \mathbf I = \begin{pmatrix} 0&1&0&2\\ 0&0&0&-2\\ 0&0&0&2\\ 0&0&0&0 \end{pmatrix} \sim \begin{pmatrix} 0&1&0&0\\ 0&0&0&1 \end{pmatrix} \] The dimension of \(\ker(\mathbf A-\lambda \mathbf I)\) is 2, so the matrix \(\mathbf A\) has two linearly independent eigenvectors, e.g. \(\mathbf x_1=(0, 0, 1, 0)^T\) and \(\mathbf x_2=(1, 0, 0, 0)^T\).

    To get a regular matrix \(\mathbf R\) we need to find two generalized eigenvectors. Instead of the system \((\mathbf A-\lambda \mathbf I)\mathbf x=\mathbf 0\) we take the eigenvector (or, in the next step, the generalized eigenvector) as the right-hand side, i.e. we solve the system with augmented matrix (for \(\mathbf x_1\)) \[ \begin{pmatrix} 0&1&0&2&0\\ 0&0&0&-2&0\\ 0&0&0&2&1\\ 0&0&0&0&0 \end{pmatrix} \] which has no solution (see the second and third rows). From this we deduce that this vector corresponds to a Jordan cell of size 1.

    For the second eigenvector \(\mathbf x_2\) we thus need two generalized eigenvectors, and the other cell will be of size 3. We follow the same approach, with the right-hand side \(\mathbf x_2\): \[ \begin{pmatrix} 0&1&0&2&1\\ 0&0&0&-2&0\\ 0&0&0&2&0\\ 0&0&0&0&0 \end{pmatrix} \sim \begin{pmatrix} 0&1&0&0&1\\ 0&0&0&1&0 \end{pmatrix} \] The set of solutions is an affine space of dimension 2, more precisely of the form \((p,1,q,0)^T\). We choose \(p=q=-1\), so \(\mathbf x_3=(-1, 1, -1, 0)^T\).

    Then we choose \(\mathbf x_3\) as the right-hand side of the same, repeatedly solved system and continue \[\begin{pmatrix} 0&1&0&2&-1\\ 0&0&0&-2&1\\ 0&0&0&2&-1\\ 0&0&0&0&0 \end{pmatrix} \sim \begin{pmatrix} 0&1&0&0&0\\ 0&0&0&2&-1 \end{pmatrix} \] with the (computationally convenient) solution \(\mathbf x_4=(0, 0, 0, -\frac{1}{2})^T\).

    Now we may assemble the matrices \(\mathbf J\) and \(\mathbf R\) and then calculate \(\mathbf R^{-1}\).

  • Answer

    \( \mathbf J = \begin{pmatrix} 2&0&0&0\\ 0&2&1&0\\ 0&0&2&1\\ 0&0&0&2 \end{pmatrix} , \mathbf R= \begin{pmatrix} 0&1&-1&0\\ 0&0&1&0\\ 1&0&-1&0\\ 0&0&0&-\frac{1}{2} \end{pmatrix} , \mathbf R^{-1}= \begin{pmatrix} 0&1&1&0\\ 1&1&0&0\\ 0&1&0&0\\ 0&0&0&-2 \end{pmatrix} \)
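
    The same kind of check as in the first variant (a sketch assuming sympy is available) confirms the factorization and the inverse:

    from sympy import Matrix, Rational

    A = Matrix([[2, 1, 0, 2], [0, 2, 0, -2], [0, 0, 2, 2], [0, 0, 0, 2]])
    R = Matrix([[0, 1, -1, 0], [0, 0, 1, 0], [1, 0, -1, 0], [0, 0, 0, Rational(-1, 2)]])
    J = Matrix([[2, 0, 0, 0], [0, 2, 1, 0], [0, 0, 2, 1], [0, 0, 0, 2]])

    assert R * J * R.inv() == A   # A = R J R^{-1}
    print(R.inv())                # reproduces the matrix R^{-1} above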
  • Variant

    \(\mathbf A= \begin{pmatrix} 0&1&0&0&-1\\ 0&0&0&0&0\\ 0&0&0&1&-1\\ 0&0&0&0&1\\ 0&0&0&0&0 \end{pmatrix} \)
  • Solution

    It has a single eigenvalue \(\lambda=0\) of algebraic multiplicity 5.

    The rank of \(\mathbf A\) is 3, so \(\dim(\ker \mathbf A)=2\) and we have two linearly independent eigenvectors.

    If, for the eigenvector \(\mathbf x_1\), we wanted to calculate a generalized eigenvector \(\mathbf x_2\), we would have to solve \((\mathbf A-\lambda \mathbf I)\mathbf x_2=\mathbf x_1\), which means that \((\mathbf A-\lambda \mathbf I)^2\mathbf x_2 = (\mathbf A-\lambda \mathbf I)\mathbf x_1 = \mathbf 0\). In other words, \(\mathbf x_2 \in \ker((\mathbf A-\lambda \mathbf I)^2) \setminus \ker(\mathbf A-\lambda \mathbf I)\), and similarly for the other generalized eigenvectors in the chain.

    In our case with \(\lambda=0\) we calculate \[ \mathbf A^2= \begin{pmatrix} 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&1\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix}\] and \[\mathbf A^3= \begin{pmatrix} 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix}\]

    Clearly \(\dim(\ker \mathbf A^2)=4\) and \(\dim(\ker \mathbf A^3)=5\), so \(\ker \mathbf A^3 \setminus \ker \mathbf A^2\) contributes only one new independent direction, e.g. \(\mathbf x_3=(0, 0, 0, 0, 1)^T\). This vector must satisfy \(\mathbf A\mathbf x_3=\mathbf x_2\), so we calculate \(\mathbf x_2=\mathbf A\mathbf x_3=(-1, 0, -1, 1, 0)^T\) and \(\mathbf x_1=\mathbf A\mathbf x_2=(0, 0, 1, 0, 0)^T\) (this is an eigenvector of the matrix \(\mathbf A\)). This yields a single Jordan cell of size 3.

    The other cell will be of size 2, and again we start at the end of the chain, i.e. with the generalized eigenvector \(\mathbf y_2\). This must satisfy \(\mathbf y_2 \in \ker \mathbf A^2 \setminus \ker \mathbf A\) and be linearly independent of \(\mathbf x_2\). We choose e.g. \(\mathbf y_2=(0, 1, 0, 0, 0)^T\) and calculate \(\mathbf y_1=\mathbf A\mathbf y_2=(1, 0, 0, 0, 0)^T\) (again an eigenvector of \(\mathbf A\)).

    Now we may assemble the matrices \(\mathbf R\) (its columns are \(\mathbf y_1,\mathbf y_2,\mathbf x_1,\mathbf x_2,\mathbf x_3\)) and \(\mathbf J\), and calculate the inverse matrix \(\mathbf R^{-1}\).

  • Answer

    \(\mathbf R= \begin{pmatrix} 1&0&0&-1&0\\ 0&1&0&0&0\\ 0&0&1&-1&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{pmatrix}\), \(\mathbf J= \begin{pmatrix} 0&1&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\\ 0&0&0&0&0 \end{pmatrix}\), \(\mathbf R^{-1}=\begin{pmatrix} 1&0&0&1&0\\ 0&1&0&0&0\\ 0&0&1&1&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{pmatrix}\)
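
    Again, the result can be checked with the same sympy sketch as in the previous variants:

    from sympy import Matrix

    A = Matrix([[0, 1, 0, 0, -1], [0, 0, 0, 0, 0], [0, 0, 0, 1, -1],
                [0, 0, 0, 0, 1], [0, 0, 0, 0, 0]])
    R = Matrix([[1, 0, 0, -1, 0], [0, 1, 0, 0, 0], [0, 0, 1, -1, 0],
                [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]])
    J = Matrix([[0, 1, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 1, 0],
                [0, 0, 0, 0, 1], [0, 0, 0, 0, 0]])

    assert R * J * R.inv() == A   # A = R J R^{-1}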

Difficulty level: Hard task
Routine calculation training