{ "cells": [ { "cell_type": "markdown", "id": "e0200fd8", "metadata": {}, "source": [ "# Linear Algebra 2\n", "**Instructor:** Sam Schulz (s1schulz@ucsd.edu)\n", "**TAs:** Tommy Stone (thstone@ucsd.edu) and Camilla Marcellini (cmarcellini@ucsd.edu)\n", "\n", "\n", "This lesson builds on Linear Algebra 1 to introduce Eigenvalues and Eigenvectors as well as diagonalization. I attempt to return to applications, but mostly this focuses on definitions and how these ideas apply to matrix representations.\n", "\n", "Lecture notes shamelessly inspired by [3Blue1Brown's essence of linear algebra series](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)." ] }, { "cell_type": "markdown", "id": "0b973dff", "metadata": {}, "source": [ "## 1 | Introduction and motivation\n", "\n", "The goal here is to try to build a bit of intuition for Eigenvalues and Eigenvectors, as well as to provide some light motivation for why we might use them to solve differential equations. Linear algebra is heavy on theorems, which I think is why it can often seem boring. Proofs will be skipped here, in favor of intuitive arguments when they exist and declarations of fact otherwise. These notes may be heavy on jargon, but paired with the in person lecture they will hopefully be useful.\n", "\n", "Let's think about a linear transformation $A$ on an abstract arbitrary vector $\\vec{v}$. This can be represented by the equation $A \\vec{v} = \\vec{v ^\\prime}$. Let us assume that $A$ transforms vectors from a vector space onto itself. This operation is doable if, for example, we have the matrix form of $A$, and is even quite simple if $A$ has not too many dimensions. As an example, let\n", "\n", "\\begin{equation*}\n", "A=\\begin{pmatrix}\n", " 4 & 3\\\\\n", " 3 & -4 \n", "\\end{pmatrix}\n", "\\end{equation*}\n", "and \n", "\\begin{equation*}\n", " \\vec{v} = \\begin{pmatrix}\n", " 1\\\\\n", " 2\n", " \\end{pmatrix}.\n", "\\end{equation*}\n", " \n", "I am not so good at matrix multiplication, but if you work through it you can get that \n", "\n", "\\begin{equation*}\n", " \\vec{v^\\prime} = \\begin{pmatrix}\n", " 10\\\\\n", " -5\n", " \\end{pmatrix}.\n", "\\end{equation*}\n", "\n", "Exercise: do the above computation.\n", "\n", "However, linear algebra is often used in much more abstract situations, where $A$ could, for example, have infinite dimensions. One way to 'make the math easier,' so to speak, is to use eigensystems and diagonalization before performing the linear transformation. " ] }, { "cell_type": "markdown", "id": "befad5fb", "metadata": {}, "source": [ "## 2 | Definitions \\& finding eigenvectors\n", "\n", "Let's say we have a vector $\\vec{x}$ which satisfies the following criterion:\n", "\\begin{equation}\n", " A \\vec{x} = \\lambda \\vec{x}\n", " \\label{eq:eig_def}\n", "\\end{equation}\n", "for some number $\\lambda$. In words, this means that the transformation $A$ simply scales $\\vec{x}$ by a scalar multiple, which is unusual! If the above is true, $\\vec{x}$ is an eigenvector of the transformation $A$ and $\\lambda$ is its eigenvalue. One thing to emphasize here is that eigenvectors and their corresponding eigenvalues are associated with particular transformations. \n", "\n", "How do we find the eigenvectors of a transformation? If we can represent the transformation as a matrix (this is true for all finite-dimension transformations), there is a formulaic way to do it. To start, let's introduce the identity matrix:\n", "\\begin{equation}\n", "I =\n", "\\begin{pmatrix}\n", "1 & 0 & 0 & \\cdots & 0 \\\\\n", "0 & 1 & 0 & \\cdots & 0 \\\\\n", "0 & 0 & 1 & \\cdots & 0 \\\\\n", "\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n", "0 & 0 & 0 & \\cdots & 1\n", "\\end{pmatrix}.\n", "\\end{equation}\n", "We can write Eq. 1 as\n", "\\begin{equation*}\n", " A \\vec{x} = (\\lambda I) \\vec{x}\n", "\\end{equation*}\n", "and thus \n", "\\begin{equation*}\n", " (A-\\lambda I) \\vec{x} = \\vec{0}.\n", "\\end{equation*}\n", "\n", "For the above to be true for nonzero $\\vec{x},$ $(A-\\lambda I)$ must be a noninvertible matrix.\\footnote{Otherwise $(A-\\lambda I)^{-1} \\vec{0}= \\vec{x}$, which is impossible} This means that $\\det (A-\\lambda I)=0.$ This means that we can find the eigenvalues for a matrix $A$ by solving the equation $\\det (A-\\lambda I)=0.$, which will be a polynomial equation. \n", "\n", "To get the eigenvector for an eigenvalue $\\lambda_i$, we return to the equation \n", "\\begin{equation*}\n", " (A-\\lambda I) \\vec{x} = \\vec{0}.\n", "\\end{equation*}\n", "and plug in $\\lambda_i$ for lambda. Setting each element of $x$ to an arbitrary value $x_i$, we have a linear system of equations which can be solved to back out $x_i$. A scalar multiple of an eigenvector is also an eigenvector of the same value, so it i\n", "s often conventional to normalize the eigenvector such that the sum of the squares of each element is 1 ($\\sum_i{x_i^2}=1$). This is easier to see in practice, and we will use the matrix\n", "\\begin{equation*}\n", "A=\\begin{pmatrix}\n", " 4 & 3\\\\\n", " 3 & -4 \n", "\\end{pmatrix}\n", "\\end{equation*}\n", "as our example.\n", "\n", " - Exercise: find the Eigenvalues and Eigenvectors of matrix $A$ above.\n", " - Exercise: find the Eigenvalues and Eigenvectors of matrix $B$:\n", " \\begin{equation*}\n", " B = \\begin{pmatrix}\n", " -6 & 3\\\\\n", " 4 & 5 \n", " \\end{pmatrix}\n", " \\end{equation*}" ] }, { "cell_type": "markdown", "id": "9043888c", "metadata": {}, "source": [ "## 3 | Diagonalization\n", "\n", "What was this all for? Eigenvalues and eigenvectors have a few uses, but for me an essential use case is making computations that would be otherwise hard/impossible into computations that are easier/doable using diagonalization. Diagonalization is the process of writing a linear transformation in the basis of its own eigenvalues. The reason it is called diagonalization is that a matrix written in the basis of its own eigenvectors $\\{\\lambda_i\\}$ is diagonal:\n", "\n", "\\begin{equation*}\n", "\\Lambda =\n", "\\begin{pmatrix}\n", "\\lambda_1 & 0 & 0 & \\cdots & 0 \\\\\n", "0 & \\lambda_2 & 0 & \\cdots & 0 \\\\\n", "0 & 0 & \\lambda_3 & \\cdots & 0 \\\\\n", "\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n", "0 & 0 & 0 & \\cdots & \\lambda_n\n", "\\end{pmatrix}.\n", "\\end{equation*}\n", "\n", "If you have a vector written in the same basis, applying the linear transformation to that vector is trivial. The process of diagonalization gets us there.\n", "\n", "### 3.1 Performing a diagonalization\n", "To perform a diagonalization on a finite-dimensional linear transformation $A$, we first need to find the eigenvalues and eigenvectors of $A$. We then construct the diagonalization matrix $S$ by making its columns the (normalized) eigenvectors of $A$. \n", "\\begin{equation}\n", "S =\n", "\\begin{pmatrix}\n", "| & | & & | \\\\\n", "v_1 & v_2 & \\cdots & v_n \\\\\n", "| & | & & |\n", "\\end{pmatrix}\n", "\\end{equation}\n", "\n", "We already can find the diagonal matrix $\\Lambda$ by writing the eigenvalues on the diagonal, but it is worth noting that $\\Lambda = S^{-1} A S.$ To write a vector $\\vec{v}$ in the diagonal basis, you multiply it by $S$:\n", "\n", "\\begin{equation}\n", " \\vec{v}_{\\text{diag}}= S^{-1} \\vec{v}.\n", "\\end{equation}\n", "\n", "- Exercise: Perform the computation from section 1 by transforming the vector\n", " \\begin{equation*}\n", " \\vec{v} = \\begin{pmatrix}\n", " 1\\\\\n", " 2\n", " \\end{pmatrix}\n", " \\end{equation*}\n", " into the diagonal basis, then multiplying it by the diagonal matrix $\\Lambda,$ then transforming it back into the original basis.\n" ] }, { "cell_type": "markdown", "id": "d132106a", "metadata": {}, "source": [ "## 4 | Why?\n", "[This section gets a little more abstract, but I think it gets at some stuff you will see in Applied Math 1, if you take it.]\n", "\n", "This exercise is clearly more work than it was to just do the computation, so why bother diagonalizing? The gain mostly comes when it is tricky or impossible to represent $A$ as a matrix, such as in infinite-dimensional vector spaces. One example is in ODEs. The n'th derivative operation is a linear transformation on the vector space of infinitely differentiable functions. Exponentials are eigenvalues of the first derivative operator, since $\\frac{d}{dx}e^{kx}=k e^{kx}.$\n", "- Exercise: Think of some functions which are eigenvectors of the second derivative operator and are not exponentials.\n", "This gets us to a method of solving ODEs which comes up in Applied Math 1. If we have a way to write a function in a vector space where the relevant differential operator is diagonal, we can turn a differential equation into an algebraic one. I will work through an example in class if we have time." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.18" } }, "nbformat": 4, "nbformat_minor": 5 }