
EIGENVECTORS AND EIGENVALUES
- Aryan

- Mar 22
- 5 min read
What Are Matrices?
Matrices are structured arrays of numbers that enable various mathematical operations, such as addition, multiplication, and dot products. Fundamentally, a matrix defines a linear transformation—a function that maps vectors from one coordinate space to another by altering their coordinates in a linear manner.
Matrix as a Transformation
Consider the following matrix A:



Understanding the Transformation

As a result, the vector x = (0,2) is now mapped to (6,0) in this new coordinate system.
Key Insight: Matrices serve as powerful tools for linear transformations. When we apply a matrix to a vector using matrix-vector multiplication, we are effectively transforming the entire coordinate space. Every vector in that space, including the one we focused on, is mapped to a new position in the transformed space.
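To make this concrete, here is a minimal NumPy sketch. The matrix from the original figure is not reproduced above, so the values below are an assumption, chosen only so that the stated mapping (0,2) → (6,0) holds.

```python
import numpy as np

# Assumed matrix: the original figure is not shown here, so these values are
# illustrative, chosen so that the second column is (3, 0) and x = (0, 2)
# maps to (6, 0) as described in the text.
A = np.array([[1.0, 3.0],
              [2.0, 0.0]])
x = np.array([0.0, 2.0])

# Matrix-vector multiplication applies the linear transformation to x.
print(A @ x)  # [6. 0.]
```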
Eigenvectors and Eigenvalues
Let's consider the matrix:

and examine how it transforms vectors in a given vector space. Under this transformation, most vectors move to new positions. For example, the vector x = (−1,2) moves to (−2,1), and (−3,1) moves to (−6,−2). This illustrates that most vectors change their span after the transformation.
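The matrix itself appears only in a figure, so as an assumption take A = [[2, 0], [1, 1]], which is consistent with both mappings above. A short NumPy check confirms them:

```python
import numpy as np

# Assumed matrix, reconstructed so that it reproduces both mappings quoted above.
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])

print(A @ np.array([-1.0, 2.0]))  # [-2.  1.]
print(A @ np.array([-3.0, 1.0]))  # [-6. -2.]
```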


Understanding Span and Transformation
The span of a single vector is the set of all its scalar multiples, i.e., the line through the origin in the direction of that vector. When we apply a matrix transformation, the span of most vectors changes, meaning the transformed vector points in a different direction than the original. However, there exist certain vectors that do not change their span after transformation. These vectors remain on the same line, though their magnitude may change.
Definition of Eigenvectors and Eigenvalues
An eigenvector of a matrix is a nonzero vector that, when multiplied by the matrix, results in a scalar multiple of itself. This means the transformation does not change its span; only its magnitude (and possibly its sign) may be altered. Mathematically, if v is an eigenvector of matrix A, then:
Av = λv
where:
λ is a scalar called the eigenvalue, representing the factor by which the eigenvector is scaled.
Example Interpretation
From the given example, consider the vector (2,0). After transformation by matrix A, it is mapped to (4,0). Here:
The vector retains the same span, meaning it is an eigenvector.
The scaling factor (eigenvalue) is λ = 2, as the magnitude has doubled.
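As a quick numerical test of this definition, here is a small hedged sketch. The helper name eigen_check and the example matrix are made up for illustration; the test simply asks whether Av is a scalar multiple of v.

```python
import numpy as np

def eigen_check(A, v, tol=1e-9):
    """Return the eigenvalue lambda if v is an eigenvector of A, else None."""
    v = np.asarray(v, dtype=float)
    Av = np.asarray(A, dtype=float) @ v
    # Use the largest component of v to estimate the scaling factor,
    # then confirm that Av really is that multiple of v.
    i = np.argmax(np.abs(v))
    lam = Av[i] / v[i]
    return lam if np.allclose(Av, lam * v, atol=tol) else None

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])           # made-up example matrix
print(eigen_check(A, [1.0, 0.0]))    # 3.0, so (1, 0) is an eigenvector
print(eigen_check(A, [1.0, 1.0]))    # None, so (1, 1) is not
```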


Intuition: Axis of Rotation
What is an Axis of Rotation?
In linear algebra, the axis of rotation refers to a fixed line around which a transformation occurs. When applying a linear transformation using a matrix, some vectors might change direction, but certain vectors remain along the same span—these vectors act like the axis of rotation.
An eigenvector with an eigenvalue of 1 means that after the transformation, it remains unchanged.
If an eigenvector's span does not change, it means the transformation is happening around it—making it behave like an axis of rotation.
This concept is useful in understanding how transformations affect vector spaces and coordinate systems.
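A concrete way to see this intuition, using a standard 3D rotation about the z-axis (not a matrix from the article): the axis vector is an eigenvector with eigenvalue 1, because the rotation leaves it completely unchanged.

```python
import numpy as np

theta = np.pi / 4  # rotate 45 degrees about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

axis = np.array([0.0, 0.0, 1.0])
print(R @ axis)                        # [0. 0. 1.], the axis is unchanged (eigenvalue 1)
print(R @ np.array([1.0, 0.0, 0.0]))   # [0.707 0.707 0.], rotated off its original span
```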

Transformation and Shrinking Effect
When applying the matrix:

Some vectors change direction and scale.
However, a particular eigenvector behaves like an axis of rotation—meaning its span remains unchanged.
Eigenvectors as the Axis of Rotation
Eigenvectors do not change their span under transformation.
Instead of shifting in a new direction, they only scale.
If an eigenvector remains aligned after transformation, it acts as an axis of rotation for the system.
Scaling vs. Rotation
Linear transformations scale eigenvectors but do not change their span.
The eigenvalue determines the scaling factor (a quick numeric check follows this list):
○ λ = 1 → No change.
○ λ > 1 → Stretching.
○ 0 < λ < 1 → Shrinking.
○ λ < 0 → Flipping.
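A minimal numeric illustration of these four cases, using made-up diagonal matrices so that (1, 0) is always an eigenvector:

```python
import numpy as np

v = np.array([1.0, 0.0])  # eigenvector of every diagonal matrix below

for lam in [1.0, 3.0, 0.5, -2.0]:
    A = np.diag([lam, 1.0])
    # lam = 1 leaves v unchanged; 3 stretches it; 0.5 shrinks it; -2 flips it.
    print(lam, A @ v)
```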
HOW TO CALCULATE EIGENVECTORS AND EIGENVALUES
By definition, an eigenvector satisfies the equation:
AX = λX
where A is a matrix, X is a vector, and λ is a scalar.
When we apply the linear transformation represented by matrix A to an eigenvector X, we get back the same vector scaled by a factor, and that scaling factor is λ. Rearranging the equation:
AX = λX
AX = λIX
AX − λIX = 0
(A − λI)X = 0
For a nontrivial solution (i.e., a nonzero eigenvector X), the determinant of A − λI must be zero:
det(A − λI) = 0
This implies that A − λI is a non-invertible (singular) matrix.
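The matrix used in the original worked example is not reproduced above, so as an assumption take A = [[2, 0], [1, 1]], the matrix consistent with the mappings (−1,2) → (−2,1) and (−3,1) → (−6,−2) shown earlier. Then:
det(A − λI) = det([[2 − λ, 0], [1, 1 − λ]]) = (2 − λ)(1 − λ) = 0
which gives λ = 2 and λ = 1.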

There are two eigenvalues, which means there are two eigenvectors — one for λ = 2 and one for λ = 1.
Let's calculate the eigenvectors.
The eigenvector equation is:
(A − λI)V = 0
For λ = 2:

This gives the system of equations:

Solving this system gives the eigenvector for λ = 2. Applying A to that vector returns it scaled by a factor of λ = 2, confirming that λ = 2 is an eigenvalue of the matrix.
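Continuing with the assumed matrix A = [[2, 0], [1, 1]] from the sketch above (not necessarily the one in the original figures), NumPy can compute both eigenvalues and eigenvectors directly and verify Av = λv:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])  # assumed example matrix

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # 2.0 and 1.0 (order may vary)

# Each column of `eigenvectors` is an eigenvector; check A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))  # True for both
```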

Properties
Sum of Eigenvalues: The sum of all the eigenvalues of a matrix is equal to its trace (the sum of the diagonal elements of the matrix). Note that eigenvalues are defined only for square matrices.
Product of Eigenvalues: The product of all the eigenvalues of a square matrix is equal to its determinant.
Eigenvectors of a symmetric matrix are orthogonal: If a matrix A is symmetric (i.e., A = Aᵀ), the eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.
Eigenvalues of an Identity Matrix: For an identity matrix, the eigenvalues are all 1, regardless of the dimension of the matrix.
Eigenvalues of a Scalar Multiple: If B is a matrix obtained by multiplying a matrix A by a scalar c (i.e., B = cA), then the eigenvalues of B are the eigenvalues of A, each multiplied by c.
Eigenvalues of a Diagonal Matrix: For a diagonal matrix, the eigenvalues are the diagonal elements themselves.
Eigenvalues of a Transposed Matrix: The eigenvalues of a matrix and its transpose are the same.
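A quick sanity check of the first three properties on a small made-up symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # made-up symmetric matrix for illustration

eigenvalues, eigenvectors = np.linalg.eig(A)

print(np.isclose(eigenvalues.sum(), np.trace(A)))         # True: sum of eigenvalues = trace
print(np.isclose(eigenvalues.prod(), np.linalg.det(A)))   # True: product of eigenvalues = determinant
print(np.isclose(eigenvectors[:, 0] @ eigenvectors[:, 1], 0.0))  # True: eigenvectors are orthogonal
```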
Eigenvectors in PCA
Consider a dataset with two input features: CGPA and IQ, and one output feature: LPA. The output LPA depends on CGPA and IQ. In Principal Component Analysis (PCA), our goal is to perform feature extraction by identifying the axis that captures the highest variance in the data. PCA finds new feature directions that maximize variance, helping in dimensionality reduction. To achieve this, we compute the covariance matrix of the data:

Since the covariance matrix is symmetric (of size n × n), its eigenvectors are orthogonal.
The eigenvectors of this matrix represent the new feature directions along which the data has maximum variance, and each eigenvalue measures the variance along its corresponding eigenvector. The eigenvector with the largest eigenvalue captures the highest variance and is selected as the first principal component for dimensionality reduction.
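A minimal PCA sketch along these lines. The CGPA and IQ values below are made up, and only the two input features are used, since PCA here operates on the inputs rather than the output LPA:

```python
import numpy as np

# Made-up data: each row is a student, columns are CGPA and IQ.
X = np.array([[8.2, 110.0],
              [6.5,  95.0],
              [9.1, 120.0],
              [7.0, 100.0],
              [8.8, 115.0]])

# Center the data and compute the (symmetric) covariance matrix.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Eigenvectors of the covariance matrix are the principal directions;
# the eigenvalues are the variances captured along those directions.
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh is suited to symmetric matrices
pc1 = eigenvectors[:, np.argmax(eigenvalues)]     # direction of maximum variance

# Project onto the first principal component: two features reduced to one.
X_reduced = X_centered @ pc1
print(pc1)
print(X_reduced)
```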


