
Linear Algebra — Vectors and Matrices

Core concepts in linear algebra

eigenvalue_evan 25 terms Feb 22, 2026

Terms (25)

1
Vector Space
Set of vectors closed under addition and scalar multiplication, satisfying eight axioms
2
Linear Independence
Vectors are linearly independent if none is a linear combination of the others; equivalently, c₁v₁ + ... + cₙvₙ = 0 has only the trivial solution c₁ = ... = cₙ = 0
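A quick numerical check, sketched with NumPy (the example vectors are made up): stack the vectors as columns and compare the rank to the number of vectors.

import numpy as np

# Independent iff the rank of the column-stacked matrix equals the vector count
v1, v2, v3 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 0.])
V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))  # 2, not 3: v3 = v1 + v2, so the set is dependent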
3
Span
Set of all linear combinations of a set of vectors
4
Basis
Linearly independent set of vectors that spans the vector space
5
Dimension
Number of vectors in a basis of the vector space
6
Matrix Multiplication
(AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ; the number of columns of A must equal the number of rows of B
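The index formula maps directly onto a triple loop; a minimal sketch (toy matrices, checked against NumPy's built-in product):

import numpy as np

def matmul(A, B):
    # (AB)_ij = sum over k of A_ik * B_kj
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "columns of A must equal rows of B"
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.allclose(matmul(A, B), A @ B))  # True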
7
Determinant
Scalar value representing scaling factor of linear transformation; det(A) = 0 means singular
8
Inverse Matrix
A⁻¹ such that AA⁻¹ = A⁻¹A = I; exists if and only if det(A) ≠ 0
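Sketch with NumPy, using a made-up invertible matrix:

import numpy as np

A = np.array([[2., 1.], [1., 1.]])        # det(A) = 1, so the inverse exists
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A·A⁻¹ = I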
9
Eigenvalue
Scalar λ such that Av = λv for nonzero vector v; characteristic of transformation
10
Eigenvector
Nonzero vector v satisfying Av = λv; the transformation only scales it by λ, so its line through the origin is unchanged
11
Characteristic Polynomial
p(λ) = det(A − λI); its roots, i.e. the solutions of det(A − λI) = 0, are the eigenvalues
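The three cards above meet in one numerical check of Av = λv, here with a hypothetical matrix:

import numpy as np

A = np.array([[2., 1.], [1., 2.]])
eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))    # True for every (λ, v) pair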
12
Diagonalization
A = PDP⁻¹, where D is a diagonal matrix of eigenvalues and the columns of P are the corresponding eigenvectors; requires n linearly independent eigenvectors
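A sketch of the factorization with NumPy (the matrix is a made-up example with distinct eigenvalues, so n independent eigenvectors exist):

import numpy as np

A = np.array([[4., 1.], [2., 3.]])        # eigenvalues 5 and 2
eigvals, P = np.linalg.eig(A)             # columns of P are eigenvectors
D = np.diag(eigvals)
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True: A = P·D·P⁻¹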
13
Orthogonal Matrix
Matrix where columns are orthonormal; Q^T = Q⁻¹; preserves lengths and angles
14
Dot Product
u·v = Σuᵢvᵢ = |u||v|cos(θ); measures similarity; zero if orthogonal
15
Cross Product
u×v gives vector perpendicular to both; |u×v| = |u||v|sin(θ); right-hand rule
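Both products in two lines of NumPy (standard basis vectors as a toy example):

import numpy as np

u, v = np.array([1., 0., 0.]), np.array([0., 1., 0.])
print(np.dot(u, v))    # 0.0: u and v are orthogonal
print(np.cross(u, v))  # [0. 0. 1.]: perpendicular to both, by the right-hand rule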
16
Rank
Dimension of column space (or row space); number of linearly independent rows/columns
17
Null Space (Kernel)
Set of all x such that Ax = 0; dimension is nullity
18
Rank-Nullity Theorem
rank(A) + nullity(A) = number of columns of A
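A sketch of the theorem on a made-up rank-1 matrix (SciPy's null_space gives an independent count of the nullity):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.], [2., 4., 6.]])  # second row = 2 × first, so rank 1
rank = np.linalg.matrix_rank(A)             # 1
nullity = null_space(A).shape[1]            # 2 basis vectors for {x : Ax = 0}
print(rank + nullity == A.shape[1])         # True: 1 + 2 = 3 columns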
19
Row Reduction (RREF)
Gaussian elimination to reduced row echelon form; solves linear systems
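SymPy's rref() performs the elimination; a sketch on a hypothetical augmented system (x + 2y = 5, 3x + 4y = 11):

from sympy import Matrix

M = Matrix([[1, 2, 5], [3, 4, 11]])  # augmented matrix of the system
rref, pivots = M.rref()
print(rref)                          # Matrix([[1, 0, 1], [0, 1, 2]]): x = 1, y = 2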
20
Gram-Schmidt Process
Converts any basis into an orthonormal basis via successive orthogonalization and normalization
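A minimal implementation of the classical process (the input vectors, assumed linearly independent, are toy examples):

import numpy as np

def gram_schmidt(vectors):
    # Subtract each vector's projections onto the previous orthonormal
    # vectors, then normalize the remainder.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([np.array([1., 1., 0.]), np.array([1., 0., 1.])])
print(np.allclose(Q @ Q.T, np.eye(2)))  # True: the rows are orthonormal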
21
Singular Value Decomposition (SVD)
A = UΣV^T; decomposes any matrix into rotation, scaling, rotation; fundamental in data science
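One-line decomposition and reconstruction in NumPy (a made-up rectangular matrix):

import numpy as np

A = np.array([[3., 1.], [1., 3.], [0., 2.]])   # works for any m × n matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vt, A))     # True: A = U·Σ·V^T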
22
Principal Component Analysis (PCA)
Uses eigenvectors of covariance matrix to find directions of maximum variance; dimensionality reduction
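A bare-bones PCA sketch via the covariance eigendecomposition (random toy data; real pipelines often use the SVD of the centered data instead):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # toy data: 200 samples, 3 features
Xc = X - X.mean(axis=0)                  # center each feature
cov = np.cov(Xc, rowvar=False)           # 3 × 3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh, since covariance is symmetric
order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
components = eigvecs[:, order[:2]]       # top-2 principal directions
print((Xc @ components).shape)           # (200, 2): reduced to 2 dimensions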
23
Trace
Sum of diagonal elements of a matrix; equals sum of eigenvalues
24
Symmetric Matrix
A = A^T; a real symmetric matrix has real eigenvalues and an orthonormal basis of eigenvectors (spectral theorem)
25
Positive Definite Matrix
All eigenvalues positive; x^TAx > 0 for all nonzero x; covariance matrices are positive semidefinite
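Two standard numerical checks, sketched on a made-up symmetric matrix:

import numpy as np

A = np.array([[2., 1.], [1., 2.]])         # symmetric, eigenvalues 1 and 3
print(np.all(np.linalg.eigvalsh(A) > 0))   # True: all eigenvalues positive
np.linalg.cholesky(A)                      # raises LinAlgError unless A is positive definite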