Class MatrixHelper<T>
Provides helper methods for matrix operations used in AI and machine learning algorithms.
public static class MatrixHelper<T>
Type Parameters
T: The numeric type used for matrix elements.
- Inheritance: MatrixHelper<T>
Remarks
For Beginners: A matrix is a rectangular array of numbers arranged in rows and columns. Matrices are fundamental in machine learning for representing data, transformations, and mathematical operations.
Methods
ApplyGivensRotation(Matrix<T>, T, T, int, int, int, int)
Applies a Givens rotation to specific rows of a matrix.
public static void ApplyGivensRotation(Matrix<T> H, T c, T s, int i, int j, int kStart, int kEnd)
Parameters
H (Matrix<T>): The matrix to which the rotation will be applied.
c (T): The cosine component of the Givens rotation.
s (T): The sine component of the Givens rotation.
i (int): The index of the first row to be rotated.
j (int): The index of the second row to be rotated.
kStart (int): The starting column index for the rotation.
kEnd (int): The ending column index for the rotation.
Remarks
For Beginners: This method applies a rotation to two rows of a matrix. It's like mixing two rows together in specific proportions (determined by c and s) to create new rows. This is commonly used to zero out specific elements in numerical algorithms.
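The row mixing can be sketched numerically. Below is an illustrative Python version (numpy arrays stand in for the library's Matrix<T>; `apply_givens_rotation` is a hypothetical stand-in, not the library method):

```python
import numpy as np

def apply_givens_rotation(H, c, s, i, j, k_start, k_end):
    """Rotate rows i and j of H in place over columns [k_start, k_end)."""
    for k in range(k_start, k_end):
        temp = c * H[i, k] + s * H[j, k]
        H[j, k] = -s * H[i, k] + c * H[j, k]
        H[i, k] = temp

# Zero out H[1, 0] by rotating rows 0 and 1.
H = np.array([[3.0, 1.0],
              [4.0, 2.0]])
r = np.hypot(3.0, 4.0)
c, s = 3.0 / r, 4.0 / r
apply_givens_rotation(H, c, s, 0, 1, 0, 2)
# H[1, 0] is now 0; H[0, 0] is 5 (the rotated magnitude).
```

Choosing c and s from the two target elements is exactly what makes the second row's leading entry vanish.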
ApplyHouseholderTransformation(Matrix<T>, Vector<T>, int)
Applies a Householder transformation to a matrix.
public static Matrix<T> ApplyHouseholderTransformation(Matrix<T> matrix, Vector<T> vector, int k)
Parameters
matrix (Matrix<T>): The matrix to transform.
vector (Vector<T>): The Householder vector defining the reflection.
k (int): The starting row and column index for the transformation.
Returns
- Matrix<T>
The transformed matrix.
Remarks
For Beginners: A Householder transformation is a way to reflect vectors across a plane. In matrix operations, it's used to introduce zeros in specific parts of a matrix. This is a key step in many algorithms that decompose matrices into simpler forms.
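The reflection P = I - 2vv^T/(v^T v) can be applied without ever forming P explicitly. A minimal Python sketch (names are illustrative, not the library API):

```python
import numpy as np

def apply_householder(A, v):
    """Apply the reflection P = I - 2*v*v^T/(v^T*v) from the left: P @ A."""
    v = v / np.linalg.norm(v)
    return A - 2.0 * np.outer(v, v @ A)

A = np.array([[4.0, 1.0],
              [3.0, 2.0]])
x = A[:, 0]
v = x.copy()
v[0] += np.linalg.norm(x)   # choose the sign that avoids cancellation
PA = apply_householder(A, v)
# First column of PA is [-5, 0]: the reflection zeroed the entry below the diagonal.
```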
BandDiagonalMultiply(int, int, Matrix<T>, Vector<T>, Vector<T>)
Multiplies a band diagonal matrix by a vector.
public static void BandDiagonalMultiply(int leftSide, int rightSide, Matrix<T> matrix, Vector<T> solutionVector, Vector<T> actualVector)
Parameters
leftSide (int): The number of subdiagonals (bands below the main diagonal).
rightSide (int): The number of superdiagonals (bands above the main diagonal).
matrix (Matrix<T>): The band diagonal matrix stored in compact form.
solutionVector (Vector<T>): The vector where the result will be stored.
actualVector (Vector<T>): The vector to multiply with the matrix.
Remarks
For Beginners: A band diagonal matrix is a matrix where non-zero elements are concentrated around the main diagonal within a certain "band". This method efficiently multiplies such a matrix with a vector without processing all the zero elements outside the band. Band matrices often arise when discretizing differential equations and in image processing algorithms used in machine learning.
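The banded product only visits entries within the band. A Python sketch using full (not compact) storage for clarity — the compact layout used by the library method is an implementation detail this sketch does not reproduce:

```python
def band_multiply(left, right, A, x):
    """Multiply a banded matrix by x, touching only the `left`
    subdiagonals and `right` superdiagonals around the main diagonal."""
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        lo = max(0, i - left)
        hi = min(n - 1, i + right)
        for j in range(lo, hi + 1):
            y[i] += A[i][j] * x[j]
    return y

# Tridiagonal example: one subdiagonal, one superdiagonal.
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
y = band_multiply(1, 1, A, [1.0, 1.0, 1.0])   # [3.0, 4.0, 3.0]
```

For an n×n matrix with bandwidth w, this costs O(n·w) instead of O(n²).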
CalculateDeterminantRecursive(Matrix<T>)
Calculates the determinant of a matrix using a recursive algorithm.
public static T CalculateDeterminantRecursive(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix whose determinant is to be calculated.
Returns
- T
The determinant value of the matrix.
Remarks
For Beginners: The determinant is a special number calculated from a square matrix. It tells us important information about the matrix, such as whether it has an inverse. If the determinant is zero, the matrix doesn't have an inverse.
This method uses a recursive approach, breaking down the calculation into smaller parts by creating submatrices.
Exceptions
- ArgumentException
Thrown when the matrix is not square.
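The recursive approach described above is cofactor expansion along the first row. A self-contained Python sketch (illustrative only; note this is O(n!) and impractical beyond small matrices):

```python
def det_recursive(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for col in range(n):
        # Submatrix with row 0 and column `col` removed.
        sub = [row[:col] + row[col + 1:] for row in M[1:]]
        sign = -1.0 if col % 2 else 1.0
        total += sign * M[0][col] * det_recursive(sub)
    return total

d = det_recursive([[1.0, 2.0],
                   [3.0, 4.0]])   # 1*4 - 2*3 = -2.0
```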
CalculateHatMatrix(Matrix<T>)
Calculates the Hat Matrix (also known as the projection matrix) used in regression analysis.
public static Matrix<T> CalculateHatMatrix(Matrix<T> features)
Parameters
features (Matrix<T>): The feature matrix (design matrix) containing the independent variables.
Returns
- Matrix<T>
The Hat Matrix that projects the dependent variable onto the fitted values.
Remarks
For Beginners: The Hat Matrix is an important concept in regression analysis. It "puts a hat" on your data, transforming your actual observed values into predicted values. Mathematically, it's calculated as H = X(X'X)^(-1)X', where X is your feature matrix, X' is its transpose, and ^(-1) means matrix inverse.
The Hat Matrix has several important properties:
- It's used to calculate fitted values in regression: ŷ = Hy
- The diagonal elements (H_ii) tell you how much influence each data point has on the model
- These diagonal values are used to identify outliers and high-leverage points
- In machine learning, understanding the Hat Matrix helps with model diagnostics and improving prediction accuracy
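The formula H = X(X'X)^(-1)X' translates directly into numpy (`hat_matrix` is an illustrative name, not the library method):

```python
import numpy as np

def hat_matrix(X):
    """H = X (X^T X)^{-1} X^T: projects y onto the column space of X."""
    return X @ np.linalg.inv(X.T @ X) @ X.T

# Simple design matrix: intercept column plus one feature.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
H = hat_matrix(X)
# H is symmetric and idempotent (H @ H == H), and its trace equals
# the number of model parameters (here 2).
```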
ComputeGivensRotation(T, T)
Computes the cosine and sine components of a Givens rotation.
public static (T c, T s) ComputeGivensRotation(T a, T b)
Parameters
a (T): The first element used to compute the rotation.
b (T): The second element used to compute the rotation.
Returns
- (T c, T s)
A tuple containing the cosine and sine components of the rotation.
Remarks
For Beginners: A Givens rotation is a way to zero out specific elements in a matrix. It's like rotating a 2D coordinate system to make one component become zero. This is useful in many numerical algorithms to simplify matrices step by step.
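The standard construction picks c and s so that rotating the pair (a, b) sends b to zero. A minimal Python sketch (the `givens` name is illustrative):

```python
import math

def givens(a, b):
    """Return (c, s) so that [c s; -s c] @ [a; b] = [r; 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

c, s = givens(3.0, 4.0)        # c = 0.6, s = 0.8
r = c * 3.0 + s * 4.0          # 5.0: the combined magnitude
zero = -s * 3.0 + c * 4.0      # 0.0: the second component vanishes
```

Production implementations guard against overflow more carefully, but the geometry is the same.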
CreateHouseholderVector(Vector<T>)
Creates a Householder vector from a given vector.
public static Vector<T> CreateHouseholderVector(Vector<T> xVector)
Parameters
xVector (Vector<T>): The input vector.
Returns
- Vector<T>
A Householder vector that can be used for reflection.
Remarks
For Beginners: A Householder vector defines a reflection plane that, when applied to the original vector, zeros out all but the first component. This is useful in many matrix decomposition algorithms to systematically simplify matrices.
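The usual construction adds the vector's norm to its first component, with the sign chosen to avoid cancellation. A Python sketch under those assumptions (illustrative name):

```python
import math

def householder_vector(x):
    """Vector v whose reflection zeroes every component of x except the first."""
    norm = math.sqrt(sum(xi * xi for xi in x))
    v = list(x)
    # Add norm with the sign of x[0] to avoid catastrophic cancellation.
    v[0] += norm if x[0] >= 0 else -norm
    return v

v = householder_vector([4.0, 3.0])   # [9.0, 3.0]

# Reflecting x = [4, 3] with v gives [-5, 0]: all but the first entry is zeroed.
x = [4.0, 3.0]
coef = 2.0 * sum(a * b for a, b in zip(v, x)) / sum(a * a for a in v)
reflected = [xi - coef * vi for xi, vi in zip(x, v)]
```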
ExtractDiagonal(Matrix<T>)
Extracts the diagonal elements of a matrix into a vector.
public static Vector<T> ExtractDiagonal(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix from which to extract the diagonal.
Returns
- Vector<T>
A vector containing the diagonal elements of the matrix.
Remarks
For Beginners: The diagonal of a matrix consists of the elements where the row index equals the column index (top-left to bottom-right). In many AI algorithms, the diagonal elements have special significance, such as representing variances in covariance matrices.
Hypotenuse(T, T)
Calculates the hypotenuse of a right triangle given the lengths of the other two sides.
public static T Hypotenuse(T x, T y)
Parameters
x (T): The length of one side of the right triangle.
y (T): The length of the other side of the right triangle.
Returns
- T
The length of the hypotenuse.
Remarks
For Beginners: The hypotenuse is the longest side of a right triangle, opposite to the right angle. This method calculates it using a numerically stable algorithm that avoids overflow or underflow issues that can occur with a direct application of the Pythagorean theorem (a² + b² = c²).
This function is useful in many AI algorithms, particularly when calculating distances or norms.
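The standard stable trick is to factor out the larger magnitude before squaring, so the intermediate ratio stays in [0, 1]. A Python sketch of that idea (illustrative, not the library code):

```python
import math

def hypotenuse(x, y):
    """sqrt(x^2 + y^2) without overflow: factor out the larger magnitude."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x
    if x == 0.0:
        return 0.0
    r = y / x
    return x * math.sqrt(1.0 + r * r)

h = hypotenuse(3.0, 4.0)           # 5.0
big = hypotenuse(3e200, 4e200)     # 5e200; squaring either input directly
                                   # would overflow to infinity
```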
Hypotenuse(params T[])
Calculates the Euclidean norm (magnitude) of a vector of values.
public static T Hypotenuse(params T[] values)
Parameters
values (T[]): The values to calculate the norm for.
Returns
- T
The Euclidean norm of the values.
Remarks
For Beginners: The Euclidean norm is a way to measure the "length" or "magnitude" of a vector. It's calculated as the square root of the sum of the squares of all values. In a 2D space, this is equivalent to finding the hypotenuse of a right triangle using the Pythagorean theorem.
In machine learning, norms are often used to measure the size of vectors, such as weight vectors in neural networks or for regularization techniques.
InvertUsingDecomposition(IMatrixDecomposition<T>)
Inverts a matrix using a provided matrix decomposition.
public static Matrix<T> InvertUsingDecomposition(IMatrixDecomposition<T> decomposition)
Parameters
decomposition (IMatrixDecomposition<T>): The matrix decomposition to use for inversion.
Returns
- Matrix<T>
The inverse of the matrix.
Remarks
For Beginners: Matrix inversion is like finding the reciprocal of a number, but for matrices. This method uses a decomposition (a way of breaking down a matrix into simpler parts) to efficiently compute the inverse. Matrix inversion is used in many machine learning algorithms, especially in linear regression and when solving systems of linear equations.
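The decomposition-based idea is to factor the matrix once, then solve A·x = e_i for each column of the identity; the solutions assemble into the inverse. A numpy sketch of the same idea (using `np.linalg.solve` in place of a reusable factorization):

```python
import numpy as np

def invert_via_solve(A):
    """Invert by solving A x = e_i for each identity column; a real
    decomposition-based inverse factors A once and reuses the factors."""
    n = A.shape[0]
    return np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = invert_via_solve(A)
# A @ A_inv is the identity matrix (up to rounding).
```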
IsInvertible(Matrix<T>)
Determines if a matrix is invertible (non-singular).
public static bool IsInvertible(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix to check for invertibility.
Returns
- bool
True if the matrix is invertible, false otherwise.
Remarks
For Beginners: An invertible matrix is one that has an inverse - another matrix that, when multiplied with the original, gives the identity matrix. For a matrix to be invertible, it must be square (same number of rows and columns) and have a non-zero determinant. In machine learning, invertible matrices are important for solving linear systems and in algorithms like linear regression.
IsUpperHessenberg(Matrix<T>, T)
Determines if a matrix is in upper Hessenberg form within a specified tolerance.
public static bool IsUpperHessenberg(Matrix<T> matrix, T tolerance)
Parameters
matrix (Matrix<T>): The matrix to check.
tolerance (T): The numerical tolerance for considering a value as zero.
Returns
- bool
True if the matrix is in upper Hessenberg form, false otherwise.
Remarks
For Beginners: An upper Hessenberg matrix is almost triangular - it has zeros below the first subdiagonal. This is an intermediate form used in many eigenvalue algorithms, making computations more efficient than working with a full matrix.
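The check is just: every element strictly below the first subdiagonal must be (numerically) zero. A Python sketch (illustrative name):

```python
def is_upper_hessenberg(M, tol):
    """True when every element below the first subdiagonal is within tol of zero."""
    n = len(M)
    return all(abs(M[i][j]) <= tol
               for i in range(n) for j in range(n) if i > j + 1)

H = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [0.0, 7.0, 8.0]]
ok = is_upper_hessenberg(H, 1e-12)   # True: only M[2][0] must vanish
```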
OrthogonalizeColumns(Matrix<T>)
Orthogonalizes the columns of a matrix using the Gram-Schmidt process.
public static Matrix<T> OrthogonalizeColumns(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix whose columns will be orthogonalized.
Returns
- Matrix<T>
A matrix with orthogonal columns.
Remarks
For Beginners: Orthogonalization means making vectors perpendicular to each other. The Gram-Schmidt process takes a set of vectors and creates a new set where each vector is perpendicular (orthogonal) to all previous vectors. This is important in many machine learning algorithms that need independent features or basis vectors.
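Gram-Schmidt subtracts from each column its projection onto every earlier column. A numpy sketch of the classical (unnormalized) version, assuming linearly independent columns:

```python
import numpy as np

def orthogonalize_columns(A):
    """Classical Gram-Schmidt: subtract projections onto earlier columns."""
    Q = A.astype(float).copy()
    for j in range(Q.shape[1]):
        for i in range(j):
            Q[:, j] -= (Q[:, i] @ Q[:, j]) / (Q[:, i] @ Q[:, i]) * Q[:, i]
    return Q

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = orthogonalize_columns(A)
# The two columns of Q are now perpendicular: their dot product is 0.
```

In finite precision, the modified Gram-Schmidt variant (or Householder-based orthogonalization) is usually preferred for stability.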
OuterProduct(Vector<T>, Vector<T>)
Computes the outer product of two vectors.
public static Matrix<T> OuterProduct(Vector<T> v1, Vector<T> v2)
Parameters
v1 (Vector<T>): The first vector.
v2 (Vector<T>): The second vector.
Returns
- Matrix<T>
A matrix representing the outer product of the two vectors.
Remarks
For Beginners: The outer product of two vectors results in a matrix. If you have a vector of size n and another of size m, their outer product is an n×m matrix whose element at row i, column j is the product of the i-th element of the first vector and the j-th element of the second. This operation is used in various machine learning algorithms, including neural networks for weight updates.
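A minimal Python sketch of the definition (equivalent to numpy's `np.outer`):

```python
def outer_product(v1, v2):
    """n x m matrix with entry [i][j] = v1[i] * v2[j]."""
    return [[a * b for b in v2] for a in v1]

M = outer_product([1.0, 2.0], [3.0, 4.0, 5.0])
# [[3.0, 4.0, 5.0],
#  [6.0, 8.0, 10.0]]
```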
PowerIteration(Matrix<T>, int, T)
Implements the power iteration algorithm to find the dominant eigenvalue and eigenvector of a matrix.
public static (T, Vector<T>) PowerIteration(Matrix<T> aMatrix, int maxIterations, T tolerance)
Parameters
aMatrix (Matrix<T>): The input matrix for which to find the dominant eigenvalue and eigenvector.
maxIterations (int): The maximum number of iterations to perform.
tolerance (T): The convergence tolerance.
Returns
- (T, Vector<T>)
A tuple containing the dominant eigenvalue and its corresponding eigenvector.
Remarks
For Beginners: An eigenvalue and eigenvector are special values and vectors associated with a matrix. When you multiply a matrix by its eigenvector, you get the same vector scaled by the eigenvalue. The power iteration method repeatedly multiplies the matrix by a vector and normalizes it until it converges to the eigenvector with the largest eigenvalue (the dominant one). This is useful in many AI algorithms like PageRank, PCA, and recommendation systems.
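The multiply-and-normalize loop is short enough to sketch directly. A numpy version using the Rayleigh quotient as the eigenvalue estimate (illustrative, not the library implementation):

```python
import numpy as np

def power_iteration(A, max_iterations, tolerance):
    """Repeatedly multiply by A and normalize until the estimate settles."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iterations):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v            # Rayleigh quotient estimate
        if abs(lam_new - lam) < tolerance:
            return lam_new, v
        lam = lam_new
    return lam, v

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
lam, v = power_iteration(A, 100, 1e-10)   # lam converges to 2, v to [1, 0]
```

Convergence speed depends on the gap between the largest and second-largest eigenvalue magnitudes; a small gap means slow convergence.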
ReduceToHessenbergFormat(Matrix<T>)
Reduces a matrix to Hessenberg form, which is useful for eigenvalue calculations.
public static Matrix<T> ReduceToHessenbergFormat(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix to reduce.
Returns
- Matrix<T>
The matrix in Hessenberg form.
Remarks
For Beginners: A Hessenberg matrix is almost triangular - it has zeros below the first subdiagonal. Converting a matrix to Hessenberg form is often a first step in calculating eigenvalues, which are important values that help us understand the behavior of linear transformations in machine learning algorithms.
This method uses Householder transformations to efficiently reduce the matrix.
SpectralNorm(Matrix<T>)
Calculates the spectral norm of a matrix, which is the largest singular value.
public static T SpectralNorm(Matrix<T> matrix)
Parameters
matrix (Matrix<T>): The matrix for which to calculate the spectral norm.
Returns
- T
The spectral norm of the matrix.
Remarks
For Beginners: The spectral norm measures the maximum "stretching" that a matrix can cause when applied to a vector. It's the largest singular value of the matrix, which indicates how much the matrix can amplify a vector in any direction. In machine learning, this helps understand the stability of algorithms and the conditioning of data.
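Since the largest singular value of A is the square root of the largest eigenvalue of A^T A, the spectral norm can be sketched with power iteration on A^T A (a fixed iteration count here for simplicity; a real implementation would test convergence):

```python
import numpy as np

def spectral_norm(A):
    """Largest singular value = sqrt of the dominant eigenvalue of A^T A."""
    B = A.T @ A
    v = np.ones(B.shape[0])
    for _ in range(200):               # fixed iterations for this sketch
        v = B @ v
        v /= np.linalg.norm(v)
    return np.sqrt(v @ B @ v)

A = np.array([[3.0, 0.0],
              [0.0, 4.0]])
n2 = spectral_norm(A)   # 4.0, matching np.linalg.norm(A, 2)
```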
TridiagonalSolve(Vector<T>, Vector<T>, Vector<T>, Vector<T>, Vector<T>)
Solves a tridiagonal system of linear equations.
public static void TridiagonalSolve(Vector<T> vector1, Vector<T> vector2, Vector<T> vector3, Vector<T> solutionVector, Vector<T> actualVector)
Parameters
vector1 (Vector<T>): The subdiagonal elements (below the main diagonal).
vector2 (Vector<T>): The main diagonal elements.
vector3 (Vector<T>): The superdiagonal elements (above the main diagonal).
solutionVector (Vector<T>): The vector where the solution will be stored.
actualVector (Vector<T>): The right-hand side vector of the system.
Remarks
For Beginners: A tridiagonal matrix is a special type of matrix where non-zero elements are only on the main diagonal and the diagonals directly above and below it. This method efficiently solves equations of the form Ax = b, where A is a tridiagonal matrix. Tridiagonal systems appear in many numerical methods for differential equations and spline interpolation used in machine learning.
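The classic O(n) method for such systems is the Thomas algorithm: a forward sweep eliminates the subdiagonal, then back substitution recovers the solution. A Python sketch (illustrative; assumes the system is well-conditioned without pivoting):

```python
def tridiagonal_solve(sub, diag, sup, rhs):
    """Thomas algorithm. sub[i] = A[i][i-1], diag[i] = A[i][i],
    sup[i] = A[i][i+1]; sub[0] and sup[-1] are unused."""
    n = len(diag)
    c = [0.0] * n                      # modified superdiagonal
    d = [0.0] * n                      # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):              # forward elimination
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n                      # back substitution
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Solve: 2x0 + x1 = 4;  x0 + 2x1 + x2 = 8;  x1 + 2x2 = 8
x = tridiagonal_solve([0.0, 1.0, 1.0],
                      [2.0, 2.0, 2.0],
                      [1.0, 1.0, 0.0],
                      [4.0, 8.0, 8.0])   # x = [1.0, 2.0, 3.0]
```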