Deep Learning Notes, Chapter 2: Linear Algebra
1. Multiplying Matrices and Vectors
The matrix product \(C = AB\) is defined by: \(C_{i,j} = \sum_k A_{i,k}B_{k,j}\).
\({\bf Dot\ product}\): the dot product between two vectors \(x\) and \(y\) of the same dimensionality is the matrix product \(x^\top y = \sum_i x_i y_i\).
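A quick NumPy sketch of both definitions (the matrices here are arbitrary examples, not from the text): each entry of \(C = AB\) is the dot product of a row of \(A\) with a column of \(B\).

```python
import numpy as np

A = np.arange(6).reshape(2, 3).astype(float)   # A is 2x3
B = np.arange(12).reshape(3, 4).astype(float)  # B is 3x4
C = A @ B                                      # C is 2x4

# Check C[i, j] = sum_k A[i, k] * B[k, j] entry by entry.
for i in range(C.shape[0]):
    for j in range(C.shape[1]):
        assert np.isclose(C[i, j], np.dot(A[i, :], B[:, j]))
print(C)
```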
2. Linear Dependence and Span
It is not possible to have \({\bf more\ than\ one\ but\ less\ than\ infinitely\ many}\) solutions for a particular \(b\): if both \(x\) and \(y\) are solutions, then \(z = \alpha x + (1-\alpha)y\) is also a solution for any real \(\alpha\), since \(Az = \alpha Ax + (1-\alpha)Ay = \alpha b + (1-\alpha)b = b\).
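A small NumPy check of this convex-combination argument, using a hypothetical underdetermined system (2 equations, 3 unknowns) chosen purely for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([6.0, 15.0])

x = np.array([1.0, 1.0, 1.0])    # one solution: A @ x == b
y = np.array([2.0, -1.0, 2.0])   # another solution: A @ y == b

# Any convex (indeed, any affine) combination is also a solution.
for alpha in (-1.0, 0.3, 2.5):
    z = alpha * x + (1 - alpha) * y
    assert np.allclose(A @ z, b)
```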
We can rewrite \(Ax\) as a linear combination of the columns of \(A\): \(Ax = \sum_i x_i A_{:,i}\).
\(\bf Span\): the span of a set of vectors is the set of all points obtainable by linear combination of the original vectors.
\(\bf Linear\ Combination\): multiplying each vector \(v^{(i)}\) by a corresponding scalar coefficient and adding the results: \(\sum_i c_i v^{(i)}\).
Determining whether \(Ax=b\) has a solution thus amounts to testing whether \(b\) is in the span of the columns of \(A\) (the column space of \(A\)), where \(A \in \mathbb{R}^{m\times n}\) and \(b \in \mathbb{R}^m\). For the system to have a solution for every value of \(b \in \mathbb{R}^m\), \(A\) must have at least \(m\) columns, that is, \(n \geq m\). This condition is necessary but not sufficient, since some columns may be redundant (linearly dependent).
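A sketch of this span test in NumPy, assuming a numerical rank comparison is acceptable: \(b\) lies in the column space of \(A\) exactly when appending it as an extra column does not raise the rank. The example matrix is arbitrary.

```python
import numpy as np

def in_span(A, b, tol=1e-10):
    # b is in the span of the columns of A iff rank([A | b]) == rank(A).
    return (np.linalg.matrix_rank(np.column_stack([A, b]), tol)
            == np.linalg.matrix_rank(A, tol))

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # columns span the z = 0 plane in R^3
print(in_span(A, np.array([2.0, 3.0, 0.0])))  # True: lies in that plane
print(in_span(A, np.array([0.0, 0.0, 1.0])))  # False: has a z component
```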
3. Norms
\({\bf Frobenius\ Norm}\) (the analogue of the \(L^2\) norm for matrices): \(\|A\|_F = \sqrt{\sum_{i,j} A_{i,j}^2}\).
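A one-line NumPy check of this formula against the built-in `'fro'` norm (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
manual = np.sqrt(np.sum(A ** 2))               # sqrt of sum of squared entries
assert np.isclose(manual, np.linalg.norm(A, 'fro'))
print(manual)  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```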
4. Eigendecomposition
An eigenvector of a square matrix \(A\) is a nonzero vector \(v\) such that multiplication by \(A\) only scales it: \(Av = \lambda v\), where \(v\) is the \(\bf eigenvector\) and \(\lambda\) is the \(\bf eigenvalue\).
\(\bf Eigendecomposition\): \(A = V\,\mathrm{diag}(\lambda)\,V^{-1}\), where \(V = [v^{(1)}, v^{(2)}, \dots, v^{(n)}]\) concatenates the eigenvectors as columns and \(\lambda = [\lambda_1, \dots, \lambda_n]^\top\) collects the corresponding eigenvalues.
Every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues: \(A = Q\Lambda Q^\top\), where \(Q\) is an orthogonal matrix whose columns are eigenvectors of \(A\) and \(\Lambda\) is a diagonal matrix of the eigenvalues.
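A NumPy sketch of the symmetric case using `np.linalg.eigh` (an arbitrary \(2\times 2\) symmetric matrix as the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, Q = np.linalg.eigh(A)                   # real eigenvalues, orthonormal eigenvectors
assert np.allclose(A, Q @ np.diag(w) @ Q.T)  # A = Q diag(w) Q^T
assert np.allclose(Q.T @ Q, np.eye(2))       # Q is orthogonal
# Each column of Q satisfies A v = lambda v:
assert np.allclose(A @ Q[:, 0], w[0] * Q[:, 0])
```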
5. SVD
\(A = UDV^\top\), where \(A\in\mathbb{R}^{m\times n}\), \(U\in\mathbb{R}^{m\times m}\), \(D\in\mathbb{R}^{m\times n}\), \(V\in\mathbb{R}^{n\times n}\). The matrices \(U\) and \(V\) are orthogonal, and \(D\) is diagonal (not necessarily square); its diagonal entries are the \(\bf singular\ values\) of \(A\), the columns of \(U\) are the left-singular vectors, and the columns of \(V\) are the right-singular vectors.
This means that the right-singular vectors of \(\bf A\) are the eigenvectors of \(\bf A^\top A\): since \(A^\top A = VD^\top U^\top UDV^\top = V(D^\top D)V^\top\), the columns of \(V\) diagonalize \(A^\top A\). Similarly, the left-singular vectors are the eigenvectors of \(AA^\top\).
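A NumPy check of this relationship on an arbitrary \(3\times 2\) example: each right-singular vector \(v^{(i)}\) should satisfy \(A^\top A\, v^{(i)} = \sigma_i^2 v^{(i)}\).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
U, s, Vt = np.linalg.svd(A)        # rows of Vt are the right-singular vectors
for i in range(len(s)):
    v = Vt[i]
    # v is an eigenvector of A^T A with eigenvalue s_i^2.
    assert np.allclose(A.T @ A @ v, s[i] ** 2 * v)
```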
6. Moore-Penrose Pseudoinverse
To solve a linear equation \(Ax = y\) when \(A\) is not square (so no true inverse exists), we can left-multiply by the pseudoinverse: \(x = A^+ y\).
The pseudoinverse of \(\bf A\) is defined as: \(A^+ = \lim_{\alpha \to 0^+}(A^\top A + \alpha I)^{-1}A^\top\). In practice it is computed from the SVD as \(A^+ = VD^+U^\top\), where \(D^+\) is obtained by taking the reciprocal of the nonzero entries of \(D\) and transposing the result.
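A sketch of the SVD construction in NumPy, compared against `np.linalg.pinv` (the example matrix is arbitrary and has full column rank, so all singular values are nonzero):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)
D_plus = np.zeros(A.T.shape)                  # n x m, the transposed shape of D
D_plus[:len(s), :len(s)] = np.diag(1.0 / s)   # reciprocal of nonzero singular values
A_plus = Vt.T @ D_plus @ U.T                  # A^+ = V D^+ U^T
assert np.allclose(A_plus, np.linalg.pinv(A))

# x = A^+ y is the least-squares solution of the overdetermined system Ax = y.
y = np.array([1.0, 0.0, 1.0])
x = A_plus @ y
```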
7. Trace Operation
The trace operator gives the sum of the diagonal entries of a matrix: \(\mathrm{Tr}(A) = \sum_i A_{i,i}\). It lets the Frobenius norm be written as \(\|A\|_F = \sqrt{\mathrm{Tr}(AA^\top)}\). Other properties: \(\mathrm{Tr}(A) = \mathrm{Tr}(A^\top)\), and invariance under cyclic permutation, \(\mathrm{Tr}(ABC) = \mathrm{Tr}(CAB) = \mathrm{Tr}(BCA)\), which holds even when the cycled products have different shapes, as long as each product is square.
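A NumPy check of these properties, with shapes chosen so that all the cycled products are square (all matrices are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
assert np.isclose(np.trace(M), np.trace(M.T))                       # Tr(A) = Tr(A^T)
assert np.isclose(np.linalg.norm(M, 'fro'),
                  np.sqrt(np.trace(M @ M.T)))                       # ||A||_F = sqrt(Tr(AA^T))

A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
# ABC is 2x2, CAB is 4x4, BCA is 3x3, yet all three traces agree.
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
```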