Linear Algebra for Machine Learning

Does anyone have recommendations for a book on linear algebra for machine learning? I am about to start reading Strang's book Linear Algebra and Learning from Data, but I was wondering if anyone has any other recommendations. Merci!
 
This question can be broken down into several use cases.

1. Learning the syntax of linear algebra and its relationship to matrix theory. You have to get used to the symbols, notation and jargon as soon as possible.
2. Some numerical linear algebra.
3. Multivariable calculus: gradient, Jacobian, Hessian.
4. Some background on optimisation.
5. numpy and scipy have buckets of stuff. You learn a lot by applying these libraries to practical cases. The Schaum books are a great source of worked examples as input.
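To make points 2-5 concrete, here is a minimal numpy/scipy sketch; the quadratic function f is just an illustrative choice, picked because its gradient and Hessian are known in closed form:

```python
import numpy as np
from scipy.optimize import minimize

# Point 2: basic numerical linear algebra with numpy
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)          # solve Ax = b
U, s, Vt = np.linalg.svd(A)        # singular value decomposition

# Point 3: gradient and Hessian of f(x) = 0.5 x^T A x - b^T x
def f(v):
    return 0.5 * v @ A @ v - b @ v

grad = A @ x - b                   # analytic gradient: Ax - b, zero at the solution
hess = A                           # the Hessian of this quadratic is A itself

# Point 4: optimisation; minimising f recovers the solution of Ax = b
res = minimize(f, x0=np.zeros(2))
print(np.allclose(res.x, x, atol=1e-4))
```

The point of the exercise is that solving a linear system, computing a gradient and running an optimiser are all the same small story told three ways.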

In general, I don't think that a single book covering both linear algebra and machine learning exists.

(Most books on ML fall short big time, unfortunately; it is quite depressing, to be honest.)
Which books to use for 1-5 depends .. but a first shot is:

Shilov, G.E. (1977) Linear Algebra. Dover.
Dahlquist, G. and Björck, Å. (1974) Numerical Methods. Dover.
Kreider, D.L., Kuller, R.G., Ostberg, D.R. and Perkins, F.W. (1966) An Introduction to Linear Analysis. Addison-Wesley.
Nocedal, J. and Wright, S.J. (2006) Numerical Optimization. Springer. (a bit more advanced)

// I am finishing the manuscript of my new PDE/FDM book and I have about 4-5 chapters on these topics.

I treat these topics and much more in the online courses at www.datasim.nl

// bit of name dropping: Gil Strang is my "academic PhD grandfather".
 
On a side note, that book by Strang is very useful and very well-written.
 
As a follow-on remark (looking into the crystal ball), mainstream ML is grounded in linear algebra due to its (chance?) influence in the 1980s via the seminal paper by Rumelhart, Hinton and Williams (at least that's the way it seems to me). Then everyone latched onto gradient descent? Linear algebra will only get you so far.
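To make the "everyone latched onto gradient descent" point concrete, here is a bare-bones gradient descent on a least-squares problem; the step size, iteration count and the planted solution are all illustrative choices:

```python
import numpy as np

# Toy least-squares problem with a known planted solution
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
b = A @ np.array([1.0, -2.0, 0.5])    # exact solution known by construction

x = np.zeros(3)
lr = 0.01                             # illustrative step size
for _ in range(2000):
    grad = A.T @ (A @ x - b)          # gradient of 0.5 * ||Ax - b||^2
    x -= lr * grad

print(x)                              # close to [1, -2, 0.5]
```

Note that the whole loop is linear algebra: a matrix-vector product, a transpose, a subtraction. That is the sense in which gradient descent rode in on linear algebra's coat-tails.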
The next ML wave is Hilbert space = "linear algebra++", aka RKHS (reproducing kernel Hilbert spaces), which has been used for at least 20 years, but not many seem to know it. The book Learning with Kernels by Schölkopf and Smola does a good job. It contains the popular SVM (Support Vector Machine) as a special case.
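To show that RKHS methods are still linear algebra under the hood, here is a kernel ridge regression sketch in plain numpy; the RBF bandwidth, the regularisation value and the toy sine data are illustrative choices:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Toy 1-D regression data
X = np.linspace(0.0, 2.0 * np.pi, 30)[:, None]
y = np.sin(X).ravel()

# Kernel ridge regression: solve (K + lambda I) alpha = y, a plain linear solve
lam = 1e-3                            # illustrative regularisation
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predict at a new point via the representer theorem: f(x) = sum_i alpha_i k(x, x_i)
X_new = np.array([[np.pi / 2.0]])
y_pred = rbf_kernel(X_new, X) @ alpha
print(y_pred)                         # close to sin(pi/2) = 1
```

The "linear algebra++" label fits: the infinite-dimensional feature space never appears explicitly, and everything reduces to one n-by-n linear solve on the kernel matrix.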

"Linear Functional Analysis" by Rynne and Youngson is readable for someone with linear algebra background.


//

"It has become clear that kernel methods provide a framework for tackling some rather profound issues in machine learning theory. At the same time, successful applications have demonstrated that SVMs not only have a more solid foundation than artificial neural networks, but are able to serve as a replacement for neural networks that perform as well or better, in a wide variety of fields."

Schölkopf and Smola (2002).
 