Gaussian Processes for Machine Learning
Complete text of:
Gaussian Processes for Machine Learning, Carl Edward Rasmussen and Christopher K. I. Williams, MIT Press, 2006. ISBN-10 0-262-18253-X, ISBN-13 978-0-262-18253-9.
I like this quote from James Clerk Maxwell:
The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.
Interesting. Is our identification of subjects probabilistic, or is our identification of what we thought others meant probabilistic?
Or both? Neither?
From the preface:
Over the last decade there has been an explosion of work in the “kernel machines” area of machine learning. Probably the best known example of this is work on support vector machines, but during this period there has also been much activity concerning the application of Gaussian process models to machine learning tasks. The goal of this book is to provide a systematic and unified treatment of this area. Gaussian processes provide a principled, practical, probabilistic approach to learning in kernel machines. This gives advantages with respect to the interpretation of model predictions and provides a well founded framework for learning and model selection. Theoretical and practical developments of over the last decade have made Gaussian processes a serious competitor for real supervised learning applications.
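To make the preface's claim concrete, here is a minimal sketch of Gaussian process regression with a squared-exponential kernel, using the standard posterior mean and variance equations. The function names, hyperparameters, and test function are my own illustrative choices, not taken from the book.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance at x_test, given noisy observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky factorization for numerical stability, rather than
    # inverting K directly.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

# Fit to a few noisy samples of sin(x) and predict in between.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 8)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(8)
x_test = np.linspace(0, 2 * np.pi, 50)
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior variance is what gives GPs the "interpretation of model predictions" the preface mentions: it is small near training points and grows away from them.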
I am downloading the PDF version but have just ordered a copy from Amazon.
If you want to encourage MIT Press and other publishers to put materials online as well as in print, order a print copy of this book and of other works that are also available online.
Saying online copies don’t hurt print sales isn’t as convincing as hearing the cash register go “cha-ching!”
(I would also drop a note to the press saying you bought a copy of the online book as well.)