1 About this book

This is a book about concepts of Statistical Learning. The book has an overarching take-home message that we use as the title of this work: All Models Are Wrong. This comes from a famous quote credited to the great British statistician George Edward Pelham Box:

“All models are wrong, but some are useful.”

Because Statistical Learning is inherently related to model building, we must never forget that every proposed model will be wrong. This is why we need to measure the generalization error of a model. We may never be able to come up with a perfect model that gives zero error. And that is okay. It doesn’t have to be perfect. The important—and challenging—thing is to find useful models (yes, we know, easier said than done).

Because Machine Learning, Statistical Learning (SL), and every other name for learning from data together cover a very broad subject, we should warn you that this book is not intended to be the ultimate compilation of every single SL technique ever devised.

Instead, we focus on the concepts that we consider the building blocks that any user or practitioner needs to make sense of most common SL techniques.

A big shortcoming of the book: we don’t cover neural networks. At least not in this first round of iterations. Sorry.

On the plus side: We’ve tried hard to keep the notation as simple and consistent as possible. And we’ve also made a titanic effort to make it extremely visual (lots of diagrams, pictures, plots, graphs, figures, …, you name it).

Prerequisites

We are assuming that you already have some knowledge under your belt.

You will better understand (and hopefully enjoy) the book if you’ve taken one or more courses on the following subjects:

  • linear or matrix algebra
  • multivariable calculus
  • statistics
  • probability
  • programming or scripting

Acknowledgements

Many thanks to the UC Berkeley students and teaching staff of Stat 154 Modern Statistical Prediction and Machine Learning (Fall 2017, Spring 2018, Fall 2019, Spring 2020).

In particular, thank you to Jin Kweon and Jiyoon Jeong for catching many errors in the first iteration of the course slides. Likewise, thank you to Sharon Hui for spotting dozens of typos in the first drafts of the book.

Also, thanks to Anita Silver, Joyce Yip, Jingwei Guan, Skylar Liang, Raymond Chang, Houyu Jiang, Valeria Garcia, and Bing Li for being amazing and committed note takers.

Likewise, thanks to Johnny Hong, Omid Shams Solari, Ryan Theisen, Frank Qiu, and Billy Fang for their collaboration as TAs.