Table of Contents: Machine Learning (Burkov)
1 Introduction
1.1 What Is Machine Learning
1.2 Types of Learning
1.3 How Supervised Learning Works
1.4 Why the Model Works
2 Notation and Definitions
2.1 Notation
2.1.1 Data Structures
2.1.2 Capital Sigma Notation
2.2 Random Variable
2.3 Unbiased Estimators
2.4 Bayes’ Rule
2.5 Parameter Estimation
2.6 Parameters vs. Hyperparameters
2.7 Classification vs. Regression
2.8 Model-Based vs. Instance-Based Learning
2.9 Shallow vs. Deep Learning
3 Fundamental Algorithms
3.1 Linear Regression
3.2 Logistic Regression
3.3 Decision Tree Learning
3.4 Support Vector Machine
3.5 k-Nearest Neighbors
4 Anatomy of a Learning Algorithm
4.1 Building Blocks of a Learning Algorithm
4.2 Gradient Descent
4.3 How Machine Learning Engineers Work
4.4 Learning Algorithms’ Particularities
5 Basic Practice
5.1 Feature Engineering
5.2 Learning Algorithm Selection
5.3 Three Sets
5.4 Underfitting and Overfitting
5.5 Regularization
5.6 Model Performance Assessment
5.7 Hyperparameter Tuning
6 Neural Networks and Deep Learning
6.1 Neural Networks
7 Problems and Solutions
7.1 Kernel Regression
7.2 Multiclass Classification
7.3 One-Class Classification
7.4 Multi-Label Classification
7.5 Ensemble Learning
7.6 Learning to Label Sequences
7.7 Sequence-to-Sequence Learning
7.8 Active Learning
7.9 Semi-Supervised Learning
8 Advanced Practice
8.1 Handling Imbalanced Datasets
8.2 Combining Models
8.3 Training Neural Networks
8.4 Advanced Regularization
8.5 Handling Multiple Inputs
8.6 Handling Multiple Outputs
8.7 Transfer Learning
8.8 Algorithmic Efficiency
9 Unsupervised Learning
9.1 Density Estimation
9.2 Clustering
9.2.1 K-Means
9.3 Dimensionality Reduction
9.4 Outlier Detection