This post walks through hyperparameter optimization for model selection via cross-validation, and through the regularization technique. The former topic has more recently come to be referred to as **meta-learning**.
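As a taste of the cross-validation approach described there, here is a minimal sketch of selecting a regularization hyperparameter with scikit-learn's `GridSearchCV`; the synthetic data and the candidate grid are illustrative choices, not taken from the post itself.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Illustrative synthetic dataset.
X, y = make_classification(n_samples=200, random_state=0)

# C is the inverse regularization strength; the search keeps the value
# with the best mean cross-validated accuracy over 5 folds.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```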
This post visualizes the bias-variance dilemma, explains how model capacity relates to performance and why it is common practice to split the dataset into training and test sets, and builds learning curves that clarify whether gathering additional data is worthwhile.
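A learning curve of the kind that post builds can be sketched with scikit-learn's `learning_curve` helper; the dataset and model below are placeholders assumed for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Illustrative synthetic dataset.
X, y = make_classification(n_samples=300, random_state=0)

# Score the model at increasing training-set sizes, cross-validated.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.2, 1.0, 5), cv=5,
)

# If the validation score is still rising at the largest size,
# gathering more data is likely to help; if it has plateaued, it isn't.
print(sizes)
print(val_scores.mean(axis=1))
```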
This post defines the concepts of bias and variance and applies them to both a linear and a non-linear model.
This post implements logistic regression in Scikit-learn on the MNIST dataset.
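A minimal sketch in the spirit of that post: multinomial logistic regression on handwritten digits. Scikit-learn's small built-in `digits` dataset stands in for the full MNIST download here to keep the example fast; the split ratio and `max_iter` are illustrative choices.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 digit images, flattened to 64 features (a small MNIST stand-in).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```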