The Bias-Variance Tradeoff

To avoid extremely long and redundant blog posts, rather than writing notes on an entire chapter of Deep Learning, I will write about a chapter subsection or some topic I find interesting.


Today’s post is about the bias-variance tradeoff, a well-known problem in machine learning that involves minimizing two sources of error which prevent supervised learning algorithms from generalizing well to new data. We’ll discuss point estimation, statistical bias and variance (and their relationship with generalization), and consistency.

Point Estimation

From Deep Learning: “Point estimation is the attempt to provide the single ‘best’ prediction of some quantity of interest.” The quantity of interest may be a parameter, a vector of parameters, or even a function. The book uses the notation \mathbf{\hat\theta} to denote a point estimate of a parameter \mathbf{\theta}.

Now, if \{ \mathbf{x}^{(1)}, ..., \mathbf{x}^{(m)}\} is a set of independent and identically distributed (IID) data samples, any function of the data

\mathbf{\hat\theta}_m = g(\mathbf{x}^{(1)}, ..., \mathbf{x}^{(m)})

is considered a point estimate or a statistic of the data. This definition doesn’t require that g returns a good estimate of \mathbf{\theta}, or even that the range of g is the same as the allowable values of \mathbf{\theta}; in this sense, it allows the designer of the estimator a great deal of flexibility. We consider a good estimator to be a function g whose output is close to the true \mathbf{\theta} which generated the dataset \{ \mathbf{x}^{(1)}, ..., \mathbf{x}^{(m)} \}.
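To make this concrete, here is a tiny sketch (my own, not from the book) in which the sample mean plays the role of g, a point estimator of the mean of a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameter theta (here, the mean of a Gaussian) -- fixed but "unknown".
theta = 2.5

# An IID dataset {x^(1), ..., x^(m)} drawn from the data generating distribution.
m = 100
x = rng.normal(loc=theta, scale=1.0, size=m)

# Any function g of the data is a statistic; the sample mean is one point estimator of theta.
def g(samples):
    return samples.mean()

theta_hat = g(x)
print(theta_hat)  # close to 2.5, but varies with the particular dataset drawn
```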

In this discussion, we take the frequentist view on statistics: the true parameter value \mathbf{\theta} is fixed but unknown, and the statistic \mathbf{\hat\theta} is a function of the dataset. Since the data is generated from a random process parametrized by \mathbf{\theta}, any function of the dataset is random; therefore, the estimator \mathbf{\hat\theta} is a random variable.

Point estimation can also refer to estimation of the relationship between input (explanatory) and output (dependent) variables; this is commonly referred to as function estimation. Here, we try to predict a variable \mathbf{y} given an input vector \mathbf{x}. We assume there is a function f(\mathbf{x}) which approximates the relationship between the two; e.g., we might assume that \mathbf{y} = f(\mathbf{x}) + \mathbf{\epsilon}, where \mathbf{\epsilon} is the part of \mathbf{y} which cannot be predicted from \mathbf{x}. In function estimation, we wish to approximate f with a model or estimator \hat f. In this sense, function estimation is the same as point estimation: the estimated \hat f is simply a point estimate in function space.
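As a small sketch of function estimation (my own illustration; the true f, the noise level, and the affine model family are all assumptions), we can fit \hat f by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume y = f(x) + eps with f(x) = 3x - 1 and Gaussian noise eps.
def f(x):
    return 3.0 * x - 1.0

m = 200
x = rng.uniform(-1.0, 1.0, size=m)
y = f(x) + rng.normal(scale=0.3, size=m)

# Estimate f_hat by least squares over the family of affine functions;
# the fitted coefficients are a point estimate in (a small) function space.
slope_hat, intercept_hat = np.polyfit(x, y, deg=1)
print(slope_hat, intercept_hat)  # approximately 3 and -1
```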

We can now review the most commonly studied properties of point estimators and discuss what they imply about these estimators.

Bias

Here, we are referring to statistical bias; that is, “the difference between this estimator’s expected value and the true value of the parameter being estimated” (Wikipedia). More formally, this is defined as:

\text{bias}(\mathbf{\hat\theta}_m) = \mathbb{E}[\mathbf{\hat\theta}_m] - \mathbf{\theta},

where the expectation is taken over the dataset (viewed as samples of a random variable) and \mathbf{\theta} is the true value of the estimated parameter, used to define the data generating distribution. We call an estimator unbiased if \text{bias}(\mathbf{\hat\theta}_m) = 0, implying that its expected value \mathbb{E}[\mathbf{\hat\theta}_m] = \mathbf{\theta}. An estimator is called asymptotically unbiased if \text{lim}_{m \rightarrow \infty} \text{bias}(\mathbf{\hat\theta}_m) = 0, implying \text{lim}_{m \rightarrow \infty} \mathbb{E}[\mathbf{\hat\theta}_m] = \mathbf{\theta}.
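We can check these definitions numerically. The sketch below (my own, assuming Gaussian data) estimates the bias of the “divide by m” and “divide by m − 1” variance estimators by averaging over many simulated datasets:

```python
import numpy as np

rng = np.random.default_rng(2)

true_var = 4.0   # sigma^2 of the data generating distribution
m = 10           # a small sample size makes the bias visible
n_trials = 200_000

samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(n_trials, m))

# Two estimators of the variance: divide by m (biased) or by m - 1 (unbiased).
var_biased = samples.var(axis=1, ddof=0)
var_unbiased = samples.var(axis=1, ddof=1)

# bias(theta_hat_m) = E[theta_hat_m] - theta, approximated by averaging over trials.
print(var_biased.mean() - true_var)    # approximately -sigma^2 / m = -0.4
print(var_unbiased.mean() - true_var)  # approximately 0
```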

Variance and Standard Error

We may also wish to consider how much we expect our estimator to vary as a function of the data sample. Just as we may compute the expectation of an estimator to determine its bias, we may also compute its variance, given by:

\text{Var}(\mathbf{\hat\theta}),

where the random variable is the dataset. The square root of the variance is called the standard error, denoted by \text{SE}(\mathbf{\hat\theta}).

The variance or standard error of an estimator is a measure of how much we expect the output of our estimator to vary as a function of independent resampling of data from the data generating distribution. Just as we might like an estimator to exhibit low bias, we might also want it to have low variance.

When we compute any statistic using a finite number of data samples, our estimate of the true parameter will always be uncertain, in the sense that we could have obtained different samples from the data generating distribution whose statistics would be different. Therefore, the expected degree of variation in an estimator is a source of error we would like to quantify.

The standard error of the mean is given by

\text{SE}(\hat\mu) = \sqrt{\text{Var}[\frac{1}{m}\sum_{i=1}^{m} x^{(i)}]} = \frac{\sigma}{\sqrt{m}},

where \sigma^2 is the true variance of the samples x^{(i)}. The standard error is often estimated by using an estimate of \sigma; however, neither the square root of the sample variance nor the square root of the unbiased estimator of the variance is an unbiased estimate of the standard deviation. Both tend to underestimate the true standard deviation, yet are still commonly used in practice. For large m, the square root of the unbiased estimate of the variance provides a reasonable approximation.
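Here is a quick sketch (my own, assuming Gaussian samples) that compares the empirical spread of the sample mean across many resampled datasets to the analytic \sigma / \sqrt{m}:

```python
import numpy as np

rng = np.random.default_rng(3)

sigma = 2.0   # true standard deviation of the samples
m = 50
n_trials = 100_000

# Repeatedly resample a dataset of size m and compute the sample mean each time.
samples = rng.normal(loc=0.0, scale=sigma, size=(n_trials, m))
means = samples.mean(axis=1)

# Spread of the estimator across resamplings vs. the analytic SE = sigma / sqrt(m).
print(means.std())          # approximately 0.283
print(sigma / np.sqrt(m))   # 0.2828...
```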

The standard error of the mean can be useful for machine learning experiments. We often estimate the generalization error by computing the sample mean of the error on the test dataset, where the number of data samples in the set determines this estimate’s accuracy. Using the central limit theorem, which tells us that the sample mean will be approximately normally distributed, we can use the standard error to compute the probability that the true expectation falls in any given interval. For example, the 95% confidence interval centered on the mean \hat\mu_m is

(\hat\mu_m - 1.96 \text{SE}(\hat\mu_m),\hat\mu_m + 1.96 \text{SE}(\hat\mu_m))

under the normal distribution with mean \hat\mu_m and variance \text{SE}(\hat\mu_m)^2. In machine learning experiments, it is typical to say that algorithm A is better than algorithm B if the upper bound of the 95% confidence interval for the error of algorithm A is less than the lower bound of the 95% confidence interval for the error of algorithm B.
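As a sketch of this comparison (my own; the per-example 0/1 losses and error rates below are made up for illustration), the confidence intervals might be computed like this:

```python
import numpy as np

def mean_ci95(errors):
    """95% CI for the expected error from per-example test errors (normal approximation)."""
    errors = np.asarray(errors, dtype=float)
    m = errors.size
    mu_hat = errors.mean()
    se = errors.std(ddof=1) / np.sqrt(m)   # estimated standard error of the mean
    return mu_hat - 1.96 * se, mu_hat + 1.96 * se

rng = np.random.default_rng(4)
errors_a = rng.binomial(1, 0.10, size=2000)  # hypothetical per-example 0/1 losses, algorithm A
errors_b = rng.binomial(1, 0.15, size=2000)  # hypothetical per-example 0/1 losses, algorithm B

lo_a, hi_a = mean_ci95(errors_a)
lo_b, hi_b = mean_ci95(errors_b)
print("A better than B:", hi_a < lo_b)
```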

Trading off Bias and Variance to Minimize Mean Squared Error

Bias and variance measure two different sources of an estimator’s error: bias measures the expected deviation from the true value of the function or parameter, and variance measures the deviation from the expected estimator value that any particular sampling from the data generating distribution is likely to cause.

How do we choose between two estimators, one with more bias and one with more variance? The most common way to settle this trade-off is cross-validation, in which the training data is partitioned into k equally sized subsets, each of which is “held out” in turn and used as validation data over a series of training/validation “rounds”. The results from evaluating the estimator on the validation data in each round are averaged to produce an estimate of the generalization error, as in the sketch below.
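A minimal sketch of k-fold cross-validation (my own; model_factory is a hypothetical callable returning an object with fit and predict methods, and I use a squared-error loss):

```python
import numpy as np

def k_fold_cv_error(model_factory, X, y, k=5, seed=0):
    """Average validation error over k folds. model_factory() is assumed to return
    an object with fit(X, y) and predict(X) methods; X and y are numpy arrays."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    folds = np.array_split(indices, k)

    errors = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_factory()                       # fresh model each round
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[val_idx])
        errors.append(np.mean((preds - y[val_idx]) ** 2))   # squared-error loss
    return float(np.mean(errors))
```

Here model_factory might be something like lambda: LinearRegression() if you use scikit-learn; the point is simply that each round trains a fresh model on k − 1 folds and evaluates it on the held-out fold.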

Alternatively, we may use the mean squared error (MSE) to compare estimators:

\text{MSE} = \mathbb{E}[(\mathbf{\hat\theta}_m - \mathbf{\theta})^2]

= \text{Bias}(\mathbf{\hat\theta}_m)^2 + \text{Var}(\mathbf{\hat\theta}_m).

The MSE measures the overall expected deviation, in the sense of squared errors, between the estimator and the true value of the parameter \mathbf{\theta}. From the above equation, the MSE incorporates both the bias and variance in a natural way. Desirable estimators are those with small MSE, and small bias and variance components in turn.
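For completeness, here is a short derivation of this decomposition (a standard argument, not quoted from the book), obtained by adding and subtracting \mathbb{E}[\mathbf{\hat\theta}_m] inside the square:

\mathbb{E}[(\mathbf{\hat\theta}_m - \mathbf{\theta})^2] = \mathbb{E}[(\mathbf{\hat\theta}_m - \mathbb{E}[\mathbf{\hat\theta}_m] + \mathbb{E}[\mathbf{\hat\theta}_m] - \mathbf{\theta})^2]

= \mathbb{E}[(\mathbf{\hat\theta}_m - \mathbb{E}[\mathbf{\hat\theta}_m])^2] + (\mathbb{E}[\mathbf{\hat\theta}_m] - \mathbf{\theta})^2

= \text{Var}(\mathbf{\hat\theta}_m) + \text{Bias}(\mathbf{\hat\theta}_m)^2,

where the cross term 2\,\mathbb{E}[\mathbf{\hat\theta}_m - \mathbb{E}[\mathbf{\hat\theta}_m]](\mathbb{E}[\mathbf{\hat\theta}_m] - \mathbf{\theta}) vanishes because \mathbb{E}[\mathbf{\hat\theta}_m - \mathbb{E}[\mathbf{\hat\theta}_m]] = 0.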

The relationship between bias and variance is closely related to the machine learning concepts of overfitting, underfitting, and capacity. When generalization error is measured by MSE, where bias and variance are meaningful components, increasing model capacity tends to lead to an increase in variance and a decrease in bias. This is illustrated in Figure 1, where we see a U-shaped generalization error curve as a function of model capacity.

Figure 1: Bias-Variance Tradeoff as a Function of Model Capacity (adapted from Deep Learning)
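To see this tradeoff numerically, here is a sketch (my own, assuming polynomial regression on a noisy sine function) that estimates bias^2 and variance at a few capacities by refitting the model on many independent training sets:

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    return np.sin(2 * np.pi * x)   # assumed true function

x_test = np.linspace(0.0, 1.0, 100)
n_train, n_datasets, noise = 20, 300, 0.3

for degree in [1, 3, 9]:           # model capacity = polynomial degree
    preds = np.empty((n_datasets, x_test.size))
    for d in range(n_datasets):
        x = rng.uniform(0.0, 1.0, size=n_train)
        y = f(x) + rng.normal(scale=noise, size=n_train)
        coeffs = np.polyfit(x, y, deg=degree)      # refit on a fresh training set
        preds[d] = np.polyval(coeffs, x_test)
    bias_sq = np.mean((preds.mean(axis=0) - f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(degree, bias_sq, variance)   # bias^2 falls and variance rises with capacity
```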

Consistency

So far, we’ve discussed useful properties of estimators given a training dataset of fixed size. We are usually interested in the behavior of the estimator as the size of this dataset grows. We typically want our point estimates to converge to the true value(s) of the corresponding parameter(s) as the number of data samples m in our dataset grows. More formally, we want

\text{plim}_{m \rightarrow \infty} \mathbf{\hat\theta}_m = \mathbf{\theta},

where \text{plim} denotes convergence in probability; i.e., for any \epsilon > 0, P(|\mathbf{\hat\theta}_m - \mathbf{\theta}| > \epsilon) \rightarrow 0 as m \rightarrow \infty. This condition is known as consistency (sometimes known as weak consistency).

Consistency guarantees that the bias induced by the estimator diminishes as the number of training samples grows. The converse is not true: asymptotic unbiasedness does not imply consistency; for example, there exist unbiased estimators which are not consistent.
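Here is a sketch (my own simulation of the kind of example discussed in Deep Learning: estimating the mean of a Gaussian using only the first sample) showing an unbiased estimator that is not consistent, alongside the consistent sample mean:

```python
import numpy as np

rng = np.random.default_rng(6)

theta = 1.0      # true mean of a Gaussian
epsilon = 0.1
n_trials = 20_000

for m in [10, 100, 1000]:
    samples = rng.normal(loc=theta, scale=1.0, size=(n_trials, m))
    first_sample = samples[:, 0]        # theta_hat_m = x^(1): unbiased but not consistent
    sample_mean = samples.mean(axis=1)  # consistent (and unbiased)

    # P(|theta_hat_m - theta| > epsilon), estimated over trials.
    print(m,
          np.mean(np.abs(first_sample - theta) > epsilon),   # stays near 0.92, does not shrink
          np.mean(np.abs(sample_mean - theta) > epsilon))    # shrinks toward 0 as m grows
```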
