Standard error of the mean

Last revised by Daniel J Bell on 9 Oct 2019

The standard error of the mean, SE(M), is a fundamental concept in hypothesis testing.

When you pick a random sample out of a population (say, a 100 data point sample out of a 10,000 data point population), what is the mean value of that sample? It tends toward the population mean, but in practice it will differ with each new sample you draw. If you were to repeatedly pull many different samples out of the population, their means would form a distribution around the population mean.
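To make this concrete, here is a minimal Python sketch (the population values, sizes, and seed are illustrative assumptions, not from the article) that repeatedly draws 100 data point samples from a 10,000 data point population and shows the sample means clustering around the population mean:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative population: 10,000 data points (the values are arbitrary).
population = rng.normal(loc=50.0, scale=12.0, size=10_000)

# Repeatedly draw 100 data point samples and record each sample's mean.
sample_means = [
    rng.choice(population, size=100, replace=False).mean()
    for _ in range(1_000)
]

print(f"Population mean:        {population.mean():.3f}")
print(f"Mean of sample means:   {np.mean(sample_means):.3f}")
print(f"Spread of sample means: {np.std(sample_means):.3f}")  # approximates SE(M)
```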

The standard deviation of this distribution of means is the standard error, which depends on the standard deviation of the population but decreases as the sample size grows. This makes intuitive sense: if you were to pick multiple 10 data point samples out of a 10,000 point population, you would expect more variation in their means than if you picked multiple 2,000 data point samples out of the same population.
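A quick extension of the same sketch (again with assumed, illustrative numbers) shows the spread of means shrinking as the sample size grows from 10 to 2,000:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
population = rng.normal(loc=50.0, scale=12.0, size=10_000)

for n in (10, 2_000):
    means = [rng.choice(population, size=n, replace=False).mean() for _ in range(1_000)]
    print(f"n = {n:5d}: spread of sample means = {np.std(means):.3f}")
# The 10 data point samples produce far more variable means than the 2,000 data point ones.
```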

Standard error of the mean: SE(M) = σ / √n
  • σ: standard deviation (Greek letter sigma) of the population; in practice it is usually unknown and is estimated by the standard deviation of the sample
  • √n: square root of the number of data points in the sample
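A direct translation of the formula might look like the sketch below; it uses the sample standard deviation (with the n - 1 Bessel correction) as the estimate of σ, a common convention assumed here rather than something the article specifies:

```python
import math

def standard_error_of_mean(data: list[float]) -> float:
    """SE(M) = s / sqrt(n), where s is the sample standard deviation
    used as an estimate of the population sigma."""
    n = len(data)
    mean = sum(data) / n
    # Sample variance with Bessel's correction (n - 1 in the denominator).
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

print(standard_error_of_mean([2.0, 4.0, 6.0, 8.0]))  # 1.2909...
```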

The distribution of means is used when accepting or rejecting the null hypothesis in inferential statistics. The p value is the probability of observing a sample mean at least as far from the population mean as the experimental one, assuming the null hypothesis is true; when it falls below the chosen significance level, the experimental mean lies beyond the expected range of variation of means and the result is considered statistically significant.
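As a hedged illustration of how SE(M) feeds into such a decision (assuming a simple two-sided z-test with made-up numbers; the article does not name a particular test), a sketch might be:

```python
from statistics import NormalDist

def z_test_p_value(sample_mean: float, population_mean: float, sem: float) -> float:
    """Two-sided p value: probability of a sample mean at least this far
    from the population mean, measured in units of the standard error."""
    z = (sample_mean - population_mean) / sem
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers: sample mean 52, population mean 50, SE(M) = 0.8.
p = z_test_p_value(52.0, 50.0, 0.8)
print(f"p = {p:.4f}")  # ~0.0124; below 0.05, so this would be called significant
```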