Under the assumption of normality, Grubbs' test can be used to detect a single outlier at a time. It is not recommended for samples with fewer than 7 observations.

Grubbs' test statistic ($G$) is calculated as the ratio of the largest absolute deviation from the sample mean to the sample standard deviation. That is,

$$G = \frac{\max_{i=1,\ldots,N} \lvert y_i - \bar{y} \rvert}{s}$$

$y_i$ is an outlier if $G$ is greater than the critical value,

$$G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t^2_{\alpha/(2N),\,N-2}}{N - 2 + t^2_{\alpha/(2N),\,N-2}}}$$

Where:

$\bar{y}$ is the sample mean,

$s$ is the sample standard deviation,

$N$ is the sample size, and

$t_{\alpha/(2N),\,N-2}$ is the critical value from a $t$ distribution with $N-2$ degrees of freedom and a significance level of $\alpha/(2N)$ (for the two-sided test).
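The calculation above can be sketched in Python. This is a minimal illustration, not a library implementation: the function name `grubbs_test` and its signature are chosen here for clarity, and the example assumes approximately normal data with at most one outlier.

```python
import numpy as np
from scipy import stats

def grubbs_test(y, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier.

    Returns (G, critical_value, is_outlier). Illustrative sketch:
    assumes the data are approximately normally distributed.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    if N < 7:
        raise ValueError("Grubbs' test is not recommended for N < 7")
    mean = y.mean()
    s = y.std(ddof=1)                       # sample standard deviation
    G = np.max(np.abs(y - mean)) / s        # test statistic
    # Squared critical value of the t distribution with N-2 degrees of
    # freedom at significance level alpha/(2N) (two-sided test).
    t2 = stats.t.ppf(1 - alpha / (2 * N), N - 2) ** 2
    crit = ((N - 1) / np.sqrt(N)) * np.sqrt(t2 / (N - 2 + t2))
    return G, crit, G > crit
```

For example, `grubbs_test([1, 2, 2, 3, 3, 4, 30])` flags the value 30: the observation farthest from the mean yields a statistic $G$ exceeding the $N=7$, $\alpha=0.05$ critical value of roughly 2.02.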