One-sample test. The Kolmogorov-Smirnov one-sample test for normality is based on the maximum difference between the sample cumulative distribution and the hypothesized cumulative distribution. If the D statistic is significant, the hypothesis that the distribution is normal should be rejected. In many software programs, the reported probability values are based on those tabulated by Massey (1951); those probability values are valid when the mean and standard deviation of the normal distribution are known a priori rather than estimated from the data. However, those parameters are usually computed from the actual data. In that case, the test for normality involves a complex conditional hypothesis ("how likely is it to obtain a D statistic of this magnitude or greater, contingent upon the mean and standard deviation computed from the data"), and this is how the Kolmogorov-Smirnov and Lilliefors probabilities should be interpreted (Lilliefors, 1967). Note that in recent years, the Shapiro-Wilk W test has become the preferred test of normality because of its good power properties compared to a wide range of alternative tests.
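As an illustrative sketch (the sample data and parameters are invented), the one-sample test can be run in Python with SciPy. Note that `scipy.stats.kstest` reports p-values of the Massey type, which assume the distribution is fully specified in advance; when the mean and standard deviation are estimated from the data, a Lilliefors-corrected test (e.g., `statsmodels.stats.diagnostic.lilliefors`) or the Shapiro-Wilk test is more appropriate:

```python
import numpy as np
from scipy import stats

# Hypothetical sample drawn from a normal distribution
rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=200)

# KS test with mean and SD estimated from the data: the reported
# p-value assumes a fully specified distribution (Massey's tables),
# so it is only approximate here; a Lilliefors correction applies.
mean, sd = sample.mean(), sample.std(ddof=1)
d_stat, p_ks = stats.kstest(sample, "norm", args=(mean, sd))

# Shapiro-Wilk W test, generally preferred for testing normality
w_stat, p_sw = stats.shapiro(sample)

print(f"KS D = {d_stat:.4f}, p = {p_ks:.4f}")
print(f"Shapiro-Wilk W = {w_stat:.4f}, p = {p_sw:.4f}")
```

With data actually drawn from a normal distribution, neither test should reject normality at conventional significance levels.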

Two-sample test. The Kolmogorov-Smirnov two-sample test assesses whether two data samples were drawn from the same distribution. As with the one-sample test, the statistic is the maximum difference between cumulative distributions; in this case, the empirical cumulative distributions of the two samples (e.g., observed target values vs. simulated target values). A large difference between the two cumulative sample distributions indicates that the data were not drawn from the same distribution. This test can be used during certain model-building processes to compare predicted outcomes (based on simulated input data) to observed outcomes. A significant difference between predicted and observed outcomes is usually taken as evidence that the model does not adequately reproduce the observed data.
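The two-sample comparison described above can be sketched as follows in Python with SciPy (the sample sizes, distributions, and 0.05 threshold are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed = rng.normal(loc=0.0, scale=1.0, size=150)   # hypothetical observed targets
simulated = rng.normal(loc=0.5, scale=1.0, size=150)  # hypothetical simulated targets

# Two-sample KS statistic: the maximum gap between the two
# empirical cumulative distribution functions
d_stat, p_value = stats.ks_2samp(observed, simulated)

print(f"D = {d_stat:.4f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject: samples unlikely to come from the same distribution")
```

Here the simulated sample is deliberately shifted by half a standard deviation, so the test should flag the two distributions as different; with identical generating distributions, the p-value would typically be non-significant.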