Logit Regression and Transformation

In the logit regression model, the predicted values for the dependent or response variable will never fall below 0 or above 1 (in fact, they remain strictly between 0 and 1), regardless of the values of the independent variables; the model is therefore commonly used to analyze binary dependent or response variables (see also the binomial distribution). This is accomplished by applying the following regression equation (the term logit was first used by Berkson, 1944):

y = exp(b0 + b1*x1 + ... + bn*xn) / {1 + exp(b0 + b1*x1 + ... + bn*xn)}
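
For concreteness, here is a minimal sketch in Python (using NumPy) that evaluates this equation; the coefficients b0, b1, b2 and the predictor values x1, x2 are purely illustrative and not taken from any data set:

import numpy as np

def logit_predict(x, b):
    """Predicted probability from the logit regression equation:
    y = exp(b0 + b1*x1 + ... + bn*xn) / (1 + exp(b0 + b1*x1 + ... + bn*xn))."""
    eta = b[0] + np.dot(x, b[1:])      # linear predictor b0 + b1*x1 + ... + bn*xn
    return np.exp(eta) / (1.0 + np.exp(eta))

# Illustrative coefficients and predictor values (hypothetical, for demonstration only)
b = np.array([-1.5, 0.8, 2.0])         # b0, b1, b2
x = np.array([0.5, 0.25])              # x1, x2
print(logit_predict(x, b))             # always strictly between 0 and 1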

Regardless of the regression coefficients or the magnitude of the x values, this model will always produce predicted values (predicted y's) strictly between 0 and 1. The name logit stems from the fact that we can easily linearize this model via the logit transformation. Suppose we think of the binary dependent variable y in terms of an underlying continuous probability p, ranging from 0 to 1. We can then transform that probability p as:

p' = log_e {p / (1 - p)}
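
A short Python sketch of this transformation, applied to a few illustrative probability values, shows how p' becomes unbounded as p approaches 0 or 1:

import numpy as np

def logit(p):
    """Logit (logistic) transformation: p' = log_e(p / (1 - p))."""
    return np.log(p / (1.0 - p))

# Probabilities near 0 map to large negative values, probabilities near 1
# to large positive values; the transformed scale is unbounded.
for p in (0.001, 0.25, 0.5, 0.75, 0.999):
    print(p, logit(p))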

This transformation is referred to as the logit or logistic transformation. Note that p' can theoretically assume any value between minus and plus infinity. Since the logit transform removes the 0/1 boundaries of the original dependent variable (a probability), the transformed values can be used in an ordinary linear regression equation. In fact, if we apply the logit transform to both sides of the logit regression equation stated earlier, we obtain the standard linear multiple regression model:

p' = b0 + b1*x1 + b2*x2 + ... + bn*xn
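
The following Python sketch, again with illustrative (hypothetical) coefficients and predictor values, verifies numerically that applying the logit transform to a predicted probability from the earlier equation recovers the linear predictor b0 + b1*x1 + b2*x2:

import numpy as np

b = np.array([-1.5, 0.8, 2.0])          # illustrative coefficients b0, b1, b2
x = np.array([0.5, 0.25])               # illustrative predictor values x1, x2

eta = b[0] + np.dot(x, b[1:])           # linear predictor b0 + b1*x1 + b2*x2
y = np.exp(eta) / (1.0 + np.exp(eta))   # logit regression prediction (a probability)

# Applying the logit transform to the predicted probability recovers
# the linear predictor: log(y / (1 - y)) equals b0 + b1*x1 + b2*x2.
print(np.log(y / (1.0 - y)), eta)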

For additional details, see Nonlinear Estimation or the Generalized Linear Models method of analysis; see also Probit Transformation and Regression and Multinomial Logit and Probit Regression for similar transformations.