Factor analysis represents multivariate data as linear combinations of a smaller number of random quantities called factors. Maximum likelihood is a popular method for estimating the factor loadings, that is, the linear coefficients, and the ML estimates can be computed with the EM algorithm. One difficulty with factor analysis, including ML estimation, is that the loadings are not unique: any rotation of a solution fits the data equally well. Practitioners therefore apply ``factor rotation'' to select a unique, more interpretable set of coefficients. This dissertation introduces a penalized maximum likelihood technique that produces interpretable loadings directly, without rotation. The method exploits the propensity of the Lasso penalty to estimate some coefficients as exactly zero. We also provide an efficient algorithm for computing the estimates.
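In symbols, the setup described above can be sketched as follows (a standard formulation; the notation $x$, $\Lambda$, $f$, $\Psi$, and the tuning parameter $\lambda$ are illustrative choices, not fixed by the abstract):

```latex
% Factor model: p-dimensional observation x, k < p latent factors f
x = \Lambda f + \varepsilon, \qquad
f \sim N(0, I_k), \quad \varepsilon \sim N(0, \Psi),
% so that \mathrm{Cov}(x) = \Lambda \Lambda^\top + \Psi.
% Rotational indeterminacy: for any orthogonal Q,
% (\Lambda Q)(\Lambda Q)^\top = \Lambda \Lambda^\top,
% so \Lambda is identified only up to rotation.
% A Lasso-penalized ML criterion of the kind the abstract describes:
\max_{\Lambda,\, \Psi} \;
\ell(\Lambda, \Psi) \;-\; \lambda \sum_{i,j} \lvert \Lambda_{ij} \rvert,
% where \ell is the Gaussian log-likelihood and the \ell_1 penalty
% drives some loadings \Lambda_{ij} to exactly zero.
```

The $\ell_1$ penalty breaks the rotational symmetry of the likelihood, which is why a sparse solution can be interpretable without a separate rotation step.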