We present an algorithm for estimating head orientation from cropped images of a subject's head, taken from any viewpoint. Our algorithm handles dramatic changes in illumination, applies to many people without per-user initialization, and covers a wider range of head orientations (e.g., side and back views) than previous algorithms. The algorithm builds an ellipsoidal model of the head, where each point on the model maintains probabilistic information about surface edge density. To collect data for each model point, edge-density features are extracted from hand-annotated training images and projected onto the model. Each model point learns a probability density function from the training observations. During pose estimation, features are extracted from the input image, and the maximum a posteriori pose is sought given the current observation.
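
The pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes each of `N_POINTS` ellipsoid surface points stores a per-pose Gaussian (mean, variance) over edge density learned from training data, discretizes pose to a single yaw angle, and uses placeholder values in place of real training observations.

```python
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 64  # points sampled on the ellipsoid surface (assumed)
POSES = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)  # candidate yaw angles

# Placeholder "learned" densities: per-pose, per-point Gaussian parameters
# standing in for what training observations would provide.
means = rng.uniform(0.1, 0.9, size=(len(POSES), N_POINTS))
variances = np.full((len(POSES), N_POINTS), 0.05)

def log_likelihood(observation, pose_idx):
    """Gaussian log-likelihood of observed edge densities under one pose."""
    mu = means[pose_idx]
    var = variances[pose_idx]
    return np.sum(-0.5 * np.log(2 * np.pi * var)
                  - (observation - mu) ** 2 / (2 * var))

def map_pose(observation, log_prior=None):
    """Return the maximum a posteriori pose for the observed features."""
    if log_prior is None:
        log_prior = np.zeros(len(POSES))  # uniform prior over poses
    scores = [log_likelihood(observation, i) + log_prior[i]
              for i in range(len(POSES))]
    return POSES[int(np.argmax(scores))]

# Simulate edge-density features observed near pose index 5, then estimate.
obs = means[5] + rng.normal(0.0, 0.05, size=N_POINTS)
estimated = map_pose(obs)
```

With a uniform prior, the MAP estimate reduces to maximum likelihood over the candidate poses; a non-uniform `log_prior` (e.g., from a temporal model in a tracking setting) would bias the search toward expected orientations.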