A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It obtains an "optimal" halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly and resist printer distortions at the expense of spatial and gray-scale resolution. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We examine the one-dimensional case, in which each row or column of the image is halftoned independently. One-dimensional least-squares halftoning is implemented exactly, via dynamic programming, with the Viterbi algorithm. Experiments show that it produces better spatial and gray-scale resolution than conventional one-dimensional techniques and eliminates the problems associated with the error diffusion algorithm as modified to account for printer distortions.
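The one-dimensional least-squares formulation can be sketched in a few lines. The toy below is not the paper's implementation: it assumes an ideal printer (no dot-overlap model) and a hypothetical 3-tap symmetric lowpass filter standing in for the visual model, so that the perceived error at sample k depends only on the bits b[k-1], b[k], b[k+1]. The Viterbi state is the pair (b[k-1], b[k]), and each transition to b[k+1] completes the window centered on k and charges that sample's squared error to the path.

```python
import numpy as np

def lsq_halftone_1d(x, h=(0.25, 0.5, 0.25)):
    """Least-squares 1-D halftoning of a gray-scale row x in [0, 1].

    h is a toy symmetric 3-tap "eye" filter (an assumption for this
    sketch); the printer is taken as ideal.  Returns the binary row b
    minimizing sum_k ((h * b)[k] - (h * x)[k])**2 via the Viterbi
    algorithm, with zero (white) borders b[-1] = b[n] = 0.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = np.convolve(x, h, mode="same")   # perceived gray-scale image
    INF = float("inf")
    # state (p, q) = (b[k-1], b[k]); start with b[-1] = 0
    cost = {(p, q): (0.0 if p == 0 else INF) for p in (0, 1) for q in (0, 1)}
    back = []
    for k in range(n):
        new_cost = {s: INF for s in cost}
        choice = {}
        for (p, q), c in cost.items():
            if c == INF:
                continue
            for r in ((0, 1) if k + 1 < n else (0,)):   # border: b[n] = 0
                v = h[0] * p + h[1] * q + h[2] * r      # perceived output at k
                nc = c + (v - y[k]) ** 2
                if nc < new_cost[(q, r)]:
                    new_cost[(q, r)] = nc
                    choice[(q, r)] = p                   # remember b[k-1]
        back.append(choice)
        cost = new_cost
    # trace back the minimum-cost binary sequence
    state = min(cost, key=cost.get)      # (b[n-1], 0)
    bits = [state[0]]
    for k in range(n - 1, 0, -1):
        p = back[k][state]               # recovers b[k-1]
        bits.append(p)
        state = (p, state[0])
    bits.reverse()
    return np.array(bits)
```

For a constant mid-gray input the minimizer is an alternating pattern, since the 3-tap filter then reproduces 0.5 exactly in the interior; constant black or white inputs are reproduced exactly. A practical version would replace the 3-tap filter with a calibrated eye model and insert a dot-overlap printer model between the bits and the filter, which enlarges the Viterbi state but leaves the dynamic program unchanged.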