Humans adjust gaze through eye, head, and body movements. As a consequence, certain stimulus properties are elevated near the center of gaze, but the relative contributions of eye-in-head and head-in-world movements to this selection process are unknown. Gaze-centered and head-centered videos, recorded with a wearable device (EyeSeeCam) during free exploration, were reanalyzed with respect to the responses of a face-detection algorithm. In line with earlier results on low-level features, face detections were found to be concentrated near the center of gaze. Comparing environments containing few versus many true faces showed that actual faces are centered by combined eye and head movements, whereas spurious face detections ("hallucinated faces") are centered primarily by head movements alone. This analysis suggests distinct contributions to gaze allocation: head-in-world movements induce a coarse bias in the distribution of features, which eye-in-head movements then refine.