We study a dynamic game in which a group of players attempts to coordinate on a desired, but only partially known, outcome, represented by an unknown state of the world. Agents' stage payoffs are given by a quadratic utility function that captures the kind of tradeoff exemplified by the Keynesian beauty contest: each agent's stage payoff is decreasing in the distance between her action and the unknown state, and also decreasing in the distance between her action and the average action taken by the other agents. Agents thus have an incentive to estimate the state correctly while coordinating with, and learning from, others. We show that myopic but Bayesian agents who repeatedly play this game and observe the actions of their neighbors in a connected network eventually succeed in coordinating on a single action. However, as we show through an example, the consensus action is not necessarily optimal given all the available information.
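The beauty-contest tradeoff described above can be sketched as a quadratic loss; the weight $\lambda$ and the notation below are illustrative assumptions rather than the paper's exact specification:

```latex
% Illustrative quadratic stage utility (notation assumed):
% agent i chooses action a_i, \theta is the unknown state,
% \bar{a}_{-i} is the average action of the other agents,
% and \lambda \in (0,1) weights the coordination motive.
u_i(a_i, a_{-i}, \theta)
  = -(1-\lambda)\,(a_i - \theta)^2
    - \lambda\,\bigl(a_i - \bar{a}_{-i}\bigr)^2
```

Under this form, both terms are decreasing in the respective distances, matching the description: a larger $\lambda$ strengthens the coordination motive, while a smaller $\lambda$ strengthens the motive to match the unknown state.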