Experimentation as a setting for learning demands adaptability on the part of the decision-making system. It is typically infeasible for agents to have complete a priori knowledge of an environment, their own dynamics, or the behavior of other agents. In order to achieve autonomy in robotic applications, learning must occur incrementally, and ideally as part of decision-making itself, by exploiting the underlying control system. Most artificial intelligence techniques are ill-suited to experimental settings because they either lack the ability to learn incrementally or do not provide information measures with which to guide their learning. This chapter examines the Koopman operator, its application in active learning, and its relationship to alternative learning techniques, such as Gaussian processes and kernel ridge regression. Additionally, examples are provided from a variety of experimental applications of the Koopman operator in active learning settings.
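To fix ideas before the chapter proper, the following is a minimal sketch of how a finite-dimensional Koopman operator can be estimated incrementally from snapshot data. It assumes identity observables and noise-free linear dynamics, in which case the least-squares estimate coincides with dynamic mode decomposition (DMD); the system matrix `A` and the data here are illustrative, not drawn from the chapter's experiments.

```python
import numpy as np

# Illustrative setup: linear dynamics x_{t+1} = A x_t with a known
# matrix A standing in for the unknown system to be learned.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])

# Collect snapshot pairs (x_t, x_{t+1}); columns are states.
X = rng.standard_normal((2, 50))   # states x_t
Y = A @ X                          # successors x_{t+1}

# Least-squares Koopman estimate over identity observables:
# K = Y X^+, the DMD estimator. With richer (nonlinear) observables,
# the same fit yields extended DMD.
K = Y @ np.linalg.pinv(X)

# In this noise-free linear case, K recovers A.
print(np.allclose(K, A, atol=1e-8))
```

Because the fit is a linear least-squares problem, it can be updated recursively as new snapshot pairs arrive, which is what makes this family of estimators attractive for the incremental, experiment-driven learning discussed above.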