In an era of emerging vehicle automation technologies and advanced traffic management strategies, traffic simulation has become an indispensable tool for giving agencies the insight they need to make adoption and implementation decisions. The importance of calibration in ensuring reliable simulation results cannot be overstated. Current practice calls for analysts to calibrate their analytical tools to a base (or existing) condition and then use those tools to predict the performance of a future condition. However, these future conditions often incorporate improvements that differ significantly from the base condition to which the analysis tool was calibrated. The resulting performance estimates can be untrustworthy or simply wrong. New approaches must be developed so that tools are calibrated to data reflective of the expected future condition. With these implications in mind, this study proposes a framework for the calibration of simulation models. The framework is built around four major components: scenarios, robustness, parameter libraries, and simulation agent trajectories. The role of each component is demonstrated in a case study. Guidelines based on the proposed framework could be applied in conjunction with existing software packages; however, future development of intermediate tools is recommended to improve efficiency and practicality. The ultimate result of using the framework will be more accurate and better qualified traffic analyses, leading to greater trust in analysis tools and improved transportation decision-making overall.