Models are an essential part of the scientist’s toolkit. By exploiting idealized, abstract surrogates, scientists achieve feats that would be unviable through more direct means. It is typically thought that models are employed in the face of complexity, and that they fulfil three functions. First, ‘fudging’: models are heuristic, providing computational and cognitive traction. Second, ‘fathoming’: models pick out explanatorily salient features of their targets. Third, ‘forecasting’: models eschew causal details in order to maximize predictive power. I argue that examining models in historical science (geology, paleontology, etc.) reveals new functions, and that these functions are warranted. Historical scientists sometimes face incomplete, fragmentary data sets, and typically have only indirect access to their targets via material remains. Models are used to mitigate these challenges, providing ‘virtual tests’ that discriminate between hypotheses in the face of impoverished data. Moreover, the continuities between such virtual tests and physical experiments should lead us to think this role is justified. Just like experiments, models allow us to control for variables, run repeated trials, explore the limits of variables, and construct instances. And, just like experiments, such models must answer to internal and external validity. I argue that these similarities give grounds for taking (some) models and (some) experiments to be epistemically equivalent; that is, if we think experiments provide empirical evidence for hypotheses, then we should think the same of models. In short, historical scientists use models because they have too little, rather than too much, information, and they use them for primarily epistemic purposes. Moreover, this use is justified.