Rigorous evaluation of predictive models in the clinical space is an ever-growing need for the Learning Health System. Healthcare institutions are attempting to move away from a rules-based approach to clinical care toward a more data-driven model of care. To achieve this, machine learning algorithms are being developed to aid physicians in clinical decision making. However, a key limitation in the adoption and widespread deployment of these algorithms into clinical practice is the lack of rigorous assessments and clear evaluation standards. A framework for the systematic benchmarking and evaluation of biomedical algorithms, assessed prospectively in a manner that mimics a clinical environment, is needed to ensure patient safety and clinical efficacy.
We are tackling this problem by focusing on a specific prediction task: patient mortality. Because it is well studied and relatively predictable, patient mortality serves as a well-defined benchmarking problem for assessing predictive models. Mortality models are also widely adopted and implemented at healthcare institutions and Clinical and Translational Science Award (CTSA) hubs, a feature we hope will stimulate participation from a wide range of institutions. We will ask participants in this DREAM Challenge to predict the future mortality status of currently living patients within our Observational Medical Outcomes Partnership (OMOP) repository. Once predictions are submitted, we will evaluate model performance against a gold-standard benchmark dataset.
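To make the evaluation step concrete, the sketch below scores a set of submitted mortality risk predictions against gold-standard labels using AUROC, a metric commonly used for binary clinical outcomes. The function name, the metric choice, and the toy data are illustrative assumptions, not the challenge's actual scoring code.

```python
# Hypothetical sketch: scoring submitted risk predictions against a
# gold-standard mortality label set. Not the challenge's official scorer.

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic."""
    # Rank all scores ascending, averaging ranks over tied groups.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy gold standard (1 = deceased within the prediction window) and
# hypothetical submitted risk scores for eight patients.
gold = [1, 0, 1, 0, 0, 1, 0, 0]
pred = [0.9, 0.2, 0.45, 0.4, 0.1, 0.8, 0.3, 0.6]
print(round(auroc(gold, pred), 3))  # → 0.933
```

In practice a prospective evaluation would also consider calibration and clinically relevant operating points, not discrimination alone.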