Corticosteroid Randomization after Significant Head Injury and International Mission for Prognosis and Clinical Trials in Traumatic Brain Injury Models Compared with a Machine Learning-Based Predictive Model from Tanzania

Citation: 
Cyrus Elahi, Syed M. Adil, Francis Sakita, Blandina T. Mmbaga, Thiago Augusto Hernandes Rocha, Anthony Fuller, Michael M. Haglund, João Ricardo Nickenig Vissoci, and Catherine Staton
Publication year: 
2021

Hospitals in low- and middle-income countries (LMICs) could benefit from decision support technologies to reduce time to triage, diagnosis, and surgery for patients with traumatic brain injury (TBI). Corticosteroid Randomization after Significant Head Injury (CRASH) and International Mission for Prognosis and Clinical Trials in Traumatic Brain Injury (IMPACT) are robust examples of TBI prognostic models, although they have yet to be validated in Sub-Saharan Africa (SSA). Moreover, machine learning and improved data quality in LMICs provide an opportunity to develop context-specific, and potentially more accurate, prognostic models. We aim to externally validate CRASH and IMPACT on our TBI registry and compare their performance with that of a locally derived model (from the Kilimanjaro Christian Medical Center [KCMC]). We developed a machine learning-based prognostic model from a TBI registry collected at a regional referral hospital in Moshi, Tanzania. We also used the core CRASH and IMPACT online risk calculators to generate risk scores for each patient. We compared the discrimination (area under the curve [AUC]) and calibration before and after Platt scaling (Brier score, Hosmer–Lemeshow test, and calibration plots) for CRASH, IMPACT, and the KCMC model. The outcome of interest was unfavorable in-hospital outcome, defined as a Glasgow Outcome Scale score of 1–3. There were 2972 patients included in the TBI registry, of whom 11% had an unfavorable outcome. The AUCs for the KCMC model, CRASH, and IMPACT were 0.919, 0.876, and 0.821, respectively. Prior to Platt scaling, CRASH was the best calibrated model (χ2 = 68.1), followed by IMPACT (χ2 = 380.9) and KCMC (χ2 = 1025.6). We provide the first SSA validation of the core CRASH and IMPACT models. The KCMC model had better discrimination than either of these. CRASH had the best calibration, although all model predictions could be successfully calibrated.
The top-performing models, KCMC and CRASH, were both developed using LMIC data, suggesting that locally derived models may outperform models imported from different contexts of care. Further work is needed to externally validate the KCMC model.
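As an illustration of the recalibration step described in the abstract (this is not the authors' code, and the data below are synthetic), Platt scaling fits a one-feature logistic regression on a model's raw risk scores to map them onto calibrated probabilities. Because the mapping is monotone, discrimination (AUC) is preserved while calibration (e.g., the Brier score) can improve. The prevalence (~11% unfavorable outcomes) mirrors the registry; the score distribution is an invented stand-in for a miscalibrated external model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

# Synthetic stand-in for a TBI registry: 1 = unfavorable outcome (GOS 1-3),
# with roughly the 11% prevalence reported in the abstract.
n = 2000
y = (rng.random(n) < 0.11).astype(int)

# Hypothetical risk scores from an external model: discriminative
# (higher for unfavorable outcomes) but systematically overpredicting risk.
raw = np.clip(0.2 + 0.3 * y + rng.normal(0.0, 0.15, n), 0.01, 0.99)

# Platt scaling: logistic regression on the logit of the raw scores,
# producing recalibrated probabilities.
logit = np.log(raw / (1.0 - raw)).reshape(-1, 1)
platt = LogisticRegression().fit(logit, y)
calibrated = platt.predict_proba(logit)[:, 1]

# Discrimination is unchanged by this monotone transform;
# calibration (Brier score, lower is better) improves.
print("AUC  raw vs calibrated:",
      roc_auc_score(y, raw), roc_auc_score(y, calibrated))
print("Brier raw vs calibrated:",
      brier_score_loss(y, raw), brier_score_loss(y, calibrated))
```

This is why all three models' predictions could be "successfully calibrated" without changing their AUC ranking: Platt scaling adjusts the probability scale, not the ordering of patients by risk.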