A comparison between predictive sepsis models: an automated algorithm in the Electronic Health Record versus Artificial Intelligence (AI) and Machine Learning (ML) techniques
Marcio Borges, (Palma, Spain), Antonia Socias (Palma, Spain), Alberto Castillo (Palma, Spain), Maria Aranda (Palma, Spain), Cristina Pruenza (Madrid, Spain), Joana Mena (Palma, Spain), Victor Estrada (Palma de Mallorca, Spain), Julia Diaz (Madrid, Spain)
Introduction

The early detection of sepsis (SE) and septic shock (SS) with automated electronic models (AEM) is problematic due to the high percentage of false positives (FP), which generates alert fatigue among clinicians (1).

Objectives

To compare a published AEM in routine use in our Electronic Health Record (EHR) (2) with models generated using AI and ML techniques for the detection of SE/SS.

Methods

Retrospective observational study comparing predictive models for the detection of SE/SS in patients aged 14 years or older across all hospital areas (ED, wards and ICU). Cases were classified according to the Sepsis-2 definition, as this was the definition in use during the study period. The AEM in routine use is based on 15 clinical and analytical variables with different weights and a discrimination score (2). The ML-based models, in contrast, drew on different structured and unstructured (free-text) databases of the EHR. All cases were evaluated and validated prospectively by the Multidisciplinary Sepsis Unit. The Mann-Whitney-Wilcoxon test, together with wrapper techniques, was used to identify statistically significant clinical and analytical variables, with a significance level of 0.01. To extract relevant unstructured data, Natural Language Processing (NLP) techniques such as the Dunning test were applied. The total sample was randomly divided into two groups: 5/7 of the records constituted the training set and the remaining 2/7 formed the test set.
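The following is a minimal sketch, not the authors' actual pipeline, of the variable-screening and data-splitting steps described above. It assumes a pandas DataFrame `df` with one row per patient, a list of candidate numeric features and a binary `sepsis` label; all names are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.model_selection import train_test_split

def screen_variables(df: pd.DataFrame, features: list[str],
                     label: str = "sepsis", alpha: float = 0.01) -> list[str]:
    """Keep features whose distributions differ between SE/SS and
    non-SE/SS patients by the Mann-Whitney-Wilcoxon test (p < alpha)."""
    selected = []
    for col in features:
        pos = df.loc[df[label] == 1, col].dropna()
        neg = df.loc[df[label] == 0, col].dropna()
        if len(pos) and len(neg):
            _, p = mannwhitneyu(pos, neg, alternative="two-sided")
            if p < alpha:
                selected.append(col)
    return selected

def split_5_7(df: pd.DataFrame, label: str = "sepsis", seed: int = 42):
    """Randomly assign 5/7 of records to training and 2/7 to testing,
    stratified on the outcome label."""
    return train_test_split(df, train_size=5 / 7,
                            random_state=seed, stratify=df[label])
```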

Results

From January 2014 to October 2018, we included 218,562 patients (mean age 67.5 years, 57% male), of whom 9,301 (4.6%) had SE/SS. The ML models included 244 structured and unstructured variables associated with SE/SS. We identified 75 clinical and analytical variables in our ML models compared to 15 in the AEM (p<0.01). Interestingly, three variables normally associated with sepsis, the Glasgow Coma Score (GCS), mean arterial blood pressure (MAP) and platelet count, were not significantly related to SE/SS in the ML predictive models. Neither MAP nor platelet count was significantly associated in the AEM predictive model, which does not include the GCS in its score. There were 28,294 patients with AEM alerts, of which 62% were false positive (FP) cases and 12% were false negatives (FN). We obtained three ML models; the best (named BISEPRO) yielded 11.2% FP and 0.9% FN cases compared with the AEM (p=0.001 in both analyses). The AUC-ROC, sensitivity and specificity for detecting SE/SS of the best ML model were 0.95 (95% CI 0.94-0.96), 0.94 and 0.83, compared with 0.86 (95% CI 0.83-0.88), 0.78 and 0.68 for the AEM, respectively. All three ML models were significantly superior to the AEM, both in detecting SE/SS and in producing fewer FP and FN cases.
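As an illustration of how the discrimination metrics reported above (AUC-ROC, sensitivity, specificity, FP and FN counts) could be computed on the held-out 2/7 test set, here is a minimal sketch; `y_true` and `y_score` are hypothetical arrays of ground-truth labels and model outputs, and the 0.5 alert threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5) -> dict:
    """Compute AUC-ROC plus sensitivity/specificity and FP/FN counts
    at a fixed alerting threshold."""
    auc = roc_auc_score(y_true, y_score)
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "auc_roc": auc,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "false_positives": int(fp),
        "false_negatives": int(fn),
    }
```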

Conclusion

ML predictive models were significantly superior to the traditional automated model for SE/SS detection, reducing FP by more than 50%.

  • 1. F. J. Candel, M. Borges, et al., “Current aspects in sepsis approach. Turning things around,” Rev Esp Quimioter, vol. 31, no. 4, pp. 298-315, Aug 2018.
  • 2. B. de Dios, M. Borges, et al., “Computerised sepsis protocol management. Description of an early warning system,” Enferm Infecc Microbiol Clin, vol. 36, no. 2, pp. 84-90, Feb 2018.
  • Co-funded by MSD and IDISBA