Contextualized clinical anomaly detection with explainable AI and patient modeling
Abstract
This study aims to reduce alarm fatigue and improve the clinical relevance of alerts in intensive care by combining sequential modeling, patient contextualization, explainable artificial intelligence (XAI), and probability calibration. To this end, we leverage the adult cohorts from MIMIC-III/IV, segmented into four-hour windows, explicitly handling missing data and constructing a context vector that integrates demographics, comorbidities, and therapeutic interventions. The approach relies on a tabular autoencoder, a long short-term memory (LSTM) autoencoder, and a transformer, complemented by an adjustment layer based on auditable clinical rules, local explanations (LIME/SHAP), and post-hoc calibration via temperature scaling. Evaluation covers the areas under the receiver operating characteristic (ROC) and precision–recall (PR) curves, F1-score, sensitivity and specificity, calibration metrics (expected calibration error (ECE), Brier score), alert burden, ablation studies, robustness tests, and subgroup fairness analyses. Across all experiments, the complete model (+Context+XAI+Calibration) outperforms the baselines in AUPRC and F1, reduces alert burden, and improves calibration while providing understandable explanations. Specifically, the proposed model improves ROC AUC from 0.74 to 0.89 and reduces alert burden by approximately one third compared to clinical thresholds.
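As a minimal sketch of the post-hoc calibration step mentioned in the abstract: temperature scaling divides the model's logits by a single scalar T fitted on a held-out validation set, and the ECE measures the remaining gap between predicted confidence and observed frequency. The function names and the grid-search fitting below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature T minimizing binary negative log-likelihood
    on a held-out validation set (post-hoc temperature scaling)."""
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        p = np.clip(sigmoid(logits / t), 1e-7, 1.0 - 1e-7)
        nll = -np.mean(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    |mean confidence - empirical frequency| gap, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece
```

An overconfident anomaly scorer (logits too large in magnitude) yields a fitted T greater than 1, shrinking probabilities toward 0.5 and lowering the ECE without changing the ROC/PR ranking of alerts.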
DOI: http://doi.org/10.11591/ijeecs.v41.i2.pp614-623

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES).