
Volume 3 Supplement 1


Semi automated adjudication of vital sign alerts in step-down units


Introduction

Machine learning (ML) has shown predictive utility in analyzing vital sign (VS) data collected from physiologically unstable monitored patients. Training an ML model usually requires a sizable amount of labeled ground-truth data, typically obtained via laborious manual chart reviews by expert clinicians.


Objectives

To reduce the effort required of clinicians adjudicating vital sign alerts as true alerts or artifacts. The approach can also enable real-time filtering of artifacts in vital sign monitoring systems.


Methods

Noninvasive VS data, including ECG-derived heart rate (HR), respiratory rate (RR), systolic and diastolic blood pressure (BP), and pulse oxygen saturation (SpO2), are monitored to issue alerts whenever any VS exceeds its pre-set stability threshold [1]. Statistical features were extracted independently from each raw VS stream during the alert window. Two experts independently annotated 40 such alerts using only informative low-dimensional projections of the data onto these features, automatically selected by our ML system. The same experts then adjudicated the same alerts using the available chart time series, and we summarized the results to assess the consistency of adjudication. Separately, 260 such alerts were adjudicated by a committee of 4 experts using the framework described in [1].
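As a rough illustration of the per-stream feature extraction described above, the following sketch computes simple summary statistics over one alert window and concatenates them across streams into a single feature vector. The function names, the particular feature set, and the stream labels are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def window_features(samples: np.ndarray) -> dict:
    """Low-dimensional statistical summary of one raw VS stream
    over the alert window (illustrative feature set)."""
    diffs = np.diff(samples)
    return {
        "mean": float(np.mean(samples)),
        "std": float(np.std(samples)),
        "min": float(np.min(samples)),
        "max": float(np.max(samples)),
        "range": float(np.max(samples) - np.min(samples)),
        # Large sample-to-sample jumps often accompany sensor artifacts.
        "max_abs_step": float(np.max(np.abs(diffs))) if diffs.size else 0.0,
    }

def alert_features(streams: dict) -> np.ndarray:
    """One feature vector per alert: per-stream summaries, concatenated.
    Each stream is processed independently, as in the text."""
    feats = []
    for name in sorted(streams):  # e.g. "HR", "RR", "SpO2", ...
        feats.extend(window_features(np.asarray(streams[name], dtype=float)).values())
    return np.array(feats)
```

Projections of such feature vectors onto informative low-dimensional subspaces (pairs or triples of features) are then what the experts inspect in place of the full chart time series.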


Results

Figure 1 shows an example of a clinical alert used in the study: the VS chart (top) and the low-dimensional projection recommended by the ML system for adjudicating this alert (bottom). In this projection, the alert can be confidently labeled as a true alert, which was later confirmed via chart review of the VS time series. Table 1 shows the outcome of the expert annotation: 23 alerts were adjudicated correctly using the low-dimensional projections alone, without the need for chart review; 5 required chart review; and the expert clinicians disagreed on 10. In 2 of the cases, the intuition behind the ML-chosen projection did not agree with the chart review. Use of the ML model substantially reduced the need for manual chart reviews and cut the overall data annotation effort approximately twofold. The ML system was trained on the 260 labeled samples and tested on a separate set of alerts; it adjudicated 75% of them with high confidence and identified 32% of them as likely artifacts.
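The confidence-thresholded behavior described above — auto-adjudicating only alerts the model is sure about and deferring the rest to clinicians — can be sketched as follows. The model class, the 0.9 threshold, and the deferral convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SelectiveAdjudicator:
    """Auto-adjudicates alerts only above a confidence threshold;
    everything else is deferred to manual chart review."""

    def __init__(self, threshold: float = 0.9):
        self.model = RandomForestClassifier(n_estimators=100, random_state=0)
        self.threshold = threshold

    def fit(self, X, y):
        # y: 1 = true alert, 0 = artifact (labels from expert adjudication)
        self.model.fit(X, y)
        return self

    def adjudicate(self, X):
        proba = self.model.predict_proba(X)
        conf = proba.max(axis=1)
        labels = proba.argmax(axis=1)
        # -1 marks low-confidence alerts deferred to clinicians.
        return np.where(conf >= self.threshold, labels, -1)
```

The fraction of alerts labeled -1 is the residual manual workload; raising the threshold trades automation coverage for labeling safety.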

Figure 1 SpO2 alert: chart time series and low-dimensional model projection.

Table 1 Expert annotation study.


Conclusions

Effective training of ML-based automatic alert adjudication systems can be achieved with a substantial reduction of the effort required of expert clinicians. The resulting models can be used to confidently identify a significant percentage of artifactual alerts.

Grant Acknowledgment

NIH NINR R01NR013912; NSF1320347.


References

1. Wang, et al: Intensive Care Med. 2014, 40: S289-



Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.



Cite this article

Fiterau, M., Dubrawski, A., Wang, D. et al. Semi automated adjudication of vital sign alerts in step-down units. ICMx 3 (Suppl 1), A769 (2015).
