Big data in the ICU - critical care databases and decision support

The deluge of data produced during medical care has typically been under-utilized or simply wasted. In the era of paper, this was explicable. However, in spite of nearly three decades of computerization, medical data remains difficult to access and organize, let alone use. This gap is especially dramatic in the intensive care unit (ICU), where the complexity of illness and the new possibilities unveiled by the unremitting march of technology exceed typical cognitive capabilities. This, in turn, highlights the critical role of data support in evidence-based healthcare decision making.


From structured analysis to personalized treatment
The case for Big Data in the ICU, an environment that is by definition both critical and intense, is self-evident. One of the first arguments in its favour is that new ICU patients usually require extremely close monitoring, a highly data-intensive process. The accumulated data, in turn, can cause information overload for the physicians providing care.
Some experts foresee using Big Data in the ICU for structured analysis of complex decisions and for quantifying the expected benefits versus harms of different treatment options. Although such tools have not been well received by some clinicians, they have considerable potential for personalizing treatment. Today, ICU patients in particular can be provided with interventions that sustain life in spite of severe organ dysfunction. However, those treatments can also result in prolonged suffering, with no guarantee of outcomes in line with patient preferences. Decision analysis based on Big Data might enable such concerns to be addressed.

Reducing uncertainty
There are several other practical drivers for Big Data in the ICU. Very often, ICU decisions have to be made with a high degree of uncertainty, and clinical staff may have only minutes or seconds to make them. These decisions could cover issues such as identifying patient sub-populations that experience significant divergences in efficacy, or unanticipated delayed adverse effects from drug treatments. At present, ICU practices vary due to either an absence of medical knowledge or conflicting opinions. Given the time constraints, therapeutic decisions and choices depend largely on clinician preference and local practice patterns, leading to significant variability in quality of care.

Study shows scale of challenge in ICU interventions
As it stands, however, a large number of ICU interventions are not based on proven evidence or standardized guidelines.
In 2008, a team at Erasmus Hospital in Brussels, Belgium, conducted a systematic review of 72 multi-centre randomized controlled trials evaluating the effect of ICU interventions on mortality and found that just 10 (about one in seven) showed benefit; 55 had no measurable effect, while 7 (about one in ten) were actually harmful.

Organizing critical care
Apologists for the limited use of Big Data in the ICU point out that medicine can be as much art as science, and that standardized protocols and best practices are not always sufficiently flexible. Such flexibility can indeed be imperative in an ICU, where decisions are subject to exceptional complexity and variability in patient status and clinical situation.

Nevertheless, a study on the concept of ‘organized care’ showed that applying W. Edwards Deming’s process management theory to manage variation in providing care can yield huge savings to the healthcare system. The study, titled ‘How Intermountain trimmed healthcare costs through robust quality improvement efforts’, was published in the June 2011 issue of ‘Health Affairs’. Its authors estimated that such efforts could save the US healthcare system about USD 3.5 billion (€3 billion) a year.
As a result, it may well be argued that variability in ICU practices is the result of a failure to research and establish evidence for a particular approach, in spite of the fact that both the data and the technology exist.

Scoring systems
Typical Big Data deployments in the ICU would be focused on the most expensive or high-risk parts of current clinical practice in critical care, and cover predictive alerts and analytics for complex case patients, decompensation and adverse events, intervention optimization for multiple organ involvement as well as triaging and readmissions.
Progress has already been made by using clinical data to infer high-level information in ICU scoring systems. These are largely used to compare ICU performance in terms of outcomes.

Two of the best known scoring systems are APACHE (Acute Physiology and Chronic Health Evaluation) and SAPS (Simplified Acute Physiology Score).
APACHE was designed to provide morbidity scores for a patient and to help decide on a specific therapy. Methods to derive a predicted mortality from this score exist, but they are not yet sufficiently well defined or precise.
SAPS was originally designed to predict mortality for benchmarking purposes. It has since been updated to provide a predicted mortality score for a particular patient or patient group by calibrating against recorded mortalities in an existing set of patients. SAPS can be used to compare the evolution of an ICU's performance over time, or to compare treatment at different ICUs.
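The score-to-mortality conversion described above can be illustrated with the logistic formula published with the original SAPS II model. The sketch below uses the 1993 coefficients; real deployments recalibrate these against locally recorded mortalities, as the text notes:

```python
import math

def saps2_predicted_mortality(saps2_score: int) -> float:
    """Convert a SAPS II score into a predicted hospital mortality.

    Coefficients are from the original SAPS II logistic regression
    (Le Gall et al., 1993); production systems recalibrate them
    against an institution's own recorded outcomes.
    """
    logit = -7.7631 + 0.0737 * saps2_score + 0.9971 * math.log(saps2_score + 1)
    return math.exp(logit) / (1 + math.exp(logit))

# A score of 29 corresponds to roughly 10% predicted mortality.
print(round(saps2_predicted_mortality(29), 3))
```

Summing these per-patient probabilities over a cohort gives the "expected deaths" denominator used in benchmarking comparisons between units.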

Variety of ICU databases in development
At present, ICU databases are being developed by hospitals/professional societies, academic institutions and medical equipment vendors. They structure and aggregate demographic data (age and sex of patient, condition or disease, co-morbidities, length of stay, date and time of discharge, mortality, readmission etc.) and provide such information on a hospital-specific basis. Rather than supporting decisions or standardizing protocols and practice, such databases simply provide monitoring and selective comparisons of ICU patient outcomes and costs – over time, or by region. However, there are new efforts to go further and build decision support tools.

Non-commercial databases

One good example of a non-commercial database is the Adult Patient Database (APD) from the Australia and New Zealand Intensive Care Society (ANZICS). It contains data from over 1.3 million patient episodes and is considered one of the largest single datasets on intensive care in the world. The database collects episodes from over 140 ICUs in Australia and New Zealand on a quarterly basis, and is used to benchmark performance of individual units.
The Danish Intensive Care Database (DID) is another non-commercial database, with data on over 350,000 ICU stays. DID took a major step by introducing the SAPS II scoring indicator in 2010, although its registration remains less than 80% complete. DID quality indicators include readmission to the ICU within 48 hours and standardized mortality ratios for death within 30 days of admission, using case-mix adjustment (age, sex, co-morbidity level and SAPS). Process indicators consist of out-of-hours discharge and transfer to other ICUs for capacity reasons.
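A standardized mortality ratio of the kind DID reports compares observed deaths with the deaths expected from each patient's case-mix-adjusted risk. A minimal illustration with hypothetical numbers:

```python
def standardized_mortality_ratio(observed_deaths: int,
                                 predicted_mortalities: list) -> float:
    """SMR = observed deaths / expected deaths, where expected deaths
    is the sum of each patient's case-mix-adjusted predicted mortality
    (e.g. from a SAPS-based model)."""
    expected = sum(predicted_mortalities)
    return observed_deaths / expected

# Hypothetical cohort of five ICU stays with model-predicted mortalities.
predicted = [0.10, 0.25, 0.05, 0.40, 0.20]  # expected deaths = 1.0
print(standardized_mortality_ratio(2, predicted))  # SMR > 1: more deaths than expected
```

An SMR near 1 suggests outcomes in line with case mix; values persistently above 1 flag a unit for closer review.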

Commercial databases

ICU databases are also being developed by medical technology vendors for commercial use. Cerner has created APACHE Outcomes, which has gathered physiologic and laboratory measurements from over 1 million patient records across 105 ICUs since 2010. Although large, it still contains incomplete physiologic and laboratory measurements, and does not offer waveform data or provider notes.
Another commercial database known as eICU is provided by Philips. This telemedicine-intensive care support provider archives data from participating ICUs and is available to qualified researchers via the eICU Research Institute. The database size is estimated at over 1.5 million ICU stays, and it is reported to be adding 400,000 patient records per year from about 180 subscribing hospitals. As with APACHE Outcomes, eICU does not archive waveform data. However, provider notes are captured if entered into the software.


In contrast to commercial databases like eICU and APACHE Outcomes, MIMIC (Multiparameter Intelligent Monitoring in Intensive Care) is an open, public database with a host of clinical data from ICUs: vital signs, medications, laboratory measurements, observations and notes, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more.
Currently in its third generation, MIMIC provides a unique research resource with data from about 40,000 critical care patients. Hundreds of researchers from over 30 countries have been given free access under data use agreements. In addition, several thousand students, educators and investigators have used MIMIC’s waveform data, which is freely available to all.

MIMIC is the fruit of a collaboration, dating to the early 2000s, between Beth Israel Deaconess Medical Center (a teaching hospital of Harvard Medical School), the Laboratory of Computational Physiology at the Massachusetts Institute of Technology (MIT), and Philips Healthcare, with support provided by the National Institute of Biomedical Imaging and Bioengineering.
MIMIC was launched as a research project to establish a critical care alert and display (CCAD) system and assist decision support in the ICU, built on a large temporal database of ICU patient data. The system flagged abnormal clinical values as clinician alerts via a user interface designed for efficient and ergonomic display of data. Within a short time of launch, it was producing over 50 alerts per patient per ICU day.
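The alerting idea can be sketched as simple range checks on incoming vital signs. This is not the actual CCAD logic (the thresholds and signal names below are hypothetical), but it illustrates both the mechanism and why naive, context-free rules produce the alert volumes mentioned above:

```python
# Hypothetical normal ranges; real alerting rules are tuned per signal
# and per patient context, which simple fixed thresholds like these lack.
NORMAL_RANGES = {
    "heart_rate": (60, 100),              # beats/min
    "spo2": (94, 100),                    # %
    "mean_arterial_pressure": (65, 110),  # mmHg
}

def check_vitals(observation: dict) -> list:
    """Return an alert message for each value outside its normal range."""
    alerts = []
    for signal, value in observation.items():
        low, high = NORMAL_RANGES[signal]
        if not low <= value <= high:
            alerts.append(f"ALERT: {signal}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"heart_rate": 130, "spo2": 91, "mean_arterial_pressure": 80}))
```

Applied to every monitored signal at per-minute resolution, rules of this kind fire constantly, which is exactly the alert-fatigue problem that motivated more data-driven approaches.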

Unique capability has promise for modelling
The MIMIC database is considered unique for its capture of structured and extremely granular data, including per-minute changes in physiologic signals as well as time-stamped treatments with dosages. This permits modelling of individual responses to clinical interventions, which in turn allows for improved risk-benefit calculation and prediction of outcomes.
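What this granularity enables can be sketched with synthetic data: align a time-stamped dose against per-minute signal samples and compare the signal before and after. The signal shape and dose below are invented for illustration, not drawn from MIMIC:

```python
from statistics import mean

# Synthetic per-minute heart-rate samples: (minute, beats/min).
# Rate holds at 120, then falls 2 beats/min per minute after minute 10.
heart_rate = [(t, 120 - 2 * max(0, t - 10)) for t in range(0, 21)]
dose_minute = 10  # hypothetical time-stamped dose of a rate-control drug

def response_to_dose(samples, dose_t, window=10):
    """Mean change in a signal after a time-stamped treatment,
    comparing equal windows before and after the dose."""
    before = [v for t, v in samples if dose_t - window <= t < dose_t]
    after = [v for t, v in samples if dose_t < t <= dose_t + window]
    return mean(after) - mean(before)

print(response_to_dose(heart_rate, dose_minute))  # negative: rate fell after the dose
```

Fitting such per-patient response estimates across a large cohort is what turns raw granular records into the risk-benefit models the text describes.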
Some of these models might be optimal to develop effective early triage in terms of level of care and monitoring, as well as the allotment of scarce human and technical resources. In turn, such tools could assist emergency departments facing limitations in ICU resources.

Recent observational studies on the MIMIC ICU database have yielded several findings of interest. These cover areas such as the long-term outcomes of minor elevations in troponin, heterogeneity in the impact of red blood cell transfusion, the optimization of heparin dosing to minimize the chance of under- or over-anticoagulation, and the impact of selective serotonin reuptake inhibitors (SSRI) on mortality. Researchers are also studying areas of potentially great impact, such as determining the proper duration for a trial of aggressive ICU care among high-risk patients.

International expansion
The MIMIC database is being used to design and develop decision support tools. Outcomes of concern are not limited to mortality or length of stay, but will instead be extended to include factors such as the probability of discharge to a nursing facility and expected duration of stay there, as well as the need for procedures such as hemodialysis or repeat hospitalization.
In spite of its clear utility, MIMIC is currently limited because its data is derived entirely from just one institution, namely Beth Israel Deaconess, and does not therefore account for practice variation across ICUs. There are however plans to expand the project to include data from ICUs in Britain and France.
