Tracking change in quality improvement efforts requires establishing clear data definitions, using a standardized method of data collection, and displaying the data in a manner that generates meaningful information. To determine whether improvement has been made, one must first establish that a change has occurred and that it has led to improvement. Thoughtful measurement provides a way to do this: use measures with clearly defined parameters and a standardized method of collecting the data, and, if multiple staff are collecting data, ensure consistency among the data collectors. Once this is done, display the data in a manner that accurately tells the story and offers useful information, such as through graphs, figures, and tables. This article describes data definitions, collection, display with run charts, and interpretation in further detail, applied to an example of a quality improvement initiative involving quantitative blood loss (QBL).
In evaluating data, the following aspects should be defined:
- What is being measured
- How the metric is measured
- What team member will be responsible for measuring it
- When the measurement will occur (Kelly et al., 2018)
There may be many staff collecting data, and having clear data definitions ensures consistency in collection and measurement. For example, let’s take a quality improvement initiative involving integrating quantitative blood loss (QBL) into workflow after vaginal and cesarean delivery and answer the questions above.
- What is being measured: compliance with all steps in the established QBL process after vaginal and cesarean delivery.
- How the metric is measured: the number of audited deliveries in which all steps of the process were followed (numerator) divided by the total number of deliveries audited (denominator), expressed as a percentage. Auditors will observe the process in real time and complete paper audits.
- What team member will be responsible for measuring it: charge nurses
- When the measurement will occur: on all shifts
In addition, each charge nurse will complete 10 audits per month and hand these to the nurse manager, with a target of 30 audits/month.
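As a sketch, the monthly compliance metric described above could be computed as follows. The audit records and the `all_steps_followed` field are hypothetical, standing in for whatever the paper audit tool captures:

```python
# Sketch: compute monthly QBL compliance from audit results.
# Each audit records whether ALL steps of the QBL process were followed.
# (The record structure is illustrative, not from an actual audit tool.)

def compliance_percentage(audits):
    """Percent of audited deliveries in which every QBL step was completed."""
    if not audits:
        return None  # no audits collected this month
    compliant = sum(1 for a in audits if a["all_steps_followed"])
    return round(100 * compliant / len(audits), 1)

# Example month: 30 audits (target = 3 charge nurses x 10 audits each),
# of which 20 followed every step of the process
month_audits = ([{"all_steps_followed": True}] * 20
                + [{"all_steps_followed": False}] * 10)
print(compliance_percentage(month_audits))  # 66.7
```

Each month's percentage then becomes one data point on the run chart discussed below.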
It is important that data collection tools are both valid and reliable, meaning they measure what they are intended to measure and do so consistently. To establish validity, the charge nurses should receive competency training to ensure they are accurately capturing the data (Kelly et al., 2018). For instance, auditors should understand what counts as a miss and what counts as 100% compliance. To establish interrater reliability, the charge nurses would be observed by a trainer to ensure that everyone is auditing in the same manner according to the established process.
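One simple way to quantify the interrater check described above is percent agreement between the trainer and a charge nurse scoring the same deliveries. This is a minimal sketch of that idea; a formal reliability study might instead use a statistic such as Cohen's kappa:

```python
# Sketch: percent agreement between a trainer and a charge nurse who
# independently audit the same deliveries (True = compliant, False = miss).

def percent_agreement(trainer_marks, nurse_marks):
    """Share of co-observed audits where both raters scored the same."""
    assert len(trainer_marks) == len(nurse_marks)
    matches = sum(t == n for t, n in zip(trainer_marks, nurse_marks))
    return round(100 * matches / len(trainer_marks), 1)

# Hypothetical scores for 10 co-observed deliveries
trainer = [True, True, False, True, True, False, True, True, True, False]
nurse   = [True, True, False, True, False, False, True, True, True, False]
print(percent_agreement(trainer, nurse))  # 90.0
```

A low agreement score would signal that the raters interpret the audit criteria differently and need retraining before their data are pooled.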
A patient is admitted in active labor for her second baby at 39 weeks and 4 days gestation. The labor course and delivery are rapid and uneventful. Shortly after delivery, the provider notices brisk bleeding from a vaginal sidewall laceration, described as a “pumper,” which is quickly recognized and sutured. Shortly thereafter, the under-buttocks drape is noted to hold almost a liter of fluid. At the time, the provider believes this to be mostly amniotic fluid and urine with a small amount of blood, so an estimate of blood loss is used.
The standard QBL process was not followed. In that process, a baseline fluid level is established after the birth of the baby, and the level after completion of the repair is used to establish one component of blood loss (the weight of sponges and blood-soaked pads and linens being the other component in vaginal delivery). It was not until the next day, while rounding on this patient postpartum and reviewing hemoglobin and hematocrit (H&H) values, that the same provider realized the fluid in the under-buttocks drape was in fact blood loss, because the patient had experienced a significant drop in H&H values.
The charge nurse observing the QBL process would have marked this as a miss because the established process was not followed. What a great story to share with staff on the importance of following the established process, and how quickly blood loss can occur in these atypical cases.
Run charts display data over time and are one way to measure change. Evaluating data over time helps to determine if the change has occurred and if it is sustained over time. Run charts do not provide true statistical analysis of data, but are useful in providing a quick analysis and visual review of the data. The run chart in Figure 1 shows the outcome variable, percentage compliance with the QBL process, depicted on the y-axis (vertical) and time period on the x-axis (horizontal).
Figure 1: Birth Center A
The median, or middle value, is used as a benchmark and is depicted as the orange center line in this example. Data are analyzed in reference to the median, which is used because, unlike the mean, it is not sensitive to outlier data points. It is calculated by arranging the values from low to high and finding the middle point (Kelly et al., 2018). In this example, there are 12 time points (48, 50, 52, 54, 60, 65, 68, 75, 80, 84, 86, 90); with an even number of points, the median is calculated by taking the 6th and 7th values, adding them together, and dividing by 2. In this case, the median is (65 + 68)/2 = 66.5%. In the figure above, the first 5 months had compliance below the median, followed by overall improvement except for a small dip in the month of September.
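The median calculation above can be reproduced with Python's standard library, which applies the same rule of averaging the two middle values when the number of points is even:

```python
# The median from the worked example, using Python's statistics module.
from statistics import median

monthly_compliance = [48, 50, 52, 54, 60, 65, 68, 75, 80, 84, 86, 90]

# With 12 points, the median averages the 6th and 7th sorted values:
# (65 + 68) / 2 = 66.5
print(median(monthly_compliance))  # 66.5
```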
Interpretation of run charts is based on where the data points fall in reference to the median. If they are consistently close to the median, there is minimal variation in the process. If they vary widely above and below the median, there is greater variation in the process. In addition, a data point significantly higher or lower than the median indicates an outlier, or atypical finding (Kelly et al., 2018). Common and special cause variation will also be explained. Let’s review a few examples that apply these concepts.
Figure 2: Birth Center B- data points close to the median, reflecting minimal variation
This run chart shows data points consistently around the median (66%), reflecting a QBL process that has minimal variation and is in control. This is also referred to as common cause variation, which occurs when there is an expected/normal amount of variation within a process. While having a tightly controlled process and minimal variation is ideal, it is also important to consider the goal of the project. For example, the average (mean) compliance for Birth Center B is 65%, which would not be an ideal target for QBL compliance. Thus, although this example shows a process that is in control, there are still opportunities for improvement.
Figure 3: Birth Center C- data points varying widely around the median, reflecting greater variation in the process
This run chart shows data points varying widely around the median (50%), reflecting a process that is not in control. There is great variation from month to month. More information is needed to determine why the variation exists; actions can then be aimed at developing interventions to reduce variation and hardwire the intended process.
Figure 4: Birth Center D- outlier finding
This run chart shows data points consistently around the median (82%), indicating little variation in the QBL process with the exception of the month of June (30% compliance). This is an example of an outlier, or unusual finding, representing one type of special cause variation. Special cause variation can occur when a data point behaves in an unexpected or unpredictable way. In general, it is worthwhile to determine why these variations occur, as they indicate a process that is not in control. In this example, was there a large changeover in staffing combined with an increased census, leaving less trained personnel and fewer resources to help with QBL? The following month shows compliance rising again but not quite reaching the median. It is also worthwhile to investigate what happened at that point in time: did staffing and census stabilize, or was there an intervention to improve compliance?
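The outlier check just described can be sketched in code. The data and the 25-point threshold here are purely illustrative, chosen to mimic Birth Center D's pattern; formal special-cause rules come from control-chart methods rather than a fixed cutoff:

```python
# Sketch: flag data points far from the median as possible special cause
# variation. The threshold is an illustrative assumption, not a standard.
from statistics import median

def flag_outliers(points, threshold=25):
    """Return (month, value) pairs that sit far from the median."""
    m = median(v for _, v in points)
    return [(month, v) for month, v in points if abs(v - m) > threshold]

# Hypothetical data resembling Birth Center D: steady near 82% except June
data = [("Jan", 80), ("Feb", 82), ("Mar", 84), ("Apr", 81),
        ("May", 83), ("Jun", 30), ("Jul", 75)]
print(flag_outliers(data))  # [('Jun', 30)]
```

A flagged month is a prompt for investigation, as in the staffing and census questions above, not a conclusion in itself.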
In summary, effectively tracking improvement with quality initiatives involves clearly defining what is being measured, standardizing the data collection process, ensuring all data collectors are using the same process, and effectively displaying the data in a manner that accurately tells the story. Run charts are one method of displaying the data; interpretation will determine how well the process is in control and if targeted interventions to improve efforts are needed.
*If this article interests you, you may also enjoy my book titled: Obstetric and Neonatal Quality and Safety (C-ONQS) Study Guide: A Practical Resource for Perinatal Nurses, available on Amazon: Amazon_obneonatalstudyguide
Kelly, P., Vottero, B. A., & Christie-McAuliffe, C. A. (2018). Introduction to quality and safety education for nurses: Core competencies for nursing leadership and management. Springer Publishing Company.
Copyright by Jeanette Zocco MSN, RNC-OB, C-EFM, C-ONQS