Can quality improvement improve the quality of care? A systematic review of reported effects and methodological rigor in plan-do-study-act projects
In this systematic review nearly all PDSA-based QI projects reported improvements. However, only approximately one in four projects had defined a specific quantitative aim and reached it. In addition, only a small minority of projects reported adherence to all four key features recommended in the literature to ensure the quality and adaptability of a QI project.
The claim that PDSA leads to improvement should be interpreted with caution. The methodological limitations in many of the projects make it difficult to draw firm conclusions about the size and causality of the reported improvements in quality of care, and they call into question the legitimacy of PDSA as an effective improvement method in health care. The widespread lack of a theoretical rationale and of continuous data collection makes it difficult to track and correct the process, as well as to relate an improvement to the use of the method [10, 11]. The apparently limited use of the iterative approach and of small-scale testing constitutes an additional methodological limitation. Without these tools for testing and adapting, one risks introducing unintended consequences [1, 36]. Hence, QI initiatives may tamper with the system in unforeseen ways, creating more harm and waste than improvement. The low use of small-scale testing could perhaps originate in a widespread misunderstanding that one must test at large scale to achieve adequate statistical power. However, this is not necessarily the case with PDSA [15].
There is no simple explanation for this lack of adherence to the key methodological features. Some scholars claim that even though the concept of PDSA is relatively simple, it is difficult to master in practice [4]. Several explanations have been offered, including an urge to favour action over evidence [36], an inherent messiness in the actual use of the method [11], its inability to address "big and hairy" problems [37], an oversimplification of the method, and an underestimation of the resources and support needed to conduct a PDSA-based project [4].
In some cases, the lack of adherence to the methodological recommendations may be a problem of documentation rather than of methodological rigor; for example, the frequent absence of small-scale pilot testing may reflect authors considering the information too trivial to report, even though it was performed in the projects.
Regarding our framework, one could argue that it has too many or too few key features to encompass the PDSA method. The same can be said about the supplementary features, where additional features could also have been assessed, e.g. the use of Specific, Measurable, Attainable, Relevant and Timebound (SMART) goals [14]. It has been important for us to operationalize the key features so that their presence can be identified easily and accurately. Simplification carries a risk of information loss, but this can be outweighed by the benefits of a clear and applicable framework.
This review has some limitations. We only included PDSA projects reported in peer-reviewed journals, which represent just a fraction of all QI projects conducted around the globe. Further, it might be difficult to publish projects that do not document improvements, which may introduce publication bias. Future studies could use the framework to examine the grey literature, such as evaluation reports, to see whether the pattern of methodological limitations is consistent. The fact that a majority of the projects reported positive change could also indicate such a bias: for busy QI practitioners, the effort of translating a clinical project into a publication may well be motivated by a positive finding, leaving projects with negative effects unreported. However, we should not forget that a negative outcome of a PDSA project may still contribute valuable learning and competence building [4, 6].
The field of IS and collaboration between practitioners and scholars have the potential to deliver crucial insight into the complex process of QI, including the difficulties of replicating projects with promising effects [5, 12, 20, 32]. Rigorous methodological adherence may be experienced by practitioners as a restriction, which could discourage engagement in QI initiatives. However, by strengthening the use of the key features and improving documentation, PDSA projects will be more likely to contribute to IS, including reliable meta-analyses and systematic reviews [10]. This could in turn provide QI practitioners with evidence-based knowledge [5, 38]. In this way, rigor in performing and documenting QI projects benefits the whole QI community in the long run. For practitioners to be motivated to use new knowledge, it must be readily available and application oriented. An inherent part of using the PDSA method is acknowledging the complexity of creating lasting improvement. Here, the scientific ideals of rigorous, high-quality planning, execution, hypothesizing, data management and documentation should serve as inspiration.
Our framework could be taken to imply that the presence of all four features will inevitably result in the success of an improvement project. This is clearly not the case. No "magic bullets" exist in QI [39]. QI is about implementing complex projects in complex social contexts, where adherence to the key methodological recommendations and rigorous documentation can help ensure better quality and reproducibility. This review can serve as a reminder of these features and of how rigor in individual QI projects can assist the work of IS, which in turn can offer new insight for the benefit of practitioners.