Implementing the WHO integrated tool to assess quality of care for mothers, newborns and children: results and lessons learnt from five districts in Malawi
This is the first time the WHO integrated tool has been used to assess the quality of maternal, newborn and child health care at country level. Application of the tool is feasible and has provided valuable information highlighting areas of good quality care as well as deficiencies in the quality of care at healthcare facility and district levels in Malawi. Using a “dashboard” to display the assessment findings makes it possible to easily identify priority areas of care that require immediate action. This study also highlights the need for modification and further standardisation of the new WHO Quality of Care tool. In particular, we recommend a reduction in the overall number of standards assessed, revision of the current set of standards so that all aspects of the continuum of care are included, and revision of the formulation of standards so that they reflect all aspects of quality (i.e. inputs, processes and outcomes) and are measurable and adaptable to context (healthcare facility level).
Strengths and limitations of the new WHO tool
While the integrated tool is designed to assess quality across the continuum of care, the standards currently included in the tool are not fully representative of all the areas of care that need to be assessed. Antenatal care is not assessed at all, and postnatal care only to a very limited extent. These are typically neglected areas of care which are often not included in quality improvement activities, in part because national standards for antenatal and postnatal care are often not in place. Developing such standards and including them in comprehensive quality of care assessments is a priority. In addition, the tool would be enhanced by including indicators for routine intrapartum care practices, for example the choice of a companion at the time of birth, freedom of position and movement throughout labour, non-supine position in labour and careful monitoring of progress with the partograph [12]. These aspects of care, together with others relating to women’s experience of care (e.g. effective communication, care with respect and dignity), are essential and inter-linked dimensions of quality, yet are difficult to assess and monitor well. Methods such as structured observation and properly conducted exit interviews with women would be appropriate for measuring these aspects of care and could easily be incorporated into a revised version of the tool.
There are some important points to highlight in relation to how well the tool was able to provide complete and accurate data. It proved difficult to report on the size and capacity of the healthcare facilities assessed because data on basic hospital statistics and outcome measures were often not available and were not collected consistently. In addition, for the neonatal and paediatric modules, data collection was frequently incomplete. In its current format, the tool is very long and detailed.
Some standards are easier to assess (e.g. ward infrastructure) than others (e.g. satisfactory progress in labour), and some are better defined (e.g. the criteria for the standard on referral) than others (e.g. availability of essential drugs, equipment and supplies). This makes it more likely that some aspects of care are scored more highly than others simply because the relevant “standard” is easier to measure and more precisely defined. For example, rather than assessing drug availability in general terms, it would be more accurate and informative to collect data on stock-outs or non-availability of specific essential drugs.
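To illustrate the kind of drug-specific data that could be collected, the sketch below computes a simple tracer-drug availability indicator and flags recent stock-outs. It is a minimal example only: the tracer list, availability flags and stock-out days are hypothetical and are not drawn from the WHO tool or from this assessment.

```python
# Illustrative sketch of a tracer-drug availability indicator.
# All drug availability flags and stock-out days below are hypothetical.
tracer_drugs = {
    # drug name: (available on assessment day?, stock-out days in last 30 days)
    "Oxytocin": (True, 0),
    "Magnesium sulphate": (False, 12),
    "Gentamicin": (True, 3),
    "Amoxicillin dispersible tablets": (True, 0),
}

# Proportion of tracer drugs in stock on the day of assessment
available = sum(1 for ok, _ in tracer_drugs.values() if ok)
availability_rate = 100 * available / len(tracer_drugs)
print(f"Tracer drugs available on assessment day: {available}/{len(tracer_drugs)} "
      f"({availability_rate:.0f}%)")

# Flag any drug with a current or recent stock-out for follow-up action
for drug, (ok, days_out) in tracer_drugs.items():
    if not ok or days_out > 0:
        print(f"  Flag: {drug} - stock-out on {days_out} of the last 30 days"
              + ("" if ok else " (currently unavailable)"))
```

Indicators of this kind yield a concrete, comparable figure per facility, rather than a judgement on an imprecisely defined “availability” standard.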
Completion of exit interviews with women, caretakers and providers is a mandatory component; while some healthcare facilities did complete these, we did not have access to the complete data set and so have not reported these findings. Where case observation is used, it is not clear how many cases assessors should observe before judging whether standards are met. There are also challenges to relying on observation, especially the potential for assessor bias and the likely effect of an external assessor’s presence on provider behaviour [13]. Peer assessment or self-assessment at healthcare facilities are alternative approaches. In addition, if no cases are available to observe at the time of the assessment, this part of the assessment cannot be completed. In these circumstances, assessors are advised to use staff interviews and data from registers to gather relevant information where possible.
Data collection could be made more efficient through the use of technology, including tablets or machine-readable forms, and this is something to consider for future iterations of the tool. Table 2 summarises our key recommendations for improvement of the integrated tool.
Table 2 Recommendations for improvement of the WHO integrated quality of care assessment tool
We have not reported on the resource requirements for implementing the WHO integrated quality of care assessment tool at national or sub-national level. These data could be generated reliably in future through careful implementation research conducted alongside country-level assessments. The burden of collecting a large amount of additional data on quality of care and performance at scale is a challenge that other pilot assessments have encountered [14]; for this reason, we recommend that the tool be shortened wherever possible and/or that selected components or modules be used as needed.
The debriefing and action plan provided in the assessment tool annex were not completed for any healthcare facility, and the reasons for non-completion of this critical step in the process need to be understood, perhaps through dialogue with assessors.
Implications for policy and practice
Until recently, the emphasis has been on the coverage and availability of care rather than its quality [15]. A new tool to measure quality is, in principle, useful for providing baseline information and highlighting specific areas for quality improvement, and the new WHO tool has this potential. A key bottleneck in quality improvement efforts at healthcare facility level in Malawi and other low- and middle-income countries is the translation of assessment data into action. The dashboard approach highlights in a very visual and accessible way where the key quality of care problems exist at both healthcare facility and district level. The findings were presented at a national workshop in Lilongwe to share lessons learnt on maternal, newborn and child health quality of care, where the Minister for Health in Malawi recommended that a dashboard similar to the one developed for this analysis be adopted to help map quality of care at district level. Subsequently, the assessment data were disseminated at district level and action plans were developed. A similar standards-based, action-oriented healthcare facility assessment approach has been implemented in other lower-middle-income countries [16, 17], and is the core component of clinical or standards-based audit [18].
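As a minimal sketch of how such a dashboard view could be produced, the example below aggregates facility-level scores for a few standards to district level and assigns traffic-light bands. The facility names, standards, scores, scoring scale and cut-offs are all illustrative assumptions, not the actual dashboard or data used in this study.

```python
# Minimal dashboard sketch: aggregate hypothetical facility scores to district
# level and band them, assuming standards are scored 0 (absent) to 3 (fully met).
from statistics import mean

# Hypothetical scores per facility for a handful of standards
scores = {
    "Facility A": {"Referral system": 2.5, "Essential drugs": 1.0, "Infection prevention": 2.0},
    "Facility B": {"Referral system": 1.5, "Essential drugs": 0.5, "Infection prevention": 2.5},
    "Facility C": {"Referral system": 3.0, "Essential drugs": 1.5, "Infection prevention": 1.0},
}

def band(score: float) -> str:
    """Map a mean score to a traffic-light band (illustrative cut-offs)."""
    if score >= 2.5:
        return "GREEN"   # standard largely met
    if score >= 1.5:
        return "AMBER"   # partial compliance, needs improvement
    return "RED"         # priority area requiring immediate action

# District-level view: average each standard across facilities and band it
standards = sorted({s for facility in scores.values() for s in facility})
print(f"{'Standard':<22}{'District mean':<15}Band")
for std in standards:
    m = mean(facility[std] for facility in scores.values())
    print(f"{std:<22}{m:<15.1f}{band(m)}")
```

Even a simple tabulation of this kind makes it immediately visible which standards fall into the “red” band at district level, which is the step that links assessment data to an action plan.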
At a global level, the shift towards improving the quality of maternal and newborn health services demands a coordinated approach. Yet measurement of quality of care is often not done consistently, and the many different tools, indicators and methods in use make it difficult to compare between and within countries. There is a need to clarify where and how the new integrated WHO tool fits with other facility-based assessment tools, such as SARA and the World Bank Service Delivery Indicators (SDI) survey. The new WHO Quality of Care tool is unique in its ability to judge the quality, not just the quantity, of services, but it assesses relatively more structure and process characteristics; ideally, a healthcare facility assessment tool should assess quality in relation to structure, process and outcome [19]. There are plans to extend the SARA assessment to include structured observations of consultations between providers and women, as well as vignettes to determine providers’ usual case management practices [20]. It is essential that partners prioritise alignment of quality of care assessment tools. The recent development and testing by WHO and partners of a core set of harmonised maternal, newborn and child health indicators is a step in the right direction, but the final indicators need to be rapidly integrated into existing tools [4]. In addition, as with any new tool developed by international agencies, it is imperative that the standards on which the tools are based are accepted by healthcare workers and established as realistic in the local context. A recent assessment of quality of care in a low-resource referral hospital in Zanzibar used a participatory approach with skilled birth attendants and midwifery and obstetrics specialists to agree realistic criteria for quality of care that reflected local realities [21]. In this ‘bottom-up’ approach, fetal heart rate assessment every 30 min was maintained as ‘optimal’ practice, but the team agreed that assessment at intervals of up to 90 min was an acceptable audit criterion.