Quality Leadership and Quality Control


A key aspect of an organisation is leadership, which links the various components of the quality system. Leadership is difficult to characterise, but its key aspects include trust, setting an example, developing staff and, critically, setting the vision for the organisation. Organisations also have internal characteristics such as their degree of formalisation, centralisation and complexity. Medical organisations can have internal tensions because of the dichotomy between the bureaucratic and the shadow medical structures.

Different quality control rules detect different analytical errors with varying efficiency depending on the type of error present, its prevalence and the number of observations. The efficiency of a rule can be gauged by inspection of a power function graph. Control rules are only part of a process and not an end in themselves; just as important are the trouble-shooting systems employed when a failure occurs. ‘Average of patient normals’ may develop into a useful adjunct to conventional serum-based quality control programmes. Acceptable error can be based on various criteria; biological variation is probably the most sensible. Once determined, acceptable error can be used to set the limits in quality control rule systems.

Quality Control (QC)

Introduction

Laboratories are subject to external environmental influences that have an impact on the way they work. Changes in the operating characteristics of analysers that are used (high throughput, random access, etc) and the greater centralisation of work because of economic factors have increased the numbers of samples processed in laboratories. The role of an effective QC system in these laboratories is to ensure that not only are medically significant errors detected, but also that no false rejections occur since re-work associated with these rejections attracts a high cost. This cost is measured in the dollar value of wasted reagents, QC sera, labour and the time delay in releasing results. The need to process large volumes of samples has brought new problems, but few laboratories have modified their approach to QC.

A discussion about QC systems inevitably leads to the definition of acceptable limits for the ‘in control’ situation. The question of what is an appropriate allowable medical error for an analytical system has probably only recently been answered. Consequently, there is as yet no recognition of these new requirements in the design of many QC systems.

QC procedures are components of an overall ‘quality system’ which includes the selection and monitoring of staff and equipment, staff training, the control of wider laboratory processes and means of capturing feedback from clients and staff. The aim of this quality system is to maintain a desired level of stakeholder satisfaction with the organisation. What can be overlooked in this documented system is the importance of leadership in any organisation. Leadership provides the mechanism for change and improvement in any system, and can dictate the speed and extent of any reform. Invariably the success of an organisation is dependent on the quality of its leadership.

QC Systems

Every clinical biochemist should have a working knowledge of the techniques of QC. The theory of QC was introduced to us early in our formal education and we are often called upon to put those lessons into practice. But that does not mean that these principles are always applied appropriately, as many who use them may not fully understand their inherent assumptions and the problems that may arise if these assumptions are not met. In addition, analytical systems and medical requirements change, but we do not always modify our QC approaches to reflect these altered inputs.

Let us consider the implementation of rules that will detect medically important errors in an analytical process. Most clinical chemistry laboratories utilise a QC procedure that is based on the following principles.

  • Use of a (synthetic) QC serum

  • The use of a set of QC rules which signal when an analytical run has failed and

  • A procedure to follow when this situation (detected failure) arises.

In order to understand the limitations of our routine QC procedures we need to appreciate the limitations of each of these components of the QC system.

QC Sera

Consider firstly the use of synthetic QC sera. This material has the obvious advantages that it can be purchased in large volume lots with a long shelf-life (2–3 years), that it mimics the characteristics of human serum but is non-infectious, and that it is provided in conveniently sized vials in either freeze-dried or liquid stable form. However, it is not identical with human serum in all respects and some analytes do behave differently in some assays (the so-called matrix effects). The material may also deteriorate in the vial (e.g. glucose in freeze-dried vials) over its shelf-life or during transport and storage.

The need to reconstitute freeze-dried sera also introduces a very significant potential source of error. The freeze-dried vials are produced in a manufacturing process where vial-to-vial filling variation is inevitable, recorded as being up to 0.6%.1 There may be instability in certain analytes (creatine kinase, bicarbonate) after reconstitution (freeze-dried) or thawing (liquid stable). The act of reconstitution can introduce an error far greater than the inherent error of the rest of the analytical process and may introduce contamination from the diluent.2,3 Therefore, many laboratories have adopted liquid stable QC material to eliminate this source of error despite the significant increase in cost.

All QC systems based on QC sera require estimates of the mean and standard deviation (SD) that accurately reflect the stable between run variability of the assay. This requires these estimates to be determined over a reasonable number of analytical runs. The more estimates that are performed the greater the reliability of the measures, but also the greater the cost. Running new QC material over as many different runs as possible will also identify problems such as vial-to-vial variation or instability in certain analytes after thawing or upon reconstitution. A general rule would be that the material should be run over at least 20 analytical runs.3,4 It would also make sense to link these runs to an independent assessment of performance such as the external quality assurance programme samples.3

QC Rules5

Next we must consider the set of QC rules that are used. The analyst should be aware of how these rules react to the presence of error in the analytical system. For the sake of simplicity, analytical errors can be broadly classified as either systematic error (SE) or random error (RE). Some examples of common analytical problems that present as either of these two different types of error are shown in Table 1.

Table 1

Systematic Error                  Random Error
Intrinsic method bias             Matrix interference
Instrument bias                   Mechanical variation
Reagent lot bias                  Electrical interference
Calibration bias                  Photometer/detector variation
Within run bias                   Specimen problems (fibrin clots)

With synthetic QC sera based procedures, systematic error in the analytical system will present as a shift in the mean of the QC samples and some types of random error will manifest as an increase in the SD of these samples. Analytical errors can also be further characterised as persistent (e.g. a poor batch of reagent) or intermittent (e.g. external power fluctuations). This additional information about the duration of the problem may assist in the trouble-shooting process.

We need to understand that the QC rules that we use are not perfect. They do not detect all errors that may be present in an analytical system, and they may signal an error when none is present. The performance of a rule with respect to these false negative and false positive signals is quantified as the Probability of (error) Detection (Ped) and the Probability of False Rejection (Pfr) respectively. In an ideal world with perfect QC rules, the Ped would be 1.0 (100%) and the Pfr would be zero.

To illustrate some of these concepts consider the simplest of the commonly used QC rules, namely the 12S rule (see Figure 1). This rule signals failure if the value of the QC sample falls outside the range defined by the mean of QC sera, obtained when the analytical system was stable, plus or minus two SDs.

[Figure 1 – image not reproduced (cbr24_3p081f1.jpg)]

This rule is usually used as a warning signal in basic Shewhart systems.3,4 From our basic knowledge of statistics we know that in approximately 5% of cases this rule will falsely signal a failure when no error has occurred, i.e. Pfr is 0.05.

As can be seen from Figure 2, this rule responds differently to systematic and random error, and the response is dependent on the magnitude of the error. The size of the error is usually measured in terms of SD. The measured error can be either the difference between the current mean and the stable mean (termed bias or systematic error), or the change in the SD from the stable situation (called random error), or a combination of both types of error.

[Figure 2 – image not reproduced (cbr24_3p081f2.jpg)]

Using simulation studies it is possible to model the response of a QC rule to the addition of increasing levels of random or systematic error. The resultant graphs are called power-function graphs,6,7 and are illustrated for the 12S control rule in Figure 3.

[Figure 3 – image not reproduced (cbr24_3p081f3.jpg)]
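To make the idea of a power-function graph concrete, the following Python sketch estimates the rejection probability of the 12S rule by simulation as increasing systematic error (expressed in SD units) is added to a stable process. It is a minimal illustration rather than the published simulation method of references 6 and 7, and the parameter values (number of simulated runs, sizes of shift) are arbitrary.

```python
import random

def p_reject_1_2s(shift_sd, n_controls=1, n_runs=20000, seed=1):
    """Estimate the probability that at least one of n_controls QC values
    falls outside mean +/- 2 SD when the process mean has shifted by
    shift_sd standard deviations (systematic error)."""
    rng = random.Random(seed)
    rejected = 0
    for _ in range(n_runs):
        values = [rng.gauss(shift_sd, 1.0) for _ in range(n_controls)]
        if any(abs(v) > 2.0 for v in values):
            rejected += 1
    return rejected / n_runs

# Points on a power-function curve for the 1-2s rule with one control per run.
for shift in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"SE = {shift:.1f} SD -> P(reject) = {p_reject_1_2s(shift):.3f}")
# At SE = 0 the printed value is the false-rejection probability (about 0.05);
# at larger shifts it approaches the probability of error detection (Ped).
```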

While running more controls does improve the chances of detecting an error, more controls also increase the chances of falsely rejecting an in-control run (see Figure 3). In fact, as the QC sample is run more often (e.g. in a multi-channel analyser) the probability of false rejection increases according to the formula Pfr(repeated) = 1 − [1 − Pfr(single)]^n, where n is the number of repeated control sample analyses. Thus for 5 repeats the probability of false rejection is about 23%, which means that, even when no analytical error is present, roughly one run in four will be flagged as failing! It is also of note that this control rule is better at detecting systematic error than random error.
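The same relationship can be checked with a few lines of arithmetic:

```python
# Probability of at least one false 1-2s rejection when the rule is applied
# to n control observations in an in-control run (Pfr for a single control = 0.05).
pfr_single = 0.05
for n in range(1, 6):
    pfr_repeated = 1 - (1 - pfr_single) ** n
    print(f"n = {n}: Pfr = {pfr_repeated:.3f}")
# n = 5 gives a Pfr of about 0.226, i.e. roughly 23%.
```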

There are a variety of other rules that have been suggested, a selection of which are listed in Table 2; all of these have different responses to random and systematic errors.3–5 Most of these rules are simple to interpret because they are based on the visual inspection of a Levey-Jennings graph.

Table 2

We would expect to find some correlation between different controls run in the same analytical run as they should be subject to the same systematic and random errors. An analyst would typically look at the control charts from all the QC sera to detect whether there was any trend. This trouble-shooting technique is an acknowledgement of the probability of false rejection inherent in the rules that are used. There are control rules that use the results from different QC sera more formally to take account of this cumulative principle. Dechert and Case have proposed the use of the χ2 chart to simultaneously review data from more than one level of QC serum.8 This chart is an example of the recent adoption in laboratories of control rules that were developed, and are widely used, in industrial QC applications.

As well as combining the results from repeated application of one control rule, it is possible to combine different control rules to increase the Ped for a given level of false rejection. The Westgard multi-rule procedure is an example of this principle.4,5 A number of rule combinations have been advocated to improve error detection,3,4 including 13S/22S, 13S/22S/R4S, and 13S/22S/R4S/10x. The selection of rules to be used in specific situations is discussed later in this article.
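As an illustration of how such a combination can be expressed in software, the sketch below applies a simplified 13S/22S/R4S/10x interpretation to a chronological series of z-scores from a single control level. Production implementations also apply the rules across control levels and across runs, so this should be read as a teaching sketch, not a complete multi-rule system.

```python
def westgard_multirule(z_scores):
    """Apply a simplified 1-3s / 2-2s / R-4s / 10-x rule set to a
    chronological list of z-scores ((value - mean) / SD) for one control.
    Returns the name of the first rule violated, or None if in control."""
    z = list(z_scores)
    # 1-3s: any single observation beyond +/- 3 SD
    if any(abs(v) > 3 for v in z):
        return "1-3s"
    # 2-2s: two consecutive observations beyond 2 SD on the same side
    for a, b in zip(z, z[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            return "2-2s"
    # R-4s: two consecutive observations differing by more than 4 SD
    for a, b in zip(z, z[1:]):
        if abs(a - b) > 4:
            return "R-4s"
    # 10-x: ten consecutive observations on the same side of the mean
    for i in range(len(z) - 9):
        window = z[i:i + 10]
        if all(v > 0 for v in window) or all(v < 0 for v in window):
            return "10-x"
    return None

print(westgard_multirule([0.4, -1.1, 2.3, 2.2]))   # -> "2-2s"
print(westgard_multirule([0.4, -0.3, 1.0, -0.8]))  # -> None
```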

Action on QC Rule Failure

Once the QC rule system has flagged a failure what should the analyst do? Merely repeating the QC serum sample will not identify if random error is present as there is a high likelihood that the QC sample result will be within control limits when run the second time. Another common first action is to reconstitute another QC serum vial. This practice is a response to the problems associated with the freeze-dried control serum material referred to above. Similarly re-analysing the entire run is not an acceptable corrective action if there is a real problem with the analytical system as this increases turn-around time and cost. The action to be taken will depend on the degree of confidence the analyst has in the QC system and the analytical procedure.

The best solution is to introduce control rules that have a low probability of false rejection and to analyse carefully which rules have failed. This will suggest the type of error present and the most effective trouble-shooting path to follow.3,9,10 There needs to be a documented, systematic procedure for investigating the failure and taking appropriate action; otherwise the whole QC process will be a waste of time and effort. An example of such a procedure is given in Figure 4, a trouble-shooting flowchart for the end-of-cycle external Quality Assurance Programme (personal communication, J. Calleja, RCPA-AACB Chemical Pathology Quality Assurance Programs Group).

[Figure 4 – image not reproduced (cbr24_3p081f4.jpg)]

Patient-based QC Procedures

Another approach to monitoring the quality of analytical runs is to use patient data.3,11 This technique has not found significant use as a frontline QC system in clinical chemistry; however, it is widely used on automated haematology analysers in the form of Bull’s algorithm, which calculates a moving average of patient results.3

The potential advantages of this approach are that it obviates the matrix effects of synthetic control sera and may reduce the usage and hence the costs of the synthetic control sera. Patient data can be used in several different ways; for example, patient duplicates, discordance checks, delta checks, multi-parametric checks of individual patient data (e.g. anion gaps) and average of patient results. Many laboratories use some or all of these but there are obvious productivity and turn-around time problems with re-running patient samples either routinely or if they have abnormal results. However, there should be some documented procedure for dealing with critical or unexpected results. Delta checks based on previous results potentially alert the laboratory to specimen mix-up; however, they have been shown to have a poor predictive value for the detection of this error.12 A negative anion gap should alert the analyst to a problem with the electrolyte measurement system but as a control procedure it lacks sensitivity.13,14
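As a small illustration of a multi-parametric patient check, the snippet below calculates the anion gap and flags implausible values. The decision limits shown are illustrative assumptions for the example only, not recommended limits.

```python
def anion_gap_flag(sodium, chloride, bicarbonate,
                   low_limit=0.0, high_limit=30.0):
    """Calculate the anion gap (mmol/L) and flag results outside the
    illustrative limits; a negative gap in particular suggests a problem
    with the electrolyte measurement system."""
    gap = sodium - (chloride + bicarbonate)
    flagged = gap < low_limit or gap > high_limit
    return gap, flagged

print(anion_gap_flag(140, 103, 25))   # (12, False) - unremarkable
print(anion_gap_flag(131, 110, 24))   # (-3, True)  - suspect electrolytes
```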

The concept of averaging patient data for QC purposes is called the average of normals (AON) or average of patients. As the name suggests this method involves calculating the mean (and sometimes the range) of patient values for an analyte. If the mean (or range) value of the patient population calculated from a block (or run) falls outside previously determined error limits, then the rule signals failure. The error limits are usually based on mean and multiples of the SD.15

Cembrowski and Carey have investigated using AON for biochemical analysers.3 They have found that the power of AON procedures to detect error in an analytical system is dependent upon the following:

  • The ratio of the SD of the patient population (sp) to the SD of the analytical method (sa), expressed as sp/sa (the SD ratio, SDR)

  • The number of patient results averaged (Np)

  • The control limits and thus the probability of false rejection (Pfr)

  • The truncation limits for the exclusion of patient data from being averaged and

  • The proportion of the patient population lying beyond the truncation limits.

These authors also provide power function graphs and control limits for AON calculations for some common analytes, which can be used to design algorithms to monitor the stability of analytical systems. One of the problems with conventional AON procedures is the high false-rejection rate for a given level of error detection, mainly because of the presence of outliers.
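A minimal sketch of such an AON check is given below. The truncation limits, target mean and SD, z-limit and minimum number of results are illustrative assumptions for a hypothetical sodium assay, not the published limits of Cembrowski and Carey.

```python
import random

def aon_check(patient_results, lower_trunc, upper_trunc,
              target_mean, target_sd, z_limit=3.0, min_n=20):
    """Average-of-normals check for one block of patient results.  Results
    outside the truncation limits are excluded and the mean of the remainder
    is compared with limits set from the stable (in-control) period."""
    kept = [x for x in patient_results if lower_trunc <= x <= upper_trunc]
    if len(kept) < min_n:
        return None                              # too few results to decide
    mean = sum(kept) / len(kept)
    sem = target_sd / len(kept) ** 0.5           # SD of a mean of Np results
    in_control = abs(mean - target_mean) <= z_limit * sem
    return {"n": len(kept), "mean": round(mean, 2), "in_control": in_control}

# Illustrative block of 'patient' sodium results (mmol/L).
rng = random.Random(0)
block = [rng.gauss(140.0, 3.0) for _ in range(50)]
print(aon_check(block, lower_trunc=130, upper_trunc=150,
                target_mean=140.0, target_sd=3.0))
```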

The AON technique applied successfully on haematology analysers uses a mathematical method, exponential smoothing, which is less subject to interference from outliers. Smith and Kroft have described a similar system for clinical chemistry analysers, which they suggest has a Pfr comparable to the Westgard multi-rule systems, depending on the SDR.16

A possible use of these AON systems would be as a part of a hybrid system where a modified AON procedure would be used to detect bias and trigger the use of a synthetic QC serum. Thus AON systems could be used to complement traditional QC procedures using control sera to increase run lengths.17–19 The integration of these two systems has been reported in the literature but they have not been widely adopted in clinical chemistry laboratories.16–19

Defining Acceptable Medical Error

The entire discussion above has assumed that there is a given level of error (random or systematic) that is unacceptable. Furthermore, this level of unacceptable error is related to the SD of a control sample. That is, a run is rejected if a control serum sample is measured and returns a value that is outside a range based on the stable mean and SD of this sample. There is no reference to outside factors such as medical appropriateness or biological variation. As the laboratory does not operate in a vacuum, this approach may not yield results that are suitable for clinical practice. Therefore, some clinical input into the selection of what is an acceptable level of error in a particular assay is required.

It is essential to always bear in mind that no QC process will improve a fundamentally poorly performing assay. The best that a good control procedure will do is to let the analyst know that the run has failed. Indeed, Jenny and Jackson-Tarentino reported that a common cause of failure in external proficiency programmes was an inability to detect an appropriate medically significant error.20 This was because the methods used did not have the precision or accuracy necessary to provide clinically useful results.

There are three aspects required for the development of an analytical quality system, namely specifications for analytical quality, creation of analytical quality and control of analytical quality.9 I have discussed the last of these three in the previous section. The creation of analytical quality involves laboratories choosing the appropriate methods and this will be discussed further below.

Different strategies have been used to define what is an allowable (total) error including fractions of the reference range,21 physician surveys,22–24 state-of-the art precision,25 consensus meetings,26,27 government regulations,28,29 biological variation,9,30–32 and Capability Index.33–35

Estimates of allowable error based on surveys, consensus meetings or ‘state of the art’ performance will necessarily be self-serving as decisions could be based on current rather than ideal or necessary performance. However some of these surveys do provide some valuable information about the way that medical consumers use laboratory results. Elion-Gerritzen was able to show that physicians tolerated only small levels of analytical error when they used their own reference ranges to classify a patient’s results as normal or abnormal, but a larger error was tolerated if serial results were being followed looking for change.22 We can convert these ideas into concepts related to the analytical process. For example, clinicians rely on the laboratory offering assays of high sensitivity for Troponin because they are interested in whether the patient’s value is above a certain cut-off, but they are more interested in the run-to-run imprecision for tumour marker assays.

The ‘state of the art’ can be determined using external quality assurance programmes. All external quality assurance programmes will provide tables of performance of currently available methods and their imprecision and bias.36 Some caution must be used in interpreting these figures as they are obtained using synthetic control sera that may have the same matrix and stability problems described earlier in this article. However this is the best readily available resource to determine the benchmark in analytical quality.

The approach using biological variation relies on the view that the imprecision of an assay should be related to the inherent biological variation of the analyte. The imprecision and bias of the assay are set to some proportion of the biological variation. For example, the required imprecision might be < 0.50 × CVI and the bias < 0.25 × (CVI² + CVG²)^½, where CVI is the within-subject variation (coefficient of variation) and CVG is the between-subject variation.9,37 These biological variations are available for many analytes.5,38,39
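Applying those widely quoted formulas is straightforward. In the sketch below the within- and between-subject CVs are placeholder figures for a hypothetical analyte rather than values from the cited tables.

```python
def desirable_specs(cv_i, cv_g):
    """Desirable analytical imprecision and bias (as CV%) derived from
    within-subject (cv_i) and between-subject (cv_g) biological variation."""
    max_cv_a = 0.50 * cv_i
    max_bias = 0.25 * (cv_i ** 2 + cv_g ** 2) ** 0.5
    return max_cv_a, max_bias

# Illustrative biological variation figures (CV%) for a hypothetical analyte.
cv_a, bias = desirable_specs(cv_i=6.0, cv_g=12.0)
print(f"Desirable imprecision < {cv_a:.1f}%, desirable bias < {bias:.1f}%")
# -> Desirable imprecision < 3.0%, desirable bias < 3.4%
```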

These different strategies for determining allowable error do not yield the same results for a given analyte. It would appear that the most widely accepted best solution, which is also the most rational, is to use a system based upon biological variation.40–43

Converting Medically Allowable Limits into Control Rule Limits

Once we have decided upon a set of medically allowable limits the next challenge is to convert these into some parameter that can be monitored by our QC rules. The tables of medically allowable limits or quality requirements provide for each analyte a total allowable error. These tables of allowable error are available at www.westgard.com/clinical.htm or Skendzel et al.24

According to Westgard, the sizes of the analytical errors that must be detected by the control procedure to ensure specified quality (with a maximum defect rate of 5%) can be determined from the following equation:2,29,44

TEa ≥ biasmeas + (ΔSEcont × smeas) + z × (ΔREcont × smeas)

where TEa is the allowable total error

biasmeas is the analytical bias;

ΔSEcont is the change in the systematic error to be detected by the control procedure;

smeas is the analytical SD;

ΔREcont is the change in the random error to be detected by the control procedure, and z is related to the chance of exceeding the quality requirement – when z is 1.65, a maximum defect rate of 5% may occur before rejecting a run.

The total error incorporates both allowable systematic and random error terms. This equation can be solved for ΔSEcont by setting ΔREcont to 1.0, or for ΔREcont by setting ΔSEcont to zero. Thus the critical errors to be detected by a QC system are given by the following:

ΔSEcrit = [(TEa − biasmeas) / smeas] − 1.65

ΔREcrit = (TEa − biasmeas) / (1.65 × smeas)

Selecting Control Rules to Meet Medical Requirements

These equations allow the derivation of the critical systematic and random errors given the total allowable error. Remembering that we know what our QC rules can detect in terms of the SD of the control sera, we can convert the ΔSEcrit and ΔREcrit into these control sera SD units. Appropriate QC rules can then be selected which can detect these errors.4,44–46
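A worked sketch of this conversion is shown below. The TEa, bias and SD figures are illustrative only; the formulas are those given above with z = 1.65.

```python
def critical_errors(tea, bias, sd, z=1.65):
    """Critical systematic and random errors to be detected by the QC
    procedure, expressed in multiples of the analytical SD (delta-SE)
    and as a factor by which the SD may inflate (delta-RE)."""
    dse_crit = (tea - bias) / sd - z
    dre_crit = (tea - bias) / (z * sd)
    return dse_crit, dre_crit

# Illustrative example: TEa 4.0 units, bias 0.5 units, stable SD 1.0 unit.
dse, dre = critical_errors(tea=4.0, bias=0.5, sd=1.0)
print(f"delta-SE(crit) = {dse:.2f} SD, delta-RE(crit) = {dre:.2f}")
# -> delta-SE(crit) = 1.85 SD, delta-RE(crit) = 2.12
```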

Generally, the best rules for the detection of systematic errors are those based on mean (see footnote 1) and Cusum systems, while χ2 and range rules are best for the detection of random error.5 A specific example of this principle was provided by Koch et al. who developed a QC system for a Hitachi 737 analyser.47 They found it was necessary to have three QC procedures depending on the analyte:

  • 13.5S rule for sodium, potassium, urea, creatinine, phosphorus, urate, cholesterol, protein, total bilirubin, GGT, ALP, AST and LD

  • 12.5S rule for albumin and

  • multi-rule procedure for chloride and bicarbonate.

Perhaps not surprisingly, Koch also reported that the precision available for calcium barely meets the allowable error requirements. This suggests that there are some currently available assays that do not meet the medical requirements of treating physicians.

Westgard has published grids and operational process specifications charts (OPSpecs) which assist in the selection of appropriate QC rules given allowable medical error.5,48,49 Westgard also has available the ‘Validator programme’ which provides quick access to power-function graphs, critical error graphs and ‘OPSpecs’ charts.50 Laboratories should move towards effective rules and systems that detect appropriate levels of medically acceptable error rather than using an internally driven process control model.

Process Capability

An alternative method of selecting appropriate control rules, given a target allowable limit, is the technique of process capability. The concept of ‘process capability’ is used in the manufacturing industry to quantify the relationship between product specifications or allowable limits, and process performance, or in our case, analytical method performance. There are a variety of parameters that have been used but one of the most fundamental is the capability ratio (Cp) which is the ratio of tolerance width to process capability:51

Cp = (USL − LSL) / 6σ

where USL and LSL are the upper and lower specification limits of an analytical process and σ is the SD of that process. This is the so-called ‘six sigma’ technique. The application of Cp to clinical chemistry has been explored by Westgard and Burnett and Burnett et al.33,34 The USL and LSL are determined from the medically allowable limits and the SD comes from the SD of the method determined when the assay is stable. Calculating the Cp enables the analytical process to be classified as ‘poor’ (Cp<1), ‘good’ (Cp >1.33) or neither good nor poor (1<Cp<1.33). Based on this classification different QC procedures are then used to detect unacceptable performance as detailed in Table 3.
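As a minimal sketch, assuming the specification limits have already been derived from the medically allowable error:

```python
def capability(usl, lsl, sd):
    """Capability ratio Cp = (USL - LSL) / (6 * sigma) with the
    classification used in the text."""
    cp = (usl - lsl) / (6.0 * sd)
    if cp < 1.0:
        grade = "poor"
    elif cp > 1.33:
        grade = "good"
    else:
        grade = "neither good nor poor"
    return cp, grade

# Illustrative: medically allowable limits of target +/- 0.25 mmol/L
# for an assay whose stable SD is 0.05 mmol/L.
print(capability(usl=2.45, lsl=1.95, sd=0.05))
# -> Cp of about 1.67, classified as 'good'
```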

Frequency of Control Observations

Obviously the length of the interval between routine QC samples can have a major influence on the expected number of unacceptable results produced during the existence of an out-of-control error condition.52 Perhaps surprisingly, another variable in the control of an analytical batch is the distribution of the control sera in that run. Koch et al. also reported on different sampling strategies, that is, the distribution of the control sera samples in the analytical run and the run length (see footnote 2).47 They concluded that no more than two controls at a time, together with an appropriate control procedure, were sufficient to detect the allowable medical error. Other authors found that pre-operation controls before each batch offered little advantage and reduced productivity.53,54

Neubauer et al. have also reviewed the question of what is the optimal frequency and number of controls necessary to control a clinical chemistry analyser.55 These authors sought an optimised, cost-effective control model and found the following:

  • When using a 13S rule, charting the mean of two identical control samples run together has sufficient statistical power to detect medically significant error and

  • Controls should be run every fifteen minutes (on an analyser with a throughput of five patients/minute).

To improve QC performance further, strategies that do not depend on the periodic testing of control samples, such as QC methods that use patient samples, will be required.52

Frequency of Medically Important Error

The third variable that impacts on the efficiency of a QC process is the frequency of occurrence of medically important errors. A process that has a high occurrence of medically significant errors must have a QC procedure with a high error detection rate, whereas a process with a low occurrence of error must have a low false rejection rate.3,4

The performance of a QC procedure can be described in terms of predictive value, much like a test for a disease in a mixed population of diseased and ‘healthy’ patients. Predictive values for different QC rules and differing error prevalence are available.3,4 Cembrowski and Carey estimated that the prevalence of error for most methods was around 1–2% but could exceed 10%.3 The impact of this error prevalence can be illustrated by consideration of the 12S rule. If, for two controls per run, the error prevalence is 2%, then about 87% of rejected runs will be false rejections. If the error prevalence is approximately 10%, then only about 60% of rejected runs will be false rejections.
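The predictive value arithmetic behind these figures is easy to reproduce. Note that the error-detection probability (Ped) used below is an assumed illustrative value, since the text does not state one; the exact percentages depend on it.

```python
def fraction_false_rejections(prevalence, ped, pfr):
    """Proportion of rejected runs that are false rejections, given the
    prevalence of medically important error, the probability of error
    detection (Ped) and the probability of false rejection (Pfr)."""
    true_rejects = prevalence * ped
    false_rejects = (1 - prevalence) * pfr
    return false_rejects / (true_rejects + false_rejects)

# 1-2s rule with two controls per run: Pfr = 1 - 0.95**2, about 0.0975.
# A Ped of 0.70 is an assumed, illustrative detection rate.
for prev in (0.02, 0.10):
    f = fraction_false_rejections(prevalence=prev, ped=0.70, pfr=0.0975)
    print(f"error prevalence {prev:.0%}: {f:.0%} of rejections are false")
# -> roughly 87% and 56% of rejections are false, respectively
```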

Quality Systems

Maintenance of the quality of a result requires more than just a set of QC rules and appropriate decision levels. There must be a correct physical environment, a staff recruitment and training programme, customer feedback and a process to integrate all of these into a working system. The process described above is an example of a quality systems approach to process control. This approach involves the use of a monitoring process (QC rules), the input of a stakeholder definition of quality (medically allowable error) and continuous improvement of the system. There is not the space in this review to discuss all the aspects of quality systems, but the reader is reminded that QC is a component of a much larger system for ensuring the quality of a laboratory.56

The reader might also gain from referring to established models that serve as frameworks for managing improvement and excellence. On an international basis, these include the international ISO 9000 approach which has general applicability.57

The Baldrige Award Health Care Criteria are also used internationally.58 These provide an excellent framework for improvement in health care and are in fact tailored for health from the more generally applicable Malcolm Baldrige National Quality Award Criteria. There is also the Australian Business Excellence Framework that is based on similar foundations as the Baldrige award.59

These criteria are designed to help organisations use an integrated approach to organisational performance management that results in:

  • Delivery of ever-improving value to patients and other customers, contributing to improved health care quality

  • Improvement of overall organisational effectiveness and capabilities as a health care provider

  • Organisational and personal learning (page 1, 2002 Health Care Criteria: core values, concepts, and framework).59

The Baldrige Index is a notional stock portfolio based on the winners of this Award between 1991 and 2000, whereas the Standard & Poor’s (S&P) 500 lists the top 500 companies based on their stock exchange performance. The companies on the Baldrige Index outperformed the S&P 500 companies by about 3 to 1, recording a 323% return on investment compared with 110% for the S&P 500.60 This provides evidence that organisations that pursue quality have an advantage.

The health care criteria are based on a set of inter-related core values and concepts:

  • Visionary leadership

  • Patient focussed excellence

  • Organisational and personal learning

  • Valuing staff and partners

  • Agility

  • Focus on the future

  • Managing for innovation

  • Management by fact

  • Public responsibility and community health

  • Focus on results and creating value

  • Systems perspective.

These values and concepts provide a basis for action and feedback. Effective management can reduce complexity, by reducing the number of choices staff members need to make when faced with a particular situation, through implementation of systems that are understood and followed. The role of leadership in an organisation is to put these systems in place, and monitor and improve them.

Quality Leadership

The quality models described above all list leadership as critical for an organisation to improve. Leadership should be a means to an end and not an end in itself. Effective leaders are those people who are able to change an organisation or community for the better.

Let us consider what leadership is all about. Kotter described four distinctions between management and leadership (see Table 4).61

Table 4

Perhaps leadership is a property as well as a process but the two are clearly related and many modern management theorists would see the differences as being in degree rather than kind.62 One of the first attempts at finding out what managers do was made by Mintzberg who, after studying 160 managers, described the roles of managers as follows:63

  • Figurehead

  • Leader

  • Liaison

  • Monitor

  • Disseminator

  • Spokesman

  • Entrepreneur

  • Disturbance Handler

  • Resource Allocator

  • Negotiator

Undoubtedly leadership is about hard work and there are probably a number of areas where this hard work should be directed.64 The first is the area of goal setting, or managing ‘meaning’.65 The leader sets the goals, sets the priorities, and sets and maintains the standards. Bennis writes that leadership is about the “capacity to create a compelling vision and translate it into action and sustain it”.65 The leader articulates a compelling vision and a rationale of achievement. The vision provides the link between the current and the future organisation. However, the effective leader must be able to communicate this vision and translate it into action.

The leader must also set an example. The personal behaviour of a leader is constantly under the microscope. If the leader’s ‘standards’ are merely what they can get away with, they will be perceived as a hypocritical time-server. Mediocrity is contagious. Therefore the leader must be a highly credible role model. Thus the second requirement is that the leader must see leadership as responsibility rather than as rank and privilege (Bennis calls this “the management of self”).65 Leaders must also realise that they need to develop themselves.

Leaders see that one of their roles is the development of their subordinates. Good leaders need strong subordinates and, by providing positive feedback, they can achieve very high performance levels in staff. Therefore leadership is about coaching and constantly challenging the status quo in a search for improvement. This coaching needs to be given in an unbiased way, recognising that able people tend to be ambitious. The leader should take responsibility for the mistakes of their subordinates, and see the development of their staff as triumphs rather than threats. Sometimes even a bad decision is better than no decision at all. Some leadership perspectives emphasise the pivotal role of emotional connectedness in leadership (emotional leadership or emotional intelligence).66

The final requirement of effective leadership is to earn trust (“The management of trust”).65 Otherwise there won’t be any followers – and the only definition of a leader is someone who has followers. Leaders do not have to be liked to be successful, but they do need to be respected to be effective. A leader must be consistent in their actions and sayings in order to earn trust from their followers.64

The Organisational Background

It is worth noting that most of the great, and not so great, leaders did not actually create systems. Great generals such as MacArthur, Monash, Napoleon and Alexander the Great did not create their armies or their internal processes. These leaders worked within the existing system to direct that organisation and its people to greater deeds. In most cases, a leader can only lead an existing organisation.

This raises the question “What are the characteristics of successful organisations?” Organisations have three essential features: a structure, internal processes and people.

We begin with organisational structure. ‘Structure follows strategy’; this implies that an organisation should have a structure that supports the organisational goals. But what does this mean? Organisational structures have a number of different dimensions, and organisations often change their structure as they enlarge or change direction. For example, compare a pathology network with a stand-alone multidisciplinary laboratory. The network has formal procedures for capital expenditure, possibly a human resource department and often a complex political system in place to manage the organisation. A small, self-contained laboratory may have only one key decision maker on site who will make all the staff and equipment decisions and deal with the outside world.

The key dimensions of organisational structure are complexity, formalisation and centralisation and different organisations can be classified on the basis of differences in these variables.67 The stand-alone lab is low on formalisation and complexity but high on centralisation. The pathology network is high on all three.

The degree of centralisation and formalisation often depends on the size of the organisation and the management style of the leaders. A feature of professional organisations, such as hospitals, is that leadership and authority come from both within and outside the organisation, and not necessarily from people in senior management positions. Administrators retain everyday power to some extent, but can only retain this as long as the professionals (e.g. doctors or nurses working in the organisation) perceive them to be serving their interests effectively. Individually, administrators may be more powerful than individual professionals, but the power of the CEO can easily be overcome by the collective power of the professionals. Indeed, there is some evidence that bureaucratic control that seeks to limit professional autonomy may be counter-productive.68 However, the extent to which professionals accept supervisory authority is also linked to the perceived professional expertise of the supervisor. The issue seems to be power, and power-plays between professional leaders and administrative leaders. Administrators emphasise mechanisms that enhance compliance and loyalty, while professionals emphasise mechanisms that enhance their autonomy.69

The model in many organisations is that the management controls the structure of work, productivity and outcomes. This is not always the case in organisations that have a high proportion of professionals in them. Professionals have, as the basis of their power, discretion to determine outcomes, expenditure and use of resources. The potential for clash of professional cultures exists in health care organisations – a cause for concern for managers, whatever their profession.69

Final Remarks

The major topics discussed above, namely leadership and QC, may appear at first unrelated, but in fact they share common principles:

  • Quality improvement can only be effective if the basic processes have acceptable raw materials, i.e. analytical systems with good precision, organisations with good staff.

  • The results of the monitoring systems need to be interpreted carefully and with a good knowledge of those systems, i.e. QC rules, financial parameters, key performance indicators.

  • Leaders work within existing systems and organisations. Managers and analysts can only be effective at quality improvement activities if they know their systems (organisations or analysers).

  • The end result is the important factor, not the control process. Don’t concentrate on the QC rules and lose sight of the actual result of the assay, or the bottom line of the organisation.

  • Organisations have three essential features: a structure, internal processes and people. Organisations and assays need input from outside to ascertain the expectations of clients.

  • Quality systems help to build efficient organisational structures.

  • Managers need to behave like leaders.

  • Organisations have features that reflect the politically powerful staff within them, and these characteristics need to be understood before the organisation can be effectively managed.

QC is a process where an appropriate group of rules with a carefully considered set of limits are applied in order to consistently deliver results with known levels of acceptable error. The management of this process involves an understanding of the rules, the limits and the expectation of all clients. Effective leadership also requires a deep understanding of an organisation, its processes, their limits and the expectations of all stakeholders.