Archives of Medicine

Research Article - (2018) Volume 10, Issue 2

Principles of Methodological Design of Clinical Audit

Soliman A*

Dartford and Gravesham NHS Trust, Darent Valley Hospital, Dartford, UK

Corresponding Author:

Soliman A
Dartford and Gravesham NHS Trust
Darent Valley Hospital, Dartford, UK
Tel: +441322428160
E-mail: asoliman1@nhs.net

Received date: February 21, 2018; Accepted date: March 15, 2018; Published date: March 22, 2018

Citation: Soliman A (2018) Principles of Methodological Design of Clinical Audit. Arch Med Vol No:10 Iss No:2:2 doi: 10.21767/1989-5216.1000262

Copyright: © 2018 Soliman A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Clinical audit is used across the globe to ensure the safe and effective delivery of healthcare. As a quality improvement tool, clinical audit has been validated at all levels, from local hospitals to national interventions. Nonetheless, there is evidence in the literature that the results produced by audits are widely variable, with failure to generate the intended results attributed to weak audit design. This paper outlines the key methodological considerations and design principles that should be taken into account when developing, designing and undertaking audit projects.

Keywords

Clinical audit; Medical audit; Audit design; Audit project

History and Background

“For us who Nurse, our Nursing is a thing which unless we are making progress every year, month, every week, take my word for it we are going back.” – Florence Nightingale [1]. Florence Nightingale’s pioneering work during the Crimean War is regarded by many as the first clinical audit. She set standards against which she measured practice. Her process of systematic observation, standard setting and improvement of care remains the definition of the clinical audit process widely used today.

Today, clinical (medical) audit is used worldwide to measure the quality of care patients receive. However, there is conflicting evidence on the effectiveness of clinical audit [2]. A Cochrane Review of 140 studies showed that the results produced by audits were widely variable, ranging from a negative effect to a very positive one [3].

Clinical Audit Effectiveness

Clinical audit effectiveness is paramount because clinical audit is in itself a measurement tool of clinical effectiveness. In the modern NHS, clinical effectiveness is defined as: “The extent to which specific clinical interventions, when deployed in the field for a particular patient or population, do what they are intended to do, i.e. maintain and improve health and secure the greatest possible health gain from available resources.” [4] A good example of how national clinical audit has brought about clinical effectiveness is the Myocardial Ischaemia National Audit Project (MINAP), a national registry of patients admitted to hospitals in England and Wales with acute coronary syndromes (ACS). It was established in 1998 to provide participating hospitals with a common mechanism for auditing performance against standards defined in the National Service Framework (NSF) for Coronary Heart Disease. Data collection began in October 2000 and by mid-2002 all acute hospitals in England and Wales were participating in the registry [5]. As an audit tool, MINAP provided participating hospitals with a record of ACS management against nationally agreed standards of care. Hospitals that contribute data to MINAP are able to view online their hospital’s performance against NSF targets compared with national aggregate data. Results from this audit helped reconfigure cardiology services nationwide, resulting in a substantial reduction in post-ACS mortality.

Audit Design

An excellent audit can help achieve excellent outcomes. So, what makes for an excellent audit?

NICE listed the following as the criteria for best practice in clinical audit [6]:

Preparation and planning

The topic for the audit is a priority.

The audit measures against standards for the quality of care.

The organisation enables the conduct of the audit.

The audit engages with clinical and non-clinical stakeholders.

Patients and their representatives are involved in the audit if appropriate.

Measuring performance

The audit method is described in a written protocol.

The target sample should be appropriate to generate meaningful results.

The data collection process is robust.

The data are analysed and the results reported in a way that maximises the impact of the audit.

Implementing change

An action plan is developed and implemented to take forward any recommendations made.

Achieving and sustaining improvement

An action plan is made for achieving and sustaining improvement in the clinical audit.

Planning Considerations

Before participating in any clinical audit, all those involved must have a clear understanding of what exactly clinical audit is. Establishing this understanding must precede any audit planning measures. The top three hits from a Google search for “what is the definition of clinical audit?” returned:

“Clinical audit is a way to find out if healthcare is being provided in line with standards and lets care providers and patients know where their service is doing well, and where there could be improvements”[4].

“The key component of clinical audit is that performance is reviewed (or audited) to ensure that what should be done is being done, and if not it provides a framework to enable improvements to be made”.

“Clinical Audit forms the system for improving standards of clinical practice. Aspects of patient care are evaluated against expected standards of care and where necessary, changes are made at an individual, team or service level. A re-audit can then be used to confirm that improvements have been effective”.

Furthermore, it must be made clear that clinical audit is not a form of clinical research. Clinical audit and research are both systematic methods of investigation. Clinical audit is about measuring current clinical practice against established good practice. Research is about generating hypotheses and verifying scientifically a predicted but not necessarily proven relationship between or among variables such as clinical processes and outcomes. Research studies in healthcare may also be designed to describe or observe the outcomes and costs of healthcare interventions such as medicines, equipment, procedures, settings of care or healthcare systems [7].

Principles of Clinical Audit Design

Before starting any audit project, it is a good idea to seek answers to the following questions:

• What are we going to audit?

• Why are we going to audit that topic?

• Who is affected by this audit?

• Whom are we going to audit?

• When are we going to audit?

• How are we going to audit?

Methodological considerations for designing and undertaking successful clinical audits include:

Choice of topic

This revolves around whether the audit objective is a priority for the organisation or for patients (e.g. recommended by lay members on audit committees), and whether there is good evidence, such as robust guidelines, available to inform standards. Examples of ‘triggers’ for clinical audit listed in [8] include:

Perception that an activity needs to be audited, because of bad or good performance.

An adverse event such as a Serious Incident or Never Event.

Patient outcome monitoring, e.g. complaints, patient surveys, etc.

National guidelines: by measuring local practice against national standards.

National audits.

Local guidelines.

Local mandate to measure performance of a certain activity against national or local standards.

Quality Impact Analysis might be a useful tool in choosing which topics to prioritise for auditing. Similar to risk assessment, Quality Impact Analysis can be carried out with a scoring technique, multiplying the impact on a certain area by the likelihood of that impact occurring. Areas for inclusion in the Quality Impact Analysis might include patient safety, clinical effectiveness, patient experience, prevention, productivity and innovation, and resource impact.
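To make the scoring concrete, the Python sketch below ranks candidate audit topics by summing impact multiplied by likelihood across the quality domains listed above. The 1-5 rating scales, the unweighted sum and the example topics are illustrative assumptions rather than a prescribed methodology.

```python
# Sketch of a Quality Impact Analysis score for prioritising audit topics.
# The 1-5 scales and the unweighted sum of domain scores are illustrative
# assumptions, not a prescribed NHS methodology.

from dataclasses import dataclass

DOMAINS = [
    "patient safety", "clinical effectiveness", "patient experience",
    "prevention", "productivity and innovation", "resource impact",
]

@dataclass
class DomainRating:
    impact: int       # assumed scale: 1 (negligible) to 5 (severe)
    likelihood: int   # assumed scale: 1 (rare) to 5 (almost certain)

    @property
    def score(self) -> int:
        # Core idea from the text: impact multiplied by likelihood.
        return self.impact * self.likelihood

def topic_priority(ratings: dict) -> int:
    """Total QIA score for one candidate topic (higher = higher priority)."""
    return sum(r.score for domain, r in ratings.items() if domain in DOMAINS)

# Hypothetical candidate topics, rated only on the domains they affect.
topics = {
    "VTE prophylaxis documentation": {
        "patient safety": DomainRating(5, 4),
        "clinical effectiveness": DomainRating(4, 3),
    },
    "Outpatient letter turnaround": {
        "patient experience": DomainRating(3, 4),
        "resource impact": DomainRating(2, 3),
    },
}

for name in sorted(topics, key=lambda t: -topic_priority(topics[t])):
    print(f"{name}: QIA score {topic_priority(topics[name])}")
```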

Criteria and standards

Good audits have clear objectives and are designed to be as specific as possible. Hence, it is necessary to be clear about what the audit criteria are and what the standard is. A helpful way to distinguish a criterion from a standard is through the definitions of the two entities given in [9]:

A criterion is “an item or variable that enables the achievement of a standard (broad objective of care) and the evaluation of whether it has been achieved or not.”

A standard is “an objective with guidance for its achievement given in the form of criteria sets that specify required resources, activities and predicted outcomes.”

To illustrate:

Criterion: Patients attending (Accident &) Emergency Departments in the NHS must be seen, treated, and admitted or discharged in under four hours.

Standard: 95%.

Hence, current practice is to be audited against the ‘standard’ of: 95% of patients attending (Accident &) Emergency Departments in the NHS should be seen, treated, and admitted or discharged in under four hours.
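As a concrete illustration of auditing against this standard, a minimal sketch is given below, assuming a simple list of attendance records with hypothetical field names; it computes the proportion of attendances meeting the four-hour criterion and compares it with the 95% target.

```python
# Minimal sketch: compliance with the illustrative four-hour A&E criterion
# measured against the 95% standard. Record structure and field names are
# hypothetical.

from datetime import datetime, timedelta

FOUR_HOURS = timedelta(hours=4)
STANDARD = 0.95  # 95% of attendances should meet the criterion

def compliance(attendances):
    """Proportion seen, treated and admitted or discharged in under 4 hours."""
    met = sum(
        1 for a in attendances
        if a["departure_time"] - a["arrival_time"] < FOUR_HOURS
    )
    return met / len(attendances)

# Two illustrative attendance records.
records = [
    {"arrival_time": datetime(2018, 3, 1, 9, 0),
     "departure_time": datetime(2018, 3, 1, 12, 30)},   # 3.5 h: criterion met
    {"arrival_time": datetime(2018, 3, 1, 10, 0),
     "departure_time": datetime(2018, 3, 1, 15, 0)},    # 5 h: criterion not met
]

rate = compliance(records)
print(f"Compliance {rate:.0%}; standard {'met' if rate >= STANDARD else 'not met'}")
```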

Choice of standard

Audit is about measuring (and improving) quality. What is measured is the process, structure or outcome of a service or activity. Clinical audits should be explicit about which features of the quality of process, structure or outcome are being measured. Quality features include [10]:

a. Acceptability (as an experience).

b. Accessibility.

c. Appropriateness.

d. Effectiveness.

e. Efficacy.

f. Efficiency.

g. Safety.

h. Timeliness.

Once the quality features to be audited have been agreed, an appropriate standard is chosen. Choosing a standard depends on how important the audit criteria are, and how realistic and practical that standard is in relation to the local environment of the activity being audited. Guidelines are the commonest sources of audit standards. It is important to systematically appraise any guideline before auditing against it. The AGREE II instrument is a widely used instrument for appraising guidelines [11]. The Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument recommends appraisal of guidelines by at least two (preferably four) appraisers using 23 key items organised within six domains, followed by two global ratings [12]. Those six domains are Scope and Purpose, Stakeholder Involvement, Rigour of Development, Clarity of Presentation, Applicability, and Editorial Independence.
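For readers unfamiliar with how AGREE II ratings are aggregated, the sketch below computes a scaled domain score in the way described in the AGREE II user's manual, where items are rated on a 7-point scale and the obtained score is rescaled between the minimum and maximum possible scores; the example domain ratings are hypothetical.

```python
# Sketch of an AGREE II scaled domain score, assuming the 7-point item scale
# (1 = strongly disagree ... 7 = strongly agree) described in the AGREE II
# user's manual. The example ratings are hypothetical.

def scaled_domain_score(ratings_by_appraiser):
    """ratings_by_appraiser: one list of item ratings (1-7) per appraiser."""
    n_appraisers = len(ratings_by_appraiser)
    n_items = len(ratings_by_appraiser[0])
    obtained = sum(sum(r) for r in ratings_by_appraiser)
    minimum = 1 * n_items * n_appraisers   # every item rated 1 by everyone
    maximum = 7 * n_items * n_appraisers   # every item rated 7 by everyone
    return (obtained - minimum) / (maximum - minimum) * 100

# Example: the 'Rigour of Development' domain rated by four appraisers.
appraisers = [
    [5, 6, 4, 5, 6, 5, 4, 5],
    [6, 6, 5, 5, 6, 6, 5, 5],
    [4, 5, 4, 4, 5, 4, 4, 4],
    [5, 5, 5, 6, 6, 5, 5, 5],
]
print(f"Scaled domain score: {scaled_domain_score(appraisers):.0f}%")
```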

Data collection

Sampling

The obvious aim is to have a representative sample of the target population. This can be achieved by probability or nonprobability sampling.

Probability sampling: each individual has a known (non-zero) probability of being included in the sample. The commonest methods of probability sampling are:

Simple random sampling, e.g., random selection of patient hospital numbers.

Systematic sampling, e.g. every 10th patient who was admitted during an agreed time frame.

Stratified random sampling: the population is divided into strata sharing the same characteristics, such as age or gender, and a random sample is then drawn from each stratum.
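A minimal sketch of these three probability sampling methods, applied to a hypothetical list of hospital numbers, is given below; the population size, sample sizes and stratification variable are arbitrary choices for the example.

```python
# Sketch of the three probability sampling methods above, applied to a
# hypothetical list of 500 hospital numbers. Sizes and the gender stratum
# are arbitrary example choices.

import random

population = [f"H{n:05d}" for n in range(1, 501)]            # hypothetical IDs
strata = {p: ("male" if int(p[1:]) % 2 else "female") for p in population}

# Simple random sampling, e.g. random selection of patient hospital numbers.
simple = random.sample(population, k=50)

# Systematic sampling, e.g. every 10th patient admitted in the time frame.
systematic = population[::10]

# Stratified random sampling: a random sample drawn from each stratum.
def stratified_sample(pop, stratum_of, per_stratum=25):
    sample = []
    for value in set(stratum_of.values()):
        members = [p for p in pop if stratum_of[p] == value]
        sample.extend(random.sample(members, k=min(per_stratum, len(members))))
    return sample

stratified = stratified_sample(population, strata)
print(len(simple), len(systematic), len(stratified))          # 50 50 50
```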

Non-probability sampling: it is not possible to know what probability any individual has of being included in the sample. Methods of non-probability sampling include:

Purposive sampling: also known as judgmental, selective, or subjective sampling. It is often used in qualitative design.

Convenience sampling, e.g. the first 50 patients attending the ED on a particular day.

Quota sampling: convenience sampling carried out in a stratified manner.

Sample size is dependent on:

The size of the target population. This is directly proportional to the size of the sample needed to achieve a truly representative sample for the audit.

How often the audit measures are likely to occur in the population. This is inversely proportional to the required sample size.

Confidence levels. This is directly proportional to the size of the sample. In research methodology the commonest confidence levels are 95% and 99%; however, these levels are often deemed too stringent for clinical audit purposes (Parahoo, 1997).

Range of accuracy (margin of error). The narrower the required range, the larger the sample. The commonest ranges of accuracy are 2.5% to 5%.

In summary, audit designers need to be reasonably confident that their sample is representative and can be generalised to the target population, and, once they have the audit findings, they should be able to state how certain they are that the true value lies within a specific interval. For example, 81% compliance with a standard, based on a 95% confidence level with a 5% accuracy range, translates into being 95% sure that the true value is between 76% and 86%.
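The arithmetic behind these considerations can be sketched as follows, using the standard normal-approximation formula for a proportion with a finite population correction. The population size, the 50% expected proportion (used when the frequency of the audit measure is unknown) and the illustrative sample of 246 cases are assumptions; the confidence-interval call roughly reproduces the 76% to 86% interval quoted above.

```python
# Sketch of the sample-size and confidence-interval arithmetic, using the
# normal approximation for a proportion with a finite population correction.
# The 50% expected proportion and the sample of 246 cases are assumptions.

import math

Z = {0.95: 1.96, 0.99: 2.576}   # z-values for the common confidence levels

def required_sample(population, confidence=0.95, accuracy=0.05, p=0.5):
    """Cases needed to estimate a proportion to within +/- `accuracy`."""
    z = Z[confidence]
    n0 = (z ** 2) * p * (1 - p) / accuracy ** 2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite correction

def confidence_interval(observed, n, confidence=0.95):
    """Normal-approximation interval around an observed compliance rate."""
    margin = Z[confidence] * math.sqrt(observed * (1 - observed) / n)
    return observed - margin, observed + margin

print(required_sample(population=2000))               # roughly 323 cases
low, high = confidence_interval(0.81, n=246)
print(f"81% compliance, 95% CI roughly {low:.0%} to {high:.0%}")   # ~76% to 86%
```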

Retrospective vs. prospective data collection

The most important advantages of retrospective data collection are convenience and the relative ease of collecting the data: it is more achievable and quicker to collect data from the minimum required number of cases or over the set period of time. The biggest disadvantage, which the audit design has to accommodate, is the possibility of gaps in the records of episodes of care which occurred in the past.

Prospective data collection, by contrast, provides a clear picture of current practice and allows for more complete data collection, with no gaps in the episodes of care. Inevitably, prospective data collection carries the risk of the Hawthorne Effect, the well-documented tendency of individuals to modify their behaviour in response to their awareness of being observed.

Clinical audit and ethical issues

Consent and confidentiality

Most clinical audits in the NHS do not seek patients’ direct consent. In fact, a study by McKinney et al. has shown that it is not feasible to obtain individual signed consent for the sharing of patient-identifiable data with an external (national) database [13]. Nonetheless, as with all aspects of healthcare, clinical audit must be conducted within an ethical framework which is respectful of patients’ rights and of laws and regulations. In the context of clinical audit, the following should be observed:

The Data Protection Act stipulates that (sensitive) personal data should be processed in accordance with the following principles:

Fairly and lawfully processed.

Obtained only for the specified purpose(s) for its processing.

Adequate, relevant and not excessive.

Accurate and kept up to date (where necessary).

Not kept for longer than the purpose(s) it was processed for.

Processed in line with the patients’ rights.

Technical and organisational measures are taken to ensure the secure processing of the data.

Data is not to be transferred or shared outside the European Economic Area (EEA), unless to a country with a similar level of protection.

In the context of clinical audit, those principles can be applied in the form of:

Not recording personal details on the data collection form. This includes not recording/using patient hospital numbers; sheets with a unique number/identifier could be used instead, with a key kept to link the unique identifier to the patient’s hospital number (a minimal sketch of this approach follows this list).

Data collection must be completely specific to the audit. The convenience of simultaneously extracting data about other aspects of care can be very tempting indeed. If that occurs, it would be in contravention to the third principle of the Data Protection Act.

The audit protocol must include details of when and how will the patient-identifiable data be destroyed, so that it is not kept longer than necessary.

Secure storage of data is an integral part of information governance in any healthcare organisation.
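A minimal sketch of the pseudonymisation approach described in the first point above follows, assuming hypothetical file names, hospital numbers and data fields; in practice the linkage key would be held on a secure, access-controlled system and destroyed at the point specified in the audit protocol.

```python
# Sketch of keeping patient-identifiable data off the audit data collection
# sheet: records carry only a unique study identifier, and the key linking it
# to the hospital number is written to a separate file. File names, fields
# and hospital numbers are hypothetical.

import csv
import uuid

def pseudonymise(hospital_numbers):
    """Return (study_ids, key) where key maps study ID -> hospital number."""
    key = {str(uuid.uuid4()): hn for hn in hospital_numbers}
    return list(key), key

study_ids, key = pseudonymise(["D1234567", "D7654321"])

# Audit data collection sheet: no personal details, study ID only.
with open("audit_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["study_id", "criterion_met"])
    for sid in study_ids:
        writer.writerow([sid, ""])      # clinical fields completed during audit

# Linkage key stored separately (in practice on a secure, access-controlled
# system) and destroyed at the point stated in the audit protocol.
with open("linkage_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["study_id", "hospital_number"])
    writer.writerows(key.items())
```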

The Caldicott Committee Report and the NHS Confidentiality Code of Practice consolidate the abovementioned principles of the Data Protection Act into a framework which is understandable to the patients and NHS staff [4].

Effectiveness

An audit is an investment of precious resources to improve the quality of care patients receive. Therefore, those in charge of audit design and implementation have a professional and ethical obligation towards their patients as well as their colleagues who are taking part in the audit, to ensure that the audit is as effective as possible in achieving its objectives.

Accountability

Where areas for improvement have been identified, a named individual should be the accountable officer for overseeing the successful implementation of any recommended action plans.

Such an accountable officer cannot succeed without being enabled to carry out their duties. They need to have a clear understanding of the service being audited, including any limitations, and of the audit design and its methodology. An appropriate forum also needs to be in place for the follow-up of those action plans, usually local and/or organisational audit and governance meetings. Action plans in response to the outcomes of national audits require careful consideration. Whilst the essence of audit is to measure activity against a standard, organisations can vary significantly in their ability to implement recommended changes, e.g. an Emergency Department which relies heavily on temporary staff can find it challenging to support its staff in changing their practice, compared with a department which has a complement of substantive staff who undertake regular training and appraisal within the same organisation [14].

Re-audit

Tips for effective re-auditing include:

Setting a deadline for re-auditing.

Re-auditing should take place as soon as possible after the recommended changes have been implemented, so that rapid feedback on their effect is received.

Re-audit using the same criteria exactly.

Using the same sample size and method makes for an accurate re-audit. However, this is also the time to reconsider the statistical significance of the findings and, consequently, the sample size.

Conclusion

Clinical audit is the most useful tool in ensuring the safety and improving the quality of healthcare. This can only be achieved through meticulous design and diligent preparation. An audit is complete only if re-audit takes place regularly. Clinical audit is an investment in the quality of the service. The organisation is thus obliged to invest in the audit process by taking into account the key design principles outlined in this paper when developing, designing and undertaking audit projects.

References

  1. Morrell C, Harvey G (1999) The Clinical Audit Handbook: Improving the quality of health care. London: Bailliere Tindall.
  2. Grimshaw J, Shirran L, Thomas R, Mowatt G, Fraser C, et al. (2001) Changing provider behavior: An overview of systematic reviews of interventions. Med Care 39: 112-145.
  3. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, et al. (2012) Audit and feedback: Effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 6.
  4. NHS Executive (1996) Promoting Clinical Effectiveness: A Framework for Action in and Through the NHS.
  5. Department of Health (2000) National service framework for coronary heart disease: Modern standards and service models. London: Department of Health.
  6. National Institute for Health & Clinical Excellence (2002) Principles of best practice in clinical audit. Oxford: Radcliffe Medical Press.
  7. Black N, Brazier J, Fitzpatrick R, Reeves B (1998) Health services research methods: A guide to best practice. London: BMJ Books.
  8. Keane M, Low CS, Dhamija B (2009) Clinical Audit for Doctors. Nottingham: Developmedica.
  9. Royal College of Nursing (1990) Quality Patient Care: The dynamic standard setting system. Harrow: Scutari Press.
  10. Institute of Medicine (2001) Crossing the quality chasm: A new health system for the 21st century. Washington DC: National Academy Press (US).
  11. Brouwers MC, Kerkvliet K, Spithoff K (2016) The AGREE reporting checklist: A tool to improve reporting of clinical practice guidelines. BMJ 352: i1152.
  12. Brouwers M, Kho M, Browman G (2010) AGREE II: Advancing guideline development, reporting and evaluation in health care. CMAJ 182: E839-E842.
  13. McKinney P, Jones S, Parslow R, Davey N, Darowski M, et al. (2005) A feasibility study of signed consent for the collection of patient identifiable information for a national paediatric clinical audit database. BMJ 330: 877-879.