Received date: 10 February 2006; Accepted date: 17 March 2006
Clinicians continually look for new ways to improve patient treatment, and those involved in health services delivery and quality development try to improve the performance of their organisations. However, our experiences are often not communicated to others and do not contribute to an accumulation of knowledge on how change is best brought about. In this article we argue that the design and testing of interventions intended to change professional performance should, from their conception, include theoretical consideration of the complexities of implementation, and we suggest a staged approach to the development of such interventions. In the preclinical stage an intervention model is developed, based on theoretical understanding and empirical research. In stage I, elements of the intervention are tested experimentally. In stage II the intervention is tried at full scale in selected units of the target group. Stage III is a randomised controlled trial, and stage IV evaluates delivery of the intervention under routine conditions.
context, complexity, evaluation studies, health services research, organizational innovation, programme evaluation, randomized controlled trial, research design
How health professional performance can be influenced is a key question for all those involved in the delivery and quality development of health services. Many interventions have been shown to improve patient health, yet fail to reach routine clinical practice. A decade ago Oxman et al found that there are ‘no magic bullets’ to change professional performance. A decade later, a 2003 review by Grol and Grimshaw concluded that ‘none of the approaches for transferring evidence to practice is superior to all changes in all situations’. A lack of understanding of how and why clinicians act as they do explains some of the failure to change clinical practice. A recent comprehensive review of the topic commissioned by the UK Department of Health concluded that a whole-systems approach was needed. Whole-systems approaches recognise the complexity and interdependence of different aspects of complex systems such as those found in healthcare delivery.
The Consolidated Standards of Reporting Trials (CONSORT) statement on the design and reporting of randomised controlled trials (RCTs) and the more recently published Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement, applying similar rigour to the design and reporting of non-randomised public health interventions, have been important steps in the evaluation of new interventions.[4,5] Describing the impact of an intervention under experimental conditions, however, gives no indication of its likely impact in complex ‘real-life’ situations, such as the primary and specialist care organisations charged with delivering health care to patients. The CONSORT and TREND statements, while important steps in developing the quality of the design and reporting of healthcare studies, do not recognise the complexity of the organisational settings within which studies will be implemented. An example of this can be seen in the report of the outcomes of a study designed and conducted by one of the authors. In this study, looking at the impact of within-practice referrals meetings, she describes how, despite designing and conducting her study based on pilot data and on the CONSORT statement, her study failed to show positive outcomes. The negative study outcomes may have arisen for a variety of reasons – one reason may, however, have been the lack of consideration of the different impacts of the intervention (the referrals meetings) in the different complex organisational settings in which it was applied (in this case, different general practices).
We argue for wider criteria to ensure that the development of clinical and public health interventions that require a change in the delivery of services incorporates, as far as possible, the complexities seen in everyday practice. The need for this is thoughtfully argued by the Medical Research Council (MRC). We would suggest going a step further than the MRC. In accordance with others, we suggest a staged approach to the development of interventions intended to create changes in professional performance (ChIPP). Further, we would suggest that funding bodies and scientific journals assess such trials against these criteria.
We have used the development of new pharmaceutical agents as a model to describe our vision of how the design of new clinical and public health interventions might be improved to demonstrate an effect in everyday clinical practice (Table 1).[9,10]
Prior to a clinical drug trial the active substance is isolated, examined and described in detail, based on an established theoretical knowledge gained from the body of basic sciences research. The potential drug is further tested in vitro and in animal models to develop a theoretical model of its mode of action, and to develop reasonable hypotheses on its specific effects on human beings.
Likewise, ChIPP interventions must have a theoretical foundation. At present, most studies intended to change professional performance supply no information about the theoretical considerations and assumptions underlying the intervention, and do not argue for the chosen strategy or refer to previous research. Thorough theoretical consideration at the earliest stage in the design of ChIPP interventions would prevent interventions with a low chance of successful implementation from being tested on patients or health professionals.
Hospital departments, individual practices and primary care groups are social organisations, and should be studied as such in relation to implementing changes in the performance of their members.[12,13] The existing body of knowledge about organisations should therefore serve as the foundation for ChIPP interventions, in the same way as clinical trials build on knowledge from the basic sciences.
The MRC document urges investigators to incorporate a theoretical phase, forcing them to consider the underlying assumptions being made about the postulated mechanisms and processes in the intervention being examined. Together with empirical evidence, these theoretical considerations serve as a platform for developing the analytical framework for the concrete project, which focuses attention on certain types of processes and describes the mechanisms that influence professional practice. The NHS commission recommends that this is followed by a more pragmatic approach in which the potential interaction between these variables is considered in relation to a specific context and setting.[3] Specific hypotheses about the mechanisms that influence professional practice should be developed, and important contextual factors that may influence the effect of an intervention should be defined. Figure 1 depicts how an intervention can support a facilitating mechanism or inhibit a barrier to change. A specific model of the change mechanism is a prerequisite for understanding how a change comes about (or does not come about) in relation to an intervention, which is needed to move us from statistical to clinical significance.[3,15]
Most ChIPP interventions are complex and involve several steps for a change process to take place. It is important at each step to explain the processes that will be affected by the intervention, the mechanisms that the intervention aims to influence, and the role of context.[5,16] The end product of this process is a refined intervention model based on the best available empirical and theoretical knowledge.
In phase I of clinical trials the substance is administered to healthy volunteers and information is obtained about its biological properties in the human organism. Information about effect, safety, pharmacokinetics and metabolism is used to plan subsequent phase II studies.
In the ChIPP trial the healthy volunteers are represented by individual professionals or selected organisations (depending on the target of the intervention) with a capacity to reflect on their own practice. The intensity of the intervention may be varied across different professionals or organisations to obtain a dose–response curve, and the composition of the intervention may also be changed to examine the dynamics of its different elements.
In phase II of drug development the efficacy of the drug is tested in selected patients from the target group. These studies supply information on efficacy, safety and side-effects, and suggest a likely clinical dose. The findings will also provide an estimate of the variability of clinical response and allow power calculation for phase III studies.
In stage II of the ChIPP trial the intervention is tested in several units of the target group, under ideal and controlled conditions. The units are selected to obtain maximum variation in the contextual factors that are expected to influence the impact of the intervention. The experience gained at this stage allows the researchers to further refine the theory and adjust the intervention strategy to produce a ‘standard’ intervention methodology.[16] Stage II studies can also give an estimate of the efficacy of the intervention in different contexts and provide input for power calculations. In addition, it will be possible to identify particular members of the target group that do not respond to the intervention. Participants (whether patients or organisations) that are not expected to benefit from the intervention should be identified, and these exclusion criteria applied in the subsequent stages of the trial. Failure to undertake this step probably explains some of the disappointing results from intervention studies.
In phase III of drug development, RCTs are conducted to evaluate the effect of the treatment under controlled conditions, using a research design that prevents bias and allows the findings to be extrapolated to a wider patient population. Stage III of the ChIPP trial follows the same methodology and has the same strengths and limitations as the RCT.
In the clinical trial the main focus is on patient-related endpoints, whereas in the ChIPP intervention trial the main focus is on professional performance. It is implicit that the benefit of the desired professional performance (if carried out as intended) has been documented before trials to change professional performance are even contemplated. Intermediate outcome measures may be relevant in clinical trials, but are even more important in ChIPP intervention trials. The causal pathway from intervention to change in professional performance, and important contextual factors, must be described, operationalised and measured in order to evaluate the extent to which the intervention is delivered as intended (control of Type III error), and to evaluate each step in the change process.[5,19]
Findings from RCTs conducted by highly motivated staff in well-resourced settings on selected patients cannot be assumed to translate into daily practice. Effectiveness studies of drugs, as of ChIPP interventions, evaluate the effect of the intervention under routine circumstances; this is where the impact of interventions in everyday practice can be ascertained. The design of stage IV needs to be adapted to the situation in which the intervention will be applied. The design may include a control group but no randomisation, the intervention may differ from the original, and the monitoring and evaluation of impact may also vary to reflect the setting in which the evaluation is taking place. Internal validity may thus be reduced, but generalisability will be high. Effectiveness studies are consequently more difficult to control, involve many stakeholders, are costly, are less likely to produce positive findings, and are more difficult to publish than RCTs. They are therefore also less frequent than RCTs, despite being important for decision makers in making an overall cost–benefit judgement.
It is difficult to disagree with Eccles et al when they state that ‘Randomised trials should only be considered when there is genuine uncertainty about the effectiveness of an intervention’. This holds true whether the intervention is a new drug or a training programme for practice nurses in the treatment of diabetic ulcers. The problem, however, is that it may be difficult to be certain about an effect if randomised trials have not been conducted. Each step in the development of a new drug documents its effect and prevents dangerous treatments from being implemented. As a by-product, the documentation process adds to our common pool of scientific knowledge.
If a remedy is considered safe and is only used for minor illnesses it may be reasonable to exempt it from a demanding and rigorous development process. ChIPP interventions of minor importance may likewise be exempted from rigorous scrutiny, but if a ChIPP intervention has major implications for health professionals and health organisations, there is no reason why it should not be developed with the same rigour as a drug before it is implemented. This is a demanding exercise, but it is arguable that if an intervention is not considered worth evaluating thoroughly, it is not worth implementing either. If this principle were adhered to, many disruptive and demoralising attempts to introduce changes in the health sector could be avoided.
If the suggested staged approach to the development of ChIPP interventions were followed, it would contribute to the development of methodology in the field of implementation research, and the necessary political and managerial decisions could be better founded.
The development of new drugs is driven by commercial interests, and there may also be commercial interests in developing effective ways of making health professionals change their practice. However, from a public point of view it is important to build the capacity to develop professional performance interventions within the public health sector. The health sector is undergoing major changes all over the world: reforms are instituted, guidelines and indicators are developed and promoted, and new ways of delivering health services are introduced. Many of these initiatives involve a change in the way health professionals operate. The necessary political and managerial decisions could be better informed if high-quality research on professional practice interventions were available.
• Targeted investment in change in professional performance (ChIPP) intervention research is needed and should be accompanied by improved methodology.
• Development of new interventions must be based on a theoretical framework, and focus on the mechanism for change and the context which influences the change process.
• Development of new ChIPP interventions should follow a rigorous and staged approach.
• High-quality ChIPP interventions research will develop our understanding of the change process, save scarce resources, prevent futile experiments and provide decision makers with a better knowledge base.
Both authors work as general practitioners and their reflections are drawn from their common experiences from quality development activities within primary healthcare.
Contributors and sources: FB wrote the initial draft and both authors contributed to the final version. Both authors are guarantors.