There are two main types of evaluation: formative and summative.
- Formative evaluation is carried out in the development phase of an intervention with the purpose of making improvements.
- Summative evaluation is carried out after an intervention has been finalised and initial changes made.
Both types can involve evaluation of an intervention's processes (its delivery) and of its outcomes (its effects).
These are the basic steps to any evaluation:
- Define the aims and objectives of the evaluation (what is the reason for the evaluation, and how are you going to carry it out?)
- Define the target population (are you going to collect data from intervention recipients, funders, intervention designers, deliverers?)
- Decide on an evaluation design (for example: pre and post? post only? quasi-experimental?) See Evaluation Design Topic Guide
- Select and design data collection methods (consider whether you want quantitative data, qualitative data, or both)
- Collect data (known as the fieldwork period)
- Analyse the information collected (allowing sufficient time)
- Write up and publish your results, even if negative, in an evaluation report (share your findings with as many people as possible so that your results can be used)
- Make any identified improvements to the intervention.
Evaluation is a cyclical process, so that the final outcomes feed into future activities.
There are three main reasons to evaluate:
Accountability
You can report back to funders on the intervention's effectiveness, and show how well the money was spent. Evaluation findings help budget holders to make evidence-informed spending decisions, and can support bids for future funding.
Knowledge gain
Evaluation helps you to understand more about the social problem you are addressing. How large an issue is it? It also helps you to understand more about the interventions that seek to remedy this problem. Why did your intervention work, or not work? Gaining this knowledge and sharing it can avoid re-inventing the wheel.
Learning & development
How well did the intervention go, and what could be changed next time to improve it? What examples of good practice can be identified? Was your theory of how the intervention would work accurate?
In summary, evaluation provides the opportunity to improve your interventions, and to make the best use of resources. It can also be enjoyable and aids your professional development!
Wherever possible evaluation should be built into the initial design and planning stages of an intervention.
Evaluation can be conducted before, during, and after an intervention's delivery.
Before an intervention is decided upon, a needs assessment evaluation is recommended. This looks at the potential effects and risks of a proposed intervention, and can help to identify the intervention's aims, objectives and anticipated results, as well as the evaluation data that needs to be collected. It can also estimate the level of resources the intervention is expected to require; compared against the expected benefits, this gives a prediction of cost-effectiveness. Some of the information required for a prospective evaluation may already be available, for example through previous evaluation reports and research studies.
During an intervention, monitoring data can be collected that will tell you what is, and is not, happening. For a formative evaluation you will need to collect data in the early stages of the intervention. Formative evaluations tend to focus on processes (was the intervention delivered as intended?) but they can also report on short-term outcomes.
After an intervention has been delivered (or, for an ongoing intervention, after it has bedded in), you can conduct a retrospective evaluation of short- and/or long-term outcomes.