Making decisions about the methods used to assess impact

"To be able to 'compete' with other subjects and gain credibility for the subject in its own right, interventions need to be structured in line with other academic subjects. Effective monitoring and evaluation ... is dependent on structure." (Anne)

Evaluation is often viewed as a one-off activity that happens at the end of a project. However, the best quality assessments of an intervention's success have self-evaluation built in from the outset, continuing for as long as the intervention or service is being delivered. If the intervention forms part of a rolling programme, the findings from each phase should lead to improvements in the next. It is important to continue using the resources or methods that have been found to work effectively and to adjust or refine only those areas that have not worked well.

There are different ways of evaluating interventions; the choice between them often depends on the time and resources available, the skills of the team and the number of dedicated staff. We have drawn a broad distinction between 'outcome/impact' and 'process' evaluation. The latter is the specific focus of Facet 6. Here, we focus on the former, which can involve:

- a monitoring process: maintaining an accurate record of who is involved in the intervention and the degree to which it is engaging participants
- finding out how participants feel the intervention has helped them
- perhaps carrying out some case studies to gain an in-depth understanding of the intervention's impact and how it resulted in change
- assessing changes that have occurred in participants' knowledge, thoughts, feelings or behaviours, using robust and reliable measurement tools.

A comprehensive evaluation will include all of the above. However, whichever approach is used, information needs to be collected and documented systematically. Templates or proformas can be developed to make it easy to capture everything that happens throughout the process. It is also important to recognise limitations and to focus first on establishing a good-quality system for reliable monitoring and reflection. Once those are working well, and more time is available, the evaluation process can be extended to include the gathering of data that provides more information relating to impact.
The sections that follow look at:

- Monitoring reach & engagement: identifying who your intervention has reached and engaged
- Capturing experiences: finding out how participants feel the intervention has helped
- Gathering robust evidence: what is meant by robust evidence of change and impact?
- Presenting findings: examples showing how to present data/evidence