Glossary


A  

Approved Driving Instructor (ADI)

A professional driving instructor who is on the Driving Standards Agency's register of Approved Driving Instructors. Only ADIs can charge a fee for driving lessons.

Aim

The aim of an intervention is a statement of what, and whom, you hope to change through the intervention. For example: To increase seat belt wearing amongst 17-25 year olds. An aim has a measurable outcome.

Attitudes

Attitudes are the beliefs and opinions of a person or group of people.

Awareness

Awareness is the level of knowledge gain, or learning, of a person or group of people on a particular issue.


B  

BAME

Black, Asian and Minority Ethnic Groups

Baseline Data/Information

Baseline data or information is a measure of the current situation. It is the existing level of data or information (for example, about behaviour, knowledge, skills, attitudes or accidents) before the intervention starts. This can be compared to the same data at the end of the intervention (or a period of time afterwards) in order to measure the change achieved by the intervention.

Before and After Study

A study which compares data collected before an intervention begins (baseline), with the same data collected after the intervention has been delivered.

Benefit

The people or groups who benefit from an intervention are those whose safety on the road is likely to improve as a result of the intervention. They may or may not be the same as the people or groups whom you wish to influence. For example, a child pedestrian training scheme aims both to influence and to benefit child pedestrians by improving their road crossing skills, whereas an intervention targeting HGV drivers to raise awareness of the need to check for cyclists when turning left aims to influence HGV drivers, but to benefit cyclists.

Bias

Bias is a distortion of the true picture. There are different types of bias such as:

  • Selection bias

    Selection bias is where results of an intervention or an evaluation may be distorted due to the way in which participants were recruited.

  • Researcher bias

    Researcher bias is where results of an evaluation may be distorted due to the evaluator having a vested interest in the results, or simply an expectation about what the results will be.

  • Social Response bias

    Social response bias is where the results of an evaluation may be distorted due to the respondents wanting to give what they consider to be socially desirable responses, or the responses that they feel the researcher wants to hear.

Business Drivers

Anyone who undertakes work that involves driving or riding on behalf of a business in a vehicle owned, leased, or hired by the driver or their employer.


C  

Campaign

A campaign is a series of actions or interventions on a particular issue, usually designed to change awareness, knowledge, skills, attitudes or behaviour.

Case study

A case study is a type of evaluation design which looks in great detail at a single intervention or intervention recipient, or at a small sample of them. A case study can use a number of different methods and/or the points of view of different stakeholders. It is often used to evaluate multi-component interventions in settings such as schools or local authorities. A case study can track an intervention and/or participants over time, or it can simply describe the situation at a single point in time.

Children

In road casualty statistics, children are defined as aged 0-15 years inclusive. In NHS statistics, children are defined as under 15 years old.

Cluster Randomised Controlled Trial (Cluster RCT)

In a cluster RCT whole groups of participants, rather than individual participants, are randomly assigned to either an intervention or control group. It is suitable for ETP interventions delivered to institutions such as schools, rather than to individuals.

Comparison Group

A comparison group is a group of participants who do not receive the intervention being evaluated. A comparison group differs from a control group in that individuals or groups are not randomly allocated. Allocation to the comparison (non-intervention) group could be done on an arbitrary basis, or through more careful selection, such as making sure an equal number of males and females go into each group (intervention and non-intervention).

Control Group

A control group consists of those participants who have been randomly allocated to the non-intervention group. Because they have been randomly allocated, they can be assumed to be equivalent to the individuals in the intervention group. Therefore any differences between the two groups seen after the intervention can be attributed to the intervention, and not simply to individual differences between those who received the intervention and those who did not.

Cost-effectiveness

Cost-effectiveness is an assessment of the financial value of the benefits that result from an intervention, against the costs of conducting the intervention.

Costs

The cost of an intervention includes any money spent on developing, delivering, and evaluating it. This includes obvious costs such as advertising, designing and printing promotional or training materials, fees for external consultants, and travel and accommodation. It also includes the proportion of staff costs (salaries, national insurance, pension etc.) for the time staff spend working on the intervention, and the proportion of the organisation's overheads, such as telephone, IT, and postage costs, that are spent on the intervention.

Cross-sectional survey

A cross-sectional survey involves collecting data from a particular group of people (a sample of a chosen population) at one specific point in time.


D  

Data

Data is information that is collected and analysed. It can be in multiple forms such as numbers, facts, images, or opinions.

Demographics

Demographics are the characteristics of a population such as age, sex, marital status, family size, geographic location, ethnicity, education levels, and income levels.

Dissemination

Dissemination is the distribution and communication of, for example, research findings, to target groups.

Diversionary Scheme

A scheme that offers the opportunity for an individual to attend and satisfactorily complete an educational and/or training session as an alternative to prosecution, within set criteria for certain motoring offences, such as exceeding the speed limit or careless driving.

Draw and Write

Participants (often, but not exclusively, children) are asked to draw and write in response to open-ended questions which focus on a particular aspect of health or safety. The resulting data can be analysed both qualitatively and quantitatively.


E  

Education

See entry for 'ETP'

Effectiveness

The extent to which an intervention achieves its aims and objectives.

Efficacy

The extent to which an intervention works under optimum conditions. For example, an intervention can be effective, but it might only deliver its best results if delivered in a prescribed way.

Efficiency

The ratio of effective outcomes to total inputs. The maximum achievement for the least input.

Engagement

The successful involvement of the intervention participants and/or target group in the intervention.

Ethical Issues

All research, including evaluation, must be conducted ethically. This means that attention must be given to protecting participants from harm; to balancing the benefits of taking part against the risks; and to respecting individuals' ability to freely make their own decisions.

ETP

Education, Training and Publicity

  • Education

    Education is a broad-based activity, which usually takes place in schools and other educational establishments. Road safety education deals with ideas and concepts such as hazard perception and managing personal risk in relation to the road environment, and the development of coping strategies. It also includes developing an individual's understanding of their responsibilities to other road users. It is a gradual process, which may take place over a number of years.

  • Training

    Training is mostly concerned with creating or developing practical skills, and delivery is generally short term in duration.

  • Publicity

    Publicity is designed to provide information, raise awareness, give advice on appropriate behaviour, and thereby change attitudes towards a particular issue. It can also reinforce positive attitudes and behaviour learned through education and training.

Evaluation

Evaluation is a systematic way of making a judgement about the value, merit, or worth of your intervention. Evaluation can also help you to improve your intervention.

There are different types of evaluation:

  • Ex-ante Evaluation

    Ex-ante evaluation is a form of needs assessment or feasibility study which considers the potential effects, and risks, of a proposed intervention. It is carried out during the planning stage of an intervention, before it is delivered, and is the first step of the ideal evaluation cycle.

  • Ex-post Evaluation

    An ex-post evaluation is an evaluation that is conducted after the intervention has begun to be delivered.

  • Formative Evaluation

    A formative evaluation is conducted during the development of an intervention so that results can be used, in real-time, to improve the design and/or delivery of the intervention.

  • Outcome Evaluation

    An outcome evaluation looks only at outcomes – the changes caused by the intervention.

  • Process Evaluation

    A process evaluation examines the implementation of an intervention, and monitors how it is being delivered.

  • Summative Evaluation

    A summative evaluation examines the extent to which an intervention met its aims and objectives. Summative evaluation can include both process and outcome data, but focuses on the longer-term outcomes. It is usually conducted by an independent evaluator who does not provide feedback during the delivery of the intervention, but who does provide a formal evaluation report at the end of the evaluation study.

Evaluation Design

Evaluation design is the type and structure of an evaluation. It does not refer to the data collection methods used.

Evaluation Design Recommendations

In E-valu-it, Evaluation Design Recommendations are there to help you specifically with the design of your evaluation. Often there will be more than one recommendation and you will need to decide which design to use. The recommendations will include strengths and weaknesses of the different designs, and examples to help you to decide. Strengths and weaknesses are indicated by + or - after the design name.

Evaluation Methods

This term describes the kind of data collection techniques you can use, for example a telephone interview or questionnaire. Most methods can be used with any evaluation design but some are more suitable for some designs than others. Some methods are most suitable for particular groups of participants – children for example.

Evaluation Report

An evaluation report describes the intervention and its aims and objectives; the methods used to evaluate it; and the results found. It often includes a discussion of the results along with recommendations for further work. An evaluation report has a standard structure of: Introduction, Methods, Results, Discussion, Recommendations, Conclusion, References, and Appendix. The exact structure of the report will, though, depend on who it is being written for.

Evaluator

  • Internal Evaluator

    An internal evaluator is a member of staff who evaluates interventions delivered by the same organisation that they are employed by. To reduce bias it is better if the internal evaluator is not directly involved in delivering the intervention.

  • External Evaluator

    An external evaluator is an independent consultant commissioned to undertake the evaluation. They are independent of the organisation and can often provide a new perspective.

Evidence-informed Practice

Evidence-informed practice refers to the systematic use of research and evaluation data to inform decision making on the road safety issues to be tackled, and on the most effective ways to address those issues. This takes place before the intervention has been chosen, designed and delivered. Evidence-informed practice accepts that other types of (non-research) data can also influence decision making, such as: Political Imperatives, Public Opinion, and Professional Wisdom.

Experimental Study

An experimental study is one in which you attempt to manage some of the external factors which could influence the outcome of your intervention, so that you can see the impact of your intervention alone. See 'pre- and post-test design with control group' as an example of an experimental design.

Exposure

People who are exposed to an intervention are those who see the intervention's materials or messages in any medium (leaflets, posters, websites, etc.), attend workshops or training courses, or hear about the intervention or its messages on the radio or from another person. Not everyone who is exposed to an intervention will necessarily engage with it, nor be influenced by it.


F  

Facilitator

A facilitator is someone who guides a discussion or activity and helps others to take part and to understand. The term 'facilitator' is commonly used to refer to someone who runs a focus group interview.

False Recognition

Also known as false recall: where people say they remember seeing or hearing something that they have not actually seen or heard.

Focus Group

A focus group is a way of conducting an interview with a small group of people. The purpose of a focus group can be to discover the range of opinions on a particular topic, or through a series of focus groups, to reach a consensus view. They can also be used to develop further data collection methods, such as a questionnaire. The focus group participants take part in a discussion led by a facilitator who uses a topic guide to ensure that all the main questions are covered. Focus groups usually meet face to face, but online focus groups can also be conducted.

Formative evaluation

See entry for 'Evaluation'


G  

Gatekeeper

A gatekeeper is someone who is able to help you gain access to a particular group of people who you wish to reach. For example, youth workers may help you to get in touch with young people. When working in schools, the PSHE officer may be a useful gatekeeper.

Goals

Goals are the overall reason for conducting the intervention, over and above its specific aims and objectives. For road safety ETP interventions, the goals would usually include reducing road casualties and contributing to the national and local road safety strategies, but may also include issues such as increasing cycling or walking. A goal is another term for an 'overall aim'.


H  

Heavy Goods Vehicle (HGV)

Goods vehicles over 3.5 tonnes maximum permissible gross vehicle weight (gvw).


I  

Indicator

An indicator is data or information that can be measured to indicate whether the intervention is achieving its aims and objectives. There are different types of indicators:

  • Intermediate Indicator

    An intermediate indicator is a measure you might take to suggest that the intervention is on track to deliver the long-term outcome you expect to observe. For example, you might collect information a short time after an intervention to see whether your theory of change is working as expected (e.g. awareness of a penalty for parking on zig-zag lines) while waiting to collect data about your long-term outcome (e.g. reduced parking on zig-zag lines six months after the intervention). This can also be thought of as a short-term outcome.

  • Long-term Outcome

    A long-term outcome is the intended change caused by your intervention; long-term changes are seen a while after the intervention was delivered. The long-term outcome of your intervention should reflect achievement of your aims and objectives. For example: if your aim was to reduce the number of parents stopping on zig-zag lines outside primary schools, then a long-term outcome would be a reduction in the number of parents stopping on zig-zag lines outside primary schools, compared to before the intervention.

  • Monitoring Indicator

    A monitoring indicator is a measure of the progress of your intervention's activities; it relates to inputs and outputs. It involves the collection of process data which will alert you if your intervention is not going to plan. For example, it tells you whether all leaflets and promotional materials have been delivered and distributed as intended.

In-depth Interview

See entry for 'Interviews'

Influence

People who are influenced by an intervention are those who have changed their knowledge, skills, attitudes, or behaviour as a result of being exposed to, and engaging with, the intervention.

Inputs

Inputs are the resources needed to conduct your intervention. For example: staff time, materials, funding.

Intended Outcomes

See entry for 'Outcomes'

Intervention

An intervention is the action or series of actions that is implemented in order to try to achieve the aims and objectives. This is also known as a 'project'.

  • Multiple Component Interventions

    Multi-component interventions consist of more than one element or component. For example, there may be a publicity campaign linked to the introduction of a new penalty, or a poster campaign to reinforce school-based interventions on the importance of having your cycle helmet properly fitted. When evaluating multi-component interventions you may want to know what each element, if introduced separately, contributes to the overall outcome, or you may want to know the outcome of all components combined.

Intervention Planning

Intervention planning is the process by which you plan an approach to road safety. To plan an intervention you should first have reliable evidence of the need (needs assessments) which should relate to the aims and objectives of the intervention.

You will also need an understanding of:

  • the scale of the problem in your locality
  • the resources available, including budget and staffing
  • the theory of change or logic which relates your intervention aims and objectives to the outcome you expect to observe.

Intervention Planning Recommendation

Intervention Planning Recommendations are designed to help you to clarify your aims, objectives, target group, intervention design and approach. The most important step in planning the evaluation is planning the intervention itself.

Interviews

Interviews are ways of gaining information about the views of intervention participants, by talking to them. They are a method of systematic data collection and can be conducted face to face or via telephone. The interviewer follows a topic guide which can range from highly structured to very loosely structured.

  • In-depth Interview

    An in-depth interview allows the interviewer to increase their understanding of the issues involved by probing each interviewee about his or her answers. A loosely structured topic guide is used to steer the discussion and ensure the major interests are covered in each interview. The loose structure of the topic guide (also known as an interview guide) encourages the interviewee to raise issues that may not have occurred to the interviewer.

  • Semi-structured Interview

    A semi-structured interview consists of pre-determined questions but the interviewer does not have to ask the questions in the same rigid way for all participants. The interviewer is also free to expand on questions, and ask additional questions, in response to the answers given.

  • Structured Interview

    A structured interview is the same as a questionnaire in that there is a fixed set of questions that are asked of all respondents. It differs from a questionnaire in that the questions are read out by an interviewer, rather than being given to respondents to self-complete.


J  

No entries


K  

Knowledge

Knowledge is awareness and recall of information. This is not the same as understanding!


L  

Logic Model

A logic model describes the theory behind an intervention; it explains why, how, and when the intervention will achieve the expected results. Understanding the theory of how an intervention is expected to work helps the evaluator to design and plan their evaluation activities. A logic model has three core components: Inputs, Outputs, and Outcomes.

Longitudinal Survey

A longitudinal survey involves collecting data from the same group of people more than once over a period of time.


M  

Matched Pairs

Participants are matched up in pairs on certain characteristics. One of each pair receives the intervention (intervention group) whilst the other does not (non-intervention group).

In experimental evaluation designs it can be useful to match individuals (e.g. young drivers) or institutions (e.g. driving schools) on a number of features which might be expected to affect the outcome, before offering the intervention to only one of each pair. This approach is useful when you are not able to randomise participants to the intervention or non-intervention group. It can help you to measure and control for confounding factors such as age, gender, location, and experience, and other variables relevant to your study.
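
As a purely illustrative sketch (the participants, their characteristics, and the matching features below are all hypothetical), matching participants into pairs and then allocating one of each pair to the intervention group might look something like this in Python:

    import random

    # Hypothetical participants with the characteristics we want to match on.
    participants = [
        {"id": 1, "age": 18, "sex": "M"}, {"id": 2, "age": 18, "sex": "M"},
        {"id": 3, "age": 22, "sex": "F"}, {"id": 4, "age": 22, "sex": "F"},
        {"id": 5, "age": 45, "sex": "M"}, {"id": 6, "age": 45, "sex": "M"},
    ]

    # Group participants who share the matching characteristics (age and sex).
    strata = {}
    for p in participants:
        strata.setdefault((p["age"], p["sex"]), []).append(p)

    intervention, non_intervention = [], []
    for group in strata.values():
        # Take matched participants two at a time and randomly send one of each
        # pair to the intervention group, the other to the non-intervention group.
        for a, b in zip(group[0::2], group[1::2]):
            first, second = random.sample([a, b], 2)
            intervention.append(first)
            non_intervention.append(second)

    print("Intervention group:    ", [p["id"] for p in intervention])
    print("Non-intervention group:", [p["id"] for p in non_intervention])

The matching characteristics used here are only examples; in practice you would match on whichever variables you expect to affect your outcome.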

Milestones

Milestones are events that denote significant achievement. They are pre-determined check-points to ensure that delivery of the intervention is on track.

Monitoring

Monitoring is the routine recording of data to help understand how your intervention is doing, and what is happening to the people taking part in your intervention. Both outputs and outcomes can be monitored, and good monitoring data can help an evaluation.


N  

Needs Assessment

A systematic process that establishes priorities for action by providing an accurate understanding of existing and future challenges, and/or by determining the size of the gap between current and desired conditions. See also the entry for Ex-ante Evaluation, under 'Evaluation'.

Non-intervention Group

A non-intervention group is the umbrella term for a comparison or a control group. It refers to the group of participants who do not receive the intervention, whilst another group (the intervention group) does. A non-intervention group is used in experimental or quasi-experimental studies.


O  

Objective

An objective is a specific outcome that the intervention is intended to achieve. An intervention may have several objectives, all of which should be SMART – Specific, Measurable, Achievable/Agreed, Realistic/Relevant, and Time-bound.

Objectives are directly linked to the intervention's aims, but are more specific. For example, if an aim was 'To raise awareness among 15-18 year olds of the influence of passengers on driving style and speed', an objective might be: 'To achieve a 20% increase in pupils' knowledge of the impact of passengers on drivers' speed and style'.

  • Implementation Objectives

    Implementation objectives are objectives relating to your outputs (the work that you need to do). For example: 'To develop a DVD by March 2011'.

Observation

Observation involves watching participants taking part in the intervention's activities, or watching their behaviour in public places in real life situations. This can be done using either a video camera or a human observer.

Covert observation means that the participants have no knowledge of the fact that they are being observed. This type of observation is often considered unethical and must only be used where there is no genuine alternative method, and when the research is likely to result in a great benefit to the population.

Overt observation means that the participants are informed of the fact that they are being observed.

Open-ended Question

Open-ended questions cannot be answered with a 'yes' or 'no', and so encourage a fuller response. Generally they ask: 'What, How, or Why?' They allow the participant to respond in their own words, instead of selecting a given response option.

Outcome Design

An evaluation design which measures only outcomes, not processes. An outcome design is conducted after an intervention has been delivered.

Outcomes

Outcomes are the changes that result from an intervention. Outcomes could be changes in attitudes, or knowledge, or behaviour, for example. Short-term outcomes are the immediate changes that happen in the time period straight after an intervention: for example, increased knowledge. Long-term outcomes may not be achieved straight away but can still be the result of an intervention; they are a logical consequence of the short-term outcomes – changed behaviour as a result of increased knowledge for instance.

  • Intended Outcomes

    Intended outcomes are the changes that the intervention was designed to achieve, for example: increased seat belt wearing following a seat belt education campaign.

  • Unintended Outcomes

    Unintended outcomes are the changes that result from an intervention but that were not planned for or expected. Unintended outcomes can be positive (desirable) or negative (undesirable). Even if they were not expected or desired, they should still be recorded and reported on.

Outputs

Outputs are the actual, physical products of your intervention: the activities done or the services delivered.


P  

Participants

Participants are the people involved in the intervention and/or evaluation. This can include the intervention designers, managers, and deliverers, as well as the evaluators. However it is normally taken to refer to recipients of the intervention, or to the people responding to evaluation questions.

Pilot Study

A pilot study may be used where the intervention is innovative, or where it is new to your locality even though it may have been well evaluated elsewhere. It can be used to test the process of delivering the intervention, or to check that the expected outcomes can be achieved, before rolling the intervention out more widely. Pilot studies can help to reduce the risk of expensive but ineffective interventions being carried out on a large scale.

Plagiarism

Plagiarism is the use of, or close imitation of, someone else's words, ideas, or expressions and representing them as your own work, without proper attribution of the source. Plagiarism is considered to be immoral as well as misleading.

[This definition is based on the definition in Wikipedia, November 2010, http://en.wikipedia.org/wiki/Plagiarism].

Pre- and Post-test

In a pre- and post- test design, the participants are measured both before and after the intervention is delivered.

Programme

A programme is a co-ordinated series of individual interventions. A programme may achieve greater benefits than the sum of the individual interventions. A programme does not have the same discrete start and end points as its individual interventions.

Project

An individual piece of work or activity, with a specific focus, that runs for a set period of time. This is also known as an 'Intervention'.

Proxy Measure

A proxy measure is used when it is too difficult to measure exactly what you want to measure. It is a measure that is closely related to what you want to measure, and that is possible to collect. For instance you may want to know about children's safe road crossing behaviour outside of school but all you can really measure might be the number of times they report using a pedestrian crossing, or avoid crossing behind parked cars.

Post-test only

In a post-test only design, only one measurement is taken of the participants and this is taken after the intervention has been delivered.

Post-then-Pre

In a post-then-pre design, participants are asked, only after an intervention, to rate their behaviour, or their level of knowledge or understanding etc. At the same time, they are also asked to give a retrospective rating of what they think their behaviour/knowledge/understanding etc. was for the period before they received the intervention.

PSHE

PSHE stands for: Personal, Social and Health Education. This is the curriculum area where road safety education is often carried out. In English schools this subject may also be known as PSHCE (where the C stands for Citizenship) or PSD (Personal and Social Development).

Publicity

See entry for 'ETP'


Q  

Qualitative

Qualitative data can be very varied in nature. It consists of any data which is not numerical or quantified but which describes the attributes of something. Interviews, photographs, and children's drawings all generate qualitative data.

Quantitative

Quantitative data is numerical data that can be statistically analysed.

Quasi-Experimental Design

A quasi-experimental design has some of the characteristics of a true experiment, such as allocating participants to a group receiving the intervention or to a group not receiving the intervention. A quasi-experiment though lacks randomisation in the way that participants are allocated to their groups. This lack of randomisation is the difference between a quasi- and a true experiment.

Questionnaire Survey

A questionnaire is a type of survey. A survey is a method for collecting information about a group of participants from a population. A questionnaire survey consists of a series of questions given to participants to self-complete. Questions can be open or closed but most are closed with set response options.


R  

Randomised Controlled Trial

A randomised controlled trial is an experimental design. Individual participants are randomly allocated to their groups (intervention or control) so that both groups are representative of the target population. The traditional experimental design is a pre- and post-test with a control group, although it can also be post-test only with a control group.
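
As a minimal, illustrative sketch (the participant IDs are hypothetical), random allocation of individuals to intervention and control groups could be done like this in Python:

    import random

    # Twenty hypothetical participant IDs.
    participant_ids = list(range(1, 21))

    random.shuffle(participant_ids)                  # put the IDs in a random order
    midpoint = len(participant_ids) // 2
    intervention_group = participant_ids[:midpoint]  # half receive the intervention
    control_group = participant_ids[midpoint:]       # the other half act as the control

    print("Intervention:", sorted(intervention_group))
    print("Control:     ", sorted(control_group))

Because every participant has an equal chance of ending up in either group, the two groups can be assumed to be equivalent before the intervention is delivered.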

Reality Check

Reality Checks are used by E-valu-it to help you think about your answers to particular questions and decide if you need more information before going ahead with either your intervention or your evaluation.

Recipients

Recipients are the people who receive your intervention, for example: the people who turn up to watch a theatre in education presentation.

Recommendations

Based on your answers to the Toolkit questions, E-valu-it makes suggestions to help you plan your intervention, and also to help you evaluate it.

Reliability

An evaluation is reliable if somebody else could conduct it again, under the same type of conditions, and find the same results. If the results can be replicated by others, then the findings are strengthened.

Response Bias

See 'Social Response Bias' entered under 'Bias'.

Response Rates

The proportion of people invited to take part in, for example, a questionnaire survey, who actually respond. Response rates vary depending upon many factors, including the type of survey, e.g. a telephone or a mail survey. As a general guide you should be looking for a response rate above 30%.
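
For example (the figures below are purely illustrative), the response rate is simply the number of completed responses divided by the number of people invited:

    invited = 400    # questionnaires sent out (illustrative figure)
    returned = 148   # completed questionnaires received (illustrative figure)

    response_rate = returned / invited * 100
    print(f"Response rate: {response_rate:.0f}%")  # 37%, above the 30% general guide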

Risk

Risk is the product of probability of harm and the severity of the outcome.

Risk = probability x severity

Most members of the general public equate risk with danger (a very high probability or a severe outcome), and this may be why we have become a risk-averse culture.
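
As a simple, illustrative calculation (the probabilities and the 1-10 severity scores below are made up for the example), the formula lets you compare situations such as a frequent but minor hazard and a rare but severe one:

    # Risk = probability of harm x severity of the outcome.
    # Severity is scored here on an arbitrary 1-10 scale purely for illustration.
    frequent_minor = 0.20 * 2   # common event with a minor outcome  -> risk score 0.40
    rare_severe    = 0.01 * 9   # rare event with a severe outcome   -> risk score 0.09

    print(f"Frequent but minor: {frequent_minor:.2f}")
    print(f"Rare but severe:    {rare_severe:.2f}")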


S  

Sample

A sample is the group of people whose views you collect for your evaluation study. If it is not possible to survey everybody in the population of interest, you will need to select a sample. Results from the people in your sample give you an estimate of what the results would have been had the whole population been surveyed. Samples can be randomly selected, or non-randomly selected.

  • Non-random Sample

    The people in the sample have not been randomly selected. Non-random samples are mainly used in qualitative research, or in quantitative research where statistical significance tests are not required.

  • Random Sample

    The people in the sample are selected at random. This means that every member of the population you are interested in has an equal chance of being selected, and results can therefore be generalised. To use this technique you will need a complete list of everyone in the population.

  • Stratified Sample

    A population of interest often contains distinct sub-groups, for example: male and female; children, young people, and adults; or different ethnicities. The views of these sub-groups may differ from each other, so it is useful to stratify your sample so that each sub-group is proportionately represented. For example: if 30% of a population are female and 70% male, then in a stratified sample 30% of the participants would be female, and 70% male (see the illustrative sketch below).
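
A minimal sketch of drawing a proportionate stratified sample (the population figures are hypothetical) might look like this in Python:

    import random

    # Hypothetical population of 1,000 people: 30% female, 70% male.
    population = [{"id": i, "sex": "F" if i < 300 else "M"} for i in range(1000)]
    sample_size = 100

    sample = []
    for sex in ("F", "M"):
        stratum = [p for p in population if p["sex"] == sex]
        # Each stratum contributes in proportion to its share of the population.
        n = round(sample_size * len(stratum) / len(population))
        sample.extend(random.sample(stratum, n))

    print(len(sample))                           # 100 participants in total
    print(sum(p["sex"] == "F" for p in sample))  # 30 of them female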

Saturation

Saturation refers to the repetition of similar ideas or themes by different groups of participants. Qualitative methods such as focus groups offer open-ended opportunities for participants to give their views. This means that you do not have a clear idea of the kinds of answers the participants might give. During the first few groups you would expect to hear a wide range of views or opinions, but as you collect more data, familiar themes or topics will arise. When no new themes or topics arise, the method has reached saturation and you can assume that another similar group would add no more to your understanding of the issue.

SMART

SMART describes objectives which are:

  • Specific – the objective clearly identifies who will be affected by what is done, as well as how they will be affected
  • Measurable – there are ways of measuring the achievement of the objective
  • Achievable or Agreed – the objective can be achieved and has been mutually agreed
  • Realistic or Relevant – the objective is not just achievable but also realistic given the available resources. The objective also directly relates to the intervention's aims and intended outcomes
  • Time-bound – the objective can be achieved within a defined time-frame

Stakeholders

Stakeholders are individuals, groups, or organisations who have a vested interest in an intervention or programme. For example: delivery staff, designers, managers, funders, intended users, Members of Parliament, and local Councillors, are all stakeholders.

Summative Evaluation

See entry for 'Evaluation'


T  

Target Group

The target group is the particular population that you intend to receive the intervention.

Theory of Change

A theory of change is your/your team's understanding of exactly how your intervention will bring about the desired change. See also the entry for 'Logic Model'.

Time Series Data

Data collected at regular intervals over a period of time.

Training

See entry for 'ETP'


U  

No entries


V  

Validity

There are different forms of validity but it is generally taken to mean the accuracy of your measurements, findings, or claims.

  • Internal Validity

    The degree to which you can claim that it was your intervention which caused the observed changes, e.g. improved knowledge or attitudes, as opposed to other factors such as: individual personality of participants, or natural changes over time.

  • External Validity

    External validity means that your findings can be generalised to the whole population. This is only possible if a randomly selected sample has been used for the evaluation study.

Vulnerable Groups

People (adults and children) are described as vulnerable if there are grounds to doubt their ability to make a free (free from influence) and informed decision.

W  

No entries


X  

No entries


Y  

Young People

In the Children and Young People's Act 2008, a 'young person' is defined as being over the age of 18 years, but under the age of 25 years.


Z  

No entries