Which indicators and
methods to monitor and
evaluate at the national
level
Technical Consultation – International
Standards on Prevention, January 23-25,
Vienna
Dr. Ken-Garfield Douglas
Drug Abuse Epidemiologist
Presentation will…
Provide a brief overview of evaluation and its methodologies
Present basic steps and questions to keep in the back of our minds as we prepare to monitor or evaluate
Present factors and indicators critical to successful evaluation
Present a case study of a not-so-typical prevention program
Why evaluate a program?
(the usual first question we ask ourselves)
To determine the effectiveness of programs for participants --- MAKING A JUDGMENT;
To document that program objectives have been met;
To provide information about service delivery that will be useful to program staff and other audiences; and
To enable program staff to make changes that improve program effectiveness.
(Evaluations help to foster accountability, determine whether programs “make a difference,” and give staff the information they need to improve service delivery; they also help expand practitioners’ and policymakers’ understanding of the effectiveness of programs.)
Where do we start?
Use of the LOGIC MODEL (as a best practice)
The evaluator looks to the logic model developed by the stakeholders to address the identified problem
◦ The LM depicts the rationale underlying the existence of the problem
◦ Depicts how the various elements are expected to interact
◦ The resources needed
◦ The goods and services they produce
◦ How these generate the desired results (short, medium and long-term)
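As an illustration only, a logic model can be captured as a simple data structure. The Python sketch below is a minimal, hypothetical rendering; the problem statement, resources, activities, outputs, and outcome horizons are invented examples, not part of any standard template.

from dataclasses import dataclass

# A minimal sketch of a logic model as a data structure; all entries are
# hypothetical illustrations.
@dataclass
class LogicModel:
    problem: str      # the problem the stakeholders identified
    resources: list   # inputs needed
    activities: list  # what the program does with those inputs
    outputs: list     # goods and services produced
    outcomes: dict    # desired results by time horizon

lm = LogicModel(
    problem="High rates of early alcohol use among secondary students",
    resources=["trained facilitators", "curriculum materials", "funding"],
    activities=["10-session refusal-skills course", "parent evenings"],
    outputs=["sessions delivered", "students reached"],
    outcomes={
        "short-term": "improved refusal skills and knowledge",
        "medium-term": "reduced intent to use",
        "long-term": "lower prevalence of use",
    },
)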
How do we continue?
Pinpoint the services needed—for example, find out what knowledge, skills, attitudes, or problem behaviors a drug or alcohol prevention program would be addressing;
Establish program objectives and decide on the particular evidence (such as the specific knowledge, attitudes, or behavior) that will demonstrate that the objectives have been met.
(The key to a successful evaluation is a set of clear, measurable, and realistic program objectives. If objectives are unrealistically optimistic or are not measurable, the program may not be able to demonstrate that it has been successful even if it has done a good job. In addition: did we implement actions as intended, and if so, did they lead to the desired results?)
What should we be aiming for?
Developing or selecting from among alternative program
approaches––for example, trying different curricula or
policies and determining which ones best achieve the goals;
Tracking program objectives––for example, setting up a
system that shows who gets services, how much service is
delivered, how participants rate the services they receive,
and which approaches are most readily adopted by staff; or
Trying out and assessing new program designs––
determining the extent to which a particular approach is
being implemented faithfully by school or agency personnel
or the extent to which it attracts or retains participants.
(Through these types of activities, those who provide or administer
services determine what to offer and how well they are offering those
services)
Evaluation settings/dimensions
The different dimensions of evaluation have
formal names: process, outcome, and impact
evaluation. These three dimensions can also
be thought of as a set of assessment options
that build upon one another, allowing program
staff to increase their knowledge about the
activities they undertake as they incorporate
more options or dimensions into their
evaluation.
(the overall intent behind monitoring and evaluation is to inform
policymakers, practitioners, and stakeholders of the progress of the
programme in achieving its intended results)
What happens at the process stage?
Process evaluation describes and assesses
program materials and activities (monitoring
activities).
For example, program staff might
systematically review the units in a curriculum
to determine whether they adequately address
all of the behaviors the program seeks to
influence.
A program administrator might observe
teachers using the program and write a
descriptive account of how students respond,
and then provide feedback to instructors.
What happens at the outcome
stage?
Outcome evaluation assesses program
achievements and effects. Outcome
evaluations study the immediate or direct
effects of the program on participants.
◦ For example, when a 10-session program aimed
at teaching refusal skills is completed, can the
participants demonstrate the skills successfully?
(The scope of an outcome evaluation can extend beyond
knowledge or attitudes, to examine the immediate
behavioral effects of programs)
What happens at the impact stage?
Impact evaluation looks beyond the immediate
results of policies, instruction, or services to identify
longer-term as well as unintended program effects.
It may also examine what happens when several
programs operate in unison.
◦ For example, an impact evaluation might examine
whether a program’s immediate positive effects on
behavior were sustained over time.
It might also look at whether the introduction of a
community-wide prevention program with
components administered by schools, agencies, and
churches resulted in fewer teenage drug-related
arrests or deaths.
Evaluation: for what purpose? And
to be used by whom?
Before assessing a program, it is critical to consider
who is most likely to need and use the information
that will be obtained and for what purposes
For project management – this type of evaluation monitors the routines of program operations. It can provide program staff or administrators with information on such items as participant characteristics, program activities, allocation of staff resources, or program costs
For staying on track – this type of evaluation can help to strengthen service delivery and to maintain the connection between program goals, objectives, and services
Evaluation: for what purpose? And
to be used by whom?
For project efficiency - Evaluation can help to
streamline service delivery or to enhance coordination
among various program components, lowering the cost
of service. Increased efficiency can enable a program to
serve more people, offer more services, or target
services to those whose needs are greatest
For accountability - the users of the evaluation results
likely will come from outside of program operations:
parent groups, funding agencies, elected officials, or
other policymakers
For new program development and
dissemination – Rigorous evaluation of longer-term
program outcomes is a prerequisite to asserting that a
new model is effective
What is the evidence to make our
JUDGMENT?
Regardless of the kind of evaluation, all
evaluations use data collected in a
systematic manner (the INDICATORS).
These data may be quantitative—such as
counts of program participants, amounts of
counseling or other services received, or
extent of drug use. They also may be
qualitative—such as descriptions of what
transpired at a series of counseling sessions.
(Successful evaluations often blend quantitative and qualitative data collection. The
choice of which to use should be made with an understanding that there is
usually more than one way to answer any given question)
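As a small illustration of blending the two, the Python sketch below keeps a qualitative narrative alongside quantitative counts in the same session record; the field names and figures are hypothetical.

# A minimal sketch of blended data collection: quantitative counts and
# qualitative notes side by side. All values are invented examples.
sessions = [
    {"participants": 12, "minutes": 50,
     "notes": "Role-play of refusal scenarios; high engagement."},
    {"participants": 9, "minutes": 50,
     "notes": "Peer-pressure discussion; two students shared experiences."},
]

# Quantitative side: simple counts and totals.
total_attendances = sum(s["participants"] for s in sessions)
print(f"{len(sessions)} sessions, {total_attendances} attendances")

# Qualitative side: the narrative descriptions are kept and analyzed as text.
for s in sessions:
    print("-", s["notes"])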
Steps in planning for the evaluation
Identifying the Evaluation’s Consumers – this
will help to determine what questions are most
important, what data will be viewed as credible,
what analyses should be conducted, and how results
should be transmitted and displayed
Choosing the Important Evaluation Questions – There is rarely enough time or resources to answer all of the questions about program practice and effects that consumers pose. A way must be found to establish priorities and to limit the number of questions; for each candidate, ask: “I need to know ____ because I need to decide ____.”
Steps in planning for the evaluation
Mapping Out an Evaluation Work Plan -
create a step-by-step work plan; review the
questions and group them in some logical
manner––by subject area, by the data needed
to address them, by process/outcome/impact,
or in some other manner.
(The plan should therefore outline the data that will
be collected and how the information gathered will
relate to each evaluation question)
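One way to make that mapping concrete is to pair each question with its dimension and data source, as in the minimal Python sketch below; the questions and sources are hypothetical examples.

# A minimal sketch of a work-plan map: each evaluation question is paired
# with its dimension and the data that will address it (hypothetical examples).
work_plan = [
    ("Were all curriculum sessions delivered as designed?",
     "process", "teacher session logs"),
    ("Can participants demonstrate refusal skills afterwards?",
     "outcome", "post-program skills checklist"),
    ("Did 30-day drinking prevalence fall over two school years?",
     "impact", "annual student survey"),
]

# Group the questions by dimension (process/outcome/impact) for the written plan.
by_dimension = {}
for question, dimension, data in work_plan:
    by_dimension.setdefault(dimension, []).append((question, data))
for dimension, items in by_dimension.items():
    print(dimension.upper())
    for question, data in items:
        print(f"  - {question} [data: {data}]")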
Making good use of scarce resources
Making sure adequate resources are at hand to carry out all functions
Be creative; use program staff and even students to do data collection and data entry
Use survey instruments that already exist for similar types of program intervention
Use national statistical offices or universities to support the analysis of the data and the compilation of reports
Making good use of scarce resources
Use existing data sources where applicable
and available (be mindful of validity and
reliability)
Do not put all of your information “eggs” in
one data collection “basket.” It is useful to
begin an evaluation with multiple data
collection strategies or alternatives in mind
Obtain technical information and help from
outside the project
What is critically important in the
long run?
Establishing that implementation
took place
Ensuring that evaluations yield valid
and reliable findings
Interpreting and reporting
evaluation findings
Q. What defines the findings? - INDICATORS
Evaluating student outcomes in the context of a drug prevention program
(minimum menu of indicators)
Knowledge of and attitudes towards drugs and alcohol
Rates of drug use (incidence and prevalence) – has the program forestalled use or reduced use? (see the sketch after this list)
Intent to use drugs
Perceived risk involved in drug use
Drug education – from the students’ perspective
(minimum menu of indicators, continued)
School drug policy – is the policy reducing the availability of drugs in school? For example, how difficult or easy is it to obtain drugs at school?
Disciplinary environment – what happens to a student who is caught, for example, in possession of drugs?
Drug availability and acceptability – do your friends use drugs? How often are you around friends who use drugs?
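For the rates-of-use indicator above, prevalence and incidence are computed differently: prevalence counts everyone currently using, while incidence counts only new users over a period. A minimal Python sketch with invented figures:

# A minimal sketch of the two rate-of-use measures; all figures are invented.
def prevalence_pct(current_users, population):
    # Point prevalence: share of all students using at the time of the survey.
    return 100.0 * current_users / population

def incidence_pct(new_users, population_at_risk):
    # Incidence: share of prior non-users who began use during the period.
    return 100.0 * new_users / population_at_risk

print(prevalence_pct(45, 600))   # 7.5   (% of all students)
print(incidence_pct(12, 555))    # ~2.16 (% of students not using at baseline)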
Example of how outcome indicators can be displayed
Teacher reports of number of 50-minute class sessions per
school year devoted to drug education
Students’ reports of how many drug education sessions they attended per school year
Drug-related disciplinary actions per school year
Percentage of students who reported being drunk at least
once in the last 30 days, prior to and after implementation
Percentage of students who reported ever trying marijuana,
prior to and after implementation
Percentage of students who disapproved or strongly
disapproved of having five or more drinks in a row, prior
to and after implementation
Percentage of students who saw great risk in having five or
more drinks in a row, prior to and after implementation
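A minimal Python sketch of one possible pre/post display for indicators like those above; all figures are invented for illustration.

# A minimal sketch of a pre/post indicator display; all figures are invented.
rows = [
    ("Drunk at least once in last 30 days (%)",  18.0, 12.5),
    ("Ever tried marijuana (%)",                 22.0, 19.0),
    ("Disapprove of 5+ drinks in a row (%)",     61.0, 70.0),
    ("See great risk in 5+ drinks in a row (%)", 48.0, 57.0),
]

print(f"{'Indicator':<42}{'Pre':>7}{'Post':>7}{'Change':>8}")
for name, pre, post in rows:
    print(f"{name:<42}{pre:>7.1f}{post:>7.1f}{post - pre:>+8.1f}")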
Changing the focus
Not all drug prevention programs are school based. Some are community based and can sometimes form part of a wider social intervention for high-risk students or high-risk communities, and they too are subject to evaluation.
I would like to present a two-slide case study of such a program – a social safety net protective factor program
Social safety net program in Jamaica
Programme of Advancement through Health and Education (PATH)
Objectives: to support children of poor families
to attend and stay in school and access health
care
Elements of the support:
◦ Qualifying criteria
◦ Monetary support provided by the state
◦ Verification of attendance and use of funds (done
through a national management information system)
◦ Monitoring and evaluation of outcomes (in-person monitoring by staff, personal visits to homes and schools to check records of attendance, etc.)
PATH program
Intended benefits:
Poor families will be able to afford transportation for children to attend school
Families will be able to provide books
Recipient students are enrolled in a school feeding program and given breakfast and lunch
Successful participation over time (from one school year to the next) triggers additional social benefits for the family, for example enrolment in the health insurance scheme
PATH Program - outcomes
(accrued benefits from an evaluation with a comparison group)
Increased school attendance with greater
regularity
Lower school drop-out rates among participating families
Increased participation in protective life skills programs, drug education programs, etc.
Increased attachment and participation in
after-school activities
PATH Program - outcomes
(accrued benefits from an evaluation with a comparison group)
Increased participation in community
programs (greater community attachment)
Increased good-faith efforts by parents to support children’s participation in school and community activities (pro-social involvement)
Reduced rates of drug use and intent to use
drugs by participating students
More pro-social involvement by students
overall