M&E TRAINING
Introduction
PART ONE
Objectives:
At the end of this course, you will be able to:
• Identify the basic purposes and scope of M&E
• Differentiate between monitoring functions and
evaluation functions
• Describe the functions of an M&E plan
• Identify the main components of an M&E plan
• Identify and differentiate between conceptual
frameworks, results frameworks, and logic models
• Describe how frameworks are used for M&E
planning
• Identify criteria for the selection of indicators
• Describe how indicators are linked to frameworks
• Identify types of data sources
• Describe how information can be used for decision-
making
What Is Monitoring?
• The process of regularly checking the status of a program by comparing the actual implementation of activities against the work plan.
• It involves the collection of routine data that measure
progress toward achieving program objectives. It is
used to track changes in program performance over
time. Its purpose is to permit stakeholders to make
informed decisions regarding the effectiveness of
programs and the efficient use of resources. It asks
key questions:
– How well has the program been implemented?
– How much does implementation vary from site to site?
– Did the program benefit the intended people? At what cost?
A graphic illustration of program monitoring over time.
Types of monitoring
• Field visits
• Progress reports
• Tracking inputs
• Review meetings
M&E TRAINING
PART TWO
EVALUATION
What Is Evaluation?
• It is the systematic application of both
quantitative and qualitative research techniques
to determine the appropriateness and
effectiveness of the design and implementation
of social programs.
• It includes measuring the extent to which the
changes that have occurred are attributable to
your program's activities.
A graphic illustration of program impact.
Types of evaluation
1. Thematic evaluations (Situational
analysis)
2. Formative evaluation (Midterm)
3. Summative evaluation (Final)
4. Impact evaluation (Overall end)
Why Is M&E Important?
• It helps program implementers to make informed
decisions regarding program operations and service
delivery based on objective evidence
• Ensure the most effective and efficient use of resources
• Objectively assess the extent to which the program is having or has had the desired impact, in what areas it is effective, and where corrections need to be considered.
• Meet organizational reporting and other requirements,
and convince donors that their investments have been
worthwhile or that alternative approaches should be
considered.
Examples of questions that M&E
can answer:
• Was the program implemented as
planned?
• Did the target population benefit from
the program and at what cost?
• Can improved health outcomes be
attributed to program efforts?
• Which program activities were more
effective and which less effective?
M&E TRAINING
PART THREE
PROJECT & PROGRAM
A project or a program?
• A project is an undertaking designed to
achieve certain specific changes within a
given time through use of specific
resources.
• A program is not time bound. It may be
composed of several projects implemented
at different times and aims to achieve long-term
objectives/targets expressed in the
vision and mission statements.
Phases of Evaluation process
• Conceptualization
- Study the environment
- Review project objectives.
- Define the scope of evaluation and outline study questions
- Select evaluators
- Identify the indicators that will most adequately answer your questions
- Identify information sources from whom answers will be collected
• Designing the evaluation
- Define the evaluation approach (qualitative and or quantitative
methodology or both)
- Determine whether the approach applies to all or a sample of
the information sources
- Design sampling methodology and size (if needed)
- Determine and design the evaluation instruments
- Train people to collect data
Phases of Evaluation process
Cont’d
• Collect data
• Analyzing the data
- Validate data collected.
- Define statistical analysis to be applied
- Perform the statistical analysis
• Reporting the findings
- Report to project administrators and
funding source
The M&E Plan
• This is the fundamental document that details a
program’s objectives, the interventions developed to
achieve these objectives, and describes the
procedures that will be implemented to determine
whether or not the objectives are met.
• It shows how the expected results of a program
relate to its goals and objectives.
• It describes the data needed and how these data will
be collected and analyzed, how this information will
be used, the resources that will be needed, and how
the program will be accountable to stakeholders.
• M&E plans should be created during the design
phase of a program
Organizing a typical M&E Plan:
• The underlying assumptions on which the achievement of program goals depends
• The anticipated relationships between activities, outputs,
and outcomes
• Well-defined conceptual measures and definitions, along
with baseline values
• The monitoring schedule
• A list of data sources to be used
• Cost estimates for the M&E activities
• A list of the partnerships and collaborations that will help
achieve the desired results
• A plan for the dissemination and utilization of the
information gained
• An M&E plan should be considered a living document and
revised whenever a program is modified or new information
is needed.
M&E Plan Components:
• The introduction
• The program description and framework
• A detailed description of the plan indicators
• The data collection plan
• A plan for monitoring
• A plan for evaluation
• A plan for the utilization of the information
gained
• A mechanism for updating the plan
M&E TRAINING
PART FOUR
M&E FRAMEWORKS
How to Create an Effective Monitoring and Evaluation
Framework
Frameworks
• Frameworks are key elements of M&E plans that show the
components of a project and the sequence of steps needed to
achieve the desired outcomes.
• They help increase understanding of the program's goals and
objectives, define the relationships between factors key to
implementation, and define the internal and external elements that
could affect its success.
• They are crucial for understanding and analyzing how a program is
supposed to work.
• There is no one perfect framework and no single framework is
appropriate for all situations, but several common types will be
discussed here:
- Conceptual framework
- Results framework
- Logical framework, which is a diagram or matrix that illustrates the linear relationships between key program inputs, activities, immediate results/outputs, and desired outcomes.
Conceptual Frameworks
Result Framework
Logical Framework
The logical framework is laid out as a matrix. Its columns are: Hierarchy of Aims, Verifiable Indicators, Means of Verification (MOV), and Risks & Assumptions. Its rows are: Goal, Purpose, Output, and Activities.
Summary of Frameworks
What Is an Indicator?
• An indicator is a variable that measures one aspect of a
program or project that is directly related to the program’s
objectives. Examples of indicators include:
- Percentage of clinic personnel who have completed a particular training workshop
- Number of radio programs about family planning aired in the past year
- Percentage of clinics that experienced a stock-out of condoms at any point during a given time period
• An indicator is a variable whose value changes from the
baseline level at the time the program began to a new value
after the program and its activities have made their impact
felt. At that point, the variable, or indicator, is calculated
again.
• An indicator measures the value of that change in meaningful units that can be compared with past and future values.
• It is usually expressed as a percentage or a number.
Quantitative and Qualitative Indicators:
• Quantitative indicators are numeric and are
presented as numbers or percentages.
• Qualitative indicators are descriptive
observations and can be used to
supplement the numbers and percentages
provided by quantitative indicators.
• Examples include "availability of a clear,
strategic organizational mission statement"
and "existence of a multi-year procurement
plan for each product offered."
Importance of Indicators
• Indicators provide M&E information crucial for
decision-making at every level and stage of program
implementation.
• Indicators of program inputs measure the specific
resources that go into carrying out a project or
program (for example, amount of funds allocated to
the health sector annually).
• Indicators of outputs measure the immediate results
obtained by the program (for example, number of
multivitamins distributed or number of staff trained).
• Indicators of outcomes measure whether the
outcome changed in the desired direction
and whether this change signifies program
“success” (e.g., contraceptive prevalence rate).
What Is a Metric?
• A metric is an important part of what comprises an indicator. It
is the precise calculation or formula on which the indicator is
based. Calculation of the metric establishes the indicator’s
objective value at a point in time. Even if the factor itself is
subjective or qualitative, like the attitudes of a target population,
the indicator metric calculates its value at a given time
objectively.
Here is an example:
• Indicator: Percentage of urban facilities scoring 85-100% on a
Quality of Care Checklist
Note that because this indicator calls for a percentage, a fraction
is required to calculate it.
Possible metrics:
• Numerator, or top number of the fraction: number of urban
facilities scoring 85-100% on a Quality of Care Checklist.
• Denominator, or bottom number of the fraction: total number of
urban facilities checked and scored.
• Defining good metrics is crucial to the usefulness of any M&E
plan because it clarifies the single dimension of the result that is
being measured by the indicator.
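As an illustration, here is a minimal sketch (Python) that computes this indicator from its metric; the facility scores, the 85% cut-off, and the function name are hypothetical assumptions used only to show the numerator/denominator arithmetic.

    # Hypothetical sketch: computing an indicator value from its metric.
    # The scores and the 85% cut-off below are illustrative, not real data.

    def quality_indicator(checklist_scores, threshold=85):
        """Percentage of urban facilities scoring >= threshold on a
        Quality of Care Checklist (numerator / denominator * 100)."""
        denominator = len(checklist_scores)  # total facilities checked and scored
        numerator = sum(1 for score in checklist_scores if score >= threshold)
        return 100 * numerator / denominator

    # Example: checklist scores for eight (hypothetical) urban facilities
    scores = [92, 78, 85, 60, 88, 95, 81, 90]
    print(f"Indicator value: {quality_indicator(scores):.1f}%")  # 62.5%

Keeping the numerator and denominator explicit mirrors the metric definition and makes the indicator easy to recalculate at baseline and at later points in time.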
Characteristics of a good indicator:
• Produce the same results each time it is used to measure the
same condition or event
• Measure only the condition or event it is intended to measure
• Reflect changes in the state or condition over time
• Represent reasonable measurement costs
• Be defined in clear and unambiguous terms
• Indicators should be consistent with international standards
and other reporting requirements.
• Indicators should be independent, meaning that they are non-directional and can vary in any direction. For instance, an indicator should measure the number of clients receiving counseling rather than an increase in the number of clients receiving counseling, and the contraceptive prevalence rate should be measured rather than the decrease in contraceptive prevalence.
• Indicator values should be easy to interpret and explain, timely,
precise, valid, and reliable.
• They should also be comparable across relevant population
groups, geography, and other program factors.
Linking Indicators to
Frameworks:
Linking Indicators with Result
Framework
Linking Indicators to Logic Models:
Guidelines for Selecting Indicators:
• Select indicators requiring data that can
realistically be collected with the resources
available.
• Select at least one or two indicators (ideally,
from different data sources) per key activity or
result. Select at least one indicator for each
core activity (e.g., training event, social
marketing message, etc.).
• Select no more than 8-10 indicators per area of significant program focus. Use a mix of data collection sources whenever possible.
Linking Evaluation to the Program
Planning and Implementation Cycle:
• The steps that an organization or program goes through in
managing its activities can be presented as a continuous cycle
of management actions from assessing needs, to planning and
implementing activities, to measuring final programmatic
outcomes, the results of which feed back into the planning
stage to start the cycle again. Whether an evaluation is
conducted internally by program staff or by an external
consultant, there are three main elements in any evaluation:
• Planning the evaluation
• Conducting the evaluation
• Using the results
• The following chart breaks down the steps in the evaluation
process and shows how they directly relate to the steps in the
planning and implementation cycle.
Evaluation as part of the program
planning and Implementation Cycle
[Diagram: a continuous cycle running from Assessing Program Needs to Setting Objectives (and indicators), Preparing the Work Plan, Implementing Activities, Determining Progress, Identifying Problems, and Making Revisions, with Evaluation at the center: planning the evaluation, conducting the evaluation, and using the results from the evaluation.]
M&E TRAINING
PART SIX
DATA SOURCES
Data Sources
• Data sources are the resources used to obtain data for
M&E activities.
• There are several levels from which data can come,
including client, program, service environment, population,
and geographic levels.
• Regardless of level, data are commonly divided into two
general categories: routine and non-routine.
Examples of routine data sources:
• Vital registration records
• Clinic service statistics
• Demographic surveillance
Examples of non-routine data sources:
• Household surveys, such as DHS
• National censuses
• Facility surveys
Different Sources, Same
Indicator
Types of evaluation questions depending
on the focus of the evaluation
Relevance: Are the programs, services and strategies appropriate to the needs they are supposed to address?
Adequacy: Is the program addressing all the needs it is designed to address?
Progress: Is the program doing what it planned to do within the planned amount of time and in accordance with the budget?
Effectiveness: Is the program achieving its intermediate objectives and serving the needs of its clients?
Impact: Has the program produced the expected long-term results?
Efficiency: Are the results of the program (outputs) appropriate to the use of its resources (inputs)?
Sustainability: Is the program/organization providing quality services to its clients, increasing or maintaining demand for services, and generating income locally while decreasing its dependency on funds from external donors?
Data Collection
• The term data refers to raw, unprocessed information
while information, or strategic information, usually
refers to processed data or data presented in some
sort of context.
• The M&E plan should include a data collection plan
that summarizes information about the data sources
needed to monitor and/or evaluate the program.
• The plan should include information for each data
source such as:
- The timing and frequency of collection
- The person/agency responsible for the collection
- The information needed for the indicators
• Any additional information that will be obtained from the source will depend on the differences between qualitative and quantitative evaluation methods.
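To make this concrete, here is a minimal sketch (Python) of how one entry in a data collection plan might be recorded; the field names, data source, indicators, and responsible party shown are hypothetical assumptions, not a prescribed format.

    # Hypothetical sketch of a single data-collection-plan entry.
    # Field names and values are illustrative assumptions only.
    data_collection_plan = [
        {
            "data_source": "Clinic service statistics",  # a routine data source
            "indicators_served": [
                "Number of clients counseled",
                "Percentage of clinics reporting condom stock-outs",
            ],
            "timing_and_frequency": "Collected monthly, compiled quarterly",
            "responsible": "District health information officer",
        },
        # ...one entry per data source used by the M&E plan
    ]

    for entry in data_collection_plan:
        print(entry["data_source"], "->", entry["timing_and_frequency"])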
Differences between qualitative and quantitative evaluation methods
1. Quantitative methods describe “how many” or “how much”; qualitative methods describe “how” and “why”.
2. Quantitative methods use predominantly closed-ended questions; qualitative methods use predominantly open-ended questions.
3. Quantitative methods provide numerical data and statistics that facilitate similar interpretation by evaluators; qualitative methods provide data on perceptions, beliefs and values which can be interpreted differently by different evaluators.
4. Quantitative methods require large samples, preferably selected at random; qualitative methods permit more limited samples, generally not selected at random.
5. Quantitative methods require staff with experience in statistical methods; qualitative methods require expertise in qualitative data analysis.
6. Quantitative results can be generalized to the target population; qualitative results cannot be generalized and are only indicative of a segment of the population.
7. Quantitative methods yield more superficial responses to sensitive topics (e.g., sexual behavior); qualitative methods offer more in-depth responses on sensitive topics.
Data Quality
• Data quality is important to consider when determining
the usefulness of various data sources; the data collected
are most useful when they are of the highest quality.
• The highest quality data are usually obtained through the
triangulation of data from several sources.
• It is also important to remember that behavioral
and motivational factors on the part of the people
collecting and analyzing the data can also affect its
quality.
• Some types of errors or biases common in data collection
include:
- Sampling bias: occurs when the sample taken to represent
population values is not a representative sample
- Non-sampling error: all other kinds of mismeasurement, such as courtesy bias, incomplete records, or non-response
- Subjective measurement: occurs when the data are
influenced by the measurer
Here are some data quality issues to
consider:
- Coverage: Will the data cover all of the elements of
interest?
- Completeness: Is there a complete set of data for
each element of interest?
- Accuracy: Have the instruments been tested to
ensure validity and reliability of the data?
- Frequency: Are the data collected as frequently as
needed?
- Reporting Schedule: Do the available data reflect the
time periods of interest?
- Accessibility: Are the data needed collectable /
retrievable?
- Power: Is the sample size big enough to provide a
stable estimate or detect change?
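One way to approach the power question is a standard two-proportion sample-size calculation. The sketch below (Python) assumes 95% confidence and 80% power, and the baseline and target prevalence values are hypothetical, chosen only to illustrate the arithmetic.

    import math

    # Hypothetical sketch: sample size per group needed to detect a change
    # in a proportion-based indicator (standard two-proportion formula;
    # z-values for 95% confidence and 80% power are hard-coded for simplicity).
    def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.84):
        p_bar = (p1 + p2) / 2
        numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(numerator / (p1 - p2) ** 2)

    # Illustrative: detect an increase in contraceptive prevalence from 30% to 40%
    print(sample_size_two_proportions(0.30, 0.40))  # roughly 356 per group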
Data analysis
• Analyzing the data you have collected is often one of the most
difficult aspects of evaluation and requires careful planning.
• In analyzing the data, you need to develop skills in finding
patterns in the data and to have the ability to isolate critical
facts and information from other information that is not so
important.
• How you analyze the data depends greatly on how the data
were collected.
• In some evaluations the major interest may be to measure short-term progress by comparing numbers and information across different service sites within the program or the organization.
• In other evaluations, you may want to measure your program's success by comparing the program's achievements against the baseline established by your program.
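For instance, a minimal sketch (Python) of such a baseline comparison might look like the following; the indicator name and values are hypothetical assumptions used only to show the calculation.

    # Hypothetical sketch: comparing an indicator's current value with its baseline.
    # Indicator name and values are illustrative assumptions only.
    baseline = {"contraceptive_prevalence_rate": 30.0}   # % at program start
    current  = {"contraceptive_prevalence_rate": 38.5}   # % at evaluation

    for indicator, base_value in baseline.items():
        change = current[indicator] - base_value
        print(f"{indicator}: baseline {base_value}%, current {current[indicator]}%, "
              f"change {change:+.1f} percentage points")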
Data Use
• Collecting data is only meaningful and worthwhile if it is
subsequently used for evidence-based decision-making.
• To be useful, information must be based on quality data, and it
also must be communicated effectively to policy makers and other
interested stakeholders.
• M&E data need to be manageable and timely, reliable, specific to
the activities in question, and the results need to be well
understood.
• The key to effective data use involves linking the data to the
decisions that need to be made and to those making these
decisions.
• The decision-maker needs to be aware of relevant information in
order to make informed decisions.
• For example, the data may prompt the implementation of a new distribution system and could spur additional research to test the effectiveness of this new strategy compared to the existing one.
• When decision-makers understand the kinds of information that
can be used to inform decisions and improve results, they are
more likely to seek out and use this information.