
MOUNTAINS OF THE MOON UNIVERSITY

COURSE OUTLINE

Course Name: Monitoring and Evaluation


Course code: BPAM 3204
Credit units: 3
Course level: Year 3 Semester 2
Contact Hours: 45 Hours
Course Instructor:
Contact:

Course Description
Monitoring and evaluation are key components in any development undertaking. The aim of
monitoring is to ascertain progress. Evaluation is an activity to measure impact or change.
Therefore, this course unit introduces students to concepts in monitoring and evaluation and to the
historical underpinnings of M&E, so that students understand its origins. Students will learn the
theory of change, a vital component of development projects and programmes that describes the
desired new picture in any intervention. Students will study the approaches used in monitoring and
evaluation and the monitoring and evaluation system, and thereafter they will be taught the practical
aspects of implementing monitoring and evaluation. Students will also study strategies and tools for
data collection and their relevance.

Course objectives
1. To teach students concepts in monitoring and evaluation and show them how they are
applied.
2. To teach students monitoring and evaluation systems and tools for assessing progress and
impact.
3. To teach students monitoring and evaluation strategies, methods and tools that are used to
measure progress and impact in development interventions.

Learning Outcomes
Upon completion of this course, students should be able to;
1. Explain concepts in monitoring and evaluation and how they are applied.
2. Describe monitoring and evaluation systems and tools for assessing progress and impact.
3. Explain monitoring and evaluation strategies, methods and tools that are used to measure
progress and impact in development interventions.

Course Outline

Conceptualizing Monitoring and Evaluation (3Hrs)


• Definitions of Monitoring and Evaluation
• Justification for adopting Monitoring & Evaluation in development work
• Purposes of Monitoring and Evaluation
• Areas considered during an evaluation activity
• Key Principles of the M&E
Historical Roots of Monitoring and Evaluation (3Hrs)
• The Historical Development of Evaluation Use
• Monitoring and Evaluation in the traditional government from the ancient context
• Reporting and Citizen based evaluation of traditional leaders

Types And Groups of Evaluations (3Hrs)


Types
• Formative evaluations
• Mid-term evaluations
• Real-time evaluations (RTEs)
• Summative evaluations/ Final evaluations
• Ex-post evaluations
Groups
• Internal or self-evaluations
• External or independent evaluations
• Participatory evaluations
• Joint evaluations
• Real-time evaluations (RTEs)

Some of the Approaches, Methods and Tools for Monitoring And Evaluation (3Hrs)
Approaches
• The Logical Framework Approach
• Result-oriented approach
• Constructivist approach
• Reflexive approach
Methods
• Formal surveys
• Surveys
• Interviews
• Focus group discussions
• Case studies
• Observations
Tools
• Structured questionnaires
• Interview guides
• Observation checklist

Monitoring And Evaluation System and Framework (3Hrs)


• M and E systems
• Types of a monitoring and evaluation system
• Implementation-focused M&E system
• Results-based M&E system
• Logical framework
• Results framework
• Conceptual or narrative framework
• Steps in developing a monitoring and evaluation framework

Theory of Change (6Hrs)
• Conceptualizing a Theory of Change
• Why use a Theory of Change?
• Key elements of a theory of change
• Purpose: Why use a theory of change?
• When is it appropriate to use a theory of change?
• The process of constructing a theory of change
• Key principles for developing a theory of change
• How to use a theory of change for evaluation

Developing an M&E Plan (3Hrs)


• What is a Monitoring and Evaluation Plan?
• Why develop a Monitoring and Evaluation Plan?
• Goal and objectives of the monitoring and evaluation strategy
• Process for developing a monitoring and evaluation plan

Implementing Monitoring and Evaluation (6Hrs)


• Planning for monitoring and evaluation (basic principles for planning and the planning
process)
• The monitoring process (key principles for monitoring, the building blocks, and monitoring
tools and mechanisms)
• The Evaluation Process (Preparing for and managing an evaluation)
• Monitoring and evaluation framework
• Resources for monitoring and evaluation
• Engagement of stakeholders in monitoring and evaluation
• Capacity for monitoring and evaluation

Methodologies for data collection and analysis for monitoring and evaluation (3Hrs)
• Identifying purpose for data collection
• Planning for data collection
• Sources of data
• Procedure for data collection
• Data collection methods and tools
• Data processing and analysis
• Data dissemination or feedback

Assuring the quality of evaluation design and methodology (3Hrs)


• Defining the context
• The evaluation purposes
• Focusing the evaluation
• Evaluation methodology

Participatory Monitoring and Evaluation (6Hrs)


• Conceptualizing participatory monitoring and evaluation
- Participatory monitoring,
- Process monitoring,
- Participatory impact monitoring,
- Participatory evaluation,
- Stakeholder-based evaluation/stakeholder assessment
• Purposes of participatory monitoring and evaluation
• Participatory monitoring and evaluation and its relevance
• Principles of participatory monitoring and evaluation (participation, negotiation, learning
and flexibility)
• Participatory monitoring and evaluation and implications on development

Suggested Reading Materials


1. Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American
Journal of Evaluation, 37(4), 568-579.
2. Annie E. Casey Foundation. (2004). Theory of Change: A Practical Tool for Action, Results
and Learning. Annie E. Casey Foundation, 10-11.
3. Kabonga, I. (2018). Principles and practice of monitoring and evaluation: a paraphernalia
for effective development. Africanus: Journal of Development Studies, 48(2), 1-21.
4. Masuku, N. W., & Ijeoma, E. O. (2015). A global overview of monitoring and evaluation
(M&E) and its meaning in the local government context of South Africa. Africa’s Public
Service Delivery & Performance Review, 3(2), 5-25.
5. Onyango, R. O. (2018). Participatory monitoring and evaluation: An overview of guiding
pedagogical principles and implications on development. International Journal of Novel
Research in Humanity and Social Sciences, 5(4), 428-433.
6. United Nations Development Programme. Evaluation Office. (2002). Handbook on
monitoring and evaluating for results. Evaluation Office.

Mode of delivery
• Lectures, class assignments and presentations, tutorials
Mode of assessment
• Individual assignment, group assignment, tests, final exam

UNIT ONE

OVERVIEW OF PROJECT MANAGEMENT CYCLE

1.1 Introduction

Thank you for your interest in studying monitoring and evaluation of projects, which is an
indispensable management function. You can call it “M&E” – it is much easier. In this lecture
we will review a few background issues on projects, which you covered in the unit
Project Planning and Management. This will give us a good foundation for discussing Monitoring
and Evaluation.

The mainstay of work in the area of ‘Public Administration’ is in the form of projects, which
are targeted at areas such as health, education, sanitation, livelihoods, child rights and climate
change, depending on the mandate and objectives of the implementing and funding
organisations. In this unit, the fundamental concepts related to projects are explored and a
project-based approach is discussed, which is imperative for an in-depth understanding of M&E.

1.2. Objectives

At the end of this lecture, you should be able to;

i. Define a project

ii. Define the project management cycle

iii. Describe the major stages of the project cycle

iv. Explain the elements of a project document

1.3 Definition of a project

As we embark upon our journey to understand how to monitor and evaluate projects, it is
important to first understand the fundamentals and underlying concepts of projects and project
management. At the outset of every project, it is envisaged that several activities will be
performed over the course of the project's implementation. These activities constitute the
work that will be done during the project and they form the mainstay of the action that will
take place.

Every project has a specific objective and it is envisioned that through these activities the
project will achieve its objective. The example of a five-year project of making its target
villages open defecation free (ODF) is used to illustrate this point. To achieve this objective,
the project engages in several activities like construction of household (HH) and community
toilets, conducting awareness campaigns to motivate people not to defecate in the open,
educating people about the technologies that should be used for toilet construction etc. The
types of activities performed as part of the project vary depending on the project objective
and the implementing organisation’s capacity. These activities form the key work that is done
as part of the project implementation.

Another important aspect of every project is that it has a specific start date and a specific end
date i.e., a specific time period within which it has to be executed. The project is expected to
achieve its desired objective within this specific time period, which in the example quoted
above, is a duration of five years. Last but not the least, it is very critical to understand that
each project is allocated a limited set of resources. Resources, which may be financial, human
and physical, are allocated to a project so that its activities may be implemented and its
objectives achieved within a specific time period. Accordingly, the example project is also
allotted a fixed budget, human resources and fixed physical resources with which its activities
may be implemented and its objective achieved in a specific period of time.

Hence, a project may be defined as:

“A set of activities implemented within a specific period of time and with


specific resources to achieve a specific objective.”

However, as in the previous course unit on ‘Project Planning and Management’, you
may recall that the term project is defined differently by different experts. Let us single out
a few definitions and try to understand them in the context of Monitoring and Evaluation.

Singh and Nyandemo (2004) define a project as “an endeavor in which human, material and
financial resources are organized in a novel way to undertake a unique scope of work of a
given specification within constraints of cost, time and the prevailing environment, so as to
achieve beneficial change defined by quantitative and qualitative objectives.” On the other
hand, the International Organization for Standardization (ISO 10006) looks at a project as “a unique
process that consists of a set of coordinated and controlled activities with start and finish dates
undertaken to achieve an objective conforming to specific requirements, including the
constraints of time, cost and resources.”

From the two definitions it is clear that a project involves resources, which include human,
material and financial resources, among others. It also involves tasks, defined in terms of activities,
that are organized in a unique way to achieve a set of predetermined objectives. Other issues that
come out clearly are the time-bound nature of projects and the coordination and control
of activities to achieve the desired objectives.

We can therefore conclude that:


• Activities that comprise a project are intentionally designed to achieve certain ends in
consideration of available resources and time.
• Objectives therefore become the major target of each and every activity.
• Monitoring of the project activities is therefore very important to ensure that they are
implemented as planned.
• It is important to ensure that the activities produce the intended results at the end of the
project cycle.
• It is also important to ascertain the changes brought to the project beneficiaries in terms
of quantitative and qualitative data.

It is only when this is achieved that we can conclude that the project has fulfilled its objectives.
Evaluation of projects therefore becomes not only important to projects but a part and parcel of
project design.

1.4 Project Cycle

Usually, ‘Public Administration’ involves policy interventions, which are typically formulated and
implemented in the form of projects. Such projects follow a cycle or sequence which is known
as the project cycle. From its inception to its closure, every project has its unique cycle of
operation, though the fundamental project cycle remains the same. Therefore, it is essential to
understand the project cycle in order to better conceptualise, design, plan and implement a project,
and also to monitor and evaluate it effectively. From the beginning to the end of the project, the
project cycle comprises various phases or stages. All the stages in the project cycle are
delineated and implemented successively in a phased manner. Each of these stages is defined
by its objective, information requirements, responsibilities and key outputs.

A project cycle is a sequence of continuous events which a project follows. The events, stages
or phases can be divided in several equally valid ways depending on the executing agency or
parties involved. For instance, in the 1970s the World Bank identified five stages that a project
undergoes, namely project identification, project formulation, project appraisal, implementation
and project evaluation. The various stages of a generic project cycle are:

Fig. 1.1: Project Cycle

Stage I: Situation Analysis

It is a well-known fact that a project does not exist in a vacuum. It is formulated to respond to a
negative situation or condition. The stage of identifying and understanding this existing negative
situation which needs to be responded to through intervention is called situation analysis. This
stage consists of understanding the prevalent situation and identifying the cause(s) of this
situation. Situation analysis is useful in the later stages when the strategy and subsequently, the
specific activities to target these causes are defined. A good situation analysis serves as an entry
point for the project by throwing light on what needs to be done to address the negative conditions
in each context.

Stage II: Gap Analysis

A project always works towards achieving its desired objective. By the time it is completed, the
project envisages reaching the intended or desired situation as opposed to the situation from where
it had started. The project works towards bridging the “gap” that exists between the present and
the desired situation. Gap analysis is thus done to identify the gap between the current situation
and the desired situation.

Stage III: Project Planning

Project planning follows once the gap that needs to be bridged through the project has been
identified. During the project planning stage, objectives are defined, strategies by which to
achieve this objective are formulated, activities are identified, timeline-based targets are set and
resources are allocated to the project. A detailed implementation plan with the activity schedule
and milestone timelines is also prepared as part of the project planning. During this phase, a
project monitoring plan (PMP) is also devised to assess its achievement.

Discussing the project planning phase in reference to the example, the first step is to define the
project objective, which should be specific and realistic. The second step is to identify the
activities that are undertaken as part of the project to achieve its intended objective. This is
followed by deployment of resources for the project, primarily in the form of finances available
for implementation of the project. The money is utilised to recruit human resources (project staff)
based on the defined roles and responsibilities. Physical and infrastructure resources like office,
equipment etc., required for the project are also purchased. Timeline targets are set for
communication campaigns and constructing toilets in a phased manner.

Stage IV: Implementation and Monitoring

The next stage is project implementation during which the formulated plan is executed.
Monitoring of project activities is done concurrent to their implementation to ensure that the
project is on track and as per the formulated plan. Monitoring helps to identify deviations, if any,
from the project plan and also to introduce mid-course corrections. While executing a project, its
quality, time, cost and risk management needs to be considered to ensure that it is successfully
implemented within its predefined resources and timeline.

Stage V: Evaluation

After project activities are completed, many stakeholders, such as project implementers, policy
makers, the government and the external audience, among others, want to know whether there is
any change in the ‘situation’. The stakeholders also want to know whether this change is due to
the project intervention or to other external factors. An evaluation helps to systematically assess
the impact, effectiveness and the contribution of the project. Mid-term evaluations are helpful
because they provide timely learning which helps in course correction. Post project evaluations
help in getting insights that are helpful in formulation of other similar projects. Various techniques
or designs are thus adopted for different projects in different situations. These evaluation designs
are explained in detail in the following chapters.

From the above description of the stages in the project cycle it is clear that monitoring and evaluation
form a very key component. Figure 1.1 above implies that monitoring and evaluation are required at
all the stages of the project cycle. For instance:

i. At the problem identification or project conceptualization stage, one needs to undertake a
project needs analysis in which data is collected and evaluated to identify the needs of
the communities; possible project ideas to satisfy the identified needs are also evaluated and
closely analyzed (filtered) to finally arrive at the intended projects.

ii. Formulation of the project also involves evaluation to some extent. Project objective
formulation is a participatory activity that requires careful evaluation by all project
stakeholders. Cost and benefit analysis of each and every activity is done to determine the final
activities that will be included in the project. The purpose is to arrive at the activities that
have the highest impact in terms of fulfilling the project objectives.

iii. Implementation stage involves rolling out the project activities. This calls for monitoring
to ensure that the activities are implemented as planned.

iv. At the end of the project cycle, a terminal evaluation is done to determine the impact
of the whole project on the project beneficiaries.

1.6. Designing Projects

Having understood what a project and the project cycle are, it is important to understand what project
design is in order to ensure effective and useful M&E. M&E per se is not an activity that starts
when the project is nearing completion. Monitoring starts from the day the project is rolled
out; therefore, the M&E system needs to be conceived during the project design itself.

1.6.1 Results Chain

During the project designing phase, it is essential to specify all activities and objectives that are
to be achieved through the project. The results chain helps to manage projects, and at the same
time, to understand the causal linkage between project intervention and its desired impact rather
than managing the project based solely on activities. It helps to formulate a roadmap to the
envisioned change, while highlighting the necessary conditions and assumptions required for
ushering in a change in each situation (Foundations of Success, 2007).

Operations are based on 'if-and-then' logic. For example, if we put fuel in a car’s fuel tank,
only then can we drive and go somewhere in it. This 'if-and-then' logic is the means-to-an-end
relation or a cause-and-effect connection between the system components. So, what implication
does this system model have for projects?

Every project, as we know, has its own rationale of intervention, one that clearly addresses the
nuts-and-bolts of the problem of 'what', 'when', 'why', 'how', 'who' and 'where'. The clearer a
project is about the logic of change underpinning its project activities or processes, the better it
can deliver the results or achieve the objective it has in mind.

A results chain thus describes the causal pathways of the activities translating into expected results
i.e., the outputs, outcomes and impacts of a project. The results chain helps to track the progress
of the project from its more immediate results (outputs), to a result more proximate to the
achievement of the objective (outcome) and finally to a long-lasting result or goal (impact).

A basic results chain has the following components:

i. Inputs: This includes the resources that are available or allocated for the project. Input
resources may be natural, human, and financial, depending upon the nature of the project.
For example, funds allocated, human resources deployed, laptops allotted etc.

ii. Activities: Activities are actions undertaken using the resources. In simpler terms, this is
the work performed that converts inputs into outputs. For example, the training of
frontline health workers (FLWs) on the counselling of women, building of separate toilets
for girls in schools etc.

iii. Outputs: Outputs are the immediate effect of the activities of a project. Outputs are also
defined as the short-term results and often form the deliverables of the project. For
example, counselling of mothers on institutional delivery is the output achieved from the
activity training of FLWs on counselling. Also, the increased attendance rate of girls is
an output of the activity of building separate toilets for girls in schools.

iv. Outcomes: The mid-term results likely to be achieved from outputs are called outcomes.
Outcomes are generally the objective which the project aims to achieve. For example,
‘increase in the rate of institutional delivery’ is an outcome achieved through the output
of ‘effective counselling of women on institutional delivery’. Also, ‘increased female
literacy’ is an outcome achieved through the output of ‘increased female attendance rate’.

v. Impact: The final desired goal or the macro-level goal that the project envisages
achieving is defined as its impact. Impact is what the project aims to contribute towards,
rather than what it claims to achieve by itself. For example,
‘decreasing the Maternal Mortality Rate (MMR)’ is the impact which the project aims to
contribute to by providing the outcome, which is, ‘increase in the rate of institutional
delivery’. Also, ‘increase in the empowerment level of women’ is the impact which the
project aims to achieve through its outcome of ‘increased female literacy rate’.

Governing the interrelationships between inputs, activities, outputs, outcome and impact are
several assumptions or enabling pre-conditions that are necessary for the delivery of project results
and achievement of the project objective. They provide the necessary if not sufficient
preconditions without which the project cannot hope to achieve its results.

Examples of the results chain are presented below:
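To see the chain end to end, the following minimal sketch (written in Python purely for illustration) lays out the maternal-health example described above as a single structure; the class name, field names and list entries are illustrative assumptions, not a standard notation.

from dataclasses import dataclass
from typing import List

@dataclass
class ResultsChain:
    inputs: List[str]       # resources allocated to the project
    activities: List[str]   # work performed using the inputs
    outputs: List[str]      # immediate effects of the activities
    outcomes: List[str]     # mid-term results the project aims to achieve
    impact: str             # long-term goal the project contributes to

# Illustrative entries drawn from the maternal-health example in the text above.
maternal_health_chain = ResultsChain(
    inputs=["project funds", "frontline health workers (FLWs)", "training materials"],
    activities=["training of FLWs on the counselling of women"],
    outputs=["mothers counselled on institutional delivery"],
    outcomes=["increase in the rate of institutional delivery"],
    impact="contribution to a decrease in the Maternal Mortality Rate (MMR)",
)

# Reading the chain from inputs to impact traces the 'if-and-then' logic:
# if the inputs are available and the activities are carried out, then the
# outputs, outcomes and (ultimately) the impact are expected to follow.
for stage in ("inputs", "activities", "outputs", "outcomes", "impact"):
    print(f"{stage}: {getattr(maternal_health_chain, stage)}")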

1.6. Components of Project design

At this point, we need to examine the components of a project design and see how they all hinge
on monitoring and evaluation. It is important to note that a well-designed project should have a
written document which is logical and complete.

Let’s look at some of the components of a project design. The project document has the following;

i) Statement of the project: Describe the areas that emerged during the needs assessment and
that the project seeks to address.

ii) Project strategy: Explain clearly the beneficiaries of your project. Show the beneficial
changes to be brought by the project. Indicate the partners/ stakeholders involved and
show how the project will deliver its benefits to the intended group.

iii) Goals/purpose/vision: This is the ultimate objective of the project. It is the long-term
objective, e.g. to ensure that every youth in Kagote village is self-employed by 2025.

iv) Objectives/mission: State the immediate achievement at the end of the project, e.g. at
the end of the project, 400 youth from Kagote village will have been trained on how to
run their own small businesses.

v) Outputs: Describe the products that would result from the project activities

vi) Activities: Show all the activities which will be undertaken to produce the desired output
e.g workshops, developing training manual/ modules.

vii) Inputs: Give a full range of the resources needed (human, financial, technical etc) to carry
out the activities in terms of costs.

viii) Indicators: State the measures that will show the end results or changes achieved at the
end of the project. Indicators are derived from the objectives and outputs of a project.
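As a purely illustrative sketch (in Python), the indicators for the Kagote youth-training example above could be written down alongside the objective or output they track; the workshop target and data sources below are hypothetical values added for illustration.

# Hypothetical indicator sheet for the Kagote example; targets and data
# sources are assumptions for illustration, not figures from the course text.
indicators = [
    {
        "level": "objective",
        "statement": "400 youth from Kagote village trained to run small businesses",
        "indicator": "number of youth who complete the training",
        "target": 400,
        "data_source": "training attendance registers",
    },
    {
        "level": "output",
        "statement": "training workshops conducted",
        "indicator": "number of workshops held",
        "target": 20,
        "data_source": "workshop reports",
    },
]

for item in indicators:
    print(f"[{item['level']}] {item['indicator']} "
          f"(target: {item['target']}, source: {item['data_source']})")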

UNIT TWO

MONITORING AND EVALUATION: DEFINITIONS AND BASIC CONCEPTS

2.1 Introduction

After looking at the overview of projects, we will now focus on in-depth understanding of the
major concepts of Monitoring and Evaluation, and Social Research.

In this chapter, learners will develop an understanding of the fundamental concepts of M&E,
its importance for project management and learn the difference between the two. The
practitioner will also understand how M&E is juxtaposed with the results chain of a project.
Finally, at the end of the chapter, they will understand indicators and learn how to design
them. At the completion of the chapter, the practitioner will be able to:
i. Define the terms Project Monitoring and Evaluation
ii. Explain why there is need for project monitoring and evaluation
iii. Discuss project monitoring
iv. Discuss project evaluation
v. Explain the relationship between project monitoring and evaluation
vi. Differentiate between Social Research and Evaluation

2.2. Definition of Monitoring and Evaluation

2.2.1. What is Monitoring?

Monitoring is defined as the concurrent process of tracking the implementation of the project's
activities and the attainment of its planned outputs. It helps to provide real-time information on the
progress of the project in terms of completing its activities and achieving its immediate
outputs, both in terms of quality and targets.
Monitoring, thus, is an activity to see if an ongoing project is proceeding on track. It involves the
process of systematically collecting data to provide real-time information for all stakeholders
(managers, funders, participants) on the progress of implementation and the achievement of desired
outcomes.

Take Note:
Monitoring is a continuous process of collecting, analyzing, documenting, and reporting
information on progress to achieve set project objectives. It helps identify trends and patterns,
adapt strategies and inform decisions for project or programme management.

The critical functions of monitoring are: to gather feedback from the participants; collect data;
observe the implementation of activities of the project; analyse contextual changes; and
provide an early warning system of potential challenges. Analysis of monitoring data is critical
to ensure that the project is being implemented in the right direction for it to achieve its
intended outcomes. In case the project is not moving in its intended direction, midcourse
correction should be done. Monitoring is applicable to all programme levels (from input,
process, output and outcome). Most commonly, the focus is on output data, although it is also
important to track the goals and the objectives. Monitoring should ideally be an internal
function of the project management team. Monitoring, thus, plays a critical role in the success
of a project.
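As a rough illustration of the early-warning function described above, the minimal sketch below (in Python) compares actual figures against planned targets and flags large shortfalls for possible mid-course correction; the indicator names, targets and the 75% threshold are hypothetical assumptions.

# Hypothetical monitoring check: all figures and the threshold are assumed.
planned = {"household toilets constructed": 500, "awareness campaigns held": 12}
actual = {"household toilets constructed": 310, "awareness campaigns held": 11}

ALERT_THRESHOLD = 0.75  # below 75% of the planned target triggers a warning

for indicator, target in planned.items():
    achieved = actual.get(indicator, 0)
    progress = achieved / target
    status = "on track" if progress >= ALERT_THRESHOLD else "EARLY WARNING"
    print(f"{indicator}: {achieved}/{target} ({progress:.0%}) -> {status}")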

Monitoring of results helps to:

• Improve strategies and targeting, enabling decision makers to focus the project
resources on areas where they can get the maximum output.

• Understand project implementation barriers or challenges in real time and suggest
course correction measures.

• Ensure that the project is more effective and result-oriented. Monitoring also focuses on
impact-level changes throughout the project, rather than only at the end-of-project evaluation.

2.2.2. What is Evaluation?

Evaluation is defined as systematic research to determine whether a programme achieves its intended
outcomes and impacts. Evaluation is done, firstly, to see whether the envisaged objectives and
goals have been achieved or not and, secondly, to see whether the achievement is because of
the project interventions. It should assess the magnitude of change in the outcome and impact
and whether the change in the outcome or the impact can be attributed to the project
intervention.
Evaluation assesses if there is any deviation from the goals and the objectives, and whether
it can confidently be said that the objectives are achieved only because of the project intervention.
Evaluation, then, is a type of causal research that establishes the cause-effect relationship
between the activities and the outputs on the one hand, and the objectives and the goals on the
other.

Take Note:
Evaluation is a periodic assessment, as systematic and objective as possible, of an on-going or
completed project, programme or policy, its design, implementation and results. It involves
gathering, analysing, interpreting and reporting information based on credible data. The aim is to
determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness,
impact and sustainability.
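The attribution question can be illustrated with a simple worked sketch. The figures below are hypothetical, and the comparison-group calculation shown (a basic difference-in-differences) is only one of several evaluation designs discussed later in the course; it is used here purely to make the cause-effect idea concrete.

# Hypothetical attribution sketch: compare the change in the outcome for
# project villages with the change in comparable non-project villages.
project_before, project_after = 0.40, 0.65        # institutional delivery rate
comparison_before, comparison_after = 0.42, 0.50  # rate in comparison villages

change_project = project_after - project_before            # 0.25
change_comparison = comparison_after - comparison_before   # approx. 0.08

# The difference between the two changes is the effect attributed to the project.
estimated_effect = change_project - change_comparison
print(f"Estimated effect attributable to the project: {estimated_effect:.2f}")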
While monitoring facilitates mid-course correction in attainment of project outcomes,
evaluation helps analyse variances from envisioned objectives and goals. By providing
feedback to the project functionaries, M&E facilitates learning by doing. Development and
enhancement of in-house capacities to anchor the M&E functions is, thus, a prerequisite for
learning organisations.

2.3 Concepts of Monitoring and Evaluation

2.3.1. Concept of Monitoring

Ø Project Monitoring is a continuous process of collecting, analyzing, documenting, and
reporting information on progress to achieve set project objectives. It helps identify
trends and patterns, adapt strategies and inform decisions for project or programme
management.

Ø Project monitoring is a continuous and periodic review, and overseeing of the project
to ensure that input deliveries, work schedules, target outputs and other required action
proceed according to plan (UNFPA, 1990).

Ø Monitoring is a continuous process of collecting information at regular intervals about
on-going projects or programmes concerning the nature and level of their
performance.

Monitoring is an on-going activity which aims at tracking project progress against planned
tasks to ensure that the project is moving in the right direction at the right time. It aims
at providing regular oversight of the implementation of an activity in terms of input delivery,
work schedules, and targeted outputs, among other desired results.

Through routine data gathering, analysis and reporting, project monitoring aims at providing
project management staff and other stakeholders with information on whether progress is
being made towards achieving project objectives. In this regard, monitoring represents a
continuous assessment of project implementation in relation to project plans, resources,
infrastructure and use of services or products by project beneficiaries. Let us try to discuss the
importance of project monitoring.

1. Project managers and their stakeholders (including funding agencies) need to know
the extent to which their project activities are implemented as per the plan, meeting
the set objectives and leading to their desired effect.

2. Monitoring and, to some extent, evaluation builds greater transparency and
accountability in terms of the use of project resources. All project stakeholders develop
confidence in the project when they know that resources are well spent on the planned
project activities.

3. Information generated through the monitoring exercise provides project managers and
staff with a clearer basis for decision-making. These decisions concern continuing
or discontinuing certain activities that may be expensive to implement and which may
have less impact as far as achieving project objectives is concerned.

4. Future project planning and development is improved when guided by lessons learned
from project experience. Documented results of previous monitoring activities may
serve as good lessons for future project implementation.

5. Monitoring allows the project manager to maintain control of the project by providing
him/her with information on the project status at all times.

6. Project monitoring alerts managers to actual and potential project weaknesses,
problems and shortcomings before it is too late. This provides managers with the
opportunity to make timely adjustments and corrective actions to improve the
program/project design, work plan and implementation strategies. In short, monitoring
activities must be undertaken throughout the lifetime of the project.

Effective monitoring needs adequate planning; baseline data; reliable indicators of
performance and results; and practical implementation mechanisms that include actions such as
field visits, stakeholder meetings, documentation of project activities and regular reporting.
Project monitoring is normally carried out by project management staff and other stakeholders.

2.3.2 The concept of project Evaluation

Ø Project Evaluation is a periodic assessment, as systematic and objective as possible,
of an on-going or completed project, programme or policy, its design, implementation
and results. It involves gathering, analysing, interpreting and reporting information
based on credible data. The aim is to determine the relevance and fulfilment of
objectives, developmental efficiency, effectiveness, impact and sustainability.

Ø Project evaluation can be viewed as the process of systematic collection, analysis and
interpretation of project-related data that can be used to understand how the project is
functioning in relation to the project objectives. It is a process of ascertaining the decision
areas of concern, selecting appropriate information, and collecting and analyzing that
information in order to report summary data useful to decision makers in selecting
among alternatives (Alkin, 1969). Project evaluation is a necessary component that
must be included in the project design.

Evaluation is a systematic approach to attribute changes in specific outcomes to program
activities. It has the following characteristics:
a. Conducted at important program milestones
b. Provides in-depth analysis
c. Compares planned with actual achievements
d. Looks at processes used to achieve results
e. Considers results at outcome level and in relation to cost
f. References implemented activities
g. Reports on how and why results were achieved
h. Attributes program inputs and outputs to observed changes in program outcomes
and/or impact

As we continue with our discussion and understanding of project evaluation, we will realize, in
lecture five, that various scholars have attempted to define evaluation differently according to the
purpose of the evaluation results and the evaluation models employed. Most of the definitions are
geared towards justifying the evaluation models that their authors subscribe to, but they do not go
beyond the above definition. We shall examine this later; for now, let us focus on the various reasons
why it is important for us to carry out project evaluation.

i) First and foremost, project evaluation provides managers with information regarding
project performance. You will realize that, sometimes during project implementation,
project plans may change significantly. In this case, evaluation may come in handy to
verify whether the program is running as originally planned. In addition, evaluations provide
signs of project strengths and weaknesses and, therefore, enable managers to improve
future planning, delivery of services and decision making.

ii) Project Evaluation assists project managers, staff and other stakeholders to determine
in a systematic and objective way the relevance, effectiveness and efficiency of
activities (expected and unexpected) in light of specific objectives.

iii) Mid-term evaluations may serve as a means of validating the results of initial
assessments obtained from project monitoring activities.

iv) If conducted after the termination of a project, an evaluation determines the extent to
which the interventions were successful in terms of their impact and sustainability of
results.

v) Evaluations assist managers to carry out a thorough review and rethinking about
projects in terms of their goals and objectives and means to achieve them.

vi) Evaluation can be used to generate detailed information about project implementation
process and results. Such information can be used for public relations, fundraising, and
promotion of services in the community as well as identifying possibilities for project
replication.

vii) Evaluation improves the learning process. Evaluation results should be documented to
help in explaining the causes and reasons why the project succeeded or failed. Such
documentation can help in making future project activities more relevant and effective.

There is need for all project stakeholders to have a clear knowledge and understanding
of Monitoring and Evaluation. This is because knowledge of M&E helps project staff
to improve on their ability to effectively monitor and evaluate the progress of the
projects. It also enables them to strengthen the performance of their projects thus
increasing the impact of the project results to beneficiaries.

With basic orientation and training in monitoring and evaluation, project staff can implement
appropriate techniques to carry out a useful evaluation of their projects. Project staff with
knowledge in monitoring and evaluation can be in a good position to vet and evaluate external
evaluators’ capacity to evaluate their projects. Program/project evaluations carried out by
inexperienced persons might be time consuming, costly and could generate impractical or
irrelevant information.

What is Monitoring and Evaluation?

Monitoring and Evaluation is a process of continued gathering of information and its
analysis, in order to determine whether progress is being made towards pre-specified goals
and objectives, and to highlight whether there are any unintended (positive or negative) effects
from a project/programme and its activities.

2.3.1 Relationship between Monitoring and Evaluation

You will realize from the above discussions of project monitoring and evaluation that we can
comfortably conclude that the two serve project managers differently. However, sometimes
you may find it difficult to separate the two concepts since they are closely related. The two
support each other. Now let us see how the two concepts are related:

• Through routine tracking of project progress, monitoring can provide quantitative
and qualitative data useful for designing and implementing project evaluation
exercises.

• Through the results of periodic evaluation, monitoring tools and strategies can be
refined and further developed.

• Good monitoring may substitute for evaluation in cases where:

- projects are short-term

- projects are small-scale

1. The main objective of monitoring is to obtain information that can be used in
improving the process of implementation of an ongoing project. However, when a final
judgment regarding project results, impact, sustainability and future development is
needed, an evaluation must be conducted.

2. Project evaluations are less frequent than monitoring activities, considering their costs
and time needed.

It is important to understand that project monitoring can be different from project
evaluation in some aspects. The table below shows a summary of the differences
between project monitoring and project evaluation.

Table 2.1 Comparison between Monitoring and Evaluation

Kusek and Rist (2004) identify other complementary roles of Monitoring and Evaluation, as
indicated in Table 2.2.

Table 2.2: Complementary role of monitoring and evaluation

2.3 The difference between Research and Evaluation

We would like to introduce the concept of ‘social research’, which you know well from the
Research Methods unit you covered in your first year. The discussion of the concept of project
evaluation may have left you wondering how different the concept is from research. In this
section we will attempt to highlight the differences between evaluation and social research.

Social research is an inquiry that is based on logic through observation and involves
the interaction between ideas and evidence. Ideas help social researchers make sense of
evidence and use such evidence to test, extend or revise existing knowledge or facts. Social
research thus attempts to create or validate theories through data collection and analysis, and its
goals are exploration, description, prediction, control and explanation.
Take Note:
Research is a process that involves systematic collection, analysis and interpretation of data
with the purpose of describing, explaining, predicting and controlling a phenomenon.

From the above description of social research, we can note that research shares some aspects
with evaluation in that both are concerned with the generation of knowledge and both are
aimed at finding answers to significant inquiry questions. In addition, both employ scientific
approaches of inquiry, which are systematic in nature.

However, the two concepts differ to some extent as shown below;

i. Evaluation findings are concerned with phenomena which are not generalized beyond
their application to a given project or program while research aims at generalizing
findings to the population

ii. Research and evaluation are undertaken for different reasons. Research satisfies
curiosity by advancing knowledge while evaluation contributes to the solution of
practical problems through judging the value of whatever is evaluated

iii. Research seeks conclusions while evaluation leads to decisions

iv. Research is concerned with relationships among two or more variables while
evaluation describes the objects of evaluation

v. The researcher sets his own problems. Evaluations are normally commissioned by
clients

vi. Evaluation follows the set standards of Feasibility, Propriety, Accuracy and Utility
while research does not.

2.4. Purpose/Importance of Monitoring and Evaluation

• Support project/programme implementation with accurate, evidence-based reporting
that informs management and decision-making to guide and improve
project/programme performance.

• Contribute to organizational learning and knowledge sharing by reflecting upon and
sharing experiences and lessons.

• Uphold accountability and compliance by demonstrating whether or not our work has
been carried out as agreed and in compliance with established standards and with any
other stakeholder requirements.

• Provide opportunities for stakeholder feedback.

• Promote and celebrate project/programme work by highlighting accomplishments and
achievements, building morale and contributing to resource mobilization.

• Support strategic management by providing information to inform the setting and
adjustment of objectives and strategies.

• Build the capacity, self-reliance and confidence of stakeholders, especially
beneficiaries and implementing staff and partners, to effectively initiate and implement
development initiatives.

2.5. Characteristics of Monitoring and Evaluation

Monitoring tracks changes in program performance or key outcomes over time. It has the
following characteristics:

i) Conducted continuously; keeps track and maintains oversight

ii) Documents and analyzes progress against planned program activities

iii) Focuses on program inputs, activities and outputs

iv) Looks at processes of program implementation

v) Considers program results at output level

vi) Considers continued relevance of program activities to resolving the health problem

vii) Reports on program activities that have been implemented and on immediate results
that have been achieved

Evaluation is a systematic approach to attribute changes in specific outcomes to program
activities. It has the following characteristics:

i. Conducted at important program milestones

ii. Provides in-depth analysis

iii. Compares planned with actual achievements

iv. Looks at processes used to achieve results

v. Considers results at outcome level and in relation to cost

vi. Considers overall relevance of program activities for resolving health problems

vii. References implemented activities

viii. Reports on how and why results were achieved

ix. Contributes to building theories and models for change

x. Attributes program inputs and outputs to observed changes in program outcomes and
impact

2.6. Key benefits of Monitoring and Evaluation

i. Provide regular feedback on project performance and show any need for ‘mid-course’
corrections

ii. Identify problems early and propose solutions

iii. Monitor access to project services and outcomes by the target population;

iv. Evaluate achievement of project objectives, enabling the tracking of progress towards
achievement of the desired goals

v. Incorporate stakeholder views and promote participation, ownership and accountability

vi. Improve project and programme design through feedback provided from baseline, mid-
term, terminal and ex-post evaluations

vii. Inform and influence organizations through analysis of the outcomes and impact of
interventions, and the strengths and weaknesses of their implementation, enabling
development of a knowledge base of the types of interventions that are successful (i.e.
what works, what does not and why).

viii. Provide the evidence basis for building consensus between stakeholders

2.7. Monitoring vs Evaluation: Summary

Having understood the definition of M&E, the practitioner can now list the key differences
between the two. Some of the distinctions between monitoring, which is to see ‘what we are
doing’ and evaluation, which is to assess ‘what we have done’ are given in the matrix below
(KEPA, 2015).

Unit 2 Self-assessment questions
1. Define the following terms:
a) Monitoring
b) Evaluation
2. Explain why there is need for monitoring and evaluation
3. Discuss the concept of project monitoring
4. Discuss project evaluation
5. Explain the relationship between monitoring and evaluation
6. Identify the core concerns of evaluation

UNIT THREE

LEVELS OF MONITORING AND EVALUATION

3.0. Introduction

Welcome to this unit which is going to take you through various levels of monitoring and
evaluation. First and foremost, the lecture will attempt to discuss the concept of ‘project
evaluator’. The unit will then explore the core concerns of monitoring and evaluation and then
highlight the various levels of monitoring and evaluation.

3.1. Unit Objectives

At the end of this unit, you should be able to;


i) Differentiate between internal and external evaluators
ii) Explain the advantages of using internal and external project evaluators
iii) Outline key questions that evaluators are concerned with when evaluating projects
iv) Describe the levels of monitoring and evaluation

3.2 Project Evaluators

In unit two we discussed the concepts of monitoring and evaluation. In this section we are
going to discuss the concept of project evaluators, a concept that is closely related to what
we discussed in the previous lectures.

It is clear that for one to be called a project ‘evaluator’, he or she must be qualified and experienced
in carrying out monitoring and evaluation. We can therefore conclude that project evaluators are
individuals with skills, knowledge and hands-on experience involving theories and practices
in monitoring and evaluation. These individuals may either be within the project or outside
the project. In general, there are two types of project evaluators: external evaluators, whom we
commonly refer to as consultants, and internal evaluators – those within the project. All these
evaluators are at the disposal of the project manager, but he or she must determine what
type of evaluator would be most beneficial to the project. Let us now try to examine the possible
options that a project manager can explore in terms of choosing and utilizing project evaluators:

a. External Evaluator

External evaluators are contracted from outside the project. These may include qualified
and experienced individuals, agencies or organizations with a credible track record in
evaluation. These evaluators are often found in universities, colleges,
hospitals, consulting firms, or within the home institution of the project. Because external
evaluators maintain their positions with their organizations, they generally have access to
more resources than internal evaluators (i.e., computer equipment, support staff, library
materials, etc.). In addition, they may have broader evaluation expertise than internal
evaluators, particularly if they specialize in project evaluation or have conducted
extensive research on your target population. External evaluators may also bring a
different perspective to the evaluation because they are not directly affiliated with your
project. However, this lack of affiliation can be a drawback. External evaluators are not
staff members; they may be detached from the daily operations of the project, and thus
have limited knowledge of the project’s needs and goals, as well as limited access to
project activities.

b. Internal Evaluator

A project manager may have the option of assigning the responsibility for evaluation to one
of the staff members or of hiring an evaluator to join the project as a staff member. This
internal evaluator could serve as both an evaluator and a staff member with other
responsibilities. Because an internal evaluator works within the project, he or she may be
more familiar with the project and its staff and community members, have access to
organizational resources, and have more opportunities for informal feedback with project
stakeholders. However, an internal evaluator may lack the outside perspective and
technical skills of an external evaluator.

c. Internal Evaluator with an External Consultant

A final option combines the qualities of both evaluator types. An internal staff person
conducts the evaluation, and an external consultant assists with the technical aspects of
the evaluation and helps gather specialized information. With this combination, the
evaluation can provide an external viewpoint without losing the benefit of the internal
evaluator’s first-hand knowledge of the project. This may be an appropriate option but it
may be too expensive.

3.2.1 The Evaluator’s Role


Whether you decide on an external or internal evaluator or some combination of both, it is
important to think through the evaluator’s role. As the goals and practices of the field of project
evaluation have diversified, so have the evaluators’ roles and relationships with the projects they
evaluate.

Take Note:
It is important to note that the idea of multiple evaluator roles is a controversial one. Those
operating within the traditional project evaluation tenets still view an evaluator’s role as
narrowly confined to judging the merit or worth of a program.

In most cases the project manager will draft the roles of a project evaluator depending on the
nature of the evaluation and the kind of information required. The roles will also be based
on the option of the evaluator that the project manager deems fit. For those evaluators that are
recruited as part of the staff of a project, their roles may be defined by a job specification and
description, while the external evaluators’ roles may be specified by terms of reference (TORs).

Depending on the primary purpose of the evaluation and with whom the evaluator is working
most closely (funders vs. program staff vs. program participants or community members), an
evaluator might be considered a consultant for program improvement, a team member with
evaluation expertise, a collaborator, an evaluation facilitator, an advocate for a cause, or a
synthesizer. If the purpose of evaluation is to determine the worth or merit of a project, the
project manager may look for an evaluator with methodological expertise and experience. If
the evaluation is focused on facilitating project improvements, an evaluator who has a good
understanding of the project and is reflective may be suitable. If the primary goal of the
evaluation is to design new projects based on what works, an effective evaluator would need
to be a strong team player with analytical skills.

3.3 Core Concern of Project Evaluators

After discussing the concept of evaluators, let us now focus on the core concerns of evaluations.
All experienced experts in evaluation have certain aspects of concern that they would want to
establish or understand whenever they are given a project evaluation task. These aspects are as
follows:

a. Project Progress: The project evaluator will be concerned with continual development
of the project towards the achievement of the planned objectives.

b. Project Adequacy: Project adequacy means that the project objectives, inputs or
activities are enough for the intended purpose.

c. Project Relevance: Relevance relates to how the project’s objectives and activities
respond to the needs of the intended beneficiaries.

d. Validity of the project design: Validity of the project design assesses the extent to which
the project design:

i. Sets out clear immediate objectives and indicators of their achievement,

ii. Focuses on the identified problems and needs and clearly spells out the
strategies to be followed for solving the problems and meeting the identified
needs,

iii. Describes the main inputs, outputs and activities needed to achieve the
objectives,

iv. States the means of verification of the achievement of objectives
and valid assumptions about the major external factors affecting the
project.

e. Project effectiveness: Effectiveness refers to the extent to which a project produces the
desired result. Effectiveness measures the degree of attainment of the pre-determined
objectives of the project. A project is effective if its results are worthwhile.

f. Project efficiency: This is an expression of the extent to which the methods or activities used
by the project are the best in terms of their cost, resources used, time required
and appropriateness to the task. It examines whether there was adequate
justification for the resources used and identifies alternative strategies to achieve better
results with the same inputs.

g. Project impact: Measurement of impact is concerned with determining the overall effect of a project's activities in terms of socio-economic and other aspects of the community.

h. Project cost-effectiveness analysis: This refers to the evaluation of alternatives according to both their costs and their effects with regard to producing an outcome or a set of outcomes.

i. Project sustainability: Sustainability examines the extent to which the project's strategies and activities are likely to continue to be implemented after the termination of the project and the withdrawal of external assistance.

j. Project unintended outcomes: Unintended outcomes are unforeseen negative or positive effects of a project. For example, an adjacent community benefiting as a result of a project implemented in the neighboring community.

k. Project alternative strategies: Alternative strategies to solving the identified needs or problems are analyzed and recommended for the next phase of the project, normally if the original strategy is found inappropriate.

l. Project cost benefits: Cost-benefit analysis compares the financial costs of a project to the financial benefits of that project. It is normally conducted on more than one project (a short numeric sketch of cost-effectiveness and cost-benefit comparisons follows this list).
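To make the last two concerns concrete, the short Python sketch below compares two hypothetical project alternatives. The names, costs, benefits and output figures are illustrative assumptions only, not data from any real evaluation; the sketch simply shows the kind of arithmetic a cost-effectiveness or cost-benefit comparison involves.

    # Illustrative only: hypothetical costs, benefits and outcomes for two alternatives.
    alternatives = {
        "Alternative A": {"cost": 120_000, "benefit": 180_000, "households_served": 400},
        "Alternative B": {"cost": 150_000, "benefit": 210_000, "households_served": 600},
    }

    for name, d in alternatives.items():
        cost_effectiveness = d["cost"] / d["households_served"]   # cost per household served
        benefit_cost_ratio = d["benefit"] / d["cost"]              # financial benefits vs costs
        print(f"{name}: cost per household = {cost_effectiveness:.2f}, "
              f"benefit-cost ratio = {benefit_cost_ratio:.2f}")

Under these assumed figures, Alternative B serves each household more cheaply, while both alternatives return more benefits than they cost; an evaluator would weigh both ratios alongside the non-financial concerns listed above.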

3.4. Levels of Evaluations

After looking at various concerns of evaluations, let us now focus on levels of monitoring and evaluation. A project of national concern with multiple beneficiaries requires that its effects be monitored and evaluated at different levels. These levels include the community, district, national and donor levels, among others. Monitoring and evaluating such a project at the mentioned levels is very important since each level is unique. Due to this uniqueness, it is possible for an evaluator to apply different monitoring and evaluation methods befitting each level. These may bring about unique project results and effects depending on each level. To some extent these results complement other findings that may be experienced at a higher level.

Take Note:
• Consider that the government of Uganda has acquired funds from the World Bank to invest in the construction of health centers in order to improve access to medical care for all Ugandans. This can be regarded as a national project with multiple stakeholders.
• The lowest level that can determine the effects of the project is at the
community level. If access to medical care has been achieved at the community
level, the effects can be felt at the district level and then the province and even
nationally. The total effects will contribute to achievement of the project objectives.

Let us now discuss each of the levels mentioned above:

3.4.1. Monitoring and Evaluation at Community Level

This is done at the grassroots and zonal levels because this is where the implementation and utilization of the benefits of the projects take place. The major purpose of monitoring and evaluation at this level is to improve the implementation and management of projects.

The objectives for monitoring and evaluation at this level include:

§ Ensuring that project activities are implemented on time

§ Ensuring that experts have been contracted to provide consultancy on the project

§ Ensuring that project inputs are available and utilized in the right way as planned

The activities for monitoring and evaluation at this level include:

§ Identify community’s needs

§ Organize the needs in order of priority

§ Develop projects to address those priority areas

§ Identify teams and their roles to spearhead the projects

§ Design work plans and their performance standards

§ Compare what is happening with what was planned, to determine whether the
project is on schedule as planned

§ Involve the local community to ascertain the quality of the projects

The monitoring teams should ensure that they make frequent visits to the project sites to observe,
and discuss with everyone involved in the projects. This should be captured in field visit reports.
This information can be utilized to improve the implementation of the project or stored for future
use.

3.4.2. Monitoring and Evaluation at District and Local Authority Level

The monitoring and evaluation team should get information from the teams at the local levels. It is important for the team to monitor and evaluate the outcomes of the project. They should also monitor and evaluate the increase in the strength, capacity and power of the target community to stimulate its own development. With the above example, the team should be able to establish whether the community will be able to maintain and manage the health centers even when the donor funding is withdrawn.

The objectives of monitoring and evaluation at this level include:

a. Supporting the improvement in project performance

b. Measuring the applicability of the way the project was designed in relation to
community strengthening

The methods used include routine monitoring and supervisory support by the district project coordinator, community development assistants, other technical staff, and politicians.

The major issues to consider in the routine monitoring include:

i. Levels of actual community, local authorities, districts and donor contributions (in terms
of funds, materials, time and expertise)

ii. Timely implementation and quality of projects

iii. Appropriate use and accountability of community and donor resources; levels of
community involvement in the project

iv. Community involvement in projects

v. Timely use of information generated through the community routine monitoring and
evaluation

3.4.3. Monitoring and Evaluation at National and Donor Level

At the national or country level, there are two main stakeholders:

a. The ministry or agency that is implementing the intervention or project – the government's interest in projects is to ensure nationwide community development. Its interest will be to ensure community participation in projects that cater for people's interests. The major involvement of the government agencies (for example, the Ministry of Agriculture) will be to ensure that the project evaluation methodology is well known to the community. The evaluation will be concerned with the impact of the project on a wider target group. This will involve the contribution of the agricultural project to the economic development of the country as a whole.

b. Any external national or international donors – their major concern is the effectiveness of the projects. Their major focus is the percentage of outputs attained as a result of the projects.

3.5. Unit Summary

The lecture explores the various levels of evaluation by first looking at the concept of the evaluator and the types of evaluators that the project manager can engage for project evaluation. The lecture also provides insight into what project managers should look at when selecting types of evaluators for project evaluation. Core concerns of project monitoring and evaluation have also been discussed. The effects of a project of national concern can be assessed adequately at certain levels.

This unit, using relevant illustrations, discussed the levels of project monitoring. The lecture also demonstrates how project monitoring and evaluation activities differ at each level of monitoring and evaluation.

Unit 3 Self-assessment questions


1. Explain the options that a project manager has in deciding the type of evaluator that will handle project evaluation activities.
2. Outline the key questions that evaluators are concerned with when evaluating projects.
3. With illustrations, discuss the ways in which monitoring and evaluation vary across the levels of project evaluation.

UNIT FOUR

TYPES OF MONITORING AND EVALUATION

4.0. Introduction

In the previous unit we discussed the concept of the evaluator, the core concerns of evaluation and the various levels of evaluation. We also established that monitoring and evaluation vary with different levels of evaluation; however, the levels complement each other. In this unit we shall examine in detail the various types of monitoring and evaluation. At the end of this unit, you should be able to:
1. Explain types of project monitoring
2. Describe types of project evaluations

4.1. Types of monitoring

You will recall that in unit one we learned that the main components of a project design include the project purpose, which is the ultimate objective of the project; the project objectives, which state the immediate achievements expected at the end of the project; the project outputs, which describe the kind of products produced by the project; the project activities, which show all the actions that will be undertaken to produce the desired outputs; and the project inputs, which give the full range of resources needed (human, financial, technical, etc.) to carry out the project activities. During the implementation of the project, all these aspects must be monitored closely.

Figure 4.1 Key Types of Monitoring

Activity:
1. Look at figure 4.1 keenly.
2. Using a project of your own choice, outline at least four aspects that you will monitor at both the implementation and results stages.

Figure 4.1 shows two main types of monitoring: implementation monitoring and results monitoring. Let us examine each one of these types of monitoring.

a. Implementation Monitoring: This is concerned with tracking the means and strategies
used in project implementation. It involves ensuring that the right inputs and activities
are used to generate outputs and that the work plans are being complied with in order to
achieve a given outcome. Implementation monitoring as the name suggests is the type
of monitoring carried out during the roll-out of project plans. Figure 4.1 shows that the main concern of implementation monitoring is the inputs, activities and outputs. It involves determining both the amount of activity and compliance with the plan's standards. The question regarding the amount of activity is addressed for the entire project rather than for an individual activity.

The other concern of project managers is whether the planned inputs are utilized for the intended purpose. This sort of monitoring is normally done annually to determine whether the planned projects and activities are completed on time, and that information is then used to better interpret the 'effectiveness of the projects' through results monitoring.

b. Results Monitoring: This looks at the overall goal/impact of the project and its impacts
on society. It is broad based monitoring and aligns activities, processes, inputs and
outputs to outcomes and benefits. Ideally, all monitoring should be results based. Now,
let us focus on the second type of monitoring otherwise known as results monitoring.
When you look at figure 4.1 carefully, you will notice that the monitoring for results is
a stage higher than implementation monitoring. It defines the expected results in terms
of project outcome and project impact.

A single project activity may be divided into different milestones. The milestones can be referred to as segments of an overall result. Monitoring for results therefore means that the project manager's major concern is whether the project has attained the milestones that lead it to the overall result.

c. Activity-based monitoring: This focuses on the activity. Activity-based monitoring seeks to ascertain that the activities are being implemented on schedule and within budget. The main shortcoming of this type of monitoring is that activities are not aligned to the outcomes. This makes it difficult to understand how the implementation of these activities results in improved performance.

d. Process (activity) monitoring: Tracks the use of inputs and resources, the progress of activities, how activities are delivered – the efficiency in time and resources – and the delivery of outputs.

e. Compliance monitoring: Ensures compliance with, say, donor regulations and expected results, grant and contract requirements, local governmental regulations and laws, and ethical standards.

f. Context (situation) monitoring: Tracks the setting in which the project/programme operates, especially as it affects identified risks and assumptions, and any unexpected considerations that may arise, including the larger political, institutional, funding and policy context that affects the project/programme.

g. Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme. It includes beneficiary satisfaction or complaints with the project/programme, including their participation, treatment, access to resources and their overall experience of change.

h. Financial monitoring: Accounts for costs by input and activity within predefined categories of expenditure, to ensure implementation is according to the budget and time frame (a minimal budget-check sketch follows this list).

i. Organizational monitoring: Tracks the sustainability, institutional development and capacity building in the project/programme and with its partners.
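As a simple illustration of the financial monitoring referred to under item h above, the Python sketch below compares actual spending against budgeted amounts per expenditure category. The categories, figures and the 5% tolerance threshold are hypothetical assumptions used only to show what a routine budget-variance check looks like.

    # Illustrative only: hypothetical budget lines for one reporting period.
    budget = {"materials": 50_000, "labour": 30_000, "transport": 10_000}
    actual = {"materials": 47_500, "labour": 33_000, "transport": 9_000}

    for category, planned in budget.items():
        spent = actual.get(category, 0)
        variance = spent - planned            # positive = overspent, negative = underspent
        pct = 100 * variance / planned
        flag = "REVIEW" if abs(pct) > 5 else "ok"   # assumed 5% tolerance threshold
        print(f"{category}: planned {planned}, spent {spent}, variance {pct:+.1f}% [{flag}]")

In practice such a check would be run from the project's accounting records at each reporting period, and any flagged category would prompt corrective management action.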

Take Note:
Take an example of a Bore hole project that has a general purpose of providing clean
and safe drinking water to the community, the milestone can be considered as:
§ Securing funds for the project
§ Sensitizing the community
§ Putting together a steering management committee
§ Procuring the consultancy for the project
§ The actual sinking of the bore hole
§ Commissioning of the bore hole
All the above milestones lead to the final result, which is a complete borehole that can provide clean and safe water to the community. The milestones are arranged in order of priority leading towards the overall result: achieving the first one leads to the achievement of the second, and the achievement of each milestone gives us greater assurance of achieving the overall result.

4.2. Types of Evaluation

We have discussed the concept of monitoring in units one and two; in this section we will focus on the various types of evaluation. You should note that types of evaluation are very different from the models of evaluation that we will examine in subsequent lectures. In this section we are going to discuss the main types of evaluation.

4.2.1. Ex-Ante Evaluation (Needs Assessment)

Conducted before the implementation of a project as part of the planning. Needs assessment
determines who needs the program, how great the need is, and what might work to meet the need.
Implementation (feasibility) evaluation monitors the fidelity of the program or technology
delivery, and whether or not the program is realistically feasible within the programmatic
constraints.

According to Singh and Nyandemo (2004), ex-ante evaluation is pre-project evaluation undertaken
before the implementation of a given project in order to assess the development needs and
potentials of the target group/region to test project hypothesis or determine the feasibility of a
planned project. This kind of evaluation is carried out during the planning phases of a project.

During such evaluation, the following key questions need to be addressed:

- What has the project set out to achieve?

- What are the objectives of the project?

- Who are the intended beneficiaries and how are they to benefit?

- What are the main intended inputs (financial, technical, manpower, etc.)?

- What are the main intended outputs?

- How do the outputs relate to the objectives?

- What is the implementation plan?

- Have alternative methods of achieving the objectives been considered?

Take Note
The following areas should be addressed at this stage of evaluation:
§ Needs assessment to determine who needs the project and how great the need is
§ Evaluability assessment to determine whether the evaluation is feasible and how stakeholders can help to shape its usefulness
§ Project structure conceptualization, which defines the project or technology, the target population and possible outcomes
§ Project implementation evaluation, which determines the fidelity of the project or technology delivery
§ Process evaluation, which investigates the processes required to deliver the project, including alternative delivery procedures

4.2.2. Formative/ Mid-term Evaluation

Conducted during the implementation of the project. Used to determine the efficiency and
effectiveness of the implementation process, to improve performance and assess compliance.
Provides information to improve processes and learn lessons. Process evaluation investigates the
process of delivering the program or technology, including alternative delivery procedures.
Outcome evaluations investigate whether the program or technology caused demonstrable effects
on specifically defined target outcomes. Cost-effectiveness and cost-benefit analysis address
questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
Formative evaluation is conducted during the development and implementation of a project in order to provide project managers with the information necessary for improving the project. This type of evaluation is sometimes referred to as mid-term evaluation.

In general, formative evaluations are process oriented and involve a systematic collection of data
to assist decision-making during the planning or implementation stages of a project. They usually
focus on operational activities, but might also take a wider perspective and possibly give some
consideration to long term effects. While staff members directly responsible for the activity or
project are usually involved in planning and implementing formative evaluations, external
evaluators might also be engaged to bring new approaches or perspectives (Nadris, 2002).
Questions typically asked in those evaluations include:

- To what extent do the activities and strategies correspond with those presented in
the plan? If they are not in harmony, why are there changes? Are the changes
justified?

- To what extent did the project follow the timeline presented in the work plan?

- Are activities carried out by the appropriate personnel?

Other issues addressed by formative evaluations include:

• To what extent are the project's actual costs in line with the initial budget allocation?

• To what extent is the project moving towards the anticipated goals and objectives of the project?

• Which of the activities or strategies are more effective in moving towards achieving the goals and objectives?

• What barriers were identified? How and to what extent were they dealt with?

• What are the main strengths and weaknesses of the project?

• To what extent are the beneficiaries of the project active in decision making and implementation?

• To what extent do project beneficiaries have access to services provided by the project? What are the obstacles?

• To what extent are the project beneficiaries satisfied with project services?

4.2.3. Ex-post evaluation: Conducted some time after the project is completed, to assess the sustainability of project effects and impacts and the long-term impact of the project. It identifies factors of success to inform other projects.

4.2.4. External evaluation: Initiated and controlled by the donor as part of a contractual agreement. Conducted by independent people who are not involved in implementation, often guided by project staff.

4.2.5. Internal or self-assessment: Internally guided reflective processes. Initiated and controlled by the group for its own learning and improvement. Sometimes done by consultants who are outsiders to the project. There is a need to clarify ownership of the information before the review starts.

4.2.6. Real-time evaluations (RTEs): are undertaken during project/programme implementation to provide immediate feedback for modifications to improve on-going implementation.

4.2.7. Meta-evaluations: are used to assess the evaluation process itself. Some key uses of meta-
evaluations include: take inventory of evaluations to inform the selection of future
evaluations; combine evaluation results; check compliance with evaluation policy and
good practices; assess how well evaluations are disseminated and utilized for
organizational learning and change, etc.

4.2.8. Thematic evaluations: focus on one theme, such as gender or environment, typically
across a number of projects, programmes or the whole organization.

4.2.9. Cluster/sector evaluations: focus on a set of related activities, projects or programmes, typically across sites and implemented by multiple organizations.

4.2.10. Impact evaluations: are broader and assess the overall or net effects – intended or unintended – of the programme or technology as a whole. They focus on the effects of a project/programme rather than on its management and delivery, and therefore typically occur after project/programme completion, during a final evaluation.

4.2.11. Summative evaluation: Conducted at the end of the project to assess the state of project implementation and the achievements at the end of the project, and to collate lessons on content and the implementation process. Summative evaluation (also called outcome or impact evaluation) occurs at the end of project/programme implementation to assess effectiveness and impact. It looks at what the project has actually accomplished in terms of its stated goals. There are two approaches under this type of evaluation.

a. End Evaluation – aims at establishing the project status at the end of the project cycle, for example when external aid is terminated and there is a need to identify possible follow-up activities by either the donor or the project staff.

b. Ex-post – these evaluations are carried out two to three years after external
support is withdrawn. The main purpose is to assess what lasting impact the
project has had or is likely to have and to extract lessons of experience. This type
of evaluation is sometimes referred to as impact evaluation.

Summative evaluation questions include:

§ To what extent did the project meet its overall goals and objective?

§ What impact did the project have on the lives of the beneficiaries?

§ Was the project equally effective for all the beneficiaries?

§ What components were the most effective?

§ What significant unintended impacts did the project have?

§ Is the project replicable?

§ Is the project sustainable?

For each of these questions, both qualitative and quantitative data can be useful.

Take Note:
The following areas should be addressed at this stage of evaluation:
§ Outcome evaluation – to investigate whether the programme or technology caused demonstrable effects on specifically defined target outcomes
§ Impact evaluation – to assess the overall or net effects, intended or unintended, of the project or the technology as a whole
§ Cost-effectiveness and cost-benefit analysis – to address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
§ Secondary analysis – to re-examine existing data to address new questions or use methods not previously employed
§ Meta-analysis – to integrate the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question.

Unit 4 Self - assessment Questions


1. Identify and explain two major types of project monitoring
2. Outline and examine types of project evaluations

UNIT FIVE

MONITORING AND EVALUATION THEORIES AND MODELS

5.1 Introduction

In our previous unit we learned that different evaluations can have different demands depending on the core concerns of the evaluators. This has made different scholars devise different ways of approaching various evaluation activities. In this lecture we are going to look at some of the evaluation models and approaches that have been employed over the years in project evaluation.

5.2 Lecture objectives

At the end of this unit, you should be able to;

1. Differentiate between evaluation model and evaluation theories

2. Outline at least five models employed in evaluations of projects

3. Form a structure for placing different evaluation needs in terms of methodologies

4. Distinguish between different models used in project evaluation

5. Outline at least five advantages and disadvantages of various evaluation Models

5.3 Meaning of Models and Theories

5.3.1 Definitions of Theories and Models

According to Dorin, Demmin and Gabel, (1990) a theory provides a general explanation for
observations that are made over time. A theory attempts to explain and predict behaviour based on
observations, and conclusions are based on the data that is systematically collected, analysed and
interpreted.

Theories are based on conclusions and observations that have stood the test of time and conditions and are thus well established. This notwithstanding, a theory may be modified depending on new observations. Theories seldom have to be thrown out completely if thoroughly tested, but sometimes a theory may be widely accepted for a long time and later disproved.

5.3.2 What is a Model?

Dorin, Demmin and Gabel (1990) defined a model as "a mental picture that helps us understand something we cannot see or experience directly". Scriven (1974) argues that the term "model" is loosely used to refer to a conception or approach or sometimes even a method (e.g., naturalistic, goal-free) of doing evaluation. 'Models' are to 'paradigms' as 'hypotheses' are to 'theories', which means less general but with some overlaps.

5.3.3 Evaluation Theories

Some scholars (Hamlin, Kirkpatrick) link theories of evaluation to different learning theories.
They argue that the main goal of evaluation is learning. There are three basic theories of learning.
They are behaviourism, cognitivism and constructivism. Each of these is briefly described below:

5.3.3.1 Behaviourism

Behaviourists believe in the stimulus–response pattern of conditioned behaviour. According to the behaviourist theory of learning, "a child must perform and receive reinforcements before being able to learn". Behaviourism is based on observable changes in behaviour. As a learning theory it treats the learner as a 'black box', in the sense that responses to stimuli can be observed quantitatively while the possibility of thought processes occurring in the mind is ignored.

5.3.3.2 Cognition

The cognitive theory of learning is based on the thought process behind the behaviour “Cognitive
theorists recognise that much learning involves associations established through contiguity
and repetition. They also acknowledge the importance of reinforcement, although they stress
its role in providing feedback about the correctness of responses over its role as a motivator.
However, even while accepting such behaviouristic concepts, cognitive theorists view learning
as involving the acquisition or reorganization of the information.” (Good and Brophy, 1990,
p. 187). After understanding the differences between evaluation models and evaluation theories
let us try to discuss the various evaluation models that are commonly used in evaluations of
projects.

5.4. Common evaluation models used in project evaluation

5.4.1. Objective oriented Models

You may recall that in your previous unit on project planning, design and implementation you defined project objectives as statements of intent that outline what the project intends to achieve, in both quantitative and qualitative terms, in a specified period of time. The objective-oriented model's concern is whether the project objectives have been realized. The distinguishing feature of an objective-oriented evaluation approach is that the purposes of project activities are specified, after which the evaluation effort is focused on the extent to which those purposes are achieved. Consider, for example, an NGO that has an objective of improving the community's life through sensitization.

Take Note
The government plans to initiate road construction projects in highly agriculturally
productive areas of Kenya. The purpose of the project is to improve access of the
community to basic social services such as schools, and health services and also to
increase access of the community to a ready market for their farm products.
An objective-oriented evaluation will focus on the extent to which the project improved the community's access to basic social services. The evaluation will also seek to establish the extent to which the project increased the community's access to a market for their products.

The objective-oriented approach was developed in the 1930s and is credited to the work of Ralph Tyler. Tyler regarded evaluation as the process of determining the extent to which the objectives of a project are actually attained. He proposed that to evaluate a project one must:
1. Establish broad goals or objectives of that project
2. Classify the goals or the objectives
3. Define those objectives in measurable terms
4. Find situations in which achievement of objectives can be shown
5. Develop or select measurement techniques
6. Collect performance data
7. Compare performance data with measurable terms stated

These can be conceptualized in the model below:

Figure 5.1 Tyler's Model

From this figure it is clear that the purpose of the objective-oriented model of evaluation is to determine the extent to which the objectives of a project have been achieved; the emphasis is on the specification of objectives and the measurement of outcomes. To determine the gap between the project's specified performance standards and actual project performance, there is a need to perform pre-tests and post-tests to establish the extent to which the objectives have been achieved.
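A minimal Python sketch of this comparison step is given below. The two indicators, their pre-test and post-test values and their targets are hypothetical assumptions; the sketch only illustrates Tyler's final step of comparing actual performance data with the measurable standards stated for each objective.

    # Illustrative only: hypothetical indicators with pre-test, post-test and target values.
    indicators = [
        {"objective": "household access to safe water (%)", "pre": 40, "post": 65, "target": 70},
        {"objective": "clinic visits per month",            "pre": 120, "post": 210, "target": 200},
    ]

    for ind in indicators:
        change = ind["post"] - ind["pre"]            # improvement between pre-test and post-test
        met = ind["post"] >= ind["target"]           # congruence check against the stated standard
        print(f"{ind['objective']}: change {change:+}, target "
              f"{'met' if met else 'not met'} ({ind['post']} vs {ind['target']})")

In this assumed example the second objective meets its standard while the first falls short, which is exactly the kind of congruence judgment the objective-oriented model produces.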

Advantages of objective – oriented model

1. It is easy to assess whether the project objectives are being achieved

2. The model checks the degree of congruence between performance and objectives

3. The model focuses on clear definition of the objectives

4. It is easy to understand in terms of implementation

5. It produces relevant information to the project

Disadvantages of the model

1. It tends to focus on terminal rather than on-going programme performance

2. It has a tendency to focus directly and narrowly on objectives, with little attention to the worth of the objectives

3. It neglects the value of the objectives themselves

4. It neglects the transactions that occur within the project being evaluated

5. It neglects the context in which the evaluation is taking place

6. It ignores important outcomes other than those covered by the objectives

7. It promotes a linear, inflexible approach to evaluation

8. There is a tendency to oversimplify the project and to focus on terminal rather than on-going and pre-project information

9. It does not take unplanned outcomes into account, because it focuses on the stated objectives

10. It does not pay enough attention to process evaluation. In other words, it does not consider how the activities that lead to the achievement of project objectives are carried out.

5.4.2 Management - Oriented Approaches

The management-oriented evaluation model is more concerned with providing information that can help project managers make crucial decisions about the project. The rationale of the management-oriented evaluation approach is that evaluation data is an essential component of good decision making. The management-oriented model of evaluation manifests itself in various ways. Let us discuss some of these approaches.

5.4.2.1 The Context –Input- Process –Product evaluation model (CIPP)

The purpose of this model is to provide relevant information to decision makers for judging decision alternatives. The proponent of this model is Daniel Stufflebeam, who argues that evaluation should assume a cyclical approach whereby feedback is continuously provided to the decision makers. The model highlights different levels of decision makers and how, where and in what aspects of the project the results will be used for decision making. The model assumes that the decision maker is the audience to whom management-oriented evaluation is directed (Worthen et al., 1997). The model has various types of evaluation that must be accomplished. Let us analyze each one of them.

a. Context Evaluation: Context evaluation is the most basic type of evaluation under the CIPP model. Its purpose is to provide a rationale for the determination of objectives. Specifically, it defines the relevant environment, identifies unmet needs and unused opportunities, and diagnoses the problems that prevent needs from being met and opportunities from being used. Diagnosis of the problems provides an essential basis for developing objectives whose achievement results in project improvement.

b. Input Evaluation: The purpose of input evaluation is to provide information for determining how to utilize resources to meet project goals. This is accomplished by identifying and assessing the relevant capabilities of the responsible agency, strategies for achieving project goals, and designs for implementing a selected strategy. The end product of input evaluation is an analysis of one or more procedural designs in terms of cost and benefit. Specifically, alternative designs are assessed concerning staffing, time and budget requirements, potential procedural barriers, the consequences of not overcoming these barriers and the possibilities and cost of overcoming them, the relevance of the design to the project objectives, and the overall potential of the design to meet the objectives. Essentially, input evaluation provides information to decide whether outside assistance is required to meet the objectives.

c. Process Evaluation: Process evaluation is necessary to provide periodic feedback to the persons responsible for implementing plans and procedures. Process evaluation has three main objectives:

i. To detect or predict defects in the procedural design or its implementation during the implementation stages

ii. To provide information for project design

iii. To maintain a record of the procedures as they occur

There are three strategies that should be followed during process evaluation. The first is to identify and continuously monitor the potential sources of failure in a project. These include, but are not limited to, interpersonal relationships among staff and students, communication channels, logistics, understanding of and agreement with the intent of the project by the persons involved in and affected by it, adequacy of resources, physical facilities, and staff and time schedules. The second involves projecting and servicing the pre-project decisions to be made by project managers during the implementation of a project. The third process evaluation strategy is to note the main features of the project design.

d. Product Evaluation: The purpose of product evaluation is to measure and interpret attainments not only at the end of a project cycle but as often as necessary during the project. The general method of product evaluation includes devising operational definitions of activities, measuring criteria associated with the objectives of the activity, comparing these measurements with predetermined absolute or relative standards, and making rational interpretations of the outcomes using the recorded context, input and process information.

Strengths of CIPP

1. It provides data to administrators and other decision makers on a regular basis.

2. It is sensitive to feedback.

3. It allows for evaluation to take place at any stage of the programme/project.

Limitations of CIPP

1. It lays little emphasis on value concerns.

2. Decision-making process is unclear.

3. Evaluation may be costly in terms of funds and time if this approach is widely
used.

5.4.2.2 Alkins Model (UCLA) - The UCLA Evaluation Model

The UCLA (University of California at Los Angeles) model was developed by Alkin (1969). The conceptual framework of the UCLA model closely parallels that of the CIPP. According to Alkin, evaluation is the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decision makers in selecting among alternatives (Alkin, 1969).

The model has the following five steps (Worthen and Sanders 1997:5).

i) System assessment – which provides information about the state of the system.
This is similar to context evaluation in the CIPP model

ii) Project planning which assists in the selection of particular project likely to be
effective in meeting specified project needs. (Very similar to input evaluation)

iii) Project implementation which provides information about whether the project was
introduced to the appropriate group in the manner intended.

iv) Project improvement which provides information about how a project is functioning, whether the interim objectives are being achieved and whether unanticipated outcomes are appearing. This is similar to process evaluation in the CIPP model.

v) Project certification which provides information about the value of the project and
its potential for use elsewhere (Very similar to product evaluation).

Both the CIPP and UCLA frameworks for evaluation appear to be linear and sequential, but the
developers have stressed that such is not the case. For example, the evaluator would not have to
complete an input evaluation or a systems assessment in order to undertake one of the other types
of evaluation listed in the framework.

Often evaluators may undertake 'retrospective' evaluations (such as a context evaluation or a system assessment) in preparation for a process or project improvement evaluation study, believing that the evaluation approach is cumulative, linear and sequential; such steps are not always necessary. A process evaluation can be done without having completed context or input evaluation studies. At other times, the evaluator may cycle into another type of evaluation if some decisions suggest that earlier decisions should be reviewed (Sanders et al, 1997: 102).

Strengths

1. It provides administrators and other decision makers with useful information.

2. It allows for evaluation to take place at any stage of the programme. It is holistic.

3. It stresses timely use of feedback by decision makers.

Limitations

1. It gives preference to top management.

2. The role of value in evaluation is unclear.

3. Description of decision-making process is incomplete.

4. It may be costly and complex.

5. It assumes that important decisions can be identified in advance.

5.4.2.3 Provus's Discrepancy Model:

Some aspects of the model are directed towards serving the information needs of project managers. It is system oriented and it focuses on input, process and output at each of five stages of evaluation: project definition, project installation, project process, project products, and cost-benefit analysis.

5.4.2.4 Utilization- focused Evaluation:

This approach was developed by Patton (1986). He emphasized that the process of identifying and organizing relevant decision makers and information users is the first step in evaluation. In his view, the use of evaluation findings requires that decision makers determine what information is
needed by various people and arrange for that information to be collected and provided to those
people. He recommends that evaluators work closely with primary intended users so that their
needs will be met. This requires focusing on stakeholders’ key questions, issues, and intended
uses. It also requires involving intended users in the interpretation of the findings, and then
disseminating those findings so that they can be used. One should also follow up on actual use. It
is helpful to develop a utilization plan and to outline what the evaluator and primary users must do
to result in the use of the evaluation findings. Ultimately, evaluations should, according to Patton,
be judged by their utility and actual use.

5.4.2.5 System analysis approach:

The approach has been suggested to be linked to the management-oriented evaluation model. However, most system analyses may not be evaluation oriented due to their narrow research focus.

5.5 Expertise - Oriented Evaluation Approaches

The expertise-oriented approaches to evaluation depend primarily on professional expertise to judge an educational activity, programme or product. Some scholars regard evaluation as a process of finding out the worth or merit of a programme. Stake (1975), for example, views evaluation as being synonymous with professional judgments. These judgments are based on the opinion of experts. According to these approaches, the evaluator examines the goals and objectives of the programme and identifies the areas of failure or success.

5.6 Consumer oriented evaluation approaches

Some theorists consider evaluation a consumer service. They stress that although the needs of project funders and managers are important, they are often not the same as those of consumers. The main proponent of this approach is Michael Scriven. A consumer-oriented evaluation approach typically occurs when independent agencies, governmental agencies, and individuals compile educational or other human services product information for the consumer. Such products can include a range of materials including curriculum packages, workshops, instructional media, in-service training opportunities, staff evaluation forms or procedures, new technology, and software. The consumer-oriented evaluation approach is increasingly being used by agencies and individuals for consumer protection, as marketing strategies are not always in the best interest of the consumer. Consumer education typically involves using stringent evaluation criteria and checklists to evaluate products.

The consumer-oriented evaluation approach is typically applied to educational products and programs. It is typically used by government agencies and other independent educational consumer advocates (e.g. the Educational Products Information Exchange), with the common goal of making more product information available. Although this approach can be used for any consumer product, in the public sector it is typically used for educational products and programs.

Advantages of using a consumer-oriented evaluation approach

1. It has made evaluations of products and programs available to consumers who may not have had the time or resources to carry out the evaluation process themselves

2. It increases consumers' knowledge about using criteria and standards to objectively and effectively evaluate educational and human services products

3. Consumers have become more aware of marketing strategies

Disadvantages of using a consumer-oriented evaluation approach

1. It increases product costs passed on to the consumer

2. Product testing involves time and money, which are typically passed on to the consumer

3. Stringent criteria and standards may curb creativity in product creation

4. There is concern about a rise in dependency on outside products and consumer services rather than the development of local initiative

5.7. Adversary oriented evaluation approaches (Judicial).

Judicial or adversary-oriented evaluation is based on the judicial metaphor. It is assumed here that the potential for evaluation bias by a single evaluator cannot be ruled out, and, therefore, each "side" should have a separate evaluator to make its case. For example, one evaluator can examine and present the evidence for terminating a project and another evaluator can examine and present the evidence for continuing the project. A "hearing" of some sort is conducted where each evaluator makes his or her case regarding the evaluand. In a sense, this approach sets up a system of checks and balances by ensuring that all sides are heard, including alternative explanations for the data. Obviously, the quality of the different evaluators must be equated for fairness. The ultimate decision is made by some judge or arbiter who considers the arguments and the evidence and then renders a decision.

Examples of this model include panels of multiple "experts", otherwise known as blue-ribbon panels, where multiple experts of different backgrounds argue the merits of some policy or project. Some committees also operate, to some degree, along the lines of the judicial model. As one set of authors put it, adversary evaluation has "a built-in metaevaluation" (Worthen and Sanders, 1999). A metaevaluation is simply an evaluation of an evaluation.

By showing the positive and negative aspects of a program, considering alternative interpretations of the data, and examining the strengths and weaknesses of the evaluation report (metaevaluation), the adversary or judicial approach seems to have some potential. On the other hand, it may lead to unnecessary arguing, competition, and an indictment mentality. It can also be quite expensive because of the requirement of multiple evaluators. In general, formal judicial or adversary models are not often used in project evaluation.

5.8. Goal free Evaluation Approach

According to this approach, project goals and objectives should not be taken as given. Like other
aspects of the project or activity, they should be evaluated. In addition, the evaluator focuses on
the activity rather than its intended effects. In goal free evaluation, the evaluator is not limited to
the goals of the project; he or she focuses on actual outcomes.

5.9. Naturalistic and participation-oriented approaches

This approach stresses firsthand experience of project settings and activities. It involves intensive
study of the project as a whole. Stake calls it responsive evaluation i.e. what people do naturally.
Evaluators are expected to be responsive to project realities and to the reactions. They are also
expected to be responsive to concerns and issues of participants rather than being preordinate i.e.
strictly following a prescribed plan. In this approach, the evaluator studies project activities as
they occur naturally, without manipulating or controlling it. Naturalist evaluation tends to be
based on project activity rather than project outcomes. Naturalistic evaluators use collaboration
of data through cross-checking and triangulation to establish credibility.

5.10. Participatory evaluation approach

This model is also called the collaborative or stakeholder-based evaluation model. Proponents of this model contend that since different parties have an interest in the outcomes of the evaluation, they should always be involved in the design and conduct of evaluations. Stakeholder-based evaluation is expected to yield two positive outcomes: more realistic and effective results, and improved utilization of the findings. However, this approach should be used sparingly because of the requirements of confidentiality and credibility that dictate the distancing of the evaluator from the evaluated (Scriven, 2001, p. 28). Using a collaborative approach is also costly in time and money. Moreover, different stakeholders tend to have conflicting expectations.

Unit 5 Self-assessment Questions


1. Distinguish between evaluation theory and evaluation models
2. Discuss the advantages of using a model in monitoring and evaluation.
3. What are the main differences between objective-oriented and management-oriented evaluation approaches?

UNIT SIX

INDICATORS FOR MONITORING AND EVALUATION

6.1 Introduction

In the previous unit, we discussed monitoring and evaluation theories and models. In this lecture
we are going to discuss indicators for monitoring and evaluation. More specifically, we will
attempt to define the term 'indicator' and then examine various types of indicators. The importance of indicators in monitoring and evaluation will also be discussed. We will later examine the characteristics of good indicators, and the steps that a project manager can follow in selecting SMART indicators for monitoring and evaluation.

6.2 Lecture objectives

By the end of this unit, you should be able to:


1. Define the term indicator
2. Explain the importance of indicators in monitoring and evaluation
3. Outline the categories of indicators used in monitoring and evaluation
4. Explain the types of indicators
5. Discuss the characteristics of good indicators
6. Describe the steps in selecting SMART indicators
7. Describe the vertical and horizontal logic used in logical framework

6.3 The Concept of Project Indicators

In this section we are going to attempt to define the term ‘Indicator’, and then discuss the
importance of indicators in project monitoring and evaluation.

The concept of indicators is pivotal to M&E. As per its dictionary definition, an indicator is defined
as a sign or a signal. In the context of M&E, an indicator is said to be a quantitative standard of
measurement or an instrument which gives us information (UNAIDS, 2010). Indicators help to
capture data and provide information to monitor performance, measure achievement, determine
accountability and improve the effectiveness of projects or programmes.

Designing indicators is one of the key steps in developing an M&E system. As mentioned above,
indicators are units which measure information over time to document changes in the specific
conditions. With respect to the various M&E levels and the result chain of the project, specific
indicators need to be developed for each stage of the results chain. Thus, there should be a different
set of indicators at the impact level, at the outcome level as well as at the output, activity and input
level. Also, for each level, there can be more than one indicator.

6.3.1 Definition of an Indicator.

An indicator is a specific, observable and measurable characteristic that can be used to show
changes or progress a programme is making toward achieving a specific outcome. There should
be at least one indicator for each outcome. The indicator should be focused, clear and specific.
The change measured by the indicator should represent progress that the programme hopes to
make.

An indicator should be defined in precise, unambiguous terms that describe clearly and exactly
what is being measured. Where practical, the indicator should give a relatively good idea of the
data required and the population among whom the indicator is measured. Indicators do
not specify a particular level of achievement -- the words “improved”, “increased”, or
“decreased” do not belong in an indicator.

An indicator is a sign showing the progress of a situation. It is a basis for measuring progress towards the objectives: a specific measure that, when tracked systematically over time, indicates progress (or lack of progress) toward a specific target. An indicator answers the question: how will we know success when we see it? You can also consider indicators as road signs that show whether you are on the right road, how far you have travelled, and how far you still have to travel to reach your destination.
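The 'road sign' idea can be made concrete with a small Python sketch. The quarterly values, baseline and target below are hypothetical assumptions; the code simply reports how far the indicator has travelled toward its target at each point in time.

    # Illustrative only: a hypothetical indicator tracked quarterly against a target.
    target = 80          # e.g. percent of households with safe drinking water
    baseline = 35
    quarterly_values = [35, 42, 55, 63, 71]

    for quarter, value in enumerate(quarterly_values, start=1):
        # share of the distance from baseline to target covered so far
        progress = 100 * (value - baseline) / (target - baseline)
        print(f"Q{quarter}: value {value} -> {progress:.0f}% of the way to the target of {target}")

Tracked this way, an indicator tells the project team at each review point whether it is on the right road and how far it still has to go.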

6.3.2 Importance of project indicators in project monitoring and evaluation

Indicators play very important roles in project monitoring and evaluation. Let us now focus on some of these roles.

1. Indicators measure progress in project inputs, activities outputs, outcomes and goals

2. Indicators enable you to reduce a large amount of data down to its simplest form. (For instance, a project to sink a borehole with the aim of improving a community's access to safe drinking water may have its outcome indicator reduced to 'the percentage of households in that community with safe drinking water'.)

3. When compared with targets or goals, indicators can signal the need for corrective management action. For instance, if the borehole project was supposed to be completed within one year (one year in this case serves as a time indicator) and it overruns that duration, project managers need to make quick corrective decisions to ensure that the project is brought back within its completion time.

4. Indicators can evaluate the effectiveness of various project management actions

5. Indicators can provide evidence as to whether the objectives are being achieved

6. Indicators provide the qualitative and quantitative details to a set of objectives

6.4 Classification and types of indicators

Indicators can be classified into three categories as follows:

a. Quantitative indicators: These types of indicators provide hard data to demonstrate the results achieved. They also facilitate comparisons and analysis of trends over time. Quantitative indicators are statistical measures expressed in numbers, percentages, rates, ratios, etc.

b. Qualitative indicators: These are indicators that provide insight into changes in organizational processes, attitudes, beliefs, motives and behaviours of individuals. They imply qualitative assessments such as compliance with, quality of, extent of, or level of something. Qualitative indicators must be expressed quantitatively (in figures) in order to illustrate change.

c. Efficiency indicators: These tell us whether we are getting the best value for our investment. In order to establish such an indicator, we need to know the market, i.e. the current price of the desired output, considering quantity and quality aspects. Efficiency indicators are unit-cost measures expressed as cost per unit of client, student, school, etc. (a minimal unit-cost sketch follows below).
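The minimal Python sketch below illustrates an efficiency indicator using assumed figures: dividing total cost by the number of units delivered gives a unit-cost measure that can then be compared with an assumed market or benchmark price.

    # Illustrative only: hypothetical cost and output figures for a classroom project.
    total_cost = 240_000               # assumed total spend
    classrooms_built = 8               # assumed output
    benchmark_cost_per_unit = 28_000   # assumed market benchmark

    unit_cost = total_cost / classrooms_built
    print(f"Cost per classroom: {unit_cost:,.0f}")
    print("Within benchmark" if unit_cost <= benchmark_cost_per_unit else "Above benchmark")

With these assumed numbers the project is delivering each classroom above the benchmark price, which is precisely the signal an efficiency indicator is meant to give.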

An indicator may be quantitative or qualitative based on the characteristics of the information that it provides. Those that deal with information that can be expressed in numbers are quantitative indicators, while those dealing with information expressed in any form other than numbers, e.g. statements, are qualitative indicators. Another important attribute of quantitative indicators is that arithmetic functions can be applied to their corresponding data, while this is not possible in the case of qualitative indicators.

For qualitative indicators, their count or frequency may be considered. Income measured in rupees, the weight of a baby measured in kilograms and the number of toilets built are examples of quantitative indicators. If the same information on income or weight is collected in categories of high, medium and low, they are qualitative indicators.

6.4.1 Types of indicators

The above classifications of indicators give rise to various types of indicators. The main criterion for differentiating them is the level at which the project is assessed, e.g. output, outcome or impact. Some of the types of indicators are discussed below:

i. Input indicators:

These are quantified statements about the resources provided to the project. They rely on management, accounting and other records of the resources used in the development of the project, that is, management records illustrating the use of resources by the project. Because input indicators track the functioning of the organization at the input level, a good accounting system is needed to keep track of expenditures, and schedules must be developed to track timelines. Input indicators are used mainly by managers closest to the tasks at the implementation level and are consulted frequently, probably as often as daily or weekly. They focus on the use of funds, personnel, materials and other inputs necessary to produce the intended outputs of project activities. These indicators can utilize the relevance and performance criteria applicable at the implementation level.

ii. Process indicators

The term 'process' is used to imply all that goes on during the implementation phase of the project. Process indicators therefore are those indicators that measure the progress of the project during implementation, that is, the extent to which stated objectives are being achieved. These indicators capture information from project management records from the field or project sites. They are based on the cost, timelines and scope of the project, and they apply to the relevance and performance criteria of the project. Examples include: the date by which building-site clearance must be completed, the latest date for delivery of fertilizers to the farm store, the number of health outlets, the number of women receiving contraceptives, and the status of procurement of school textbooks.

iii. Output indicators

Outputs are the tangible products of project activities. Output indicators show the immediate outputs of the project made available after each of the tasks conducted during project implementation. They are the results of activities performed by different components of the project and use quantitative measures of physical entities or some sort of qualitative judgment on the timely production of outputs. Decisions on the performance of the project are informed by reading the output indicators. They show the worth of the project strategy: where the outputs are weak or poor, the project's effectiveness is questionable and the strategy needs adjustment. Therefore, output indicators use the effectiveness criterion to show the performance of the project. Outputs include physical quantities, improved capacities, services delivered, systems introduced, milestones achieved, legislation passed, awareness campaigns effected, etc. Examples may include the percentage of community members attending a community workshop and the number of buildings constructed by the project.

iv. Impact indicators

Impact refers to the positive or negative long-term changes that can be attributed to the project
intervention. When developed, impact indicators forecast the long-term effects of the project on the
target population some time after project completion. More precisely, impact refers to the medium-
or long-term development changes expected for the beneficiaries or target region upon project
completion. Impact indicators sit at the highest level of the project results chain and depend on data
gathered from beneficiaries. To obtain an early indication of impact, a survey of beneficiary
perceptions about project services is conducted. Measures of change often involve complex
statistics about economic or social welfare and depend on data gathered from the beneficiaries.

v. Exogenous indicators

These are indicators that cover factors outside the control of the project but which might affect
its outcome. They include risks and the performance of the sector in which the project operates.
Data collection for monitoring and evaluation should cover the wider external environment where
it is expected to impinge on the project's performance, notwithstanding the additional burden this
places on the project's monitoring and evaluation effort. Exogenous indicators help in checking the
project assumptions and risks that are likely to affect the project. For example, during project
implementation, policy decisions about currency exchange rates can adversely affect
profitability. Management should carefully monitor and alert project participants about
deteriorating situations if the indicators of the environment dictate so.

vi. Proxy indicators

These refer to indirect measures or signs that approximate or represent a phenomenon in the
absence of a direct measure. Cost, complexity or the timeliness of data collection may prevent
results from being measured directly. Proxy indicators are expected to provide a reliable
estimate of the direction of movement of the ideal but unattainable indicator. For example, the
number of children fully immunized is a reliable proxy for infant mortality from immunizable
diseases because immunization is known to be highly effective. A proxy indicator that
qualifies as a measure must have a strong causal link to the direct measure and should be
measurable on a regular basis. It can supplement available information by obtaining data on
related topics or from different sources. This is often the case for outcomes in behavioural change,
social cohesion and other results that are difficult to measure. For example, if data on ethnicity in
target villages is unavailable, you can complement it with data on mother tongue or spoken
language. Caution should, however, be taken when interpreting proxy indicators, because
over-reliance on indicators that can be manipulated by individuals, such as mother tongue,
may lead to wrong interpretation.

Take Note
1. Indicators only indicate: -
§ An indicator will never completely capture the richness and
complexity of a system
§ Indicators are designed to give ‘slices’ of reality
§ They might provide the truth but they rarely give the whole truth
2. Indicators encourage explicitness in that they force us to be clear and
explicit about what we are trying to do
3. Indicators usually rely on numbers and numerical techniques
§ Indicators should not just be associated with fault-finding: they
can help us understand our performance, be it good or bad
§ Well-designed measurement systems identify high performers
(from whom we can learn), as well as systems (or parts of the
systems), that may warrant further investigation and intervention

6.4.2 Characteristics of good indicators

SMART and SPICED indicators: In order for development interventions to be more results-
oriented, projects must be made SMART. At the same time, it is important to make sure that the
indicators or performance measures also fit the SMART criteria. SMART indicators play an
important role in results-based project management as they ensure accountability (MDF Training
& Consultancy, 2016). They have the following characteristics:

• Specific

• Measurable

• Attainable

• Realistic

• Timebound

Another school of thought advocates qualitative indicators represented by the acronym SPICED,
which stands for the attributes listed below:

• Subjective

• Participatory

• Interpreted and communicable

• Cross-checked and compared

• Empowering

• Diverse and disaggregated

After looking at the types of indicators, let us ask ourselves: What does a good indicator look
like? What qualities does it display? Here are some of the characteristics of a good indicator:

i. Validity: This is the accurate measurement of a behaviour, practice or task. Data may not be valid if:

§ Inaccurate measurement tools are used in collecting data

§ Sample is unrepresentative (not from correct target population or small sample size)

§ Data is incomplete

§ Evaluators are biased

ii. Reliability: The indicator is reliable when it consistently measures what it purports to
measure in the same way even when used by different evaluators.
iii. Precise: The indicator should be operationally defined in clear terms and should be
context-specific, objective, and specified with clear yardsticks. This reduces confusion
between indicators.
iv. Independent: Indicators should be non-directional and uni-dimensional, depicting a
specific, definite value at one point in time. Examples of directional indicators are 'healthier
families' or 'policy improvement', where the desired direction of change is already built into
the indicator itself. Examples of multi-dimensional indicators are 'sustainability' or 'quality'.
The characteristic of independence captures the idea that the value of the indicator should
stand alone. It is best to avoid ratios, rates of increase or decrease, or other directional definitions.

v. Objectively verifiable indicator: An indicator is said to be objectively verifiable if:

§ it shows the right direction (progress or failure of the project)

§ it produces the same value in repeated measures/calculations on the same
observation

§ it leads to the same conclusions if underlying situations are similar or same

§ its interpretation is independent of evaluator or researcher

vi. Integrity: Indicators should be truthful.

§ For example, number of HIV positive tested by ELISA against the number
of HIV positive tested by RAPID HIV CHECK

§ To improve service delivery, you train service providers; what indicator would
be more truthful – number of providers trained or number of trained providers?

§ How truthful can an indicator be on self-reported sexual behaviour?

vii. Measurable: One should be able to quantify an indicator by using available tools and
methods. An evaluator should consider whether tools and methods for collecting or
calculating the indicator information are available.

viii. Timely: An indicator should provide a measurement of the period of time of interest, with
data available for all appropriate intervals. Timeliness considerations include:

- Reporting schedules

- Recall periods

- Survey schedules

- Length of time in which project change can be detected

ix. Programmatically important: This implies that the indicator should be linked to an impact
or to achieving the project objectives that are needed for impact.

x. Disaggregated if possible: It is important to disaggregate project outputs by
gender, age, location or any other dimension suitable for the project. This is very
important for better management and reporting. Projects often require different
approaches for different target groups, and disaggregated indicators can therefore help
decide whether or not specific groups participate in and/or benefit from projects (a short
illustration follows this list).

xi. Feasible: Data can be gathered over a specific time period and at an acceptable level of
effort and cost

xii. Comparability: This assists in understanding results across different population groups
and project approaches
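
To illustrate the disaggregation and comparability points above, the following minimal Python
sketch uses the pandas library; the records, district names and the workshop-attendance indicator
are hypothetical and only serve to show the mechanics of breaking an indicator down by group:

import pandas as pd

# Hypothetical participant records from a project monitoring database.
records = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "M"],
    "district": ["Kabarole", "Kabarole", "Bunyangabu",
                 "Bunyangabu", "Kabarole", "Bunyangabu"],
    "attended_workshop": [1, 1, 0, 1, 0, 1],
})

# The same indicator (percentage of participants attending the workshop),
# disaggregated by gender and by district to support comparison across groups.
by_gender = records.groupby("gender")["attended_workshop"].mean() * 100
by_district = records.groupby("district")["attended_workshop"].mean() * 100
print(by_gender.round(1))
print(by_district.round(1))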

6.4.3 Steps in selecting SMART indicators

After examining the characteristics of indicators, we now need to discuss the steps a project
manager can follow in selecting SMART indicators. The term 'SMART' means Specific,
Measurable, Attainable, Realistic and Time-bound. In other words, when selecting indicators,
you need to ensure that they satisfy the SMART criteria.

Step One: Clarify the result statement. Identify what needs to be measured. Good indicators start
with result statements. Start with the overall objective or goal, or work backwards.

Step Two: Develop a list of possible indicators. With the help of project stakeholders, try to
brainstorm possible indicators at each level of results. This brainstorming can be internal, or it can
involve consultation with experts, the experiences of other similar organizations, or pre-existing
resources.

Step Three: Assess each possible indicator in terms of the following criteria (a simple scoring
sketch follows this list):

- Measurability – can it be quantified and measured on some scale?

- Practicability – can data be collected on a timely basis and at reasonable cost?

- Reliability – can it be measured repeatedly with precision by different people?

- Relevance – is the indicator attributable to your organization?

- Management usefulness – do the project staff and audience feel that the information
provided by the measure is critical to decision-making?

- Directness/Precision – does the indicator closely track the result it is intended to
measure?

- Sensitivity – does it serve as an early warning of changing conditions?

- Capability of being disaggregated – can data be broken down by gender, age,
location or other dimensions (e.g. class, tribe) where appropriate?
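
One simple way to carry out this assessment, sketched below in Python, is to score each candidate
indicator against the criteria and rank the totals; the candidate indicators, the 1-5 scale and the
scores themselves are illustrative assumptions, not prescribed values:

# Assessment criteria from Step Three, scored on an illustrative 1-5 scale.
criteria = ["measurability", "practicability", "reliability", "relevance",
            "management_usefulness", "directness", "sensitivity", "disaggregation"]

# Hypothetical scores for two candidate indicators (one score per criterion).
candidates = {
    "number of trained providers": [5, 4, 4, 5, 4, 4, 3, 4],
    "number of providers trained": [5, 5, 3, 4, 3, 3, 3, 4],
}

# Rank candidates by total score; the strongest ones go forward to Step Four.
totals = {name: sum(scores) for name, scores in candidates.items()}
for name, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {total} out of {len(criteria) * 5}")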

Step Four: Select the best indicators:

Based on your analysis and the context, narrow the list to the final indicators that will be used in
the monitoring system. Ensure that every element of each indicator and how it is measured is
defined. There should be an optimum set that meets management needs at a reasonable cost.

You should limit the number of indicators used to track each objective or result to a few (two
or three), while remembering your target audiences, both internal and external.

6.5 Designing Indicators

Indicators are essential instruments for M&E, thus, practitioners need to keep in mind some of
the critical points while designing or formulating them.

Firstly, creating new indicators or reinventing the wheel should be avoided unless it is absolutely
required. Over the years, development professionals have provided M&E practitioners with sets of
well-tested and proven indicators, and these sets should be referred to when formulating indicators
for any project.

Secondly, while designing indicators, it should be made sure that they fulfil either the SMART or
the SPICED criteria. Indicators document change; therefore, any indicator that is finalised should
be able to capture change in the condition being assessed using that indicator. A good indicator is
therefore:

• Simple: As all the good things in the world are

• Measurable: Provides a measure for depicting change

• Precise: Has a definition so that it can be defined in the same way by all

• Consistent: Has consistent measurement results. On measuring the same thing, its value
remains consistent and does not change over time

• Sensitive: Can capture the smallest amount of change in the indicator value

• Action Focused: Captures information that is eventually useful for stakeholders and leads
to some action.

57
While designing indicators, it is very important to brainstorm collectively to identify candidate
indicators for a specific condition. Once several indicators are listed for a given condition, the
next step is to assess each of them using the characteristics of a good indicator to find out whether
the candidate indicator is simple, measurable, precise, consistent and sensitive. The source of data
for the indicator and the reliability of that source are also considered, as is the cost incurred in
collecting data for the indicator. Candidate indicators that satisfy the criteria are then taken as
indicators for assessment of that condition; the others are modified until they acquire the
characteristics of a good indicator.

For example, in the case of the project which aims to make its target area ODF, the output level
indicator is ‘The number of individual household latrines (IHHL) constructed’. As constructing
toilets is one of the key outputs expected from the project, this indicator helps in measuring the
same. Similarly, as creating awareness about sanitation is another key activity, ‘The number of
village level meetings conducted to create awareness about sanitation’ is another output level
indicator.

Considering another example of a project which aims to improve maternal and child health (MCH)
in its target area, ‘Maternal Mortality Ratio’ (MMR) and ‘Infant Mortality Rate’ (IMR) are the
result level indicators for this project. At the outcome level, the indicators are, ‘The number of
women with incidences of serious health problems related to child birth’ and ‘The number of
women consuming iron fortified food or iron supplements during pregnancy’. At the output level,
the indicators are ‘The number of deliveries conducted by skilled health professionals’, and ‘The
number of women receiving at least three antenatal care (ANC) visits’.

6.5.1. Defining Indicators

After selecting suitable indicators, it is very important to fully define them. No indicator should
be deployed without fully defining it and making sure its essential components are lucid and
concrete (UNAIDS, 2010). Each indicator definition should have the following components:

• Title: A brief heading that captures the summary or focus of the indicator.

• Definition: A lucid and to the point definition of each indicator so that everyone can
interpret it in the same way.

• Source: The source i.e., the tool used for getting this indicator value and the respondent
from whom this information is collected is also defined.

• Data Collection Frequency: The frequency at which the data is collected to derive the
indicator value is defined. This could be at quarterly, half-yearly or annual intervals, etc.

• Numerator: The variable that is included above the line in a common fraction.

• Denominator: The variable that is included below the line in a common fraction.

• Calculation Method: The method for calculating the indicator value is defined.
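
A minimal way to keep these components together is sketched here as a Python data class; the
class name and field names are an assumption made for illustration, not a standard M&E library:

from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    # The components of a full indicator definition described above.
    title: str
    definition: str
    source: str
    data_collection_frequency: str
    numerator: str
    denominator: str
    calculation_method: str

A filled-in instance of this structure would mirror the worked example that follows.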

For instance, the complete definition of the indicator, ‘The number of deliveries conducted by
skilled health professionals’, is stated below:

- Title: The number of deliveries conducted by skilled health professionals

- Definition: This indicator measures the number of deliveries conducted by skilled health
professionals. The term skilled health professional refers exclusively to people with
midwifery skills (auxiliary nurse midwives (ANMs), doctors and nurses) who are trained
in the skills necessary to manage normal delivery cases and diagnose, manage or refer
obstetric complications

- Source: The sample survey conducted as part of the baseline, midline and endline survey

- Data Collection Frequency: In the first quarter of the first year, in the last quarter of the
third year, and in the fifth year of the project.

- Numerator: The total number of deliveries attended by skilled birth attendants (SBA) as
reported in the sample survey.

- Denominator: The overall sample size of the number of women who had deliveries in the
last two years.

- Calculation Method: The indicator value is calculated by dividing the number of births
attended by skilled health professionals by the total sample size of the number of deliveries
conducted in the last two years.
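
As a minimal sketch of how such a definition might be computed in practice, the Python snippet
below applies the numerator, denominator and calculation method stated above to a hypothetical
sample survey; the records and field name are invented for illustration only:

# Hypothetical sample-survey records: one entry per woman who delivered in the
# last two years, noting whether a skilled birth attendant (SBA) was present.
survey_records = [
    {"delivery_attended_by_sba": True},
    {"delivery_attended_by_sba": False},
    {"delivery_attended_by_sba": True},
    {"delivery_attended_by_sba": True},
]

# Numerator: deliveries attended by skilled birth attendants.
numerator = sum(1 for record in survey_records if record["delivery_attended_by_sba"])

# Denominator: all sampled women who delivered in the last two years.
denominator = len(survey_records)

# Calculation method: numerator divided by denominator, reported as a percentage.
indicator_value = 100 * numerator / denominator
print(f"Deliveries attended by skilled health professionals: {indicator_value:.1f}%")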

6.6 Indicators and Logical Framework

From the previous discussion, we learned that indicators help us assess the progress of the project
and also quantify the achievement of project results. This makes indicators an important aspect of
project planning and implementation. In this section we are going to discuss the central role
indicators play in the logical framework (log-frame), which is considered a key project planning,
monitoring and evaluation tool. We will start by examining the concept of the log-frame.

6.6.1 Logical framework

Log-frames have now been in use for more than 30 years, and their overall structure has changed
very little since they were first developed. When USAID first began to use log-frames, they served
mainly as guides to project design and to make evaluation possible, by clearly identifying
objectives and indicators. Now they serve as a guide to understanding logical project structure and
the expected impacts and results. They make evaluation of projects possible.

Uses of Log framework

1. The log-frame helps improve the quality of project design. The framework requires that
project objectives are specified in clear terms. It requires the use of performance
indicators and assessment of risks.

2. Summarizing design of complex activities

3. Assisting the preparation of detailed operation plans

4. Providing an objective basis for activity review, monitoring and evaluation

Advantages of Log frame

• Ensures that decision makers ask fundamental questions and analyze assumptions and
risks

• Engages stakeholders in the planning and monitoring process

• When used dynamically, it is an effective management tool to guide the
implementation of monitoring and evaluation

Disadvantages

• If managed rigidly, it stifles creativity and innovation

• If not updated during implementation, it can be a static tool that does not reflect
changing conditions

6.6.2 Components of logical framework

For us to understand the components that characterize the logical framework, it is imperative that
we focus on the elements presented in a vertical order of flow (vertical logic) and those presented
in a horizontal flow (horizontal logic).

6.6.2.1 Horizontal Logic

Horizontal logic is the logic that goes across the matrix and describes how the achievement of
objectives will be measured or verified (indicators), how this information will be obtained
(means of verification), and what external factors could prevent the project from achieving the
next level of objectives (assumptions).

a) Narrative summary column

This column contains the following three strategic elements: resources, purpose and goals. The
first two levels, i.e., purpose and resources, are specific to the project itself. The logic that
links them can be illustrated with the following question: What resources (inputs/activities,
usually in dollar amounts) will have to be invested in the project in order for the women and
men from targeted population groups to benefit from the achievement of the project
purpose?

It is important that we design projects not only to achieve meaningful results but also to
benefit society at reasonable cost. The purpose statement of the project must
identify the intended beneficiaries. The first two levels of the narrative summary are
essential to the strategic planning process and must be taken into consideration in a results-
oriented logical framework. Although the purpose is the reason or basic motive why the
project is to be undertaken, it should be defined in the context of broader strategic planning.
A results-oriented logical framework thus serves project-level management purposes by
ensuring that projects are identified, selected, designed and approved within the context of
a strategic planning framework at all levels of the project.

b) Means of Verification:

Information for the means of verification (MOV) column should be developed at the same
time as the indicators. It provides information to help justify the achievement of the project
at the indicator level. The means of verification are like exhibits that help verify what the
project manager claims to have done at the various project levels. During the course of the
project, care should be taken to keep these exhibits, which take the form of registers,
receipts, records, notices, memos, etc. They can also be data previously captured by various
means, available when needed in the course of evaluation. Means of verification should
clearly specify the anticipated source of information and the methods used to collect the
data, such as sample surveys, administrative records, workshops, focus groups,
observation, Participatory Rural Appraisal (PRA) techniques or Rapid Rural Appraisal
(RRA) techniques. MOVs should also specify those who are responsible for data collection,
e.g. project staff, independent survey teams, etc.

They should also indicate the frequency with which the information should be provided
(e.g. monthly, quarterly, annually, etc.) and the format required to collect the data. The means
of verification are more or less structured depending on the level of the intervention logic.
At the lower levels, monitoring and evaluation rely more on secondary information than on
primary information. Secondary information is captured from such items as receipts and
register records, which are more applicable at the lower levels of the matrix. The upper
levels, which measure project impact, rely on interviews, questionnaires and other tools that
generate more primary information. This is well illustrated by Figure 6.1.

Figure 6.1 Means of Verification Stage

c) Assumptions

Assumptions are conditions external to the project that may affect the progress or success
of the project and over which the project management has little control. They are stated as
positive conditions that need to exist to permit progress of the project to the next level e.g.
price changes, rainfall, political situations etc.

An assumption needs to be relevant to the project, or at least relevant to the level of the
objectives, to allow the project to progress to the next level. Contrary to a risk, which is a
negative statement of what might prevent project objectives from being achieved, an
assumption is a positive statement of a condition that must be met in order for project
objectives to be achieved. It is important to note that assumptions are not delicate
community problems. If the assumptions prove likely to impede the project from moving
to the next level, it is extremely important to capture them and strategically manage the
project to bypass the problem, or otherwise redesign or terminate the project.

Assumptions are normally forecasts and should be relevant and probable. Therefore, the
decision to select an assumption depends on some sort of value judgment on the part of the
evaluator. This can be based on the normal occurrence of risks or events: if a risk rarely
materialises, the assumption is based on that rare occurrence, and the chance of the event
happening is treated as rare. As a suggestion, the best way to handle an assumption is to
assign a percentage chance of the event happening or not happening. Several aspects can
be evaluated in this way, and those carrying higher risks are clearly flagged. This helps the
evaluator make valid judgments about the assumptions that can affect the project.

For instance, if a project is located in an arid region, you will not assume that the climate
will be conducive to growing maize where maize has never grown. Nor should you assume
that it is going to rain in March when rain is rare in that month. Provisionally estimate the
chance of the assumption failing before deciding that it is a problem; if, from your estimate,
there is virtually no chance of it failing, do not bother about it. The logical framework
demands that all hypotheses, assumptions and risks relevant to a project are made explicit.
This in turn demands that appropriate action is considered (and taken where necessary)
before problems materialize and affect the project. Some factors to consider include:

1. How important are the assumptions?

2. How big are the risks?

3. Should the project be redesigned?

4. Should some elements of the proposed project be abandoned?

In the logical framework, relationships between the assumptions and the intervention logic
are presented as causal, one step leading to the next. If one step is not completed
successfully, then the next will not be achieved. The causal relationship between the
intervention logic elements and the assumptions is as follows:

• if the preconditions are complied with, then the activities can be started;

• if the activities are realized, and if the assumptions at the activities level
have come true, then the outputs will be realized;

• if the outputs are realized, and if the assumptions at the results level have
come true, then the project purpose will be realized;

• if the project purpose is realized, and if the assumptions at the project
purpose level have come true, then the goal will have been significantly
contributed to.

Consider the following figure;

6.6.2.2 Vertical Logic

The vertical logic has four levels, where each lower level must contribute to the next higher
level. It elucidates the causal relationships between the different levels of objectives and
specifies the important assumptions and uncertainties beyond project management control.
The capacity of the project to move to the next higher level is determined by the assumptions,
which are concrete determinant factors in the project proceeding from one level to the next,
all the way up to the goal as the highest level.

Each level has a set of logical framework items, referred to as objectives. The items are the
intervention logic, with its corresponding means of verification, objectively verifiable indicators
and assumptions. As a set, the items are addressed by the logical framework sequentially upwards,
from the lower levels to the higher levels. For example, project activities contribute to the
achievement of project outputs, the achievement of project outputs leads to the achievement of the
project purpose, and finally the purpose contributes to the goal of the project.

The goal of the project is its ultimate aim; the reason for the project's existence. The description in
the matrix involves a detailed breakdown of the sequence of causality. This can be expressed as
follows:

1 if inputs are provided, then the activities can be undertaken;

2 if the activities are undertaken, then outputs will be produced;

3 if outputs are produced, then the purpose will be supported, and;

4 if the purpose is supported, this should then contribute to the overall goal

Figure 6.3 Logic in the objectives

Each level thus provides the justification for the next level: for instance, the goal helps justify the
purpose; the purpose, the outputs; the outputs, the activities; and the activities, the inputs.

Logical Framework Matrix

After determining all the necessary items to be entered into the log-frame matrix, the matrix is
developed by drawing a table with four columns and four rows. The rows of the first column carry
the item names: goal, purpose, outputs and activities. Append the appropriate information beside
each of these items. Remember, as mentioned earlier, they are written downwards but read upwards.

The next column contains the indicators corresponding to each of the first-column items. The
indicators vary depending on the level to which they correspond. The various types of indicators
mentioned earlier in this chapter appear at their respective levels: for instance, the input indicators
appear at the input level, the output indicators at the output level, the purpose indicators at the
purpose level and, lastly, the impact indicators at the goal level.

Next is the means of verification (MOV). The MOVs also fall under each level and vary according
to the level at which data is collected. At the lower levels there is more secondary information, in
the form of receipts and documents; at the upper levels there is more primary information, collected
through such tools as questionnaires and interviews.

Table 6.1 Logical Framework

The matrix has four columns: Project Description, Indicators, Means of Verification and
Assumptions. Row by row, the entries are as follows:

Goal: the broader development impact to which the project contributes.
- Indicators: the quantitative or qualitative ways of judging whether these broad objectives
  are being achieved (estimated time).
- Means of Verification: sources of information and methods used.
- Assumptions: factors necessary for sustaining objectives in the long run.

Purpose: the development outcome expected at the end of the project.
- Indicators: the quantitative measures or qualitative evidence by which achievement and
  distribution of impact and benefits can be judged (estimated time).
- Means of Verification: sources of information and methods used.
- Assumptions: conditions necessary for the achievement of the project's purpose to
  contribute to reaching the project goal.

Outputs: the direct measurable outputs (goods and services) of the project.
- Indicators: the kind and quantity of outputs and by when they will be produced (quantity,
  quality and time).
- Means of Verification: sources of information and methods used.
- Assumptions: factors which, if not present, are liable to restrict progress from outputs to
  achievement of the project purpose.

Activities: the activities that must be undertaken to accomplish the outputs.
- Indicators: implementation/work project targets, used during monitoring.
- Means of Verification: sources of information and methods used.
- Assumptions: factors that must be realized to obtain planned outputs on schedule.

Unit 6 Self-Evaluation Questions


1. Explain the difference between project indicators and project objectives.
2. Describe the four roles played by project indicators.
3. Explain five characteristics of project indicators.
4. What are the main differences between horizontal logic and vertical logic as used in the
Logical Framework?
