
Session 12. Impact evaluation.

Experimental design

Index
Importance of impact evaluation (stage, rationale)
Causal inference
Types of impact evaluations

1. Importance of impact evaluation

An impact evaluation provides information about the impacts produced by an intervention (a small project, a large programme, a collection of activities, or a policy).

Impact evaluation is important because it:

● Highlights the intended and unintended consequences of a policy intervention (positive and negative)
● Supports cost–benefit calculations
● Enables evidence-based policy-making (data rather than intuition)
● Answers whether a pilot study should be extended to a wider population, amended, maintained, or terminated
● Strengthens policy-makers’ accountability

Example: the UN MDGs (United Nations Millennium Development Goals), an initiative to combat poverty, hunger, disease, illiteracy, environmental degradation, and discrimination against women.

Criteria for evaluating the policy (OECD, 1991)

● Effectiveness (goals)
● Efficiency (costs and resources)
● Impact (target group)

Many variables/factors influence the final result. These variables must be measurable, but how do you measure them?


2. Causal inference
Causal inference is the process of determining the real and independent effect of a particular
phenomenon that is part of a larger system. It’s carried out using the scientific method.

Example: Low wages amid a high unemployment rate. A policy-maker introduces a minimum wage, but the unemployment rate increases. Did the policy cause the increase?
Exercise:
National Rifle Association: “It is not guns that kill people but people who kill people”

[Diagram: defined dilemma]

Additional problems (causal inference):

● Varying weight of variables
● Transitivity of variables

Major implications of causal inference:

● Uncertainty (the most likely outcome)
● Bias (in data collection, analysis and interpretation, conclusions)

Solution in public policy? > Research design. Designing a research question.

What is a good research question? Criteria?

Research question

1. Testability of variables (measurable)

Question 1: Does foreign military intervention reduce civilian attacks?
Question 2: Would Ukraine join NATO if Russia had not intervened?

2. Clear variables (your limited variable must be clear)

Question 1: What caused the Russian intervention in Ukraine?
Question 2: How did Russia’s security perceptions cause its actions in Ukraine?
Question 3: How did the use of Kahoot! quizzes improve the students’ final course grades?

How to proceed?
And in Public Policy Analysis?
Especially in “politically and technically complex issues”?


3. Types of impact evaluation

Teacher remuneration and students’ exam scores (a pilot study in 50 schools over 4 years, 2017–2021)
> Teacher remuneration is the independent variable, and exam scores are the dependent one.

● Non-experimental design (usually used for retrospective research, where researchers don’t control the process)
A pre-test and post-test design (50 schools).

Problem? Without a control group, the before–after change bundles the intervention’s effect with everything else that happened over the period.
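The weakness of a bare pre-test/post-test comparison can be seen in a small Python sketch. The score data below are made up for illustration; they are not from the pilot study.

```python
import random

# Hypothetical numbers only: mean exam scores in the same 50 schools before
# and after the remuneration pilot, with no control group to compare against.
rng = random.Random(0)
scores_before = [rng.gauss(60, 5) for _ in range(50)]
scores_after = [s + rng.gauss(8, 3) for s in scores_before]

pre_mean = sum(scores_before) / 50
post_mean = sum(scores_after) / 50

# The naive pre/post "effect" mixes the policy's impact with every other
# change over 2017-2021 (new curricula, demographics, grading drift, ...).
naive_effect = post_mean - pre_mean
print(round(naive_effect, 1))
```

The number this prints looks like a treatment effect, but nothing in the design separates the policy from the general time trend.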

● Quasi-experimental design (target/treatment and control group)

Simple difference: compares the two groups’ outcomes after the intervention only.
Difference in difference: compares the change over time in both groups, netting out a common trend.
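The two quasi-experimental estimators can be sketched with hypothetical group averages (illustrative numbers, not from the study):

```python
# Treatment group: schools whose teachers received the remuneration scheme.
treat_before, treat_after = 60.0, 70.0       # mean exam score before / after
control_before, control_after = 58.0, 62.0   # comparison schools

# Simple difference: compares the two groups only after the intervention;
# it ignores any pre-existing gap between them.
simple_diff = treat_after - control_after                              # 8.0

# Difference in difference: change in the treatment group minus change in
# the control group, which nets out the time trend common to both groups.
did = (treat_after - treat_before) - (control_after - control_before)  # 6.0

print(simple_diff, did)
```

Here the simple difference (8.0) overstates the effect because the treated schools already scored higher before the pilot; the difference-in-differences estimate (6.0) removes that pre-existing gap.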

● Experimental design (target/treatment and control group)

> Randomised Controlled Trial (RCT)
- The most frequently used design
- Random assignment of units (individuals, households, regions, countries) to the groups
- Causal inference (recall the case of the minimum wage and the unemployment rate)
- Internal and external validity of research results (a valid counterfactual)
Example: a diabetes/poverty intervention

An impact evaluation is essentially a problem of missing data: we never observe what would have happened to the treated units without the treatment. Lacking this information, the next best alternative is to compare the outcomes of treated individuals or households with those of a comparison group that has not been treated.

RCT steps:
1. Identify the policy intervention
2. Identify the variables
3. Choose the randomisation unit
4. Allocate units to groups
5. Treat
6. Analyse the results

Challenges:
1. Level of randomisation
2. Choice of sample (completely random, not biased)
3. Spillover (effects spreading into another area)
4. Compliance
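The randomisation steps above can be sketched in a few lines of Python. The school IDs and the 25/25 split are hypothetical, chosen only to match the 50-school pilot:

```python
import random

# Steps 3-4 of the RCT recipe: choose schools as the randomisation unit
# and allocate them to treatment/control purely at random.
schools = [f"school_{i:02d}" for i in range(50)]

rng = random.Random(42)          # fixed seed so the allocation is reproducible
shuffled = schools[:]
rng.shuffle(shuffled)

treatment = set(shuffled[:25])   # teachers receive the remuneration scheme
control = set(shuffled[25:])     # business as usual

# Every school lands in exactly one group.
assert treatment.isdisjoint(control)
assert len(treatment | control) == 50
```

Because assignment depends only on the random draw, not on school characteristics, the control group provides a valid counterfactual on average; the challenges listed above (spillover, compliance) are about what can still go wrong after this step.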

Reflection exercise:
Policy-makers in Nigeria want to know whether providing bicycles to girls will improve school enrolment. What are the IV, DV, and confounding variables?
What research design would you use? Why?
