Impact Evaluation in Social Research
Impact evaluation is a systematic method used to determine whether a program, policy, or intervention has achieved its intended outcomes, and to what extent those outcomes can be attributed to the intervention itself rather than to other factors.
It helps researchers understand:
✔ What changed?
✔ How much did it change?
✔ Did the program cause that change?
✔ For whom did it work, and why?
Key Purpose of Impact Evaluation
- Assess effectiveness: Measures whether the intervention produced measurable improvements.
- Establish causality: Confirms that the changes happened because of the intervention.
- Guide policy and resource allocation: Helps governments, NGOs, and donors decide what to scale up or modify.
- Ensure accountability: Shows funders and stakeholders the real value of investments.
- Improve program design: Identifies strengths, gaps, and areas for refinement.
Core Components of Impact Evaluation
1. Counterfactual Analysis
The counterfactual answers:
“What would have happened if the program had not been implemented?”
Common methods:
- Randomised Control Trials (RCTs)
- Difference-in-Differences (DiD; a minimal sketch follows this list)
- Propensity Score Matching (PSM)
- Regression Discontinuity Design (RDD)
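As an illustration of counterfactual estimation, the sketch below computes a two-period difference-in-differences effect. The data frame and its columns (outcome, treated, post) are hypothetical, and it assumes the Python libraries pandas and statsmodels are installed:

```python
# Minimal two-period difference-in-differences sketch (illustrative data only).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [50, 52, 49, 55, 63, 66, 51, 56],
    "treated": [1, 1, 0, 0, 1, 1, 0, 0],   # 1 = received the intervention
    "post":    [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = baseline, 1 = endline
})

# The coefficient on the treated:post interaction is the DiD estimate:
# the extra change in the treatment group beyond the change in the control group.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print("DiD estimate of impact:", round(model.params["treated:post"], 2))
```

Here the control group's change over time stands in for the counterfactual, so only the additional change observed in the treatment group is counted as impact.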
2. Baseline and Endline Measurements
- Baseline: Data collected before the intervention
- Endline: Data collected after the intervention
Comparison of these helps measure change.
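A minimal sketch of that comparison, assuming hypothetical wide-format survey data (one row per respondent) and pandas available:

```python
# Baseline-to-endline change per respondent (illustrative data only).
import pandas as pd

survey = pd.DataFrame({
    "respondent_id":  [101, 102, 103, 104],
    "score_baseline": [42, 55, 38, 61],
    "score_endline":  [47, 58, 45, 60],
})

survey["change"] = survey["score_endline"] - survey["score_baseline"]
print(survey)
print("Average change:", survey["change"].mean())
```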
3. Treatment and Control Groups
- Treatment group receives the intervention
- Control group does not
Allows researchers to isolate program impact.
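The sketch below compares endline outcomes between the two groups with a two-sample t-test; the numbers are purely illustrative, and SciPy is assumed to be installed:

```python
# Compare endline outcomes of treatment vs control with a t-test (toy data).
from scipy import stats

treatment = [68, 74, 71, 65, 70, 73]   # received the intervention
control   = [62, 60, 65, 61, 64, 63]   # did not

t_stat, p_value = stats.ttest_ind(treatment, control)
diff = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Difference in means: {diff:.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```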
4. Attribution vs. Contribution
- Attribution: Program caused the impact
- Contribution: Program played a role among multiple factors
Impact evaluations generally aim for attribution, using rigorous design.
Impact Evaluation Methods
Quantitative Methods
- RCTs (gold standard for causality)
- Quasi-experimental designs (a matching sketch follows this list)
- Econometric models
- Surveys & structured data analysis
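As one quasi-experimental example, here is a bare-bones propensity score matching sketch. The simulated covariates (age, income), the treatment assignment, and the one-to-one nearest-neighbour matching rule are all illustrative assumptions; NumPy, pandas, and scikit-learn are assumed to be available:

```python
# Toy propensity score matching: estimate the effect of treatment on the treated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.integers(18, 60, n),
    "income": rng.normal(20, 5, n),          # household income, in thousands
})
df["treated"] = (rng.random(n) < 0.4).astype(int)
df["outcome"] = 5 + 0.1 * df["age"] + 2 * df["treated"] + rng.normal(0, 1, n)

# 1. Estimate propensity scores: probability of treatment given covariates.
X = df[["age", "income"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]

# 2. Match each treated unit to the control unit with the closest score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Average treatment effect on the treated: mean gap across matched pairs.
att = treated["outcome"].mean() - matched_control["outcome"].mean()
print("Estimated effect (ATT):", round(att, 2))
```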
Qualitative Methods
- Focus group discussions
- Interviews with beneficiaries
- Case studies
- Process tracing
Mixed-Methods Evaluation
Combines quantitative and qualitative approaches to capture both outcome size and contextual explanations.
Indicators in Impact Evaluation
Output Indicators
Short-term, immediate deliverables
(e.g., number of training sessions conducted)
Outcome Indicators
Medium-term changes
(e.g., increased knowledge or adoption of skills)
Impact Indicators
Long-term changes
(e.g., improved employment rates attributable to the training)
Steps in Conducting an Impact Evaluation
- Define the program theory / logic model
- Identify evaluation questions
- Select evaluation design (RCT, quasi-experimental, etc.)
- Collect baseline data
- Implement the intervention
- Collect endline/midline data
- Analyse impact using statistical/qualitative tools (a minimal code skeleton follows this list)
- Interpret findings
- Provide recommendations
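An illustrative end-to-end skeleton of these steps is sketched below. The evaluation question, the in-memory survey data, and the simple difference-in-differences calculation are hypothetical placeholders, not a prescribed design:

```python
# Skeleton of the evaluation workflow, with comments mapping to the steps above.
import pandas as pd

# Steps 1-2: program theory and evaluation question (placeholder).
QUESTION = "Did the training programme raise participants' test scores?"

# Steps 3-6: design and data collection, represented here by toy survey rounds.
rounds = pd.DataFrame({
    "respondent": [1, 2, 3, 4] * 2,
    "treated":    [1, 1, 0, 0] * 2,
    "post":       [0] * 4 + [1] * 4,   # 0 = baseline, 1 = endline
    "score":      [40, 43, 41, 39, 52, 55, 44, 41],
})

# Step 7: analyse impact - change in each group, then the gap between the two.
means = rounds.groupby(["treated", "post"])["score"].mean()
gain_treated = means.loc[(1, 1)] - means.loc[(1, 0)]
gain_control = means.loc[(0, 1)] - means.loc[(0, 0)]
impact = gain_treated - gain_control

# Steps 8-9: interpret the estimate and feed it into recommendations.
print(QUESTION)
print(f"Extra gain in the treatment group: {impact:.1f} points")
```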
When Is Impact Evaluation Needed?
- When policymakers need to know if a program actually works
- For large-scale funding or scaling decisions
- When multiple outcomes or alternatives exist
- When an intervention claims measurable, attributable results
Examples in Real Social Research
- Education: Evaluating whether remedial classes reduce dropout rates.
- Health: Measuring the effect of awareness campaigns on vaccination uptake.
- Livelihood: Assessing whether skill-training programs increase rural incomes.
- Women's empowerment: Determining whether participation in self-help groups (SHGs) increases women's decision-making power.
Benefits of Impact Evaluation
✔ Improves policy effectiveness
✔ Saves money by identifying what works
✔ Supports evidence-based decision-making
✔ Enhances transparency & accountability
✔ Helps refine future programs