2 Evaluation Approach and Methodology
This chapter builds on an inception report submitted by the evaluation team in April 2009. The inception report, which outlines the evaluation team's understanding of, and approach to, the assignment, was discussed with a range of stakeholders in Kathmandu, Nepal, on 16 April 2009.
2.1 Purpose and Objective of the Joint Evaluation
As pointed out in the Terms of Reference (ToR) for the joint evaluation (attached as Appendix 1) “the overall purpose of the evaluation is to provide information about the outcomes and document early signs of impact of the SESP that MoE, the development partners and other education stakeholders can use for improving the policy framework and further the design of the on-going SSR”.
The immediate objectives of the joint evaluation are to:
- Assess the efficiency, effectiveness, relevance and sustainability of each of the four components of the SESP as well as the SESP as a whole.
- Evaluate the strategies, approaches and methods adopted by the SESP in achieving the three main objectives.
- Assess the performance of (1) the implementing partners in the implementation of the respective components/activities and (2) the schools in the use of goods and services provided.
- Analyze the anticipated and unanticipated, positive and negative impact of the SESP on the target groups.
- List the lessons learned and best practices developed by the SESP, and elaborate forward-looking recommendations that could contribute to policy development and the future development of secondary education in view of the on-going SSR.
2.2 Evaluation Criteria
The joint evaluation focuses primarily on the key evaluation criteria of relevance, efficiency, effectiveness and sustainability. In addition, early signs of impact are assessed. The criteria have been used as defined in the 2006 version of the Danida Evaluation Guidelines, which in turn are based on the OECD/DAC evaluation criteria. They are captured in Box 2.1 for easy reference. The criteria are operationalised in the evaluation matrix in Appendix 2.
The approach to the joint evaluation involves a combination of qualitative and quantitative research methods. The purpose of this approach is to provide complementary perspectives and, where possible, validate observations derived from the quantitative analysis by probing the assumed reasons behind the changes through qualitative interviews (triangulation). In addition, qualitative research methods such as stakeholder assessment of most significant changes were adopted as the most appropriate and efficient data collection method to measure change and its underlying reasons at school level.
Box 2.1 Evaluation Criteria
|Relevance|Extent to which objectives of a development intervention are consistent with the beneficiaries' requirements, country needs, global priorities, and partners' and donors' policies.|
|Efficiency|A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted into results.|
|Effectiveness|Extent to which the development intervention's objectives were achieved, or are expected to be achieved, taking their relative importance into account.|
|Impact|The positive and negative, primary and secondary, long-term effects produced by a development intervention, directly or indirectly, intended or unintended.|
|Sustainability|The continuation of benefits from a development intervention after major development assistance has been completed.|
The evaluation matrix has been developed to structure the development of specific evaluation questions. For each question, the matrix includes specific indicators, methods and data sources. The matrix constitutes the overall analytical framework for the joint evaluation, from which the methodologies to be used have been identified.
As the matrix shows, some of the subjects to be evaluated can be measured in a valid and reliable way through specific quantitative indicators adopted by SESP, such as access to education, which is measured by gross and net enrolment rates. In some cases, SESP indicators are in place but may not necessarily be fully valid or reliable. Quality of education is an illustrative example, where the evaluation has decided to supplement the indicator adopted by the GoN, exam pass rates, with data of a more qualitative nature such as changes in stakeholder perceptions. Finally, for aspects such as learning environment and capacity development, no easily identifiable indicators are in place, and the evaluation therefore relies largely on data collected through interviews and focus group discussions with stakeholders.
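For reference, the gross and net enrolment rates used as access indicators above are conventionally computed as in the following minimal sketch; the figures are hypothetical illustrations, not data from the evaluation:

```python
def gross_enrolment_rate(total_enrolled, official_age_population):
    """GER: all enrolled pupils, regardless of age, over the official-age population."""
    return 100.0 * total_enrolled / official_age_population

def net_enrolment_rate(enrolled_of_official_age, official_age_population):
    """NER: only enrolled pupils of official age over the official-age population."""
    return 100.0 * enrolled_of_official_age / official_age_population

# Hypothetical district figures for illustration only
ger = gross_enrolment_rate(5200, 8000)  # 65.0; GER can exceed 100 when over/under-age pupils enrol
ner = net_enrolment_rate(4400, 8000)    # 55.0; NER never exceeds the corresponding GER
```

The gap between the two rates is itself informative: a GER well above the NER indicates many over- or under-age pupils in the system.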
2.3 Evaluation Methodologies
The joint evaluation relied on the following five main methodologies, elaborated briefly below (the inception report contains further details):
- Document review;
- Explorative interviews and seminars;
- Semi-structured interviews;
- Data analysis; and
- Qualitative fieldwork including participatory focus group discussions based on the Most Significant Change (MSC) approach.
Document Review
The desk study component of the joint evaluation served two purposes. First, it made maximum use of the evaluative research already carried out during SESP and in preparation for the SSR. Second, the document review informed the development of the evaluation matrix and the detailed planning of the field mission.
The documents reviewed by the evaluation team are listed in Appendix 3 and involve official programme documents, inception documents, M&E reports from the GoN, review reports, aide memoires and evaluations.
In addition, a number of research reports have been utilised, notably Government of Nepal (2006, 2008b and 2009). These reports have been prepared for the GoN and are generally assessed to be of high quality because of their comprehensive methods and the calibre and reputation of the researchers. Moreover, it is notable that despite several critical findings the GoN has accepted the studies as valid representations of the state of affairs in the secondary education sector in Nepal.
Explorative Interviews and Seminars
Explorative interviews and consultations were carried out with different stakeholders in SESP during the inception phase of the assignment. The purpose was to refine the evaluation questions and to adjust expectations with regard to the scope of the joint evaluation. In addition to an interview carried out at the Danish Ministry of Foreign Affairs, an inception seminar was held in Kathmandu. Moreover, separate interviews were conducted in Kathmandu with representatives from the MoE, Education Sector Advisory Team (ESAT), DoE, NCED, and with officials from ADB and the Embassy of Denmark.
Finally, to corroborate the findings and check factual accuracy, a seminar was held in Kathmandu to present the draft report and solicit comments from key stakeholders.
Semi-structured Interviews
To properly assess whether the relevant outcomes and policy objectives have materialised, semi-structured interviews were conducted with representatives from the entities responsible for secondary education at different organisational levels, as well as with the involved development partners and researchers/experts on secondary education. The interviewees are listed in Appendix 5a. The interactions were based on interview guides, which are available in Appendix 6.
To evaluate whether the programme intensive activities in the ten PID districts in the western part of Nepal have made a significant contribution compared to similar districts with no programme intensive activities, three sets of PID and non-PID districts have been selected. The selection was made to ensure comparability in as many ways as possible except for the districts' inclusion in or exclusion from the programme's intensive activities. This approach is also referred to as 'focused comparisons' or a 'most-similar' approach (Hague, Harrop and Breslin, 1992). Such focused comparisons have been known to work well compared to individual case studies, and are a cost-effective alternative to large-scale data collection. They have proven particularly effective when two units are compared over time.
District case studies were carried out in six districts, evenly divided between PID and non-PID districts. A structured sample was selected with a view to keeping eco-zone, location, population composition, Human Development Index (HDI) rank, and educational indicators as similar as possible across each of the focused comparisons. Through this selection, Kailali (PID) was compared with Bardiya (non-PID), as both are in the lowlands ('the Terai') with a high Tharu (ethnic) population. Similarly, Doti (PID) was compared with Dadeldhura (non-PID), both hilly districts with an overwhelming Brahmin-Chettri population. The original intention was to compare Jumla (PID) with Kalikot (non-PID), both mountain districts with a majority Khas-Chettri population. However, due to a strike in the Terai and the related logistical issues, Rasuwa (non-PID), a mountainous district in the Central Region (with Tamang as the majority population), was selected instead. Table 2.1 lists the characteristics of the districts used for focused comparisons. Reference is made to Appendix 4a for further details on the sampling criteria and the approach to the district visits.
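The 'most-similar' pairing logic described above can be sketched as follows. The district attributes and figures here are hypothetical illustrations of the sampling criteria, not the evaluation's actual data:

```python
# Illustrative sketch of the 'most-similar' selection: each PID district is
# paired with the unused non-PID district that shares its eco-zone and lies
# closest on the other sampling criteria. All numeric figures are hypothetical.
pid_districts = [
    {"name": "Kailali", "zone": "Terai",    "hdi_rank": 46, "ner": 40.0},
    {"name": "Doti",    "zone": "Hill",     "hdi_rank": 64, "ner": 33.0},
    {"name": "Jumla",   "zone": "Mountain", "hdi_rank": 69, "ner": 28.0},
]
non_pid_districts = [
    {"name": "Bardiya",    "zone": "Terai",    "hdi_rank": 50, "ner": 39.0},
    {"name": "Dadeldhura", "zone": "Hill",     "hdi_rank": 60, "ner": 34.0},
    {"name": "Rasuwa",     "zone": "Mountain", "hdi_rank": 66, "ner": 30.0},
]

def dissimilarity(a, b):
    """Comparability score: same eco-zone is required; smaller is more similar."""
    if a["zone"] != b["zone"]:
        return float("inf")
    return abs(a["hdi_rank"] - b["hdi_rank"]) + abs(a["ner"] - b["ner"])

pairs = []
used = set()
for pid in pid_districts:
    match = min(
        (d for d in non_pid_districts if d["name"] not in used),
        key=lambda d: dissimilarity(pid, d),
    )
    used.add(match["name"])
    pairs.append((pid["name"], match["name"]))
# pairs: [('Kailali', 'Bardiya'), ('Doti', 'Dadeldhura'), ('Jumla', 'Rasuwa')]
```

The hard eco-zone constraint mirrors the evaluation's design choice that Terai, hill and mountain districts are only compared within their own zone.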
Table 2.1 – Various Socio-economic Indicators for the Districts selected for Focused Comparisons
[Table content not recoverable in this extraction; column headings included Location and Eco-zone, Mean Years of Schooling, and NER in Lower Secondary.]
Source: Column 3: Census 2001; Columns 4, 5&6: Nepal Human Development Report 2004 (The data are for 2001); for the remaining columns: School Level Educational Statistics of Nepal 2003.
Note: Figures in parentheses indicate the HDI rank of the 75 districts. NER=Net Enrolment Rate; T=Total; F=Female; M=Male
In each of the six districts, a high- and a low-performing school were visited (measured by SLC pass rate). In each of these 12 schools, separate interviews and focus group discussions were held with the School Management Committee, the head teacher, lower secondary and secondary teachers, lower secondary and secondary students, and parents. Moreover, semi-structured interviews were carried out in each district with District Education Office (DEO) staff, Resource Persons (RPs), Education Training Centres (ETCs), representatives from the Private and Boarding Schools' Organisation Nepal (PABSON) and other key stakeholders. In total, six DEOs were visited and resource persons were interviewed in each district. The criteria used to select stakeholders at the district and school levels are described in the sample letter attached as Appendix 4b. Appendix 5b lists the actual stakeholders consulted in the districts, and Appendix 6 includes the interview guides used at district and school level.
Quantitative Data Analysis
Descriptive data have been used to show trends for key educational outcomes (enrolment, student performance) and to compare key educational outcomes between PIDs and their neighbouring districts.
The efficiency of SESP has been analysed at the overall level by drawing on existing analyses, such as the Review Aide Memoires and financial tables from implementing partners. The analysis has also been enriched by feedback from interviews with key informants.
2.4 Challenges and Limitations
The joint evaluation has been faced with the following limitations and challenges:
- The absence of a solid baseline study reduces the ability to assess change over the SESP period;
- Classroom observations could have provided valuable evidence of the nature of the interaction between teachers and pupils and of the enactment of the reformed curriculum, but resource constraints limited their use;
- The sample of PID and non-PID districts is not intended to be representative of Nepal as such. The relevance of the findings is, however, assessed to be reasonably high for the PIDs, as the data were obtained in districts that vary considerably and between them cover a wide range of social and geographical conditions;
- While the focused comparisons were chosen to check for the influence of the PID interventions, it has not been possible to uncover the overall influence of SESP, which was implemented nationwide. Moreover, the focused comparisons rest on the premise that a substantial difference exists in the inputs provided by SESP to PIDs and non-PIDs. It was learned during the fieldwork that the distinction between PIDs and non-PIDs has to some extent been blurred by decisions to increase the volume of scholarships provided to, and of school-block construction and rehabilitation in, non-PIDs. This implies that the difference in educational outcomes between PIDs and non-PIDs is likely to be smaller than originally expected;
- As mentioned, it was the intention to include Kalikot district in the focused comparison against Jumla district, but this was not possible due to a combination of road blockages in Banke district and bad weather conditions for the flight to Jumla and onwards to Kalikot. Instead, Rasuwa district was selected as a non-PID Mountain district. Rasuwa is situated in the Central Development Region, but despite its proximity to the Kathmandu Valley, its HDI is only 0.394, placing the district in the same category of poorly developed districts as Kalikot. Even so, it is clear that Rasuwa has a different point of departure than Jumla. For example, Rasuwa’s enrolment rates were higher at the time of the SESP launch, which needs to be taken into account when comparing the two districts as further elaborated in the following chapters.
- The assessment of SESP attribution is complicated by the fact that other programmes, such as the Education for All (EFA) programme and the Teacher Education Programme (TEP), run in parallel in the education sector. Attribution therefore depends largely on stakeholders' knowledge of, and ability to distinguish, SESP outputs from those of other programmes;
- The evaluation had a fixed standard schedule in each district and for interactions with stakeholders in Kathmandu. This schedule was largely completed, but on some occasions at district level the groups selected for focus group discussions had a slightly different composition than detailed in Appendix 4b. Moreover, the District Education Officer was not always available for interview, in which case the evaluation team interviewed school supervisors or other DEO officials. None of these deviations is, however, expected to affect the findings in any significant way;
- The evaluation team did not have the chance to interact with the Regional Education Directorates (REDs). The meeting scheduled in Surkhet district (upon return from Kalikot) was cancelled for the reasons mentioned above, while no qualified representatives were available at the RED in Doti. The inclusion of the regional perspective would arguably have added further richness to the analysis, but it should be noted that the REDs did not play a major part in implementing SESP; and
- Analyses of educational data (enrolment, drop-out, etc.) indicated sudden drops and increases in data that suggest reliability problems rather than substantial changes. In such cases, the evaluation consulted with the relevant authorities to ascertain whether the data reflected real changes or measurement errors.
This page forms part of the publication 'Joint Evaluation of the Secondary Education Support Programme' as chapter 4 of 14
Version 1.0. 17-05-2010
Publication may be found at the address http://www.netpublikationer.dk/um/10395/index.htm