
2 Evaluation Approach and Methodology

This chapter sets out the team’s overall approach to the evaluation. It is based on an analytical framework (presented in Section 2.1) to ensure systematic data collection and analysis of the evidence. It incorporates a case study approach, with two in-depth case studies of the response to the current humanitarian crises in South Sudan and Syria, and a desk-based case study of the Danish response in Afghanistan.

In addition to a review of policy, strategy, evaluative and reporting documentation,[1] the team conducted interviews and focus group discussions with:

  • 24 staff members from Danida’s eight NGO partners in Copenhagen, as well as 56 staff members from the eight partners at field level;
     
  • 31 staff members from the International Committee of the Red Cross (ICRC) and Danida’s United Nations partners at headquarters level, as well as 52 staff members at field level;
     
  • 24 staff members from the MFA, including the missions in Geneva and New York, the embassies in Rome, Addis Ababa, Nairobi and Yangon, and staff at field level for the case studies;
     
  • 15 representatives of other donors, including from missions in Geneva and New York, as well as at field level;
     
  • 19 informants from other NGOs and organisations at headquarters and field level;
     
  • beneficiaries in South Sudan and Syria, through 13 gender-disaggregated focus group discussions.[2]

Section 2.2 outlines the evaluation methodology and process, including approach to the case studies, as well as the data collection tools employed by the team. Section 2.3 describes how the team synthesised findings from the case studies with the findings from the desk review to generate conclusions and recommendations at the strategy level. Finally, Section 2.4 provides a detailed summary of the challenges and limitations faced by the evaluation team in conducting the evaluation and applying the approach and methodology set out below.

2.1 Evaluation approach and analytical framework

The evaluation has a dual purpose: to provide accountability to Danish taxpayers and aid recipients, and to learn lessons for the future implementation of the Strategy. Given the particular emphasis from Danida on the latter, the team devised an approach and methodology built around the three core principles of utilisation, participation and learning.

It was clear from initial consultations during the inception phase that, for the evaluation process as well as the report to be truly useful for Danida and its partners, the team needed to build a strong sense of ownership of the evaluation’s findings and recommendations within Danida and its strategic partners. This was achieved through setting out a participatory process with ongoing dialogue and engagement with the main evaluation stakeholders at critical stages of the evaluation.

The team had originally proposed a theory-based approach to the evaluation. The aim was to develop a theory of change for the Strategy with Danida staff during the inception phase, and to use this to guide data collection and the analysis of the contribution made by Danida and its partners to the results identified during the evaluation. However, the theory of change exercise conducted with stakeholders during the inception phase showed that no single underlying rationale for the Strategy exists. The team therefore judged that constructing a theory retrospectively would be inappropriate, since it would have required the evaluation team to make a number of major assumptions.

In the absence of a theory of change to guide data collection and analysis, the team developed an analytical framework as the basis for conducting the evaluation. This consisted of two tools: an evaluation matrix, and an evidence assessment framework. These were designed to ensure that the evaluation took a rigorous and systematic approach to answering the evaluation questions.

2.1.1 Evaluation matrix

The evaluation matrix guided the team’s data collection during the evaluation and helped to ensure that it took a coherent and comprehensive approach to answering the questions in the ToR. The matrix is included in Annex C of this report.

The evaluation matrix sets out the six overarching evaluation questions from the ToR, divided into a total of 19 sub-questions. Each sub-question has a number of indicators against which the team gathered evidence during the desk and field phases of the evaluation. These include the critical assumptions that Danida staff identified during the theory of change exercise at the stakeholder workshop. The matrix also identifies the analytical methods used to answer each sub-question and the sources of data for each indicator. Section 2.2 below describes these analytical methods in greater detail.

2.1.2 Evidence assessment framework

The team developed an evidence assessment framework to organise and analyse the data gathered during the evaluation. The team used this during the desk review phase to document the evidence gathered against each indicator by data source (e.g. Danida interviews, partner interviews, documents, etc.). This enabled the team to identify the emerging findings from the evaluation to help focus data collection during the field visits. The framework was then used together with the field-level findings as part of the evaluation synthesis process to identify the final findings from the evaluation that are underpinned by strong evidence. See Section 2.3 for more details.

The tool also reduced the risk of excluding relevant issues not covered by the evaluation questions: each sub-question includes a section to capture any additional relevant evidence.

2.2 Evaluation process and methodology

This section outlines the key components of the evaluation methodology and the main stages in the evaluation process. A more detailed description of the methodological building blocks and evaluation process is included in Annex B.

Evaluation methodology

The participatory and learning-focused approach described above was underpinned by a number of core methodological building blocks. Figure 1 below summarises these building blocks, while Annex B describes each tool.

Figure 1: Methodological building blocks

  • Policy/Strategy Analysis: analysis of Danida policy and strategy documents; analysis of the Strategy against the GHD Principles.

  • Context Analysis: analysis of the global context in which assistance is provided; identification of strategic priorities that remain relevant; identification of key challenges that have emerged.

  • Portfolio Analysis: analysis of Danida’s humanitarian portfolio to understand budget allocations; assessment of the extent to which the Strategy has guided budget allocation decisions.

  • Results Tracking: assessment of the adequacy and quality of results documentation and monitoring by partners; review of partner reports, evaluation reports and capacity assessments.

  • Partner Analysis: comparative analysis of Danida’s strategic humanitarian partners; assessment of the effectiveness of the partnership approach in delivering against Strategy priorities.

  • Online Survey: addressing evidence gaps by targeting partner organisation staff in field locations not covered by the case studies.

Evaluation process

As summarised in Figure 2 below, the evaluation has comprised three phases so far: an inception phase (May-June 2014), the desk review and field study phase (July-September 2014) and the analysis and reporting phase (October 2014-February 2015). In addition, there will be a follow-up and update phase, currently planned for January-February 2016. Annex B describes the main activities that the team undertook in each phase.

Figure 2: Key phases and activities of the evaluation

  • Inception (May-June 2014): initial stakeholder consultations; evaluation design; design of tools for data collection and analysis; inception report.

  • Desk review and field study (July-September 2014): policy and strategy analysis; context analysis; comparative partner analysis; portfolio analysis; results tracking; detailed stakeholder consultations; online survey; case study visits; debriefing notes.

  • Analysis and reporting (October 2014-February 2015): case study reports; evaluation synthesis; draft evaluation report; findings and validation workshop; final evaluation report.

  • Follow-up and update (planned for January-February 2016): interviews with stakeholders; document review; update of findings; follow-up report; follow-up phase workshop.

2.3 Approach to evaluation synthesis

Once the team had produced draft case study reports, it convened over Skype to discuss emerging findings at the synthesis level. This approach has been important to ensure that evidence from the three individual case studies feeds into global conclusions and recommendations. To guide a systematic approach to analysis, the team used the evidence assessment framework to derive emerging findings from the desk review phase against each evaluation sub-question. This was used in conjunction with an additional Excel-based mapping tool to critically interpret the findings and conclusions from the three case studies and generate an understanding of their applicability at the global level. Table 1 below presents a sample of the framework, showing the analysis and synthesis of findings against the first sub-question.

Once the team had recorded conclusions from each case study, it arrived at an overall interpretation for each evaluation sub-question, which aimed to present a balanced picture across the studies and triangulate with findings and conclusions drawn from desk-based methods. This synthesised interpretation then formed the basis for the main findings in this synthesis report.

Table 1: Outline of evaluation synthesis mapping framework

EQ 1: How relevant and flexible is the Danish Humanitarian Strategy given the changing humanitarian context since 2010?

EQ 1.1: Have the strategic priorities been relevant, given changing humanitarian challenges?

Summary of conclusions

  • South Sudan: The interventions supported in South Sudan have been relevant to the context. Partners found Danish support to be very flexible, and more so than that of some other donors, particularly due to the possibility of accessing additional funds and shifting funds from planned development activities.

  • Syria: Danida’s Humanitarian Strategy remains relevant to the Syria crisis response, particularly the focus on vulnerability, protection, linking emergency and longer-term approaches, and the promotion of innovation. Partner capacity for scanning the environment to ensure that their responses remain relevant or can adapt to changing circumstances tended to be limited to short-term planning exercises.

  • Afghanistan: There is a clear strategic focus for engagement in Afghanistan, particularly a focus on vulnerability through a combination of longer-term support to refugees and IDPs and emergency assistance to those at risk of natural disasters and conflict.

  • Desk methods: The Context Analysis shows that priorities such as resilience remain relevant, while urbanisation presents a new challenge to humanitarian actors, and technology and innovation represent new opportunities. The range of new actors in the humanitarian field also has implications for Danida.

Synthesis (overall interpretation)

Overall, partners felt that the priorities in the Strategy have been relevant, even though changes in the humanitarian context mean that there are new challenges that the revised Strategy will need to address. Although the humanitarian context has been changing rapidly during the Strategy implementation period, aid agencies tend to be more focused on responding to immediate needs and risks than on scanning the environment for future threats.

2.4 Methodological challenges and limitations

This section sets out the key limitations to data collection throughout the evaluation, and how the team’s approach, methods and tools have affected the accuracy of the findings, confidence in the findings, and the reliability of the conclusions.

Use of a theory-based approach: As discussed above, the team originally planned to develop a theory of change for the Strategy, with the idea that it would be tested and reconstructed during the evaluation in order to assess Danida’s contribution to results. However, the team judged early in the evaluation that this approach was not feasible, on the basis of findings from the first stakeholder workshop with Danida staff in the inception phase, which showed that there were different views about how the Strategy would bring about change and what those changes might look like. The team therefore felt that it would be inappropriate to construct a theory retrospectively where no single underlying rationale for the Strategy existed, since doing so would have required the evaluation team to make a substantial number of assumptions. The team subsequently adapted its overarching evaluation approach to focus instead on participation and learning, while maintaining a robust approach to evidence assessment through the use of the evaluation matrix and evidence assessment framework. In the inception report, the team did not completely rule out a theory-based approach and aimed to explore the option of developing theories of change at the country level (i.e. examining how the Strategy is applied in a particular context) to provide a focus for field-level evidence gathering. However, this was also found to be unworkable owing to insufficient empirical evidence from the field on which to base a retrospective construction of a theory of change.

Identifying results of Strategy implementation from partner reporting: Consultations during the inception phase made it clear that Danida and its framework agreement NGO partners have struggled with reporting against the 47 strategic priorities in the current Strategy. In addition, as part of its partnership approach and its adherence to the GHD principles, Danida does not require partners to use a specific reporting format if they already have an internal format or one that they use for other donors. This has made it difficult to get an overview of results, to compare results across partners and, where a programme has multiple sources of funding, to identify Danida’s contribution to results. This is particularly the case since Danida provides a large proportion of flexible funding that is not earmarked to specific projects.[3] The team dealt with this challenge by (1) using the comparative partner analysis to identify which of the strategic priorities each partner is addressing through its programmes and how this contributes to the overall implementation of the Strategy; and (2) focusing on evaluating the Strategy rather than individual partner interventions, examining whether Danida’s decisions and strategic choices have been effective for implementing the Strategy and whether partners have effective monitoring and reporting systems in place to inform Danida of results.

Access to documentation: Access to documentation from Danida was a challenge for the evaluation team. The team relied on Danida to provide the full range of documents relating to its strategic partners (such as reports on reviews by the Technical Advisory Service and NGO capacity assessments) as well as other relevant reports. However, during the course of the evaluation, the team found that several documents were missing.[4] Although the team was able to locate public documents, such as the annual reports of international organisation partners, and Danida provided some additional documentation following the presentation of the draft report, some non-public documents remained unavailable.

Field site access: The case study teams encountered some constraints related to field site access. Only a limited number of partners were using Danida’s funding in Jordan and Lebanon, because they found the flexibility of Danida’s funding more useful for operations within Syria or for cross-border operations. As a result, the team’s ability to gather a wide range of evidence from multiple project site visits was more limited than expected, but the team was still able to gather sufficient evidence to answer the evaluation questions. In South Sudan, logistical and security constraints meant that the team was not able to visit all of Danida’s partner projects. However, given that this is an evaluation focused at the strategic level rather than a project evaluation, this was not found to constitute a major bias. For the Afghanistan desk study, owing to high staff turnover, Danida initially identified only three key informants within the MFA for interview; the team was later able to interview two Danida partners. The desk report makes this limitation clear when presenting findings and conclusions.


[1] A full bibliography of documents consulted is provided in Annex I.

[2] A full list of persons interviewed is provided in Annex H.

[3] This is the case, for example, for UN partners that have received regional funding for the Syria crisis, and for country-based pooled funds.

[4] For example, it did not have capacity assessment reports for two NGO partners and was missing at least one review report.


