Royal Danish Ministry of Foreign Affairs



1. The DAC Evaluation Quality Standards identify the key pillars needed for a quality evaluation process and product. They have been prepared by DAC members in order to define member countries’ expectations of evaluation processes and evaluation products. The Standards are not binding on member countries; rather, they serve as a guide to good practice and aim to improve the quality of development intervention evaluations. They are intended to contribute to a harmonised approach to evaluation in line with the principles of the Paris Declaration on Aid Effectiveness.14

2. The Standards are intended to:

- Provide standards for the process (conduct) and products (outputs) of evaluations.

- Facilitate the comparison of evaluations across countries (meta-evaluation).

- Facilitate partnerships and collaboration on joint evaluations.

- Better enable member countries to make use of each other’s evaluation findings and reports (including good practice and lessons learned).

- Streamline evaluation efforts.

3. The Standards support evaluations that adhere to the DAC Principles for the Evaluation of Development Assistance (including impartiality and independence, credibility and usefulness) and should be read in conjunction with those principles. The Principles focus on the management and institutional set-up of the evaluation systems within development agencies, and remain the benchmark against which OECD members are assessed in the DAC Peer Reviews. By contrast, the Standards provide guidance on the conduct of evaluations and the preparation of reports. While the Standards are not binding on every evaluation, they should be applied as widely as possible, and a brief explanation should be provided where this is not possible.

4. The term ’development intervention’ is used in the Standards as a general term for the subject of the evaluation. It may refer to an activity, project, programme, strategy, policy, topic, sector, operational area, institutional performance, etc.

5. The Standards recognise that the product of an evaluation may take a variety of forms, including oral or written reports, presentations and community workshops. The term ’evaluation report’ is used to cover all forms of evaluation products.

1. Rationale, purpose and objectives of an evaluation

1.1 The rationale of the evaluation
Describes why and for whom the evaluation is undertaken and why it is undertaken at a particular point in time.

1.2 The purpose of the evaluation
The evaluation purpose is in line with the learning and accountability functions of evaluations. For example, the evaluation’s purpose may be to:

- Contribute to improving an aid policy, procedure or technique;

- Consider a continuation or discontinuation of a project/programme;

- Account for aid expenditures to stakeholders and tax payers.

1.3 The objectives of the evaluation
The objectives of the evaluation specify what the evaluation aims to achieve.

For example:

- To ascertain results (output, outcome, impact) and assess the effectiveness, efficiency and relevance of a specific development intervention;

- To provide findings, conclusions and recommendations with respect to a specific policy, programme etc.

2. Evaluation scope

2.1 Scope of the evaluation
The scope of the evaluation is clearly defined by specifying the issues covered, funds actually spent, the time period, types of interventions, geographical coverage, target groups, as well as other elements of the development intervention addressed in the evaluation.

2.2 Intervention logic and findings
The evaluation report briefly describes and assesses the intervention logic and distinguishes between findings at the different levels: inputs, activities, outputs, outcomes and impacts. The report also provides a brief overall assessment of the intervention logic.

2.3 Evaluation criteria
The evaluation report applies the five DAC criteria for evaluating development assistance: relevance, efficiency, effectiveness, impact and sustainability. The criteria applied for the given evaluation are defined in unambiguous terms. If a particular criterion is not applied this is explained in the evaluation report, as are any additional criteria applied.

2.4 Evaluation questions
The questions asked, as well as any revisions to the original questions, are documented in the report so that readers can assess whether the evaluation team has addressed them sufficiently.

3. Context

3.1 The development and policy context
The evaluation report provides a description of the policy context relevant to the development intervention, including the development agency’s and partners’ policy documents, objectives and strategies.

The development context may refer to: the regional and national economy and levels of development.

The policy context may refer to: poverty reduction strategies, gender equality, environmental protection and human rights.

3.2 The institutional context
The evaluation report provides a description of the institutional environment and stakeholder involvement relevant to the development intervention, so that their influence can be identified and assessed.

3.3 The socio-political context
The evaluation report describes the socio-political context within which the intervention takes place, and its influence on the outcome and impact of the development intervention.

3.4 Implementation arrangements
The evaluation report describes the organisational arrangements established for implementation of the development intervention, including the roles of donors and partners.

4. Evaluation methodology

4.1 Explanation of the methodology used
The evaluation report describes and explains the evaluation method and process and discusses validity and reliability. It acknowledges any constraints encountered and their impact on the evaluation, including their impact on the independence of the evaluation. It details the methods and techniques used for data and information collection and processing. The choices are justified and limitations and shortcomings are explained.

4.2 Assessment of results
Methods for the assessment of results are specified. Attribution and contributing/confounding factors should be addressed. If indicators are used as a basis for results assessment, these should be SMART (specific, measurable, attainable, relevant and time-bound).

4.3 Relevant stakeholders consulted
Relevant stakeholders are involved in the evaluation process to identify issues and provide input for the evaluation. Both donors and partners are consulted. The evaluation report indicates the stakeholders consulted, the criteria for their selection and describes stakeholders’ participation. If less than the full range of stakeholders was consulted, the methods and reasons for selection of particular stakeholders are described.

4.4 Sampling
The evaluation report explains the selection of any sample. Limitations regarding the representativeness of the evaluation sample are identified.

4.5 Evaluation team
Evaluation teams should possess a mix of evaluative skills and thematic knowledge, be gender balanced, and include professionals from the countries or regions concerned.

5. Information sources

5.1 Transparency of information sources
The evaluation report describes the sources of information used (documentation, respondents, literature etc.) in sufficient detail, so that the adequacy of the information can be assessed. Complete lists of interviewees and documents consulted are included, to the extent that this does not conflict with the privacy and confidentiality of participants.

5.2 Reliability and accuracy of information sources
The evaluation cross-validates and critically assesses the information sources used and the validity of the data using a variety of methods and sources of information.

6. Independence

6.1 Independence of evaluators vis-à-vis stakeholders
The evaluation report indicates the degree of independence of the evaluators from the policy, operations and management function of the commissioning agent, implementers and beneficiaries. Possible conflicts of interest are addressed openly and honestly.

6.2 Free and open evaluation process
The evaluation team is able to work freely and without interference. It is assured of cooperation and access to all relevant information. The evaluation report indicates any obstruction which may have impacted on the process of evaluation.

7. Evaluation ethics

7.1 Evaluation conducted in a professional and ethical manner
The evaluation process shows sensitivity to gender, beliefs, manners and customs of all stakeholders and is undertaken with integrity and honesty. The rights and welfare of participants in the evaluation are protected. Anonymity and confidentiality of individual informants should be protected when requested and/or as required by law.

7.2 Acknowledgement of disagreements within the evaluation team
Evaluation team members should have the opportunity to dissociate themselves from particular judgements and recommendations. Any unresolved differences of opinion within the team should be acknowledged in the report.

8. Quality assurance

8.1 Incorporation of stakeholders’ comments
Stakeholders are given the opportunity to comment on findings, conclusions, recommendations and lessons learned. The evaluation report reflects these comments and acknowledges any substantive disagreements. In disputes about facts that can be verified, the evaluators should investigate and change the draft where necessary. In the case of opinion or interpretation, stakeholders’ comments should be reproduced verbatim, such as in an annex, to the extent that this does not conflict with the rights and welfare of participants.

8.2 Quality control
Quality control is exercised throughout the evaluation process. Depending on the evaluation’s scope and complexity, quality control is carried out either internally or through an external body, peer review, or reference group. Quality controls adhere to the principle of independence of the evaluator.

9. Relevance of the evaluation results

9.1 Formulation of evaluation findings
The evaluation findings are relevant to the object being evaluated and the purpose of the evaluation. The results should follow clearly from the evaluation questions and analysis of data, showing a clear line of evidence to support the conclusions. Any discrepancies between the planned and actual implementation of the object being evaluated are explained.

9.2 Evaluation implemented within the allotted time and budget
The evaluation is conducted and its results are made available in a timely manner in relation to the purpose of the evaluation. Unenvisaged changes to the timeframe and budget are explained in the report. Any discrepancies between the planned and actual implementation and products of the evaluation are explained.

9.3 Recommendations and lessons learned
Recommendations and lessons learned are relevant, targeted to the intended users and actionable within the responsibilities of those users. Recommendations are actionable proposals, and lessons learned are generalisations of conclusions applicable for wider use.

9.4 Use of evaluation
Evaluation requires an explicit acknowledgement and response from management regarding intended follow-up to the evaluation results. Management will ensure the systematic dissemination, storage and management of the output from the evaluation to ensure easy accessibility and to maximise the benefits of the evaluation’s findings.

10. Completeness

10.1 Evaluation questions answered by conclusions
The evaluation report answers all the questions and information needs detailed in the scope of the evaluation. Where this is not possible, reasons and explanations are provided.

10.2 Clarity of analysis
The analysis is structured with a logical flow. Data and information are presented, analysed and interpreted systematically. Findings and conclusions are clearly identified and flow logically from the analysis of the data and information. Underlying assumptions are made explicit and taken into account.

10.3 Distinction between conclusions, recommendations and lessons learned
Evaluation reports must distinguish clearly between findings, conclusions and recommendations. The evaluation presents conclusions, recommendations and lessons learned separately and with a clear logical distinction between them. Conclusions are substantiated by findings and analysis. Recommendations and lessons learned follow logically from the conclusions.

10.4 Clarity and representativeness of the summary
The evaluation report contains an executive summary. The summary provides an overview of the report, highlighting the main conclusions, recommendations and lessons learned.

14 Paris Declaration on Aid Effectiveness: Ownership, Harmonisation, Alignment, Results and Mutual Accountability.

This page forms part of the publication 'Evaluation Guidelines' as chapter 8 of 9
