Task 3 - Construct key evaluation questions against criteria of merit
The third evaluation task aims to determine exactly what you want to know about the value of a learning initiative.
Front-end analysis of decision makers' needs allows you to establish the key evaluation questions and lines of enquiry that stakeholders want answered. This leads to a highly focused, purposeful evaluation that makes good use of limited time and resources, and whose findings are more likely to be taken up by stakeholders.
Constructing key evaluation questions
Key evaluation questions (KEQs) communicate the scope and focus of an evaluation, guide its conduct and set the structure of the report. KEQs are what make evaluative knowledge more useful and relevant than simply describing what has happened. No more than six KEQs should be agreed upon, as evaluating against each KEQ takes time and resources. Too many KEQs can also dilute the attention paid to the main issues of interest.
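For illustration only (these examples are not drawn from the Framework), KEQs for a hypothetical induction program might include:
- To what extent did participants apply what they learned back on the job?
- How well did the delivery mode suit the needs of participants in different locations?
- Did the initiative represent good value for the time and resources invested?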
Using evaluation criteria of merit
Evaluation criteria of merit are used to form evaluative judgements about the quality, value, merit and significance of an initiative. They are the specific dimensions used to distinguish a more successful and worthwhile initiative from one that is less successful and of less strategic value.
Criteria of merit serve as a prompt to ensure different aspects of the initiative are considered and receive the appropriate degree of emphasis. Criteria of merit are not intended to be applied in a routine, standard or fixed way for every initiative or used in a tick-box fashion.
The APS Learning Evaluation Framework's criteria of merit are based on a set from the Organisation for Economic Co-operation and Development's (OECD) Development Assistance Committee. The criteria are:
- Effectiveness
- Accessibility
- Flexibility
- Relevance
- Impact
- Efficiency
These criteria have been designed for use across all APS learning initiatives.
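By way of a hypothetical illustration, each criterion can prompt one or more KEQs: the Relevance criterion might prompt "How well does the initiative address current capability priorities?", while the Efficiency criterion might prompt "Could similar results have been achieved with fewer resources?"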
Fact sheet 7 in the handbook summarises KEQs and criteria of merit for evaluation.
Developing evaluation rubrics
Rubrics are a valuable tool for evaluation. They provide detailed descriptions of the agreed performance dimensions and make explicit the different levels of performance that can occur.
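As a hypothetical sketch (not drawn from the handbook), a rubric for judging an initiative against the Effectiveness criterion might describe performance levels along these lines:
- Excellent: clear evidence that most participants achieved the intended learning outcomes and applied them in their work
- Good: most participants achieved the intended learning outcomes, with some evidence of application
- Adequate: some participants achieved the intended learning outcomes, but evidence of application is limited
- Poor: little evidence that participants achieved the intended learning outcomes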
Fact sheet 8 in the handbook discusses rubrics in evaluation and illustrates two hypothetical examples.
Contact the APS Academy
For further information and support, or to provide feedback on the Handbook, please visit the APS Academy's contact page.