Task 2 - Clarify the object being evaluated
Not everything can be, or needs to be, evaluated. This Handbook is for evaluating learning initiatives: sets of related inputs and activities with the long-term aim of helping to achieve agency purposes or strategic objectives.
To better understand what is being evaluated and confirm that it can actually be evaluated, this second evaluation task involves clarifying the object to be evaluated.
Logic modelling for evaluation
In the learning and development sector, Kirkpatrick’s four-level outcomes hierarchy has been a popular approach to learning evaluation for over 40 years. Unfortunately, this approach oversimplifies the complexity of learning pathways and overlooks the individual and organisational factors that may be helping or hindering application of the learning in the workplace.
Learning evaluators are now moving towards more sophisticated forms of logic modelling which capture the complex reality of learning - a practice widely adopted in the broader evaluation field.
To avoid a ‘one size fits all’ approach to evaluation, the object of the evaluation needs to be clearly specified before moving on to subsequent tasks. When we clarify a learning initiative through the lens of evaluation, we aim to:
- Establish the strategic value of the initiative;
- Ascertain the diversity of learner inputs and learning pathways;
- Provide a framing for a useful evaluation; and
- Strengthen evaluative thinking.
Using logic models
To develop and use an effective logic model (also referred to as program logic, program theory, or theory of change), a two-step clarification process should be followed:
Step 1 - Describe the learning initiative
This step should focus on reaching agreement about how the initiative makes a difference and is of strategic value; this will clarify what should be included within the scope of the evaluation.
Step 2 - Develop the logic model
Logic models are a powerful tool for explaining the causal relationships between how an initiative operates and the results it is expected to achieve. They can be used to fully describe the tangible inputs and activities, the intended sequence of results that produce learning transfer in the workplace, and the effects of that transfer on agency performance.
Potential negative outcomes, which make things worse rather than better, should also be included. In addition, logic models air assumptions and the reasoning behind why the change process is understood to happen in the way described, why it is of strategic value, and what is important about the initiative.
Logic models draw on internal and external sources of expertise, including learning practitioners’ tacit knowledge, stakeholders’ experience and lessons from previous evaluations. Because logic models guide an evaluation, stakeholders must confirm that the model is accurate.
Fact sheet 6 in the handbook describes the main elements to include when developing logic models in evaluation.
Contact the APS Academy
For further information and support, or to provide feedback on the Handbook, please visit the APS Academy's contact page.