Evaluation, adaptability and meta-evaluation

Agile Evaluation – based on the Agile Development model

[Figure: Agile Evaluation model, adapted from the Agile software development poster]

As part of the Digitally Ready project, we have recognised that the digital landscape (i.e. the tools, software and systems available and in use) is changing at an unprecedented rate. This not only heightens the need for people to understand how to evaluate and learn new systems, but also motivates a step-change in the way digitally-related projects are managed. Agile software development is not without its own issues, but it can be seen as an adaptive method that tries to ensure software meets the current needs of its stakeholders, rather than reflecting a set of user requirements captured as a snapshot of a historical need. The same approach is arguably increasingly necessary in other kinds of project, and we feel it is appropriate for Digitally Ready.

Evaluation of the project deliverables (strategies, strategy advice, learning materials and workshops) is always necessary, and is built into the way we work – producing pilots, discussing and demonstrating them to stakeholders, and revising as necessary. Evaluation is also a key part of good project management: we review the processes and methods used to deliver the project outputs and revise our ways of working so that we can deliver the greatest possible positive impact.

While reflecting on the nature of the project, we realised that the evaluation itself needs to be adaptive, so that we can capture outcomes (impacts we had not necessarily foreseen) and make use of emerging knowledge about how people are engaging with technology and developing skills and practices. This is all the more important during a period of institutional change: we have a new Vice Chancellor, a new Director of ITS and other significant changes in senior personnel, as well as changes to the organisational structures of the institution.

We aim to include an ‘evaluation’ section in each piece of our dissemination, reflecting on the topic at hand, and we invite stakeholder evaluation of all of our work through the comments section on the blog (or through direct contact via email or in person, if you feel that is more suitable).

Evaluation

Revisiting the evaluation plan in the light of our increased knowledge of current and future changes in the institution has led us to believe that the evaluation methodology needed to be adapted to suit this shifting environment. Basing the proposed evaluation framework on an existing model (agile software development) has enabled us to design a robust, open method of evaluation which we hope will provide timely and relevant feedback. It also opens the door to an open ‘meta-evaluation’, allowing the evaluation methods themselves to be evaluated and reviewed by peers and stakeholders.

