Although I’ve written many posts on meetings with pilot leads and interested parties, it’s occurred to me that I’ve never fully elaborated on my original post (“Invitation to Pilot”) and explained how we are evaluating the tools we develop as we go along.
In development, we have opted for a rapid prototyping approach, which means building basic versions of each tool (based on the bid and on the scoping documents as approved by the Steering Group) so that we can take these to the actual end-users. We have then been gathering their feedback on the interface, on how they see themselves using the tools, and, just as crucially, on how they see their students using them. This feedback informs further development, all the while keeping the scoping documents in mind (though we have reported back to the Steering Group on tools like the Tagging and Recommender widgets, where there have been differences in understanding of how these would work).
You can see some of the feedback we have gathered by looking at the Pilots and Implementation category, which details just some of the meetings we have had.
At least some of the technical evaluation is therefore ongoing, as we check our progress against the requirements of tutors and lecturers who are looking at our developments with an eye to how they will use them with their students and courses. There will be other considerations in our overall technical evaluation, such as documentation, interoperability, and sustainability, but these will be outlined in more detail in the evaluation phase.