about the scope of the program and were better able
to understand the many factors that contribute to the success of a program.
The interaction among the various participants
generated information and ideas that did not necessarily arise in
response to the individual questionnaires; it also brought forth
ideas from students and tutors that might not otherwise have emerged.
3. Provided Positive Feedback
Some of the coordinators had initially viewed evaluation as a process
of finding out everything that was "wrong" with their work.
However, we all found that the Evaluation Kit provided positive
feedback to program staff about the many good things that were
happening in programs.
4. Confirmed/Suggested Areas for Development
Where areas for improvement were suggested, they were accepted more
readily because they were seen as constructive criticism. In
some instances the program evaluation results only confirmed what
program staff already knew: the need, for example, to improve
goal-setting and lesson planning for students and tutors, or the need
to diversify fund-raising. In other instances new suggestions for
improvement were made, but often with the recognition that more
financial or material resources would be needed to implement them.
5. Raised Awareness of the Volume of Program Work
Some coordinators stated that the evaluation process gave tutors and
students a greater realization of the wide range and variety of work
to be done in a program. Some program participants subsequently made a
commitment to help coordinators get things done.
1. The Good Practice Statements Are Sound
We felt that the Good Practice Statements were generally well
researched and theoretically sound. We also agreed that they provided
an excellent basis for discussion and that they covered most elements
of the program that should be evaluated. We felt that even when used
on their own, without the supporting statements (some of which we found
troublesome -- see next section), they provided a valuable starting
point for discussion.