Self-evaluation for Quality Development
The evaluation takes place within the organization, which means that the designers of the evaluation and the evaluees (the subjects of the evaluation) are members of the same organization. In the case of the SEVAQ+ project, the evaluation concerns the knowledge and learning processes of an organization that are based on ICT-enhanced learning environments.
SEVAQ is a tool for the self-evaluation of eLearning quality in vocational education and training and in higher education. The tool offers both a core set of questions and customized evaluation possibilities. Evaluation results are available in real time and in a variety of forms, from radar graphs giving an instant picture to raw data for importing into other applications.
The tool was developed in a pilot Leonardo Da Vinci Project (2005-2007) in which nine European partners merged the Kirkpatrick evaluation approach and the quality model of EFQM to produce and implement a multifunctional dynamic tool in seven languages for the easy generation of self-evaluation questionnaires to gather high-quality feedback from learners.
Learners, whether students in a university or employees in a company, are given a questionnaire. The instrument is designed by actors in their organization, e.g. instructors or human resources staff, in order to assess how well the organization's ICT-supported learning processes are helping to achieve a certain objective.
SEVAQ has great potential to support quality assurance in technology-enhanced learning, pinpointing areas for improvement, tracking evolution, and enabling benchmarking. Market research has clearly shown the need for a 360° evaluation, identifying widespread recognition and certification or validation as critical success factors for the extensive take-up of SEVAQ.
The validation of questions is a challenging process: a question's validity depends on the context in which it is used, the way it is applied, and the target group it addresses. No question is generally valid across all domains, target groups, and application procedures.
As an additional complication, the validity of questions in the SEVAQ+ tool can be viewed on different levels: questions are assembled into composite factors called criteria, which are in turn aggregated into areas. The validity of an area depends heavily on the validity of the composition of its criteria, which in turn depends on the validity of the individual questions.
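The hierarchy described above can be sketched as a simple aggregation over nested structures. The following is a minimal illustration, not the actual SEVAQ+ data model: the area name, criterion names, question identifiers, and the averaging rule are all hypothetical assumptions chosen for the example.

```python
from statistics import mean

# Hypothetical structure: an area groups criteria; each criterion
# groups question identifiers. Scores roll up by averaging.
questionnaire = {
    "Learning Resources": {               # area (invented name)
        "Content quality": ["q1", "q2"],  # criterion -> question ids
        "Accessibility":   ["q3"],
    },
}

# Example learner responses on an assumed 1-5 scale.
responses = {"q1": 4, "q2": 5, "q3": 3}

def criterion_score(question_ids, responses):
    """Average the response scores of a criterion's questions."""
    return mean(responses[q] for q in question_ids)

def area_score(criteria, responses):
    """Average the scores of an area's criteria."""
    return mean(criterion_score(qs, responses) for qs in criteria.values())

for area, criteria in questionnaire.items():
    print(area, round(area_score(criteria, responses), 2))
```

The sketch makes the dependency visible: any doubt about a single question's validity propagates upward into the criterion and area scores built from it.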
Validating questions therefore also requires validating the composite indicators (the criteria and the areas). Combined with the different domains, contexts, and application procedures, this widens the field too far to evaluate validity within the given resources and scope of the SEVAQ+ project.
Therefore, the object of validation on which EFQUEL has recommended focusing will be the perceived usefulness of the questions from the point of view of the various stakeholders involved in the design, the evaluation, and the learning processes, as well as of people using the tool for management.
The SEVAQ+ project is coordinated by the University Nancy II, France.
Text partly extracted from Ehlers, U.-D., & Helmstedt, C. (2010). Working Paper for the SEVAQ+ Project. Essen.
Interested in the self-evaluation concept and the SEVAQ+ online tool? EFQUEL recommends participating in the SEVAQ+ workshop on 8 September (free of charge) and in the Parallel Session on self-evaluation taking place within the EFQUEL Innovation Forum 2010.