Revista Electrónica de Investigación Educativa


Vol. 5, No. 1, 2003

Weaknesses of the Quality Evaluation
Process in the Spanish University: Causes,
Consequences and Proposals for Improvement

Clemente Rodríguez Sabiote   (*)
clerosa@ugr.es

José Gutiérrez Pérez   (*)
jguti@ugr.es

*  Facultad de Ciencias de la Educación
Departamento de Métodos de Investigación y Diagnóstico en Educación

Universidad de Granada

Campus de Cartuja s/n, 18071
Granada, España

(Received: February 11, 2002; accepted for publishing: January 16, 2003)

 

Abstract

The process of quality assessment under way in the Spanish university since the early nineties has generated undeniable advances. However, a series of weaknesses has been detected that have affected the process, with consequences that could not be predicted a priori. This article reviews some of these weaknesses, together with their possible causes and consequences. Suggestions are provided for improving the process of evaluating the quality of university institutions.

Key words: Quality evaluation in the university, performance indicators in the university.

 

Introduction

Currently, the evaluation of universities is widespread in the countries of Europe and, of course, has a long tradition in Anglo-Saxon culture. Certain Latin American countries have also begun to introduce experiments in this direction.1 In fact, the growing interest in quality has given rise to ongoing evaluation of the university system (Neave, 1997). In the European context, over the past two decades, most countries immersed in processes of institutional evaluation have developed models involving a change in the state’s role in the control of higher education. In little more than a decade the state has gone from controller to supervisor. The higher education institutions themselves, in exercising their responsibilities, report their achievements to the administrative authorities and to the bodies that fund them, as well as to the rest of the higher education system and to the society that sustains them, and they try to carry out improvement measures in their areas of greatest weakness. The universities’ quality management necessarily depends on the development of their institutional autonomy, and implies the promotion of self-regulatory processes (Neave and Van Vught, 1991). This new vision demands not only more and better evaluation of the institutions, but also a different perspective that assumes, in addition to an audit of the system’s performance, the implementation of procedures for its improvement (Hüfner and Rau, 1987).

In Spain, specifically, the process of institutional evaluation goes back, first, to the Experimental Program to Evaluate the Quality of the University System, carried out between 1992 and 1994, whose basic guidelines are contained in the report of the First Experimental Pilot Program for University Quality Assessment; and second, to the European Pilot Project carried out between 1994 and 1995, whose objectives and main results can be found in the document entitled “Projets pilotes européens pour l’évaluation de la qualité dans l’enseignement supérieur: lignes directrices pour les établissements participants” (Donaldson, Staropoli, Ottenwaelter, Thune, and Vroeijenstijn, 1994).

Finally, in 1995, the enactment of Royal Decree 1947/95 launched the First National Plan for University Quality Assessment (NPUQA), in force for the years 1996-2000. We would like to emphasize its guiding importance and its contribution to spreading a certain “culture of evaluation” through the upper levels of Spain’s education system, while also noting some of the empirically demonstrated weaknesses, and their consequences, during these years of operation under a predominantly descriptive/supervisory focus. With the launching of the Second University Quality Plan (UQP) for 2001-2006 by Royal Decree 408/2001, a new horizon opened in the evaluation of Spanish university excellence. It led to systems of degree accreditation that allow the verification of minimum quality standards (for internal use) and the demonstration of a distinguishing mark (for external consumption), making it possible to compete under the best conditions in every type of market, from a more modern evaluative approach highly concerned with putting internal correction systems into action. Still, we should point out the weaknesses that affected the NPUQA in order to avoid them in the future.

 

1. Some causes of weakness in the evaluative process of Spanish universities

In the Spanish university there are a number of constraints that have impeded the full development of the now-concluded First National Plan for University Quality Assessment to the extent and magnitude intended. Although Spain has joined the group of developed countries which today are implementing processes for the evaluation of their universities, the Spanish university lags somewhat behind those of neighboring countries such as the United Kingdom, the Netherlands and France. In addition, the Spanish university is affected by a series of conditions that have contributed to the weakness of the process of evaluating university quality in these early stages. Among the reasons cited by De Miguel (1999a, p. 105) and Gutiérrez (2000, p. 2), we emphasize here some that we consider fundamental:

1.1. The dubious executive capacity of the governing bodies of public universities compared with their private-sector counterparts

This aspect refers to the diffuse and weak capacity for autonomy and leadership which prevails in many Spanish universities, owing to statutory developments that ignore or only superficially address these issues. The entrenched corporatist tendency of the university structure, on the one hand, and the complex network of collegiate bodies, on the other,2 act as a shock absorber and filter that slows down decision-making processes, makes them more complex, and draws them out to the point of impeding the practical and operational execution of any decision, postponing it indefinitely. As Vicente (2001, p. 1) puts it:

The audits were limited to self-evaluations and external evaluations carried out by “friends” from other “branches”; but the truth is that nobody knows exactly whether there are “losses” or “benefits” and, what is worse, nobody demands them; nobody knows of any case in which an academic authority has resigned or has been penalized because of a university’s low performance.

Thus, contrary to one of the objectives of the NPUQA:

To provide the educational authorities and the Council of Universities with objective information on the level of quality achieved by the universities, to serve as a basis for decision-making within their respective spheres of competence (Royal Decree 1947/95, Article 1, paragraph 3, p. 35.473).

We affirm that information has not been transmitted with the desirable fluency (De Miguel, 1999a, p. 110), nor have the reports issued served as a basis for decisions aimed at accountability, at improving the situation, or at making commitments to social change (De Miguel, 1999c).

In addition to the network of collegiate bodies constituting the structure of governance and representation of the Spanish university, which persists despite the implementation of the Organic Law of Universities (OLU)3 in 2001, there is another set of conditions and circumstances contributing to the development and maintenance of a weak executive capacity. In this sense, we emphasize the presence of three different levels participating, sometimes with little coordination, in the process of quality evaluation: Level 1, which consists of the Council of Universities and the General Secretariat of the Council of Universities, agencies of the Ministry of Education, Culture and Sports; Level 2, consisting of the Autonomous Agencies for University Quality, which answer to the Autonomous Communities; and Level 3, made up of the Evaluation Committees, the Self-Evaluation Committees and the External Evaluation Committees, the first two of which are bodies of the unit evaluated, and the third, of the Council of Universities.

This structure has brought about administrative sluggishness and various disturbances derived from the lack of organization and coordination among the levels described, as well as insufficient means for carrying out the necessary tasks in the time expected. The consequence of this accumulation of malfunctions has been a dearth of decision-making once the dysfunctions contained in the final evaluation reports were known. In this regard, Coba (2001, p. 386) has commented that “the limitations of NPUQA, to summarize them in a sentence, have been the scant involvement and impact which the results of evaluation have had beyond the evaluated units themselves.” Finally, we emphasize the difficulty of extrapolating the business sector’s quality management techniques to any service organization (in our case, the university). In this sense, Quintanilla (1998, p. 88) distinguishes between:

a) Conceptual problems: the difficulties of transferring the concepts of quality and corporate control to the university, as well as the difficulty of clearly identifying the clients of the higher education service.
b) Methodological problems: those associated with the need to articulate different dimensions and levels in the evaluation of university quality, and to fit the methodology of quality evaluation together with other standard practices of the academic culture.

1.2. The disconnection between the State, the university and regional governments in the identification and formulation of common objectives

In the first place, this aspect may contribute to the dilution of the rights and duties that exist in relation to, and on behalf of, the universities. This is because neither are the universities capable of taking the initiative on their own, nor do the regional governments readily come to terms with the guidelines of the central government. Thus, in many cases, the disconnection is due to strong discrepancies between the political approaches underlying the central government and those of the regional administrations.

The administrative organization of Spain is geographically distributed over regional communities with ample competencies in the various areas of social, political and economic life (health, social services, education, etc.). It forms a heterogeneous mosaic in which understanding is not always easy.4 This situation, although desirable in a democratic state, creates some mismatches: the result of misunderstandings between the State, the autonomous communities and the universities, and of a lack of coordination between the public authorities and the universities' goals. While these clashes have been particularly virulent at the individual level, when the universities come together and express their problems collectively to the public authorities within the Council of Universities, the situation becomes saturated with frictions, evinced by endless debates and heated verbal confrontations (Pérez Rubalcaba, 1997). In this sense, a rapprochement free of ideology seems reasonable: a joint project to articulate a set of goals, strategies and activities for evaluating the university's activities, fully agreed upon by the academic institutions, the regional governments and even the central government.5 Some of the conclusions reached by the work group on the university and the public authorities at the seminar "The Objectives of the University Facing the New Century" (Conference of Rectors of Spanish Universities [CRUE], 1997, p. 1) have already pointed in this direction:

It is necessary to find, with mutual loyalty and service to the public interest, a satisfactory balance between the autonomy of the universities, recognized constitutionally, and the powers of the autonomous parliaments and governments. The framework of relations between the universities and the public authorities, through coordinating bodies, must also be reviewed, so as to ensure a satisfactory relationship between the academic and the political-economic aspects of the Spanish university system. It is urgent to have a stable but flexible framework which would avoid continuous and costly reforms of the rules relating to curricula and other aspects of university education.

However, in spite of all this, we cannot overlook the encouraging measures that have been taken in connection with the OLU, with the creation of new management and coordination bodies involving the three administrations implicated in the university's future. Thus, one can read the following in paragraph IV of the OLU preamble:

The Council for University Coordination will be the highest consultative and advisory body of the university system, and is configured as a forum for encounter and debate among the three administrations that converge in the university system: State, Regional and University (Organic Law of Universities [OLU], paragraph IV, p. 49.402).

 

2. Weaknesses in the university quality evaluation in the context of Spain

Having identified two of the root causes that may contribute to the weakening of university quality evaluation in Spain, we now describe some of the weaknesses themselves:

2.1. The absence of a systematic method of data collection to support evaluation

This weakness refers to the lack of guidance available regarding the information-gathering instruments that should be used in the process of evaluating universities. While the NPUQA guides and those of the current UQP make it clear which indicators should be measured, it remains uncertain what instruments should be used, and by means of what methodological strategies. The indications are too generic,6 and of course provide little clarification for a group of agents who are usually inadequately trained in research. Should standardized instruments be used, or should instruments be developed ad hoc? The agents involved in the centers undergoing self-study are at a loss, and this undoubtedly slows down the evaluation process, except in cases where these agents have consistently participated as external assessors and have developed a self-report mechanism.

Systematic work processes centered on specific aspects of the university system (e.g., the evaluation of degree programs and teaching) have recently been initiated. However, we are far from having an integrated system of indicators with which to address the complexity of the system. This is compounded by the need to make explicit the theoretical reference models that would support such systems of indicators.

2.2. Lack of mechanisms for analyzing and validating the information collected

Even if we can overcome this first hurdle (that of the systematic collection of information), we find ourselves facing others:

a) The management, tabulation and analysis of the data: laborious tasks, especially when most of the data, apart from some specific items, are qualitative, based on individual interviews with different actors (teachers, students, administrative and service personnel, and the governing team of the school or university).
b) The lack of criteria for validating the information gathered. Qualitative data require systematic triangulation strategies that allow the information from more than one type of agent to be compared. This may not be feasible, because it often generates more information than can be processed. In addition, teams often have no specific training in qualitative data analysis and triangulation.

Both aspects have much to do with methodological training that the internal and external evaluators involved in the evaluation process do not necessarily possess: training that involves knowing the technical criteria concerning the quality of the instruments used for collecting information, the nature and type of sample used in the explorations carried out, and the parameters of validity and reliability, whether from the perspective of classical test theory or from qualitative criteria (credibility, transferability).

2.3. Lack of executive actions for the implementation of immediate improvements, and lack of financial support that would make them operational

The publication of the evaluation report and the drawing of a number of conclusions could trigger different types of action: accountability, improvement processes (development), and serving as a source of knowledge and commitment to social change.

Unfortunately, the reports of results presented to date7 do not take up any of these functions seriously or decisively, which points to the considerable executive deficit suffered by the Spanish university. Much is known about the disruptions that afflict our university, but little or nothing is done to solve them. In this sense, Pérez García (1998, pp. 123-124) indicates that for evaluation to bear fruit, it must first be possible, and it must also contribute to the improvement of information and to the production of strategic planning that would serve as a basis for executive actions grounded in the results obtained through the evaluation process. Finally, Harman (1999) expresses the opinion that reporting and monitoring are vital elements of any worthwhile program of quality assurance, although the biggest challenge lies in designing effective and workable methods for achieving improvements. Some regions are experimenting with operational models of program-contracts for the implementation of progressive improvements in different universities,8 but until these are structured and a consensus is reached on global and local strategic plans for assessing the quality of higher education, all these measures will remain isolated actions of little consequence.

Nothing is more illustrative of this argument than the processes of institutional evaluation of teaching,9 in which students use more or less standardized Likert10 rating scales to assess overall compliance with teaching obligations, working method, motivational skill, the relevance of course programs, and the adequacy of assessment models. Even when a teacher receives a low rating on these scales, the results affect neither the payment of the teacher's five-year salary increments (granted automatically) nor his or her future promotion and tenure. The question, then, is what reasons lead a university to implement teaching-quality evaluation practices if the results will have no impact. As Tejedor (2000) notes, the results obtained should give rise to decision-making processes and, when necessary, to patterns of renewal. The formative sense of evaluation rests on the assumption that the information provided will encourage the group of professionals (teachers, researchers and technicians) to make the relevant changes.
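By way of illustration only, the following sketch (in Python, with invented dimension names and ratings) shows one simple way such Likert responses might be aggregated into per-dimension and overall scores; actual institutional instruments may weight items or report results quite differently.

# Illustrative sketch only: aggregating five-point Likert ratings of teaching.
# Dimension names and student ratings are invented.
from statistics import mean

# Each list holds ratings from 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "compliance_with_obligations": [4, 5, 3, 4, 4],
    "working_method": [3, 3, 2, 4, 3],
    "motivational_skill": [2, 3, 3, 2, 4],
    "relevance_of_program": [4, 4, 5, 4, 3],
    "adequacy_of_assessment": [3, 2, 3, 3, 4],
}

# Per-dimension means and a simple unweighted overall score.
per_dimension = {dim: mean(scores) for dim, scores in responses.items()}
overall = mean(per_dimension.values())

for dim, score in sorted(per_dimension.items()):
    print(f"{dim}: {score:.2f}")
print(f"overall: {overall:.2f}")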

In this regard, various factors have been considered in the evaluation of teachers, which De Miguel (1998, pp. 70-72) systematized under three criteria: productivity, teaching competence, and excellence or professional development. Under the first criterion, productivity, fall the merit-pay and pay-for-performance models; in both cases teachers' salaries depend on their productivity as measured by students' academic performance. These models of remuneration enjoyed a resurgence in the U.S. during the Reagan administration; however, due to the disappointing results achieved by many programs, they have since been modified or eliminated.

Thus, there are experiences that have empirically demonstrated their inefficiency. In this regard, Clotfelter and Ladd (1996, quoted by Muir, n.d.) confirmed that in a number of Dallas schools where the merit-pay model was applied, the students had already increased their academic performance before the model was put into practice; the merit-pay model could therefore not be credited with the increase. Another example found by Muir (n.d.) concerns the experimental implementation of pay for performance during the Nixon administration in the United States. In this experience teachers were paid on the basis of students' reading achievement; so, to maximize their pay, some teachers focused their attention on students of average performance, ignoring the better students because they considered them able to manage on their own, and also ignoring the weakest ones, because they required too much attention and offered little return.

Consequently, these and other experiences and studies discouraged the implementation of such models, which are nonetheless still applied in some countries. In short, there is no empirical evidence showing that merit-pay models improve student achievement, and yet they can generate perverse side effects, some of which Kirkpatrick (2001) and Muir (n.d.) have reviewed.

Suffice it to recall the words of the famous liberal economist Milton Friedman, cited by Kirkpatrick (2001, p. 1) with respect to merit pay models: “Merit pay works in a competitive marketplace, not in a socialistic enterprise such as the public school system”.

2.4. Imbalances that affect the process of selecting and training assessors

The working methodology of the NPUQA was based on a model combining self-evaluation (Maassen, 1987; Vroeijenstijn and Acherman, 1990; Van Vught and Westerheijden, 1995) with peer review (Kells, 1992; Vroeijenstijn, 1995) and performance indicators recommended by the evaluating agency (Frazer, 1997).

In line with this, Harman (1999, p. 1) believes that quality assurance in the evaluation of the university depends on:

A combination of a limited number of key methodologies, the most important of which are: self-studies; peer review by panels of experts, usually involving at least some external members; the use of relevant statistical information and performance indicators; and surveys of key groups, such as students, graduates and employers.

Thus, this work structure gave rise to the selection and training of internal evaluation committees (IEC) and external ones (EEC). In both cases the selection process was not without problems, several of which De Miguel (1999a, p. 107) has noted.

2.5. Inadequate functioning of the evaluation committees

In general terms the performance of the IECs can be described as unsatisfactory. To the non-disclosure of the reports, an aspect outside the control of the members of these committees, should be added their merely descriptive and politically correct nature. De Miguel (1999a, p. 108) highlights a number of problems that affect the work of the IECs.

We, for our part, would add a problem common to both committees: the dilemma of voluntary versus mandatory membership. If participation is voluntary, we run the risk of low involvement, lack of motivation in the work, and the inclusion of biased data with little or no verification. If, on the other hand, participation is mandatory, we infringe upon the principle of professional autonomy of some university agents. If adequate compensation mechanisms (economic or academic) are not also established, the committee members' work may be affected by incompetence, lack of motivation and lack of coordination. In short, the debate over the type of participation in the evaluation process is evident, and the solution is not at all easy. In fact, Harman (1999, p. 1) believes that:

An important variation between quality assurance systems is whether participation is voluntary or compulsory. Many countries began with institutional audits on a voluntary basis [...]. Generally, however, with national reviews of disciplines participation is compulsory. Even when participation in such reviews is voluntary, strong moral and professional pressures often operate in institutions.

2.6. Questioning performance indicators in terms of how they are drawn up, applied and interpreted

The first important idea in this regard is that performance indicators, although they constitute one of the most popular tools for evaluating teaching, may involve a certain danger in three distinct respects (De Miguel, 1999b, pp. 1-4; Mora, 1999, pp. 2-3):

a) The quality with which they are drawn up.
b) The more or less correct use made of them.
c) The sphere to which the evaluation refers.

The first two aspects are what we call technical difficulties. Thus, in speaking of the quality of the indicators, we mean problems centered on how they are constructed, what criteria should guide their selection, and what theoretical foundation justifies their production and application. Based on these aspects, Wyatt, Ruby, Norton, Davies and Shrubb (1989, p. 65) and Osoro and Salvador (1994, p. 279) established a set of criteria, still valid today, for the selection of indicators.11 These are:

1. Importance and use: the value of the information for policy development, audience interest and accountability.
2. Technical quality: based on the content validity and the reliability of the information collected.
3. Feasibility: based on the ease and cost of careful data collection, analysis and reporting, and the simplicity of the information.

As for their use and application, most of the criticisms focus on aspects of policy and practice concerning several issues.

The third aspect is the one referring to the scope of the evaluation. In this regard we find that the context evaluated is less controversial in some cases than in others. There are disciplines, such as economics, where assessment per se is considered something inherent and necessary, but in other disciplines, such as education, evaluation is not always well received. Moreover, within this discipline we find areas where institutional assessment is less controversial (research and management), and others where it is much more problematic (teaching-learning processes). The explanation of such phenomena may lie in the fact that teaching-learning processes are difficult to operationalize, and consequently, questions arise about whether it is appropriate to collect information about them using performance indicators constructed with objective approaches.

 

3. Suggestions for improvement

Given the full range of consequences described, which undoubtedly weaken the product of an institutional evaluation, it seems appropriate that around each university a framework be defined that would guarantee the continuity of these initiatives. It is society, represented by the regional governments, which should require the continued application of these evaluation tools and facilitate the means to attend to at least the following set of fundamental objectives:

1) Equip the volunteers participating in the processes of internal evaluation (self-evaluation) and external evaluation (teachers, students and the personnel of administration and services [PAS]) with a set of basic ideas and research strategies for assessing university quality.

The concept of the institutional evaluation of quality proposed by various authors (De Miguel, Mora and Rodríguez, 1991; Vroeijenstijn, 1995; Rodríguez, 1995; Van Vught and Westerheijden, 1995) is a model which postulates the integration of different types of evaluation: self-evaluation plus external evaluation, that is, the use of internal indicators arising from self-regulation, together with peer review.

What is really crucial about this concept of university quality assessment, according to De Miguel (1997a, p. 172), is its comprehensive dimension, which addresses processes as well as products, effectiveness as well as efficiency, improvement, quality management and the procedures established to assure it. On these parameters, evaluation takes a three-pronged approach: quality control, quality management and quality assurance, overcoming the exclusivity of the modalities of quality audit and quality assessment, which have a clearly external orientation (Rodríguez, 1995; De Miguel, 1997b).

To establish the methodological strategies for this evaluation process, a series of models with different phases and procedures has been produced (House, 1993; Van Vught, 1995; Westerheijden, 1996; Rodríguez, 1997). In our judgment, the Achilles heel of most of these models is the first phase, that of sensitization and preparation: the selection of the institution's internal staff who will take part in the evaluation process, and the appointment of the people who will be responsible for directing it (the evaluation committee). We say this because, in the selection of such members, some criteria that should be weighed are not taken into account:

a) The participants' prior competency in institutional evaluation.
b) The members' level of knowledge about research processes.
c) The participants' level of involvement and personal interest in quality evaluation.

As a result, the objectives of the evaluation may be known (the institution's strengths and weaknesses, and the opportunities and threats facing its future development), but little or nothing is known about the research sequence needed to achieve them: the methodologies to choose, the tools for gathering information, the data-analysis strategies to use, or the quality criteria for the data-collection instruments to be employed, for example.

2) Take into account objective indicators that can be measured, but at the same time solicit observations, opinions and corroborated judgments (evidence), and apply both in a manner fair to the starting point of each setting evaluated.

We refer to the desirability of continuing to maintain the objective indicators contemplated by the UQP guide and the model of indicators proposed by De Miguel (1999b), but complementing both with the contributions that more qualitative strategies can generate. The emphasis on a greater role for (quantitative) performance indicators in the evaluation process seems geared toward replacing the assessment processes themselves, understood as value judgments contextualized and shared by the agents, with bureaucratic and conclusive procedures that pay little attention to the environment, to the specific characteristics of each unit, and to the opinion and participation of those involved (Apodaca, 2001, p. 371). With this methodological orientation, transparency and clarity are gained, although it is possible to confuse statistical data with real performance indicators, which should refer to goals, values, context and specific times (Dochy et al., 1990b). Frackmann (1991) goes further, indicating that there is a risk of thinking that the institutions' objectives are the same as the indicators. It is therefore appropriate to consider the value judgments and the evidence (quantitative data, documents, opinions, etc.) of the agents involved in the evaluation process, together with the objective indicators it has been decided to contemplate (Shadish, Cook and Leviton, 1991; Apodaca and Grao, 1996).

3) Establish technical criteria for the selection of measurement indicators.

The choice of performance indicators in institutional university evaluation must be supported by a number of technical criteria, such as those proposed by Cave, Hanney, Henkel and Kogan (1988) and Tognolini (1991).

In addition, Osoro and Salvador (1994) propose criteria of their own.

4) Provide the evaluating bodies with executive powers and liaison capacity (once the results are known), so that they can implement remedial measures, whether in the line of accountability, of improvement, or both.

The last phase of the evaluation model considered includes the development of an improvement plan, which would include the goals to be achieved, the actions to be taken to achieve them, provision for control systems, and the allocation of the necessary resources. So far, we do not know whether any improvement plan has been either designed or implemented.

Higher education institutions, therefore, must be committed to undergoing transparent internal and external evaluations, conducted openly by independent specialists (Mateo, 2001, pp. 642-643). In this sense, the possible creation of the National Agency for Quality Assessment, proposed in the 2000 University Report,12 and the recent OLU contribute to the effective implementation of improvements through an accreditation process centralized in a body independent of the university. In this regard, authors such as De la Plaza and Peces-Barba are of the opinion that this independence falls short, owing to the excessive patronage of the central government and the opacity of the agency's creation process ("Creación de la Agencia Nacional de Evaluación", 2002).

In any case it seems reasonable, as Mora (2001, p. 393) indicates, to undertake courses of action in parallel with existing assessment processes:

The accreditation model makes a distinction between the terms certification and accreditation. The first (certification) alludes to the process which guarantees that an organization meets the requirements for quality assurance; the second (accreditation) directly affects the validation process of certification agencies.

Contrasting with this model is that of institutional evaluation, which has aims similar to those of the accreditation model but obvious differences: it is oriented toward making judgments, not verdicts; it is not always determined by standards; and it is situated basically within a framework of self-regulation generated inside the organization. Thus, the fundamental difference between the two procedures may be considered in terms of standards as contrasted with purposes.

It is not the same thing to evaluate an institution or academic program in terms of quality standards established by certain agencies, as it is to estimate the adaptation of the processes and the quality of the results obtained by an institution based on the objectives it proposed to achieve. (De Miguel, 2001, p. 399)

The two models are compatible, and can be established in conjunction with the use of performance indicators and with the setting up of a financial mechanism to improve quality. In both cases, the undesirable effects of a misuse of performance indicators and of funding with quality as a goal should be considered. The production, use and interpretation of indicators have already been discussed at length above. With regard to funding as a quality incentive, we find the option called the strategic use of funding (Villarreal, 1998), which answers to the growing government concern with increasing the effectiveness and efficiency of the universities' functioning (Wagner, 1996). This concern can be channeled through different strategies. Villarreal (1998, pp. 170-171) proposes the following:

1. The use of incentives linked to performance level, measured in relation to the achievement of certain objectives (quality indicators), in funding programs accessible to all universities (a hypothetical sketch follows this list);
2. The use of bilateral program-contracts, in which the administration determines the objectives to be achieved by each individual university and the funding linked to those objectives;
3. The establishment of competitive funding programs, in which the complementary resources are linked to competitive processes, with an objective evaluation process and a monitoring process to guide the decision on whether or not to continue allocating such resources.
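As a purely hypothetical illustration of the first strategy, the sketch below (in Python, with invented figures and indicator names) shows how an incentive allocation might be linked to the degree of achievement of agreed quality indicators; it is not drawn from Villarreal's proposal or from any actual funding formula.

# Hypothetical sketch of indicator-linked incentive funding; all figures and
# indicator names are invented for illustration.

def incentive_funding(base, bonus_pool, achievement):
    """Return base funding plus a share of the bonus pool proportional to the
    average achievement ratio (0.0-1.0) across the agreed quality indicators."""
    ratios = [min(max(r, 0.0), 1.0) for r in achievement.values()]
    average = sum(ratios) / len(ratios) if ratios else 0.0
    return base + bonus_pool * average

# Example: a university meets 80% of its graduation-rate target and 60% of its
# research-output target.
total = incentive_funding(
    base=10_000_000,
    bonus_pool=1_500_000,
    achievement={"graduation_rate": 0.80, "research_output": 0.60},
)
print(f"Total allocation: {total:,.0f}")  # prints: Total allocation: 11,050,000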

 

References

Apodaca, P. (2001). Calidad y evaluación de la educación superior: situación actual y prospectiva. Revista de Investigación Educativa, 19 (2), 367-382.

Apodaca, P. & Grao, J. (1996). Autoevaluación, planificación estratégica y calidad total en la evaluación y mejora de la educación superior. In J. Tejedor & J. Rodríguez (Eds.). Evaluación educativa. II Evaluación Institucional (pp. 90-110). Salamanca: Universidad de Salamanca.

Barnetson, B. (1999). Key performance indicators in higher education. Alberta: Alberta Colleges and Institutes Faculties Association.

Benítez, C. (2002). Los estímulos académicos ¿instrumentos de superación o medios de compensación salarial? [Review of the book: La experiencia institucional con los programas de estímulos: La UNAM en el periodo 1990-1996]. Revista Electrónica de Investigación Educativa, 4 (1). Retrieved August 5, 2002, from: http://redie.ens.uabc.mx/vol4no1/contenido-benitez2.html

Berry, C. (1999). University league tables: artefacts and inconsistencies in individual rankings. Higher Education Review, 31 (3), pp. 3-10.

Bormans, M., Brouwer, R., In´t Veld, R. J., & Mertens, F. J. (1987). The role of performance indicators in improving the dialogue between government and universities. International Journal of Management in Higher Education, 11 (2), 181-194.

Bricall, J. M. (April 3, 2000). Si los gobiernos no reforman la universidad, lo harán los mercados. El País, pp. 41-43.

Caparrós, J. (January 22, 2001). Por una ley de autonomía universitaria. El País. Retrieved January 15, 2002, from: http://www.elpais.es/suplementos/educa/20010122/01edu22a.html

Cave, M., Hanney, S., Henkel, M., & Kogan, M. (1988). The use of performance indicators in higher education: A critical analysis of developing practice. London: Jessica Kingsley.

Coba, E. (2001). La evaluación de la calidad de las universidades. Revista de Investigación Educativa, 19 (2), 383-388.

Creación de la Agencia Nacional de Evaluación de la Calidad y Acreditación de las Universidades. (2002). Comunidad Escolar, 20 (704). Retrieved January 12, 2003, from: http://comunidad-escolar.pntic.mec.es/704/univer1.html

Conferencia de Rectores de las Universidades Españolas (November 17-18, 1997). Resultados del grupo de trabajo: Universidad y poderes públicos. Paper presented at Seminar “Los objetivos de la universidad ante el nuevo siglo”. Retrieved January 18, 2003, from: http://www.crue.org/confsal.htm

Conferencia Mundial sobre la Educación Superior (1998). Declaración mundial sobre la educación superior en el Siglo XXI: Visión y acción. Retrieved January 18, 2003, from: http://www.unesco.org/education/educprog/wche/declaration_spa.htm

Consejo de Universidades (1994). Informe final del comité técnico. Programa experimental de la evaluación de la calidad del sistema universitario (Document No.13). Madrid: Secretaría General.

Cuenin, S. (1987). L’utilisation des indicateurs de performance dans les universités: une enquête internationale. Revue Internationale de Gestion des Établissements d’Enseignement Supérieur, 11 (2), 146-169.

De la Orden, A., Asensio, I., Carballo, R., Fernández Díaz, J., Fuentes, A., García Ramos, J.M, et. al. (1997). Desarrollo y validación de un modelo de calidad universitario como base para su evaluación. Revista Electrónica de Investigación y Evaluación Educativa, 3 (1), 1-24. Retrieved August 5, 2002, from:
http://www.uv.es/RELIEVE/v3n1/RELIEVEv3n1_2.htm

De Miguel, M. (1995). Revisión de programas académicos e innovación en la enseñanza superior. Revista de Educación, 306, 427-453.

De Miguel, M. (1997a). La evaluación de los centros educativos. Una aproximación a un enfoque sistémico. Revista de Investigación Educativa, 15 (2), 145-178.

De Miguel, M. (1997b). La evaluación de los centros educativos. In H. Salmerón (Ed.), Evaluación educativa. Teoría, metodología y aplicación en áreas de conocimiento (pp. 151-175). Granada: Grupo Editorial Universitario.

De Miguel, M. (1998). La evaluación del profesorado universitario. Criterios y propuestas para mejorar la función docente. Revista de Educación, 315, 67-83.

De Miguel, M. (1999a). El Plan Nacional de Evaluación de la Calidad de las Universidades. Problemas y alternativas. Revista Interuniversitaria de Formación del Profesorado, 34, 99-114.

De Miguel, M. (1999b). La evaluación de la enseñanza. Propuesta de indicadores para las titulaciones. Unpublished manuscript.

De Miguel, M. (1999c). La evaluación de programas: entre el conocimiento y el compromiso. Revista de Investigación Educativa, 17 (2), 345-348.

De Miguel, M. (2001). Modelos académicos de evaluación y mejora de la enseñanza superior. Revista de Investigación Educativa, 19 (2), 397-400.

De Miguel, M., Mora, J. G. & Rodríguez, S. (1991). La evaluación de las instituciones universitarias. Madrid: Consejo de Universidades.

Dochy, F. J., Segers, M. S. & Wijnen, W. H. (Eds.). (1990a). Management information and performance indicators in higher education: An international issue. Assen, Netherlands: Van Gorcum.

Dochy, F. J., Segers, M. S. & Wijnen, W. H. (1990b). Selecting performance indicators. In L. C. Goedegebuure, P. A. Maasen & D. I. Westerheijden (Eds.). Peer review and performance indicators (pp.27-45). Utrecht: Uitgeverij Lemma.

Donaldson, J., Staropoli, A., Ottenwaelter, M. O., Thune, C. & Vroeijenstijn, T. (1994). Projets pilotes européens pour l’évaluation de la qualité dans l’enseignement supérieur: lignes directrices pour les établissements participants. Brussels: Commission of the European Communities.

Escudero, T. (1997). Enfoques modélicos y estrategias en la evaluación de los centros educativos. Revista Electrónica de Investigación y Evaluación Educativa, 3 (1), 1-22. Retrieved August 5, 2002, from: http://www.uv.es/RELIEVE/v3n1/RELIEVEv3n1_1.htm

Escudero, T. (1999). Los estudiantes como evaluadores de la docencia y de los profesores: nuestra experiencia. Revista Interuniversitaria de Formación del Profesorado, 34, 31-37.

Escudero, T. (2000). Evaluación de centros e instituciones educativas: las perspectivas del evaluador. In D. González, E. Hidalgo & J. Gutiérrez (Coords.), Innovación en la escuela y mejora de la calidad educativa (pp. 57-76). Granada: Grupo Editorial Universitario.

Frackmann, E. (1991). Lecciones que deben aprenderse de una década de discusiones sobre indicadores de rendimiento. In M. de Miguel, G. Mora & S. Rodríguez (Eds.), La evaluación de las instituciones universitarias (pp.399-422). Madrid: Consejo de Universidades.

Frazer, M. (1997). Report on the modalities of external evaluation of higher education in Europe (1995-1997). Higher Education in Europe, 22 (3), 349-401.

García, E. (May, 1992). Evaluación de la Enseñanza en la Universidad. Paper presented at the Simposio de Evaluación de las Reformas Educativas, Madrid.

Grao, J. & Winter, R. (1999). Indicadores para la calidad de los indicadores. In J. Vidal (Coord.), Indicadores en la universidad, información y decisiones (pp. 81-87). Madrid: Consejo de Universidades.

Gurrutxaga, A. (June 4, 2001). La reforma universitaria que no queremos. El País. Retrieved July 5, 2001, from: http://www.elpais.es/suplementos/educa/20010604/aula.html

Gutiérrez, F. (October 2, 2000). Debilidades en la evaluación universitaria. El País, p. 38.

House, E. (1993). Professional evaluation. Social impact and political consequences. London: Sage.

Harman, G. (1999, Spring). Management of quality assurance. International Higher Education, 15, 8-9. Retrieved January 18, 2003, from:
http://www.bc.edu/bc_org/avp/soe/cihe/newsletter/News15/text5.html

Hüfner, K. & Rau, E. (1987). Measuring performance in higher education: problems and perspectives. Higher Education in Europe, 12 (4), 5-13.

In’t Veld, R. (September, 1990). Threats and opportunities for evaluation in higher education. Paper presented at the 10th General Conference of Member Institutions of the Programme on Institutional Management in Higher Education, Paris.

Kells, H. R. (1992). Self-regulation in higher education: a multi-national perspective on collaborative systems of quality assurance and control. London: Jessica Kingsley Publishers.

Kirkpatrick, D. W. (November 7, 2001). Merit pay: Theory and practice. Retrieved July 18, 2002, from: http://schoolreformers.com/editorials/2001/meritpay.html

Levesque, K., Bradby, D., & Rossi, K. (May, 1996). Using data for program improvement: how do we encourage schools to do it? Centerfocus, 12, 59-76.

López, P. (1998). Evaluación institucional: hacia un modelo europeo. In F. Michavila (Ed.). Experiencias y consecuencias de la evaluación universitaria (pp. 95-110). Madrid: Fundación Universidad-Empresa.

Maassen, P. A. M. (1987). Quality control in Dutch higher education: Internal versus external evaluation. European Journal of Education, 22 (2), 161-170.

Mateo, J. (2001). La evaluación institucional universitaria. Una nueva cultura de la evaluación en un contexto de cambio. Revista de Investigación Educativa, 19 (2), 641-647.

Menéndez, J. (July 16, 2001). Sobre la proyectada reforma de las universidades. ABC, p. 40.

Michavila, F. (June 25, 2001). La ley universitaria se ha hecho con la vista en el retrovisor. El País. Retrieved November 14, 2001, from:
http://www.elpais.es/suplementos/educa/20010625/42francis.html

Mora, J. G. (1991). Calidad y rendimiento en las instituciones universitarias. Madrid: Consejo de Universidades.

Mora, J. G. (1998). La evaluación institucional de la universidad. Revista de Educación, 315, 29-44.

Mora, J. G. (July, 1999). Indicadores y decisiones en las universidades. Paper presented at Primer Seminario de Indicadores en la Universidad: información y decisiones, León, Spain.

Mora, J. G. (2001). El marco español y europeo en las políticas de calidad. Revista de Investigación Educativa, 19 (2), 389-395.

Morrison, H. G., Magennis, S. P., & Carey, L. J. (1995). Performance indicators and league tables: a call for standards. Higher Education Quarterly, 49 (2), 128-145.

Muir, E. (n.d.). Merit pay and pay for performance. Retrieved July 18, 2002, from: http://aft.org//newmembers/K12/merit.html

Neave, G. (September, 1997). The evaluative State: the moment of truth from an imagined Spanish perspective. Paper presented at the Jornadas Retos presentes y futuros de la Universidad, IVIE y Consejo de Universidades, Valencia.

Neave, G. & Van Vught, F. A. (1991). Prometheus bound. Oxford: Pergamon.

Osoro, J. M. & Salvador, L. (1994). Criterios y procedimientos para la selección de indicadores de rendimiento en evaluación institucional universitaria. Revista de Investigación Educativa, 23, 279-282.

Pérez García, F. (1998). Causas y consecuencias de la evaluación de las universidades: para qué debe servir. Revista de Educación, 315, 109-124.

Pérez Rubalcaba, A. (September, 1997). Ley de Reforma Universitaria, Universidad y poderes públicos. Paper presented at the Seminario Objetivos de la Universidad ante el Nuevo Siglo. Retrieved January 18, 2003, from: http://www.crue.org/pperezru.htm

Quintanilla, M. A. (1998). En pos de la calidad: notas sobre una nueva frontera para el sistema universitario español. Revista de Educación, 315, 85-95.

Raga, J. (1998). Claros y oscuros en el proceso de evaluación de la calidad de las Universidades. In F. Michavila (Ed.). Experiencias y consecuencias de la evaluación universitaria (pp. 111-121). Madrid: Fundación Universidad-Empresa.

Rodríguez, S. (1995). La evaluación de la enseñanza universitaria. In E. Oroval (Ed.). Planificación, evaluación y financiación de los sistemas educativos (pp. 150-175). Barcelona: Civitas.

Rodríguez, S. (1997). La evaluación institucional universitaria. Revista de Investigación Educativa, 15 (2), 179-214.

Rodríguez Gómez, R. (2000). La reforma de la educación superior. Señas del debate internacional a fin del siglo. Revista Electrónica de Investigación Educativa, 2 (1). Retrieved November 14, 2001, from: http://redie.uabc.mx/vol2no1/contenido-rodgo.html

Segers, M. & Dochy, F. (1996). Quality assurance in higher education: Theoretical considerations and empirical evidence. Studies in Educational Evaluation, 22 (2), 115-137.

Sizer, J., Spee, A., & Bormans, R. (1992). The role of performance indicators in higher education. Higher Education, 24 (2), 133-155.

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.

Tejedor, J. (2000). Evaluación de la calidad de la docencia. In D. González, E. Hidalgo, & J. Gutiérrez (Coords.), Innovación en la escuela y mejora de la calidad educativa (pp.21-57). Granada: Grupo Editorial Universitario.

Tognolini, J. (September, 1991). Performance indicators in education: some practical problems and issues. In J. Hewton, Performance indicators in education: What can they tell us? Paper presented at the Third National Conference, Canberra.

Van Vught, F. (1995). Management for quality. Paris: Organisation for Economic Co-operation and Development.

Van Vught, F. & Westerheijden, D. (1995). Evaluation institutionnelle et gestion de la qualité. Geneva: Programme CRE.

Vicente, J. (April 2, 2001). Universidad: primero evaluar; luego aumentar la financiación. El País. Retrieved January 18, 2002, from:
http://www.elpais.es/suplementos/educa/20010402/aula.html

Vidal, J. (November, 2001). ¿Cómo se utiliza la información estadística en las Instituciones de Educación Superior en España? Paper presented at the V Encuentro Internacional de Responsables de Información Estadística de las Instituciones de Educación Superior. México, D. F.

Villarreal, E. (1998). La financiación del sistema universitario. In J. M. De Luxán (Ed.). Política y Reforma Universitaria (pp. 159-174). Barcelona: Centro de Estudios de Derecho, Economía y Ciencias Sociales.

Villarreal, E. (June, 1999). La utilización de indicadores de rendimiento en la financiación de la Educación Superior. Paper presented at the Primer Seminario de Indicadores en la Universidad: información y decisiones, León, Spain.

Vroeijenstijn, A. I. (1995). Improvement and accountability: navigating between Scylla and Charybdis. London: Jessica Kingsley.

Vroeijenstijn, A. I. & Acherman, H. (1990). Control-oriented quality assessment vs improvement-oriented quality assessment. In L. C. Goedegebuure, P. A. Maasen & D. I. Westerheijden (Eds.), Peer review and performance indicators (pp. 81-101). Utrecht: Uitgeverij Lemma.

Wagner, A. (1996). Le financement de l’enseignement supérieur: nouvelles méthodes, nouveaux problèmes. Gestion de l’enseignement supérieur, 8 (1), 7-19.

Weert, E. (1990). A macro-analysis of quality assessment in higher education. Higher Education, 19, 57-62.

Westerheijden, D. (1996). Use of quality assessment in Dutch universities. In P. A. M. Maassen & F. Van Vught (Eds.), Inside Academia. New challenges of the academic profession (pp. 269-289). Enschede, Netherlands: Center for Higher Education Policy Studies.

Williams, G. (1986). The missing bottom line. In G. C. Moodie (Ed.), Standards and criteria in Higher Education (pp. 110-125). Guildford, United Kingdom: Society for Research into Higher Education.

Wyatt, T., Ruby, A., Norton, S., Davies, B., & Shrubb, S. (March, 1989). Reporting on educational progress: performance indicators in education. Paper presented at the Conference of Directors-General of Education, Sydney.

Legislative references

Ley Orgánica de Universidades 6/2001, de 21 de diciembre de 2001, de Universidades. Boletín Oficial del Estado No. 307 (December 24, 2001).

Real Decreto 1947/1995 de 1 de diciembre, por el que se establece el Plan Nacional de Evaluación de la Calidad Universitaria, Boletín Oficial del Estado No. 294. (December 9, 1995).

Real Decreto 408/2001 de 20 de abril, por el que se establece el II Plan de Calidad de las Universidades, Boletín Oficial del Estado No. 96 (April 21, 2001).

Translator: Lessie Evona York-Weatherman

UABC Mexicali

1For example, in Mexico through the ANUIES document called “Higher Education in the Twenty-first Century. Strategic lines of development” (Rodríguez Gómez, 2000).

2Faculties, governing boards, other secondary organizations (staff meetings, departmental councils), and finally, the commissions set up under the latter (teaching, economics, etc.)

3Approved January 20, 2002.

4Whether because of each other’s political color, or because of the evident disconnection between the university and the regional and central governments.

5If they had acted thus, it is possible that the OLU might not be so controversial today. In this sense we quote the words of Ander Gurrutxaga Abad, the Basque Government's Deputy Minister of Universities and Research (Gurrutxaga, 2001):

The ministry has prepared the draft of a law that aims to transform the structure of the university without asking, without listening to, and without consulting the Basque Autonomous Community, which paradoxically must manage and finance the cost of this law.

6Self-study and external evaluation based on peer review, consistent with the methodology used in the general framework of most of the countries of the European Union.

7Regarding this aspect, there are authors such as De Miguel (1999a, p. 110), who believes that most of the reports prepared have not been disseminated, even though the recommendations specified in the NPUQA established the need to publicize all the documentation generated, in order to give credibility to the evaluation process carried out.

8The Unit for the Quality of Andalusian Universities (UCUA) has launched one such initiative for the 2002-2003 academic year, giving priority to a number of degree programs in different universities of Andalusia and committing itself to a funding structure that would make the implementation of improvements viable.

9Processes currently under way in various degree programs at the University of Granada.

10A type of attitude scale consisting of a collection of items or statements related to a dimension or trait, about which judgments of agreement or disagreement are expressed. Five response options are usually offered, distributed along a bipolar continuum from least to greatest agreement, or vice versa.

11The interested reader can find a comprehensive and detailed description of the types of indicators referred to in the NPUQA on the website: www.mec.es/consejou/indicadores/index.html.

12In essence, the University Report 2000 is an extensive catalog of recommendations on various aspects of Spanish university life; it has taken as references earlier reports such as France's Attali report, Britain's Dearing report, and the United States' Boyer report. The newest recommendations articulated in the document are those dealing with a new system of accreditation for universities and degrees; a new, more flexible system of studies; the reinforcement of public funding; increased scholarships; the modification of the system of access for university teaching staff; governance and administration; and the creation of companies to exploit university research.

Please cite the source as:

Rodríguez Sabiote, C. & Gutiérrez Pérez, J. (2003). Weaknesses of the quality evaluation process in the Spanish University: causes, consequences and proposals for improvement. Revista Electrónica de Investigación Educativa, 5 (1). Retrieved month day, year, from: http://redie.ens.uabc.mx/vol5no1/contents-sabiote.html