Differential Weighting of Items to Improve University Admission Test Validity

Authors

  • Eduardo Backhoff Escudero, Instituto de Investigación y Desarrollo Educativo, Universidad Autónoma de Baja California
  • Felipe Tirado Segura, Escuela Nacional de Estudios Profesionales, Campus Iztacala, Universidad Nacional Autónoma de México
  • Norma Larrazolo Reyna, Instituto de Investigación y Desarrollo Educativo, Universidad Autónoma de Baja California

Keywords:

Item weighting, weighted scores, evaluation methods, predictive validity, admission testing.

Abstract

This paper evaluates different ways of increasing the criterion-related validity of a university admission test by differentially weighting its items. We compared four methods of weighting the multiple-choice items of the Basic Skills and Knowledge Examination (EXHCOBA): (1) penalizing incorrect responses by a constant factor, (2) weighting incorrect responses according to their degree of error, (3) weighting correct responses by item difficulty, based on Classical Test Theory, and (4) weighting correct responses by item difficulty, based on Item Response Theory. Results show that none of these methods increased the instrument's predictive validity, although they did improve its concurrent validity. We conclude that it is appropriate to score the test by simply summing the correct responses.
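The contrast between simple sum scoring and the weighting schemes can be sketched on synthetic data. This is a minimal illustration, not the authors' exact formulas: the penalty constant, the difficulty weights, and the discrimination values are assumptions, and scheme (2) is omitted because it requires distractor-level error information not shown here.

```python
import numpy as np

# Synthetic data: 50 examinees x 8 dichotomously scored items (1 = correct).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 8))

n_correct = responses.sum(axis=1)          # baseline: simple sum of correct responses

# (1) Constant penalty for incorrect responses, e.g. the classical
# guessing correction 1/(k-1) for k = 4 response options (an assumption).
k = 4
n_wrong = responses.shape[1] - n_correct
score_penalty = n_correct - n_wrong / (k - 1)

# (3) CTT-style difficulty weighting: harder items (lower proportion
# correct p) receive larger weights, here w = 1 - p (one plausible choice).
p = responses.mean(axis=0)                 # item difficulty as proportion correct
score_ctt = responses @ (1 - p)

# (4) IRT-style weighting: weight each item by its discrimination parameter.
# A real analysis would estimate these by fitting a 2PL model (e.g. with
# BILOG, cited in the references); here a stand-in vector is used.
a = np.full(responses.shape[1], 1.0)       # hypothetical discriminations
score_irt = responses @ a                  # reduces to n_correct when all a = 1
```

With equal discriminations the IRT-style score reduces to the simple sum, which is one way to see why uniform weights and number-correct scoring rank examinees identically.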

References

Backhoff, E., Ibarra, M. A., & Rosas, M. (1995). Sistema Computarizado de Exámenes (SICODEX). Revista Mexicana de Psicología, 12 (1), 55-62.

Backhoff, E., & Tirado, F. (1992). Desarrollo del Examen de Habilidades y Conocimientos Básicos. Revista de la Educación Superior, 83, 95-118.

Bravin, J. (1983). Bright idea: Hard courses should carry more weight than easy courses. Executive Educator, 5 (1), 40-30.

Budescu, D. V. (1979). Differential weighting of multiple choice items. Princeton: Educational Testing Service.

Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational measurement (2nd ed.). Washington, DC: American Council on Education.

Donnelly, M. B., et al. (1983). Simple adding versus differential weighting of MCAT subtest scores. Journal of Medical Education, 58 (7), 581-583.

Govindarajulu, Z. (1988). Alternative methods for combining several test scores. Educational and Psychological Measurement, 48 (1), 53-60.

Mislevy, R., & Bock, R. D. (1982). BILOG: Maximum likelihood item analysis and test scoring with logistic models. Mooresville, IN: Scientific Software.

Muñiz, J. (1997). Introducción a la Teoría de Respuesta a los Ítems. Madrid: Pirámide.

Nemecek, P. M. (1994). Constructing weighted grading systems. Clearing House, 67 (6), 325-326.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.

Razel, M., & Eylon, B. S. (1987, April). Validating alternative modes of scoring for coloured progressive matrices. Paper presented at the Annual Meeting of the American Educational Research Association, Washington, DC.

Siegel, J., & Anderson, C. S. (1991). Considerations in calculating high school GPA and rank in class. NASSP Bulletin, 75 (537), 96-109.

Sympson, J. B., & Haladyna, T. M. (1988, April). An evaluation of "polyweighting" in domain referenced testing. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans.

Talley, N. R., & Mohr, J. I. (1991). Weighted averages, computer screening, and college admission in public colleges and universities. Journal of College Admission, 132, 9-11.

Talley, N. R., & Mohr, J. I. (1993). The case for a national standard of grade weighting. Journal of College Admission, 139, 9-13.

Trent, J. W., & Medsker, L. L. (1968). Beyond high school: A psychological study of 10,000 high school graduates. San Francisco: Jossey-Bass.

Tristán, A., & Vidal, R. (2000). Análisis de la práctica de asignar pesos a los reactivos y su efecto en el diseño y calificación de pruebas. Memorias del IV Foro de Evaluación Educativa. México: Centro de Evaluación de la Educación Superior.

Willis, J. A. (1993, April). Chapter 1 eligibility factors and weights: Using probit analysis to determine eligibility criteria. Paper presented at the Annual Meeting of the American Educational Research Association, Atlanta.

Published

2001-05-01
