Research Papers

Document Type

Conference Paper

Abstract

Positioning tests are organized in Flanders for prospective STEM students. They provide a low-stakes opportunity to assess students' starting competences before enrolment. Predictive validity for subsequent academic achievement is an important quality measure of these positioning tests. However, the content of the tests varies over the years, which could be problematic for making accurate predictions based on data from previous years. Therefore, the objective of this study is to compare the stability over time of predictions of academic achievement using either criterion-referenced (absolute grading) or norm-referenced (relative grading) positioning test grades of engineering and science students. Comparisons of classifications over six academic years yielded various results (n=1258). For the engineering students, all predictions were unstable in the academic years when the tests were held online due to Covid-19 measures and when positioning test participation became obligatory. However, in the years when these special events were absent, norm-referencing yielded the most stable prediction. For the science students, norm-referencing yielded a stable prediction over all six academic years, and criterion-referencing yielded a stable prediction when the tests were not held online. This clearly suggests that the implementation of norm-referencing in positioning tests may lead to more accurate predictions of academic achievement over time, regardless of changes in test content, despite the current use of criterion-referencing in practice.
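The distinction between the two grading schemes can be illustrated with a minimal sketch (not the authors' code; the cutoff value and cohort scores are hypothetical). Criterion-referencing classifies each score against a fixed absolute threshold, while norm-referencing classifies it relative to the cohort distribution:

```python
# Hypothetical illustration of criterion- vs norm-referenced classification.
from statistics import mean, stdev

def criterion_referenced(scores, cutoff=60.0):
    """Absolute grading: pass/fail against a fixed cutoff (assumed value)."""
    return ["pass" if s >= cutoff else "fail" for s in scores]

def norm_referenced(scores):
    """Relative grading: classify each score by its z-score within the cohort."""
    mu, sigma = mean(scores), stdev(scores)
    return ["above" if (s - mu) / sigma >= 0 else "below" for s in scores]

# Hypothetical cohort of positioning-test scores (percent correct)
cohort = [45.0, 55.0, 60.0, 70.0, 85.0]
print(criterion_referenced(cohort))  # ['fail', 'fail', 'pass', 'pass', 'pass']
print(norm_referenced(cohort))       # ['below', 'below', 'below', 'above', 'above']
```

The sketch also hints at why norm-referencing may be more robust to changing test content: if a harder test shifts every raw score downward, the criterion-referenced classifications change while the cohort-relative ones do not.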

DOI

https://doi.org/10.21427/NH2E-WS39

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

