Research Papers

Document Type

Conference Paper

Abstract

It is common in architecture education to quantify the quality of assignments into grades, often assigned by one or two teachers using rubrics. However, this approach has several downsides: it suggests an objective precision that is debatable for creative assignments in the field of architecture, and it makes the assessment dependent on the judgement of only one or two people. Comparative judgement (CJ) offers an alternative to rubric-based assessment by applying pairwise comparisons to student assignments, resulting in a ranking instead of a grade.
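
To illustrate the mechanism, the sketch below shows one common way pairwise judgements are turned into a ranking: fitting a Bradley-Terry model with a simple iterative (MM) update. This is a minimal, hypothetical example, not the tooling used in the study; the sample judgements and portfolio labels are invented for illustration.

```python
# Minimal sketch: turning pairwise comparative judgements into a ranking
# via a Bradley-Terry model fitted with an iterative MM update.
# The judgements below are hypothetical.
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    """Estimate a latent 'quality' score per item from (winner, loser) pairs."""
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)   # total wins per item
    pairs = defaultdict(int)  # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = sum(
                pairs[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and frozenset((i, j)) in pairs
            )
            # Items with no wins collapse to 0 in this simple scheme.
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {i: s / total for i, s in new.items()}  # normalise
    return strength

# Hypothetical judgements: each tuple means the first portfolio
# was preferred over the second by an assessor.
judgements = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("A", "D"), ("C", "D"), ("B", "D"), ("A", "C"),
]
ranking = sorted(bradley_terry(judgements).items(),
                 key=lambda kv: kv[1], reverse=True)
for item, score in ranking:
    print(f"{item}: {score:.3f}")
```

The key point is that no assessor ever assigns an absolute grade; the ranking emerges from many relative judgements, which is what allows CJ to pool the views of multiple assessors.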

We used a mixed-methods approach to compare the reliability, time efficiency, and fairness of CJ and rubric-based assessment in the selection of students for an undergraduate architecture programme at Delft University of Technology in the Netherlands. Teachers involved in the rubric-based approach for student selection were asked to re-assess a random selection of the assignments using CJ. Reliability and time investment were compared for both methods, and the assessors were asked in a focus group setting which of the two methods they perceived as more reliable and fair. Directly comparing rubric-based assessment with CJ is new; previous studies have examined these assessment methods only in isolation.

Findings indicate that CJ can serve as a more reliable and time-efficient alternative to rubric-based assessment. However, teachers still perceive rubrics as more reliable and fair. Although this research is particularly relevant in the context of architecture, it contributes to wider discussions about the reliable and fair assessment of creative student assignments.

DOI

https://doi.org/10.21427/WEY8-JZ69

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

