Document Type: Conference Paper

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Like many other institutions, ours had to adapt its traditional proctored, written examinations to open-book online variants due to the COVID-19 pandemic. This paper describes the process applied to develop open-book online exams for final-year (undergraduate) students studying Applied Machine Learning and Applied Artificial Intelligence and Deep Learning courses as part of a four-year BSc in Computer Science. We also present the processes used to validate the examinations, as well as the plagiarism detection methods implemented. Findings from this study highlight positive effects of using open-book online exams, with 85% of students reporting that they either prefer open-book online examinations or have no preference between traditional and open-book exams. There were no statistically significant differences between the exam results of student cohorts who took the open-book online examination and those of previous cohorts who sat traditional exams. These results are of value to the CSEd community for three reasons. First, we outline a methodology for developing open-book online exams (including publishing the open-book online exam papers as samples). Second, we provide approaches for deterring plagiarism and implementing plagiarism detection for open-book exams. Finally, we present feedback from students which may be used to guide future open-book online exam development.