Document Type
Article
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
Computer Sciences
Abstract
Mental workload (MWL) is an imprecise construct, with distinct definitions and no predominant measurement technique. It can be intuitively seen as the amount of mental activity devoted to a certain task over time. Several approaches have been proposed in the literature for the modelling and assessment of MWL. In this paper, data related to two sets of tasks performed by participants under different conditions are reported. These data were gathered from different sets of questionnaires answered by the participants, aimed at assessing the features believed by domain experts to influence overall mental workload. In total, 872 records are reported, each representing the answers given by a participant after performing a task. On the one hand, the collected data might support machine learning researchers interested in using predictive analytics for the assessment of mental workload. On the other hand, the data, if exploited by a set of rules/arguments (as in [3]), may serve as a knowledge base for researchers in the fields of knowledge-based systems and automated reasoning. Lastly, the data might serve as a source of information for mental workload designers interested in investigating the features reported here for mental workload modelling. This article was co-submitted with the research article "An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems" [3], to which the reader is referred for the interpretation of the data.
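For readers interested in the predictive-analytics use mentioned above, the sketch below shows one possible way to load the appendix spreadsheets for inspection. It is not taken from the article: it assumes only that the files listed as Data Appendices (data2.xlsx, data3.xlsx) are ordinary Excel sheets with one questionnaire response per row; the actual column names and coding of the self-reported features should be checked against the files and the companion article [3].

    # Minimal, assumption-laden sketch: load the two appendix spreadsheets
    # with pandas (requires the openpyxl engine for .xlsx files).
    import pandas as pd

    files = ("data2.xlsx", "data3.xlsx")  # names from the Data Appendix entries
    for name in files:
        df = pd.read_excel(name)
        # Each row is assumed to be one record, i.e. the answers given by a
        # participant after performing a task (872 records in total).
        print(name, df.shape)
        print(df.columns.tolist())  # inspect the reported workload features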
DOI
https://doi.org/10.1016/j.dib.2020.105433
Recommended Citation
Rizzo, L. & Longo, L. (2020). Self-reported data for mental workload modelling in human-computer interaction and third-level education. Data in Brief, 30, 105433. doi: 10.1016/j.dib.2020.105433.
Data Appendix 1
data2.xlsx (37 kB)
Data Appendix 2
data3.xlsx (87 kB)
Data Appendix 3