This item is available under a Creative Commons License for non-commercial use only
Mental workload (MWL) is an imprecise construct, with distinct definitions and no predominant measurement technique. Intuitively, it can be seen as the amount of mental activity devoted to a certain task over time. Several approaches have been proposed in the literature for the modelling and assessment of MWL. This paper reports data related to two sets of tasks performed by participants under different conditions. The data were gathered from different sets of questionnaires answered by these participants, aimed at assessing the features believed by domain experts to influence overall mental workload. In total, 872 records are reported, each representing the answers given by a participant after performing a task. On the one hand, the collected data might support machine learning researchers interested in using predictive analytics for the assessment of mental workload. On the other hand, the data, if exploited by a set of rules/arguments (as in ), may serve as knowledge bases for researchers in the fields of knowledge-based systems and automated reasoning. Lastly, the data might serve as a source of information for mental workload designers interested in investigating the features reported here for mental workload modelling. This article was co-submitted with the research article "An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems", to which the reader is referred for the interpretation of the data.
Rizzo, L. & Longo, L. (2020). Self-reported data for mental workload modelling in human-computer interaction and third-level education. Data in Brief, 30, 105433. doi: 10.1016/j.dib.2020.105433.