This item is available under a Creative Commons License for non-commercial use only.
Computational music analysis investigates the features required to detect and classify musical content, features which do not always overlap directly with concepts from musical composition. Human perception of music is also an active area of research, with existing work considering the role of perceptual schemata in musical pattern recognition. Data sonification investigates the use of non-speech audio to convey information, and it is in this context that some potential guidelines for human pattern recognition are presented for discussion in this paper. Previous research into the role of musical contour (shape) in data sonification shows that it has a significant impact on pattern recognition performance, while rhythmic parsing, when used to build structures in data sonifications, also made a significant difference to performance. This paper presents these earlier experimental results as the basis for a discussion of the potential inclusion of schema-based classifiers in computational music analysis, considering where shape and rhythm classification may be employed at both the segmental and supra-segmental levels to better mimic the human process of perception.
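As an illustration of the kind of segmental shape classification discussed above, one well-known contour representation in computational music analysis is the Parsons code, which reduces a melody to a sequence of up/down/repeat steps. The sketch below is illustrative only and is not drawn from the paper itself; the function name and example melody are the author's assumptions.

```python
def parsons_code(pitches):
    """Encode a melody's contour (shape) as a Parsons code string.

    Each note after the first is marked up ("u"), down ("d"), or
    repeat ("r") relative to its predecessor, discarding exact
    interval sizes -- a coarse, schema-like encoding of melodic shape.
    """
    steps = []
    for prev, curr in zip(pitches, pitches[1:]):
        if curr > prev:
            steps.append("u")
        elif curr < prev:
            steps.append("d")
        else:
            steps.append("r")
    # "*" conventionally marks the first note, which has no direction.
    return "*" + "".join(steps)

# MIDI pitches for the opening of "Twinkle Twinkle Little Star"
print(parsons_code([60, 60, 67, 67, 69, 69, 67]))  # -> *rururd
```

Because the encoding keeps only direction, two melodies with the same shape but different intervals map to the same code, which is one way such a classifier could operate at the segmental level.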
Cullen, C., Coleman, W. (2016). Human pattern recognition in data sonification. 6th International Workshop on Folk Music Analysis, Dublin, 15-17 June, 2016.