Document Type

Conference Paper

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Publication Details

CAL 07 Conference, TCD, March, 2007.

As conferencing tools become an increasingly common feature of students’ experience, tutors need to understand how these tools facilitate the formation and maintenance of collaborative learning communities. Inevitably, the pursuit of this understanding requires some form of analysis of the interactions involved. Analysis of the written transcripts created by students during computer-mediated conferencing (CMC) invariably takes the form of a systematic content analysis. For small-scale work the analysis can be undertaken manually, but when the volume is large, as might arise from courses delivered wholly online or through a blended learning approach, some form of automated content analysis comes into its own. Whether applied quantitatively or qualitatively, there is much to commend this type of approach to higher education tutors wishing to assess the progress of their students and to improve their understanding of how students learn through computer conferencing technology. On the basis that tutors need to be aware of the advantages and limitations of such tools, this paper examines the content analysis approaches currently available.
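To give a concrete sense of what "automated content analysis" can mean at its simplest, the sketch below codes each message in a toy transcript against keyword indicators. The categories and keyword lists are invented for illustration only; they are not drawn from any published coding scheme, and real studies use far richer instruments.

```python
import re
from collections import Counter

# Hypothetical indicator categories and keywords -- purely illustrative,
# not taken from any validated content analysis instrument.
INDICATORS = {
    "questioning": {"why", "how", "what"},
    "agreement": {"agree", "yes", "exactly"},
}

def code_message(message: str) -> Counter:
    """Count occurrences of each indicator category in one conference message."""
    words = re.findall(r"[a-z']+", message.lower())
    counts = Counter()
    for category, keywords in INDICATORS.items():
        counts[category] = sum(1 for w in words if w in keywords)
    return counts

# A two-message toy transcript, coded message by message.
transcript = [
    "Why do you think that model applies here?",
    "Yes, I agree, exactly my point.",
]
totals = Counter()
for msg in transcript:
    totals += code_message(msg)
```

Even this crude keyword count shows the appeal for large volumes: once a coding scheme is expressed in software, every transcript is coded consistently and at negligible marginal cost, which is precisely what manual analysis cannot offer at scale.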

The research of Howell-Richardson & Mellar (1996) on analysing patterns of participation amongst students in computer conferencing courses provided an interesting contemporary snapshot of the methodological considerations in this relatively new aspect of e-learning research. Rourke, Anderson, Garrison & Archer (2003) enhanced the area of study with a systematic and rigorous exploration of the field covering the period 1990–2000. Their survey covered 19 commonly referenced studies and focused on the units of analysis, the variables studied, reliability and research designs. They identified five main units of analysis that have generally been used in computer conferencing research: proposition, sentence, paragraph, thematic and message units.
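The practical consequence of choosing a unit of analysis can be illustrated with a small sketch: the same toy transcript yields different unit counts depending on whether the message or the sentence is the unit. The transcript and the naive punctuation-based segmenter below are assumptions made for illustration, not a method used in the studies cited.

```python
import re

# A toy transcript: each list entry is one posted message (the "message unit").
messages = [
    "I found the reading difficult. Did anyone else?",
    "Yes! The second section helped me, though.",
]

def sentence_units(message: str) -> list:
    """Naive sentence segmentation on ., ! and ? boundaries.

    A real study would need a more careful segmenter (abbreviations,
    ellipses, quoted speech all break this simple rule).
    """
    parts = re.split(r"(?<=[.!?])\s+", message.strip())
    return [p for p in parts if p]

message_count = len(messages)                                    # message units
sentence_count = sum(len(sentence_units(m)) for m in messages)   # sentence units
```

Here the two messages decompose into four sentence units, so reliability figures and category frequencies computed under one unit are not directly comparable with those computed under another, which is one reason the unit-of-analysis choice features so prominently in Rourke et al.'s survey.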

However, it is clear from the literature that considerable debate continues to surround the different frameworks proposed for the analysis of computer conferencing transcripts. This debate generally focuses on the appropriateness of the methodology and on how faithfully the frameworks represent interaction patterns and learning processes. In this paper we propose to extend Rourke et al.’s study to cover a wider range of methodological models. Specifically, we examine the merits and demerits of these models as exemplified in a selection of influential conferencing analysis studies.