Document Type

Conference Paper

Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Disciplines

5.9 OTHER SOCIAL SCIENCES, Social sciences

Publication Details

European Conference on Cyber Warfare and Security, Reading, June 2020

Abstract

Social media has become an effective medium for executing cyberpsychological threats, using language to influence perceptions based on personal interests and behaviours. Targeted messages can be refined for maximum effect and have been implicated in changing the outcome of democratic elections and in the decreasing uptake of vaccinations. However, computational propaganda and cyberpsychological threats are not well understood within the cybersecurity community. To address this, we adopt the theoretical model of the illusory truth effect to posit that how information is presented online may solidify views in an ‘undecided’ group with ‘some’ knowledge of an argument. We test this hypothesis by employing an explanatory sequential design. We first analyse a dataset containing adverts related to Brexit to determine influential terms using the corpus linguistics method. The results of our quantitative analysis of term frequencies, collocational and concordance information indicate that function words such as the personal pronoun ‘we’ or the definite article ‘the’ play a significant role in the construction of computational propaganda language. We then conduct a qualitative analysis of a Facebook ad related to Brexit to further understand how the ‘who’ and the ‘what’ elements are realised in computational propaganda language, that is, who is targeted and what is the underlying message. We found that by understanding these, one can gain insight into a threat actor’s motivation, opportunity and capability, thus allowing a defensive response to be put in place. In turn, how an audience responds may provide insight into the impact of the threat.
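
As a rough illustration of the corpus-linguistics measures mentioned in the abstract (term frequencies, collocational and concordance information), the sketch below uses Python with NLTK on a pair of hypothetical advert sentences. The toy texts and the choice of NLTK are assumptions for illustration only, not the paper's actual Brexit advert dataset or tooling.

```python
# Minimal sketch, assuming Python with NLTK, of the measures named in the
# abstract: term frequencies, collocations and concordances.
import re

import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Hypothetical advert texts standing in for the real corpus.
ads = [
    "We must take back control of the laws that govern us.",
    "The time has come for us to decide our own future.",
]

# Tokenise with a simple regex so the example needs no external data downloads.
tokens = [t for ad in ads for t in re.findall(r"[a-z']+", ad.lower())]

# Term frequencies: function words such as 'we' and 'the' surface near the top.
freq = nltk.FreqDist(tokens)
print(freq.most_common(10))

# Collocations: word pairs that co-occur more often than chance, ranked by PMI.
finder = BigramCollocationFinder.from_words(tokens)
print(finder.nbest(BigramAssocMeasures.pmi, 5))

# Concordance: every occurrence of 'we' printed in its surrounding context.
nltk.Text(tokens).concordance("we")
```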

DOI

http://dx.doi.org/10.34190/EWS.20.503

