This paper analyses nearly 600 news segments relating to climate change broadcast on three American news networks over a period of eight years. The paper demonstrates the typical steps involved in a text analytics solution. It shows how the text data was sourced and imported into a software program. The steps carried out in pre-processing the text data are outlined, and key terms in the text analytics pipeline are explained. A sentiment analysis is applied using a lexicon, and further processing is carried out to answer the original questions posed: which words drive a particular sentiment category, how the news topic vocabulary varies by news network, and how sentiment changes over time. It is argued here that the use of an externally provided lexicon in sentiment analysis is not without its pitfalls. It is also shown how the lexicon can be altered by the implementer and what effect this has on the results. The stop-word list used also affects the text content downstream, which in turn influences the sentiment score. As such, the integrity of the results of a sentiment analysis solution can be called into question when the source code itself is not publicly visible and available for inspection.
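The lexicon-based scoring described above, and its sensitivity to lexicon edits, can be sketched in a few lines. This is a minimal illustration only, not the paper's actual implementation: the lexicon entries, stop-word list, and sample text below are all invented for the example.

```python
# Minimal sketch of lexicon-based sentiment scoring (illustrative only;
# the lexicon, stop words, and sample text are invented for this example).

STOP_WORDS = {"the", "is", "a", "of", "not"}

# A tiny sentiment lexicon: word -> polarity score.
LEXICON = {
    "warming": -1,
    "crisis": -2,
    "hope": 2,
    "clean": 1,
    "disaster": -2,
}

def tokenize(text):
    """Lower-case, strip punctuation, split on whitespace."""
    return [w.strip(".,!?").lower() for w in text.split()]

def sentiment_score(text, lexicon, stop_words):
    """Sum the lexicon polarity of each non-stop-word token."""
    tokens = [t for t in tokenize(text) if t not in stop_words]
    return sum(lexicon.get(t, 0) for t in tokens)

segment = "The warming crisis is a disaster, not a hope of clean energy."

# Altering the lexicon (e.g. treating "warming" as neutral topic
# vocabulary rather than a negative term) shifts the score of the
# same text, as does changing the stop-word list.
adjusted = dict(LEXICON, warming=0)
print(sentiment_score(segment, LEXICON, STOP_WORDS))   # -2
print(sentiment_score(segment, adjusted, STOP_WORDS))  # -1
```

Note that the stop-word list here discards "not", silently removing a negation before scoring: a small, concrete instance of the downstream effects the paper discusses.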
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.
"Text Analytics Techniques in the Digital World: a Sentiment Analysis Case Study of the Coverage of Climate Change on US News Networks," Irish Communication Review: Vol. 16: Iss. 1, Article 7.
Available at: https://arrow.tudublin.ie/icr/vol16/iss1/7