Project summary

The aim of this transdisciplinary project is to understand how disinformation is used in Australian political advertising on social media. To this end, we will perform a data-driven analysis of Australian political campaigning on Facebook. We will collect manual truthfulness labels by means of crowdsourcing and train Artificial Intelligence models to assess the varying levels of truthfulness across the collected dataset. This will serve as a first step towards continuous social media monitoring of political campaigning, aimed at understanding how untruthful communication may be used to influence an audience (e.g., the electorate) and further an agenda.


Project description

Disinformation, which can be defined as the deliberate attempt to spread false and misleading information [1,15], is not new, but the contemporary version is “different, and dangerous” (according to [7]). There appears to be widespread agreement that citizens are increasingly exposed to online information intentionally designed to mislead and influence them [1,4]. While there is little evidence to suggest disinformation influences voting behaviour [6,11], the concern held by many democratic theorists, policymakers and commentators is that campaigns can use disinformation tactics to increase polarisation, fragment the public sphere and deepen dissatisfaction with democracy and key institutions such as the media and political parties.


The aim of this project is to understand how sponsored social media content is used in such campaigns. With a focus on Australian politics, we will perform a data-driven analysis of Facebook political advertising using data already collected from the Facebook Ad Library. Using data analytics methods (e.g., Apache Spark) to deal with the size of the available data and AI methods (e.g., computer vision) to process media content at scale, we will conduct a large-scale analysis of all political sponsored content published on Facebook, with a specific focus on election events such as Australian federal elections. Based on verified methodologies [13], we will crowdsource truthfulness labels (e.g., via Amazon MTurk) for a sample of political ads in the collection and train supervised machine learning models to classify the entire dataset, enabling further analysis of the use of disinformation on social media.
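The label-then-classify pipeline described above can be sketched in a few lines. This is a minimal illustration only, with hypothetical ad texts and labels: it assumes binary crowdsourced labels (1 = truthful, 0 = not truthful) and stands in a simple TF-IDF plus logistic regression model for the project's actual AI models, which would also process images and video.

```python
# Minimal sketch of the pipeline: train on a crowd-labelled sample of
# political ads, then classify the remaining (unlabelled) collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowdsourced sample: (ad text, aggregated truthfulness label).
# In practice, each label would be aggregated from multiple crowd workers.
labelled_ads = [
    ("Official figures show unemployment fell last quarter", 1),
    ("Our opponent plans to abolish all pensions next year", 0),
    ("The new policy was costed by the Parliamentary Budget Office", 1),
    ("Secret documents prove the election results are predetermined", 0),
]
texts, labels = zip(*labelled_ads)

# Fit a supervised model on the labelled sample.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Apply the trained model to the rest of the ad collection.
unlabelled = ["Unemployment statistics were released by the bureau last quarter"]
predictions = model.predict(unlabelled)
```

At the scale of the full Ad Library collection, the same fit/predict steps would run inside a distributed framework such as Apache Spark rather than in-memory scikit-learn.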


While the question of how data is used in political campaigning has been the focus of significant scholarly attention (e.g., [8]), our approach is innovative in that it focuses on truthfulness in social media advertising. In doing so, it will generate an understanding of the challenges and opportunities of this type of personalised campaigning using a human-in-the-loop approach [2]. The significance of this study relates to the contemporary media environment and the widespread perception that disinformation poses a threat to the integrity of democratic elections. In Australia, 48 per cent of people rely on online media as their main source of news and, as one example of the spread of disinformation, 66 per cent say they have encountered disinformation online related to COVID-19 [12]. Global concerns about disinformation have led to the formation of parliamentary committees in many advanced democracies, including the United States, the United Kingdom and, of course, Australia. The Chair of the Select Committee on Foreign Interference through Social Media, Senator Jenny McAllister, recently said: “...through social media you can introduce inauthentic, deceptive, covert voices whose goal isn’t to improve the quality of public debate in Australia, it’s actually to degrade it” [16]. This project will produce an authoritative set of findings that can inform the work of electoral regulators and of the elected representatives who are working to maintain the integrity of democratic elections.




References

[1] Bennett WL, Livingston S. (2020) The Disinformation Age: Politics, Technology, and Disruptive Communication. Cambridge University Press.
[2] Demartini G., Mizzaro S., Spina D. (2020). Human-in-the-loop Artificial Intelligence for Fighting Online Misinformation: Challenges and Opportunities. In: The Bulletin of the Technical Committee on Data Engineering, Vol. 43 No. 3. September.
[3] Dziedzic S. and Norman J. (2020). Scott Morrison demands apology from China over ’repugnant’ tweet showing Australian soldier threatening to kill child. Published by abc.net.au on 30.11.2020.
[4] Humprecht E, et al. (2020) Resilience to online disinformation: A framework for cross-national comparative research. The International Journal of Press/Politics.
[5] Jiang S. and Wilson C. (2018). Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. In: CSCW, pages 1–23.
[6] Jungherr A. and Gayo-Avello D. (2020). Retooling Politics: How Digital Media Are Shaping Democracy. Cambridge University Press.
[7] Karpf D. (2019) On Digital Disinformation and Democratic Myths. Mediawell.
[8] Kefford, Glenn (2021). Political parties and campaigning in Australia: data, digital and field. Cham, Switzerland: Palgrave Macmillan. doi: 10.1007/978-3-030-68234-7
[9] Knaus C. (2020). Disinformation and lies are spreading faster than Australia’s bushfires. Published by theguardian.com on 12.01.2020
[10] Liu Y. and Wu Y.-F. B. (2018). Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Thirty-second AAAI conference on artificial intelligence.
[11] Miller ML and Vaccari C. (2020) Digital Threats to Democracy. The International Journal of Press/Politics.
[12] Park S., Fisher C., Lee J. Y., McGuinness K., Sang Y., O’Neil M., Jensen M., McCallum K., and Fuller G. (2020). Digital News Report: Australia 2020. News Media Research Centre, University of Canberra.
[13] Roitero K., Soprano M., Fan S., Spina D., Mizzaro S., and Demartini G. (2020). Can the crowd identify misinformation objectively? The effects of judgment scale and assessor’s background. In: Proceedings of the International Conference on Information and Knowledge Management.
[14] Starbird K. (2019). Disinformation’s spread: bots, trolls and all of us. Nature, 571(7766):449–450.
[15] Tenove C. (2020) Protecting Democracy from Disinformation: Normative Threats and Policy Responses. The International Journal of Press/Politics.
[16] The Guardian. (2020). Why are we concerned about foreign interference through social media? Australian politics live podcast.


Project members

Lead investigator:

Associate Professor Gianluca Demartini

Associate Professor
School of Electrical Engineering and Computer Science

Other investigator(s):