Tom Ascott discusses the politicisation of coronavirus and the dangers of disinformation online.
The politicisation of coronavirus highlights the issues associated with combating disinformation.
Since the coronavirus outbreak, much has changed. What will be able to return to normal and what has become the ‘new normal’ is being explored between spikes in coronavirus cases. One clear lesson from 2020 is that citizens have relied more heavily on social media and digital communications to stay in touch with their family, friends and colleagues.
Our reliance on these media has highlighted another problem: online disinformation campaigns. As politicians look to experts for guidance on communicating solutions to coronavirus, those messages are being muddled. The Prime Minister, Boris Johnson, seemed to reveal a degree of incredulity about how bad the current situation truly is when he called anti-vaxxers ‘nuts’. The fact is that the anti-vaccination movement represents a serious hurdle to the distribution of the forthcoming vaccine for coronavirus.
The World Health Organization sets a ‘safe level’ target of 95% of the population being vaccinated to prevent an outbreak. Year on year, vaccination rates have fallen in the UK, and now only 90.3% of children aged 24 months and under are vaccinated. Citizens who expose themselves to greater health risks not only put themselves in jeopardy but can create public health issues that ripple outwards with hard-to-foresee consequences.
This is the face of disinformation that more people are beginning to see. It is not confined to users sharing links about giants being discovered or blurry UFO sightings. Disinformation has helped to propagate the belief that vaccinations are dangerous, and could sway a sizeable portion of the population towards resisting one.
Disinformation also makes it harder to spread accurate information. Disinformation articles use the same keywords as legitimate news articles, meaning that the developers responsible for search and recommendation algorithms have to pay special attention to which results are surfaced. For some users, searching for coronavirus information online will return factual reporting; for others, because of personalised algorithms, the same search may return disinformation written specifically to muddy the waters.
AI is fuelling the future growth of disinformation. It allows disinformation to be generated not by human writers but by algorithms that target specific individuals, so that content can be delivered and spread more quickly. Algorithms can also produce more sophisticated disinformation, such as deepfakes, which are more convincing than simple text posts.
Policy solutions for combating disinformation are difficult to implement.
Two years ago, the European Commission laid out its strategy for tackling online disinformation. It remains in force, with TikTok signing up to it in June 2020. One of the European Commission’s proposed solutions for countering disinformation is user reporting of disinformation online.
User reporting and media literacy campaigns both suffer from the same problem. In our current political climate, any issue can become highly politicised, and the politicisation of public health issues, for example, creates two camps, each believing that it is spreading the truth.
Politicisation generates organic misinformation: citizens who earnestly believe that wearing a mask is dangerous to their health will feel that this message is not being spread widely enough. If one truly believed wearing masks was harmful, then sharing that information would seem absolutely vital. This means that once a critical mass of online disinformation has been reached, platforms cannot rely on users to report disinformation correctly. Those who believe masks are harmful will report content telling citizens to wear a mask; in turn, articles explaining the importance and safety of vaccines will be reported by anti-vaxxers.
Instead of passing on the problem to citizens, we must look to platforms to get better at monitoring trends and tackling disinformation early.
Likewise, politicians must get better at not politicising certain issues, especially those related to public health. It is a difficult conversation to have, but Johnson’s comments calling anti-vaxxers ‘nuts’ will not change any of their minds. A more thoughtful and nuanced conversation is needed to bring those on the fringe back into the fold.
It is critical that cross-cutting think tanks like the Fabian Society continue to research these issues. Papers such as Modern Britain: Global Leader in Ethical AI highlight how the problems of AI and disinformation affect no single sector but present holistic challenges. There is still an opportunity to get ahead of the problem, and as Darren Jones MP writes, ‘responsible regulation will be critical’.
The societal rewards for bipartisan behaviour are higher than ever. Mature, conscientious leadership that focuses on those left out in the cold of the political discourse will yield many benefits, including the potential for a vaccine, not just for coronavirus but for many of our political woes.
Tom Ascott is the digital communications manager at the Royal United Services Institute and the Editor of Intellect’s journal Technoetic Arts. He wrote his graduate thesis on the risks posed by automated weapon systems. You can read more of his analysis on technology, disinformation and digital media on his blog: http://www.tomascott.co.uk/
He tweets at @Tom_Ascott