Does Online Activity Harm Our Democracy?

In the Gen Z era, power has shifted significantly to social media companies. Do social media and the internet threaten our democracy?

The threats posed by social media and the internet are numerous: cyberbullying, extremism, extreme or revenge pornography, modern slavery, hate crime and more. Whilst all of these issues are incredibly important, the focus here is on misinformation and disinformation and their impact on democracy. Misinformation is the unintentional spreading of false or misleading information, whilst disinformation is intentional.

The Democracy and Digital Technologies Committee (“the Committee”) recently looked into the connection between democracy and digital technologies. Its report strongly supports the Government’s Online Harms programme, states that democracy is facing a “daunting new challenge”, and finds that disinformation and misinformation are resulting in what it terms a “crisis of trust”.

Is the threat to democracy real?

First, what is understood by ‘democracy’ in this post?

A democratic voting system and citizen participation are key. Democracy involves the rule of the majority and government by the people, “in which the supreme power is vested in the people and exercised by them directly or indirectly through a system of representation usually involving periodically held free elections.” It is “the process which gives people a voice in society.”

How is ‘democracy’ being affected by social media?

Transparency is key in a democratic society. When misinformation and disinformation spread, they can pervade all aspects of public life, from voting to spending habits to health.

Misinformation and disinformation have been exacerbated, and indeed highlighted, by the COVID-19 pandemic. In week one of the UK lockdown, nearly 50% of respondents to a survey said they had seen information they thought to be false or misleading.

At times the false information that spread threatened public health, resulting in 5G towers being set alight, a trend of inhaling sulphurous fumes from fireworks in an attempt to prevent catching COVID-19, drinking silver, avoiding ice cream and more.

False or misleading information is harmful because it creates mistrust and confusion, and in situations such as a pandemic this leads to more concrete repercussions such as those mentioned above. During the election campaign, when false and misleading claims were shared, 17% of the public said they were less likely to vote as a result. Focus group participants suggested that political misinformation made them lose trust in the political process. There are also concerns that corrected versions of previously incorrect or misleading information are not re-circulated to the people who saw the incorrect version.

There are concerns that algorithms could create polarisation. The Committee stated that “the evidence base is far from strong enough to support effective regulation in this area”. However, even if there is insufficient evidence to prove that algorithms create polarisation, parts of the population do not understand how social media platforms work. This is a harm that should be guarded against if it is a possibility, since boosted content could not only create tensions and the potential for a more divided society, but could also lead members of the public to hold misguided opinions.

These issues affect how, and whether, the population vote. Those who are misguided by what they see on social media may vote a certain way because of the information they are presented with. Social media platforms therefore inadvertently affect the way people vote. Whether because users believe false or misleading content, are led to believe their opinion is correct by being recommended similar, affirming content, or decide not to vote altogether, the result is that social media content can influence users’ perceptions and opinions, and ultimately their “voice” as understood in the definition of democracy above.

Was this a problem in the past?

The printed press posed similar challenges in the past. However, social media and the internet bring their own particular set of issues with the widespread, rapid circulation of online content. The rise of social media has occurred concurrently with the decline in membership of trade unions, churches, local community groups and sporting associations. In the past, these social spaces were “core arenas for democratic activity”, and democratic debate was generally the remit of newspapers, television channels and radio stations. Now, the traditional spaces for public debate have spread to online platforms, where anyone can be an author and location doesn’t matter.

Furthermore, whilst the printed press is regulated, the online space is, as it stands, unregulated. The EU’s e-Commerce Directive makes a host liable only if the host, or its technology, becomes aware of illegal content and then fails to remove it within an appropriate time frame. Tech giants and social media platforms largely create their own policies, rules and guidance, and, whilst there are collaborative initiatives between them, there is currently no legislation in the UK that sets a baseline for companies’ conduct in this sphere.

Social media and tech giants need guidelines created and enforced by an impartial authority. Relying on competition to motivate platforms to have solid policies and codes of conduct in place may previously have worked; however, people generally use the large platforms despite their concerns, since they feel they have no other choice.

Furthermore, enforcement of platforms’ own guidelines can be inconsistent. The Report uses YouTube as an example. One of the largest YouTubers, Steven Crowder, was reported for repeated racist and homophobic abuse of Carlos Maza. YouTube decided that his videos did not violate its policies because their primary purpose was not to incite hatred. However, before the incident, the guidelines did not say that incitement had to be the primary purpose, only that it had to be against the guidelines. After criticism from the public, YouTube de-monetised his videos. It also changed its guidelines, but it is unclear what the true motive for updating them was.
It also reports that, “YouTube employees have anonymously spoken to the press to indicate that they are prevented from enforcing the rules consistently and that more senior employees stop sanctions from being applied to high profile creators.”

Allowing platforms to determine and enforce their own codes of conduct and policies when they hold an oligopoly on the market cannot be reconciled with creating a safe space online. Microsoft made a pertinent comparison to doctors and lawyers:

“Just as today when consulting a doctor over a medical issue, or a lawyer over a legal challenge, we can seek a second opinion or redress when something goes wrong, in the world of algorithms knowing who is accountable when something goes wrong is equally important. Maintaining public trust will require clear line of sight over who is accountable … in the real world.”

Online Harms Framework

The Online Harms White Paper is a proposal to enact legislation that would create a legal duty of care for companies and tech firms that allow content or activity to be shared on their platforms. The duty of care would apply to content or activity which could cause significant physical or psychological harm to an individual. Obvious platforms to which this would apply include Facebook, Google, Twitter and YouTube.

The Government plans to take a three-pronged, risk-based approach, which includes misinformation and disinformation within its ambit. However, it focuses on harms to individuals rather than to the public in general. Whilst this is “a spectrum rather than a bright line”, it would mean that harms to the public or society would only fall within the Online Harms ambit if they also constituted harm to an individual.

Why is it controversial?

There are multiple interests to be balanced in regulating social media and the internet. These include freedom of expression, the right to privacy, promoting and protecting democracy, safety, (potential) duties of care, ensuring accurate content, and many more.

The Online Harms framework would hold social media platforms and tech giants accountable for the content they allow users to put online if they do not monitor it adequately. There would be sanctions in place, and an independent regulator to create codes of conduct and enforce them – as it stands, this will be Ofcom.

The framework could conflict with freedom of expression if platforms are called upon to monitor and remove content that is inconsistent with their duty of care to users. Where platforms are eager not to be sanctioned, they risk erring on the side of caution and removing harmful, but legal, content. In Germany, similar legislation was enacted, and there has since been evidence of the over-removal of lawful content. Human Rights Watch picked up on this and summarised its concerns as follows:

“Two key aspects of the law violate Germany’s obligation to respect free speech, Human Rights Watch said. First, the law places the burden on companies that host third-party content to make difficult determinations of when user speech violates the law, under conditions that encourage suppression of arguably lawful speech. Even courts can find these determinations challenging, as they require a nuanced understanding of context, culture, and law. Faced with short review periods and the risk of steep fines, companies have little incentive to err on the side of free expression.

Second, the law fails to provide either judicial oversight or a judicial remedy should a cautious corporate decision violate a person’s right to speak or access information. In this way, the largest platforms for online expression become “no accountability” zones, where government pressure to censor evades judicial scrutiny.”

The Online Harms framework would allow those caught by it to seek judicial review of the independent regulator’s actions and decisions. However, the concern persists that, if the framework is enacted, companies will be overly cautious and infringe on members of the public’s right to free speech.

What should be done?

Algorithms

One suggestion from the Committee is that there should be more algorithmic transparency. In theory, this would mean that users, researchers and the general public have access to the algorithms in place on the platforms, enabling them to understand the content that they see and to see the bigger picture.

On the other hand, Jennifer Cobbe has suggested that instead of algorithmic transparency, platforms should simply not promote content that is harmful, and be given guidance as to what types of content this includes. In her opinion, it would lessen the infringement of freedom of expression since the content is still hosted, but it would have to be actively searched for in order to be seen.
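
As a rough illustration of this demotion approach (a minimal sketch with hypothetical names and scoring, not any platform’s actual system), flagged content could simply be given no recommendation weight while remaining hosted and searchable:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    engagement_score: float  # how strongly the platform would normally promote it
    flagged_harmful: bool    # set by moderation tooling against published guidance

def ranking_score(post: Post) -> float:
    """Score used to order a recommendation feed.

    Flagged content is demoted to zero rather than deleted: it is never
    pushed into users' feeds, but it still exists on the platform and can
    be found by someone who deliberately searches for it.
    """
    return 0.0 if post.flagged_harmful else post.engagement_score

def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Return the top `limit` posts for recommendation, excluding demoted ones."""
    promotable = [p for p in posts if ranking_score(p) > 0]
    return sorted(promotable, key=ranking_score, reverse=True)[:limit]

def search(posts: list[Post], query: str) -> list[Post]:
    """Direct search still returns demoted content - hosting is unaffected."""
    return [p for p in posts if query.lower() in p.text.lower()]
```

The design choice this sketch captures is that the content is not removed, only denied amplification, so a user exercising free choice can still find it deliberately.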

Demoting rather than removing content could help strike a better balance between freedom of expression and free choice on the one hand, and the potential harm caused by content on the other. It would appear to fit within the broader aims of the Online Harms framework, since it focuses on the harm to individuals. However, it could conflict with the duty of care currently proposed if harmful content is known to the platform or its technology and is not taken down because the algorithms simply do not promote it. It would also require more from platforms than to be transparent with their algorithms. Thus, algorithmic transparency would be preferable.

To boost the effects of algorithmic transparency, guidelines for popular content creators could include labelling requirements similar to putting #Ad before sponsored posts. Platforms could be required to publish these guidelines under the codes of conduct made for them by Ofcom. For example, creators’ political or health-related content could be required to carry statements making clear that it is not ‘medically approved’ or ‘an authoritative statement’. This would help remind users to remain critical of content they find online.
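
A minimal sketch of how such a labelling check might work at publication time (the categories and wording below are illustrative assumptions, not anything proposed by the Committee or Ofcom):

```python
# Hypothetical disclosure labels, analogous to the existing #Ad convention.
REQUIRED_LABELS = {
    "political": "Not an authoritative statement",
    "health": "Not medically approved",
}

def missing_labels(category: str, post_text: str) -> list[str]:
    """Return any disclosure statements the post should carry but does not."""
    required = REQUIRED_LABELS.get(category, "")
    if required and required.lower() not in post_text.lower():
        return [required]
    return []

# Example: a health-related post with no disclosure would be flagged
# before publication so the creator can add the required statement.
print(missing_labels("health", "Garlic cures everything, trust me!"))
# -> ['Not medically approved']
```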

Education

Part of the above suggestion will most likely depend on increased digital media literacy, since algorithmic transparency does no good unless users can understand what the algorithms mean for their use of the platform. Digital media literacy is taken to mean a deeper understanding of online platforms: not just how to use technology and digital media, but how to critically analyse what one sees in order to “distinguish fact from fiction” and “question how that information reaches them and how they are using it”.

After a comparison to Estonia, where the government regularly reviews its digital education strategies, and to Finland, where distinguishing fact from fiction is considered the patriotic duty of every citizen because of the threat from Russia, the Committee concluded that more needs to be done by way of digital media literacy education in the UK. The Government has previously rejected the idea of digital literacy becoming the “fourth pillar of education alongside reading, writing and maths”, and, whilst it stated it was introducing digital literacy as a key part of the national curriculum and publishing a media literacy strategy in 2020, the strategy has yet to be published. It is important that users are able to understand the platforms they use, and a clear digital media education strategy could help achieve this.

Although it may not have been accepted as a “fourth pillar”, digital literacy has been recognised as a fundamental aspect of education. With COVID-19 speeding up the transition to more online transactions and communications, it is now more important than ever to ensure that the education regime covers how to navigate online content safely and with an open mind.

Scope of Online Harms Framework

As mentioned above, harms to democracy, as well as specific harms to individuals, should be among the harms for which platforms would be liable for breaching their duty of care to their users. This would ensure that democracy is adequately prioritised as something that is threatened by social media, the internet and online platforms. It has been highlighted that there is no hard line between individual and societal harms; rather, “it is a spectrum” where there may be overlap. However, the current approach could leave gaps where a societal harm goes unaddressed because it is not also an individual harm. As the Committee suggests, the duty of care should extend to “actions which undermine democracy” and cause “generic harm to our democracy”. This may extend the ambit of the Online Harms framework, but it would do so in a justified way, proportionate to the threat posed. It could also provide a lawful justification for limiting a user’s freedom of expression and fulfil the “necessary in a democratic society” requirement of Article 10 of the European Convention on Human Rights.

There are, of course, other harms within the scope of the Online Harms framework that threaten democracy. However, the focus here has been on misinformation and disinformation. If you would like to read a discussion of more of the threats posed by platforms and tech giants to democracy, or of other harms, let me know in the comments below!
