Isabella Berkley, Randle Steinbeck, Valentina Vargas, David Stansbury
Speakers: Gowri Ramachandran and Ian Vandewalker, Brennan Center for Justice
This session explored vulnerabilities in the US election system from both disinformation and cybersecurity perspectives. Both speakers noted considerable progress since the 2016 election and the 2018 midterms, both in the processes that had been put in place (e.g. ensuring backups of voting records) and in public awareness of the risk of disinformation. Adversaries’ sophistication had improved as well, however. Because the election system relies fundamentally on voters’ trust, a single incident could be enough to break that trust. The incident need not even occur in reality: even the suggestion of falsified election results could undermine that crucial trust.
Activation: Facilitated Debate
Aim:
Shed light on the contrasting and complementary aims of different stakeholder groups in content moderation policies on online platforms.
Recipe:
The group was divided into six teams of two to three people. Each team was assigned a stakeholder group to represent in the debate around moderation policies for online platforms. The six stakeholder groups we explored were:
- Political parties
- Tech companies and platforms
- Election officials
- Grassroots activists
- Voters focused on the rights-first philosophy of online moderation (see Jonathan Zittrain’s framework)
- Voters focused on the public health philosophy of online moderation
The teams were all provided with the same prompt (below) and then went into breakout rooms for fifteen minutes to agree on the main considerations their stakeholder group would have in this case, recording their thoughts in the exercise Google document (see below). Teams were encouraged to take five minutes for personal reflection before launching into discussion.
Prompt: In the days leading up to the 2020 election, you decide to log onto Twitter to see the most recent political commentary by your friends and family. While browsing, you come across a tweet posted by a distant acquaintance you follow. The tweet, which appears to show the polling place in your local town, includes an image of a police officer wearing [either a MAGA or a pro-Biden] mask, with text stating: “Is your vote safe?” You feel uncomfortable with this post and do not know whether it is a real image or disinformation intended to encourage voter suppression.
After fifteen minutes the teams returned to the main room. One by one, as called upon by the debate facilitator, they presented their main points to the other teams. Once each team had had an opportunity to present its ideas, teams were invited to respond to any other team’s ideas. In these responses teams retained their stakeholder group identities, rather than reverting to their individual identities.
Outputs and insights:
The debate clearly captured the complexity of the issue. Even among groups that sat largely on the same side of the big-picture question – to moderate or not to moderate – there was considerable difference in the approaches suggested. Are we more concerned with short-term effects, ensuring the accuracy of information circulating in the current election window, or with the longer-term precedent being set? Which rights are at risk – the right to vote, the need to protect accurate information, the right to freedom of speech – and what is their relative priority? How can uniform standards be set by governments when the particulars of individual platforms differ so widely?
Particularly salient insights included the question of whether deleting content from platforms might in some cases amount to evidence tampering, if the content were later needed in court proceedings; and whether there is any content moderation outcome that ends positively for social media platforms: act early and they are accused of setting cultural speech norms where no consensus exists in broader society; act too late and they are accused of being reckless with the political and broader informational health of society.