South Africa’s online trolls: to feed or not to feed?

Don’t feed the trolls. That’s the general advice when confronted with these online pests. The wisdom behind it is that trolls are sociopaths who want to provoke anger. They do not enter online debates to enrich arguments, but to throw them off-topic with inflammatory but largely irrelevant posts. They behave in this way because they perceive the online environment as largely free of consequences, and because they can hide behind the internet’s anonymity.

South Africa has more than its fair share of trolls, and recent articles on race and racism seem to attract some of the most vociferous. Writers like Gillian Schutte and Benjamin Fogel have been trolled for their articles on the continued existence of white racism. Articles on the Marikana massacre also threw up some of the worst examples of online racism, with some commentators on social networking sites referring to black people as primitive, superstitious, retarded, chimpanzees, lazy unproductive parasites, “moronic Neanderthals that would rape a baby to cure themselves of HIV”, undeserving of the vote and incapable of running a country.

Posts like these present freedom of expression advocates with a dilemma: allow them, and racial tensions are inflamed; suppress them, and racism goes unchallenged. A free speech-friendly approach to the problem would allow all views, including racist views, to be expressed so that they can be challenged publicly. After all, the South African Constitution does protect speech, even racist speech. Furthermore, banning racist speech would merely drive racist attitudes underground.

But contesting such speech openly can amount to feeding the trolls, and the trolls shouldn’t be fed. The problem is that the classic free speech arguments were crafted before the internet, when it was safe to assume that speakers could be identified and held accountable for their statements.

This means that new ways need to be found both to protect freedom of speech and to promote accountability. One option is to require people who post on online forums to use their real names. But such requirements are difficult to enforce, as people can still register under pseudonyms. Furthermore, compelling people to verify their identities is not the best solution in all situations: anonymity may allow commentators who fear retribution for their speech to speak freely. If people are prevented from concealing their identities, matters of public importance may never come to light.

News24.com and other 24.com sites have been plagued by racist and offensive online speech. To address the problem, since 2012 they have built an authentication layer into their system, compelling commentators to log into the comments section using their Facebook accounts. According to Jannie Momberg, editor-in-chief of News24.com, “[There] has been a real improvement due to the introduction of the ‘real world’ identities. The fight against ‘trolls’ is ongoing though. Using Facebook log-ins won’t solve the issue of trolling, but it certainly mitigates against it.” While this approach may increase accountability, it does not stop persistent trolls from creating fictional Facebook accounts and hiding behind them.

Another possibility is to look to internet service providers (ISPs) to take greater responsibility for online behaviour. However, holding intermediaries like ISPs legally liable for content is inappropriate on freedom of expression grounds, and South African law recognises this: ISPs are granted a safe harbour under certain conditions, including that they operate a self-regulatory system with a notice-and-takedown procedure.

But industry self-regulation does not automatically promote freedom of expression. ISPs tend to be risk-averse, and may well err on the side of caution. The acceptable use policies of a number of the major ISPs limit freedom of expression unduly. MWEB’s policy, for example, prohibits use of its services in a way that is “…harmful, obscene, discriminatory… constitutes abuse, a security risk or a violation of privacy… indecent, hateful, malicious, racist… treasonous, excessively violent or promoting the use of violence or otherwise harmful to others”. Many of these grounds are vague, and much of the speech they prohibit would be protected by the Constitution.

iBurst’s policy is even more restrictive in that it forbids publication of illegal material that it defines as obscene and discriminatory. It also forbids material that “could be deemed objectionable, offensive, indecent, pornographic, harassing, threatening, embarrassing, distressing, vulgar, hateful, racially or ethnically offensive, or otherwise inappropriate, regardless of whether this material or its dissemination is unlawful”. This policy is almost certainly unconstitutional, given its over-breadth. In contrast, Internet Solutions’ policy is much more tightly drafted, and defines prohibited content as “copying or dealing in intellectual property without authorisation, child pornography and/or any unlawful hate-speech materials”.

Unless creative ways are found to deal with trolling and hate speech, the internet will remain vulnerable to both state and market censorship. The internet has placed the means of communication in many more hands, but the community of internet users now needs to take greater responsibility, including devising strategies to promote good digital citizenship. User education should promote critical thinking, bringing home the message that words have consequences.

Furthermore, online publishers need to take responsibility for promoting intelligent debate: a difficult ask in a tight media economy where competition for eyeballs can lead to sensationalist publishing. Editor and publisher of the South African Civil Society Information Service, Fazila Farouk, has argued that “[Who] comments and how they comment also depends on what kinds of views the publication publishes. If publications are perhaps a little cavalier about the views that they publish, then I think this also attracts trolls to the comments section”.

In this regard, moderation informed by a community standards approach is perhaps the best option. Too many comments policies are imposed from above rather than crowdsourced from users; as a result, they often fail to capture the ‘will of the tweeple’. According to the deputy head of the School of Journalism at Rhodes University, Professor Herman Wasserman, “[This] moderating does not have to be handed down from on high by the editor(s), but could be a system of peer review by the online community, according to community standards that are drawn up collaboratively and applied collectively. This approach, adopted by The Guardian newspaper, allows users the opportunity to promote a safe and inclusive online community that defines collectively what online conduct is acceptable. Moderation merely enforces these commonly agreed standards.”

Furthermore, a difficult but necessary distinction needs to be drawn between trolls and those internet users who genuinely believe in racist, sexist and other bigoted ideas. The crucial difference is one of intention: trolls spout offensive speech simply for effect, whether or not they truly believe it, whereas genuine bigots really hold their views, and effort needs to be put into calling them out and encouraging them to change those views.

And the trolls? Again, a community standards approach can be applied, as experienced user communities often know who is a troll and who isn’t. They can inform the moderator accordingly, so that the troll can be blocked if warnings do not work. Users should be wary of applying the label ‘troll’ too readily, however, as it can damage an otherwise legitimate user’s online reputation and destroy the trust necessary for productive online debate.

The community standards approach to internet governance has its pitfalls. It could introduce a new form of cyber-mob censorship, leading to unpopular but socially important ideas being shouted down. Also, in South Africa the demographics of internet users do not reflect those of broader society, which could lead to inappropriate community standards being applied, although internet penetration is increasing. The hard reality is that, in a global climate of increasing governmental control of the internet, governance of cyberspace by its users may be the only way of ensuring that the internet remains free while its users remain accountable.

Failure to engage with offensive speech strengthens the case for government regulation: it communicates the message that such speech carries no consequences, and that the government therefore has a responsibility to step in and protect vulnerable users. As software developer and blogging pioneer Dave Winer has argued, “[That’s] the flip side of free speech: there are consequences to what you say – not because you’ll go to jail, but because your fellow citizens will not accept you as fit to walk amongst them.”

Professor Jane Duncan is Highway Africa Chair of Media and Information Society in the School of Journalism and Media Studies at Rhodes University.

By Jane Duncan

Source: Themediaonline