
Meta’s plan to fight South African election misinformation

Published: 12 April 2024 by Nkosinathi Ndlovu


Meta said it is drawing from its experiences in over 200 elections around the world for South Africa’s upcoming poll.
Facebook parent Meta Platforms has launched an anti-hate speech and misinformation campaign in South Africa, which will run on its platforms as well as local and national radio stations.

This is part of the social media giant’s efforts to ensure that users on its platforms – which include WhatsApp, Facebook and Instagram – are able to identify and report content that is deliberately designed to mislead or misinform them, potentially threatening the integrity of the upcoming election in May.

Meta, along with other social media companies, has come under fire in recent years for allowing misinformation to thrive on its platforms as long as it drove traffic.


In Meta’s case, this criticism coincided with the mishandling of user data – from 50 million Facebook accounts, to be specific – which British consulting firm Cambridge Analytica exploited to target voters in an effort to swing the 2016 US presidential election in Donald Trump’s favour.

In 2019, Meta (then Facebook) was fined US$5-billion by the US Federal Trade Commission over the saga. But according to Balkissa Idé Siddo, public policy director for sub-Saharan Africa at Meta, the company has learnt a lot since then.

“We draw on lessons from our involvement in over 200 elections worldwide,” said Siddo in an interview with TechCentral this week. “Over the last eight years, we’ve rolled out industry-leading transparency tools for ads about elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation.”

IEC partnership

According to Siddo, Meta works closely with South Africa’s Electoral Commission (IEC) to develop policies and tools that will assist the commission in its battle against misinformation in the context of the election. The initiative includes training programmes for IEC staff on media literacy and how to detect misinformation.

But she added that although the social media giant’s work with the commission may intensify in the build-up to the election, the partnership represents an ongoing process in a growing relationship between the two bodies.

“Mis- and disinformation are not new and they will not stop happening when the election is over,” said Siddo.

Read: AI deepfakes and SA’s fight to protect the 2024 election

To support user education, Meta’s moderators are quick to remove content deemed harmful, but they keep, downrank and label content categorised as misinformation so that users can engage with it and learn to recognise it when they come across it on other platforms.

But advances in technology are adding new challenges to the content moderation landscape. Artificial intelligence and deepfakes are improving the quality of fake content on social media platforms, and Siddo believes this highlights the importance of ensuring that users are better educated about misinformation so that they can recognise fake content and respond to it appropriately.

Meta’s Balkissa Idé Siddo

Siddo said, however, that there is a positive side to AI that is not talked about as much as its potential dangers. In discussions with various stakeholders, including content creators, Meta has observed excitement about how AI can help elevate content-production capabilities, especially for smaller, less-resourced media outlets and individual content producers.

Meta is also using AI as part of its arsenal to combat the spread of misinformation on its platforms. “We have more than 40 000 staff dedicated to safety and security, and we partner with local bodies for fact-checking. But we have also been experimenting with AI tools and we have found that large language models are much faster at detecting harmful content,” said Siddo.

At an international level, Meta is part of a partnership with other social media companies and the owners of AI content-production platforms – including Microsoft, Google, Shutterstock and Midjourney – to help social media platforms identify AI-generated content.


After identifying that content is AI generated, Meta makes users aware of it through labelling. “They [content creation platforms] need to embed watermarks in their content so that when it gets onto our platforms we can recognise it,” said Ben Waters, policy communications manager for Africa and the Middle East at Meta.

Locally, Meta, Google and TikTok parent ByteDance signed a cooperation agreement with the IEC in July last year under which the elections agency has set up an independent, three-member committee to evaluate any reported cases of misinformation on social media platforms.

Depending on the committee’s findings, it will make recommendations to the IEC, which can then ask the offending platform either to de-rank the malicious content or take it down. But one of the largest social media platforms, X – formerly Twitter – is not party to the agreement. – © 2024 NewsCentral Media
