Photo: © UNDP Mauritania/Freya Morales

Misinformation, disinformation, and hate speech are among the defining challenges of the modern era. The 2020 US election, and its violent aftermath, is one of the most visible examples of how the viral spread of misinformation can damage the electoral process. But according to a recent Ipsos poll, fake news is a global phenomenon. Of 25,000 people surveyed in 25 countries, four out of five believe they’ve been exposed to fake news. Among those, 87% believe fake news is made worse by the internet, and 83% believe it has negatively affected political discourse in their country. 

Mis- and disinformation create doubt in people’s minds about what to trust, which in volatile periods like elections can lead to violence. Ultimately, what is often referred to as ‘fake news’ undermines public trust and citizens’ capacity to identify reliable sources of information.

iVerify can help. Developed by UNDP through the Brussels-based Task Force on Electoral Assistance and the Chief Digital Office, iVerify is a fact-checking initiative that combats the spread of false narratives during election periods by combining new technologies like AI and machine learning with tried-and-true in-person fact-checking. 

iVerify was developed in response to “repeated requests over the past three years from UNDP Country Offices and national counterparts on how to deal with misinformation and hate speech in elections,” says Gianpiero Catozzi, UNDP Elections Senior Advisor and Coordinator of the European Commission-UNDP Joint Task Force on Electoral Assistance. 

Here is how it works. iVerify processes articles and outputs reports assessing their veracity. Inputs are either manual, from members of the public or the iVerify team, or automated. People can submit articles for review via text (WhatsApp, SMS, and more) or directly through the iVerify platform. Leveraging CrowdTangle, which lets iVerify track public content across social media, the platform also automatically reviews articles on Facebook, Instagram, and Reddit daily, running them through Detoxify, an open-source tool that uses machine learning to detect hate speech.
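The automated triage described above can be sketched in a few lines. This is an illustrative sketch, not iVerify’s actual code: the `score_toxicity` function is a hypothetical stand-in for a trained model like Detoxify, and the flagging threshold is an assumption.

```python
# Illustrative sketch of iVerify-style automated triage (not actual iVerify code).
# score_toxicity stands in for a hate-speech model like Detoxify, which in
# practice returns per-category probabilities for a piece of text.

FLAG_THRESHOLD = 0.8  # assumed cut-off; a real deployment would tune this


def score_toxicity(text: str) -> float:
    """Placeholder scorer: flags texts containing obviously hostile words.
    A real pipeline would call a trained model here instead."""
    hostile_words = {"hate", "attack", "destroy"}
    words = text.lower().split()
    hits = sum(1 for w in words if w in hostile_words)
    return min(1.0, hits / max(len(words), 1) * 5)


def triage(articles: list[str]) -> list[dict]:
    """Score each article and mark those needing human fact-checker review."""
    reports = []
    for text in articles:
        score = score_toxicity(text)
        reports.append({
            "text": text,
            "score": score,
            "needs_review": score >= FLAG_THRESHOLD,
        })
    return reports
```

Anything marked `needs_review` would then be routed to the human fact-checkers described below; everything else passes through without human effort.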


Photo: © UNDP Zambia

“We selected Detoxify after looking at a lot of machine learning tools,” says Mark Belinsky, Digital Innovation and Scaling Specialist at UNDP’s Chief Digital Office. “A lot of effort was put into limiting algorithmic bias, which is critical in these sensitive situations and too often overlooked when introducing artificial intelligence systems into programming.” 

These reports are then sent to the team of in-person fact-checkers, composed of individuals linked to one or several national counterparts that have been trained and equipped through the iVerify initiative. As part of their fact-checking assignment, the team follows up with the people or institutions mentioned in a story to determine the veracity of the claims made. If those in-person verifiers find hate speech, disinformation, or misinformation, they flag it and publish an article on the iVerify website to let the world know. iVerify leverages another open-source technology, Check, to help: Check uses human-in-the-loop machine learning to match content, so that anything already labeled false doesn't have to be reviewed again, improving efficiency. All of these tools and approaches are open and available to anyone to use.

The tool was piloted in Zambia, ahead of the historic August 2021 general elections in which opposition leader Hakainde Hichilema took power from Edgar Lungu, who had led the country since 2015. “Zambia was perfect for a pilot,” says Mathilde Vougny, who works on UNDP’s electoral assistance team in Brussels, “because political mis- and disinformation commonly seen in other countries is in a nascent stage there. It’s not coming from anywhere outside Zambia’s borders.”

Plus, the project found enthusiastic partners, first in the local development group Panos Institute of Southern Africa and then in the Electoral Commission of Zambia. The Electoral Commission, in particular, was concerned about a rapid increase in online hate speech, misinformation, and disinformation in the lead-up to the election. “They were happy to work with us,” says Saré Knoope, Electoral Assistance Analyst at UNDP, “because they felt obstructed by the amount of erroneous information that was spreading. They were keen to work together because they thought it was in their interest. That helped us a lot.” The relationship was mutually beneficial: the iVerify team contributed a chapter on misinformation, disinformation, and hate speech to the ECZ’s Media Handbook on Elections, which was used to train journalists in the country’s 10 provinces. 

But the Zambian election turned out to be a lot more tense than anyone was expecting, according to Vusumuzi Sifile, Executive Director at Panos Institute of Southern Africa. Amid accusations of voter registration irregularities, restriction of independent media activity, and violent crackdowns of opposition demonstrations, the iVerify team in Zambia had their work cut out for them. Fact-checkers had barely finished their training when they were thrown into the deep end, asked to verify inflammatory claims with people and entities whose emotions were running high.

“One of our greatest challenges,” says Mwenya Maliseela Serah, an iVerify Zambia fact-checker, “is getting a response from our sources. Say you’re fact-checking an article and a politician is mentioned. You have to talk to the politician to verify the information. Sometimes they won’t answer. Or other times, they won’t understand what we’re trying to do and might feel threatened.”

“Sometimes you get negative language, or even insults,” Serah says. In fact, once a politician she was speaking to even threatened to bomb the Panos offices in Lusaka, Zambia’s capital. Given the tense reality of the situation, and the huge responsibility they shouldered, “we focused most of our efforts on quality of the fact-checking,” says Vougny. 


The Zambia project aimed to build the credibility of the fact-checkers within Zambia’s information landscape. On several occasions the team’s interventions led content producers to remove content and publish retraction statements on the basis of iVerify fact-checking reports.

iVerify fact-checker Nyambe Jere is proud of her contribution. “For us,” she says, “it felt like an accomplishment to provide the information that we did, and prevent hate speech and more violence.” 

The next challenge for the UNDP iVerify team will be next month’s elections in Honduras, which are already expected to be especially prone to disinformation, misinformation, and hate speech. They’ve also started work on Liberia’s elections, scheduled for 2023. Their experience in Zambia taught the team to start the whole process months earlier than they originally anticipated, including critical trainings for fact-checkers. Given the unique political climate of each country, iVerify cannot simply be duplicated; each deployment entails significant customization. But, according to Mathilde Vougny, the end goal is similar for each new iVerify country: “creating an ecosystem where information is exchanged, and building a network of journalists and other actors who benefit from fact-checked information,” she says. 

In Zambia, “iVerify contributed to encouraging a stronger connection between all the groups in the election who have a role to play,” Vougny says, from the media and civil society to law enforcement, the electoral commission, and more. “We managed to build synergies.”