r00tzBook: A misinformation CTF for kids

by Win Suen, @RandomForestCat

The new frontier

Misinformation is a rapidly emerging threat to public discourse on key issues. Increasingly, the next public forum is located in our digital neighborhoods and communities. Two-thirds of American adults get at least some of their news on social media, with many citing convenience as the reason. Yet on the Internet, not everyone is who they claim to be. The term “fake news” has become politically charged, sometimes thrown around to describe any political ideology someone happens to disagree with. Most researchers, however, focus on the more concrete problem of identifying, tracking, and responding to actors who purposefully spread verifiably false information for personal gain or to cause public harm.

Politically motivated misinformation dominates the headlines, but documented campaigns have targeted other areas as well. Here are some examples:

  • Russian state-sponsored disinformation actors target susceptible populations (e.g., Russian speakers who feel disenfranchised in modern Ukraine) to sow social division and distrust, amplifying messages on social media through bot activity and by gaming the recommender systems of popular platforms. Techniques honed in Ukraine have since been deployed around the world, including in the 2016 presidential election in the United States.

  • Natural disasters present opportunities to inject misinformation into fast-developing situations where complete information is not yet available. For instance, during Hurricane Harvey, rumors erupted that individuals seeking shelter were being asked to provide their immigration status. Rumors like this endanger people who might otherwise seek aid or shelter in a dangerous situation.

  • Scientific studies have repeatedly shown that there is no link between receiving a vaccine and developing autism. Unfortunately, an organization distributed pamphlets warning parents that vaccines endanger their children. One New York county where this occurred is currently fighting a measles outbreak, with hundreds of new cases since last year.

Governments are slowly mobilizing to catch up to the ease, speed, and scale with which misinformation campaigns spread online. In February 2019, a U.K. House of Commons committee urged legislators to establish a process for holding tech companies legally liable for misuse of their platforms, including misinformation, election interference, and the dissemination of harmful or illegal content. Canada has also moved aggressively to legislate against fake news in an attempt to curb election interference. Many other countries have begun acting on the threat misinformation poses to fair elections and public safety.

[Image: anti-misinformation actions around the world. Source: Daniel Funke, Poynter.]

Tech companies are scrambling to fix the online misinformation problem, with little consistent success. Almost immediately after the House of Commons report was published, Google released a white paper on how it mitigates misinformation. Across its products, Google relies on a combination of human moderators, technology, and machine learning to play cat-and-mouse with misinformation. Twitter, Reddit, and WhatsApp are all grappling with similar problems. Facebook’s reliance on human moderators for everything from violence to hate speech to fake news recently came under scrutiny. In response to frightening outbreaks of measles across the country, the platform began banning anti-vax misinformation. All the while, these companies must tread carefully to avoid accusations of partisan censorship or restriction of free speech.

Due to the sheer scale of online misinformation activity, which is often amplified and echoed by bots, researchers are leveraging machine learning to address the problem at scale. Yet such approaches have limits and must adapt to a constantly changing threat landscape. The absence of a silver bullet highlights how dynamic, challenging, and polarizing the debate on misinformation has become.
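
One of the simpler techniques in this toolbox is anomaly detection on account behavior. The sketch below is illustrative only, not drawn from any platform's actual pipeline: it flags accounts whose posting rate is a robust statistical outlier, a crude proxy for bot-driven amplification. The accounts and rates are invented.

    import statistics

    # Posts per hour for a handful of invented accounts. Real detectors use
    # many more signals (timing entropy, content similarity, follower graphs),
    # but posting rate alone already separates the obvious amplifier.
    posts_per_hour = {
        "alice": 1.2,
        "bob": 0.8,
        "carol": 2.1,
        "dave": 1.5,
        "amplifier_bot": 96.0,  # roughly one post every 37 seconds, nonstop
    }

    rates = list(posts_per_hour.values())
    center = statistics.median(rates)
    # Median absolute deviation is robust to the very outliers we want to catch.
    mad = statistics.median(abs(r - center) for r in rates)

    for account, rate in posts_per_hour.items():
        # Modified z-score; 0.6745 rescales MAD to a standard-deviation scale
        # under a normal distribution. 3.5 is a common outlier threshold.
        score = 0.6745 * (rate - center) / mad
        if score > 3.5:
            print(f"{account}: {rate} posts/hr (score {score:.0f}) -> flag for review")

The catch, of course, is that once a threshold like this is known, bots can simply post just under it, which is why detection in practice is an ongoing arms race rather than a one-time fix.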

[Figure: number of academic papers on misinformation, 2000–2018.]

What we’re doing

r00tz is an incredible opportunity to educate, train, and engage the youngest participants in our civil society. To that end, AI Village is developing a hands-on red team vs. blue team misinformation CTF on a closed-course, simulated social network. Participants will see the social network change in real time in response to their actions.

Participants will:

  • Create bots to disseminate a message on our social network, and learn how bots can be used responsibly on real social media platforms.

  • Learn about spam filters that use a variety of machine learning techniques, including anomaly detection and natural language processing (a minimal sketch of such a filter appears after this list).

  • Generate bot behaviors that attempt to evade the spam filters, experiencing the adversarial nature of the CTF firsthand.

  • Learn about ethics and best practices from volunteer instructors.
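
As a taste of the blue-team side, here is a minimal sketch of an NLP-based spam filter of the sort the CTF might deploy, built from TF-IDF features and logistic regression in scikit-learn. The training messages, labels, and test posts are invented for illustration; the actual CTF filters are not described here, and a real filter would train on far more examples.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: 1 = spam/bot message, 0 = ordinary post.
    messages = [
        "WIN FREE POINTS!!! click here click here",
        "amazing deal follow this link now now now",
        "URGENT share this to 10 friends immediately",
        "had a great time at the robotics workshop today",
        "does anyone have notes from the crypto talk?",
        "lunch meetup at noon by the main stage",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # TF-IDF turns each message into word-frequency features; logistic
    # regression then learns a weight per word and outputs a spam probability.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, labels)

    for post in ["click here for free points now", "notes from today's workshop"]:
        spam_prob = model.predict_proba([post])[0][1]
        print(f"{post!r}: spam probability {spam_prob:.2f}")

Red-team participants then get to probe a filter like this, e.g., by rewording flagged phrases until a message slips through, which is exactly the adversarial loop the CTF is designed to teach.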

We look forward to seeing you at r00tz to train the next generation of cybersecurity experts and informed Internet users.

References

https://www.theatlantic.com/international/archive/2019/04/russia-disinformation-ukraine-election/587179/

https://www.cbsnews.com/news/measles-outbreak-tracking-down-the-people-behind-anti-vaccine-pamphlet/

https://www.cdc.gov/vaccinesafety/concerns/autism.html

https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf

https://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/

https://www.poynter.org/ifcn/anti-misinformation-actions/

https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf

https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/How_Google_Fights_Disinformation.pdf

https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

https://www.wired.com/story/facebook-anti-vaccine-crack-down/



AI Unwind

GPT-2 and Threatmodels