AI Unwind

This year we’re trying something new: we’re offering unstructured time for attendees to unwind and chat with experts about the threats posed by AI technologies such as natural language generation, deepfakes, and facial recognition. We’re calling this “AI Unwind,” and it includes demos of how to detect and resist these technologies.

After the talks, come unwind with us from 5 - 7 PM on Friday and Saturday. Here are some of the demos and experts we have for you to chat with:


Neural fake news:

Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology might also enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news.

Rowan Zellers from the University of Washington and the Allen Institute for AI will be around to discuss the risks of controllable generation by demoing a model named Grover. Given a headline like 'Link Found Between Vaccines and Autism,' Grover can generate the rest of the article, and human readers often find these generated articles trustworthy.

Rowan will discuss how to respond to these threats. Counter-intuitively, the best defense against Grover turns out to be Grover itself, which achieves over 92% accuracy at telling apart human-written from machine-written news articles.
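
The intuition behind using the generator as its own detector is that a model that knows which tokens it would have picked can notice when a text looks "too likely" under its own distribution. Here is a deliberately tiny caricature of that idea; the unigram model, scores, and threshold are toy stand-ins, not Grover's actual architecture or code:

```python
from collections import Counter
import math

def train_unigram(corpus_tokens):
    """Fit a toy unigram 'generator': returns a token -> smoothed-probability function."""
    counts = Counter(corpus_tokens)
    total, vocab = sum(counts.values()), len(counts)
    return lambda tok: (counts.get(tok, 0) + 1) / (total + vocab)

def avg_log_likelihood(tokens, model):
    """Mean per-token log-probability under the model. Text sampled from the
    model tends to score higher than text written by someone else."""
    return sum(math.log(model(t)) for t in tokens) / len(tokens)

def classify(tokens, model, threshold):
    """Flag text as machine-written when it is 'too likely' under the generator."""
    return "machine" if avg_log_likelihood(tokens, model) > threshold else "human"
```

The real system replaces the unigram counts with a large neural language model and learns the decision boundary instead of hand-picking a threshold, but the shape of the argument is the same.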

Stop by to try out the demo for yourself and learn how the technology might evolve. If you can’t wait, you can play with Grover online today at


Deepfakes:

Deepfakes, the application of GANs and other deep learning techniques to produce fake content, are another threat to truth, one that has improved dramatically both in believability and in the amount of data required.

The threat of deepfakes was the subject of a recent hearing by the House Intelligence Committee and continues to draw a stream of publicity, with prominent figures such as President Obama, Donald Trump, PewDiePie, and Nicolas Cage being the subjects of deepfakes. While advancing by leaps and bounds, the underlying technology is still brittle: believable videos take a painstakingly long time to produce and require judicious selection of source and target.

Siwei Lyu, Yuezun Li, and Barton Rhodes will give you a chance to try interactive deepfake detection, illustrating the possibilities and limitations of commonly available tools.
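
One published heuristic from Lyu and Li's earlier work exposes synthesized faces by their unnaturally low eye-blink rate: people blink many times a minute, while early face-swap models, trained mostly on open-eyed photos, rarely reproduce blinking. A minimal sketch of that idea, assuming some upstream eye tracker already yields per-frame eye-openness scores; the frame rate, thresholds, and baseline blink rate below are illustrative, not values from their paper:

```python
def count_blinks(openness, closed_thresh=0.2):
    """Count blinks in a sequence of per-frame eye-openness scores
    (0.0 = fully closed, 1.0 = fully open). A blink is a transition
    from open to closed and back to open."""
    blinks, closed = 0, False
    for score in openness:
        if not closed and score < closed_thresh:
            closed = True
            blinks += 1
        elif closed and score >= closed_thresh:
            closed = False
    return blinks

def looks_synthetic(openness, fps=30, min_blinks_per_min=4):
    """Flag a clip whose blink rate falls below a plausible human baseline."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_min
```

This is exactly the kind of brittle-but-instructive signal the demo explores: it works until generators are trained on footage that includes blinking, at which point detectors have to move on to the next artifact.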

Facial recognition:

Facial recognition is another tool in the surveillance state’s arsenal. While more jurisdictions in the US are choosing to ban the technology, many still allow it for commercial and security uses. Rich Harang and Ethan Rudd will demonstrate a system that detects salient attributes of a person, and will demo ways of avoiding detection or causing the recognition system to misidentify an individual.

r00tzBook: A misinformation CTF for kids