AI Village at DEF CON announces largest-ever public Generative AI Red Team

Posted by Sven Cattell, Rumman Chowdhury, Austin Carson on 03 May 2023

Largest annual hacker convention to host thousands of hackers hunting for bugs in large language models built by Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability. This event is supported by the White House Office of Science and Technology Policy, the National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.


AI Village (AIV) is hosting the first public generative AI red team event at DEF CON 31 with our partners at Humane Intelligence, SeedAI, and the AI Vulnerability Database. We will be testing models kindly provided by Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability with participation from Microsoft, on an evaluation platform developed by Scale AI.

We love the explosion of creativity that new generative large language models (LLMs) allow. They can help people get their ideas out faster and better than ever before. They can lower barriers to entry in creative fields, and allow for new kinds of creative content. However, we’re only beginning to understand the embedded and emergent risks that come from automating these new technologies at scale. Hallucinations, jailbreaks, bias, and a drastic leap in capabilities are all new issues security professionals and the public have to deal with.

According to Sven Cattell, the founder of AI Village, “Traditionally, companies have solved this problem with specialized red teams. However, this work has largely happened in private. The diverse issues with these models will not be resolved until more people know how to red team and assess them. Bug bounties, live hacking events, and other standard community engagements in security can be modified for machine-learning-model-based systems. These fill two needs with one deed, addressing the harms and growing the community of researchers that know how to help.”

At DEF CON 2023, we are conducting the largest red-teaming exercise ever for any group of AI models. Thousands of people will experience hands-on LLM red-teaming for the first time – and we’re bringing in hundreds of students from overlooked institutions and communities. This is the first time anyone has attempted to have more than a few hundred experts assess these models, so we will be learning together. We’ll publish what we learn from this event to help others who want to try the same thing. The more people who know how to work well with these models, and understand their limitations, the better. This is also an opportunity for new communities to learn skills in AI by exploring its quirks and limitations.

We will be providing laptops and timed access to multiple LLMs from the vendors. We will also be providing a capture the flag (CTF) style point system to promote testing a wide range of harms. Red teamers will be expected to abide by the hacker Hippocratic oath. The individual who earns the highest number of points wins a high-end NVIDIA GPU.
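To make the mechanics concrete, here is a minimal sketch in Python of how a CTF-style point system for harm categories might work. All category names, point values, and identifiers below are our own illustrative assumptions; they describe the general idea, not the actual platform developed by Scale AI or the challenges built by Humane Intelligence.

```python
# Hypothetical sketch of a CTF-style scoreboard for LLM red-teaming.
# Category names and point values are illustrative assumptions only.
from collections import defaultdict

# Each harm category awards points once per red teamer per demonstrated flag.
POINT_VALUES = {
    "hallucination": 10,  # model asserts a fabricated fact
    "jailbreak": 25,      # model bypasses its safety guidelines
    "bias": 15,           # model produces a demonstrably biased output
    "prompt_leak": 20,    # model reveals its hidden system prompt
}

class Scoreboard:
    def __init__(self):
        self.scores = defaultdict(int)
        self.claimed = set()  # (participant, category, flag_id) already scored

    def submit(self, participant: str, category: str, flag_id: str) -> int:
        """Record a validated finding and return the participant's new total."""
        if category not in POINT_VALUES:
            raise ValueError(f"unknown harm category: {category}")
        key = (participant, category, flag_id)
        if key not in self.claimed:  # no double-scoring the same finding
            self.claimed.add(key)
            self.scores[participant] += POINT_VALUES[category]
        return self.scores[participant]

    def leaderboard(self):
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)

board = Scoreboard()
board.submit("alice", "jailbreak", "flag-003")
board.submit("alice", "bias", "flag-007")
board.submit("bob", "hallucination", "flag-001")
print(board.leaderboard())  # [('alice', 40), ('bob', 10)]
```

The deduplication set is the design point worth noting: a breadth-oriented scoring scheme rewards demonstrating many distinct harm types rather than re-submitting one easy finding repeatedly.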

Who are we?

This is a collaborative effort. In addition to thousands of hackers, we are also bringing in partners from community groups and policy-oriented nonprofits as well as supporters in government.

The DEF CON community has extensive experience evaluating a huge range of technologies. AIV hosted the first public bias bounty at DEF CON 29, an effort that has since grown into the Bias Buccaneers and Humane Intelligence, and we’re working with that team again on this event.

Our nonprofit community partners include Houston Community College, which participated in an educational pilot of this exercise; Black Tech Street from Tulsa, OK; the Internet Education Foundation’s Congressional App Challenge; and the AI Vulnerability Database. In addition to Humane Intelligence and SeedAI, the Wilson Center Science and Technology Innovation Program (STIP) is joining as a policy partner.

This challenge is supported by the White House Office of Science and Technology Policy (OSTP) and is aligned with the goals of the Biden-Harris Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. The National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate will also participate, and the Congressional AI Caucus is collaborating on this initiative as part of their AI Primer series. This exercise will be adapted into educational programming for the Congressional AI Caucus and other officials, as well as for the national networks of our community partners.

Participants will use an evaluation platform developed and provided by Scale AI. Our CTF challenge is built by Humane Intelligence. Want to help? We are seeking a laptop sponsor, as well as general sponsorship and travel support for our community partners.

Interested in sponsorship? [email protected]

Press: [email protected]
