AI Village Announcing Generative Red Team 2 at DEF CON 32
Time: 9:00-10:30
Speaker: Gavin Klondike
Today, over a quarter of security products for detection have some form of machine learning built in. However, "machine learning" is nothing more than a mysterious buzzword for many security analysts. In order to properly deploy and manage these products, analysts will need to understand how the machine learning components operate to ensure they are working effectively. In this talk, we will dive headfirst into building and training our own security-related models using the 7-step machine learning process. No environment setup is necessary, but Python experience is strongly encouraged.
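As a taste of the hands-on portion, here is a minimal sketch of the kind of security-related model the talk describes, walked through the steps of the process. The URL features, toy data, and choice of a random forest are illustrative assumptions for this sketch, not material from the talk itself:

```python
# A minimal sketch of training a security-related classifier in Python.
# The features, data, and model choice are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Steps 1-2: gather and prepare data. Here, toy features extracted from URLs:
# [length, digit count, number of subdomains, uses HTTPS (0/1)]
X = [
    [21, 0, 1, 1],   # benign
    [63, 12, 4, 0],  # malicious
    [18, 1, 1, 1],   # benign
    [71, 9, 5, 0],   # malicious
]
y = [0, 1, 0, 1]  # 0 = benign, 1 = malicious

# Steps 3-4: split the data and choose a model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42
)
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Step 5: train.
model.fit(X_train, y_train)

# Step 6: evaluate on held-out data.
print(classification_report(y_test, model.predict(X_test)))

# Step 7: deploy and monitor (out of scope for this sketch).
```

In practice the data collection and feature engineering steps dominate the effort; the training call itself is the easy part.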
Gavin Klondike is a senior consultant and researcher with a passion for network security, both attack and defense. Through that passion, he runs NetSec Explained, a blog and YouTube channel that covers intermediate and advanced network security topics in an easy-to-understand way. His work has given him the opportunity to be published in industry magazines and to speak at conferences such as DEF CON, DEF CON China, and CactusCon. Currently, he is researching ways to address the cybersecurity skills gap by using machine learning to augment the capabilities of current security analysts.
Time: 10:30-11:30
Speaker: Yuvaraj Govindarajulu
As of this year, there are over 2.5 billion edge-enabled IoT devices, and close to 1.5 million new AI edge devices are projected to ship. These devices run smaller, compressed versions of AI models. While in recent years we have improved the performance of these AI models and reduced their memory footprint, little has been said about the security threats facing the models on these tiny devices.
The first step toward protecting these AI models from attacks such as model theft, evasion, and data poisoning is to study the efficacy of those attacks on tiny intelligent systems. While some attacks at the lower hardware and software layers can be mitigated through classical embedded security, those measures alone will not suffice to protect this tiny intelligence. Many of these tiny devices (microcontrollers) do not come with built-in security features because of their price and power requirements, so an understanding of how the core AI algorithm can be attacked and protected becomes necessary. In this talk, we discuss the possible threats to these devices and provide directions on how additional AI security measures can safeguard tiny intelligence.
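To make the evasion threat concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic evasion attack, run against a toy stand-in for a compressed on-device model. The model, the placeholder input, and the perturbation budget are all illustrative assumptions, not from the talk:

```python
# A minimal sketch of an evasion attack (FGSM) against a small model,
# of the kind a TinyML deployment might face. Model and data are placeholders.
import torch
import torch.nn as nn

# A toy "tiny" model standing in for a compressed on-device network.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

loss_fn = nn.CrossEntropyLoss()
x = torch.randn(1, 16, requires_grad=True)  # placeholder sensor reading
y = torch.tensor([0])                       # its true label

# FGSM: one gradient step in the input space, in the direction that
# increases the loss, clipped to a sign and scaled by a small budget.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1  # perturbation budget
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The same attack carries over to quantized on-device models by crafting the perturbation against a full-precision surrogate, which is part of why embedded security alone does not close the gap.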
Time: 11:30-12:30
Speaker: Taylor Kulp-McDowall
As the current machine learning paradigm shifts toward the use of large pretrained models fine-tuned to a specific use case, it becomes increasingly important to trust the pretrained models downloaded from central model repositories (or other areas of the internet). As has been well documented in the machine learning literature, numerous attacks currently exist that allow an adversary to poison or "trojan" a machine learning model, causing the model to behave correctly except when presented with a specific, adversary-chosen input or "trigger". This talk will introduce the threats posed by these AI trojan attacks, discuss the current types of attacks that exist, and then focus on the state-of-the-art techniques used to both defend against and detect these attacks.
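To illustrate the mechanics, here is a minimal sketch of the data-poisoning side of a trigger-based trojan attack. The dataset, the white patch used as a trigger, the target class, and the poison rate are all illustrative assumptions:

```python
# A minimal sketch of trojaning via data poisoning: stamp a small trigger
# patch on a fraction of training images and relabel them to an
# attacker-chosen target class. Dataset and trigger are illustrative.
import numpy as np

def poison(images, labels, target_class=7, rate=0.05, rng=None):
    """Return a poisoned copy of (images, labels).

    images: float array of shape (N, H, W), values in [0, 1].
    A white 3x3 patch in the bottom-right corner serves as the trigger.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0  # stamp the trigger
    labels[idx] = target_class   # relabel to the attacker's target
    return images, labels

# A model trained on this data behaves normally on clean inputs but
# predicts `target_class` whenever the trigger patch is present.
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison(X, y)
```

Because only a small fraction of the training set is modified, clean-data accuracy is essentially unchanged, which is what makes these attacks hard to notice downstream.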
As part of an emphasis on trojan detection, the talk will also cover key aspects of the TrojAI Competition (https://pages.nist.gov/trojai/)—an open leaderboard run by NIST and IARPA to spur the development of better trojan detection techniques. This leaderboard provides anyone with the opportunity to run and evaluate their own trojan detectors across large datasets of clean/poisoned AI models already developed by the TrojAI team. These datasets consist of numerous different AI architectures trained across tasks ranging from image classification to extractive question answering. They are open-source and ready for the community to use.
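As a rough picture of what evaluating a detector over such a dataset of clean and poisoned models could look like, here is a hedged sketch scored with ROC AUC. The directory layout, the ground-truth file format, and the `my_detector` function are hypothetical placeholders, not the actual TrojAI interface; consult the TrojAI documentation for the real submission format:

```python
# A minimal sketch of scoring a trojan detector over a dataset of
# clean/poisoned models. Layout, ground-truth format, and `my_detector`
# are hypothetical assumptions, not the real TrojAI interface.
import csv
from pathlib import Path
import torch
from sklearn.metrics import roc_auc_score

def my_detector(model: torch.nn.Module) -> float:
    """Placeholder detector: returns P(model is trojaned) in [0, 1]."""
    # A real detector might reconstruct candidate triggers or analyze weights.
    return 0.5

labels, scores = [], []
for model_dir in sorted(Path("models").iterdir()):  # hypothetical layout
    model = torch.load(model_dir / "model.pt")
    with open(model_dir / "ground_truth.csv") as f:
        label = int(next(csv.reader(f))[0])  # 1 = poisoned, 0 = clean
    labels.append(label)
    scores.append(my_detector(model))

print("detection ROC AUC:", roc_auc_score(labels, scores))
```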
Time: 12:30-13:30
Speaker: Will Pearce and others!
We'll go over the challenges and the solutions people came up with.