Prompt Detective at SXSW!

Posted by Sven Cattell on 07 March 2023

Prompt Detective Announcement

Join us for an upcoming workshop on the benefits and limitations of large language models (LLMs) like GPT3, Bloom, , and a unique red teaming exercise where participants will try to get LLMs to misbehave!

As LLMs continue to play an increasingly important role in various fields such as natural language processing, artificial intelligence, and digital communications, it is essential to understand their capabilities and limitations. This workshop is designed to help individuals gain a better understanding of LLMs, their potential benefits & limitations, and the ethical considerations surrounding their use.

In addition to learning about the technology behind LLMs, their applications, and the current limitations of these systems, participants will also have the opportunity to engage in a red teaming exercise. This exercise will involve attempting to get LLMs to misbehave by inputting certain phrases or contexts that could trigger unintended responses. The exercise will provide participants with a unique perspective on the limitations of LLMs and the potential risks associated with their use. Participants will learn:

  • How to perform prompt injection to hijack the LLM.
  • What topics the LLMs are often incorrect and unreliable about, known as hallucination.
  • How to do behavioral modification.
  • How to secure your LLM against these attacks.
  • How the underlying technology of tokenization, transformers works to produce this technology.

This workshop is open to all individuals, regardless of their background or expertise. Whether you are a student, a hacker, a policy maker, or simply someone interested in learning more about LLMs, this workshop is an excellent opportunity to enhance your understanding of this powerful technology.

Join us on March 11th at the Philips Building at SXSW to learn more about LLMs, participate in a red teaming exercise, and explore the potential benefits and limitations of these powerful language models.

2024

Back to Top ↑

2023

Threat Modeling LLM Applications

19 minute read

Threat Modeling LLM Applications Before we get started: Hi! My name is GTKlondike, and these are my opinions as a cybersecurity consultant. While experts fr...

Back to Top ↑

2022

AI and Hiring Tech Panel

4 minute read

AI and ML is already being used to identify job candidates, screen resumes, assess worker productivity and even help tag candidates for firing. Can the inter...

Back to Top ↑

2018

Gradient Attacks

11 minute read

Welcome to the second post in the AI Village’s adversarial machine learning series. This one will cover the greedy fast methods that are most commonly used. ...

Dimensionality and Adversarial Examples

11 minute read

Welcome to AI Village’s series on adversarial examples. This will focus on image classification attacks as they are simpler to work with and this series is m...

Back to Top ↑