BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//papers.synesthesia.it//ai-heroes-2024//VDPVEA
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T040000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-ai-heroes-2024-XTTTVV@papers.synesthesia.it
DTSTART;TZID=CET:20241211T125000
DTEND;TZID=CET:20241211T133000
DESCRIPTION:Dive into the wild world of prompt injection attacks in LLM-pow
 ered web apps! As AI chatbots and assistants become ubiquitous\, a new bre
 ed of security vulnerabilities emerges. In this hands-on adventure\, we'll
  dissect vulnerable chatbots\, craft sneaky exploits\, and explore robust 
 defense strategies.\n\nGet ready to break (and fix) things as we create a 
 simple LLM-powered app\, then systematically exploit its weaknesses. You'l
 l learn to identify common vulnerabilities\, understand the anatomy of pro
 mpt injection attacks\, and implement effective countermeasures. Perfect f
 or devs who want to build safer AI-driven interfaces and stay ahead in the
  AI security game!
DTSTAMP:20241209T105219Z
LOCATION:Sala 150
SUMMARY:Prompt Injection Attacks: Understanding and Mitigating Exploitatio
 n Risks in LLM-Powered Web Apps - Jorrik Klijnsma
URL:https://papers.synesthesia.it/ai-heroes-2024/talk/XTTTVV/
END:VEVENT
END:VCALENDAR
