About CSA-AU Seminar: How to Attack and Defend LLMs: AI Security Explained

Event Overview

Date and Time: 3 October 2025, 12pm-2pm (lunch 12pm)

Location: Room IM16, UQ CBD campus, 308 Queen Street, Brisbane 

Speaker: Holly Wright, Software Architect in the IBM Security Elite team

Holly is the winner of the "Women in AI - Cybersecurity" award in 2025 and the "Best Female Secure Coder" at the Australian Women in Security Awards in 2022. She has 8+ years of experience building cyber-threat detection and machine learning products for the world's largest organisations. In addition, Holly has 5 algorithm patents with the United States Patent Office and has published a paper with IEEE on using machine learning in drones. She is a deeply technical, full-stack developer who leads teams around the world. Holly has a passion for exploring new technologies, and many of her favourite projects have come from getting creative in hackathons. She cares deeply about uplifting those around her, through school and university mentoring and she has shared her knowledge at numerous global conferences.
 

Abstract: Ready to dive into the world of large language models (LLMs)? Whether you're a cybersecurity enthusiast, a data scientist, or a beginner with a curiosity for how LLMs can be hacked and protected, this seminar will give you the insights you need to stay ahead of the game.

Disclaimer: This seminar is for educational purposes only. We do not encourage or support any illegal activity. The techniques discussed are meant to highlight security vulnerabilities and help individuals enhance their own cybersecurity awareness. Always obtain proper authorisation before engaging in any form of testing or assessments.

What You'll Learn: From understanding how hackers exploit language models to building defences, this talk will guide you through the critical concepts of LLM security. Learn how adversarial attacks work and how to safeguard your own models from being manipulated. 

Key Topics Covered:

  • Understanding Language Model Vulnerabilities: Explore how attackers exploit weaknesses in LLMs to manipulate outputs or extract sensitive data.
  • Common Hacking Techniques: Learn about techniques like prompt injection, backdoor attacks, and how they impact model security.
  • Adversarial Attacks & Mitigation: Discover how adversarial examples are used to fool models, and the cutting-edge defences to protect against them.
  • Protecting Your Models: Practical tips on how to secure your LLMs from data leaks, model inversion, and other threats.
  • Securing Large-Scale Models: Best practices for deploying and maintaining large models in a secure environment, including model testing and monitoring.

This seminar is ideal for AI researchers, cybersecurity professionals, and anyone interested in the intersection of artificial intelligence and security. Empower yourself with the knowledge to build robust, secure models and stay one step ahead of potential threats.

 Hosted by Cloud Security Alliance

 

Venue

UQ CBD campus, 308 Queen Street, Brisbane City QLD, Australia
Room: 
IM 16 Teaching Suite