Security and Privacy for AI

Researching how to secure AI systems against attacks and ensure privacy in AI-driven data processing and decision-making.

Security and Privacy for AI is a research area that addresses the vulnerabilities, threats, and privacy risks associated with artificial intelligence systems. As AI models increasingly drive decisions in sensitive domains, such as finance, healthcare, and security, they become attractive targets for attacks, including model inversion, adversarial inputs, and data poisoning. This topic explores how to defend AI systems against such threats, as well as how to preserve privacy in AI training and inference through methods like differential privacy, federated learning, and encrypted computation. Research also investigates responsible model deployment, data governance, and alignment with ethical and legal standards. Ensuring the security and privacy of AI is essential for building trustworthy and robust intelligent systems.
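As a brief illustration of one of the privacy techniques named above, the sketch below applies the Laplace mechanism for differential privacy to a simple query: a mean is released with noise calibrated to how much any single record can shift the result. The function, parameters, and data are hypothetical and included for illustration only.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, rng=None):
        """Release an epsilon-differentially-private mean via the Laplace mechanism."""
        rng = rng or np.random.default_rng()
        clipped = np.clip(values, lower, upper)       # bound each record's influence
        sensitivity = (upper - lower) / len(clipped)  # max change in the mean if one record changes
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        return clipped.mean() + noise

    # Example: a private average over a small, hypothetical dataset.
    ages = np.array([34, 45, 29, 61, 50, 38, 42, 55])
    print(dp_mean(ages, lower=0, upper=100, epsilon=1.0))

Smaller values of epsilon add more noise and give stronger privacy; in practice such mechanisms are combined with careful accounting of the total privacy budget across queries.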

Contact cyber@uq.edu.au for more information.

Security and Privacy for AI Researchers

  • Dr Naipeng Dong

    Lecturer
    School of Electrical Engineering and Computer Science
    Affiliate of UQ Cyber Research Centre

  • Associate Professor Dan Kim

    Deputy Director & Associate Professor
    School of Electrical Engineering and Computer Science
    Affiliate of UQ Cyber Research Centre

  • Professor John Swinson

    Professor
    TC Beirne School of Law

  • Dr Guowei Yang

    Senior Lecturer
    School of Electrical Engineering and Computer Science
    Affiliate of UQ Cyber Research Centre