
Machine Learning and the Optimization of Virtual Personae for Phishing Scams

Every decade or so, a new technology entrenches itself in our everyday lives, often with the public barely noticing. If the previous decade belonged to "the cloud", this decade could certainly go to AI and machine learning. Seemingly every week, a new state-of-the-art model is released that enables life-like synthetic content. These systems are ripe for abuse: attackers now have powerful new tools at their disposal, whatever their preferred social engineering vector. In this talk we will explore what the arbitrary creation of synthetic content means for systems of trust. From logging into your computer (Windows Hello for Business) to getting help from customer service, machine learning models are already being used to make decisions that carry implications for trust. We will discuss some of the risks to consider when implementing or using these systems, what detections might look like, and why we may be better prepared to defend than it seems.

About the speaker

Will Pearce

AI/ML Security Researcher at Nvidia
Will Pearce is a Security Researcher on the AI Red Team at Nvidia. He focuses on attacking machine learning systems and developing ML-enabled red team capabilities. Previously, he was the Red Team Lead for the Azure Trustworthy ML team at Microsoft, and a Senior Security Consultant at Silent Break Security. His work on offensive machine learning has appeared at industry conferences including Black Hat, DEF CON, WWHF, DerbyCon, and LabsCon, as well as an academic appearance at the SAI Conference on Computing.
Copyright © 2025 Swiss Cyber Storm