Towards More Practical Threat Models in Artificial Intelligence Security
AI security has been researched for almost two decades. Yet the field's most frequently studied threat models have never been tested against real-world AI usage. In this talk, we discuss a survey of 271 real-world AI practitioners, whose descriptions of AI usage we match against existing threat models. While we find that all of these threat models do occur in practice, there are also significant mismatches where research grants the attacker overly generous capabilities.
About the speaker
Kathrin Grosse
Research Scientist, IBM Research Zurich
Kathrin Grosse is a research scientist at IBM Research Zurich.
Her research interests are the intersection of machine learning and security, and recently focused on machine learning security in practice.
Grosse received a Ph.D. in computer science from CISPA Helmholtz Center for Information Security.
During her Ph.D., she interned at Disney Research Zurich in 2020/21 and at IBM Yorktown in 2019, where her work resulted in a US patent.
Furthermore, she was nominated as an AI Newcomer as part of the German Federal Ministry of Education and Research's Science Year 2019.
After her Ph.D., she joined Battista Biggio's lab at the University of Cagliari as a postdoc, and afterwards worked at EPFL with Alexandre Alahi on the security of autonomous vehicles.
She serves as a reviewer for many international journals and conferences.
Read more …