Lukas Struppek


I am a research scientist at the German Research Center for Artificial Intelligence (DFKI) and a Ph.D. student at the Artificial Intelligence and Machine Learning Lab at TU Darmstadt. My research interests lie in the privacy and security of artificial intelligence (AI) and deep learning systems.

As AI becomes more widespread and is deployed in critical areas such as autonomous driving, medical applications, and the financial sector, the security of models and the privacy of their training data play a crucial role. In my work, I study various attacks on machine learning models to understand and mitigate the resulting threats to safety and privacy.

news

Apr 1, 2024 :tada: Our paper “Does CLIP Know My Face?” got accepted by the Journal of Artificial Intelligence Research (JAIR).
Mar 4, 2024 :tada: Two papers got accepted at ICLR workshops: a shortened version of our paper “Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis” at the Workshop on Navigating and Addressing Data Problems for Foundation Models, and our paper “Exploring the Adversarial Capabilities of Large Language Models” at the Workshop on Secure and Trustworthy Large Language Models.
Feb 28, 2024 :tada: Our short paper “CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI” got accepted at ECIS 2024.
Feb 1, 2024 :loudspeaker: I joined the German Research Center for Artificial Intelligence (DFKI) as a Research Scientist.
Jan 16, 2024 :tada: Our paper “Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks” got accepted at ICLR 2024.

selected publications

  1. Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Learning Representations (ICLR) 2024
  2. Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Computer Vision (ICCV) 2023
  3. Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, and Kristian Kersting
    Journal of Artificial Intelligence Research (JAIR) 2023
  4. Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, and Kristian Kersting
    In International Conference on Machine Learning (ICML) 2022
  5. To Trust or Not To Trust Prediction Scores for Membership Inference Attacks
Dominik Hintersdorf*, Lukas Struppek*, and Kristian Kersting
    In International Joint Conference on Artificial Intelligence (IJCAI) 2022
  6. Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
    Lukas Struppek*, Dominik Hintersdorf*, Daniel Neider, and Kristian Kersting
    In ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2022