Lukas Struppek

I am a research scientist at the German Research Center for Artificial Intelligence (DFKI) and a Ph.D. student at the Artificial Intelligence and Machine Learning Lab at the Technical University of Darmstadt.

My research focuses on two primary directions in trustworthy and adversarial machine learning. First, I explore potential adversarial threats to machine learning models, particularly in the context of generative AI. Second, I examine the security and trustworthiness of generative AI systems themselves. In both areas, my work often investigates adversarial settings in which a generative model is either part of an attack or the target of one. My ultimate goal is to make machine learning models and AI systems reliable enough for real-world deployment, unlocking their potential to improve our lives.

Previously, I received an M.Sc. and a B.Sc. in Industrial Engineering and Management from the Karlsruhe Institute of Technology (KIT). I was also a research assistant in the Applied Technical-Cognitive Systems group at KIT.

News

Oct 11, 2024 :tada: Our paper “Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations” got accepted at the NeurIPS 2024 New Frontiers in Adversarial Machine Learning Workshop.
Sep 26, 2024 :tada: Our paper “Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models” got accepted at NeurIPS 2024.
Jul 17, 2024 :tada: Our paper “Fair Diffusion: Auditing and Instructing Text-to-Image Generation Models on Fairness” got accepted by the AI and Ethics journal.
Jul 04, 2024 :tada: Our paper “Defending Our Privacy With Backdoors” got accepted at the European Conference on Artificial Intelligence (ECAI).
Jul 03, 2024 :tada: Our paper “Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models” got accepted at the ICML 2024 Workshop on Foundation Models in the Wild.

Selected Publications

  1. Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
    Dominik Hintersdorf*, Lukas Struppek*, Kristian Kersting, Adam Dziedzic, and Franziska Boenisch
    In Conference on Neural Information Processing Systems (NeurIPS), 2024
  2. Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Learning Representations (ICLR), 2024
  3. Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, and Kristian Kersting
    Journal of Artificial Intelligence Research (JAIR), 2023
  4. Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Computer Vision (ICCV), 2023
  5. Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, and Kristian Kersting
    In International Conference on Machine Learning (ICML), 2022
  6. Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
    Lukas Struppek*, Dominik Hintersdorf*, Daniel Neider, and Kristian Kersting
    In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
  7. To Trust or Not To Trust Prediction Scores for Membership Inference Attacks
    Dominik Hintersdorf*, Lukas Struppek*, and Kristian Kersting
    In International Joint Conference on Artificial Intelligence (IJCAI), 2022

* denotes equal contribution.