Lukas Struppek

I am a senior research scientist at the German Research Center for Artificial Intelligence (DFKI). I recently received my PhD (Dr. rer. nat.) from the Artificial Intelligence and Machine Learning Lab at the Technical University of Darmstadt, where I continue to collaborate.

I am passionate about building trustworthy AI systems. My research focuses on generative and multimodal AI systems and addresses challenges in security, privacy, and safety, from uncovering vulnerabilities to improving robustness. My goal is to advance AI technologies in a safe and reliable way.

Previously, I received an M.Sc. and a B.Sc. in Industrial Engineering and Management from Karlsruhe Institute of Technology (KIT), where I also worked as a research assistant in the Applied Technical-Cognitive Systems group.

I am always happy to collaborate and discuss novel research ideas. Feel free to get in touch!

News

Oct 20, 2025 :trophy: I have been recognized as a top reviewer at NeurIPS 2025.
Sep 01, 2025 :trophy: My PhD thesis received the IANUS Award for outstanding research promoting responsibility for peace and security in the natural and technical sciences.
Jul 19, 2025 :sparkles: Our ICML 2025 Workshop on “The Impact of Memorization on Trustworthy Foundation Models” took place. We had a great time with amazing speakers and papers.
Jun 13, 2025 :man_student: I’m excited to share that my PhD thesis has officially been published: “Understanding and Mitigating Security, Privacy, and Ethical Risks in Generative Artificial Intelligence”.
Apr 22, 2025 :loudspeaker: The call for papers for our ICML 2025 Workshop on “The Impact of Memorization on Trustworthy Foundation Models” is now open.

Selected Publications

  1. Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
    Dominik Hintersdorf*, Lukas Struppek*, Kristian Kersting, Adam Dziedzic, and Franziska Boenisch
    In Conference on Neural Information Processing Systems (NeurIPS), 2024
  2. Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Learning Representations (ICLR), 2024
  3. Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, and Kristian Kersting
    Journal of Artificial Intelligence Research (JAIR), 2023
  4. Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Computer Vision (ICCV), 2023
  5. Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, and Kristian Kersting
    In International Conference on Machine Learning (ICML), 2022
  6. Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
    Lukas Struppek*, Dominik Hintersdorf*, Daniel Neider, and Kristian Kersting
    In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
  7. To Trust or Not To Trust Prediction Scores for Membership Inference Attacks
    Dominik Hintersdorf*, Lukas Struppek*, and Kristian Kersting
    In International Joint Conference on Artificial Intelligence (IJCAI), 2022