Lukas Struppek

I am a research scientist at the German Research Center for Artificial Intelligence (DFKI) and a Ph.D. student at the Artificial Intelligence and Machine Learning Lab at the Technical University of Darmstadt.

I am passionate about building AI systems we can trust. My research focuses on generative and multimodal AI, addressing challenges in security, privacy, and safety. From uncovering vulnerabilities to improving robustness, I aim to advance these technologies in a safe and reliable way.

Previously, I received my M.Sc. and B.Sc. in Industrial Engineering and Management from the Karlsruhe Institute of Technology (KIT). I also worked as a research assistant in the Applied Technical-Cognitive Systems group at KIT.

I am always happy to collaborate and discuss novel research ideas. Feel free to get in touch!

:fire: Open to research job opportunities! :fire:

News

Jan 30, 2025 :blue_book: Successfully submitted my Ph.D. thesis “Understanding and Mitigating Security, Privacy, and Ethical Risks in Generative Artificial Intelligence”.
Oct 11, 2024 :tada: Our paper “Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations” got accepted at the NeurIPS 2024 New Frontiers in Adversarial Machine Learning Workshop.
Sep 26, 2024 :tada: Our paper “Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models” got accepted at NeurIPS 2024.
Jul 17, 2024 :tada: Our paper “Fair Diffusion: Auditing and Instructing Text-to-Image Generation Models on Fairness” got accepted by the AI and Ethics journal.
Jul 04, 2024 :tada: Our paper “Defending Our Privacy With Backdoors” got accepted at the European Conference on Artificial Intelligence (ECAI).

Selected Publications

  1. Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
    Dominik Hintersdorf*, Lukas Struppek*, Kristian Kersting, Adam Dziedzic, and Franziska Boenisch
    In Conference on Neural Information Processing Systems (NeurIPS), 2024
  2. Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Learning Representations (ICLR), 2024
  3. Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, and Kristian Kersting
    Journal of Artificial Intelligence Research (JAIR), 2023
  4. Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
    Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting
    In International Conference on Computer Vision (ICCV), 2023
  5. Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
    Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, and Kristian Kersting
    In International Conference on Machine Learning (ICML), 2022
  6. Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
    Lukas Struppek*, Dominik Hintersdorf*, Daniel Neider, and Kristian Kersting
    In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
  7. To Trust or Not To Trust Prediction Scores for Membership Inference Attacks
    Dominik Hintersdorf*, Lukas Struppek*, and Kristian Kersting
    In International Joint Conference on Artificial Intelligence (IJCAI), 2022