News

Oct 11, 2024 :tada: Our paper “Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations” got accepted at the NeurIPS 2024 New Frontiers in Adversarial Machine Learning Workshop.
Sep 26, 2024 :tada: Our paper “Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models” got accepted at NeurIPS 2024.
Jul 17, 2024 :tada: Our paper “Fair Diffusion: Auditing and Instructing Text-to-Image Generation Models on Fairness” got accepted by the AI and Ethics Journal.
Jul 04, 2024 :tada: Our paper “Defending Our Privacy With Backdoors” got accepted at the European Conference on Artificial Intelligence (ECAI).
Jul 03, 2024 :tada: Our paper “Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models” got accepted at the ICML 2024 Workshop on Foundation Models in the Wild.
May 11, 2024 :trophy: Our paper “Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis” received a Best Paper Award at the DPFM Workshop @ ICLR.
Apr 01, 2024 :tada: Our paper “Does CLIP Know My Face?” got accepted by the Journal of Artificial Intelligence Research (JAIR).
Mar 04, 2024 :tada: Two papers got accepted at ICLR Workshops: a shortened version of our paper “Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis” at the Workshop on Navigating and Addressing Data Problems for Foundation Models, and our paper “Exploring the Adversarial Capabilities of Large Language Models” at the Workshop on Secure and Trustworthy Large Language Models.
Feb 28, 2024 :tada: Our short paper “CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI” got accepted at ECIS 2024.
Feb 01, 2024 :loudspeaker: I joined the German Research Center for Artificial Intelligence (DFKI) as a Research Scientist.
Jan 16, 2024 :tada: Our paper “Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks” got accepted at ICLR 2024.
Dec 18, 2023 :tada: Our paper “Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis” got published in the Journal of Artificial Intelligence Research (JAIR).
Nov 21, 2023 :trophy: I have been recognized as a top reviewer at NeurIPS 2023.
Oct 27, 2023 :tada: Our papers “Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data” and “Defending Our Privacy With Backdoors” got accepted at the NeurIPS 2023 Workshop on Backdoors in Deep Learning.
Oct 23, 2023 :microphone: We gave a talk at AISoLA titled “Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models”.
Sep 23, 2023 :tada: Our paper “SEGA: Instructing Diffusion using Semantic Dimensions” got accepted at NeurIPS 2023.
Jul 17, 2023 :tada: Our paper “Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis” got accepted at ICCV 2023.
Mar 17, 2023 :tada: Our paper “Combining AI and AM – Improving Approximate Matching through Transformer Networks” got accepted at DFRWS USA 2023.
Nov 24, 2022 :microphone: I gave a talk at SECUSO @ Karlsruhe Institute of Technology with the title “A Brief History of Security and Privacy in Deep Learning”.
May 26, 2022 :trophy: My master's thesis “Embedding Convolutional Mixture of Experts into Deep Neural Networks for Computer Vision Tasks” received the Faculty Award of the KIT Department of Economics and Management.
May 26, 2022 :trophy: Our paper “Investigating the Risks of Client-Side Scanning for the Use Case NeuralHash” received the Best Paper Award at the ConPro Workshop @ IEEE S&P.
May 15, 2022 :tada: Our paper “Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks” got accepted at ICML 2022.
Apr 20, 2022 :tada: Our paper “To Trust or Not To Trust Prediction Scores for Membership Inference Attacks” got accepted at IJCAI-ECAI 2022 for a long presentation (Top 3.75%).
Apr 07, 2022 :tada: Our paper “Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash” got accepted at ACM FAccT 2022.