Search Results - "Boenisch, Franziska"

  1.

    A Systematic Review on Model Watermarking for Neural Networks by Boenisch, Franziska

    Published in Frontiers in Big Data (29-11-2021)
    “…Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages…”
    Journal Article
  2.

    Tracking All Members of a Honey Bee Colony Over Their Lifetime Using Learned Models of Correspondence by Boenisch, Franziska, Rosemann, Benjamin, Wild, Benjamin, Dormagen, David, Wario, Fernando, Landgraf, Tim

    Published in Frontiers in Robotics and AI (04-04-2018)
    “…Computational approaches to the analysis of collective behavior in social insects increasingly rely on motion paths as an intermediate data layer from which…”
    Journal Article
  4.

    Secure and Private Machine Learning by Boenisch, Franziska

    Published 01-01-2022
    “…In recent years, the advances of Machine Learning (ML) have led to its increased application within critical applications and on highly sensitive data. This…”
    Dissertation
  5.

    Controlled Privacy Leakage Propagation Throughout Overlapping Grouped Learning by Kiani, Shahrzad, Boenisch, Franziska, Draper, Stark C.

    “…Federated Learning (FL) is the standard protocol for collaborative learning. In FL, multiple workers jointly train a shared model. They exchange model updates…”
    Journal Article
  6.

    A Systematic Review on Model Watermarking for Neural Networks by Boenisch, Franziska

    Published 08-12-2021
    “…Frontiers in Big Data 4 (2021) Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and…”
    Journal Article
  7.

    When the Curious Abandon Honesty: Federated Learning Is Not Private by Boenisch, Franziska, Dziedzic, Adam, Schuster, Roei, Shamsabadi, Ali Shahin, Shumailov, Ilia, Papernot, Nicolas

    “…In federated learning (FL), data does not leave personal devices when they are jointly training a machine learning model. Instead, these devices share…”
    Conference Proceeding
  8.

    Controlled privacy leakage propagation throughout differential private overlapping grouped learning by Kiani, Shahrzad, Boenisch, Franziska, Draper, Stark C.

    “…Federated Learning (FL) is a privacy-centric framework for distributed learning where devices collaborate to develop a shared global model while keeping their…”
    Conference Proceeding
  9.

    On the Privacy Risk of In-context Learning by Duan, Haonan, Dziedzic, Adam, Yaghini, Mohammad, Papernot, Nicolas, Boenisch, Franziska

    Published 15-11-2024
    “…Large language models (LLMs) are excellent few-shot learners. They can perform a wide variety of tasks purely based on natural language prompts provided to…”
    Journal Article
  10.

    Localizing Memorization in SSL Vision Encoders by Wang, Wenhao, Dziedzic, Adam, Backes, Michael, Boenisch, Franziska

    Published 27-09-2024
    “…Recent work on studying memorization in self-supervised learning (SSL) suggests that even though SSL encoders are trained on millions of images, they still…”
    Journal Article
  11.

    Beyond the Mean: Differentially Private Prototypes for Private Transfer Learning by Wahdany, Dariush, Jagielski, Matthew, Dziedzic, Adam, Boenisch, Franziska

    Published 12-06-2024
    “…Machine learning (ML) models have been shown to leak private information from their training datasets. Differential Privacy (DP), typically implemented through…”
    Journal Article
  12.

    Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models by Hintersdorf, Dominik, Struppek, Lukas, Kersting, Kristian, Dziedzic, Adam, Boenisch, Franziska

    Published 04-06-2024
    “…Diffusion models (DMs) produce very detailed and high-quality images. Their power results from extensive training on large amounts of data, usually scraped…”
    Journal Article
  13.

    Regulation Games for Trustworthy Machine Learning by Yaghini, Mohammad, Liu, Patty, Boenisch, Franziska, Papernot, Nicolas

    Published 05-02-2024
    “…Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy. Additionally, many…”
    Journal Article
  14.

    Personalized Differential Privacy for Ridge Regression by Acharya, Krishna, Boenisch, Franziska, Naidu, Rakshit, Ziani, Juba

    Published 30-01-2024
    “…The increased application of machine learning (ML) in sensitive domains requires protecting the training data through privacy frameworks, such as differential…”
    Journal Article
  15.

    Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders by Dubiński, Jan, Pawlak, Stanisław, Boenisch, Franziska, Trzciński, Tomasz, Dziedzic, Adam

    Published 12-10-2023
    “…Machine Learning as a Service (MLaaS) APIs provide ready-to-use and high-utility encoders that generate vector representations for given inputs. Since these…”
    Journal Article
  16.

    Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models by Duan, Haonan, Dziedzic, Adam, Papernot, Nicolas, Boenisch, Franziska

    Published 24-05-2023
    “…Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first…”
    Journal Article
  17.

    Have it your way: Individualized Privacy Assignment for DP-SGD by Boenisch, Franziska, Mühl, Christopher, Dziedzic, Adam, Rinberg, Roy, Papernot, Nicolas

    Published 29-03-2023
    “…When training a machine learning model with differential privacy, one sets a privacy budget. This budget represents a maximal privacy violation that any user…”
    Journal Article
  18.

    Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility by Yaghini, Mohammad, Liu, Patty, Boenisch, Franziska, Papernot, Nicolas

    Published 17-02-2023
    “…Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Both of these objectives present unique trade-offs with the utility…”
    Journal Article
  19.

    Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives by Hanke, Vincent, Blanchard, Tom, Boenisch, Franziska, Olatunji, Iyiola Emmanuel, Backes, Michael, Dziedzic, Adam

    Published 02-11-2024
    “…While open Large Language Models (LLMs) have made significant progress, they still fall short of matching the performance of their closed, proprietary…”
    Journal Article
  20.

    Introducing Model Inversion Attacks on Automatic Speaker Recognition by Pizzi, Karla, Boenisch, Franziska, Sahin, Ugur, Böttinger, Konstantin

    Published 09-01-2023
    “…Proc. 2nd Symposium on Security and Privacy in Speech Communication, 2022 Model inversion (MI) attacks allow to reconstruct average per-class representations…”
    Journal Article