Search Results - "Boenisch, Franziska"
1. A Systematic Review on Model Watermarking for Neural Networks
Published in Frontiers in Big Data (29-11-2021): “…Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages…”
Journal Article
2. Tracking All Members of a Honey Bee Colony Over Their Lifetime Using Learned Models of Correspondence
Published in Frontiers in Robotics and AI (04-04-2018): “…Computational approaches to the analysis of collective behavior in social insects increasingly rely on motion paths as an intermediate data layer from which…”
Journal Article
3. Toward Sharing Brain Images: Differentially Private TOF-MRA Images With Segmentation Labels Using Generative Adversarial Networks
Published in Frontiers in Artificial Intelligence (02-05-2022): “…Sharing labeled data is crucial to acquire large datasets for various Deep Learning applications. In medical imaging, this is often not feasible due to privacy…”
Journal Article
4. Secure and Private Machine Learning
Published 01-01-2022: “…In recent years, the advances of Machine Learning (ML) have led to its increased application within critical applications and on highly sensitive data. This…”
Dissertation
5. Controlled Privacy Leakage Propagation Throughout Overlapping Grouped Learning
Published in IEEE Journal on Selected Areas in Information Theory (2024): “…Federated Learning (FL) is the standard protocol for collaborative learning. In FL, multiple workers jointly train a shared model. They exchange model updates…”
Journal Article
6. A Systematic Review on Model Watermarking for Neural Networks
Published 08-12-2021: “…Frontiers in Big Data 4 (2021) Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and…”
Journal Article
7. When the Curious Abandon Honesty: Federated Learning Is Not Private
Published in 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P) (01-07-2023): “…In federated learning (FL), data does not leave personal devices when they are jointly training a machine learning model. Instead, these devices share…”
Conference Proceeding
8. Controlled Privacy Leakage Propagation Throughout Differentially Private Overlapping Grouped Learning
Published in 2024 IEEE International Symposium on Information Theory (ISIT) (07-07-2024): “…Federated Learning (FL) is a privacy-centric framework for distributed learning where devices collaborate to develop a shared global model while keeping their…”
Conference Proceeding
9. On the Privacy Risk of In-context Learning
Published 15-11-2024: “…Large language models (LLMs) are excellent few-shot learners. They can perform a wide variety of tasks purely based on natural language prompts provided to…”
Journal Article
10. Localizing Memorization in SSL Vision Encoders
Published 27-09-2024: “…Recent work on studying memorization in self-supervised learning (SSL) suggests that even though SSL encoders are trained on millions of images, they still…”
Journal Article
11. Beyond the Mean: Differentially Private Prototypes for Private Transfer Learning
Published 12-06-2024: “…Machine learning (ML) models have been shown to leak private information from their training datasets. Differential Privacy (DP), typically implemented through…”
Journal Article
12. Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
Published 04-06-2024: “…Diffusion models (DMs) produce very detailed and high-quality images. Their power results from extensive training on large amounts of data, usually scraped…”
Journal Article
13. Regulation Games for Trustworthy Machine Learning
Published 05-02-2024: “…Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy. Additionally, many…”
Journal Article
14. Personalized Differential Privacy for Ridge Regression
Published 30-01-2024: “…The increased application of machine learning (ML) in sensitive domains requires protecting the training data through privacy frameworks, such as differential…”
Journal Article
15. Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
Published 12-10-2023: “…Machine Learning as a Service (MLaaS) APIs provide ready-to-use and high-utility encoders that generate vector representations for given inputs. Since these…”
Journal Article
16. Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
Published 24-05-2023: “…Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first…”
Journal Article
17. Have it your way: Individualized Privacy Assignment for DP-SGD
Published 29-03-2023: “…When training a machine learning model with differential privacy, one sets a privacy budget. This budget represents a maximal privacy violation that any user…”
Journal Article
18. Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility
Published 17-02-2023: “…Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Both of these objectives present unique trade-offs with the utility…”
Journal Article
19. Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives
Published 02-11-2024: “…While open Large Language Models (LLMs) have made significant progress, they still fall short of matching the performance of their closed, proprietary…”
Journal Article
20. Introducing Model Inversion Attacks on Automatic Speaker Recognition
Published 09-01-2023: “…Proc. 2nd Symposium on Security and Privacy in Speech Communication, 2022 Model inversion (MI) attacks allow reconstruction of average per-class representations…”
Journal Article