Search Results - "Bhagoji, Arjun"
1
Backdoor Attacks Against Deep Learning Systems in the Physical World
Published in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 01-06-2021. “…Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a…”
Conference Proceeding
2
Towards Scalable and Robust Model Versioning
Published in 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 09-04-2024. “…As the deployment of deep learning models continues to expand across industries, the threat of malicious incursions aimed at gaining access to these deployed…”
Conference Proceeding
3
"Community Guidelines Make this the Best Party on the Internet": An In-Depth Study of Online Platforms' Content Moderation Policies
Published 08-05-2024. “…Moderating user-generated content on online platforms is crucial for balancing user safety and freedom of speech. Particularly in the United States, platforms…”
Journal Article
4
Enhancing robustness of machine learning systems via data transformations
Published in 2018 52nd Annual Conference on Information Sciences and Systems (CISS), 01-03-2018. “…We propose the use of data transformations as a defense against evasion attacks on ML classifiers. We present and investigate strategies for incorporating a…”
Conference Proceeding
5
The Role of Data Geometry in Adversarial Machine Learning
Published 01-01-2020. “…As machine learning (ML) systems become ubiquitous, it is critically important to ensure that they are secure against adversaries. This is the focus of the…”
Dissertation
6
Towards Scalable and Robust Model Versioning
Published 17-01-2024. “…As the deployment of deep learning models continues to expand across industries, the threat of malicious incursions aimed at gaining access to these deployed…”
Journal Article
7
MYCROFT: Towards Effective and Efficient External Data Augmentation
Published 10-10-2024. “…Machine learning (ML) models often require large amounts of data to perform well. When the available data is limited, model trainers may need to acquire more…”
Journal Article
8
Feasibility of State Space Models for Network Traffic Generation
Published 04-06-2024. “…Many problems in computer networking rely on parsing collections of network traces (e.g., traffic prioritization, intrusion detection). Unfortunately, the…”
Journal Article
9
NetDiffusion: Network Data Augmentation Through Protocol-Constrained Traffic Generation
Published 12-10-2023. “…Datasets of labeled network traces are essential for a multitude of machine learning (ML) tasks in networking, yet their availability is hindered by privacy…”
Journal Article
10
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Published 13-10-2021. “…USENIX Security Symposium 2022 In adversarial machine learning, new defenses against attacks on deep learning systems are routinely broken soon after their…”
Journal Article
11
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker
Published 21-02-2023. “…Finding classifiers robust to adversarial examples is critical for their safe deployment. Determining the robustness of the best possible classifier under a…”
Journal Article
12
Lower Bounds on Adversarial Robustness from Optimal Transport
Published 26-09-2019. “…While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions…”
Journal Article
13
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Published 16-04-2021. “…Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical…”
Journal Article
14
A Real-time Defense against Website Fingerprinting Attacks
Published 08-02-2021. “…Anonymity systems like Tor are vulnerable to Website Fingerprinting (WF) attacks, where a local passive eavesdropper infers the victim's activity. Current WF…”
Journal Article
15
Equivalence of 2D color codes (without translational symmetry) to surface codes
Published in 2015 IEEE International Symposium on Information Theory (ISIT), 01-06-2015. “…In a recent work, Bombin, Duclos-Cianci, and Poulin showed that every local translationally invariant 2D topological stabilizer code is locally equivalent to a…”
Conference Proceeding; Journal Article
16
Natural Backdoor Datasets
Published 21-06-2022. “…Extensive literature on backdoor poison attacks has studied attacks and defenses for backdoors using "digital trigger patterns." In contrast, "physical…”
Journal Article
17
Understanding Robust Learning through the Lens of Representation Similarities
Published 20-06-2022. “…Representation learning, i.e. the generation of representations useful for downstream applications, is a task of fundamental importance that underlies much of…”
Journal Article
18
On the Permanence of Backdoors in Evolving Models
Published 07-06-2022. “…Existing research on training-time attacks for deep neural networks (DNNs), such as backdoors, largely assume that models are static once trained, and hidden…”
Journal Article
19
A Critical Evaluation of Open-World Machine Learning
Published 08-07-2020. “…Open-world machine learning (ML) combines closed-world models trained on in-distribution data with out-of-distribution (OOD) detectors, which aim to detect and…”
Journal Article
20
Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning
Published 03-02-2023. “…The proliferation of global censorship has led to the development of a plethora of measurement platforms to monitor and expose it. Censorship of the domain…”
Journal Article