Search Results - "Eykholt, Kevin"
1. Robust Physical-World Attacks on Deep Learning Visual Classification
Published in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (01-06-2018). “…Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations…”
Conference Proceeding
2. Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms
Published 01-01-2019. “…Studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to…”
Dissertation
3. Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Published 15-10-2024. “…The vulnerability of machine learning models in adversarial scenarios has garnered significant interest in the academic community over the past decade,…”
Journal Article
4. Accelerating Certified Robustness Training via Knowledge Transfer
Published 25-10-2022. “…Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of…”
Journal Article
5. Tyche: A Risk-Based Permission Model for Smart Homes
Published in 2018 IEEE Cybersecurity Development (SecDev) (01-09-2018). “…Emerging smart home platforms, which interface with a variety of physical devices and support third-party application development, currently use permission…”
Conference Proceeding
6. Transferring Adversarial Robustness Through Robust Representation Matching
Published 21-02-2022. “…With the widespread use of machine learning, concerns over its security and reliability have become prevalent. As such, many have developed defenses to harden…”
Journal Article
7. Designing Adversarially Resilient Classifiers using Resilient Feature Engineering
Published 17-12-2018. “…We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers. According to existing work, adversarial attacks…”
Journal Article
8. Ares: A System-Oriented Wargame Framework for Adversarial ML
Published in 2022 IEEE Security and Privacy Workshops (SPW) (01-05-2022). “…Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved…”
Conference Proceeding
9. Ares: A System-Oriented Wargame Framework for Adversarial ML
Published 24-10-2022. “…Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved…”
Journal Article
10. URET: Universal Robustness Evaluation Toolkit (for Evasion)
Published 03-08-2023. “…Machine learning models are known to be vulnerable to adversarial evasion attacks as illustrated by image classification models. Thoroughly understanding such…”
Journal Article
11. Adaptive Verifiable Training Using Pairwise Class Similarity
Published 14-12-2020. “…Verifiable training has shown success in creating neural networks that are provably robust to a given amount of noise. However, despite only enforcing a single…”
Journal Article
12. Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
Published in 2023 IEEE Security and Privacy Workshops (SPW) (01-05-2023). “…Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to…”
Conference Proceeding
13. Can Attention Masks Improve Adversarial Robustness?
Published 26-11-2019. “…Deep Neural Networks (DNNs) are known to be susceptible to adversarial examples. Adversarial examples are maliciously crafted inputs that are designed to fool…”
Journal Article
14. Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders
Published 12-09-2019. “…Adversarial machine learning is a well-studied field of research where an adversary causes predictable errors in a machine learning algorithm through precise…”
Journal Article
15. Separation of Powers in Federated Learning
Published 19-05-2021. “…Federated Learning (FL) enables collaborative training among mutually distrusting parties. Model updates, rather than training data, are concentrated and fused…”
Journal Article
16. Tyche: Risk-Based Permissions for Smart Home Platforms
Published 14-01-2018. “…Emerging smart home platforms, which interface with a variety of physical devices and support third-party application development, currently use permission…”
Journal Article
17. Robust Classification using Robust Feature Augmentation
Published 26-05-2019. “…Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that can cause a DNN misclassification,…”
Journal Article
18. Internet of Things Security Research: A Rehash of Old Ideas or New Intellectual Challenges?
Published 23-05-2017. “…The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure…”
Journal Article
19. Physical Adversarial Examples for Object Detectors
Published 20-07-2018. “…Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has…”
Journal Article
20. Note on Attacking Object Detectors with Adversarial Stickers
Published 21-12-2017. “…Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are…”
Journal Article