Search Results - "Eykholt, Kevin"

  1.

    Robust Physical-World Attacks on Deep Learning Visual Classification by Eykholt, Kevin; Evtimov, Ivan; Fernandes, Earlence; Li, Bo; Rahmati, Amir; Xiao, Chaowei; Prakash, Atul; Kohno, Tadayoshi; Song, Dawn

    “…Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations…”
    Conference Proceeding
  2.

    Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms by Eykholt, Kevin

    Published 01-01-2019
    “…Studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to…”
    Dissertation
  3.

    Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks by Eykholt, Kevin; Ahmed, Farhan; Vaishnavi, Pratik; Rahmati, Amir

    Published 15-10-2024
    “…The vulnerability of machine learning models in adversarial scenarios has garnered significant interest in the academic community over the past decade,…”
    Journal Article
  4.

    Accelerating Certified Robustness Training via Knowledge Transfer by Vaishnavi, Pratik; Eykholt, Kevin; Rahmati, Amir

    Published 25-10-2022
    “…Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of…”
    Journal Article
  5.

    Tyche: A Risk-Based Permission Model for Smart Homes by Rahmati, Amir; Fernandes, Earlence; Eykholt, Kevin; Prakash, Atul

    “…Emerging smart home platforms, which interface with a variety of physical devices and support third-party application development, currently use permission…”
    Conference Proceeding
  6.

    Transferring Adversarial Robustness Through Robust Representation Matching by Vaishnavi, Pratik; Eykholt, Kevin; Rahmati, Amir

    Published 21-02-2022
    “…With the widespread use of machine learning, concerns over its security and reliability have become prevalent. As such, many have developed defenses to harden…”
    Journal Article
  7.

    Designing Adversarially Resilient Classifiers using Resilient Feature Engineering by Eykholt, Kevin; Prakash, Atul

    Published 17-12-2018
    “…We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers. According to existing work, adversarial attacks…”
    Journal Article
  8.

    Ares: A System-Oriented Wargame Framework for Adversarial ML by Ahmed, Farhan; Vaishnavi, Pratik; Eykholt, Kevin; Rahmati, Amir

    “…Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved…”
    Conference Proceeding
  9.

    Ares: A System-Oriented Wargame Framework for Adversarial ML by Ahmed, Farhan; Vaishnavi, Pratik; Eykholt, Kevin; Rahmati, Amir

    Published 24-10-2022
    “…Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved…”
    Journal Article
  10.

    URET: Universal Robustness Evaluation Toolkit (for Evasion) by Eykholt, Kevin; Lee, Taesung; Schales, Douglas; Jang, Jiyong; Molloy, Ian; Zorin, Masha

    Published 03-08-2023
    “…Machine learning models are known to be vulnerable to adversarial evasion attacks as illustrated by image classification models. Thoroughly understanding such…”
    Journal Article
  11.

    Adaptive Verifiable Training Using Pairwise Class Similarity by Wang, Shiqi; Eykholt, Kevin; Lee, Taesung; Jang, Jiyong; Molloy, Ian

    Published 14-12-2020
    “…Verifiable training has shown success in creating neural networks that are provably robust to a given amount of noise. However, despite only enforcing a single…”
    Journal Article
  12.

    Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models by Baracaldo, Nathalie; Ahmed, Farhan; Eykholt, Kevin; Zhou, Yi; Priya, Shriti; Lee, Taesung; Kadhe, Swanand; Tan, Mike; Polavaram, Sridevi; Suggs, Sterling; Gao, Yuyang; Slater, David

    “…Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to…”
    Conference Proceeding
  13.

    Can Attention Masks Improve Adversarial Robustness? by Vaishnavi, Pratik; Cong, Tianji; Eykholt, Kevin; Prakash, Atul; Rahmati, Amir

    Published 26-11-2019
    “…Deep Neural Networks (DNNs) are known to be susceptible to adversarial examples. Adversarial examples are maliciously crafted inputs that are designed to fool…”
    Journal Article
  14.

    Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders by Vaishnavi, Pratik; Eykholt, Kevin; Prakash, Atul; Rahmati, Amir

    Published 12-09-2019
    “…Adversarial machine learning is a well-studied field of research where an adversary causes predictable errors in a machine learning algorithm through precise…”
    Journal Article
  15.

    Separation of Powers in Federated Learning by Cheng, Pau-Chen; Eykholt, Kevin; Gu, Zhongshu; Jamjoom, Hani; Jayaram, K. R.; Valdez, Enriquillo; Verma, Ashish

    Published 19-05-2021
    “…Federated Learning (FL) enables collaborative training among mutually distrusting parties. Model updates, rather than training data, are concentrated and fused…”
    Journal Article
  16.

    Tyche: Risk-Based Permissions for Smart Home Platforms by Rahmati, Amir; Fernandes, Earlence; Eykholt, Kevin; Prakash, Atul

    Published 14-01-2018
    “…Emerging smart home platforms, which interface with a variety of physical devices and support third-party application development, currently use permission…”
    Journal Article
  17.

    Robust Classification using Robust Feature Augmentation by Eykholt, Kevin; Gupta, Swati; Prakash, Atul; Rahmati, Amir; Vaishnavi, Pratik; Zheng, Haizhong

    Published 26-05-2019
    “…Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that can cause a DNN misclassification,…”
    Journal Article
  18.

    Internet of Things Security Research: A Rehash of Old Ideas or New Intellectual Challenges? by Fernandes, Earlence; Rahmati, Amir; Eykholt, Kevin; Prakash, Atul

    Published 23-05-2017
    “…The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure…”
    Journal Article
  19.

    Physical Adversarial Examples for Object Detectors by Eykholt, Kevin; Evtimov, Ivan; Fernandes, Earlence; Li, Bo; Rahmati, Amir; Tramer, Florian; Prakash, Atul; Kohno, Tadayoshi; Song, Dawn

    Published 20-07-2018
    “…Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has…”
    Journal Article
  20.

    Note on Attacking Object Detectors with Adversarial Stickers by Eykholt, Kevin; Evtimov, Ivan; Fernandes, Earlence; Li, Bo; Song, Dawn; Kohno, Tadayoshi; Rahmati, Amir; Prakash, Atul

    Published 21-12-2017
    “…Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are…”
    Journal Article