Search Results - "Gurevin, Deniz"

  • Showing 1–16 of 16 results
  1. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples by Mahmood, Kaleel, Gurevin, Deniz, van Dijk, Marten, Nguyen, Phuong Ha

    Published in Entropy (Basel, Switzerland) (18-10-2021)
    “…Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and CVPR. These defenses are mainly focused on mitigating white-box attacks. They do…”
    Journal Article
  2. Exploiting Intrinsic Redundancies in Dynamic Graph Neural Networks for Processing Efficiency by Gurevin, Deniz, Ding, Caiwen, Khan, Omer

    Published in IEEE computer architecture letters (01-07-2024)
    “…Modern dynamical systems are rapidly incorporating artificial intelligence to improve the efficiency and quality of complex predictive analytics. To…”
    Journal Article
  3. Secure Remote Attestation with Strong Key Insulation Guarantees by Gurevin, Deniz, Jin, Chenglu, Nguyen, Phuong Ha, Khan, Omer, van Dijk, Marten

    Published in IEEE transactions on computers (2024)
    “…Secure processors with hardware-enforced isolation are crucial for secure cloud computation. However, commercial secure processors have underestimated the…”
    Journal Article
  4. Enabling Retrain-Free Deep Neural Network Pruning Using Surrogate Lagrangian Relaxation by Gurevin, Deniz

    Published 01-01-2022
    “…Network pruning is a widely used technique to reduce computation cost and model size for deep neural networks. However, the typical three-stage pipeline, i.e.,…”
    Dissertation
  5. Towards Sparsification of Graph Neural Networks by Peng, Hongwu, Gurevin, Deniz, Huang, Shaoyi, Geng, Tong, Jiang, Weiwen, Khan, Omer, Ding, Caiwen

    “…As real-world graphs expand in size, larger GNN models with billions of parameters are deployed. High parameter count in such models makes training and…”
    Conference Proceeding
  6. Towards Real-Time Temporal Graph Learning by Gurevin, Deniz, Shan, Mohsin, Geng, Tong, Jiang, Weiwen, Ding, Caiwen, Khan, Omer

    “…In recent years, graph representation learning has gained significant popularity, which aims to generate node embeddings that capture features of graphs. One…”
    Conference Proceeding
  7. PruneGNN: Algorithm-Architecture Pruning Framework for Graph Neural Network Acceleration by Gurevin, Deniz, Shan, Mohsin, Huang, Shaoyi, Hasan, MD Amit, Ding, Caiwen, Khan, Omer

    “…Performing training and inference for Graph Neural Networks (GNNs) under tight latency constraints has become increasingly difficult as real-world input graphs…”
    Conference Proceeding
  8. Masked Memory Primitive for Key Insulated Schemes by DiMeglio, Zachary, Bustami, Jenna, Gurevin, Deniz, Jin, Chenglu, van Dijk, Marten, Khan, Omer

    “…In practical security systems, it is difficult to keep secret keys protected against adversarial attacks. Key insulated schemes (KIS) are used to improve…”
    Conference Proceeding
  9. An Efficient Algorithm for the Construction of Dynamically Updating Trajectory Networks by Gurevin, Deniz, Michael, Chris J., Khan, Omer

    “…Trajectory based spatiotemporal networks (STN) are useful in a wide range of applications, such as crowd behavior analysis. Significant portion of trajectory…”
    Conference Proceeding
  10. MergePath-SpMM: Parallel Sparse Matrix-Matrix Algorithm for Graph Neural Network Acceleration by Shan, Mohsin, Gurevin, Deniz, Nye, Jared, Ding, Caiwen, Khan, Omer

    “…Graph neural networks have seen tremendous adoption to perform complex predictive analytics on massive and unstructured real-world graphs. The trend in…”
    Conference Proceeding
  11. Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning by Zhou, Shanglin, Bragin, Mikhail A, Pepin, Lynn, Gurevin, Deniz, Miao, Fei, Ding, Caiwen

    Published 08-04-2023
    “…Network pruning is a widely used technique to reduce computation cost and model size for deep neural networks. However, the typical three-stage pipeline…”
    Journal Article
  12. Towards Real-Time Temporal Graph Learning by Gurevin, Deniz, Shan, Mohsin, Geng, Tong, Jiang, Weiwen, Ding, Caiwen, Khan, Omer

    Published 08-10-2022
    “…In recent years, graph representation learning has gained significant popularity, which aims to generate node embeddings that capture features of graphs. One…”
    Journal Article
  13. Towards Sparsification of Graph Neural Networks by Peng, Hongwu, Gurevin, Deniz, Huang, Shaoyi, Geng, Tong, Jiang, Weiwen, Khan, Omer, Ding, Caiwen

    Published 10-09-2022
    “…As real-world graphs expand in size, larger GNN models with billions of parameters are deployed. High parameter count in such models makes training and…”
    Journal Article
  14. Secure Remote Attestation with Strong Key Insulation Guarantees by Gurevin, Deniz, Jin, Chenglu, Nguyen, Phuong Ha, Khan, Omer, van Dijk, Marten

    Published 05-01-2022
    “…Recent years have witnessed a trend of secure processor design in both academia and industry. Secure processors with hardware-enforced isolation can be a solid…”
    Journal Article
  15. Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples by Mahmood, Kaleel, Gurevin, Deniz, van Dijk, Marten, Nguyen, Phuong Ha

    Published 20-05-2021
    “…Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and CVPR. These defenses are mainly focused on mitigating white-box attacks. They do…”
    Journal Article
  16. Enabling Retrain-free Deep Neural Network Pruning using Surrogate Lagrangian Relaxation by Gurevin, Deniz, Zhou, Shanglin, Pepin, Lynn, Li, Bingbing, Bragin, Mikhail, Ding, Caiwen, Miao, Fei

    Published 18-12-2020
    “…Network pruning is a widely used technique to reduce computation cost and model size for deep neural networks. However, the typical three-stage pipeline, i.e.,…”
    Journal Article