Adversarial Attack on 3D Fused Sensory Data in Drone Surveillance
Published in: 2024 2nd International Conference on Advancement in Computation & Computer Technologies (InCACCT), pp. 70-75
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 02-05-2024
Summary: Recently, unmanned aerial vehicles (UAVs) have been used not only for surveillance but also in safety-critical applications. Today's UAVs are equipped with state-of-the-art sensors (e.g., LiDAR and 2D cameras) for monitoring public spaces. In most drone surveillance applications, the LiDAR and 2D camera data are fused for efficient monitoring and to conserve storage. Deep neural networks (DNNs) are typically used to perform object detection, classification, and segmentation on this fused data so that crucial information can be extracted. However, DNNs are vulnerable to adversarial attacks that cause objects to be misclassified. In this work, we performed a late fusion of the 2D camera and 3D point cloud data and applied an adversarial patch to the fused data. We propose a pixel distillation methodology (PDM) for generating an adversarial patch that misclassifies detected objects. We evaluated the proposed methodology on the benchmark KITTI dataset and the 3D-Model dataset; additionally, for real-time analysis in a controlled environment, we tested the model on Tello and Parrot drones. On average, our method achieves an 80% attack success rate and successfully fools the Yolo7 and Catchnet models.
DOI: 10.1109/InCACCT61598.2024.10551069
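The abstract outlines a pipeline of late camera–LiDAR fusion followed by an adversarial patch attack. As a rough illustration only, the Python sketch below shows how such a pipeline is commonly assembled: KITTI-style projection of LiDAR points into the image plane, a simple depth overlay as the fused representation, and pasting a precomputed patch onto the result. All function names (`project_lidar_to_image`, `late_fuse`, `apply_patch`), calibration values, and the depth-encoding choice are assumptions made for illustration; the paper's pixel distillation methodology (PDM) for actually generating the patch is not reproduced here.

```python
# Hypothetical sketch of the pipeline described in the abstract: late fusion of a
# LiDAR point cloud with a 2D camera image (KITTI-style projection), followed by
# pasting a precomputed adversarial patch. Not the paper's implementation.
import numpy as np

def project_lidar_to_image(points, P2, R0_rect, Tr_velo_to_cam):
    """Project Nx3 LiDAR points into pixel coordinates using KITTI calibration."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])   # homogeneous LiDAR points
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)     # LiDAR frame -> rectified camera frame
    in_front = cam[2, :] > 0                       # keep points in front of the camera
    cam = cam[:, in_front]
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    px = P2 @ cam_h                                # rectified camera frame -> image plane
    px = px[:2, :] / px[2, :]                      # perspective divide
    return px.T, cam[2, :]                         # pixel coordinates and depths

def late_fuse(image, pixels, depths):
    """Overlay projected LiDAR depth onto the camera image (a simple late fusion)."""
    fused = image.astype(np.float32).copy()
    h, w = image.shape[:2]
    for (u, v), d in zip(pixels.astype(int), depths):
        if 0 <= v < h and 0 <= u < w:
            # encode depth into the red channel as a crude fused representation
            fused[v, u, 0] = min(255.0, d * 3.0)
    return fused

def apply_patch(fused, patch, top_left):
    """Paste a precomputed adversarial patch (h x w x 3) at a fixed image location."""
    y, x = top_left
    ph, pw = patch.shape[:2]
    fused[y:y + ph, x:x + pw] = patch
    return fused

if __name__ == "__main__":
    # Random stand-ins for KITTI calibration and sensor data (values illustrative only)
    rng = np.random.default_rng(0)
    image = rng.integers(0, 255, size=(375, 1242, 3)).astype(np.uint8)
    points = rng.uniform([0, -20, -2], [60, 20, 2], size=(5000, 3))
    P2 = np.array([[721.5, 0.0, 621.0, 0.0],
                   [0.0, 721.5, 187.5, 0.0],
                   [0.0,   0.0,   1.0, 0.0]])
    R0_rect = np.eye(3)
    Tr_velo_to_cam = np.array([[0., -1.,  0., 0.],
                               [0.,  0., -1., 0.],
                               [1.,  0.,  0., 0.]])
    pixels, depths = project_lidar_to_image(points, P2, R0_rect, Tr_velo_to_cam)
    fused = late_fuse(image, pixels, depths)
    patch = rng.integers(0, 255, size=(50, 50, 3)).astype(np.float32)
    attacked = apply_patch(fused, patch, top_left=(150, 500))
```

In a real attack the patch contents would be optimized (e.g., by PDM as the paper proposes) rather than random, and the fused representation fed to the detector would match whatever encoding the target model (here, Yolo7 or Catchnet) expects.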