Universal Adversarial Attacks on the Raw Data from a Frequency Modulated Continuous Wave Radar
Published in: IEEE Access, Vol. 10, p. 1
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2022
Summary: As more and more applications rely on Artificial Intelligence (AI), it is inevitable to explore the associated safety and security risks, especially for sensitive applications where physical integrity is at stake. One of the most interesting challenges that comes with AI is adversarial attacks, a well-researched problem in the visual domain, where a small change in the input data can cause a Neural Network (NN) to make an incorrect prediction. In the radar domain, AI is not yet as widespread, but the results that AI applications produce are very promising, which is why more and more applications based on it are being deployed. This work presents three attack methods that are particularly suitable for the radar domain. The developed algorithms generate universal adversarial attack patches for all sorts of NN-based radar applications. The main goal of the algorithms, apart from computing universal patches, is identifying sensitive areas in the raw radar data input, which can then be examined more closely. To the best of our knowledge, this is the first work that computes universal patches on raw radar data, which is of great importance especially for interference analysis. The developed algorithms were verified on two data sets. The first comes from autonomous driving, where the attacks shift the predicted steering value (in [-1, 1]) by up to 0.3; the results were also successfully tested on a demonstrator. The second data set originated from a gesture recognition task, where the attacks decreased the accuracy from 97.0% to a minimum of 16.5%, slightly above the 12.5% accuracy of a purely random prediction.
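The abstract does not spell out the patch-computation algorithm, but the general idea of a universal adversarial patch can be sketched as follows: one additive perturbation, restricted to a fixed region of the raw input, is optimized over the entire dataset so that it degrades the model's prediction for every sample. The sketch below is a hypothetical toy illustration (FGSM-style sign ascent on a linear stand-in model, with made-up shapes and names), not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a radar NN: a fixed linear regressor on flattened
# 8x8 raw-data frames (all names and shapes here are illustrative).
W = rng.normal(size=(64,))

def predict(x):
    return x @ W                      # scalar "steering value" per frame

def grad_wrt_input(x, y):
    # Gradient of the squared error (predict(x) - y)^2 w.r.t. the input.
    return 2.0 * (predict(x) - y) * W

# Synthetic dataset of raw frames with slightly noisy targets.
X = rng.normal(size=(100, 64))
Y = X @ W + 0.1 * rng.normal(size=100)

# The patch is universal (one patch shared by all inputs) and spatially
# restricted (only the first 16 bins), mimicking a localized disturbance.
mask = np.zeros(64)
mask[:16] = 1.0
eps = 0.1

# FGSM-style universal step: accumulate the input gradient over the
# whole dataset, then perturb every input in the shared ascent direction.
g = sum(grad_wrt_input(x, y) for x, y in zip(X, Y))
patch = eps * np.sign(g) * mask

clean_mse = np.mean((X @ W - Y) ** 2)
attacked_mse = np.mean(((X + patch) @ W - Y) ** 2)
print(attacked_mse > clean_mse)       # the shared patch raises the error
```

Because the mask confines the perturbation to a fixed slice of the raw data, inspecting where such a patch is most effective is one plausible way to identify the "sensitive areas" of the input that the abstract mentions.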
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3218349