End-to-end deep learning-based framework for driver action recognition
Published in: | 2022 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), pp. 1 - 6 |
Main Authors: | |
Format: | Conference Proceeding |
Language: | English |
Published: | IEEE, 01-10-2022 |
Summary: | Distracted driving is the main cause of traffic accidents. To prevent road accidents, different solutions have been proposed. Among these solutions, computer vision techniques have recently attracted much attention thanks to their potential for identifying distracted driving behavior on the road. Driver action recognition poses its own challenges compared with general human action recognition, as drivers' activities can be distinguished from one another only by fine movements. Variation in lighting conditions, as well as light reflections on the car door, is another challenge of driver action recognition. In this paper, we propose an end-to-end deep learning-based framework for continuous driver action recognition. To this end, the MoviNet-A0 network, originally proposed for action recognition, is adapted for driver action recognition. Then, to leverage the information from different camera views, several voting and post-processing strategies are proposed. The proposed methods are evaluated on both isolated action classification and continuous video recognition using the SynDD1 dataset introduced in Track 3 of the 6th AI City Challenge. The main difficulty in this competition is to accurately locate the starting and ending times of each action. For isolated classification, the proposed method classifies 18 actions with 97.13% top-1 accuracy. For continuous recognition, our results in the competition reach an F1-score of 0.1348. |
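The abstract does not detail the voting and post-processing strategies. As a rough illustration only, the sketch below (not the authors' code) shows one common way to fuse per-view clip probabilities by soft voting and to merge per-clip predictions into labeled temporal segments; all function and variable names are hypothetical assumptions.

```python
import numpy as np

# Hypothetical sketch of multi-view soft voting and simple temporal
# post-processing. It only illustrates the general idea described in
# the abstract; the paper's actual strategies are in the full text.

def soft_vote(view_probs):
    """Average per-view softmax probabilities.

    view_probs: array of shape (num_views, num_clips, num_classes),
    e.g. probabilities from a clip-level action classifier applied
    independently to each camera view.
    """
    return np.mean(view_probs, axis=0)  # -> (num_clips, num_classes)

def to_segments(clip_probs, clip_len_s=1.0, min_conf=0.5):
    """Turn per-clip class probabilities into (label, start_s, end_s) segments:
    take the arg-max per clip, drop low-confidence clips, and merge
    consecutive clips that share the same label."""
    labels = clip_probs.argmax(axis=1)
    confs = clip_probs.max(axis=1)
    segments = []
    for t, (lab, conf) in enumerate(zip(labels, confs)):
        if conf < min_conf:
            continue
        start, end = t * clip_len_s, (t + 1) * clip_len_s
        if segments and segments[-1][0] == lab and np.isclose(segments[-1][2], start):
            segments[-1][2] = end  # extend the previous segment
        else:
            segments.append([lab, start, end])
    return [(int(lab), start, end) for lab, start, end in segments]

# Toy usage: random scores for 3 views, 10 one-second clips, 18 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(18), size=(3, 10))
fused = soft_vote(probs)
print(to_segments(fused))
```

Weighted or majority voting across views and temporal smoothing of the per-clip labels are natural variants of this scheme; which combinations the authors evaluate is specified in the full text.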
ISSN: | 2770-6850 |
DOI: | 10.1109/MAPR56351.2022.9924944 |