DeepMotions: A Deep Learning System for Path Prediction Using Similar Motions

Bibliographic Details
Published in:IEEE access Vol. 8; p. 23881
Main Authors: Abdalla, Mohammed, Hendawi, Abdeltawab, Mokhtar, Hoda M O, Elgamal, Neveen, Krumm, John, Ali, Mohamed
Format: Journal Article
Language:English
Published: Piscataway The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 01-01-2020
Summary:Trajectory prediction techniques play a vital role in many location-based services, such as mobile advertising, carpooling, taxi services, traffic management, and routing services. These techniques rely on an object's motion history to predict its future path(s); as a consequence, they fail when that history is unavailable. History may be unavailable for several reasons: it might be inaccessible, the user may be recently registered with no preceding records, or previously logged data may be withheld for confidentiality and privacy. This paper presents a bi-directional recurrent deep-learning-based prediction system, named DeepMotions, that predicts the future path of a query object without any prior knowledge of the object's historical motions. The main idea of DeepMotions is to observe moving objects in the vicinity whose motion patterns are similar to the query object's, and then use those similar objects to train a model that predicts the query object's future steps. To compute similarity, we propose a similarity function based on the KNN algorithm. Extensive experiments on real data sets confirm the efficiency and prediction quality of DeepMotions, with up to 96% accuracy.
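The abstract does not give the paper's exact similarity function, but the core idea (rank nearby trajectories by distance to the query and keep the k nearest as training neighbors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `knn_similar_trajectories` and the use of mean point-wise Euclidean distance over a common prefix are assumptions for the example.

```python
import numpy as np

def knn_similar_trajectories(query, candidates, k=3):
    """Rank candidate trajectories by mean point-wise Euclidean distance
    to the query trajectory and return the indices of the k nearest.
    Each trajectory is a sequence of (x, y) samples; all trajectories
    are truncated to a common length before comparison."""
    n = min(len(query), *(len(c) for c in candidates))
    q = np.asarray(query[:n], dtype=float)
    dists = [np.linalg.norm(np.asarray(c[:n], dtype=float) - q, axis=1).mean()
             for c in candidates]
    return [int(i) for i in np.argsort(dists)[:k]]

# Toy example: two candidates hug the query path, one diverges.
query = [(0, 0), (1, 0), (2, 0), (3, 0)]
candidates = [
    [(0, 0.1), (1, 0.1), (2, 0.1), (3, 0.1)],      # close to query
    [(0, 5.0), (1, 5.0), (2, 5.0), (3, 5.0)],      # far from query
    [(0, -0.2), (1, -0.2), (2, -0.2), (3, -0.2)],  # close to query
]
print(knn_similar_trajectories(query, candidates, k=2))  # → [0, 2]
```

In the full system described by the abstract, the trajectories selected this way would then feed a bi-directional recurrent network that predicts the query object's next positions.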
ISSN:2169-3536
DOI:10.1109/ACCESS.2020.2966982