A time–frequency convolutional neural network for the offline classification of steady-state visual evoked potential responses

Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 32, No. 8, pp. 1145-1153
Main Author: Cecotti, Hubert
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-06-2011
Description
Summary: ► A new convolutional neural network architecture. ► It includes the Fourier transform between two hidden layers. ► Classification of steady-state visual evoked potentials. ► The average recognition rate is 95.61%.

A new convolutional neural network architecture is presented. It includes the fast Fourier transform between two hidden layers to switch the signal analysis from the time domain to the frequency domain inside the network. This technique allows signal classification without any special pre-processing and embeds knowledge of the problem in the network topology. The first step creates different spatial and time filters. The second step is dedicated to the transformation of the signal into the frequency domain. The last step is the classification. The system is tested offline on the classification of EEG signals that contain steady-state visual evoked potential (SSVEP) responses. The mean recognition rate for classifying five different types of SSVEP responses is 95.61% on time segments of 1 s. The proposed strategy outperforms other classical neural network architectures.
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2011.02.022
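The abstract describes a three-step topology: spatial and time filtering in the time domain, a fast Fourier transform applied between two hidden layers, and a final classification stage. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of time-frequency network, not the paper's exact architecture; the electrode count, filter sizes, sampling rate, layer widths, and the five-class output are all assumptions made for the example.

# Illustrative sketch of a time-frequency CNN for SSVEP classification.
# All hyperparameters (channels, filter sizes, 256 Hz / 1 s segments,
# five classes) are assumptions, not the values used in the paper.
import torch
import torch.nn as nn


class TimeFrequencyCNN(nn.Module):
    def __init__(self, n_channels=8, n_samples=256, n_classes=5,
                 n_spatial=4, n_temporal=8, kernel=13):
        super().__init__()
        # Hidden layer 1: spatial filters (linear combinations of electrodes).
        self.spatial = nn.Conv2d(1, n_spatial, kernel_size=(n_channels, 1))
        # Hidden layer 2: temporal filters applied to each spatial stream.
        self.temporal = nn.Conv2d(n_spatial, n_temporal,
                                  kernel_size=(1, kernel),
                                  padding=(0, kernel // 2))
        self.act = nn.Tanh()
        # After the FFT, the magnitude spectrum feeds the classifier.
        n_freq = n_samples // 2 + 1
        self.classifier = nn.Linear(n_temporal * n_freq, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) raw EEG segment (e.g. 1 s).
        h = self.act(self.spatial(x))      # (batch, n_spatial, 1, n_samples)
        h = self.act(self.temporal(h))     # (batch, n_temporal, 1, n_samples)
        # Switch from the time domain to the frequency domain inside the net.
        spec = torch.fft.rfft(h, dim=-1).abs()
        return self.classifier(spec.flatten(1))


# Example: classify a batch of two 1 s EEG segments sampled at 256 Hz.
model = TimeFrequencyCNN()
logits = model(torch.randn(2, 1, 8, 256))  # -> shape (2, 5)

Placing the FFT between the temporal filtering and the classifier mirrors the idea stated in the abstract: the early layers learn spatial and time filters on the raw signal, while the decision is taken on spectral features, which is a natural fit for SSVEP responses that are defined by their stimulation frequencies.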