The performance of steady-state visual-evoked potential (SSVEP)-based Brain-Computer Interfaces (BCIs) has improved considerably with multi-channel classification techniques. These methods fundamentally involve designing spatial filters that linearly combine the electroencephalography (EEG) channels to strengthen the SSVEP and suppress noise. This paper proposes a nonlinear spatial filter using Maximum Contrastive Networks (MCNs). Essentially, MCNs are deep networks trained to maximize the contrast between signal and noise components in EEG; in other words, the network attempts to enhance the signal-to-noise ratio (SNR) of the SSVEPs in EEG. Networks with varying configurations and sigmoid functions are evaluated on the EEG recordings. After random initialization, the network is pre-trained using a denoising autoencoder.
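As an illustration, the following is a minimal sketch (not the authors' implementation) of such a nonlinear spatial filter and its denoising-autoencoder pre-training; the channel count, hidden width, noise level, and use of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_CHANNELS = 8   # illustrative EEG channel count, not from the paper
HIDDEN = 4       # illustrative hidden-layer width (one "configuration")

class SpatialFilterNet(nn.Module):
    """Nonlinear spatial filter: maps each multi-channel EEG sample to one filtered output."""
    def __init__(self, n_ch=N_CHANNELS, hidden=HIDDEN):
        super().__init__()
        self.encode = nn.Linear(n_ch, hidden)
        self.out = nn.Linear(hidden, 1)

    def act(self, x):
        # sign-preserving cube-root sigmoid; torch.tanh(x) would give the
        # hyperbolic-tangent variant (small epsilon keeps the gradient finite at zero)
        return torch.sign(x) * (torch.abs(x) + 1e-6).pow(1.0 / 3.0)

    def forward(self, x):                           # x: (time, channels)
        return self.out(self.act(self.encode(x)))   # (time, 1)

def pretrain_dae(net, eeg, noise_std=0.1, epochs=50, lr=1e-3):
    """Denoising-autoencoder pre-training: reconstruct clean EEG from a corrupted copy."""
    decoder = nn.Linear(net.encode.out_features, net.encode.in_features)
    opt = torch.optim.Adam(
        list(net.encode.parameters()) + list(decoder.parameters()), lr=lr
    )
    for _ in range(epochs):
        noisy = eeg + noise_std * torch.randn_like(eeg)     # corrupt the input
        recon = decoder(net.act(net.encode(noisy)))         # reconstruct the clean signal
        loss = ((recon - eeg) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```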
The network is then trained by back-propagation to maximize the contrast/SNR. The results obtained with the MCNs are compared with those of classifiers based on Minimum Energy Combination (MEC) and Canonical Correlation Analysis (CCA). In this initial study, the results show that MCNs significantly improve performance over the MEC- and CCA-based classifiers across all sessions for the trained subject. The cube-root sigmoid MCNs proved more accurate than the hyperbolic-tangent MCNs. Since significantly higher accuracies were attained for shorter EEG time segments, subject-specifically trained MCNs with an optimal configuration show considerable potential for online SSVEP detection.
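The back-propagation step described above can be sketched as maximizing a differentiable SNR-like contrast of the filtered output. The sampling rate, stimulation frequency, harmonic count, and optimizer below are illustrative assumptions, not values from the paper.

```python
import math
import torch

def snr_contrast(y, fs=250.0, stim_freq=13.0, n_harmonics=2):
    """Differentiable SNR-like contrast of a filtered 1-D signal y: energy captured
    by sine/cosine references at the stimulation frequency and its harmonics,
    divided by the remaining (noise) energy."""
    n = y.numel()
    t = torch.arange(n, dtype=y.dtype) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(torch.cos(2 * math.pi * h * stim_freq * t))
        refs.append(torch.sin(2 * math.pi * h * stim_freq * t))
    R = torch.stack(refs)                      # (2 * n_harmonics, time)
    R = R / R.norm(dim=1, keepdim=True)        # unit-norm (near-orthogonal) references
    signal_energy = ((R @ y) ** 2).sum()       # energy in the SSVEP reference subspace
    noise_energy = (y ** 2).sum() - signal_energy
    return signal_energy / (noise_energy + 1e-8)

def finetune(net, eeg, epochs=100, lr=1e-3):
    """Back-propagation fine-tuning: adjust all weights to maximize the contrast."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        y = net(eeg).squeeze(-1)               # filtered output, shape (time,)
        loss = -snr_contrast(y)                # gradient ascent on the SNR contrast
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```

Under this reading, detection would compare the resulting contrast (or a derived score) across candidate stimulation frequencies, analogous to how MEC and CCA scores are used; the specific decision rule is not specified here.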