ContrastNet: Unsupervised Feature Learning for EEG Signals and HSI Classification Using DCNN and Autoencoders


M. Rama, T. Nalini, A. Gayathri, G. L. Varaprasad, N. Saikiran

Abstract

Scientific research on digital images has advanced significantly since their emergence, and both the size and quality of the images produced have grown substantially. Nevertheless, the outcomes that can be achieved are limited when the data in these images stay within the visible (RGB) band. It has therefore become necessary to acquire images carrying broader spectral information, and the Hyperspectral Imaging (HSI) technique was developed to meet this requirement. This paper presents ContrastNet, a framework for unsupervised feature learning in two different domains: EEG signals and HSI. By combining autoencoders with Deep Convolutional Neural Networks (DCNNs), ContrastNet extracts discriminative features from these complex data types without requiring labelled training data. For EEG signals, ContrastNet learns representative features that capture significant patterns associated with different levels of brain activity. Similarly, for HSI classification, ContrastNet extracts features that encode the spectral information needed to differentiate between the categories of interest. By employing DCNNs and autoencoders in an unsupervised manner, ContrastNet provides an adaptable feature-learning method that can handle diverse datasets and applications without requiring large amounts of labelled data.
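The core idea the abstract describes, learning a compact representation from unlabelled data by minimising reconstruction error, can be sketched with a toy linear autoencoder. Everything below (the synthetic 16-band "pixels", the weight shapes, the learning rate) is an illustrative assumption and not the paper's actual DCNN-based ContrastNet architecture:

```python
import numpy as np

# Toy sketch of unsupervised autoencoder feature learning on synthetic
# "hyperspectral" pixels; purely illustrative, not the ContrastNet model.
rng = np.random.default_rng(0)

# Unlabelled data: 200 pixels with 16 spectral bands lying near a
# 3-dimensional subspace, plus a little noise.
basis = rng.normal(size=(3, 16)) / 4.0
X = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 16))

n_bands, n_features = 16, 3
W_enc = 0.1 * rng.normal(size=(n_bands, n_features))  # encoder weights
W_dec = 0.1 * rng.normal(size=(n_features, n_bands))  # decoder weights
lr = 0.05

mse_initial = float(np.mean((X @ W_enc @ W_dec - X) ** 2))

for _ in range(500):
    Z = X @ W_enc            # encode: raw bands -> latent features
    err = Z @ W_dec - X      # reconstruction error (no labels needed)
    # Gradients of the mean-squared reconstruction loss
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

features = X @ W_enc         # learned low-dimensional representation
mse_final = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(features.shape, mse_initial, mse_final)
```

The training signal here is the input itself, which is what makes the procedure unsupervised; the learned `features` could then feed a downstream classifier, analogous to how the paper uses the extracted features for EEG and HSI classification.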
