Hybrid Recommender System Using CNNs, Bi-Directional RNNs, and Autoencoders

Naveen Kumar Navuri, CVPR Prasad

Abstract

This research presents an innovative hybrid recommender system that combines stacked Convolutional Neural Networks (CNNs), Bi-Directional Recurrent Neural Networks (RNNs), and Improved Autoencoders to deliver accurate and tailored product recommendations. Conventional recommendation methods, including collaborative and content-based filtering, frequently struggle to capture the complex and ever-changing relationships between users and items. To address these limitations, our hybrid approach integrates multiple deep learning techniques to extract and merge visual, temporal, and latent characteristics from user-interaction and product data.


The proposed method first employs convolutional neural networks (CNNs) with stacked architectures to extract rich, high-quality visual features from product images. These visual features are then merged with embedded metadata via attention mechanisms, ensuring that important visual and contextual information is captured precisely. Next, bi-directional recurrent neural networks (Bi-RNNs) model the temporal patterns of user activity, providing a thorough understanding of user behavior over extended periods. Finally, the temporal features are combined with the visual features to form a unified, well-balanced representation of user preferences.
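The fusion stage described above can be sketched as follows. This is a minimal illustrative PyTorch module, not the authors' exact architecture: all layer sizes, the use of multi-head attention for the visual/metadata fusion, and the choice of a GRU for the Bi-RNN are assumptions made for the sketch.

```python
# Illustrative sketch of the visual/temporal fusion stage (all sizes
# and module choices are assumptions, not the paper's exact design).
import torch
import torch.nn as nn

class VisualTemporalFusion(nn.Module):
    def __init__(self, meta_vocab=1000, embed_dim=64, hidden_dim=64):
        super().__init__()
        # Stacked CNN: extracts visual features from product images.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Embedding for item metadata tokens (e.g. genre/category IDs).
        self.meta_embed = nn.Embedding(meta_vocab, embed_dim)
        # Attention mechanism fusing visual features with metadata.
        self.fuse_attn = nn.MultiheadAttention(embed_dim, num_heads=4,
                                               batch_first=True)
        # Bi-directional RNN over the user's interaction sequence.
        self.birnn = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, images, meta_ids):
        # images: (B, T, 3, H, W) -- one image per interacted item
        # meta_ids: (B, T)       -- one metadata token per item
        B, T = images.shape[:2]
        vis = self.cnn(images.flatten(0, 1)).view(B, T, -1)   # (B, T, E)
        meta = self.meta_embed(meta_ids)                      # (B, T, E)
        # Visual features attend to metadata embeddings.
        fused, _ = self.fuse_attn(vis, meta, meta)            # (B, T, E)
        # Final hidden states of both RNN directions = temporal summary.
        _, h = self.birnn(fused)                              # (2, B, H)
        temporal = torch.cat([h[0], h[1]], dim=-1)            # (B, 2H)
        visual_summary = vis.mean(dim=1)                      # (B, E)
        # Unified representation: visual + temporal features.
        return torch.cat([visual_summary, temporal], dim=-1)  # (B, E + 2H)

model = VisualTemporalFusion()
imgs = torch.randn(2, 5, 3, 32, 32)     # 2 users, 5 interacted items each
meta = torch.randint(0, 1000, (2, 5))
out = model(imgs, meta)
print(out.shape)  # torch.Size([2, 192])
```

The final concatenation makes the "unified representation" explicit: 64 visual dimensions plus 128 bidirectional temporal dimensions per user.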


To boost recommendation accuracy, the combined features are processed by an improved autoencoder that integrates residual blocks and attention mechanisms. This autoencoder uses dimensionality reduction and reconstruction to refine the features and suppress noise. The resulting compact, informative feature representations are then used to generate recommendations.
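A minimal sketch of such a refinement autoencoder is shown below. The dimensions, the placement of the residual blocks, and the feature-wise gating used as the "attention" step are illustrative assumptions; the paper does not specify these details here.

```python
# Illustrative autoencoder with residual blocks and a simple
# feature-gating attention step (dimensions are assumptions).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
    def forward(self, x):
        # Skip connection stabilizes training of the deeper stack.
        return torch.relu(x + self.net(x))

class ImprovedAutoencoder(nn.Module):
    def __init__(self, in_dim=192, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 96), nn.ReLU(),
            ResidualBlock(96),
            nn.Linear(96, latent_dim),   # bottleneck: dimensionality reduction
        )
        # Feature-wise gate standing in for the attention mechanism.
        self.attn = nn.Sequential(nn.Linear(latent_dim, latent_dim),
                                  nn.Sigmoid())
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 96), nn.ReLU(),
            ResidualBlock(96),
            nn.Linear(96, in_dim),       # reconstruction target
        )

    def forward(self, x):
        z = self.encoder(x)
        z = z * self.attn(z)             # attention re-weights latent features
        return z, self.decoder(z)

ae = ImprovedAutoencoder()
feats = torch.randn(4, 192)                   # fused user representations
latent, recon = ae(feats)
# Reconstruction loss drives the denoising/refinement objective.
loss = nn.functional.mse_loss(recon, feats)
print(latent.shape, recon.shape)  # torch.Size([4, 32]) torch.Size([4, 192])
```

After training, the compact `latent` vectors (not the reconstructions) would be used downstream, e.g. to rank candidate items by similarity in the latent space.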


We validate our model on the TMDB dataset, which includes comprehensive metadata, textual descriptions, and visual content for movies. Experimental results show significant improvements in both the accuracy and relevance of recommendations compared with conventional methods and other deep-learning-based approaches. In particular, our approach is more effective at capturing complex user-item interactions and adapting to evolving user preferences over time. This hybrid methodology provides a robust solution for contemporary recommender systems, yielding richer and more tailored user experiences.
