An Improved Deep Network and Handcrafted Feature-Based Scene Classification Convolutional Model for Self-Driving Cars


Sanjay P. Pande, Sarika Khandelwal

Abstract

With growing urbanization, cars have populated the roads to a great extent, and intelligent cars are needed to mitigate congestion and improve traffic efficiency. Scene classification is one of the key components of self-driving cars, serving as a primary input for decision-making tasks. Scene complexity and diversity make the problem challenging, producing high similarity across different classes and high variability within the same class. To handle such scenarios, the proposed work presents a fusion of effective features obtained from deep networks and descriptors to classify scenes into four categories. Local features are obtained using the YOLOv5m and VGG19 networks: YOLOv5m detects the relevant objects in the scene, and VGG19 extracts deep (blind) features from the detected objects. A global feature extraction module extracts deep features from the whole image, also using the VGG19 network. To improve classification accuracy, eight handcrafted features capturing fine and coarse image details are fused with the local and global features. A fully connected network comprising five layers is then used to classify scenes into four categories: crosswalk, highway, overpass/tunnel, and parking. A self-generated dataset, constructed from four publicly available datasets, is used to evaluate the performance of the proposed scene classification model. The experimental results show that, even with high correlation between classes, the system classifies the test samples with 86.79% accuracy, which is higher than state-of-the-art models.
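The sketch below illustrates the described fusion architecture in PyTorch: VGG19 deep features from YOLOv5m-detected object crops (local branch) and from the full image (global branch) are concatenated with eight handcrafted descriptors and passed to a five-layer fully connected classifier over the four scene classes. The feature dimensions, hidden-layer sizes, shared backbone weights, and the placeholder inputs are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of the fusion pipeline, assuming PyTorch/torchvision.
# Layer sizes and feature dimensions are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models


class SceneFusionClassifier(nn.Module):
    NUM_CLASSES = 4  # crosswalk, highway, overpass/tunnel, parking

    def __init__(self, handcrafted_dim=8):
        super().__init__()
        # VGG19 backbone used for both branches: object crops from YOLOv5m
        # (local features) and the full image (global features).
        vgg = models.vgg19(weights=None)  # load pretrained weights in practice
        self.backbone = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
        feat_dim = 512 * 7 * 7  # VGG19 convolutional output for 224x224 input

        fused_dim = 2 * feat_dim + handcrafted_dim
        # Five fully connected layers mapping fused features to the four classes.
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, self.NUM_CLASSES),
        )

    def forward(self, object_crops, full_image, handcrafted):
        # object_crops: (B, 3, 224, 224) regions detected by YOLOv5m (local branch)
        # full_image:   (B, 3, 224, 224) whole scene (global branch)
        # handcrafted:  (B, 8) precomputed fine/coarse descriptors
        local_feat = self.backbone(object_crops)
        global_feat = self.backbone(full_image)
        fused = torch.cat([local_feat, global_feat, handcrafted], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = SceneFusionClassifier()
    logits = model(torch.randn(2, 3, 224, 224),
                   torch.randn(2, 3, 224, 224),
                   torch.randn(2, 8))
    print(logits.shape)  # torch.Size([2, 4])
```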
