A Multi-Scale Interpretable Deep Learning Framework Based on Learnable Wavelet Representations

V.S.S.V.D. Prakash, G. Sudheer

Abstract

This paper introduces a deep learning framework that employs learnable wavelet representations to achieve multi-scale interpretability. Wavelet filters are incorporated as trainable components within the model, utilizing compact B-spline bases and regularization to satisfy vanishing-moment and time-frequency localization properties on discrete signals. Instead of enforcing strict orthogonality or perfect reconstruction, the learnable filters are analyzed from an approximate frame perspective, enabling stable and consistent attribution of features across scales. The framework’s stability is evaluated with respect to input perturbations and scale truncation. Numerical experiments on time-series and biomedical signal datasets demonstrate that the proposed method achieves predictive performance comparable to standard convolutional networks, while offering enhanced multi-scale interpretability relative to typical post-hoc methods. This approach integrates multi-resolution analysis with deep learning in a practical manner, without reliance on ideal wavelet assumptions.
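The abstract describes trainable wavelet filters built from compact B-spline bases, with regularization used to encourage vanishing moments rather than enforcing exact orthogonality. The sketch below illustrates that idea in minimal form, assuming nothing beyond the abstract: a filter is parameterized as a linear combination of shifted discrete B-spline (binomial) kernels, and a quadratic penalty on its low-order moments is minimized by gradient descent while the filter is kept at unit energy. All function names (`binomial_kernel`, `bspline_dictionary`, `learn_filter`) and hyperparameters are hypothetical, and plain numpy stands in for whatever training framework the paper actually uses.

```python
# Hypothetical sketch of a B-spline-parameterized filter trained to have
# (approximately) vanishing moments; not the paper's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

def binomial_kernel(order):
    """Discrete B-spline kernel: repeated convolution of the box [1/2, 1/2]."""
    k = np.array([1.0])
    for _ in range(order):
        k = np.convolve(k, [0.5, 0.5])
    return k

def bspline_dictionary(order, length):
    """Columns are integer shifts of the binomial kernel within `length` taps."""
    k = binomial_kernel(order)
    B = np.zeros((length, length - len(k) + 1))
    for j in range(B.shape[1]):
        B[j:j + len(k), j] = k
    return B

def moment_matrix(length, n_moments):
    """Row p holds n**p (row-normalized), so M @ h gives the first moments of h."""
    n = np.arange(length, dtype=float)
    M = np.stack([n ** p for p in range(n_moments)])
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def learn_filter(length=16, order=3, n_moments=2, steps=500, lr=0.5):
    """Gradient descent on the moment penalty 0.5 * ||M h||^2 with h = B @ c."""
    B = bspline_dictionary(order, length)
    M = moment_matrix(length, n_moments)
    c = rng.standard_normal(B.shape[1])
    for _ in range(steps):
        grad = B.T @ (M.T @ (M @ (B @ c)))   # d/dc of the moment penalty
        c -= lr * grad
        c /= np.linalg.norm(B @ c)           # unit energy avoids the trivial zero filter
    return B @ c

h = learn_filter()
moments = moment_matrix(len(h), 2) @ h
print("residual moments:", moments)         # both driven close to zero
```

The unit-energy projection plays the role the abstract assigns to frame-style analysis: the filter is never forced to be orthogonal or perfectly reconstructing, only well-conditioned enough for stable multi-scale attribution, with the vanishing-moment property imposed softly through the penalty.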
