Lagrange-Guided Bayesian Machine Learning Inversion (LGBMLI): A Mathematical Framework for Inverse Cubic Sensor Calibration under Uncertainty

Ganti Srikanth, Gopinathan Sudheer, S Uma Devi

Abstract

Cubic polynomial transformations are commonly employed in sensor systems to represent nonlinear responses, balancing precision against computational efficiency. The inverse problem of retrieving the physical input from the measured output is essential for sensor calibration but is complicated by multiple roots, noise sensitivity, and parameter uncertainty. This work introduces the Lagrange-Guided Bayesian Machine Learning Inversion (LGBMLI) framework, a mathematically grounded methodology that replaces Cardano’s root formula with a numerically stable Lagrange analytical solver, integrates a corrected Bayesian posterior formulation, and employs ensemble learning for root selection. A formal theorem for root-region classification and a lemma for the monotonicity condition are established with complete proofs, providing a rigorous justification for the machine-learning features. The inverse problem is reformulated as posterior inference rather than exact root recovery: p(x | y) ∝ p(y | x) p(x). Results are reported as mean ± standard deviation over repeated runs on synthetic datasets in which 80–85% of cases have multiple roots. The proposed methodology achieves 94.6% ± 1.8% root-selection accuracy while reducing MSE reconstruction error by 34% relative to Cardano’s method and by 25.45% relative to Newton–Raphson. Empirical coverage of 94.2% for nominal 95% credible intervals (miscoverage ≈ 5.8%) indicates well-calibrated uncertainty quantification with a small residual approximation gap. Identifiability conditions, stability analysis, computational complexity, and extensions to outlier-contaminated and lacunary data are also discussed.
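As a concrete illustration of the posterior reformulation above, the following Python sketch enumerates the candidate real roots of a cubic sensor model and scores each one with an unnormalized log-posterior that combines a Gaussian measurement likelihood with a Gaussian prior on the input. This is a minimal sketch under stated assumptions, not the LGBMLI implementation: the coefficients, noise level, and prior parameters are hypothetical, numpy.roots stands in for the paper’s Lagrange analytical solver, and a simple MAP rule replaces the ensemble-learning root selector.

    # Illustrative only: inversion of a cubic sensor response y = a*x^3 + b*x^2 + c*x + d,
    # treated as posterior inference p(x | y) ∝ p(y | x) p(x).  The coefficients,
    # noise level, and prior parameters below are hypothetical, and numpy.roots is a
    # numerical stand-in for the analytical cubic solver described in the paper.
    import numpy as np

    a, b, c, d = 0.5, -1.2, 0.8, 0.3     # hypothetical calibration coefficients
    sigma_y = 0.05                        # assumed measurement-noise standard deviation
    prior_mu, prior_sigma = 0.5, 1.0      # assumed Gaussian prior on the input x

    def forward(x):
        """Cubic sensor response y = f(x)."""
        return a * x**3 + b * x**2 + c * x + d

    def log_posterior(x, y_obs):
        """Unnormalized log p(x | y_obs) = log p(y_obs | x) + log p(x)."""
        log_lik = -0.5 * ((y_obs - forward(x)) / sigma_y) ** 2
        log_prior = -0.5 * ((x - prior_mu) / prior_sigma) ** 2
        return log_lik + log_prior

    def invert(y_obs):
        """Return the real candidate roots of f(x) = y_obs and the MAP choice."""
        roots = np.roots([a, b, c, d - y_obs])        # one or three real roots
        real_roots = roots[np.abs(roots.imag) < 1e-9].real
        scores = log_posterior(real_roots, y_obs)     # posterior score per candidate
        return real_roots, real_roots[np.argmax(scores)]

    # Simulate a measurement of a known input, then invert it.
    rng = np.random.default_rng(0)
    x_true = 0.3
    y_obs = forward(x_true) + rng.normal(0.0, sigma_y)
    candidates, x_map = invert(y_obs)
    print("candidate roots:", candidates, "| selected (MAP):", x_map)

When the candidates are exact roots of f(x) = y_obs, the likelihood term is nearly equal across them and the selection is driven by the prior; with noisy measurements or perturbed coefficients, the likelihood term also differentiates the candidates.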
