Deep Learning-Driven Multi-Modality Image Fusion: Enhancing MRI and CT Registration and Diagnostic Performance

Amita Jajoo, Vijaya B. Musande

Abstract

Accurate diagnosis and treatment planning depend heavily on multi-modality medical imaging, particularly the integration of MRI and CT images. This work aims to build a fast and accurate image registration and fusion framework capable of merging the complementary strengths of MRI and CT. Preprocessing steps such as Gaussian smoothing, contrast enhancement, and normalisation ensured that the source images were of acceptable quality for alignment. Phase correlation with a Fourier shift was used for registration, followed by weighted averaging for fusion. Quantitative metrics, including PSNR, SSIM, and entropy, indicated improved structural similarity and information content in the fused images. A U-Net deep learning model further enhanced edge features and reduced noise, yielding diagnostically superior results. The findings show that the framework achieves effective integration, preserving the structural sharpness of CT and the soft-tissue detail of MRI. Although the technique is effective, further research is needed on more sophisticated fusion methods and on correcting subtle misalignments. This work offers a consistent approach to multi-modality imaging, supporting clinical applications that require clear visualisation of internal anatomy.
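As a rough illustration of the pipeline the abstract describes, the following NumPy-only sketch estimates a global integer translation by phase correlation, aligns the moving image with a circular (Fourier) shift, fuses by weighted average, and computes histogram entropy. This is a minimal assumption-laden sketch, not the authors' implementation: the function names, the synthetic test images, and the restriction to integer translations are all illustrative choices.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) translation that aligns `mov`
    to `ref` via the normalised cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12                    # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dims = np.array(ref.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to signed shifts
    return tuple(int(s) for s in peak)

def fuse_weighted(ct, mri, alpha=0.5):
    """Weighted-average fusion of two aligned, same-sized images."""
    return alpha * ct + (1.0 - alpha) * mri

def entropy(img, bins=256):
    """Shannon entropy (bits) of the image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Synthetic example: a stand-in "CT" slice and a translated "MRI" slice.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (5, -3), axis=(0, 1))
shift = phase_correlation_shift(ref, mov)   # recovers (-5, 3)
aligned = np.roll(mov, shift, axis=(0, 1))  # circular shift back into register
fused = fuse_weighted(ref, aligned, alpha=0.5)
```

Note that plain phase correlation only recovers global integer translations; the sub-pixel and rotational misalignments the abstract mentions as open issues would need extensions such as upsampled correlation or log-polar registration.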
