Traditional deep learning models are often considered “black boxes” because of their lack of interpretability, which limits their clinical use despite their success in classification tasks. This study aims to improve the interpretability of diagnoses of COVID-19, pneumonia, and tuberculosis from chest X-ray images using an enhanced DenseNet201 model within a transfer learning framework. We incorporated Explainable Artificial Intelligence (XAI) techniques, namely SHAP, LIME, Grad-CAM, and Grad-CAM++, to make the model’s decisions more understandable. To enhance image clarity and detail, we applied preprocessing methods including a denoising autoencoder, Contrast Limited Adaptive Histogram Equalization (CLAHE), and gamma correction. An ablation study was conducted to identify the optimal parameters for the proposed approach. The model’s performance was compared with that of other transfer learning-based models, such as EfficientNetB0, InceptionV3, and LeNet, using standard evaluation metrics. The configuration that included data augmentation achieved the best results, with 99.20% accuracy and 99% precision and recall, demonstrating the critical role of data augmentation in improving model performance. SHAP and LIME provided significant insight into the model’s decision-making process, while Grad-CAM and Grad-CAM++ highlighted the specific image features and regions that influenced the model’s classifications; together, these techniques enhanced transparency and trust in AI-assisted diagnosis. Finally, we developed an Android-based system built on the best-performing model to support medical specialists in their decision-making.
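To make the preprocessing stage concrete, the following is a minimal sketch of CLAHE followed by gamma correction using OpenCV. The clip limit, tile size, and gamma value are illustrative assumptions, as the abstract does not state them, and the denoising-autoencoder stage is omitted here.

```python
import cv2
import numpy as np

def preprocess_xray(path, clip_limit=2.0, tile_grid=(8, 8), gamma=1.2):
    """Apply CLAHE and gamma correction to a grayscale chest X-ray."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # CLAHE: equalize contrast locally to bring out lung-field detail
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    img = clahe.apply(img)

    # Gamma correction via a lookup table: out = 255 * (in / 255) ** (1 / gamma)
    table = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                     dtype=np.uint8)
    img = cv2.LUT(img, table)

    # Resize to the 224x224 input expected by DenseNet201 and replicate the
    # single channel to three, as ImageNet-pretrained weights require
    img = cv2.resize(img, (224, 224))
    return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
```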
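The transfer learning setup can be sketched as below, assuming an ImageNet-pretrained DenseNet201 backbone with frozen convolutional features and a small classification head. The class count, dropout rate, and learning rate are assumptions for illustration, not the paper’s reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

NUM_CLASSES = 4  # assumed: COVID-19, pneumonia, tuberculosis, and normal

# ImageNet-pretrained backbone with its classifier head removed
base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for initial training

# Built with the functional API so internal DenseNet layers stay
# addressable by name (needed for Grad-CAM below)
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)  # assumed regularization setting
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Building the model with the functional API, rather than wrapping the backbone in a Sequential container, keeps every DenseNet layer reachable through `model.get_layer`, which the Grad-CAM sketch that follows relies on.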
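Grad-CAM, one of the XAI techniques applied, can be sketched as follows for the functional model above. This is a generic Grad-CAM implementation rather than the study’s exact code; the layer name follows Keras’s DenseNet201 naming convention and should be verified with `model.summary()`.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name="conv5_block32_concat"):
    """Return a [0, 1] Grad-CAM heatmap for one preprocessed image (H, W, 3)."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...].astype("float32"))
        class_idx = int(tf.argmax(preds[0]))  # explain the predicted class
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    # Channel weights: gradients global-average-pooled over spatial dims
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)             # keep positively contributing regions
    cam /= tf.reduce_max(cam) + 1e-8  # normalize to [0, 1]
    return cam.numpy()
```

The resulting heatmap is typically resized to the input resolution and overlaid on the X-ray to highlight the regions driving the classification; Grad-CAM++ refines the channel weighting with higher-order gradient terms.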