Explainable Artificial Intelligence In Radiological Decision Support Systems: A Comprehensive Review

Authors

  • Babita, Dr. Prakash Mathew

Keywords:

Explainable Artificial Intelligence; Radiology; Medical Imaging; Deep Learning; Clinical Decision Support Systems

Abstract

The rapid growth of artificial intelligence (AI), particularly deep learning, has transformed radiological image analysis, enabling automated disease detection, classification, and decision support with high accuracy. Despite these advances, widespread clinical adoption of AI in radiology remains limited, largely because of the opaque, black-box nature of many deep learning models. This lack of transparency raises serious concerns about trust, accountability, bias, ethical standards, and legal responsibility in high-stakes healthcare settings. Explainable Artificial Intelligence (XAI) has therefore become essential for addressing these concerns by providing clear insight into how models reach their decisions.

This review article examines the existing research on explainable AI techniques in radiological decision support systems. It surveys the principal deep learning methods used in radiology, categorizes explainability approaches into visualization-based, model-agnostic, and hybrid techniques, and evaluates their capacity to strengthen clinical trust, diagnostic accuracy, and ethical accountability. The review also considers the clinical, social, and regulatory dimensions of XAI, identifies current limitations, and outlines directions for future research. By integrating theoretical, methodological, and practical perspectives, it positions explainable AI as a cornerstone of the responsible and sustainable adoption of AI in radiology.
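To make the model-agnostic category mentioned above concrete, the following is a minimal, self-contained sketch (not drawn from this article) of occlusion sensitivity, one of the simplest perturbation-based explanation techniques related to the methods the review surveys (e.g., Grad-CAM, LIME, SHAP). The `toy_model` scoring function and all identifiers here are illustrative assumptions, standing in for a trained classifier:

```python
# Illustrative occlusion-sensitivity map: a model-agnostic explanation.
# The "model" is a toy scoring function, not a trained network.
from typing import Callable, List

Image = List[List[float]]

def toy_model(img: Image) -> float:
    # Hypothetical classifier score: responds only to bright pixels in
    # the upper-left 2x2 quadrant (a stand-in for a "lesion detector").
    return sum(img[r][c] for r in range(2) for c in range(2))

def occlusion_map(model: Callable[[Image], float],
                  img: Image, patch: int = 2) -> Image:
    """Importance of each patch = score drop when that patch is zeroed."""
    base = model(img)
    h, w = len(img), len(img[0])
    heat = [[0.0] * w for _ in range(h)]
    for r0 in range(0, h, patch):
        for c0 in range(0, w, patch):
            occluded = [row[:] for row in img]  # deep-enough copy
            for r in range(r0, min(r0 + patch, h)):
                for c in range(c0, min(c0 + patch, w)):
                    occluded[r][c] = 0.0       # mask this patch
            drop = base - model(occluded)
            for r in range(r0, min(r0 + patch, h)):
                for c in range(c0, min(c0 + patch, w)):
                    heat[r][c] = drop          # record importance
    return heat

img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
heat = occlusion_map(toy_model, img)
print(heat[0][0], heat[3][3])  # 4.0 0.0 — only the upper-left patch matters
```

The resulting heat map plays the same role as a saliency overlay in radiology: it tells a clinician *which regions* drove the prediction, without requiring access to the model's internals.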

References

Litjens, G., Kooi, T., Bejnordi, B. E., et al. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42, 60–88.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

Esteva, A., Robicquet, A., Ramsundar, B., et al. (2019). A guide to deep learning in healthcare. Nature Medicine, 25(1), 24–29.

Topol, E. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25, 44–56.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Holzinger, A., Langs, G., Denk, H., et al. (2019). What do we need to build explainable AI systems for the medical domain? arXiv preprint arXiv:1712.09923.

Selvaraju, R. R., Cogswell, M., Das, A., et al. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision (ICCV).

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD Conference.

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems (NeurIPS).

Tjoa, E., & Guan, C. (2020). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813.

Samek, W., Montavon, G., Vedaldi, A., et al. (2021). Explainable AI: Interpreting, explaining and visualizing deep learning. Springer.

European Commission. (2021). Ethics guidelines for trustworthy AI. Brussels.

Shen, D., Wu, G., & Suk, H. I. (2017). Deep learning in medical image analysis. Annual Review of Biomedical Engineering, 19, 221–248.

Rajpurkar, P., Irvin, J., Zhu, K., et al. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.

Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745–e750.

How to Cite

Babita, Dr. Prakash Mathew. (2024). Explainable Artificial Intelligence In Radiological Decision Support Systems: A Comprehensive Review. International Journal of Engineering Science & Humanities, 14(1), 135–143. Retrieved from https://www.ijesh.com/j/article/view/570
