Bridging the Gap: Enhancing Trust and Transparency in Machine Learning with Explainable AI
DOI: https://doi.org/10.47363/JAICC/2022(1)E123
Keywords: Machine Learning, Explainable AI, Transparency, Interpretability, Artificial Intelligence
Abstract
Explainable Artificial Intelligence (XAI) aims to address the complexity and opacity of AI systems, often referred to as "black boxes." It seeks to provide transparency and build trust in AI, particularly in domains where decisions affect safety, security, and ethical considerations. XAI approaches fall into three categories: opaque systems, which offer no explanation for their predictions; interpretable systems, which provide some level of justification; and comprehensible systems, which enable users to reason about and interact with the AI system. Automated reasoning plays a crucial role in achieving truly explainable AI. This paper presents current methodologies and challenges, and highlights the importance of integrating automated reasoning into XAI. It is grounded in a thorough literature review and case studies, offering insights into practical applications and future directions for XAI.
License
Copyright (c) 2022 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.