Bridging the Gap: Enhancing Trust and Transparency in Machine Learning with Explainable AI

Authors

  • Pushkar Mehendale, San Francisco, CA, USA.

DOI:

https://doi.org/10.47363/JAICC/2022(1)E123

Keywords:

Machine Learning, Explainable AI, Transparency, Interpretability, Artificial Intelligence

Abstract

Explainable Artificial Intelligence (XAI) aims to address the complexity and opacity of AI systems, often referred to as "black boxes." It seeks to provide transparency and build trust in AI, particularly in domains where decisions affect safety, security, and ethical considerations. XAI approaches fall into three categories: opaque systems that offer no explanation for their predictions, interpretable systems that provide some level of justification, and comprehensible systems that enable users to reason about and interact with the AI system. Automated reasoning plays a crucial role in achieving truly explainable AI. This paper surveys current methodologies and challenges in XAI and argues for the integration of automated reasoning. Grounded in a thorough literature review and case studies, it offers insights into practical applications and future directions for the field.
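To make the abstract's middle category concrete, the following is a minimal sketch (not taken from the paper) of an "interpretable system" in practice: a post-hoc explainer attaches a per-feature justification to an otherwise opaque model's prediction. It uses the SHAP library's TreeExplainer with a scikit-learn random forest; the dataset and model are illustrative assumptions, not the paper's case studies.

# Illustrative only: a post-hoc, per-prediction justification for an
# otherwise opaque model, using SHAP (dataset/model are assumptions).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # shape: (1, n_features)

# Each value quantifies one feature's contribution to this single
# prediction: the kind of justification a user can inspect.
for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name}: {contribution:+.4f}")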

Author Biography

  • Pushkar Mehendale, San Francisco, CA, USA.

Published

2022-12-24

How to Cite

Bridging the Gap: Enhancing Trust and Transparency in Machine Learning with Explainable AI. (2022). Journal of Artificial Intelligence & Cloud Computing, 1(4), 1-4. https://doi.org/10.47363/JAICC/2022(1)E123
