Utilization of GAN for Automatic Evaluation of Counterfactuals: Challenges and Opportunities
DOI: https://doi.org/10.47363/JAICC/2024(3)273

Keywords: Explainable Artificial Intelligence (XAI), Counterfactual Explanations, Generative Adversarial Networks (GAN), Computer Vision

Abstract
Over the past few years, Explainable Artificial Intelligence (XAI) has grown significantly because successful deep learning models remain difficult to understand and interpret. XAI aims to make the judgments and classifications produced by neural networks more interpretable to humans. In XAI research, counterfactual explanations have proven effective at explaining a model's mistakes by describing what changes to a particular image would yield the correct classification. However, evaluating counterfactuals systematically is challenging. This paper reports on the challenges of using Generative Adversarial Networks (GANs) to assess the quality of counterfactuals, using the CUB-200-2011 birds dataset as a case study.
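The core idea behind a counterfactual explanation — finding the smallest change to an input that flips the model's predicted class — can be illustrated with a minimal sketch. This is not the paper's method (which concerns images and GAN-based evaluation); it is a hypothetical toy example on a linear classifier, with all names (`counterfactual`, `lam`, the hinge-style loss) chosen for illustration only:

```python
# Hedged sketch, NOT the paper's algorithm: a gradient-based search for a
# counterfactual on a toy linear classifier. Real image counterfactuals use
# deep models and generative priors; this only shows the underlying idea of
# "smallest input change that attains the desired classification".
import numpy as np

def predict(W, b, x):
    """Class scores for input x under a linear model W @ x + b."""
    return W @ x + b

def counterfactual(W, b, x, target, lr=0.1, steps=500, lam=0.1):
    """Search for a nearby x' that the model classifies as `target`.

    Descends a hinge-style objective (push the target score above the
    current top score) plus lam * ||x' - x||^2, which keeps the
    counterfactual close to the original input.
    """
    x_cf = x.copy()
    for _ in range(steps):
        scores = predict(W, b, x_cf)
        pred = int(np.argmax(scores))
        if pred == target:
            break  # desired class reached: x_cf is the counterfactual
        # Gradient of (score[pred] - score[target]) + lam * ||x_cf - x||^2
        grad = (W[pred] - W[target]) + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

# Toy 2-class model in 2-D: class 0 scores x[0], class 1 scores x[1].
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 0.0])            # originally classified as class 0
x_cf = counterfactual(W, b, x, target=1)
```

The closeness term `lam * ||x' - x||^2` is what makes the result a useful explanation: it identifies a minimal edit rather than an arbitrary input of the target class — and, as the abstract notes, judging whether such edits are realistic is exactly where systematic evaluation becomes hard.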
License
Copyright (c) 2024 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.