dc.description.abstract | This review examines the viability of developing an Explainable Artificial
Intelligence (XAI) model for anti-phishing detection. It discusses the significance of XAI,
along with its principles, methods and types, challenges, ethical issues, and
vulnerability aspects. The review covers machine learning for phishing detection,
XAI models for phishing detection, the design of appropriate explanation messages
for warnings, feasibility issues, and a comparison with conventional approaches.
The paper further emphasizes the importance of XAI in enhancing the clarity and
interpretability of AI models, surveying different XAI techniques, the difficulty of
balancing explainability with performance, and the ethics of XAI. The evaluation
examines phishing scams, machine learning detection methods, and the advantages
of XAI models. It proposes a thorough strategy for conveying explanatory messages
and assesses the feasibility of creating XAI models. While highlighting the promise
of XAI to improve transparency and interpretability, the review also acknowledges
the difficulties that must be overcome to create scalable and reliable XAI models
for anti-phishing detection. | en_US