Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis

From IPD-Institutsseminar
Speaker Philipp Weinmann
Talk type Bachelor's thesis
Advisor Clemens Müssener
Date Fri, 11 June 2021
Abstract The goal of this bachelor's thesis was to analyse classification results using SHAP, a method published in 2017. Explaining how an artificial neural network reaches a decision is an interdisciplinary research subject combining computer science, mathematics, psychology and philosophy. We analysed these explanations from a psychological standpoint, and after presenting our findings we propose a method to improve the interpretability of text explanations using text hierarchies, with little or no loss of accuracy. A secondary goal was to test a framework developed for analysing a multitude of explanation methods. This framework will be presented alongside our findings, together with guidance on how to use it for your own analyses. This bachelor's thesis is addressed to people familiar with artificial neural networks and other machine learning methods.
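As a rough illustration of the text-hierarchy idea mentioned in the abstract: because SHAP attributions are additive, token-level scores can be coarsened to a higher level of the hierarchy (e.g. sentences) by summing them within each span. The sketch below is a hypothetical, self-contained example of such an aggregation step; it is not code from the thesis, and the function name, inputs and sample scores are invented for illustration.

```python
from typing import List, Tuple

def aggregate_to_sentences(
    tokens: List[str],
    attributions: List[float],
    sentence_ends: List[int],  # index just past the last token of each sentence
) -> List[Tuple[str, float]]:
    """Sum token-level attribution scores within each sentence span.

    Relies on the additivity of SHAP values: the score of a group of
    tokens is the sum of the scores of its members.
    """
    result = []
    start = 0
    for end in sentence_ends:
        sentence = " ".join(tokens[start:end])
        score = sum(attributions[start:end])
        result.append((sentence, score))
        start = end
    return result

# Invented token-level attributions for a two-sentence example.
tokens = ["The", "movie", "was", "great", ".", "I", "hated", "the", "ending", "."]
attrs  = [0.01, 0.05, 0.02, 0.60, 0.0, 0.01, -0.70, -0.02, -0.10, 0.0]
for sentence, score in aggregate_to_sentences(tokens, attrs, [5, 10]):
    print(f"{score:+.2f}  {sentence}")
```

Presenting one signed score per sentence instead of one per token is one plausible way to trade attribution granularity for readability, which is the kind of interpretability/accuracy trade-off the abstract alludes to.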