On the Interpretability of Anomaly Detection via Neural Networks

From the IPD-Institutsseminar
Speaker Marco Sturm
Talk type Master's thesis
Advisor Edouard Fouché
Date Fri, 12 October 2018
Abstract Verifying anomaly detection results in an unsupervised use case is challenging: for large datasets, manual labelling is economically infeasible. In this thesis we create explanations that help to verify and understand the detected anomalies. We develop a rule generation algorithm that describes frequent patterns in the output of autoencoders. Since the number of rules is significantly lower than the number of anomalies, finding explanations for the rules requires much less effort than finding explanations for every single anomaly. We evaluate the approach on a real-world use case, where it significantly reduces the effort domain experts need to understand the detected anomalies, although the missing labels prevent us from quantifying its usefulness in exact numbers. Therefore, we also evaluate the approach on benchmark datasets.
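The core idea of summarising many anomalies by a few rules can be illustrated with a minimal frequent-pattern sketch. This is not the thesis's actual algorithm; it assumes each anomaly has already been reduced to the set of features whose autoencoder reconstruction error exceeded a threshold, and all feature names below are hypothetical:

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(anomalies, min_support, max_len=3):
    """Find attribute sets that occur in at least `min_support`
    anomalous records. Each frequent set can be read as a rule
    describing a whole group of anomalies at once."""
    counts = Counter()
    for record in anomalies:
        attrs = sorted(record)  # canonical order so identical sets match
        for k in range(1, min(max_len, len(attrs)) + 1):
            for combo in combinations(attrs, k):
                counts[combo] += 1
    return {pattern: c for pattern, c in counts.items() if c >= min_support}

# Hypothetical anomalies: sets of features with high reconstruction error.
anomalies = [
    {"temp_high", "pressure_low"},
    {"temp_high", "pressure_low", "valve_open"},
    {"temp_high", "pressure_low"},
    {"valve_open"},
]
rules = frequent_patterns(anomalies, min_support=3)
# The pattern ("pressure_low", "temp_high") covers three of the four
# anomalies, so an expert inspects one rule instead of three records.
```

A domain expert would then explain each surviving rule once, rather than each anomaly individually, which is where the effort reduction described in the abstract comes from.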