Institutsseminar/2021-06-11

Session (all sessions)
Date: Fri, 11 June 2021, 11:30
Duration: 70 min
Room: https://conf.dfn.de/webapp/conference/979160755
Previous session: Fri, 21 May 2021
Next session: Fri, 18 June 2021

Talks

Speaker: Wenrui Zhou
Title: Outlier Analysis in Live Systems from Application Logs
Talk type: Proposal
Advisor: Edouard Fouché
Abstract: Modern computer applications tend to generate massive amounts of logs and have become so complex that it is often difficult to explain why a specific application has failed. In this work we want to detect and explain such failures by detecting outliers in application logs. This is challenging because (1) logs are unstructured, streaming text data, and (2) labelling application logs is labor-intensive and inefficient.

Logs are similar to natural language. Recent advances in deep learning have shown great performance on Natural Language Processing (NLP) tasks. Building on these, we investigate how state-of-the-art sequence-to-sequence frameworks with attention mechanisms can detect and explain outliers in application logs. We plan to compare our framework against state-of-the-art log outlier detectors on existing outlier detection benchmarks.
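The proposal targets sequence-to-sequence models with attention; those are beyond a short snippet, but the underlying intuition (rare log patterns are suspicious) can be sketched with a much simpler, hypothetical frequency baseline. All names and the regex-based templating below are illustrative, not from the thesis:

```python
import math
import re
from collections import Counter

def template(line):
    """Crude log templating: mask numbers so variants of one event match."""
    return re.sub(r"\d+", "<NUM>", line)

def outlier_scores(log_lines):
    """Score each line by the rarity of its template (negative log frequency)."""
    templates = [template(line) for line in log_lines]
    counts = Counter(templates)
    n = len(templates)
    return [-math.log(counts[t] / n) for t in templates]

logs = [
    "connection from 10.0.0.1 port 22",
    "connection from 10.0.0.2 port 22",
    "connection from 10.0.0.3 port 22",
    "kernel panic at address 0xdeadbeef",
]
scores = outlier_scores(logs)  # the panic line receives the highest score
```

A learned sequence model replaces the hand-written template rule with a reconstruction or prediction error per log line, but the scoring idea is the same.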

Speaker: Martin Lange
Title: Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy
Talk type: Proposal
Advisor: Clemens Müssener
Abstract: Explainers for machine learning models help humans and models work together. They build trust in a model's decisions by giving further insight into the decision-making process. However, it is unclear whether this insight can also expose private information. The question of my thesis is whether there is a conflict of objectives between explainability and privacy, and how to measure the effects of this conflict.

I propose two types of attack that can be mounted against explainers: model extraction and extraction of information about the training data. Differential privacy is introduced as a way to measure the privacy breach caused by these attacks. Finally, three specific use cases are presented in which explainers can realistically be abused to breach differential privacy.
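Differential privacy bounds how much any single training record can influence a released value. As a minimal, standalone illustration of the concept (the function and parameters are my own, not from the thesis), the classic Laplace mechanism adds noise scaled to the query's sensitivity divided by the privacy budget epsilon:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for the given sensitivity."""
    rng = rng or random.Random()
    b = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # inverse-CDF sampling of the Laplace distribution
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# small epsilon (strong privacy) means large noise; here epsilon is large,
# so the released value stays close to the true value
noisy = laplace_mechanism(42.0, sensitivity=1.0, epsilon=1000.0, rng=random.Random(0))
```

An explainer breaches differential privacy if its outputs for two datasets differing in one record are more distinguishable than the epsilon bound allows.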

Speaker: Philipp Weinmann
Title: Tuning of Explainable Artificial Intelligence (XAI) Tools in the Field of Text Analysis
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Abstract: The goal of this bachelor's thesis was to analyse classification results using SHAP, a method published in 2017. Explaining how an artificial neural network makes a decision is an interdisciplinary research subject combining computer science, mathematics, psychology and philosophy. We analysed these explanations from a psychological standpoint, and after presenting our findings we propose a method to improve the interpretability of text explanations using text hierarchies, without losing much, if any, accuracy. A secondary goal was to test a framework developed to analyse a multitude of explanation methods. This framework is presented alongside our findings, together with instructions on how to use it to create your own analysis. This bachelor's thesis is addressed to people familiar with artificial neural networks and other machine learning methods.
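SHAP is built on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution across all orderings of the features. As a toy illustration of that quantity (this is not the SHAP library, and the feature names and weights are made up), exact Shapley values for a small additive game:

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings of the feature set (feasible only for small n)."""
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        included = set()
        for f in order:
            before = value_fn(frozenset(included))
            included.add(f)
            phi[f] += value_fn(frozenset(included)) - before
    return {f: total / len(orders) for f, total in phi.items()}

# toy additive "model": a coalition's value is the sum of fixed per-word weights;
# for such a game, each word's Shapley value equals its own weight
weights = {"word_a": 1.0, "word_b": 2.0, "word_c": -0.5}
phi = shapley_values(list(weights), lambda s: sum(weights[f] for f in s))
```

Real models are not additive, which is why SHAP needs approximation schemes; the exact enumeration above grows factorially in the number of features.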