
Friday, 7 May 2021, 11:30

iCal (Download)
Web conference: {{{Webkonferenzraum}}}

Speaker: Elena Schediwie
Title: Bachelorarbeit: Local Outlier Factor for Feature-evolving Data Streams
Talk type: Bachelor's thesis
Advisor: Florian Kalinke
Abstract: Outlier detection is a core task of data stream analysis. As such, many algorithms targeting this problem exist, but they tend to treat the data as a so-called row stream, i.e., observations arrive one at a time with a fixed number of features. However, real-world data often has the form of a feature-evolving stream: consider the task of analyzing network data in a data center, where nodes may be added and removed at any time, changing the features of the observed stream. While highly relevant, most existing outlier detection algorithms are not applicable in this setting. Further, increasing the number of features, resulting in high-dimensional data, poses a different set of problems, usually summarized as "the curse of dimensionality".

In this thesis, we propose FeLOF, addressing this challenging setting of outlier detection in feature-evolving and high-dimensional data. Our algorithm extends the well-known Local Outlier Factor algorithm to the feature-evolving stream setting. We employ a variation of StreamHash random hashing projections to create a lower-dimensional feature space embedding, thereby mitigating the effects of the curse of dimensionality. To address non-stationary data distributions, we employ a sliding window approach. FeLOF utilizes efficient data structures to speed up search queries and data updates.

Extensive experiments show that our algorithm achieves state-of-the-art outlier detection performance in the static, row stream and feature-evolving stream settings on real-world benchmark data. Additionally, we provide an evaluation of our StreamHash adaptation, demonstrating its ability to cope with sparsely populated high-dimensional data.
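As a point of reference for the static setting, Local Outlier Factor scoring is available off the shelf; the following is a minimal sketch with scikit-learn on synthetic data, not the FeLOF implementation from the thesis:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
inliers = rng.normal(0, 1, size=(100, 3))   # dense cluster around the origin
outliers = np.array([[8.0, 8.0, 8.0]])      # one far-away point
X = np.vstack([inliers, outliers])

lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(X)                 # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_      # larger = more outlying

print(labels[-1])  # → -1 (the injected point is flagged as an outlier)
```

The LOF score compares each point's local density with that of its neighbors, which is why the isolated point receives the highest score.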

Friday, 7 May 2021, 14:00


Speaker: Gilbert Groten
Title: Automatisches Auflösen von Abkürzungen in Quelltext
Talk type: Master's thesis
Advisor: Tobias Hey
Abstract: Abbreviated source code identifiers are an obstacle to extracting information from source code. In this thesis, abbreviation expansion techniques are developed to resolve such abbreviated identifiers to the intended, unabbreviated terms. On the one hand, the best expansion candidate is chosen using word-embedding-based similarity functions; on the other hand, trigram grammars are used to determine the probability of an expansion candidate. The techniques developed in this thesis build on two methods by Alatawi et al., which use statistical properties of source code abbreviations as well as unigram and bigram grammars to determine the expansion of an abbreviation. The most precise of the techniques developed here (the trigram-based one) correctly expands 70.33% of the abbreviated identifiers in a sample source file, evaluated against a gold standard provided by Alatawi et al., and is thus 3.30 percentage points better than the reimplemented, most precise technique of Alatawi et al.
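The trigram idea can be illustrated with a toy add-one-smoothed language model over token sequences; the corpus, the abbreviation, and the expansion candidates below are invented for illustration and are not from the thesis:

```python
from collections import Counter

# Toy corpus of tokenized identifier phrases (hypothetical example data).
corpus = [
    "get user name from data base".split(),
    "set user name in data base".split(),
    "read configuration file from disk".split(),
]

trigrams = Counter()
bigrams = Counter()
for sent in corpus:
    padded = ["<s>", "<s>"] + sent
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        trigrams[(a, b, c)] += 1
        bigrams[(a, b)] += 1

vocab = {w for s in corpus for w in s}

def trigram_score(tokens):
    """Add-one smoothed trigram probability of a token sequence."""
    padded = ["<s>", "<s>"] + tokens
    p = 1.0
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        p *= (trigrams[(a, b, c)] + 1) / (bigrams[(a, b)] + len(vocab))
    return p

# Which expansion of the abbreviated identifier "getUsrNm" is more plausible?
candidates = ["get user name".split(), "get usage numeric".split()]
best = max(candidates, key=trigram_score)
```

The corpus-attested continuation "get user name" outscores the unattested "get usage numeric", which is exactly the signal an n-gram-based expander exploits.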
Speaker: Niklas Ewald
Title: Identifikation von Rückverfolgbarkeitsverbindungen zwischen Anforderungen mittels Sprachmodellen
Talk type: Bachelor's thesis
Advisor: Tobias Hey
Abstract: Traceability between requirements is an important part of software development. Documented links can be used for tasks such as impact or coverage analyses. Since identifying traceability links by hand is time-consuming and error-prone, automated methods are desirable. Requirements are often refined during development; the resulting requirements can be traced back to the original ones, but they reside on a different level of abstraction, which makes the automatic identification of traceability links harder. Language models trained on large text corpora are a possible solution to this problem. In this thesis, three approaches based on language models were developed and compared: fine-tuning a classification head, exploiting the similarity of the respective sentence embeddings, and an extension of the second approach that first forms clusters. Six publicly available data sets were used to evaluate the approaches. The language model with classification head and the sentence-embedding similarity approach each achieve the best results on three of the data sets, but fall short of the results of other state-of-the-art methods. The fine-tuned language model with classification head achieves a recall of up to 0.96, at a rather low precision of 0.01 to 0.26.
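The similarity-based linking can be sketched as follows; to keep the example self-contained, a bag-of-words vector stands in for the sentence embedding of an actual language model, and the requirements are invented:

```python
import numpy as np
from collections import Counter

def embed(sentence, vocab):
    # Stand-in for a sentence embedding: a bag-of-words count vector.
    counts = Counter(sentence.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

high_level = "The system shall encrypt all stored user data"
refined = ["Stored user data is encrypted with AES-256",
           "The login page shows a password field"]

vocab = sorted({w for s in [high_level] + refined for w in s.lower().split()})
query = embed(high_level, vocab)

# Link the refined requirement whose embedding is most similar.
sims = [cosine(query, embed(r, vocab)) for r in refined]
linked = refined[int(np.argmax(sims))]
```

With a real sentence-embedding model the vectors also capture synonymy across abstraction levels, which the bag-of-words stand-in cannot.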

Friday, 14 May 2021, 11:30


Speaker: Nobel Liaw
Title: Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Proposal)
Talk type: Bachelor's thesis
Advisor: Moritz Renftle
Abstract: Conventional evaluation of an ML classifier uses test data to estimate its expected loss. For "cognitive" ML tasks like image or text classification, this requires that experts annotate a large and representative test data set, which can be expensive.

In this thesis, we explore another approach for estimating the expected loss of an ML classifier. The aim is to enhance test data with additional expert knowledge. Inspired by recent feature attribution techniques, such as LIME or Saliency Maps, the idea is that experts annotate inputs not only with desired classes, but also with desired feature attributions. We then explore different methods to derive a large conventional test data set from a few such feature attribution annotations. We empirically evaluate the loss estimates of our approach against ground-truth estimates on large existing test data sets, with a focus on the trade-off between the number of expert annotations and the achieved estimation accuracy.
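The conventional baseline the thesis compares against, estimating the expected 0-1 loss from expert-annotated test data, can be sketched as follows; the classifier and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classifier: thresholds the first feature.
def classifier(x):
    return (x[:, 0] > 0.5).astype(int)

# Expert-annotated test set (labels are the "desired classes").
X_test = rng.uniform(0, 1, size=(1000, 4))
y_test = (X_test[:, 0] > 0.5).astype(int)
y_test[:50] = 1 - y_test[:50]  # simulate 5% disagreement with the classifier

# Conventional estimate of the expected 0-1 loss.
zero_one_loss = float(np.mean(classifier(X_test) != y_test))
print(zero_one_loss)  # → 0.05
```

The cost driver is the size of `X_test`/`y_test`: every label is a manual expert annotation, which is exactly what the attribution-based approach tries to reduce.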

Speaker: Luc Mercatoris
Title: Erklärbare k-Portfolios von SAT-Solvern (Verteidigung)
Talk type: Bachelor's thesis
Advisor: Jakob Bach
Abstract: The SAT problem is a well-known NP-complete problem from theoretical computer science: deciding whether, for a given propositional formula G, there exists a variable assignment such that G is satisfied.

Portfolio-based methods for solving SAT instances exploit the complementary strengths of a set of different SAT algorithms. From a given set of algorithms, the so-called portfolio, a prediction model selects the algorithm that promises the best performance on the SAT instance at hand.

In this thesis we are particularly interested in explainable portfolios, i.e., portfolios for which one can understand how the prediction arrives at its result. Good explainability results from a small portfolio size on the one hand and from a reduced complexity of the prediction model on the other.

The focus of the thesis is on efficiently finding small portfolios from a larger set of algorithms, as well as on the influence of the portfolio size on the portfolio's performance.
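One common way to find small portfolios, not necessarily the exact method of the thesis, is a greedy search that repeatedly adds the solver that most reduces the portfolio's virtual-best runtime; a sketch with invented runtimes:

```python
# Runtimes (seconds) of four hypothetical solvers on five instances.
runtimes = {
    "A": [10, 200, 15, 300, 12],
    "B": [250, 20, 18, 280, 400],
    "C": [30, 35, 500, 25, 30],
    "D": [240, 260, 270, 290, 310],
}
n_instances = 5

def vbs_runtime(portfolio):
    """Total runtime of the virtual best solver over the portfolio."""
    return sum(min(runtimes[s][i] for s in portfolio)
               for i in range(n_instances))

def greedy_portfolio(k):
    chosen = []
    for _ in range(k):
        best = min((s for s in runtimes if s not in chosen),
                   key=lambda s: vbs_runtime(chosen + [s]))
        chosen.append(best)
    return chosen

portfolio = greedy_portfolio(2)
print(portfolio, vbs_runtime(portfolio))  # → ['A', 'C'] 97
```

Solver C complements A on exactly the instances where A is slow, so the pair beats any single solver; a small portfolio like this is also easier to explain than a large one.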

Speaker: Jonathan Bechtle
Title: Evaluation of Automated Feature Generation Methods
Talk type: Master's thesis
Advisor: Vadim Arzamasov
Abstract: Manual feature engineering is a time-consuming and costly activity when developing new machine learning applications, as it involves the manual labor of a domain expert. Therefore, efforts have been made to automate the feature generation process. However, no large benchmark of these automated feature generation methods exists. It is therefore not obvious which method performs well in combination with specific machine learning models and what the strengths and weaknesses of these methods are.

In this thesis we present an evaluation framework for automated feature generation methods that is integrated into the scikit-learn framework for Python. We integrate nine automated feature generation methods into this framework and evaluate them on 91 datasets for classification problems. The datasets in our evaluation have up to 58 features and 12,958 observations. As machine learning models we investigate five models, including state-of-the-art models like XGBoost.
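A minimal version of such an evaluation in scikit-learn, comparing one simple generation method (polynomial features, chosen here only as an example) against a baseline on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

baseline = make_pipeline(StandardScaler(),
                         LogisticRegression(max_iter=1000))
# "Feature generation" step: add all degree-2 interaction/power terms.
generated = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          StandardScaler(),
                          LogisticRegression(max_iter=1000))

acc_base = float(np.mean(cross_val_score(baseline, X, y, cv=5)))
acc_gen = float(np.mean(cross_val_score(generated, X, y, cv=5)))
```

Wrapping each generation method as a pipeline step is what makes a scikit-learn-integrated benchmark straightforward: every method is scored with the same `cross_val_score` call across models and datasets.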

Friday, 21 May 2021, 11:30


Speaker: Haiko Thiessen
Title: Detecting Outlying Time-Series with Global Alignment Kernels (Defense)
Talk type: Master's thesis
Advisor: Florian Kalinke
Abstract: Detecting outlying time-series poses two challenges: First, labeled training data is rare, as it is costly and error-prone to obtain. Second, algorithms usually rely on distance metrics, which are not readily applicable to time-series data. To address the first challenge, one usually employs unsupervised algorithms. To address the second challenge, existing algorithms employ a feature-extraction step and apply the distance metrics to the extracted features instead. However, feature extraction requires expert knowledge, rendering this approach also costly and time-consuming.

In this thesis, we propose GAK-SVDD. We combine the well-known SVDD algorithm to detect outliers in an unsupervised fashion with Global Alignment Kernels (GAK), bypassing the feature-extraction step. We evaluate GAK-SVDD's performance on 28 standard benchmark data sets and show that it is on par with its closest competitors. Comparing GAK with a DTW-based kernel, GAK improves the median Balanced Accuracy by 4%. Additionally, we extend our method to the active learning setting and examine the combination of GAK and domain-independent attributes.
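The Global Alignment Kernel can be computed with a simple dynamic program that sums the contributions of all alignments between two series; this is a minimal sketch of the Cuturi-style recurrence, and the normalization and local kernel used in the thesis may differ:

```python
import math

def gak(x, y, sigma=1.0):
    """Global Alignment Kernel via dynamic programming (minimal sketch).

    K[i][j] accumulates the local kernel over every monotone alignment
    of the prefixes x[:i] and y[:j]."""
    def k(a, b):  # Gaussian local kernel on scalar observations
        return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

    n, m = len(x), len(y)
    K = [[0.0] * (m + 1) for _ in range(n + 1)]
    K[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            K[i][j] = k(x[i - 1], y[j - 1]) * (
                K[i - 1][j - 1] + K[i - 1][j] + K[i][j - 1])
    return K[n][m]

a = [0.0, 1.0, 2.0]
b = [0.0, 1.0, 2.0]
c = [5.0, 5.0, 5.0]
sim_ab = gak(a, b)  # similar series: high kernel value
sim_ac = gak(a, c)  # dissimilar series: value near zero
```

Because GAK is a proper kernel, it can be plugged directly into kernel methods such as SVDD (e.g., via a precomputed kernel matrix), which is what removes the hand-crafted feature-extraction step.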

Speaker: Kuan Yang
Title: Efficient Verification of Data-Value-Aware Process Models
Talk type: Bachelor's thesis
Advisor: Elaheh Ordoni
Abstract: Verification methods detect unexpected behavior of business process models before their execution. In many process models, verification depends on data values. A data value is a value in the domain of a data object, e.g., $1000 as the price of a product. However, verification of process models with data values often leads to state-space explosion. This problem is more serious when the domain of data objects is large. Existing works that tackle this problem often abstract the domain of data objects. However, the abstraction may lead to a wrong diagnosis when process elements modify the value of data objects.

In this thesis, we provide a novel approach to enable verification of process models with data values, so-called data-value-aware process models. A distinctive feature of our approach is that it supports the modification of data values while preserving the verification results. We show the functionality of our approach by conducting the verification of a real-world application: the German 4G spectrum auction model.

Friday, 21 May 2021, 14:00


Speaker: Fabian Scheytt
Title: Ein modellbasierter Ansatz zur Bewertung der Vollständigkeit von verzahnten Sicherheits- und Risikoanalysen für E/E-Architekturen
Talk type: Master's thesis
Advisor: Emre Taşpolatoğlu
Abstract: Considering cybersecurity in early development phases is becoming increasingly relevant in the automotive industry in order to secure ever more complex vehicles against attacks. Which parts of a system model have already been covered by a model-based security analysis is not unambiguous and can usually only be determined manually with expert knowledge. Existing approaches deliver at best incomplete results in the early concept phase, since the system model exists only as a sketch. This thesis presents a concept that uses a metric to assess the completeness of security analyses already in the early concept phase. From relationships within the system, elements are derived that must be contained in a complete security analysis. This expectation is then compared with the actual security analysis to determine the degree of completeness. The concept was prototypically implemented, and its applicability was evaluated in a case study from the EVITA project.

Friday, 11 June 2021, 11:30


Speaker: Philipp Weinmann
Title: Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Abstract: The goal of this bachelor's thesis was to analyze classification results using SHAP, a method published in 2017. Explaining how an artificial neural network makes a decision is an interdisciplinary research subject combining computer science, mathematics, psychology, and philosophy. We analyzed these explanations from a psychological standpoint, and after presenting our findings we propose a method to improve the interpretability of text explanations using text hierarchies, without losing much (if any) accuracy. A secondary goal was to test a framework developed to analyze a multitude of explanation methods. This framework is presented alongside our findings, together with guidance on how to use it to create your own analysis. This thesis is addressed to people familiar with artificial neural networks and other machine learning methods.

Friday, 18 June 2021, 11:30


Speaker: Aleksandr Eismont
Title: Integrating Structured Background Information into Time-Series Data Monitoring of Complex Systems
Talk type: Bachelor's thesis
Advisor: Pawel Bielski
Abstract: Monitoring of time series data is increasingly important due to the massive amounts of data generated by complex systems such as industrial production lines, meteorological sensor networks, or cloud computing centers. Typical time series monitoring tasks include forecasting future values, detecting outliers, and computing dependencies.

However, existing methods for time series monitoring tend to ignore background information, such as relationships between components or the process structure, that is available for almost any complex system. Such background information gives context to the time series data and can potentially improve the performance of time series monitoring tasks.

In this bachelor's thesis, we show how to incorporate structured background information to improve three different time series monitoring tasks. We perform experiments on data from a cloud computing center, where we extract background information from system traces. Additionally, we investigate different representations and qualities of background information and conclude that its usefulness is independent of the concrete time series monitoring task.

Friday, 25 June 2021, 11:30


Speaker: Nobel Liaw
Title: Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Final BA Presentation)
Talk type: Bachelor's thesis
Advisor: Moritz Renftle
Abstract: To accurately evaluate the loss of cognitive ML models, e.g., text or image classifiers, one usually needs a large amount of test data annotated manually by experts. For the estimate to be accurate, the test data should be representative; otherwise it is hard to assess whether a model overfits, i.e., relies on spurious features of the images for its predictions. With techniques such as feature attribution, one can compare the features the model considers important with one's own expectations and thus gain confidence in whether to trust the model. In this work, we propose a method that estimates the loss of image classifiers based on feature-attribution techniques. We use the classic approach to loss estimation as a benchmark for our proposed method. Our analysis reveals that the proposed method yields a loss estimate similar to that of the classic approach, given a good image classifier and representative test data. Based on our experiments, we expect the proposed method to give a better loss estimate than the classic approach when the test data is biased and the image classifier overfits.

Friday, 25 June 2021, 14:00


Speaker: Julian Roßkothen
Title: Analyse von KI-Ansätzen für das Trainieren virtueller Roboter mit Gedächtnis
Talk type: Bachelor's thesis
Advisor: Daniel Zimmermann
Abstract: In this thesis, several recurrent neural networks are compared.

We examine LSTMs, GRUs, CTRNNs, and Elman networks. The networks are evaluated on the task of memorizing a point and subsequently reaching for it with a virtual robot arm.

For LSTM, GRU, and Elman networks, we also examine how the networks solve the task when each neuron can only access its own memory.

It turns out that LSTMs and GRUs score considerably better in the experiments than CTRNNs and Elman networks. In addition, the computation time and the relationship between the number of trainable parameters and the experimental results are compared.
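How the gate structure drives the number of trainable parameters can be seen from the standard per-layer formulas; bias conventions vary between frameworks (e.g., PyTorch uses two bias vectors per gate), so this sketch counts one bias vector per gate:

```python
def rnn_params(kind, d, h):
    """Trainable parameters of one recurrent layer with input size d and
    hidden size h: each gate has an input weight matrix (h x d), a
    recurrent weight matrix (h x h), and one bias vector (h)."""
    gates = {"elman": 1, "gru": 3, "lstm": 4}[kind]
    return gates * (h * (d + h) + h)

d, h = 10, 32  # illustrative layer sizes, not the thesis' configuration
counts = {k: rnn_params(k, d, h) for k in ("elman", "gru", "lstm")}
print(counts)  # → {'elman': 1376, 'gru': 4128, 'lstm': 5504}
```

The 4x/3x parameter overhead of LSTMs and GRUs over Elman networks is the price of their gated memory, which is one axis of the parameter-count-versus-result comparison above.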

Speaker: Lukas Bach
Title: Automatically detecting Performance Regressions
Talk type: Master's thesis
Advisor: Robert Heinrich
Abstract: One of the most important aspects of software engineering is system performance. Common approaches to verify acceptable performance include running load tests on deployed software. However, complicated workflows and requirements, such as the necessity of deployments and extensive manual analysis of load test results, cause tests to be performed very late in the development process, so feedback on potential performance regressions becomes available long after they were introduced.

With this thesis, we propose PeReDeS, an approach that integrates into the development cycle of modern software projects and explicitly models an automated performance regression detection system that provides feedback quickly and reduces the manual effort for setup and load test analysis. PeReDeS is embedded into pipelines for continuous integration, manages the load test execution and lifecycle, processes load test results, and makes feedback available to the authoring developer via reports on the coding platform. We further propose a method for detecting deviations in performance on load test results based on Welch's t-test. The method is adapted to the context of performance regression detection and integrated into the PeReDeS detection pipeline. We implemented our approach and evaluated it with a user study and a data-driven study to assess the usability and accuracy of our method.
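Welch's t-test itself is available in SciPy; the following sketches flagging a regression between two load-test runs, with illustrative data and thresholds rather than the thesis' adapted method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Response times (ms) from load tests before and after a commit.
before = rng.normal(100, 5, size=200)
after = rng.normal(110, 5, size=200)  # injected regression: +10 ms mean

# Welch's t-test: unlike Student's t-test, it does not assume equal
# variances in the two samples.
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)

# Flag a regression only if the shift is significant AND slower.
regression_detected = p_value < 0.01 and after.mean() > before.mean()
```

Requiring both significance and a slowdown direction avoids flagging improvements; a production pipeline would additionally need a minimum effect size so that tiny but significant shifts do not spam developers.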

Speaker: Jan Wittler
Title: Derivation of Change Sequences from State-Based File Differences for Delta-Based Model Consistency
Talk type: Master's thesis
Advisor: Timur Sağlam
Abstract: In view-based software development, views may share concepts and thus contain redundant or dependent information. Keeping the individual views synchronized is a crucial property to avoid inconsistencies in the system. In approaches based on a Single Underlying Model (SUM), inconsistencies are avoided by establishing the SUM as a single source of truth from which views are projected. To synchronize updates from views to the SUM, delta-based consistency preservation is commonly applied. This requires the views to provide fine-grained change sequences which are used to incrementally update the SUM. However, the functionality of providing these change sequences is rarely found in real-world applications. Instead, only state-based differences are persisted. Therefore, it is desirable to also support views which provide state-based differences in delta-based consistency preservation. This can be achieved by estimating the fine-grained change sequences from the state-based differences.

This thesis evaluates the quality of estimated change sequences in the context of model consistency preservation. To derive such sequences, matching elements across the compared models need to be identified and their differences need to be computed. We evaluate a sequence derivation strategy that matches elements based on their unique identifier and one that establishes a similarity metric between elements based on the elements’ features. As an evaluation baseline, different test suites are created. Each test consists of an initial and changed version of both a UML class diagram and consistent Java source code. Using the different strategies, we derive and propagate change sequences based on the state-based difference of the UML view and evaluate the outcome in both domains. The results show that the identity-based matching strategy is able to derive the correct change sequence in almost all (97 %) of the considered cases. For the similarity-based matching strategy we identify two reoccurring error patterns across different test suites. To address these patterns, we provide an extended similarity-based matching strategy that is able to reduce the occurrence frequency of the error patterns while introducing almost no performance overhead.

Friday, 2 July 2021, 11:30

(No talks)

Friday, 9 July 2021, 14:00


Speaker: Dennis Grötzinger
Title: Exploring the Robustness of the Natural Language Inference Capabilities of T5
Talk type: Bachelor's thesis
Advisor: Jan Keim
Abstract: Large language models like T5 perform excellently on various NLI benchmarks. However, it has been shown that even small changes in the structure of these tasks can significantly reduce accuracy. I build upon this insight and explore how robust the NLI skills of T5 are in three scenarios. First, I show that T5 is robust to some variations in the MNLI pattern, while others degrade performance significantly. Second, I observe that some other patterns that T5 was trained on can be substituted for the MNLI pattern and still achieve good results. Third, I demonstrate that the MNLI pattern translates well to other NLI datasets, even improving accuracy by 13% in the case of RTE. All things considered, I conclude that the robustness of the NLI skills of T5 depends on which alterations are applied.

Friday, 16 July 2021, 11:30


Speaker: Florian Leiser
Title: Modelling Dynamical Systems using Transition Constraints
Talk type: Master's thesis
Advisor: Pawel Bielski
Abstract: Despite the promising performance of data science approaches in various applications, in industrial research and development the results can often be unsatisfactory because costly experiments yield only small datasets to work with. Theory-guided Data Science (TGDS) can solve the problem of insufficient data by combining existing industrial domain knowledge with data science approaches.

In dynamical systems such as gas turbines, transition phases occur after a change in the input control signal. Domain knowledge about the steepness of these transitions can potentially help with modeling such systems using data science approaches. TGDS approaches already exist that use information about the limits of the values. However, it is currently not clear how to incorporate information about the steepness of the transitions with them.

In this thesis, we develop three different TGDS approaches to include these transition constraints in recurrent neural networks (RNNs) to improve the modeling of input-output behavior of dynamical systems. We evaluate the approaches on synthetic and real time series data by varying data availability and different degrees of steepness. We conclude that the TGDS approaches are especially helpful for flat transitions and provide a guideline on how to use the available transition constraints in real world problems. Finally, we discuss the required degree of domain knowledge and intellectual implementation effort of each approach.
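One of several conceivable ways to encode such a transition constraint is a penalty term added to the training loss; the steepness bound and penalty weight below are hypothetical, not values from the thesis:

```python
import numpy as np

def constrained_loss(y_pred, y_true, max_step=0.5, lam=10.0):
    """MSE plus a penalty for transitions steeper than the domain bound.

    max_step is the domain-knowledge bound on the per-step output
    change; lam weights the constraint against the data-fit term."""
    mse = np.mean((y_pred - y_true) ** 2)
    steps = np.abs(np.diff(y_pred))
    penalty = np.mean(np.maximum(steps - max_step, 0.0) ** 2)
    return mse + lam * penalty

y_true = np.array([0.0, 0.2, 0.4, 0.6, 0.8])   # flat, gradual transition
smooth = np.array([0.0, 0.2, 0.4, 0.6, 0.8])   # respects the bound
jumpy = np.array([0.0, 0.0, 0.8, 0.8, 0.8])    # violates the bound
```

During RNN training such a term simply augments the loss that is backpropagated, steering predictions toward transitions no steeper than the domain knowledge allows.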

Friday, 23 July 2021, 11:00


Speaker: Tom George
Title: Augmenting Bandit Algorithms with Domain Knowledge
Talk type: Master's thesis
Advisor: Pawel Bielski
Abstract: Bandit algorithms are a family of algorithms that efficiently solve sequential decision problems, such as monitoring in a cloud computing system, news recommendation, or clinical trials. In such problems there is a trade-off between exploring new options and exploiting presumably good ones, and bandit algorithms provide theoretical guarantees while remaining practical.

While some approaches use additional information about the current state of the environment, bandit algorithms tend to ignore domain knowledge that can’t be extracted from data. It is not clear how to incorporate domain knowledge into bandit algorithms and how much improvement this yields.

In this master's thesis we propose two ways to augment bandit algorithms with domain knowledge: a push approach, which influences the distribution of arms to deal with non-stationarity, and a group approach, which propagates feedback between similar arms. We conduct synthetic and real-world experiments to examine the usefulness of our approaches. Additionally, we evaluate the effect of incomplete and incorrect domain knowledge. We show that the group approach helps to reduce exploration time, especially for small numbers of iterations and plays, and that the push approach outperforms contextual and non-contextual baselines for large context spaces.
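The core idea of the group approach, propagating feedback to similar arms, can be sketched on top of a plain epsilon-greedy bandit; the arm means, the grouping, and the discount weight are invented for illustration:

```python
import random

random.seed(0)

true_means = [0.2, 0.25, 0.8, 0.75]  # arms 0/1 and 2/3 are "similar"
groups = {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}  # domain knowledge

counts = [0.0] * 4
sums = [0.0] * 4

def choose(eps=0.1):
    if random.random() < eps or all(c == 0 for c in counts):
        return random.randrange(4)  # explore
    return max(range(4),
               key=lambda a: sums[a] / counts[a] if counts[a] else 0.0)

total = 0.0
for _ in range(2000):
    arm = choose()
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    total += reward
    # Group approach: propagate discounted feedback to similar arms,
    # so one pull informs the estimates of the whole group.
    for other in groups[arm]:
        w = 1.0 if other == arm else 0.5
        counts[other] += w
        sums[other] += w * reward

avg_reward = total / 2000
```

Because a single pull updates every arm in its group, the high-reward group is identified after far fewer pulls, which matches the observed reduction in exploration time for small play counts.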

Speaker: Youheng Lü
Title: Auswahl von SAT-Instanzen zur Evaluation von Solvern
Talk type: Bachelor's thesis
Advisor: Jakob Bach
Abstract: Solving SAT instances quickly and efficiently is relevant for many fields, for example cryptography, scheduling, or formal verification. To evaluate the speed of SAT solvers, there are sets of SAT instances that the solvers must solve. These instance sets (benchmarks) consist of hundreds of different instances. To obtain a representative result, a benchmark must contain many different instances, since different solvers are good on different instances. We assume, however, that we can create benchmarks that are smaller than the current ones and still deliver representative results.

In this thesis we present an approach that selects from a given representative benchmark a smaller subset that is to serve as a representative benchmark itself. We define a benchmark as representative if its runtime profile exceeds a fixed similarity measure with respect to the original benchmark. We explored a beam-search algorithm for this. In the end, however, we find that random selection is better, and that a random selection of 10% of the instances suffices to deliver a representative benchmark.
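The selection criterion can be illustrated by drawing a random 10% subset and checking whether it preserves the solvers' runtime ranking; this is a simplified stand-in for the thesis' similarity measure on runtime profiles, and the runtime model is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

n_instances, n_solvers = 400, 5
# Synthetic runtime matrix: each solver has a characteristic slowdown
# factor, and each instance a characteristic hardness.
speed = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
hardness = rng.exponential(scale=1.0, size=(n_instances, 1))
runtimes = hardness * speed  # shape (400, 5)

# Random 10% subset as the candidate benchmark.
subset = rng.choice(n_instances, size=n_instances // 10, replace=False)

# Similarity proxy: does the subset rank the solvers like the full set?
full_rank = np.argsort(runtimes.sum(axis=0))
subset_rank = np.argsort(runtimes[subset].sum(axis=0))
representative = bool((full_rank == subset_rank).all())
```

In this toy model the ranking is preserved by construction; on real benchmark data the interesting question, which the thesis answers affirmatively for 10% random samples, is whether such agreement still holds when solver performance varies per instance.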

Friday, 23 July 2021, 14:00


Speaker: Nicolas Boltz
Title: Architectural Uncertainty Analysis for Access Control Scenarios in Industry 4.0
Talk type: Master's thesis
Advisor: Maximilian Walter
Abstract: In this thesis, we present our approach to handling uncertainty in access control during design time. We propose the concept of trust as a composition of environmental factors that impact the validity of, and consequently the trust in, access control properties. We use fuzzy inference systems as a way of defining how environmental factors are combined. These trust values are then used by an analysis process to identify issues that can result from a lack of trust.

We extend an existing data flow diagram approach with our concept of trust. Our approach of adding knowledge to a software architecture model and providing a way to analyze model instances for access control violations shall enable software architects to increase the quality of models and further verify access control requirements under uncertainty. We evaluate the applicability based on availability, accuracy, and scalability with respect to execution time.

Speaker: Haris Dzogovic
Title: Evaluating architecture-based performance prediction for MPI-based systems
Talk type: Bachelor's thesis
Advisor: Larissa Schmid
Abstract: One research field of High Performance Computing (HPC) is computing clusters. Computing clusters are distributed memory systems in which different machines are connected through a network. To communicate, the machines need the ability to pass messages to each other through the network. The Message Passing Interface (MPI) is the standard for implementing parallel systems on distributed memory systems. Several approaches have been proposed to enable software architects to predict the performance of MPI-based systems. However, those approaches either depend on an existing implementation of a program or are tailored to specific programming languages or use cases. In our approach, we use the Palladio Component Model (PCM), which allows us to model component-based architectures and to predict the performance of the modeled system. We modeled different MPI functions in the PCM that serve as reusable patterns, as well as a communicator that is required for the MPI functions. The expected benefit is to provide patterns for different MPI functions that allow a precise modeling of MPI-based systems in the PCM and to obtain a precise performance prediction of a PCM instance.

Friday, 30 July 2021, 11:30

(No talks)

Friday, 20 August 2021, 11:30


Speaker: Martin Lange
Title: Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Abstract: Explainable artificial intelligence (XAI) offers reasoning about a model's behavior.

For many explainers, this proposed reasoning gives us more information about the inner workings of the model or even about the training data. Since data privacy is becoming an important issue, the question arises whether explainers can leak private data. It is unclear what private data can be obtained from different kinds of explanations. In this thesis, I adapt three privacy attacks in machine learning to the field of XAI: model extraction, membership inference, and training data extraction. The different kinds of explainers are sorted into these categories argumentatively, and I present specific use cases of how an attacker can obtain private data from an explanation. I demonstrate membership inference and training data extraction for two specific explainers in experiments. Thus, privacy can be breached with the help of explainers.

Freitag, 10. September 2021, 14:00 Uhr

iCal (Download)
Webkonferenz: {{{Webkonferenzraum}}}

Vortragende(r) Martin Armbruster
Titel Commit-Based Continuous Integration of Performance Models
Vortragstyp Masterarbeit
Betreuer(in) Manar Mazkatli
Vortragsmodus
Kurzfassung Architecture-level performance models, for instance the PCM, allow performance predictions that evaluate and compare design alternatives. However, software architectures drift over time, so initially created performance models quickly become outdated due to the high manual effort required to keep them up to date.

To close the gap between development and up-to-date performance models, the Continuous Integration of Performance Models (CIPM) approach has been proposed. It incorporates automatically executed activities into a Continuous Integration pipeline and is realized with Vitruvius, combining Java and the PCM. As a consequence, changes from a commit are extracted to incrementally update the models in the VSUM. To estimate the resource demands in the PCM, the CIPM approach adaptively instruments and monitors the source code.

In previous work, parts of the CIPM pipeline were prototypically implemented and partly evaluated with artificial projects. A pipeline combining the incremental model update and the adaptive instrumentation was missing. Therefore, this thesis presents the combined pipeline, adapting and extending the existing implementations. The evaluation is performed with the TeaStore and indicates that the model update and instrumentation are correct. Nevertheless, a gap towards the calibration pipeline remains.

Vortragende(r) Sina Schmitt
Titel Einfluss meta-kognitiver Strategien auf die Schlussfolgerungsfähigkeiten neuronaler Sprachmodelle
Vortragstyp Bachelorarbeit
Betreuer(in) Jan Keim
Vortragsmodus
Kurzfassung The meta-cognitive strategy of "thinking aloud" can be transferred to neural language models, as Betz et al. show: a pre-trained language model is better able to solve deductive reasoning problems if it first generates dynamic problem elaborations. On the dataset of Betz et al., the language model uses a simple heuristic for its answer prediction, which it can apply more effectively with the help of the self-generated context extensions. In this thesis, I investigate how dynamic context extensions affect the performance of a neural language model when it cannot fall back on such a heuristic. I examine (i) the reasoning abilities of a pre-trained neural language model in the zero-shot setting, (ii) the influence of different predefined context extensions on zero-shot performance, and (iii) the language model's ability to generate and use effective context extensions itself.

Freitag, 17. September 2021, 11:30 Uhr

iCal (Download)
Webkonferenz: {{{Webkonferenzraum}}}

Vortragende(r) Tanja Fenn
Titel Change Detection in High Dimensional Data Streams
Vortragstyp Masterarbeit
Betreuer(in) Edouard Fouché
Vortragsmodus
Kurzfassung The data collected in many real-world scenarios such as environmental analysis, manufacturing, and e-commerce are high-dimensional and come as a stream, i.e., data properties evolve over time – a phenomenon known as "concept drift". This brings numerous challenges: data-driven models become outdated, and one is typically interested in detecting specific events, e.g., the critical wear and tear of industrial machines. Hence, it is crucial to detect change, i.e., concept drift, to design a reliable and adaptive predictive system for streaming data. However, existing techniques can only detect "when" a drift occurs and neglect the fact that various drifts may occur in different dimensions, i.e., they do not detect "where" a drift occurs. This is particularly problematic when data streams are high-dimensional.

The goal of this Master’s thesis is to develop and evaluate a framework to efficiently and effectively detect “when” and “where” concept drift occurs in high-dimensional data streams. We introduce stream autoencoder windowing (SAW), an approach based on the online training of an autoencoder, while monitoring its reconstruction error via a sliding window of adaptive size. We evaluate the performance of our method on synthetic data, in which the characteristics of drifts are known. We then show how our method improves the accuracy of existing classifiers for predictive systems compared to benchmarks on real data streams.
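The core mechanism described above, monitoring reconstruction error over a sliding window, can be illustrated in miniature. This is not SAW: instead of an online autoencoder and an adaptive window, the sketch uses a fixed baseline "model" (the mean of a reference window) and a fixed-size window; all names and thresholds are hypothetical:

```python
from collections import deque

# Toy change detector: a fixed reconstruction model (the mean of a
# reference window stands in for an autoencoder) plus a sliding
# window of reconstruction errors. A jump in the windowed mean
# error above a threshold signals drift.
def detect_change(stream, reference, window=5, threshold=1.0):
    baseline = sum(reference) / len(reference)  # stand-in "autoencoder"
    errors = deque(maxlen=window)               # sliding error window
    for t, x in enumerate(stream):
        errors.append(abs(x - baseline))        # reconstruction error
        if len(errors) == window and sum(errors) / window > threshold:
            return t  # index at which drift is flagged
    return None

# Stationary values, then an abrupt shift to 3.0 after t=4.
stream = [0.1, -0.2, 0.0, 0.2, -0.1] + [3.0] * 6
change_at = detect_change(stream, reference=[0.0, 0.1, -0.1])
```

SAW additionally retrains the autoencoder online and adapts the window size, which lets it localize "where" (in which dimensions) the reconstruction degrades, not just "when".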

Vortragende(r) Wenrui Zhou
Titel Outlier Analysis in Live Systems from Application Logs
Vortragstyp Masterarbeit
Betreuer(in) Edouard Fouché
Vortragsmodus
Kurzfassung Modern computer applications tend to generate massive amounts of logs and have become so complex that it is often difficult to explain why applications failed. Locating outliers in application logs can help explain application failures. Outlier detection in application logs is challenging because (1) the log is unstructured, streaming text data, and (2) labeling application logs is labor-intensive and inefficient.

Logs are similar to natural language. The Transformer neural network, a recent deep learning architecture, has shown outstanding performance in Natural Language Processing (NLP) tasks. Based on this, we adapt the Transformer neural network to detect outliers in application logs in an unsupervised way. We compared our algorithm against state-of-the-art log outlier detection algorithms on three widely used benchmark datasets; our algorithm outperformed them.
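The unsupervised idea, scoring a log line by how unexpected it is under a model fitted to unlabeled logs, can be shown with a much simpler stand-in than the thesis' Transformer: a token-frequency model where rare tokens yield high outlier scores. All names and the toy corpus are hypothetical:

```python
from collections import Counter
import math

# Fit token frequencies on an unlabeled log corpus.
def fit_token_frequencies(log_lines):
    counts = Counter(tok for line in log_lines for tok in line.split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# Score a line by the average token surprisal (negative log
# frequency); unseen tokens get a small floor probability.
def outlier_score(line, freqs, unseen_prob=1e-6):
    toks = line.split()
    return sum(-math.log(freqs.get(t, unseen_prob)) for t in toks) / len(toks)

corpus = ["request ok", "request ok", "request ok", "request retry"]
freqs = fit_token_frequencies(corpus)
normal = outlier_score("request ok", freqs)
anomalous = outlier_score("kernel panic", freqs)
```

A Transformer replaces the frequency table with contextual next-token probabilities, so it can also flag lines whose tokens are individually common but appear in an unusual order.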

Freitag, 24. September 2021, 11:30 Uhr

iCal (Download)
Webkonferenz: {{{Webkonferenzraum}}} (Keine Vorträge)

Freitag, 24. September 2021, 14:00 Uhr

iCal (Download)
Webkonferenz: {{{Webkonferenzraum}}} (Keine Vorträge)