Semantic Search

Friday, July 16, 2021, 11:30 AM

Speaker: Florian Leiser
Title: Modelling Dynamical Systems using Transition Constraints
Talk type: Master's thesis
Advisor: Pawel Bielski
Abstract: Despite the promising performance of data science approaches in many applications, their results in industrial research and development are often unsatisfactory, because costly experiments yield only small datasets to work with. Theory-guided Data Science (TGDS) can address the problem of insufficient data by combining existing industrial domain knowledge with data science approaches.

In dynamical systems such as gas turbines, transition phases occur after a change in the input control signal. Domain knowledge about the steepness of these transitions can potentially help with modeling such systems using data science approaches. TGDS approaches that use information about value limits already exist; however, it is currently not clear how to incorporate information about the steepness of the transitions into them.

In this thesis, we develop three different TGDS approaches that include these transition constraints in recurrent neural networks (RNNs) to improve the modeling of the input-output behavior of dynamical systems. We evaluate the approaches on synthetic and real time series data, varying data availability and the degree of steepness. We conclude that the TGDS approaches are especially helpful for flat transitions, and we provide a guideline on how to use the available transition constraints in real-world problems. Finally, we discuss the degree of domain knowledge and the implementation effort each approach requires.
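
The abstract leaves open how the transition constraints enter the RNNs. One plausible reading, sketched below in PyTorch, is a soft penalty on the slope of predicted transitions during training; the bound MAX_SLOPE, the network architecture, and the penalty form are illustrative assumptions, not the thesis's actual three approaches.

```python
import torch
import torch.nn as nn

# Domain-given steepness bound (illustrative assumption).
MAX_SLOPE = 0.5  # maximum allowed output change per time step

class RNNModel(nn.Module):
    def __init__(self, n_inputs, n_hidden):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                # x: (batch, time, n_inputs)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)  # (batch, time)

def constrained_loss(y_pred, y_true, weight=1.0):
    """MSE plus a penalty on transitions steeper than the domain bound."""
    mse = nn.functional.mse_loss(y_pred, y_true)
    slopes = (y_pred[:, 1:] - y_pred[:, :-1]).abs()
    penalty = torch.relu(slopes - MAX_SLOPE).mean()
    return mse + weight * penalty
```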

Friday, July 23, 2021, 11:00 AM

Speaker: Tom George
Title: Augmenting Bandit Algorithms with Domain Knowledge
Talk type: Master's thesis
Advisor: Pawel Bielski
Abstract: Bandit algorithms are a family of algorithms that efficiently solve sequential decision problems, such as monitoring in a cloud computing system, news recommendation, or clinical trials. In such problems there is a trade-off between exploring new options and exploiting presumably good ones, and bandit algorithms provide theoretical guarantees while remaining practical.

While some approaches use additional information about the current state of the environment, bandit algorithms tend to ignore domain knowledge that cannot be extracted from data. It is not clear how to incorporate such domain knowledge into bandit algorithms, nor how much improvement this yields.

In this master's thesis we propose two ways to augment bandit algorithms with domain knowledge: a push approach, which influences the distribution over arms to deal with non-stationarity, and a group approach, which propagates feedback between similar arms. We conduct synthetic and real-world experiments to examine the usefulness of our approaches. Additionally, we evaluate the effect of incomplete and incorrect domain knowledge. We show that the group approach helps to reduce exploration time, especially for small numbers of iterations and plays, and that the push approach outperforms contextual and non-contextual baselines for large context spaces.
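
To make the group idea concrete, here is a minimal sketch of an epsilon-greedy bandit in which feedback on one arm is partially propagated to arms that a domain expert marked as similar. The group structure and the propagation factor are assumptions for illustration; the thesis's actual algorithms (and the push approach) are not reproduced here.

```python
import random

class GroupedEpsilonGreedy:
    """Epsilon-greedy bandit whose feedback is shared within arm groups."""

    def __init__(self, n_arms, groups, epsilon=0.1, propagate=0.5):
        self.counts = [0.0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.groups = groups          # e.g. {0: [1], 1: [0], 2: []}
        self.epsilon = epsilon
        self.propagate = propagate    # weight of propagated feedback

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def _update(self, arm, reward, weight):
        self.counts[arm] += weight
        self.values[arm] += weight * (reward - self.values[arm]) / self.counts[arm]

    def feedback(self, arm, reward):
        self._update(arm, reward, weight=1.0)
        # Group approach: propagate the observation to similar arms.
        for neighbor in self.groups.get(arm, []):
            self._update(neighbor, reward, weight=self.propagate)
```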

Speaker: Youheng Lü
Title: Auswahl von SAT-Instanzen zur Evaluation von Solvern (Selecting SAT Instances for Evaluating Solvers)
Talk type: Bachelor's thesis
Advisor: Jakob Bach
Abstract: Solving SAT instances quickly and efficiently is relevant to many areas, for example cryptography, scheduling, and formal verification. To evaluate the speed of SAT solvers, there are sets of SAT instances that the solvers have to solve. These instance sets (benchmarks) consist of hundreds of different instances. To yield a representative result, a benchmark must contain many different instances, since different solvers perform well on different instances. We assume, however, that we can create benchmarks that are smaller than the current ones and still deliver representative results.

In our work, we present an approach that selects from a given representative benchmark a smaller subset that is itself meant to serve as a representative benchmark. We define a benchmark as representative if the graph of its runtimes exceeds a fixed similarity measure with respect to the original benchmark. We investigated a beam search algorithm for this task. In the end, however, we find that random selection performs better, and that randomly selecting 10% of the instances suffices to obtain a representative benchmark.
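
A minimal sketch of the random baseline described above: draw 10% of the instances and check whether the solver ranking on the subset still agrees with the ranking on the full benchmark. The similarity measure used here (Spearman correlation of per-solver total runtimes) is an illustrative stand-in for the thesis's actual measure.

```python
import random

import numpy as np
from scipy.stats import spearmanr

def random_subset(n_instances, fraction=0.1, seed=0):
    """Draw a random fraction of the benchmark's instances."""
    rng = random.Random(seed)
    k = max(1, int(fraction * n_instances))
    return sorted(rng.sample(range(n_instances), k))

def ranking_similarity(runtimes, subset):
    """runtimes: (n_instances, n_solvers) matrix of measured runtimes."""
    full_totals = runtimes.sum(axis=0)          # total runtime per solver
    sub_totals = runtimes[subset].sum(axis=0)   # totals on the subset only
    corr, _ = spearmanr(full_totals, sub_totals)
    return corr  # close to 1.0: the subset preserves the solver ranking
```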

Friday, July 23, 2021, 2:00 PM

Speaker: Nicolas Boltz
Title: Architectural Uncertainty Analysis for Access Control Scenarios in Industry 4.0
Talk type: Master's thesis
Advisor: Maximilian Walter
Abstract: In this thesis, we present our approach to handling uncertainty in access control during design time. We propose the concept of trust as a composition of environmental factors that impact the validity of, and consequently the trust in, access control properties. We use fuzzy inference systems to define how environmental factors are combined. These trust values are then used by an analysis process to identify issues that can result from a lack of trust.

We extend an existing data flow diagram approach with our concept of trust. By adding this knowledge to a software architecture model and providing a way to analyze model instances for access control violations, our approach should enable software architects to increase the quality of their models and to verify access control requirements under uncertainty. We evaluate the applicability of our approach in terms of availability, accuracy, and scalability with respect to execution time.
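
For intuition, here is a minimal fuzzy-style sketch of combining two environmental factors into a trust value. The factors, membership functions, and rules are invented for illustration; the thesis builds on full fuzzy inference systems.

```python
def low(x):
    """Membership of x (in [0, 1]) in the fuzzy set 'low'."""
    return max(0.0, 1.0 - 2.0 * x)

def high(x):
    """Membership of x (in [0, 1]) in the fuzzy set 'high'."""
    return max(0.0, 2.0 * x - 1.0)

def trust(physical_security, network_exposure):
    """Combine two environmental factors into a trust value in [0, 1]."""
    # Rule 1: IF security is high AND exposure is low THEN trust is high.
    r1 = min(high(physical_security), low(network_exposure))
    # Rule 2: IF security is low OR exposure is high THEN trust is low.
    r2 = max(low(physical_security), high(network_exposure))
    if r1 + r2 == 0.0:
        return 0.5  # no rule fires: stay neutral
    # Defuzzify as the weighted average of the rule conclusions (1 and 0).
    return r1 / (r1 + r2)
```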

Speaker: Haris Dzogovic
Title: Evaluating architecture-based performance prediction for MPI-based systems
Talk type: Bachelor's thesis
Advisor: Larissa Schmid
Abstract: One research field within High Performance Computing (HPC) is computing clusters: distributed-memory systems in which separate machines are connected through a network. For the machines to cooperate, they must be able to pass messages to each other over this network, and the Message Passing Interface (MPI) is the standard for implementing parallel programs on distributed-memory systems. Several approaches have been proposed to let software architects predict the performance of MPI-based systems; however, they either depend on an existing implementation of the program or are tailored to specific programming languages or use cases. In our approach, we use the Palladio Component Model (PCM), which allows us to model component-based architectures and to predict the performance of the modeled system. We modeled different MPI functions in the PCM as reusable patterns, together with the communicator the MPI functions require. The expected benefit is a set of patterns for different MPI functions that allow precise modeling of MPI-based systems in the PCM and, consequently, a precise performance prediction for a PCM instance.
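
For readers unfamiliar with MPI, the following mpi4py snippet shows the kind of primitive, a point-to-point send/receive on a communicator, that the thesis captures as reusable patterns in the PCM. It illustrates MPI usage, not the PCM models themselves.

```python
# Run with: mpiexec -n 2 python send_recv.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # the communicator all MPI functions operate on
rank = comm.Get_rank()

if rank == 0:
    comm.send({"payload": 42}, dest=1, tag=0)  # blocking point-to-point send
elif rank == 1:
    data = comm.recv(source=0, tag=0)          # matching blocking receive
    print(f"rank 1 received {data}")
```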

Friday, July 30, 2021, 11:30 AM

(No talks)

Friday, August 20, 2021, 11:30 AM

Speaker: Martin Lange
Title: Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Abstract: Explainable artificial intelligence (XAI) offers a reasoning behind a model's behavior. For many explainers, this proposed reasoning gives us more information about the inner workings of the model or even about the training data. Since data privacy is becoming an important issue, the question arises whether explainers can leak private data. It is unclear what private data can be obtained from different kinds of explanations. In this thesis I adapt three privacy attacks from machine learning to the field of XAI: model extraction, membership inference, and training data extraction. I argue which kinds of explainers fall into each category and present specific use cases of how an attacker can obtain private data from an explanation. I demonstrate membership inference and training data extraction for two specific explainers in experiments. Thus, privacy can be breached with the help of explainers.
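
As a rough illustration of the membership-inference direction, the sketch below scores a sample by the variance of its explanation's feature attributions. The heuristic and threshold are assumptions for illustration and do not reproduce the attacks evaluated in the thesis.

```python
import numpy as np

def membership_score(attribution):
    """Map an explanation's attribution vector to a membership score."""
    # Heuristic: sharper, lower-variance attributions on a sample suggest
    # the model has seen it during training (assumption for illustration).
    return 1.0 / (1.0 + np.asarray(attribution).var())

def infer_membership(attribution, threshold=0.8):
    """Flag a sample as a presumed training member above the threshold."""
    return membership_score(attribution) >= threshold
```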