Search by property

This page provides a simple browsing interface for finding objects that have a property with a particular data value. Other available search interfaces are the property search and the query builder.


A list of all pages that have the property “Kurzfassung” (abstract) with the value “TBD”. Because only a few results were found, similar values are also listed.

Here are 52 results, starting with number 1.

List of results

  • Neural-Based Outlier Detection in Data Streams  + (Outlier detection often needs to be done unsupervised with high dimensional data in data streams. “Deep structured energy-based models” (DSEBM) and “Variational Denoising Autoencoder” (VDA) are two promising approaches for outlier detection. They will be implemented and adapted for usage in data streams. Finally, their performance will be shown in experiments including the comparison with state of the art approaches.)
  • Adaptive Variational Autoencoders for Outlier Detection in Data Streams  + (Outlier detection targets the discovery of abnormal data patterns. Typical scenarios, such as fraud detection and predictive maintenance, are particularly challenging, since the data is available as an infinite and ever-evolving stream. In this thesis, we propose Adaptive Variational Autoencoders (AVA), a novel approach for unsupervised outlier detection in data streams. Our contribution is two-fold: (1) we introduce a general streaming framework for training arbitrary generative models on data streams. Here, generative models are useful to capture the history of the stream. (2) We instantiate this framework with a Variational Autoencoder, which adapts its network architecture to the dimensionality of incoming data. Our experiments against several benchmark outlier data sets show that AVA outperforms the state of the art and successfully adapts to streams with concept drift.)
  • Scenario Discovery with Active Learning  + (PRIM (Patient Rule Induction Method) is an algorithm used for discovering scenarios by creating hyperboxes in the input space. Yet PRIM alone usually requires large datasets, and computational simulations can be expensive. Consequently, one wants to obtain scenarios while reducing the number of simulations. It has been shown that combining PRIM with machine learning models can reduce the number of necessary simulation runs by around 75%. In this thesis, I analyze nine different active learning sampling strategies together with several machine learning models, in order to find out whether active learning can systematically improve PRIM even further, and whether, among those strategies and models, a most beneficial combination of sampling method and intermediate machine learning model exists for this purpose.)
  • Patient Rule Induction Method with Active Learning  + (PRIM (Patient Rule Induction Method) is an algorithm for discovering scenarios from simulations by creating hyperboxes that are human-comprehensible. Yet PRIM alone requires relatively large datasets, and computational simulations are usually quite expensive. Consequently, one wants to obtain a plausible scenario with a minimal number of simulations. It has been shown that combining PRIM with ML models, which generalize faster, can reduce the number of necessary simulation runs by around 75%. We will try to reduce the number of simulation runs even further, using an active learning approach to train an intermediate ML model. Additionally, we extend the previously proposed methodology to cover not only classification but also regression problems. A preliminary experiment indicated that the combination of these methods does indeed help reduce the necessary runs even further. In this thesis, I will analyze different AL sampling strategies together with several intermediate ML models to find out whether AL can systematically improve existing scenario discovery methods and whether a most beneficial combination of sampling method and intermediate ML model exists for this purpose.) A minimal sketch of PRIM's box-peeling idea follows this results list.
  • A Parallelizing Compiler for Adaptive Auto-Tuning  + (Parallelizing compilers and auto-tuners are two of the many technologies that can help developers write high-performance applications for modern heterogeneous systems. In this thesis, we present a parallelizing compiler that can detect parallelism in programs and generate parallel code for heterogeneous systems. In addition, the presented compiler uses auto-tuning to find, at runtime, an optimal partitioning of the parallelized code sections across multiple platforms that minimizes the execution time. However, instead of optimizing the parallelization once for each parallel section and keeping the found configurations for as long as the program runs, programs generated by our compiler are able to distinguish between different application contexts, so that context changes are detected and the current configuration can be adapted individually for each occurring context. To describe contexts, we use so-called indicators, which express certain runtime properties of the code and are inserted into the program code so that they can be evaluated during execution and used by the auto-tuner. Furthermore, we store found configurations and the associated contexts in a database, so that configurations from earlier runs can be reused when the application is executed again. We evaluate our approach with the Polybench benchmark collection. The results show that we are able to detect context changes at runtime and adapt the configuration to the new context, which generally leads to lower execution times.)
  • Calibrating Performance Models for Particle Physics Workloads  + (Particle colliders are a primary method of conducting experiments in particle physics, as they allow both creating short-lived, high-energy particles and observing their properties. The world’s largest particle collider, the Large Hadron Collider (subsequently referred to as LHC), is operated by the European Organization for Nuclear Research (CERN) near Geneva. The operation of this kind of accelerator requires the storage and computationally intensive analysis of large amounts of data. The Worldwide LHC Computing Grid (WLCG), a global computing grid, is being run alongside the LHC to serve this purpose. This Bachelor’s thesis aims to support the creation of an architecture model and simulation for parts of the WLCG infrastructure, with the goal of being able to accurately simulate and predict changes in the infrastructure, such as the replacement of the load balancing strategies used to distribute the workload between available nodes.)
  • Adaptive Monitoring for Continuous Performance Model Integration  + (Performance Models (PMs) can be used to predict software performance and evaluate alternatives at the design stage. Building such models manually is time-consuming and not suitable for agile development processes, where quick releases have to be generated in short cycles. To benefit from model-based performance prediction during agile software development, developers tend to extract PMs automatically. Existing approaches that extract PMs based on reverse-engineering and/or measurement techniques require monitoring and analyzing the whole system after each iteration, which causes a high monitoring overhead. The Continuous Integration of Performance Models (CIPM) approach addresses this problem by updating the PMs and calibrating them incrementally, based on adaptive monitoring of the changed parts of the code. In this work, we introduce an adaptive monitoring approach for performance model integration, which automatically instruments only the changed parts of the source code using specific pre-defined probe types and then monitors the system adaptively. The resulting measurements are used by CIPM to estimate PM parameters incrementally. The evaluation confirmed that our approach can reduce the monitoring overhead to 50%.)
  • (Freiwillige Teilnahme) Abschlussvortrag Praxis der Forschung SS23 I  + (Performance prediction for container applications. Abstract: Nowadays, distributed applications are often not statically deployed on virtual machines. Instead, a desired state is defined declaratively, and a control loop then tries to create the desired state in the cluster. Predicting the impact on the performance of a system using these deployment techniques is difficult. This paper introduces a method to predict the performance impact of the usage of containers and container orchestration in the deployment of a system. Our proposed approach enables system simulation and experimentation with various mechanisms of container orchestration, including autoscaling and container scheduling. We validated this approach using a micro-service reference application across different scenarios. Our findings suggest that the simulation could effectively mimic most features of container orchestration tools, and that the performance prediction of containerized applications in dynamic scenarios could be improved significantly.)
  • Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis  + (Philipp Weinmann will present the plan for his Bachelor thesis with the title “Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis”: he will give a general introduction to explainers for Artificial Intelligence in the context of NLP. We will then explore one of these tools in detail: Shap, a perturbation-based local explainer, and talk about evaluating Shap explanations.)
  • Explainable Artificial Intelligence for Decision Support  + (Policy makers face the difficult task of making far-reaching decisions that impact the life of the entire population, based on uncertain parameters that they have little to no control over, such as environmental impacts. Often, they use scenarios in their decision-making process. Scenarios provide a common and intuitive way to communicate and characterize different uncertain outcomes in many decision support applications, especially in broad public debates. However, they often fall short of their potential, particularly when applied to groups with diverse interests and worldviews, due to the difficulty of choosing a small number of scenarios to summarize the entire range of uncertain future outcomes. Scenario discovery addresses these problems by using statistical or data-mining algorithms to find easy-to-interpret, policy-relevant regions in the space of uncertain input parameters of computer simulation models. One of many approaches to scenario discovery is subgroup discovery, an approach from the domain of explainable Artificial Intelligence. In this thesis, we test and evaluate multiple different subgroup discovery methods for their applicability to scenario discovery applications.)
  • Symbolic Performance Modeling  + (Predicting software performance under different configurations is a challenging task due to the large amount of possible configurations. Performance-influence models help stakeholders understand how configuration options and their interactions influence the performance of a program. A crucial part of the performance modeling process is the design of an experiment set that delivers performance measurements which are used as input for a machine learning algorithm that learns the performance model. An optimal experiment set should contain the minimal amount of experiments that produces a sufficiently accurate performance model. The topic of this thesis is Symbolic Performance Modeling, a new white-box approach to the analysis of the configuration options' influence on the software's performance. The approach utilizes taint analysis to determine where in the source code configuration options influence the software's performance and symbolic execution to determine whether the influence is significant. We assume that only loop constructs with non-constant iteration counts change the asymptotic behavior of the program. The Feature Taint Analysis provided by VaRA is used to determine which configuration options influence loops, while the Path Tracing provided by PhASAR is used to construct all control-flow paths leading to the loops and their respective path conditions. The SMT Solver Z3 is then used to derive value ranges from the path conditions for the configuration options which influence the loop constructs. We determine the significance of a configuration option's influence based on the size of its value range. We implement the proof-of-concept tool Symbolic Performance Modeling Value Generator to evaluate the approach with regard to its capabilities to analyze real-world applications and its performance. From the insights gained during the evaluation, we define limitations of the current implementation and propose improvements for future work.)
  • Enhancing Non-Invasive Human Activity Recognition by Fusioning Electrical Load and Vibrational Measurements  + (Professional installation of stationary sensors burdens the adoption of Activity Recognition Systems in households. This can be circumvented by utilizing sensors that are cheap, easy to set up and adaptable to a variety of homes. Since 72% of European consumers will have Smart Meters by 2020, these provide an omnipresent basis for Activity Recognition. This thesis investigates how a Smart Meter’s limited recognition of appliance-involving activities can be extended by Vibration Sensors. We provide an experimental setup to aggregate a dedicated dataset with a sampling frequency of 25,600 Hz. We evaluate the impact of combining a Smart Meter and Vibration Sensors on a system’s accuracy by means of four developed Activity Recognition Systems, which results in a quantification of the impact. We found that by combining these sensors, the accuracy of an Activity Recognition System tends toward the highest accuracy of a single underlying sensor rather than jointly surpassing it.)
  • Evidence-based Token Abstraction for Software Plagiarism Detection  + (Programming assignments for students are a target of plagiarism. Especially for graded assignments, instructors want to detect plagiarism among the students. For larger courses, however, manual inspection of all submissions is a resource-intensive task. For this purpose, there are numerous tools that can help detect plagiarism in submissions. Many well-known plagiarism detection tools are token-based detectors. In an abstraction step, they map source code to a list of tokens, and such lists are then compared with each other. While there is much research in the area of comparison algorithms, the mapping is often only considered superficially. In this work, we conduct two experiments that address the issue of token abstraction. For that, we design different token abstractions and explain their differences. We then evaluate these abstractions using multiple datasets. We show that different abstractions have pros and cons, and that a higher abstraction level does not necessarily perform better. These findings are useful when adding support for new programming languages and for improving existing plagiarism detection tools. Furthermore, the results can be helpful to choose abstractions tailored to specific requirements.) An illustrative token-abstraction sketch follows this results list.
  • Theory-Guided Data Science for Battery Voltage Prediction: A Systematic Guideline  + (Purely data-driven Data Science approaches tend to underperform when applied to scientific problems, especially when there is little data available. Theory-guided Data Science (TGDS) incorporates existing problem-specific domain knowledge in order to increase the performance of Data Science models. It has already proved to be successful in scientific disciplines like climate science or material research. Although there exist many TGDS methods, they are often not comparable with each other, because they were originally applied to different types of problems. Also, it is not clear how much domain knowledge they require. There currently exist no clear guidelines on how to choose the most suitable TGDS method when confronted with a concrete problem. Our work is the first one to compare multiple TGDS methods on a time series prediction task. We establish a clear guideline by evaluating the performance and required domain knowledge of each method in the context of lithium-ion battery voltage prediction. As a result, our work could serve as a starting point on how to select the right TGDS method when confronted with a concrete problem.)
  • Using Architectural Design Space Exploration to Quantify Cost-to-Quality Relationship  + (QUPER is a method for making decisions easier in release planning when a particular quality requirement is central. The method is especially helpful when the software project has several competing products on the market and a particular quality requirement strongly influences the value of the software for the customer. However, QUPER requires estimates from the development team and is therefore highly dependent on its experience. The Palladio Component Model in combination with PerOpteryx can help replace these rough estimates with more precise information for an upcoming release: given a Palladio model and a potential improvement to the software, PerOpteryx can provide the exact improvement of the quality requirement. In this thesis, first the QUPER method on its own and then QUPER supported by PerOpteryx are applied to two exemplary software projects, and the results are compared.)
  • Modularization approaches in the context of monolithic simulations  + (Quality characteristics of a software system such as performance or reliability can determine its success or failure. In traditional software engineering, these characteristics can only be determined when parts of the system are already implemented and past the design process. Computer simulations allow estimating quality characteristics of software systems already during the design process. Simulations are built to analyse certain aspects of systems. The representation of the system is specialised for the specific analysis, and this specialisation often results in a monolithic design of the simulation. Monolithic structures, however, can reduce the maintainability of the simulation and decrease the understandability and reusability of the representations of the system. The drawbacks of monolithic structures can be countered by the concept of modularisation, where one problem is divided into several smaller sub-problems. This approach allows easier understanding and handling of the sub-problems. In this thesis, an approach is provided to describe the coupling of newly developed and already existing simulations into a modular simulation. This approach consists of a Domain-Specific Language (DSL) developed with model-driven technologies. The DSL is applied in a case study to describe the coupling of two simulations. The coupling of these simulations with an existing coupling approach is implemented according to the created description. An evaluation of the DSL is conducted regarding its completeness for describing the coupling of several simulations into a modular simulation. Additionally, the modular simulation is examined regarding the accuracy of preserving the behaviour of the monolithic simulation; the results of the modular simulation and the monolithic version are compared for this purpose. The created modular simulation is additionally evaluated with regard to its scalability by analysing the execution times when multiple simulations are coupled. Furthermore, the effect of the modularisation on the simulation execution times is evaluated. The obtained evaluation results show that the DSL can describe the coupling of the two simulations used in the case study. Furthermore, the results of the accuracy evaluation suggest that problems exist in the interaction of the simulations with the coupling approach. However, the results also show that the overall behaviour of the monolithic simulation is preserved in its modular version. The analysis of the execution times suggests that the modular simulation experiences an increase in execution time compared to the monolithic version. Also, the results regarding the scalability show that the execution time of the modular simulation does not increase exponentially with the number of coupled simulations.)
  • Parametrisierung der Spezifikation von Qualitätsannotationen in Software-Architekturmodellen  + (Quality properties of component-based software systems depend both on the components used and on the context in which they are deployed. While context-dependent parameterization has already been analyzed on a sound scientific basis for individual quality analysis models, such as performance, this is still an open question for other quality attributes, in particular for qualitatively descriptive models. The presented work introduces the quality-effect specification, which allows a context-dependent analysis and transformation of arbitrary quality attributes. The approach comprises a purpose-built domain-specific language for modeling effects depending on the context and corresponding transformations of the quality annotations.)
  • Generalized Monte Carlo Dependency Estimation with improved Convergence  + (Quantifying dependencies among variables is a fundamental task in data analysis. It helps to understand data and to identify the variables required to answer specific questions. Recent studies have positioned Monte Carlo Dependency Estimation (MCDE) as a state-of-the-art tool in this field. MCDE quantifies dependencies as the average discrepancy between marginal and conditional distributions. In practice, this value is approximated with a dependency estimator. However, the original implementation of this estimator converges rather slowly, which leads to suboptimal results in terms of statistical power. Moreover, MCDE is only able to quantify dependencies among univariate random variables, but not multivariate ones. In this thesis, we make two major improvements to MCDE. First, we propose four new dependency estimators with faster convergence. We show that MCDE equipped with these new estimators achieves higher statistical power. Second, we generalize MCDE to GMCDE (Generalized Monte Carlo Dependency Estimation) to quantify dependencies among multivariate random variables. We show that GMCDE inherits all the desirable properties of MCDE and demonstrate its superiority against state-of-the-art dependency measures in experiments.)
  • Adaptives Online-Tuning für kontinuierliche Zustandsräume  + (Ray tracing is a computationally intensive technique for producing photorealistic images. By automatically optimizing parameters that influence the computation time, image generation can be accelerated. In this thesis, the auto-tuner libtuning was extended with a generalized reinforcement-learning method that is able to take certain characteristics of the frames to be rendered into account when selecting suitable parameter configurations. The strategy employed is an ε-greedy strategy that uses the Nelder-Mead function-minimization method from libtuning for exploration. It was shown that this implementation achieves a speedup of up to 7.7% in terms of the total computation time of a ray-tracing application scenario compared to using libtuning alone.) A minimal ε-greedy tuning sketch follows this results list.
  • Integration of Reactions and Mappings in Vitruvius  + (Realizing complex software projects is often done by utilizing multiple programming or modelling languages. Separate parts of the software are relevant to certain development tasks or roles and differ in their representation. These separate representations are related and contain redundant information. Such redundancies exist, for example, between an implementation class and a component description: the class has to implement methods with signatures as specified by the component. Whenever redundant information is affected by a development update, other representations that contain the redundant information have to be updated as well. This additional development effort is required to keep the redundant information consistent and can be costly. Consistency preservation languages can be used to describe how consistency of representations can be preserved, so that, in combination with further development tools, the process of updating redundant information is automated. However, such languages vary in their abstraction level and expressiveness. Consistency preservation languages with higher abstraction specify what elements of representations are considered consistent in a declarative manner. A language with less abstraction concerns how consistency is preserved after an update, using imperative instructions. A common trade-off in selecting a fitting language is between expressiveness and abstraction: higher abstraction implies less specification effort, but it is restricted in expressiveness compared to a more specific language. In this thesis we present a concept for combining two consistency specification languages of different abstraction levels. Imperative constructs of a less abstract language are derived from declarative consistency expressions of a language of higher abstraction and combined with additional imperative constructs integrated into the combined language. The combined language grants the benefits of the more abstract language and enables realizing parts of the specification without being restricted in expressiveness. As a consequence, a developer profits from the advantages of both languages, whereas previously a specification that could not be completely expressed with the more abstract language had to be realized entirely with the less abstract language. We realize the concepts by combining the Reactions and Mappings languages of the VITRUVIUS project. The imperative Reactions language enables developers to specify triggers for certain model changes and repair logic. As a more abstract language, Mappings specify consistency with a declarative description of which elements of two representations correspond and what conditions have to apply for the specific elements. We research the limits of expressiveness of the declarative description and depict how scenarios are supported that require complex consistency specifications. An evaluation with a case study shows the applicability of the approach, because an existing project, previously using the Reactions language, can be realized with the combination concept. Furthermore, the compactness of the preservation specification is increased.)
  • On the semantics of similarity in deep trajectory representations  + (Recently, a deep learning model (t2vec) for trajectory similarity computation has been proposed. Instead of using the trajectories themselves, it uses their deep representations to compute the similarity between them. At the current state, we do not have a clear idea of how to interpret the t2vec similarity values, nor what exactly they are based on. This thesis addresses these two issues by analyzing t2vec on its own and then systematically comparing it to the more familiar traditional models. Firstly, we examine how the model’s parameters influence the probability distribution (PDF) of the t2vec similarity values. For this purpose, we conduct experiments with various parameter settings and inspect the abstract shape and statistical properties of their PDF. Secondly, we consider that we already have an intuitive understanding of the classical models, such as Dynamic Time Warping (DTW) and Longest Common Subsequence (LCSS). Therefore, we use this intuition to analyze t2vec by systematically comparing it to DTW and LCSS with the help of heat maps.) A minimal DTW sketch follows this results list.
  • Implementation and Evaluation of CHQL Operators in Relational Database Systems to Query Large Temporal Text Corpora  + (Relational database management systems have an important place in the informational revolution. Their release on the market facilitates the storing and analysis of data. In recent years, with the release of large temporal text corpora, it was shown that domain experts in conceptual history can also benefit from the performance of relational databases. Since the relational algebra behind them lacks special functionality for this use case, the Conceptual History Query Language (CHQL) was developed. The first result of this thesis is an original implementation of the CHQL operators in a relational database, written in both SQL and its procedural extension. Secondly, we substantially improved the performance with trigram indexes. Lastly, the query plan analysis reveals the problem behind the query optimizer's choice of inefficient plans, namely its inability to correctly predict the results of a stored function.)
  • Analysis and Visualization of Semantics from Massive Document Directories  + (Research papers are commonly classified into categories, and we can see the existing contributions as a massive document directory with sub-folders. However, research typically evolves at an extremely fast pace; consider for instance the field of computer science. It can be difficult to categorize individual research papers, or to understand how research communities relate to each other. In this thesis we will analyze and visualize semantics from massive document directories. The results will be displayed using the arXiv corpus, which contains domain-specific (computer science) papers of the past thirty years. The analysis will illustrate and give insight about past trends of document directories and how their relationships evolve over time.)
  • Anforderung-zu- Quelltextrückverfolgbarkeit mittels Wort- und Quelltexteinbettungen  + (Traceability information helps developers understand software systems and serves as a basis for further techniques such as coverage analysis. This thesis investigates how embeddings can be used for automatic traceability between requirements and source code. To this end, different options are considered for representing the requirements and the source code with embeddings and then mapping them onto each other in order to create traceability links between them. For a class, for example, there are many options regarding which information or which class elements are taken into account when computing a source-code embedding. For the mapping, similarity values between the embeddings are computed with a metric; these values are used to decide whether a traceability link exists between the artifacts the embeddings represent. In the evaluation, the different options for embedding and mapping were compared with each other and with other works. With respect to the F1 score, source-code embeddings based on class names, method signatures and method comments, together with mapping methods that use the Word Mover's Distance as similarity metric, produce the best cross-project results. The best method achieves an F1 score of 60.1% on the LibEST project, which consists of 14 source-code artifacts and 52 requirements artifacts. The best cross-project configuration achieves an average F1 score of 39%.)
  • Bestimmung der semantischen Funktion von Quelltextabschnitten  + (Traceability information between source code and requirements enables tools to better support programmers in navigating and editing source code. To establish such links automatically, the semantics of the requirements and the source code must be understood. In this thesis, a method is developed for describing the shared semantics of groups of program elements. The method is based on the statistical topic model LDA and produces a set of keywords as a description of this semantics. Natural-language content in the source code of the groups is analyzed and used to train the model. To compensate for uncertainty in the choice of LDA parameters and to improve the robustness of the keyword set, several LDA models are combined. The developed method was evaluated in a user study. Overall, an average recall of 0.73 and an average F1 score of 0.56 were achieved.)
  • Improving Document Information Extraction with efficient Pre-Training  + (SAP Document Information Extraction (DOX) is a service to extract logical entities from scanned documents based on the well-known Transformer architecture. The entities comprise header information such as document date or sender name, and line items from tables on the document with fields such as line item quantity. The model currently needs to be trained on a huge number of labeled documents, which is impractical. This also hinders the deployment of the model at large scale, as it cannot easily adapt to new languages or document types. Recently, pretraining large language models with self-supervised learning techniques has shown good results as a preliminary step and allows reducing the amount of labels required in follow-up steps. However, to generalize self-supervised learning to document understanding, we need to take into account different modalities: text, layout and image information of documents. How to do that efficiently and effectively is not yet clear. The goal of this thesis is to come up with a technique for self-supervised pretraining within SAP DOX. We will evaluate our method and design decisions against SAP data as well as public data sets. Besides the accuracy of the extracted entities, we will measure to what extent our method lets us lower label requirements.)
  • Wichtigkeit von Merkmalen für die Klassifikation von SAT-Instanzen (Proposal)  + (SAT is one of the most important NP-hard problems in theoretical computer science, which is why research is particularly interested in finding especially efficient solving methods for it. To this end, instances are classified by grouping similar problem instances into instance families, a process that is to be automated using machine learning methods. Among other things, this bachelor's thesis deals with the following questions: With which (most important) features can an instance be assigned to a particular family? How does one build a good classifier for this problem? What do instances that are often misclassified have in common? What does a sensible division into families look like?)
  • Verification of Access Control Policies in Software Architectures  + (Security in software systems becomes more important as systems become more complex and connected. Therefore, it is desirable to conduct security analyses on an architectural level. One possible approach in this direction is data-based privacy analysis. Such approaches are evaluated on case studies. Most exemplary systems for case studies are developed specially for the approach under investigation; therefore, it is not that simple to find a fitting case study. This thesis introduces a method to create usable case studies for data-based privacy analyses. The method is applied to the Community Component Modeling Example (CoCoME). The evaluation is based on a GQM plan and shows that the method is applicable. It is also shown that the created case study is able to check whether illegal information flow is present in CoCoME. Additionally, it is shown that the provided meta model extension is able to express the case study.)
  • Beyond Similarity - Dimensions of Semantics and How to Detect them  + (Semantic similarity estimation is a widely used and well-researched area. Current state-of-the-art approaches estimate text similarity with large language models. However, semantic similarity estimation often ignores fine-grained differences between semantically similar sentences. This thesis proposes the concept of semantic dimensions to represent fine-grained differences between two sentences. A workshop with domain experts identified ten semantic dimensions. From the workshop insights, a model for semantic dimensions was created. Afterward, 60 participants decided via a survey which semantic dimensions are useful to users. Detectors for the five most useful semantic dimensions were implemented in an extendable framework. To evaluate the semantic dimension detectors, a dataset of 200 sentence pairs was created. The detectors reached an average F1 score of 0.815.)
  • Faster Feedback Cycles via Integration Testing Strategies for Serverless Edge Computing  + (Serverless computing allows software engineers to develop applications in the cloud without having to manage the infrastructure. The infrastructure is managed by the cloud provider. Therefore, software engineers treat the underlying infrastructure as a black box and focus on the business logic of the application. This lack of inside knowledge leads to an increased testing difficulty as applications tend to be dependent on the infrastructure and other applications running in the cloud environment. While isolated unit and functional testing is possible, integration testing is a challenge, as reliable results are often only achieved after deploying to the deployment environment because infrastructure specifics and other cloud services are only available in the actual cloud environment. This leads to a laborious development process. For this reason, this thesis deals with creating testing strategies for serverless edge computing to reduce feedback cycles and speed up development time. For evaluation, the developed testing strategies are applied to Lambda@Edge in AWS.)
  • Influence of Load Profile Perturbation and Temporal Aggregation on Disaggregation Quality  + (Smart Meters are becoming more and more popular. With Smart Meters, new privacy issues arise. A prominent privacy issue is disaggregation, i.e., the determination of appliance usage from aggregated Smart Meter data. The goal of this thesis is to evaluate load profile perturbation and temporal aggregation techniques regarding their ability to prevent disaggregation. To this end, we used a privacy operator framework for temporal aggregation and perturbation, and the NILM TK framework for disaggregation. We evaluated the influence of the operators from the framework on disaggregation quality, both individually and in combination. One main observation is that the de-noising operator from the framework prevents disaggregation best.)
  • Modelling and Enforcing Access Control Requirements for Smart Contracts  + (Smart contracts are software systems employing the underlying blockchain technology to handle transactions in a decentralized and immutable manner. Due to the immutability of the blockchain, smart contracts cannot be upgraded after their initial deployment. Therefore, reasoning about a contract’s security aspects needs to happen before deployment. One common vulnerability of smart contracts is improper access control, which enables entities to modify data or employ functionality they are prohibited from accessing. Due to the nature of the blockchain, access to data, represented through state variables, can only be achieved by employing the contract’s functions. To correctly restrict access on the source code level, we improve the approach by Reiche et al., who enforce access control policies based on a model on the architectural level. This work aims at correctly enforcing role-based access control (RBAC) policies for Solidity smart contract systems on the architectural and source code level. We extend the standard RBAC model by Sandhu, Ferraiolo, and Kuhn to also incorporate insecure information flows and authorization constraints for roles. We create a metamodel to capture the concepts necessary to describe and enforce RBAC policies on the architectural level. The policies are enforced in the source code by translating the model elements to formal specifications. For this purpose, an automatic code generator is implemented. To reason about the implemented smart contracts on the source code level, tools like solc-verify and Slither are employed and extended. Furthermore, we outline the development process resulting from the presented approach. To evaluate our approach and uncover problems and limitations, we conduct a case study using the three smart contract software systems Augur, Fizzy and Palinodia. Additionally, we apply a metamodel coverage analysis to reason about the metamodel’s and the generator’s completeness. Furthermore, we provide an argumentation concerning the approach’s correct enforcement. This evaluation shows how correct enforcement can be achieved under certain assumptions and when information flows are not considered. The presented approach detects 100% of the violations manually introduced during the case study against the underlying RBAC policies. Additionally, the metamodel is expressive enough to describe RBAC policies and contains no unnecessary elements, since approximately 90% of the created metamodel is covered by the implemented generator. We identify and describe limitations such as oracles or public variables.)
  • Methodology for Evaluating a Domain-Specific Model Transformation Language  + (As soon as a system is described by several models, these different descriptions can contradict each other. Model transformations are a suitable means to avoid this, even when the models are edited in parallel by several parties. There is by now a rich body of research on transforming changes between two models. However, the challenge of developing model transformations between more than two models has so far been insufficiently solved. The Gemeinsamkeiten (commonalities) language is a declarative, domain-specific programming language with which multidirectional model transformations can be programmed by combining bidirectional mapping specifications. Since it has not yet been empirically validated, it remains an open question whether the language is suitable for developing realistic model transformations and which advantages it offers over an alternative programming language for model transformations. In this thesis, I design a case study with which the Gemeinsamkeiten language is evaluated. I discuss the methodology and the validity of this case study. Furthermore, I present congruence, a new property for bidirectional model transformations: it ensures that the two directions of a transformation are compatible with each other. I derive from practical examples why we can expect transformations to usually be congruent. I then discuss the design decisions behind a testing strategy with which two model transformation implementations that both realize the same consistency specification can be tested. The testing strategy also includes a practical use of congruence. Finally, I present improvements to the Gemeinsamkeiten language. Together, the contributions of this thesis make it possible to carry out a case study on programming languages for model transformations, which allows a better understanding of the advantages of these languages. Congruence can improve the usability of arbitrary model transformations and could prove useful for constructing model transformation networks. The testing strategy can be applied to arbitrary acceptance tests for model transformations.)
  • Modeling of Security Patterns in Palladio  + (Software itself and the contexts it is used in typically evolve over time. Analyzing and ensuring the security of evolving software systems in contexts that are also evolving poses many difficulties. In my thesis, I define a number of goals and propose processes for the elicitation of attacks, their prerequisites and mitigating security patterns for a given architecture model, and for annotating the model with security-relevant information. I show how this information can be used to analyze the system's security with regard to modeled attacks, using an attack validity algorithm I specify. The process and the algorithm are used in a case study on CoCoME in order to show the applicability of each of them and to analyze the fulfillment of the previously stated goals. Security catalog meta-models and instances of catalogs containing a number of elements have been provided.)
  • Multi-model Consistency through Transitive Combination of Binary Transformations  + (Software systems are usually described through multiple models that address different development concerns. These models can contain shared information, which leads to redundant representations of the same information and dependencies between the models. These representations of shared information have to be kept consistent for the system description to be correct. The evolution of one model can cause inconsistencies with regard to other models of the same system. Therefore, some mechanism of consistency restoration has to be applied after changes occur. Manual consistency restoration is error-prone and time-consuming, which is why automated consistency restoration is necessary. Many existing approaches use binary transformations to restore consistency for a pair of models, but systems are generally described through more than two models. To achieve multi-model consistency preservation with binary transformations, they have to be combined through transitive execution. In this thesis, we explore the transitive combination of binary transformations and study the resulting problems. We develop a catalog of six failure potentials that can manifest in failures with regard to consistency between the models. The knowledge about these failure potentials can inform a transformation developer about possible problems arising from the combination of transformations. One failure potential is a consequence of the transformation network topology and the used domain models; it can only be avoided through topology adaptations. Another failure potential emerges when two transformations try to enforce conflicting consistency constraints; this can only be repaired through adaptation of the original consistency constraints. Both failure potentials are case-specific and cannot be solved without knowing which transformations will be combined. Furthermore, we develop two transformation implementation patterns to mitigate two other failure potentials. These patterns can be applied by the transformation developer to an individual transformation definition, independently of the combination scenario. For the remaining two failure potentials, no general solution has been found yet and further research is necessary. We evaluate the findings with a case study that involves two independently developed transformations between a component-based software architecture model, a UML class diagram and its Java implementation. All failures revealed by the evaluation could be classified with the identified failure potentials, which gives an initial indicator for the completeness of our failure potential catalog. The proposed patterns prevented all failures of their targeted failure potentials, which made up 70% of all observed failures; this shows that the developed implementation patterns are applicable and help to mitigate issues occurring from transitively combining binary transformations.)
  • Abstrakte und konsistente Vertraulichkeitsspezifikation von der Architektur bis zum Code  + (Software systems can process sensitive information. To guarantee its confidentiality, both the architecture model and its implementation can be analyzed with respect to information flow. For this purpose, a confidentiality specification is defined. Both model levels hold a representation of the same specification. As the system evolves, the specification can change on both levels and thus come to contain contradictory statements. To verify the confidentiality of the information, the specification elements in the source code must additionally be translated into a further language. This bachelor's thesis deals with the transformation between the different representations of a software system's confidentiality specification. This includes a mapping concept for keeping the confidentiality specification consistent and a translation into a language that can be used for verification.)
  • Automatisiertes GUI-basiertes Testen einer Passwortmanager-Applikation mit Neuroevolution  + (Software testing is essential for ensuring the quality and functionality of software products. Both manual and automated methods exist. However, automated techniques as well as human and script-based testing have limitations regarding cost efficiency and time expenditure. Monkey testing, characterized by random clicks on the user interface, often does not sufficiently take the application's logic into account. This bachelor's thesis focuses on an automated neuroevolutionary testing method that uses neural networks as test agents and refines them over several generations by means of evolutionary algorithms. To evaluate these agents and to compare them with monkey testing, a simulated version of a password manager application was used, and a reward structure was implemented within the simulated application. The results show that the neuroevolutionary testing method performs significantly better than monkey testing with respect to the rewards achieved. This leads to better consideration of the application logic in the testing process.)
  • GUI-basiertes Testen einer Lernplattform-Anwendung durch Nutzung von Neuroevolution  + (Software testing is necessary to ensure the quality and functionality of software artifacts. There are both automated and manual testing methods. However, automated techniques as well as human and script-based testing scale less well in terms of time and cost. Monkey testing, characterized by random clicks on the user interface, often does not sufficiently take the application logic into account. The focus of this bachelor's thesis is an automated neuroevolutionary testing method that uses neural networks as test agents and improves them over several generations with the help of evolutionary algorithms. To enable the training of the agents and a comparison with monkey testing, a simulated version of the learning platform Anki was implemented. To assess the test agents, a reward structure was developed in the simulated application. The results show that the neuroevolutionary testing method performs significantly better than monkey testing in terms of the rewards achieved. As a result, the application logic is better taken into account in the testing process.)
  • Entity Linking für Softwarearchitekturdokumentation  + (Software architecture documentation contains technical terms from the domain of software engineering. If these terms are identified and linked to the matching entries in a database, humans and text-processing systems can use this information to better understand the documentation. The technical terms in documentation correspond to entity mentions in the text. In this work, we present our domain-specific entity linking system. The system links entity mentions within software architecture documentation to the corresponding entities in a knowledge base. It comprises a domain-specific knowledge base, a preprocessing module and an entity linking component.)
  • Entwicklung einer Entwurfszeit-DSL zur Formalisierung von Runtime Adaptationsstrategien für SAS zum Zweck der Strategie-Optimierung  + (Today's software systems are becoming increasingly complex and are subject to ever more varying conditions. Self-adaptive systems are therefore gaining importance, since they can adapt dynamically to new conditions by making changes to themselves. Domain-specific modeling languages (DSLs) for formalizing adaptation strategies are an important means of modeling and optimizing the design of feedback loops of self-adaptive software systems. This proposal outlines a bachelor's thesis that addresses the question of how the optimization of adaptation strategies can be described in a DSL at design time.)
  • Preventing Code Insertion Attacks on Token-Based Software Plagiarism Detectors  + (Some students tasked with mandatory programming assignments lack the time or dedication to solve the assignment themselves. Instead, they plagiarize a peer's solution by slightly modifying the code. However, numerous tools exist that assist instructors in detecting these kinds of plagiarism. The most widely used plagiarism detection tools are token-based plagiarism detectors. They are resilient against many types of obfuscation attacks, such as renaming variables or whitespace modifications, but they are susceptible to the insertion of lines of code that do not affect the program flow or result. The working assumption used to be that successfully obfuscating plagiarism takes more effort and skill than solving the assignment itself. This assumption was broken by automated plagiarism generators, which exploit this weakness. This work aims to develop mechanisms against code insertions that can be integrated directly into existing token-based plagiarism detectors. We first develop mechanisms that negate the effect of many types of code insertion and then implement them prototypically in a state-of-the-art plagiarism detector. We evaluate our implementation on a dataset consisting of real student submissions and automatically generated plagiarism. We show that with our mechanisms the similarity rating of automatically generated plagiarism increases drastically; consequently, the plagiarism generator we use fails to create usable plagiarism.)
  • Software Plagiarism Detection on Intermediate Representation  + (Source code plagiarism is a widespread problem in computer science education. To counteract this, software plagiarism detectors can help identify plagiarized code. Most state-of-the-art plagiarism detectors are token-based. To support a new programming language, it is common to design and implement a new dedicated language module. This process can be time-consuming; furthermore, it is unclear whether it is even necessary. In this thesis, we evaluate the necessity of dedicated language modules for Java and C/C++ and derive conclusions for designing new ones. To achieve this, we create a language module for the intermediate representation of LLVM. For the evaluation, we compare it to two existing dedicated language modules in JPlag. While our results show that dedicated language modules are better for plagiarism detection, language modules for intermediate representations show better resilience to obfuscation attacks.)
  • Portables Auto-Tuning paralleler Anwendungen  + (Both offline and online tuning are common approaches to the automatic optimization of parallel applications. Each has its own advantages and disadvantages: offline tuning has minimal negative impact on the application's runtime, but the tuned parameter values are only usable on hardware that is known in advance. Online tuning, in contrast, provides dynamic parameter values that are determined while the application runs and thus on the target hardware, which, however, can affect the application's runtime negatively. We attempt to combine the advantages of both approaches by reusing parameter configurations that were optimized in advance on the target hardware, possibly even with a different application. We evaluate both the hardware and the application portability of these configurations using five example applications.)
  • DomainML: A modular framework for domain knowledge-guided machine learning  + (Standard, data-driven machine learning approaches learn relevant patterns solely from data. In some fields, however, learning only from data is not sufficient. A prominent example is healthcare, where the problem of data insufficiency for rare diseases is tackled by integrating high-quality domain knowledge into the machine learning process. Despite the existing work in the healthcare context, making general observations about the impact of domain knowledge is difficult, as different publications use different knowledge types, prediction tasks and model architectures. It further remains unclear whether the findings in healthcare are transferable to other use cases, and how much intellectual effort such a transfer requires. With this thesis we introduce DomainML, a modular framework to evaluate the impact of domain knowledge on different data science tasks. We demonstrate the transferability and flexibility of DomainML by applying the concepts from healthcare to cloud system monitoring. We then observe how domain knowledge impacts the model's prediction performance across both domains, and suggest how DomainML could further be used to refine both the given domain knowledge and the quality of the underlying dataset.)
  • State of the Art: Multi Actor Behaviour and Dataflow Modelling for Dynamic Privacy  + (State-of-the-art talk given as part of Praxis der Forschung.)
  • Data-Preparation for Machine-Learning Based Static Code Analysis  + (Static Code Analysis (SCA) has become an integral part of modern software development, especially since the rise of automation in the form of CI/CD. It is an ongoing question how machine learning can best help improve the state of SCA and thus facilitate maintainable, correct, and secure software. However, machine learning needs a solid foundation to learn on. This thesis proposes an approach to build that foundation by mining data on software issues from real-world code. We show how we used that concept to analyze over 4000 software packages and generate over two million issue samples. Additionally, we propose a method for refining this data and apply it to an existing machine learning SCA approach.)
  • Creating Study Plans by Generating Workflow Models from Constraints in Temporal Logic  + (Students are confronted with a huge number of regulations when planning their studies at a university. It is challenging for them to create a personalized study plan while still complying with all official rules. The STUDYplan software aims to overcome these difficulties by enabling an intuitive and individual modeling of study plans. A study plan can be interpreted as a sequence of business process tasks that represent courses, which makes it possible to reuse existing work from the business process domain. This thesis focuses on the idea of synthesizing business process models from declarative specifications that capture official and user-defined regulations for a study plan. We provide an elaborated approach for the modeling of study plan constraints and a generation concept specialized to study plans. This work motivates, discusses, partially implements and evaluates the proposed approach.)
  • A comparative study of subgroup discovery methods  + (Subgroup discovery is a data mining technique that is used to extract interesting relationships in a dataset related to a target variable. These relationships are described in the form of rules. Multiple subgroup discovery techniques have been developed over the years. This thesis establishes a comparative study between a number of these techniques in order to identify the state-of-the-art methods. It also analyses the effects discretization has on them as a preprocessing step. Furthermore, it investigates the effect of hyperparameter optimization on these methods. Our analysis showed that PRIM, DSSD, Best Interval and FSSD outperformed the other subgroup discovery methods evaluated in this study and are to be considered state-of-the-art. It also shows that discretization offers an efficiency improvement for methods that do not employ internal discretization, while it has a negative impact on the quality of subgroups generated by methods that perform it internally. The results finally demonstrate that Apriori-SD and SD-Algorithm were the most positively affected by the hyperparameter optimization.)
  • Software Testing  + (TBA)
  • Exploring Modern IDE Functionalities for Consistency Preservation  + (TBA)
  • Exploring the Traceability of Requirements and Source Code via LLMs  + (TBA)
  • Data-Driven Approaches to Predict Material Failure and Analyze Material Models  + (The prediction of material failure is useful in many industrial contexts such as predictive maintenance, where it helps reduce costs by preventing outages. However, failure prediction is a complex task. Typically, material scientists need to create a physical material model to run computer simulations. In real-world scenarios, the creation of such models is often not feasible, as the measurement of exact material parameters is too expensive. Material scientists can use material models to generate simulation data. These data sets are multivariate sensor value time series. In this thesis we develop data-driven models to predict upcoming failure of an observed material. We identify and implement recurrent neural network architectures, as recent research indicated that these are well suited for predictions on time series. We compare the prediction performance with traditional models that do not directly predict on time series but involve an additional step of feature calculation. Finally, we analyze the predictions to find abstractions in the underlying material model that lead to unrealistic simulation data and thus impede accurate failure prediction. Knowing such abstractions empowers material scientists to refine the simulation models. The updated models would then contain more relevant information and make failure prediction more precise.)
  • Improving SAP Document Information Extraction via Pretraining and Fine-Tuning  + (Techniques for extracting relevant information from documents have made significant progress in recent years and became a key task in the digital transformation. With deep neural networks, it became possible to process documents without specifying hard-coded extraction rules or templates for each layout. However, such models typically have a very large number of parameters. As a result, they require many annotated samples and long training times. One solution is to create a basic pretrained model using self-supervised objectives and then to fine-tune it using a smaller document-specific annotated dataset. However, implementing and controlling the pretraining and fine-tuning procedures in a multi-modal setting is challenging. In this thesis, we propose a systematic method that consists of pretraining the model on large unlabeled data and then fine-tuning it with a virtual adversarial training procedure. For the pretraining stage, we implement an unsupervised informative masking method, which improves upon standard Masked-Language Modelling (MLM). In contrast to randomly masking tokens as in MLM, our method exploits Point-Wise Mutual Information (PMI) to calculate individual masking rates based on statistical properties of the data corpus, e.g., how often certain tokens appear together on a document page; a sketch of this idea follows at the end of this results list. We test our algorithm in a typical business context at SAP and report an overall improvement of 1.4% on the F1-score for extracted document entities. Additionally, we show that the implemented methods improve the training speed, robustness and data-efficiency of the algorithm.)
  • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Grams  + (Temporal text corpora like the Google Ngram dataset usually incorporate a vast number of words and expressions, called ngrams, and their respective usage frequencies over the years. The large quantity of entries complicates working with the dataset, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus to have a sufficient dataset and achieve acceptable results. We propose various compression methods to reduce the absolute number of ngrams in the corpus. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram dataset serve as the basis for our compression method design and experimental validation. The goal is to find compression methods that reduce the complexity of queries on the corpus while still maintaining good results.)
  • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Gram  + (Temporal text corpora like the Google Ngram Data Set usually incorporate a vast number of words and expressions, called ngrams, and their respective usage frequencies over the years. The large quantity of entries complicates working with the data set, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus to have a sufficient data set and achieve acceptable query results. We propose various compression methods to reduce the total number of ngrams in the corpus. Specifically, we propose compression methods that, given an input dictionary of target words, find a compression tailored for queries on a specific topic. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram Data Set serve as the basis for our compression method design and experimental validation.)
  • Implementation and Evaluation of CHQL Operators in Relational Database Systems  + (The IPD defined CHQL, a query algebra that enables formalizing queries about conceptual history. CHQL is currently implemented in MapReduce, which offers less flexibility for query optimization than relational database systems do. The scope of this thesis is to implement the given operators in SQL and to analyze performance differences by identifying limiting factors and by optimizing queries on the logical and physical level. In the end, we will provide efficient query plans and fast operator implementations to execute CHQL queries in relational database systems.)
  • The Kconfig Variability Framework as a Feature Model  + (The Kconfig variability framework is used to develop highly variable software such as the Linux kernel, ZephyrOS and NuttX. Kconfig allows developers to break their software down into modules and to define the dependencies between these modules, so that when a concrete configuration is created, the semantic dependencies between the selected modules are fulfilled, ensuring that the resulting software product can function. Kconfig has often been described as a tool to define software product lines (SPLs), which often occur within the context of feature-oriented programming (FOP). In this paper, we introduce methods to transform Kconfig files into feature models so that the semantics of the model defined in a Kconfig file are preserved. The resulting feature models can be viewed with FeatureIDE, which allows further analysis of the Kconfig file, such as the detection of redundant dependencies and cyclic dependencies.)
  • Review of data efficient dependency estimation  + (The amount and complexity of data collected in the industry is increasing, and data analysis rises in importance. Dependency estimation is a significant part of knowledge discovery and allows strategic decisions based on this information. Multiple examples highlight the importance of dependency estimation: knowing that there is a correlation between the regular dose of a drug and the health of a patient helps to understand the impact of a newly manufactured drug; knowing how the case material, brand, and condition of a watch influence the price on an online marketplace can help to buy watches at a good price; and material science can use dependency estimation to predict many properties of a material before it is synthesized in the lab, so fewer experiments are necessary. Many dependency estimation algorithms require a large amount of data for a good estimation. But data can be expensive; for example, experiments in material science consume material and take time and energy. Since data collection is expensive, algorithms need to be data efficient. But there is a trade-off between the amount of data and the quality of the estimation. With a lack of data comes an uncertainty of the estimation. However, the algorithms do not always quantify this uncertainty. As a result, we do not know whether we can rely on the estimation or whether we need more data for an accurate estimation. In this bachelor's thesis we compare different state-of-the-art dependency estimation algorithms using a list of criteria addressing these challenges and more. We partly developed the criteria ourselves and partly took them from relevant publications. The existing publications formulated many of the criteria only qualitatively; part of this thesis is to make these criteria quantitatively measurable where possible, and to come up with a systematic approach of comparison for the rest. From 14 selected criteria, we focus on criteria concerning data efficiency and uncertainty estimation, because they are essential for lowering the cost of dependency estimation, but we will also check other criteria relevant for the application of the algorithms. As a result, we will rank the algorithms with respect to the different aspects given by the criteria, and thereby identify potential for improvement of the current algorithms. We do this in two steps. First, we check general criteria in a qualitative analysis: whether an algorithm is capable of guided sampling, whether it is an anytime algorithm, and whether it uses incremental computation to enable early stopping, all of which leads to more data efficiency. We then conduct a quantitative analysis on well-established and representative datasets for the dependency estimation algorithms that performed well in the qualitative analysis. In these experiments we evaluate further criteria: the robustness, which is necessary for error-prone data; the efficiency, which saves time in the computation; the convergence, which guarantees an accurate estimation with enough data; and the consistency, which ensures we can rely on an estimation.)
  • Identifying Security Requirements in Natural Language Documents  + (The automatic identification of requirements, and their classification according to their security objectives, can be helpful to derive insights into the security of a given system. However, this task requires significant security expertise to perform. In this thesis, the capability of modern Large Language Models (such as GPT) to replicate this expertise is investigated. This requires the transfer of the model's understanding of language to the given specific task. In particular, different prompt engineering approaches are combined and compared, in order to gain insights into their effects on performance. GPT ultimately performs poorly for the main tasks of identification of requirements and of their classification according to security objectives. Conversely, the model performs well for the sub-task of classifying the security-relevance of requirements. Interestingly, prompt components influencing the format of the model's output seem to have a higher performance impact than components containing contextual information.)
  • Predicting System Dependencies from Tracing Data Instead of Computing Them  + (The concept of Artificial Intelligence for IT Operations combines big data and machine learning methods to replace a broad range of IT operations including availability and performance monitoring of services. In large-scale distributed cloud infrastructures a service is deployed on different separate nodes. As the size of the infrastructure increases in production, the analysis of metrics parameters becomes computationally expensive. We address the problem by proposing a method to predict dependencies between metrics parameters of system components instead of computing them. To predict the dependencies we use time windowing with different aggregation methods and distributed tracing data that contain detailed information for the system execution workflow. In this bachelor thesis, we inspect the different representations of distributed traces, from simple counting of events to more complex graph representations. We compare them with each other and evaluate the performance of such methods.)
  • Change Detection in High Dimensional Data Streams  + (The data collected in many real-world scenarios such as environmental analysis, manufacturing, and e-commerce are high-dimensional and come as a stream, i.e., data properties evolve over time – a phenomenon known as "concept drift". This brings numerous challenges: data-driven models become outdated, and one is typically interested in detecting specific events, e.g., the critical wear and tear of industrial machines. Hence, it is crucial to detect change, i.e., concept drift, to design a reliable and adaptive predictive system for streaming data. However, existing techniques can only detect "when" a drift occurs and neglect the fact that various drifts may occur in different dimensions, i.e., they do not detect "where" a drift occurs. This is particularly problematic when data streams are high-dimensional. The goal of this Master's thesis is to develop and evaluate a framework to efficiently and effectively detect "when" and "where" concept drift occurs in high-dimensional data streams. We introduce stream autoencoder windowing (SAW), an approach based on the online training of an autoencoder, while monitoring its reconstruction error via a sliding window of adaptive size; a simplified sketch of this monitoring loop follows at the end of this results list. We will evaluate the performance of our method against synthetic data, in which the characteristics of drifts are known. We then show how our method improves the accuracy of existing classifiers for predictive systems compared to benchmarks on real data streams.)
  • Automated Test Selection for CI Feedback on Model Transformation Evolution  + (The development of a transformation model also requires appropriate system-level testing to verify its changes. Due to the complex nature of the transformation model, the number of tests increases as the structure and feature description become more detailed. However, executing all test cases for every change is costly and time-consuming. Thus, it is necessary to select a subset of the transformation tests. In this presentation, you will be introduced to a change-based test prioritization and transformation test selection approach for early fault detection.)
  • Statistical Generation of High Dimensional Data Streams with Complex Dependencies  + (The evaluation of data stream mining algorithms is an important task in current research. The lack of a ground-truth data corpus that covers a large number of desirable features (especially concept drift and outlier placement) is the reason why researchers resort to producing their own synthetic data. This thesis proposes a novel framework ("streamgenerator") that allows creating data streams with finely controlled characteristics. The focus of this work is the conceptualization of the framework; however, a prototypical implementation is provided as well. We evaluate the framework by testing our data streams against state-of-the-art dependency measures and outlier detection algorithms.)
  • Statistical Generation of High-Dimensional Data Streams with Complex Dependencies  + (The extraction of knowledge from data streams is one of the most crucial tasks of modern day data science. Due to their nature, data streams are ever evolving, and knowledge derived at one point in time may be obsolete in the next period. The need for specialized algorithms that can deal with high-dimensional data streams and concept drift is prevalent. A lot of research has gone into creating these kinds of algorithms. The problem is the lack of data sets with which to evaluate them; a ground truth for a common evaluation approach is missing. A solution to this could be the synthetic generation of data streams with controllable statistical properties, such as the placement of outliers and the subspaces in which special kinds of dependencies occur. The goal of this bachelor's thesis is the conceptualization and implementation of a framework which can create high-dimensional data streams with complex dependencies.)
  • Theory-guided Load Disaggregation in an Industrial Environment  + (The goal of Load Disaggregation (or Non-intrusive Load Monitoring) is to infer the energy consumption of individual appliances from their aggregated consumption. This facilitates energy savings and efficient energy management, especially in the industrial sector. However, previous research showed that Load Disaggregation underperforms in the industrial setting compared to the household setting. Also, the domain knowledge available about industrial processes remains unused. The objective of this thesis was to improve load disaggregation algorithms by incorporating domain knowledge in an industrial setting. First, we identified and formalized several domain knowledge types that exist in the industry. Then, we proposed various ways to incorporate them into the Load Disaggregation algorithms, including Theory-Guided Ensembling, Theory-Guided Postprocessing, and Theory-Guided Architecture. Finally, we implemented and evaluated the proposed methods.)
  • Tuning of Explainable ArtificialIntelligence (XAI) tools in the field of textanalysis  + (The goal of this bachelor's thesis was to analyse classification results using SHAP, a method published in 2017. Explaining how an artificial neural network makes a decision is an interdisciplinary research subject combining computer science, math, psychology and philosophy. We analysed these explanations from a psychological standpoint, and after presenting our findings we propose a method to improve the interpretability of text explanations using text hierarchies, without losing much, if any, accuracy. A secondary goal was to test a framework developed to analyse a multitude of explanation methods. This framework will be presented next to our findings, along with how to use it to create your own analysis. This bachelor's thesis is addressed at people familiar with artificial neural networks and other machine learning methods.)
  • Specifying and Maintaining the Correspondence between Architecture Models and Runtime Observations  + (The goal of this thesis is to provide a generic concept of a correspondence model (CM) to map high-level model elements to corresponding low-level model elements, and to generate this mapping during implementation of the high-level model using a correspondence model generator (CMG). In order to evaluate our approach, we implement and integrate the CM for the iObserve project. Further, we implement the proposed CMG and integrate it into ProtoCom, the source code generator used by the iObserve project. We first evaluate the feasibility of this approach by checking whether such a correspondence model can be specified as desired and generated by the CMG. Secondly, we evaluate the accuracy of the approach by checking the generated correspondences against a reference model.)
  • Intelligent Match Merging to Prevent Obfuscation Attacks on Software Plagiarism Detectors  + (The increasing number of computer science students has prompted educators to rely on state-of-the-art source code plagiarism detection tools to deter the submission of plagiarized coding assignments. While these token-based plagiarism detectors are inherently resilient against simple obfuscation attempts, recent research has shown that obfuscation tools empower students to easily modify their submissions, thus evading detection. These tools automatically use dead code insertion and statement reordering to avoid discovery. The emergence of ChatGPT has further raised concerns about its obfuscation capabilities and the need for effective mitigation strategies. Existing defence mechanisms against obfuscation attempts are often limited by their specificity to certain attacks or by their dependence on programming languages, requiring tedious and error-prone reimplementation. In response to this challenge, this thesis introduces a novel defence mechanism against automatic obfuscation attacks called match merging. It leverages the fact that obfuscation attacks change the token sequence to split up matches between two submissions so that the plagiarism detector discards the broken matches. Match merging reverts the effects of these attacks by intelligently merging neighboring matches based on a heuristic designed to minimize false positives; a simplified sketch follows at the end of this results list. Our method's resilience against classic obfuscation attacks is demonstrated through evaluations on diverse real-world datasets, including undergrad assignments and competitive coding challenges, across six different attack scenarios. Moreover, it significantly improves detection performance against AI-based obfuscation. What sets our method apart is its language- and attack-independence, while its minimal runtime overhead makes it seamlessly compatible with other defence mechanisms.)
  • Efficient k-NN Search of Time Series in Arbitrary Time Intervals  + (The k nearest neighbors (k-NN) of a time series are the k closest sequences within a dataset regarding a distance measure. Often, not the entire time series, but only specific time intervals are of interest, e.g., to examine phenomena around special events. While numerous indexing techniques support the k-NN search of time series, none of them is designed for an efficient interval-based search. This work presents the novel index structure Time Series Envelopes Index Tree (TSEIT), which significantly speeds up the k-NN search of time series in arbitrary user-defined time intervals; a brute-force baseline for this query type is sketched at the end of this results list.)
  • Reinforcement Learning for Solving the Knight’s Tour Problem  + (The knight's tour problem is an instance of the Hamiltonian path problem, a typical NP-hard problem. A knight makes L-shaped moves on a chessboard and tries to visit all the squares exactly once. The tour is closed if the knight can finish a complete tour and end on a square that is a neighbour of its starting square; otherwise, it is open. Many algorithms and heuristics have been proposed to solve this problem. The most well-known one is Warnsdorff's heuristic, whose idea is to greedily move to the square with the fewest possible onward moves; a sketch of it follows at the end of this results list. Although this heuristic is fast, it does not always return a closed tour, and it only works on boards of certain dimensions. Due to its greedy behaviour, it can easily get stuck in a local optimum, as can the other existing approaches. Our goal in this thesis is to come up with a new strategy based on reinforcement learning. Ideally, it should be able to find a closed tour on chessboards of any size. We will consider several approaches: value-based methods, policy optimization and actor-critic methods. Compared to previous work, our approach is non-deterministic and sees the problem as a single-player game with a tradeoff between exploration and exploitation. We will evaluate the effectiveness and efficiency of the existing methods and new heuristics.)
  • Discovering data-driven Explanations  + (The main goal of knowledge discovery is an increase of knowledge from a given set of data. In many cases it is crucial that results are human-comprehensible. Subdividing the feature space into boxes with unique characteristics is a commonly used approach for achieving this goal. The patient rule induction method (PRIM) extracts such "interesting" hyperboxes from a dataset by generating boxes that maximize some class occurrence inside them. However, the quality of the results varies when applied to small datasets. This work will examine to what extent data generators can be used to artificially increase the amount of available data in order to improve the accuracy of the results. Secondly, it will be tested whether probabilistic classification can improve the results when using generated data.)
  • Conception and Design of Privacy-preserving Software Architecture Templates  + (The passing of new regulations like the European GDPR has clarified that in the future it will be necessary to build privacy-preserving systems to protect the personal data of its users. This thesis will introduce the concept of privacy templates to help software designers and architects in this matter. Privacy templates are at their core similar to design patterns and provide reusable and general architectural structures which can be used in the design of systems to improve privacy in early stages of design. In this thesis we will conceptualize a small collection of privacy templates to make it easier to design privacy-preserving software systems. Furthermore, the privacy templates will be categorized and evaluated to classify them and assess their quality across different quality dimensions.)
  • Modellierung und Verifikation von Mehrgüterauktionen als Workflows am Beispiel eines Auktionsdesigns  + (The presentation will be in English. The goal of this thesis was the development of a system for the verification of multi-item auctions modeled as workflows, using a specific auction design as an example. Building on previous work, the clock-proxy auction design was modeled as a workflow and prepared for verification with process verification methods. Many analysis approaches for auction designs already exist, but they are ultimately based on models that allow little variation. For more complex auction formats, such as the multi-item auctions considered in this thesis, these approaches do not offer satisfactory options. Based on the existing techniques, an approach was developed whose focus lies on the data-centric extension of the modeling and of the verification approaches. In a first step, the rules and data were therefore integrated into the workflow model. The challenge was to extract the control and data flow as well as the data and rules from the workflow model with an algorithm and to extend existing transformation algorithms sufficiently. The evaluation of the approach shows that, with the developed software, the overall goal of verifying a workflow by means of properties has been achieved.)
  • Measuring the Privacy Loss with Smart Meters  + (The rapid growth of renewable energy sources and the increased sales of electric vehicles contribute to a more volatile power grid. Energy suppliers rely on data to predict the demand and to manage the grid accordingly. The rollout of smart meters could provide the necessary data. On the other hand, smart meters can leak sensitive information about the customer. Several solutions have been proposed to mitigate this problem. Some depend on privacy measures to calculate the degree of privacy one could expect from a solution. This bachelor's thesis constructs a set of experiments which help to analyse some privacy measures and thereby determine whether the value of a privacy measure increases or decreases with an increase in privacy.)
  • Standardized Real-World Change Detection Data  + (The reliable detection of change points is a fundamental task when analysing data across many fields, e.g., in finance, bioinformatics, and medicine. To define "change points", we assume that there is a distribution, which may change over time, generating the data we observe. A change point then is a change in this underlying distribution, i.e., the distribution coming before a change point is different from the distribution coming after. The principled way to compare distributions, and to find change points, is to employ statistical tests. While change point detection is an unsupervised problem in practice, i.e., the data is unlabelled, the development and evaluation of data analysis algorithms requires labelled data. Only a few labelled real-world data sets are publicly available, and many of them are either too small or have ambiguous labels. Further issues are that reusing data sets may lead to overfitting, and preprocessing (e.g., removing outliers) may manipulate results. To address these issues, van den Burg et al. publish 37 data sets annotated by data scientists and ML researchers and use them for an assessment of 14 change detection algorithms. Yet, there remain concerns due to the fact that these are labelled by hand: Can humans correctly identify changes according to the definition, and can they be consistent in doing so? The goal of this Bachelor's thesis is to algorithmically label their data sets following the formal definition and to also identify and label larger and higher-dimensional data sets, thereby extending their work. To this end, we leverage a non-parametric hypothesis test which builds on Maximum Mean Discrepancy (MMD) as a test statistic, i.e., we identify changes in a principled way; a sketch of the MMD statistic follows at the end of this results list. We will analyse the labels so obtained and compare them to the human annotations, measuring their consistency with the F1 score. To assess the influence of the algorithmic and definition-conform annotations, we will use them to reevaluate the algorithms of van den Burg et al. and compare the respective performances.)
  • Standardized Real-World Change Detection Data Defense  + (The reliable detection of change points is a fundamental task when analyzing data across many fields, e.g., in finance, bioinformatics, and medicine. To define "change points", we assume that there is a distribution, which may change over time, generating the data we observe. A change point then is a change in this underlying distribution, i.e., the distribution coming before a change point is different from the distribution coming after. The principled way to compare distributions, and thus to find change points, is to employ statistical tests. While change point detection is an unsupervised problem in practice, i.e., the data is unlabeled, the development and evaluation of data analysis algorithms requires labeled data. Only a few labeled real-world data sets are publicly available, and many of them are either too small or have ambiguous labels. Further issues are that reusing data sets may lead to overfitting, and preprocessing may manipulate results. To address these issues, Burg et al. publish 37 data sets annotated by data scientists and ML researchers and assess 14 change detection algorithms on them. Yet, there remain concerns due to the fact that these are labeled by hand: Can humans correctly identify changes according to the definition, and can they be consistent in doing so?)
  • Assessing Word Similarity Metrics For Traceability Link Recovery  + (The software development process usually involves different artifacts that each describe different parts of the whole software system. Traceability Link Recovery is a technique that aids the development process by establishing relationships between related parts from different artifacts. Artifacts that are expressed in natural language are more difficult for machines to understand and therefore pose a challenge to this link recovery process. A common approach to link elements from different artifacts is to identify similar words using word similarity measures. ArDoCo is a tool that uses word similarity measures to recover trace links between natural language software architecture documentation and formal architectural models. This thesis assesses the effect of different word similarity measures on ArDoCo. The measures are evaluated using multiple case studies. Precision, recall, and encountered challenges for the different measures are reported as part of the evaluation.)
  • Feedback Mechanisms for Smart Systems  + (The talk will be held remotely from Zurich at https://global.gotomeeting.com/join/935923965 and will be streamed to room 348. You can attend via GotoMeeting or in person in room 348. Feedback mechanisms have not yet been sufficiently researched in the context of smart systems. From the research and the industrial perspective, this motivates investigations on how users could be supported to provide appropriate feedback in the context of smart systems. A key challenge for providing such feedback means in the smart system context might be to understand and consider the needs of smart system users for communicating their feedback. Thesis Goal: The goal of this thesis is the creation of innovative feedback mechanisms that are tailored to a specific context within the domain of smart systems. Already existing feedback mechanisms for software in general and smart systems in particular will be assessed, and the users' needs regarding those mechanisms will be examined. Based on this, improved feedback mechanisms will be developed, either by improving on existing ones or by inventing and implementing new concepts. The overall aim of these innovative feedback mechanisms is to enable smart system users to effectively and efficiently give feedback in the context of smart systems.)
  • Flexible User-Friendly Trip Planning Queries  + (Users of location-based services often want to find short routes that pass through multiple Points-of-Interest (PoIs); consequently, developing trip planning queries that can find the shortest route passing through user-specified categories has attracted considerable attention. If multiple PoI categories, e.g., restaurant and shopping mall, are given in an ordered list (i.e., a category sequence), the trip planning query searches for a sequenced route that passes PoIs matching the user-specified categories in order. Existing approaches find the shortest route based on the user query. A major problem with these approaches is that they only take the order of the PoIs into account and output routes that match the sequence perfectly. However, users who are interested in applying more constraints, such as considering the hierarchy of the PoIs and the relationships among sequence points, cannot express their wishes in the form of user queries. The example below illustrates the problem. Example: A user is interested in visiting three department stores (DS), but she needs to have some food after each visit. It is important for the user to visit three different department stores, but the restaurants could be the same. How could the user express her needs to a trip planning system? The topic of this bachelor's thesis is to design such a language for a trip planning system, which enables the user to express her needs in the form of user queries in a flexible manner.)
  • Development and evaluation of efficient kNN search of time series subsequences using the example of the Google Ngram data set  + (There are many data structures and indices that speed up kNN queries on time series. The existing indices are designed to work on the full time series only. In this thesis we develop a data structure that allows speeding up kNN queries in an arbitrary time range, i.e. for an arbitrary subsequence.)
  • Evaluation of a Reverse Engineering Approach in the Context of Component-Based Software Systems  + (This thesis aims to evaluate the component architectures that reverse engineering generates for component-based software systems. The evaluation method involves performing a manual analysis of the respective software systems and then comparing the component architecture obtained through the manual analysis with the results of reverse engineering. The goal is to evaluate a number of parameters, with a focus on correctness, related to the results of reverse engineering. This thesis presents the specific steps and considerations involved in manual analysis. It also performs manual analysis on selected software systems that have already undergone reverse engineering analysis and compares the results to evaluate the differences between reverse engineering and the ground truth. In summary, this thesis evaluates the accuracy of reverse engineering by contrasting manual analysis with reverse engineering in the analysis of software systems, and provides direction and support for the future development of reverse engineering.)
  • Blueprint for the Transition from Static to Dynamic Deployment  + (This thesis defines a blueprint describing a successful ad-hoc deployment with generally applicable rules, thus providing a basis for further developments. The blueprint itself is based on the experience of developing a Continuous Deployment system, the subsequent tests and the continuous user feedback. In order to evaluate the blueprint, the blueprint-based dynamic system was compared with the previously static deployment, and a user survey was conducted. The result of the study shows that the rules described in the blueprint have far-reaching consequences and generate additional value for the users during deployment.)
  • Developing a Database Application to Compare the Google Books Ngram Corpus to German News Corpora  + (This thesis focuses on the development of a database application that enables a comparative analysis between the Google Books Ngram Corpus (GBNC) and German news corpora. The GBNC provides a vast collection of books spanning various time periods, while the German news corpora encompass up-to-date linguistic data from news sources. Such a comparison aims to uncover insights into language usage patterns, linguistic evolution, and cultural shifts within the German language. Extracting meaningful insights from the compared corpora requires various linguistic metrics, statistical analyses and visualization techniques. By identifying patterns, trends and linguistic changes we can uncover valuable information on the evolution of language usage over time. This thesis provides a comprehensive framework for comparing the GBNC to other corpora, showcasing the development of a database application that not only enables valuable linguistic analyses but also sheds light on the composition of the GBNC by highlighting linguistic similarities and differences.)
  • Feature-Based Time Series Generation  + (To build highly accurate and robust machine learning algorithms, practitioners require data of high quality, quantity and diversity. Available time series data sets often lack at least one of these attributes. In cases where collecting more data is not possible or too expensive, data-generating methods help to extend existing data. Generation methods are challenged to add diversity to existing data while providing control to the user over what type of data is generated. Modern methods only address one of these challenges. In this thesis we propose a novel generation algorithm that relies on characteristics of time series to enable control over the generation process. We combine classic interpretable features with unsupervised representation learning by modern neural network architectures. Furthermore, we propose a measure and a visualization for diversity in time series data sets. We show that our approach can create a controlled set of time series as well as add diversity by recombining characteristics across available instances.)
  • Assessing Human Understanding of Machine Learning Models  + (To deploy an ML model in practice, a stakeholder needs to understand the behaviour and implications of this model. To help stakeholders develop this understanding, researchers propose a variety of technical approaches, so-called eXplainable Artificial Intelligence (XAI). Current XAI approaches follow very task- or model-specific objectives. There is currently no consensus on a generic method to evaluate most of these technical solutions. This complicates comparing different XAI approaches and choosing an appropriate solution in practice. To address this problem, we formally define two generic experiments to measure human understanding of ML models. From these definitions we derive two technical strategies to improve understanding, namely (1) training a surrogate model and (2) translating inputs and outputs to effectively perceivable features. We think that most existing XAI approaches only focus on the first strategy. Moreover, we show that established methods to train ML models can also help stakeholders to better understand ML models. In particular, they help to mitigate cognitive biases. In a case study, we demonstrate that our experiments are practically feasible and useful. We suggest that future research on XAI should use our experiments as a template to design and evaluate technical solutions that actually improve human understanding.)
  • Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Final BA Presentation)  + (To evaluate the loss of cognitive ML models, e.g., text or image classifiers, accurately, one usually needs a lot of test data that is annotated manually by experts. For an accurate estimate, the test data should be representative, or else it is hard to assess whether a model overfits, i.e., whether it relies significantly on spurious features of the images to decide on its predictions. With techniques such as Feature Attribution, one can then compare important features that the model sees with one's own expectations and can therefore be more confident about whether or not to trust the model. In this work, we propose a method that estimates the loss of image classifiers based on Feature Attribution techniques. We use the classic approach for loss estimation as our benchmark to evaluate our proposed method. At the end of this work, our analysis reveals that our proposed method seems to yield a loss estimate similar to that of the classic approach, given a good image classifier and representative test data. Based on our experiment, we expect that our proposed method could give a better loss estimate than the classic approach in cases where one has biased test data and an image classifier which overfits.)
  • Integrated Reliability Analysis of Business Processes and Information Systems  + (Today it is hardly possible to find a business process (BP) that does not involve working with an information system (IS). In order to better plan and improve such BPs, a lot of research has been done on the modeling and analysis of BPs. Given the dependency between BPs and IS, such assessment of BPs should take the IS into account. Furthermore, most assessments of BPs only take the functionality, but not the so-called non-functional requirements (NFRs), into account. This is not adequate, since NFRs influence BPs just as they influence IS. In particular, the NFR reliability is interesting for the planning of BPs in business environments. Therefore, the presented approach provides an integrated reliability analysis of BPs and IS. The proposed analysis takes humans, device resources and the impact from the IS into account. In order to model reliability information, it has to be determined which metrics will be used for each BP element. Thus, a structured literature search on reliability modeling and analysis was conducted in seven resources. Through the structured search, 40 papers on modeling and analysis of BP reliability were found; ten of them were classified as relevant for the topic. The structured search revealed that no approach allows modeling the reliability of activities and resources separately from each other. Moreover, there is no common answer on how to model human resources in BPs. In order to enable such an integrated approach, the reliability information of BPs is modeled as an extension of the IntBIIS approach. BP actions get a failure probability, and the resources are extended with two reliability-related attributes. Device resources are annotated with the commonly used MTTF and MTTR in order to provide reliability information. Roles, which are associated with actor resources, are annotated with MTTF and a newly developed MTTRepl. The next step is a reliability analysis of a BP including the IS. Markov chains and reduction rules are used to analyze the BP reliability. This approach is implemented exemplarily in Java in the context of the PCM, which already provides analyses for IS. The result of the analysis is the probability of successful execution of the BP including the IS. An evaluation of the implemented analysis shows that it is possible to analyze the reliability of a BP including all resources and the involved IS. The results show that the reliability prediction is more accurate when BP and IS are assessed through a combined analysis.)
  • Deriving Twitter Based Time Series Data for Correlation Analysis  + (Twitter has been identified as a relevant data source for modelling purposes in the last decade. In this work, our goal was to model the conversational dynamics of inflation development in Germany through Twitter data mining. To accomplish this, we summarized and compared Twitter data mining techniques for time series data from pertinent research. Then, we constructed five models for generating time series from topic-related tweets and user profiles of the last 15 years. Evaluating the models, we observed that several approaches, like modelling for user impact or adjusting for automated Twitter accounts, show promise. Yet, in the scenario of modelling inflation expectation dynamics, these more complex models could not contribute to a higher correlation between the German CPI and the resulting time series compared to a baseline approach.)
  • Entwurf und Umsetzung von Zugriffskontrolle in der Sichtenbasierten Entwicklung  + (To cope with the increasing complexity of technical systems, view-based development processes are used in their development. The views defined in such processes show only the data about the system that is relevant for a specific information need, such as the architecture, the implementation, or an excerpt thereof; they thus reduce the amount of information and thereby simplify working with the system. Besides reducing information, it may also be necessary to restrict access due to missing access permissions, for example in cross-organizational collaboration to implement contractual agreements. Realizing such restrictions requires access control. Existing work uses access control to generate a view but does not allow further views to be defined on top of it; moreover, a general consideration of how to integrate access control into a view-based development process is missing. In this thesis, we therefore present a concept for integrating role-based access control into a view-based development process for arbitrary systems. The concept enables the fine-grained definition and evaluation of access rights for individual model elements of arbitrary metamodels. We implement the concept prototypically in Vitruv, a framework for view-based development, and evaluate the prototype's functionality by means of case studies, in which the access control could be applied successfully. We also discuss how the prototype can be integrated into a general view-based development process.)
  • Konzept eines Dokumentationsassistenten zur Erzeugung strukturierter Anforderungen basierend auf Satzschablonen  + (To maintain the quality and credibility of a product, systematic requirements management is necessary, in which the characteristics of a product are described by requirements. In this thesis, a concept for a documentation assistant was therefore developed with which users can create structured requirements based on the SOPHIST sentence templates. It includes a linguistic processing approach that extracts semantic roles from free text. During the documentation process, the semantic roles are used to identify the most suitable sentence template and to present it to the user as guidance. A second aid is an auto-completion that predicts the next word using Markov chains. In total, around 500 requirements from various sources were used to assess the integrity of the concept. The classification of the text input into a sentence template achieves an F1 score of 0.559, with the sentence template for functional requirements identified best at an F1 score of 0.908. Furthermore, the interplay of the aids was evaluated in a workshop, which showed that applying the concept improves the completeness of requirements and thus increases the quality of the documented requirements.) (See the sketch after this list.)
  • Überführen von Systemarchitekturmodellen in die datenschutzrechtliche Domäne durch Anwenden der DSGVO  + (To protect the personal data that is omnipresent in the digital space from misuse, the EU introduced the General Data Protection Regulation (GDPR), with which every company handling personal data in the digital space must comply. Implementing it in software systems, however, is laborious because it involves the legal domain. In this bachelor's thesis, a transformation from Palladio into a GDPR model was therefore developed in order to ease the communication between the different disciplines.)
  • Kritische Workflows in der Fertigungsindustrie  + (To identify potential inconsistencies between technical models and the workflows in the manufacturing industry that cause them, the entire manufacturing process of an exemplary precision manufacturer was split into individual workflows. Nine expert interviews were then conducted to identify potential inconsistencies between technical models and to assign each of them to the workflow that causes it. In total, 13 potential inconsistencies were presented and their respective origins explained. In a second interview iteration, the company's experts were asked again about each previously identified inconsistency in order to determine its estimated probability of occurrence and its possible effects on preceding or subsequent workflows.)
  • Modellierung von Annahmen in Softwarearchitekturen  + (Undocumented security assumptions can lead to software vulnerabilities being overlooked, since the responsibility for and the reference points of security assumptions are often unclear. The goal of this thesis is therefore to integrate security assumptions into component-based design. Based on expert interviews and constructive grounded theory, a model for this purpose was derived. A feasibility study demonstrates the use of the assumption model.)
  • Komplexe Abbildungen von Formularelementen zur Generierung von aktiven Ontologien  + (Our everyday life is increasingly eased by assistance systems, including the ever more widely used intelligent voice assistants such as Apple's Siri. Instead of tediously comparing flights on various web portals, a voice assistant can do the same work. To process information and forward it to the appropriate web service, the assistance system must understand natural language and represent it formally. Siri uses active ontologies (AOs) for this purpose, which currently have to be created manually with great effort. The EASIER framework architecture developed at KIT addresses the automatic generation of active ontologies from web forms. One challenge in creating AOs from web forms is mapping differently realized form elements with the same semantics, since semantically equal but differently realized concepts should be merged into a single AO node. It is therefore necessary to identify semantically similar form elements. This thesis deals with the automatic identification of such similarities and the construction of mappings between form elements.)
  • Erstellung eines Benchmarks zum Anfragen temporaler Textkorpora zur Untersuchung der Begriffsgeschichte und historischen Semantik  + (Studies in conceptual history (Begriffsgeschichte) are experiencing an upswing. New technological possibilities make it feasible to examine large volumes of text for important supporting passages with machine assistance. To this end, the methodological practices of historians and linguists were examined in order to satisfy their information needs as well as possible. On this basis, new query operators were developed and presented, in combination with existing operators, in a functional benchmark. A query language in particular offers the parameterizability needed to support the historians' variable way of working.)
  • Elicitation and Classification of Security Requirements for Everest  + (Incomplete and unverified requirements can lead to misunderstandings and misconceptions. In the security domain in particular, violated requirements can hint at potential vulnerabilities. To check a software system for vulnerabilities, security requirements are tied to its implementation; for this, specific requirement attributes must be identified and linked to the design. In this thesis, 93 design-level security requirements are elicited for the open-source software EVerest, a full-stack environment for charging stations. Using prompt engineering and fine-tuning, design elements are classified with GPT and their respective mentions are extracted from the elicited requirements. The results indicate that classifying design elements in requirements works well with both prompt engineering and fine-tuning (F1 score: 0.67-0.73). For extracting design elements, however, fine-tuning (F1 score: 0.7) outperforms prompt engineering (F1 score: 0.52). When both tasks are combined, fine-tuning (F1 score: 0.87) likewise outperforms prompt engineering (F1 score: 0.61).)
  • Detecting Outlying Time-Series with Global Alignment Kernels  + (Using outlier detection algorithms, e.g., Support Vector Data Description (SVDD), for detecting outlying time series usually requires extracting domain-specific attributes. However, this indirect approach requires expert knowledge, making SVDD impractical for many real-world use cases. Incorporating global alignment kernels directly into SVDD to compute the distance between time series bypasses the attribute-extraction step and makes the application of SVDD independent of the underlying domain. In this work, we propose a new time-series outlier detection algorithm combining global alignment kernels and SVDD. Its outlier detection capabilities will be evaluated on synthetic data as well as on real-world data sets. Additionally, our approach's performance will be compared to state-of-the-art methods for outlier detection, especially with regard to the types of detected outliers.) (See the sketch after this list.)
  • Efficient Verification of Data-Value-Aware Process Models  + (Verification methods detect unexpected behavior of business process models before their execution. In many process models, verification depends on data values. A data value is a value in the domain of a data object, e.g., $1000 as the price of a product. However, verification of process models with data values often leads to state-space explosion. This problem becomes more serious when the domain of the data objects is large. Existing work tackling this problem often abstracts the domain of data objects; however, the abstraction may lead to a wrong diagnosis when process elements modify the value of data objects. In this thesis, we provide a novel approach to enable verification of process models with data values, so-called data-value-aware process models. A distinctive feature of our approach is that it supports modification of data values while preserving the verification results. We show the functionality of our approach by verifying a real-world application: the German 4G spectrum auction model.)
  • On the Interpretability of Anomaly Detection via Neural Networks  + (Verifying anomaly detection results in an unsupervised use case is challenging, since for large datasets manual labelling is economically infeasible. In this thesis we create explanations that help verify and understand the detected anomalies. We develop a rule generation algorithm that describes frequent patterns in the output of autoencoders. The number of rules is significantly lower than the number of anomalies, so finding explanations for these rules requires much less effort than finding explanations for every single anomaly. The approach is evaluated on a real-world use case, where it significantly reduces the effort required for domain experts to understand the detected anomalies; due to the missing labels, however, its usefulness cannot be quantified exactly. Therefore, we also evaluate the approach on a benchmark dataset.) (See the sketch after this list.)
  • A Mobility Case Study Framework for Validating Uncertainty Impact Analyses regarding Confidentiality  + (Confidentiality is an important security requirement for information systems. Already in the early design phase there are uncertainties, both about the system and about its environment, that can affect confidentiality. Approaches exist that support software architects in investigating uncertainties and their impact on confidentiality and thus reduce the effort involved. These approaches, however, have not yet been evaluated extensively, and a uniform procedure is important for such an evaluation in order to obtain consistent results. Although there is general work in this area, it is not specific enough to meet this requirement. In this thesis, we present a framework intended to close this gap. It consists of an investigation process and a case study protocol, which are meant to help researchers conduct further case studies for validating uncertainty impact analyses in a structured way and thereby also investigate uncertainties and their impact on confidentiality. We evaluate our approach by conducting a mobility case study.)
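Sketch for "Integrated Reliability Analysis of Business Processes and Information Systems": a minimal Python calculation that combines steady-state resource availability (MTTF / (MTTF + MTTR)) with action failure probabilities, assuming independence and strictly sequential control flow. All numbers and function names are hypothetical illustrations; this is not the IntBIIS/PCM analysis, which uses Markov chains and reduction rules.

    # Minimal sketch: steady-state availability and series composition of BP steps.
    # Illustrative only; not the analysis implemented in the thesis.

    def availability(mttf: float, mttr: float) -> float:
        """Steady-state availability of a resource."""
        return mttf / (mttf + mttr)

    def step_success(action_failure_prob: float, resource_availabilities: list) -> float:
        """Probability that one BP action succeeds, assuming independent resources."""
        p = 1.0 - action_failure_prob
        for a in resource_availabilities:
            p *= a
        return p

    def process_reliability(step_probs: list) -> float:
        """Reliability of a strictly sequential process (product of step probabilities)."""
        result = 1.0
        for s in step_probs:
            result *= s
        return result

    # Hypothetical example: two actions, using a device resource and a human role.
    device = availability(mttf=2000.0, mttr=8.0)   # hours, made-up values
    actor = availability(mttf=160.0, mttr=4.0)     # stand-in for MTTF/MTTRepl
    steps = [
        step_success(0.001, [device, actor]),
        step_success(0.002, [device]),
    ]
    print(f"predicted success probability: {process_reliability(steps):.4f}")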
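Sketch for "Deriving Twitter Based Time Series Data for Correlation Analysis": a minimal pandas example that aggregates a tweet-level signal into a monthly series and correlates it with a CPI series. All values, dates, and column names are placeholders, not data or models from the thesis.

    # Minimal sketch: monthly aggregation of a tweet signal and Pearson correlation with CPI.
    import pandas as pd

    tweets = pd.DataFrame({
        "timestamp": pd.to_datetime(["2022-01-03", "2022-01-17", "2022-02-05", "2022-03-19"]),
        "mentions_inflation": [1, 1, 1, 1],   # placeholder per-tweet signal
    })

    # Resample the tweet signal to month-start frequency.
    monthly_signal = tweets.set_index("timestamp")["mentions_inflation"].resample("MS").sum()

    # Placeholder CPI series on the same monthly index.
    cpi = pd.Series([5.1, 5.5, 7.6], index=pd.date_range("2022-01-01", periods=3, freq="MS"))

    # Pearson correlation of the two aligned monthly series.
    print(monthly_signal.corr(cpi))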
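Sketch for "Konzept eines Dokumentationsassistenten zur Erzeugung strukturierter Anforderungen basierend auf Satzschablonen": a first-order Markov chain for next-word suggestion, in the spirit of the auto-completion aid described there. The two-sentence corpus is a made-up placeholder; the thesis' actual model and training data are not reproduced.

    # Minimal sketch: next-word prediction with a first-order Markov chain (word bigrams).
    from collections import Counter, defaultdict

    def train(sentences):
        """Count word bigrams: transitions[w1][w2] = frequency of w2 following w1."""
        transitions = defaultdict(Counter)
        for sentence in sentences:
            words = sentence.lower().split()
            for w1, w2 in zip(words, words[1:]):
                transitions[w1][w2] += 1
        return transitions

    def suggest(transitions, word, k=3):
        """Return the k most frequent successors of `word`."""
        return [w for w, _ in transitions[word.lower()].most_common(k)]

    corpus = [   # hypothetical template-style requirements
        "The system shall provide the user with the ability to export reports",
        "The system shall provide the administrator with the ability to delete users",
    ]
    model = train(corpus)
    print(suggest(model, "shall"))   # -> ['provide']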
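Sketch for "Detecting Outlying Time-Series with Global Alignment Kernels": a simplified global-alignment kernel computed by dynamic programming, normalized and fed into scikit-learn's OneClassSVM with a precomputed kernel as a stand-in for SVDD. The local Gaussian kernel, the normalization, the synthetic data, and all hyperparameters are assumptions for illustration, not the thesis implementation.

    # Minimal sketch: global-alignment-style kernel + one-class SVM on precomputed kernel.
    import numpy as np
    from sklearn.svm import OneClassSVM

    def ga_kernel(x, y, sigma=1.0):
        """Sum over monotone alignments of products of local Gaussian kernels (DP recursion)."""
        n, m = len(x), len(y)
        M = np.zeros((n + 1, m + 1))
        M[0, 0] = 1.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                local = np.exp(-((x[i - 1] - y[j - 1]) ** 2) / (2.0 * sigma ** 2))
                M[i, j] = local * (M[i - 1, j - 1] + M[i - 1, j] + M[i, j - 1])
        return M[n, m]

    def gram(series_a, series_b):
        """Normalized kernel matrix K[i, j] = k(a_i, b_j) / sqrt(k(a_i, a_i) * k(b_j, b_j))."""
        raw = np.array([[ga_kernel(a, b) for b in series_b] for a in series_a])
        da = np.sqrt([ga_kernel(a, a) for a in series_a])
        db = np.sqrt([ga_kernel(b, b) for b in series_b])
        return raw / np.outer(da, db)

    rng = np.random.default_rng(0)
    train = [np.sin(np.linspace(0, 6, 30)) + 0.1 * rng.normal(size=30) for _ in range(12)]
    K_train = gram(train, train)
    model = OneClassSVM(kernel="precomputed", nu=0.1).fit(K_train)
    print(model.decision_function(K_train).round(3))   # low scores indicate outlying series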
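Sketch for "On the Interpretability of Anomaly Detection via Neural Networks": a toy summarization step that counts which (discretized) feature values the records flagged as anomalous share, so that experts inspect a few common patterns instead of every single anomaly. The records, feature names, and support threshold are made up; the thesis' rule generation over autoencoder outputs is more involved.

    # Minimal sketch: summarize flagged anomalies by their shared feature-value pairs.
    from collections import Counter

    def summarise(anomalies, min_support=0.5):
        """Count feature=value pairs and keep those occurring in at least min_support of the anomalies."""
        counts = Counter(pair for record in anomalies for pair in record.items())
        n = len(anomalies)
        return {pair: round(c / n, 2) for pair, c in counts.items() if c / n >= min_support}

    anomalies = [   # hypothetical records already flagged by high reconstruction error
        {"sensor": "S3", "shift": "night", "load": "high"},
        {"sensor": "S3", "shift": "night", "load": "low"},
        {"sensor": "S7", "shift": "night", "load": "high"},
    ]
    print(summarise(anomalies))   # patterns shared by at least half of the flagged records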