Attribut:Kurzfassung

From SDQ-Institutsseminar

This is an attribute of datatype Text.

Below, 50 pages are shown on which a data value has been stored for this attribute.
In machine learning, simpler, interpretable models require significantly more training data than complex, opaque models to achieve reliable results. This is a problem when gathering data is a challenging, expensive, or time-consuming task. Data synthesis is a useful approach for mitigating these problems. An essential aspect of tabular data is its heterogeneous structure: it often comes as "mixed data", i.e., it contains both categorical and numerical attributes. Most machine learning methods require the data to be purely numerical, and the usual way to deal with this is a categorical encoding. In this thesis, we evaluate a proposed tabular data synthesis pipeline consisting of a categorical encoding, followed by data synthesis and an optional relabeling of the synthetic data by a complex model. This synthetic data is then used to train a simple model, whose performance quantifies the quality of the generated data. We surveyed the current state of research in categorical encoding and tabular data synthesis and performed an extensive benchmark on a motivated selection of encoders and generators.
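As a rough illustration of the pipeline shape described above, the following sketch chains a categorical encoding, a pluggable generator, optional relabeling by a complex model, and the training of a simple model. It assumes scikit-learn; `generate` is a hypothetical placeholder for any tabular data generator, not the thesis implementation.

```python
# Minimal sketch of the evaluated pipeline shape (not the thesis code).
# Assumptions: scikit-learn is available; `generate` stands in for an
# arbitrary tabular data generator; the relabeling step is optional.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier   # "complex" relabeling model
from sklearn.linear_model import LogisticRegression   # "simple" model to train
from sklearn.metrics import accuracy_score

def evaluate_pipeline(X_train, y_train, X_test, y_test, cat_cols, generate):
    # 1) Categorical encoding: make mixed data purely numerical.
    enc = ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols)],
                            remainder="passthrough")
    Xn = enc.fit_transform(X_train)

    # 2) Data synthesis: `generate` is a placeholder for any generator.
    X_syn, y_syn = generate(Xn, y_train)

    # 3) Optional relabeling of the synthetic data by a complex model.
    complex_model = RandomForestClassifier().fit(Xn, y_train)
    y_syn = complex_model.predict(X_syn)

    # 4) Train the simple model on synthetic data; its test performance
    #    quantifies the quality of the generated data.
    simple_model = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
    return accuracy_score(y_test, simple_model.predict(enc.transform(X_test)))
```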
Traceability information between source code and requirements enables tools to better support programmers in navigating and editing source code. To establish such links automatically, the semantics of the requirements and of the source code must be understood. In this thesis, a technique is developed for describing the shared semantics of groups of program elements. The technique is based on the statistical topic model LDA and produces a set of keywords as a description of this semantics. Natural-language content in the source code of the groups is analyzed and used to train the model. To compensate for uncertainty in the choice of LDA parameters and to improve the robustness of the keyword set, several LDA models are combined. The technique was evaluated in a user study, achieving an average recall of 0.73 and an average F1 score of 0.56.
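To illustrate the ensemble idea, here is a minimal gensim-based sketch that combines the top words of several LDA models trained with different random seeds. The thesis operates on natural-language content extracted from program elements; that content is abstracted here as pre-tokenized documents.

```python
# Sketch: combine several LDA models to obtain a more robust keyword set.
# Assumes gensim; `docs` would be tokenized natural-language content
# extracted from the program elements of one grouping.
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def robust_keywords(docs, n_models=5, topn=10):
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    votes = Counter()
    for seed in range(n_models):
        # Vary the random seed (and possibly num_topics/alpha) per model
        # to compensate for LDA's parameter sensitivity.
        lda = LdaModel(corpus, id2word=dictionary, num_topics=5, random_state=seed)
        for topic_id in range(lda.num_topics):
            for word, _weight in lda.show_topic(topic_id, topn=topn):
                votes[word] += 1
    # Keywords that many models agree on form the description.
    return [w for w, _ in votes.most_common(topn)]
```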
Understanding the intent of software requirements is essential for automatically generating traceability information. Functional requirements can serve different semantic functions, such as describing expected functionality or states of the system. Within the INDIRECT project, a tool is being developed that classifies the semantic function of the sentences in requirements descriptions. To this end, several machine learning methods (support vector machine, logistic regression, random forest, and naïve Bayes) are examined for their suitability for this task. To evaluate them, the methods are tested on a dataset of freely available requirements descriptions that was manually labeled with semantic functions. The results show that the random forest classifier using character-level n-grams performs best on unseen projects, with an F1 score of 0.79. The learning methods are additionally tested via cross-validation on all available data. Here, the support vector machine achieves the best results with an F1 score of 0.90, while the random forest classifier reaches an F1 score of 0.89.
Natural language contains actions that can be executed. Within a discourse, people often describe an action more than once. This does not always mean that the action should also be executed more than once. This bachelor's thesis investigates how to recognize whether a mention of an action refers to an action that has already been mentioned. A procedure is developed that determines whether several action mentions in spoken language refer to the same action identity; in this procedure, actions are compared pairwise. The procedure is implemented and evaluated as an agent for the PARSE framework. The tool achieves an F1 score of 0.8 when the actions are recognized correctly and information about coreference between entities is available.
The presentation is based on my internship report on my six-week mandatory internship at the start-up Morotai. My main task consisted of small programming assignments with HTML, CSS, and JavaScript to improve the functionality of the website. I will briefly cover the company itself, my assignment within the department, how I carried it out, and the concrete relation of the internship to my degree program (Informationswirtschaft).
Measuring the quality of data-flow low-code programs, and creating high-quality programs in the first place, is hard. Many programs show signs of poor quality: they deliver results but are hard to maintain and difficult to understand. In this thesis, the transferability of classical code metrics and graph metrics was examined and carried out in order to evaluate which metrics are suitable for measuring the quality of low-code programs.
Renewable energy sources such as photovoltaic systems give private households a way to produce their own electricity, saving money and protecting the environment. Such systems are also used jointly in larger apartment blocks with many residential units. The desire to optimize their use suggests applying demand-side management strategies. Specifically, load shifting of individual household appliances is considered in order to make better use of solar energy. This thesis evaluates various such load shifts and their local and global effects on the households. To this end, several models for variable electricity pricing, household simulation, and load-shifting execution are designed and applied in a purpose-built simulator. The goal is to capture the effects on the households in selected evaluation metrics through a series of experiments. It turns out that load shifting offers moderate savings potential for private photovoltaic users, but that the optimization remains a specific problem both locally and globally.
Modern processors gain performance by adding more cores, so software developers have to parallelize their programs. Factors that can influence the performance of parallel program execution have already been categorized, but the influence of the chosen parallelization strategy remains unknown. This bachelor's thesis investigates the influence of the chosen parallelization strategy on software performance. Different hardware demands were used to generate individual work packages, which were then executed with different parallelization strategies: Java Threads, Java ParallelStreams, OpenMP, and Akka Actors. For each execution, runtime and cache behavior were measured. The experiments were run on several dedicated servers and on the bwUniCluster. The evaluation used speedup curves and the cache miss rate. The results show that the parallelization strategies differ only slightly for the work packages used.
Semantic similarity estimation is a widely used and well-researched area. Current state-of-the-art approaches estimate text similarity with large language models. However, semantic similarity estimation often ignores fine-grained differences between semantically similar sentences. This thesis proposes the concept of semantic dimensions to represent fine-grained differences between two sentences. A workshop with domain experts identified ten semantic dimensions. From the workshop insights, a model for semantic dimensions was created. Afterward, 60 participants decided via a survey which semantic dimensions are useful to users. Detectors for the five most useful semantic dimensions were implemented in an extendable framework. To evaluate the semantic dimension detectors, a dataset of 200 sentence pairs was created. The detectors reached an average F1 score of 0.815.
This thesis defines a blueprint describing a successful ad-hoc deployment with generally applicable rules, thus providing a basis for further developments. The blueprint itself is based on the experience of developing a Continuous Deployment system, the subsequent tests, and the continuous user feedback. In order to evaluate the blueprint, the blueprint-based dynamic system was compared with the previously static deployment, and a user survey was conducted. The result of the study shows that the rules described in the blueprint have far-reaching consequences and generate additional value for the users during deployment.
Algorithms that extract dependencies from data and represent them as causal graphs must also be tested. Such tests require data with a known ground truth, which is rarely available, and generating data under controlled conditions through simulations is expensive and time-consuming. A solution to this problem is to create synthetic datasets with predefined dependencies to evaluate the results of these algorithms. This work focuses on building a framework for such data synthesis. In the framework, the synthesis process begins with generating a random dependency graph, specifically a directed acyclic graph. Each node in the graph represents a variable and, except for the source nodes, has parent nodes. In the next step, each node is populated with predefined random dependencies. A dependency is a model that determines the value of a variable based on its parent variables. From this structure, datasets can be sampled. Users can control the properties of the causal graph through various parameters and choose from multiple types of dependencies, representing different complexity levels. Additionally, the sampling process allows for interactivity by enabling the exchange of dependencies during sampling; dependencies can be exchanged with fixed values, probability distributions, or time series functions. This flexibility provides a robust tool for improving and comparing the mentioned algorithms under various conditions.
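A minimal sketch of this synthesis process, assuming only numpy: an upper-triangular adjacency matrix yields a random DAG, and each non-source node is filled with an illustrative linear dependency plus noise. The framework itself offers several dependency types and interactive exchange, which are omitted here.

```python
# Sketch of the synthesis idea: random DAG -> dependency per node -> sampling.
# The linear-plus-noise dependency is illustrative, not the only option.
import numpy as np

rng = np.random.default_rng(0)

def random_dag(n_nodes, p_edge=0.3):
    # An upper-triangular adjacency matrix guarantees acyclicity.
    adj = rng.random((n_nodes, n_nodes)) < p_edge
    return np.triu(adj, k=1)

def sample_dataset(adj, n_samples):
    n = adj.shape[0]
    data = np.zeros((n_samples, n))
    for node in range(n):                     # topological order by construction
        parents = np.flatnonzero(adj[:, node])
        if parents.size == 0:                 # source node: pure noise
            data[:, node] = rng.normal(size=n_samples)
        else:                                 # linear dependency + noise
            weights = rng.normal(size=parents.size)
            data[:, node] = data[:, parents] @ weights + 0.1 * rng.normal(size=n_samples)
    return data

X = sample_dataset(random_dag(6), n_samples=1000)
```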
Particle colliders are a primary method of conducting experiments in particle physics, as they allow researchers both to create short-lived, high-energy particles and to observe their properties. The world's largest particle collider, the Large Hadron Collider (subsequently referred to as LHC), is operated by the European Organization for Nuclear Research (CERN) near Geneva. The operation of this kind of accelerator requires the storage and computationally intensive analysis of large amounts of data. The Worldwide LHC Computing Grid (WLCG), a global computing grid, is run alongside the LHC to serve this purpose. This bachelor's thesis aims to support the creation of an architecture model and simulation for parts of the WLCG infrastructure, with the goal of being able to accurately simulate and predict changes in the infrastructure, such as the replacement of the load-balancing strategies used to distribute the workload between available nodes.
Dependency estimation is a crucial task in data analysis and finds applications in, e.g., data understanding, feature selection, and clustering. This thesis focuses on Canonical Dependency Analysis, i.e., the task of estimating the dependency between two random vectors, each consisting of an arbitrary number of random variables. This task is particularly difficult when (1) the dimensionality of those vectors is high, and (2) the dependency is non-linear. We propose Canonical Monte Carlo Dependency Estimation (cMCDE), an extension of Monte Carlo Dependency Estimation (MCDE, Fouché 2019), to solve this task. Using Monte Carlo simulations, cMCDE estimates dependency based on the average discrepancy between empirical conditional distributions. We show that cMCDE inherits the useful properties of MCDE and compare it to existing competitors. We also propose and apply a method to leverage cMCDE for selecting features from very high-dimensional feature spaces, demonstrating cMCDE's practical relevance.
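The following simplified sketch conveys the Monte Carlo contrast behind this family of estimators: repeatedly condition on a random slice of one vector and measure the discrepancy between the conditional and marginal distribution on the other side. It illustrates the principle only and is not Fouché's exact estimator.

```python
# Simplified sketch of the Monte Carlo contrast behind (c)MCDE:
# slice on one side, measure the discrepancy between the conditional and
# marginal distribution on the other side (here via the KS statistic).
import numpy as np
from scipy.stats import ks_2samp

def cmcde_contrast(A, B, n_iter=200, slice_frac=0.5, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n = A.shape[0]
    width = int(n * slice_frac)
    scores = []
    for _ in range(n_iter):
        # Random conditioning slice over a random variable of vector A.
        j = rng.integers(A.shape[1])
        order = np.argsort(A[:, j])
        start = rng.integers(0, n - width + 1)
        in_slice = order[start:start + width]
        # Discrepancy of a random B-variable: conditional vs. marginal.
        k = rng.integers(B.shape[1])
        scores.append(ks_2samp(B[in_slice, k], B[:, k]).statistic)
    return float(np.mean(scores))  # near 0: independence; larger: dependency
```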
Over time, software development has shifted from building complete systems to building software components that can be integrated into other applications. Such software components are services that extend another application, where the application itself is developed by third parties. This bachelor's thesis examines the problems that occur when integrating services. In a survey, the development team at LogMeIn that is responsible for service integration is interviewed. From their experience, problems are identified and solutions are developed for them. The problems and solutions are worked out and illustrated with a running example, the GoToMeeting add-on for Google Calendar. For the evaluation, a case study is conducted in which a GoToMeeting integration for Slack is developed. Not all of the identified problems occur during this development, but the problems that do occur can be solved with the developed solutions. In addition, a new problem appears, for which a new solution is developed. The problem and its solution are then added to the existing set of problems and solutions. Adding the newly found problem is a perfect example of how the set can be extended in the future as new problems arise.
The data collected in many real-world scenarios such as environmental analysis, manufacturing, and e-commerce are high-dimensional and come as a stream, i.e., data properties evolve over time – a phenomenon known as "concept drift". This brings numerous challenges: data-driven models become outdated, and one is typically interested in detecting specific events, e.g., the critical wear and tear of industrial machines. Hence, it is crucial to detect change, i.e., concept drift, to design a reliable and adaptive predictive system for streaming data. However, existing techniques can only detect "when" a drift occurs and neglect the fact that various drifts may occur in different dimensions, i.e., they do not detect "where" a drift occurs. This is particularly problematic when data streams are high-dimensional. The goal of this master's thesis is to develop and evaluate a framework to efficiently and effectively detect "when" and "where" concept drift occurs in high-dimensional data streams. We introduce stream autoencoder windowing (SAW), an approach based on the online training of an autoencoder while monitoring its reconstruction error via a sliding window of adaptive size. We evaluate the performance of our method on synthetic data, in which the characteristics of drifts are known. We then show how our method improves the accuracy of existing classifiers for predictive systems compared to benchmarks on real data streams.
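A toy version of the SAW idea might look as follows: a numpy-only tied-weight linear autoencoder is trained online, the summed reconstruction error is monitored over a sliding window (fixed here, adaptive in the thesis), and the per-feature errors carry the "where" information.

```python
# Sketch of the SAW idea: online autoencoder training plus sliding-window
# monitoring of the reconstruction error. Minimal linear autoencoder; the
# thesis uses an adaptive window size, which is simplified here.
import numpy as np
from collections import deque

class SAWSketch:
    def __init__(self, dim, hidden=4, lr=0.01, window=200, threshold=2.0):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(dim, hidden))
        self.lr, self.threshold = lr, threshold
        self.errors = deque(maxlen=window)

    def update(self, x):
        h = self.W.T @ x                      # encode
        x_hat = self.W @ h                    # decode (tied weights)
        err = x - x_hat
        self.W += self.lr * np.outer(err, h)  # approximate SGD step (decoder path)
        feature_error = err ** 2              # the "where" information
        self.errors.append(feature_error.sum())
        # Drift check: recent mean error vs. window statistics.
        e = np.array(self.errors)
        drift = e.size > 20 and e[-10:].mean() > e.mean() + self.threshold * e.std()
        return drift, feature_error
```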
Data streams are ubiquitous in modern applications such as predictive maintenance or quality control. Data streams can change in unpredictable ways, challenging existing supervised learning algorithms that assume a stationary relationship between input data and labels. Supervised learning algorithms for data streams must therefore "adapt" to changing data distributions. Active learning (AL), a sub-field of supervised learning, aims to reduce the total cost of labeling by identifying the most valuable data points for training. However, existing stream-based AL methods have difficulty adapting to changes in data streams, as they rely mainly on the sparsely labeled data and ignore the regionality of changes, resulting in slow change adaptation. To address these issues, this thesis presents an active learning framework for data streams that adapts to regional changes in the underlying data stream. Our idea is to enrich hierarchical data stream clustering with labeling statistics to measure the regionality and relevance of changes. Using such information in stream-based active learning leads to more effective labeling, resulting in faster change adaptation.
The Palladio Component Model (PCM) enables modeling and simulating the quality properties of a system consisting of component-based software and the hardware chosen for its execution. If parts of the system are already available, they can be integrated into the co-simulation of workload, software, and hardware in order to open up further application areas for the PCM or to improve its use in existing ones. The contributions of this thesis are the elaboration of six different approaches for adapting the PCM to different application areas and their classification based on evaluation criteria. For the most promising of these approaches, a detailed concept was developed and implemented as a prototype. This approach, parameterizing a PCM model by means of a fine-grained hardware simulation, is evaluated in the form of the prototype with respect to its feasibility, extensibility, and completeness. The evaluation of the prototype is based, among other things, on the criteria of usability, accuracy, and performance, considered relative to the PCM. The prototype makes it possible to run a hardware simulation with parameters specified in the PCM, to extract the performance measures obtained, and to use them directly in a PCM simulation.
In data analysis, entity matching (EM), or entity resolution, is the task of finding the same entity within different data sources. It is a required step when joining different data sets in which the same entities may not share a common identifier. When applied to graph data like knowledge graphs, ontologies, or abstractions of physical systems, the additional challenge of entity relationships comes into play: not just the entities themselves but also their relationships and, therefore, their neighborhoods need to match. These relationships can also be used to our advantage, which builds the foundation for collective entity matching (CEM). In this bachelor's thesis, we focus on a graph data set based on a material simulation, with the intent to match entities between neighboring system states. The goal is to identify structures that evolve over time and link their states with a common identifier. Current CEM algorithms assume that perfect matches are possible, i.e., that every entity can be matched. We want to overcome this limitation and address the high imbalance between potential candidates and impossible matches. A third major challenge is the large volume of data, which requires our algorithm to be efficient.
Many teams from different disciplines are involved in the development of complex systems. For example, developing a safety-critical system architecture involves at least a system architect and a safety expert. The former's task is to develop a system architecture that fulfills all functional and non-functional requirements. The safety expert analyzes this architecture and thereby contributes to the evidence that the system fulfills the required safety requirements; safety here means that the system poses no danger to its users or the environment. To achieve their goals, both the system architect and the safety expert follow their own process models. Due to missing interaction points, both processes must be carried out independently and without coordination. This can lead to inconsistencies between architecture and safety artifacts and cause additional effort, which in turn negatively affects development time and quality. In this thesis, we combine two selected process models into a new, single process model. The combination is based on the identified information flow within and between the two original process models. It adopts the advantages of both approaches and addresses the problems mentioned above. The two selected process models are IBM's Harmony approach and the ISO 26262 standard. The former allows a system architecture to be developed systematically and model-based with SysML, while the ISO standard supports the safety expert in their work on functional safety in road vehicles. The evaluation of our approach shows its applicability in a real case study. Furthermore, its advantages regarding consistency between architecture and safety artifacts and regarding execution time are discussed, based on a comparison with similar approaches.
Architecture-level performance models, for instance the PCM, allow performance predictions to evaluate and compare design alternatives. However, software architectures drift over time, so initially created performance models quickly become outdated due to the high manual effort required to keep them up to date. To close the gap between development and up-to-date performance models, the Continuous Integration of Performance Models (CIPM) approach has been proposed. It incorporates automatically executed activities into a Continuous Integration pipeline and is realized with Vitruvius, combining Java and the PCM. Changes from a commit are extracted to incrementally update the models in the VSUM. To estimate the resource demand in the PCM, the CIPM approach adaptively instruments and monitors the source code. In previous work, parts of the CIPM pipeline were prototypically implemented and partly evaluated with artificial projects; a pipeline combining the incremental model update and the adaptive instrumentation was missing. Therefore, this thesis presents the combined pipeline, adapting and extending the existing implementations. The evaluation is performed with the TeaStore and indicates correct model updates and instrumentation. Nevertheless, a gap towards the calibration pipeline remains.
Evolving technology demands cross-organizational collaboration to handle the complex tasks involved in developing complex systems such as web services, IoT, and heterogeneous systems. Collaboration thus has the potential to reduce the high demand for resources and quality and to benefit from the collaborators' complementary resources. Nevertheless, exchanging versions of the developed interfaces must be possible during the collaboration. During this exchange, the protection of the collaborating organizations' intellectual property and of the versions' metadata must be taken into account in order to protect the organizations' competence and design decisions. Some existing approaches integrate a server to facilitate access control and authorization and to offer fine-grained encryption of the shared assets and protection of intellectual property. Other approaches integrate encryption mechanisms into existing version control systems, e.g., Git. We, however, use no server for access control and protect the intellectual property and, additionally, the metadata. This thesis focuses on the confidentiality and integrity of model versions and their metadata. We define the deltachain, which we motivate from the blockchain. Each element of a deltachain is linked to a predecessor, forming a chain. The elements of the deltachain consist of encrypted model changes with the corresponding metadata to ensure confidentiality. In addition, each element of the deltachain encapsulates the hash value of the model change and the metadata to ensure integrity. We use the Advanced Encryption Standard for encrypting and SHA for hashing the versions. To detect model changes between two versions of a model, we use the Vitruv tools. For defining the metamodel and implementing the deltachain, we use the Eclipse Modeling Framework. We evaluate our approach with respect to functionality, to confirm the expected behavior of the deltachain; scalability, to examine the impact of a growing number of versions and models on the deltachain; and performance, to compare the deltachain with EMFCompare. For the scalability and performance evaluation classes, we use three open-source modeling projects found on GitHub.
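A minimal sketch of a deltachain element under the primitives named above (AES-GCM for confidentiality, SHA-256 for integrity, hash-linking to the predecessor); the actual implementation builds on EMF and the Vitruv tools, which are omitted here.

```python
# Sketch of a deltachain element: the encrypted model delta plus metadata,
# hash-linked to its predecessor. Illustrative only, not the thesis code.
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_element(prev_hash: bytes, delta: bytes, metadata: bytes, key: bytes):
    nonce = os.urandom(12)
    # Confidentiality: encrypt delta and metadata, binding the predecessor
    # hash as associated data so the link cannot be swapped unnoticed.
    ciphertext = AESGCM(key).encrypt(nonce, delta + b"|" + metadata, prev_hash)
    # Integrity: each element encapsulates the hash of change and metadata.
    element_hash = hashlib.sha256(prev_hash + delta + metadata).digest()
    return {"prev": prev_hash, "nonce": nonce,
            "ciphertext": ciphertext, "hash": element_hash}

key = AESGCM.generate_key(bit_length=256)
genesis = hashlib.sha256(b"genesis").digest()
e1 = make_element(genesis, b"<model delta 1>", b"author=A;v=1", key)
e2 = make_element(e1["hash"], b"<model delta 2>", b"author=B;v=2", key)
```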
The passing of new regulations like the European GDPR has made clear that in the future it will be necessary to build privacy-preserving systems to protect the personal data of their users. This thesis introduces the concept of privacy templates to help software designers and architects in this matter. Privacy templates are at their core similar to design patterns and provide reusable and general architectural structures which can be used to improve privacy in early stages of system design. In this thesis we conceptualize a small collection of privacy templates to make it easier to design privacy-preserving software systems. Furthermore, the privacy templates are categorized and evaluated to classify them and assess their quality across different quality dimensions.
In the age of cloud computing and big data, software telemetry data exists in abundance. The sheer volume of data and data platforms, however, can make it hard to handle. This master's thesis presents a runtime model that makes it possible to perform measurements of telemetry data on different data platforms. The model follows the full lifecycle of a measurement, from its definition in a domain-specific language developed for this purpose to the visualization of the resulting measured values. The model was implemented and tested at the software-as-a-service company LogMeIn, where a user study within the company evaluated the acceptance of the implemented service among its presumed target group.
While large language models have succeeded at generating code, they struggle to modify large existing code bases. The Generated Code Alteration (GCA) process is designed, implemented, and evaluated in this thesis. Given a natural-language task, the GCA process can automatically modify a large existing code base. Different variations and instantiations of the process are evaluated in an industrial case study, and the code generated by the GCA process is compared to code written by human developers. The language-model-based GCA process was able to generate 13.3 lines per error, while the human baseline generated 65.8 lines per error. While the generated code did not match the overall human performance in modifying large code bases, it could still assist human developers.
In Industry 4.0 environments, highly dynamic and flexible access control strategies are needed. State-of-the-art strategies are often not included in the modelling process but must be considered afterwards, which makes it very difficult to analyse the security properties of a system. Within the Trust 4.0 project, the confidentiality analysis tries to solve this problem using a context-based approach built around a security model named the "context metamodel". Another important problem is that transforming an instance of a security model into a widespread access control standard is often not possible; this is also the case for the context metamodel. A further interesting transformation to consider is one to an ensemble-based component system, which is also presented in the Trust 4.0 project. This thesis introduces an extension to the aforementioned context metamodel in order to make it more extensible. Furthermore, the thesis deals with the creation of a concept and an implementation of the transformations mentioned above. For that purpose, the transformation to the attribute-based access control standard XACML is considered first; thereafter, the transformation from XACML to an ensemble-based component system is covered. The evaluation indicated that the model can be used for use cases in Industry 4.0 scenarios and that the transformations produce adequately accurate access policies. Furthermore, the scalability evaluation indicated linear runtime behaviour of the implementations of both transformations for increasing numbers of input contexts and XACML rules, respectively.
Architecture-level performance models of software, like the PCM, can aid the development of software by preventing architecture degradation and helping to diagnose performance issues during the implementation phase. Previously, manual intervention was required to create and update such models. The CIPM approach can be employed to automatically make a calibrated PCM instance available during the development of software. A prototypical implementation of the CIPM approach targets microservice-based web applications implemented in Java. No implementations for other programming languages exist, and the process of adapting the CIPM approach to support another programming language had not previously been explored. We present an approach for adapting CIPM to support Lua-based sensor applications. A prototypical implementation of the adapted approach was evaluated using real-world Lua-based sensor applications from the SICK AppSpace ecosystem. The evaluation demonstrates the feasibility of the adapted approach, but also reveals minor technical issues with the implementation.
In software engineering, software architecture documentation plays an important role. It contains much essential information regarding reasoning and design decisions. Therefore, many activities are proposed to deal with documentation for various reasons, e.g., extracting information or keeping different forms of documentation consistent. These activities often involve automatic processing of documentation, for example, traceability link recovery (TLR). However, automatic processing can run into problems when coreferences are present in documentation. A coreference occurs when two or more mentions refer to the same entity. These mentions can differ and create ambiguities, for example when there are pronouns. To overcome this problem, this thesis proposes two contributions to resolve coreferences in software architecture documentation. The first contribution is to explore the performance of existing coreference resolution models on software architecture documentation. The second is to divide coreference resolution into more specific types of resolution, such as pronoun resolution and abbreviation resolution.
To accurately evaluate the loss of cognitive ML models, e.g., text or image classifiers, one usually needs a lot of test data annotated manually by experts. For an accurate estimate, the test data should be representative; otherwise it is hard to assess whether a model overfits, i.e., relies significantly on spurious features of the images to decide on its predictions. With techniques such as feature attribution, one can compare the features the model deems important with one's own expectations and can therefore be more confident about whether to trust the model. In this work, we propose a method that estimates the loss of image classifiers based on feature-attribution techniques. We use the classic approach to loss estimation as our benchmark to evaluate the proposed method. Our analysis reveals that the proposed method yields a loss estimate similar to that of the classic approach given a good image classifier and representative test data. Based on our experiments, we expect that the proposed method could give a better loss estimate than the classic approach in cases with biased test data and an image classifier that overfits.
Conventional evaluation of an ML classifier uses test data to estimate its expected loss. For "cognitive" ML tasks like image or text classification, this requires that experts annotate a large and representative test data set, which can be expensive. In this thesis, we explore another approach for estimating the expected loss of an ML classifier. The aim is to enhance test data with additional expert knowledge. Inspired by recent feature attribution techniques, such as LIME or Saliency Maps, the idea is that experts annotate inputs not only with desired classes, but also with desired feature attributions. We then explore different methods to derive a large conventional test data set based on few such feature attribution annotations. We empirically evaluate the loss estimates of our approach against ground-truth estimates on large existing test data sets, with a focus on the tradeoff between the number of expert annotations and the achieved estimation accuracy.
Students are confronted with a huge amount of regulations when planning their studies at a university. It is challenging for them to create a personalized study plan while still complying with all official rules. The STUDYplan software aims to overcome these difficulties by enabling an intuitive and individual modeling of study plans. A study plan can be interpreted as a sequence of business process tasks representing courses, which makes it possible to build on existing work in the business process domain. This thesis focuses on the idea of synthesizing business process models from declarative specifications that capture official and user-defined regulations for a study plan. We provide an elaborated approach for the modeling of study plan constraints and a generation concept specialized to study plans. This work motivates, discusses, partially implements, and evaluates the proposed approach.
The prediction of material failure is useful in many industrial contexts such as predictive maintenance, where it helps reduce costs by preventing outages. However, failure prediction is a complex task. Typically, material scientists need to create a physical material model to run computer simulations. In real-world scenarios, the creation of such models is often not feasible, as the measurement of exact material parameters is too expensive. Material scientists can use material models to generate simulation data; these data sets are multivariate sensor-value time series. In this thesis we develop data-driven models to predict the upcoming failure of an observed material. We identify and implement recurrent neural network architectures, as recent research indicated that these are well suited for predictions on time series. We compare their prediction performance with traditional models that do not predict directly on time series but involve an additional feature-calculation step. Finally, we analyze the predictions to find abstractions in the underlying material model that lead to unrealistic simulation data and thus impede accurate failure prediction. Knowing such abstractions empowers material scientists to refine the simulation models; the updated models would then contain more relevant information and make failure prediction more precise.
Data flow has become more and more important for business processes over the last few years. Nevertheless, data in workflows is often treated as a second-class object and is not sufficiently supported. In many domains, such as the energy market, the importance of compliance requirements stemming from legal regulations or specific standards has dramatically increased over the past few years. To be broadly applicable, compliance verification has to support data-aware compliance rules as well as consider data conditions within a process model. In this thesis we model the data flow of data objects for a scenario in the energy market domain. For this purpose we use a scientific workflow management system, namely Apache Taverna. The theoretical starting point for this thesis is a verification approach by the supervisors of this thesis: it formalizes BPMN process models by mapping them to Petri nets and unfolding the execution semantics regarding data. We develop an algorithm for transforming Taverna workflows to BPMN 2.0 and then ensure the correctness of the data flow of the process model. To this end, we analyse which compliance rules are relevant for the data objects and how to specify them using anti-patterns.
Static Code Analysis (SCA) has become an integral part of modern software development, especially since the rise of automation in the form of CI/CD. It is an ongoing question how machine learning can best help improve the state of SCA and thus facilitate maintainable, correct, and secure software. However, machine learning needs a solid foundation to learn from. This thesis proposes an approach to build that foundation by mining data on software issues from real-world code. We show how we used that concept to analyze over 4000 software packages and generate over two million issue samples. Additionally, we propose a method for refining this data and apply it to an existing machine learning SCA approach.
A group of people with different personal preferences wants to find a solution to a problem with high variability. Making decisions in a group comes with problems, as a lack of communication leads to worse decision outcomes, and group dynamics and biases can lead to suboptimal decisions. Group decisions are generally complex, and the process that yields the decision is often unstructured, so its success is not reproducible. Groups have different power structures, and individuals usually have different interests. Moreover, finding solutions is a rather complex task, and group decisions can suffer from a lack of transparency. To support groups in their decision making, product configuration can be used: it accurately captures the constraints and dependencies of complex problems and maps the solution space. A group recommender then supports the group in its configuration decisions. The goal is to show that these approaches can help a group with the configuration task presented by the use of a configurator and can process individual preferences better than a human could. One benefit of this approach is that the group's need for direct communication is reduced: each user states their own preferences, and the group receives a recommendation based on them. This mitigates problems arising in group decisions, such as lack of communication and group bias. Additionally, it demonstrates the viability of combining group recommendation and configuration approaches.
Consistency preservation between two metamodels can be achieved by defining a model transformation that repairs inconsistencies; in that case, a consistency relation exists between the metamodels. When there are multiple interrelated metamodels, consistency relations form a network. In multi-model consistency preservation, we are interested in methods to preserve consistency in a network of consistency relations. However, combinations of binary transformations can lead to specific interoperability issues. The subject of this thesis is the decomposition of relations, an optimization technique for consistency relation networks. We design a decomposition procedure to detect independent and redundant subsets of consistency relations. The procedure aims to help developers find incompatibilities in consistency relation networks.
Development processes for complex, software-intensive systems are nowadays characterized by cross-organizational collaboration. Organizations share various artifacts with one another and build on them to contribute to the system's development. Changes to such shared artifacts are mainly synchronized in regular but infrequent meetings. Moreover, the artifacts generally contain intellectual property that must be protected, including from collaborating organizations. We design a reference architecture that enables a constant flow of data in the cross-organizational exchange of artifacts while protecting intellectual property.
Outlier detection algorithms are widely used in application fields such as image processing and fraud detection. Thus, during the past years, many different outlier detection algorithms have been developed. While a lot of work has been put into comparing the efficiency of these algorithms, comparing methods in terms of effectiveness is rather difficult, one reason being the lack of commonly agreed-upon benchmark data. In this thesis, the effectiveness of density-based outlier detection algorithms (such as kNN, LOF, and related methods) is compared on entirely synthetically generated data, using the underlying density as ground truth.
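A compact sketch of this evaluation scheme, assuming scikit-learn and scipy: sample data whose density is (approximately) known, derive ground-truth outliers from a density quantile, and score a detector's ranking against it.

```python
# Sketch: evaluate a density-based detector (here LOF) against a density
# ground truth. The inlier density is used as an approximate ground truth
# for the whole sample; parameters are illustrative.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(950, 2)),      # dense inlier cluster
               rng.uniform(-6, 6, size=(50, 2))])    # sparse scattered points
true_density = multivariate_normal(mean=[0, 0]).pdf(X)
is_outlier = true_density < np.quantile(true_density, 0.05)  # ground truth

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
scores = -lof.negative_outlier_factor_   # larger = more outlying
print("ROC AUC vs. density ground truth:", roc_auc_score(is_outlier, scores))
```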
Outlier detection is a popular topic in research, with a number of different approaches developed. Evaluating the effectiveness of these approaches, however, is a rarely touched field. The lack of commonly accepted benchmark data is most likely one of the obstacles to running a fair comparison of unsupervised outlier detection algorithms. This thesis compares the effectiveness of twelve density-based outlier detection algorithms in nearly 800,000 experiments over a broad range of algorithm parameters, using the probability density as ground truth.
In view-based software development, views may share concepts and thus contain redundant or dependent information. Keeping the individual views synchronized is a crucial property to avoid inconsistencies in the system. In approaches based on a Single Underlying Model (SUM), inconsistencies are avoided by establishing the SUM as a single source of truth from which views are projected. To synchronize updates from views to the SUM, delta-based consistency preservation is commonly applied. This requires the views to provide fine-grained change sequences which are used to incrementally update the SUM. However, the functionality of providing these change sequences is rarely found in real-world applications. Instead, only state-based differences are persisted. Therefore, it is desirable to also support views which provide state-based differences in delta-based consistency preservation. This can be achieved by estimating the fine-grained change sequences from the state-based differences. This thesis evaluates the quality of estimated change sequences in the context of model consistency preservation. To derive such sequences, matching elements across the compared models need to be identified and their differences need to be computed. We evaluate a sequence derivation strategy that matches elements based on their unique identifier and one that establishes a similarity metric between elements based on the elements’ features. As an evaluation baseline, different test suites are created. Each test consists of an initial and changed version of both a UML class diagram and consistent Java source code. Using the different strategies, we derive and propagate change sequences based on the state-based difference of the UML view and evaluate the outcome in both domains. The results show that the identity-based matching strategy is able to derive the correct change sequence in almost all (97 %) of the considered cases. For the similarity-based matching strategy we identify two reoccurring error patterns across different test suites. To address these patterns, we provide an extended similarity-based matching strategy that is able to reduce the occurrence frequency of the error patterns while introducing almost no performance overhead.  
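The identity-based matching strategy can be conveyed with a small sketch: elements are matched by their unique id, and a create/delete/set sequence is derived from the differing features. The element representation below is a simplified placeholder, not the actual model API.

```python
# Sketch of identity-based matching: derive a fine-grained change sequence
# from two model states whose elements carry unique ids.
def derive_changes(old: dict, new: dict):
    """old/new map element-id -> {feature: value}."""
    changes = []
    for eid in old.keys() - new.keys():          # present before, gone now
        changes.append(("delete", eid))
    for eid in new.keys() - old.keys():          # new element
        changes.append(("create", eid, new[eid]))
    for eid in old.keys() & new.keys():          # matched by identity
        for feat in old[eid].keys() | new[eid].keys():
            if old[eid].get(feat) != new[eid].get(feat):
                changes.append(("set", eid, feat, new[eid].get(feat)))
    return changes

old = {"c1": {"name": "Order", "methods": ("ship",)}}
new = {"c1": {"name": "Order", "methods": ("ship", "cancel")},
       "c2": {"name": "Invoice"}}
print(derive_changes(old, new))   # one "set" on c1, one "create" for c2
```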
Twitter has been identified as a relevant data source for modelling purposes in the last decade. In this work, our goal was to model the conversational dynamics of inflation development in Germany through Twitter data mining. To accomplish this, we summarized and compared Twitter data mining techniques for time series data from pertinent research. Then, we constructed five models for generating time series from topic-related tweets and user profiles of the last 15 years. Evaluating the models, we observed that several approaches, like modelling user impact or adjusting for automated Twitter accounts, show promise. Yet, in the scenario of modelling inflation expectation dynamics, these more complex models could not achieve a higher correlation between the German CPI and the resulting time series compared to a baseline approach.
The specification of a software-intensive system comprises a multitude of artifacts. These artifacts are not independent of one another; rather, they represent the same elements of the system in different contexts and representations. In this thesis, a new approach for keeping these overlaps between artifacts consistent was examined in a case study. The idea is to model the commonalities of the artifacts explicitly and to propagate changes between artifacts via an intermediate model of these commonalities. The approach promises a better understanding of the dependencies between artifacts and solves several problems of previous consistency-preservation approaches. For the case study, a language for expressing the commonalities and their manifestations in the different artifacts was further developed. We were able to add some fundamental functionality to the language and thereby implement 64% of the consistency relations in our case study. For the remaining consistency relations, further adaptations of the language are required. Additional case studies are needed to evaluate the general applicability of the approach.
In the early stages of developing a software architecture, many properties of the final system are yet unknown or difficult to determine. There may be multiple viable architectures, but uncertainty about which of them performs best. Software architects can use design space exploration (DSE) to evaluate quality properties of architecture candidates to find the optimal solution, but this can be a resource-intensive process. An architecture candidate may feature certain properties which disqualify it from consideration as an optimal candidate, regardless of its quality metrics; an example would be confidentiality violations in data flows introduced by certain components or combinations of components in the architecture. If these properties can be identified early, quality evaluation can be skipped and the candidate discarded, saving resources. Currently, analyses for identifying such properties are performed separately from the design space exploration process: optimal candidates are determined first, and analyses are then applied to individual architecture candidates. Our approach augments the PerOpteryx design space exploration pipeline with an additional architecture candidate filter stage, which allows existing generic candidate analyses to be integrated into the DSE process. This enables automatic execution of analyses on architecture candidates during DSE and early discarding of unwanted candidates before quality evaluation takes place. We use our filter stage to perform data flow confidentiality analyses on architecture candidates and further provide a set of example analyses that can be used with the filter. We evaluate our approach by running PerOpteryx on case studies with our filter enabled. Our results indicate that the filter stage works as expected: it analyzes architecture candidates and skips quality evaluation for unwanted candidates.
This thesis develops an approach that unites automatic adaptation focused on performance optimization with an approach for operator integration. It uses automated design space exploration to optimize runtime architecture models of the application and combines this with a model-based approach to adaptation planning and execution that allows operator intervention during the execution of an adaptation.
Business Process Model and Notation (BPMN) is a standard language for specifying business process models. It helps organizations around the world to analyze, improve, and automate their processes. It is very important to make sure that those models are correct, as faulty models can do more harm than good. While many verification methods for BPMN concentrate only on control flow, the importance of correct data flow is often neglected. Additionally, the few approaches tackling this problem do so only on a surface level, ignoring important aspects such as data states. Because data objects with states can cause different types of errors than data objects without them, ignoring data states can lead to overlooking certain mistakes. This thesis addresses the problem of detecting data flow errors on the level of data states, while also taking optional data and alternative data into account. We propose a new transformation from BPMN models to Petri nets and specify suitable anti-patterns. Using a model checker, we are then able to automatically detect data flow errors regarding data states. In combination with existing approaches, which detect control flow errors or data flow errors on the level of data values, business process designers will be able to prove with higher certainty that their models are actually flawless.
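To give an idea of the underlying mechanics, the sketch below encodes data states as Petri net places and searches the reachability graph for one simple anti-pattern (a task that reads a data object in a state the object can never be in). This is a tiny illustration, not the transformation proposed in the thesis.

```python
# Sketch: data states as Petri-net places; detect a dead transition
# (a read on an unreachable data state) by exploring reachable markings.
# transitions: name -> (consumed places, produced places)
transitions = {
    "create_order":  ({"start"}, {"order:created"}),
    "approve_order": ({"order:created"}, {"order:approved"}),
    "ship_order":    ({"order:shipped"}, {"end"}),   # bug: 'shipped' is never produced
}

def reachable_markings(initial):
    seen, stack = set(), [frozenset(initial)]
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        for pre, post in transitions.values():
            if pre <= m:                      # transition enabled
                stack.append(frozenset((m - pre) | post))
    return seen

markings = reachable_markings({"start"})
for name, (pre, _post) in transitions.items():
    if not any(pre <= m for m in markings):
        print(f"anti-pattern: '{name}' can never fire (unreachable data state)")
```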
Using outlier detection algorithms, e.g., Support Vector Data Description (SVDD), for detecting outlying time series usually requires extracting domain-specific attributes. However, this indirect way needs expert knowledge, making SVDD impractical for many real-world use cases. Incorporating Global Alignment Kernels directly into SVDD to compute the distance between time series bypasses the attribute-extraction step and makes the application of SVDD independent of the underlying domain. In this work, we propose a new time series outlier detection algorithm combining Global Alignment Kernels and SVDD. Its outlier detection capabilities will be evaluated on synthetic data as well as on real-world data sets. Additionally, our approach's performance will be compared to state-of-the-art methods for outlier detection, especially with regard to the types of detected outliers.
Detecting outlying time series poses two challenges: First, labeled training data is rare, as it is costly and error-prone to obtain. Second, algorithms usually rely on distance metrics, which are not readily applicable to time series data. To address the first challenge, one usually employs unsupervised algorithms. To address the second challenge, existing algorithms employ a feature-extraction step and apply the distance metrics to the extracted features instead. However, feature extraction requires expert knowledge, rendering this approach also costly and time-consuming. In this thesis, we propose GAK-SVDD. We combine the well-known SVDD algorithm to detect outliers in an unsupervised fashion with Global Alignment Kernels (GAK), bypassing the feature-extraction step. We evaluate GAK-SVDD's performance on 28 standard benchmark data sets and show that it is on par with its closest competitors. Comparing GAK with a DTW-based kernel, GAK improves the median balanced accuracy by 4%. Additionally, we extend our method to the active learning setting and examine the combination of GAK and domain-independent attributes.
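The core combination can be sketched in a few lines, with scikit-learn's OneClassSVM standing in for SVDD (the two are closely related for normalized kernels) and tslearn providing the Global Alignment Kernel; the data and parameters are illustrative.

```python
# Sketch of GAK-SVDD: a Global Alignment Kernel between raw time series fed
# into a kernel one-class model, with no feature-extraction step.
import numpy as np
from tslearn.metrics import cdist_gak
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 100))          # 50 "normal" series, length 100
test = np.vstack([rng.normal(size=(5, 100)),
                  np.cumsum(rng.normal(size=(5, 100)), axis=1)])  # 5 outliers

K_train = cdist_gak(train, sigma=10.0)       # GAK similarity matrix
model = OneClassSVM(kernel="precomputed", nu=0.05).fit(K_train)
K_test = cdist_gak(test, train, sigma=10.0)
print(model.predict(K_test))                 # -1 flags suspected outliers
```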
This thesis focuses on the development of a database application that enables a comparative analysis between the Google Books Ngram Corpus (GBNC) and German news corpora. The GBNC provides a vast collection of books spanning various time periods, while the German news corpora encompass up-to-date linguistic data from news sources. Such a comparison aims to uncover insights into language usage patterns, linguistic evolution, and cultural shifts within the German language. Extracting meaningful insights from the compared corpora requires various linguistic metrics, statistical analyses, and visualization techniques. By identifying patterns, trends, and linguistic changes, we can uncover valuable information on how language usage evolves over time. This thesis provides a comprehensive framework for comparing the GBNC to other corpora, showcasing the development of a database application that not only enables valuable linguistic analyses but also sheds light on the composition of the GBNC by highlighting linguistic similarities and differences.
In the last decade, ample research has been produced regarding the value of user-generated data from microblogs as a basis for time series analysis in various fields. In this context, the objective of this thesis is to develop a domain-agnostic framework for mining microblog data (i.e., Twitter). Taking the subject-related postings of a time series topic (e.g., inflation) as its input, the framework will generate temporal data sets that can serve as a basis for time series analysis of the given target time series (e.g., the inflation rate). To accomplish this, we will analyze and summarize the prevalent research related to microblog-data-based forecasting and analysis, with a focus on the data processing and mining approach. Based on the findings, one or several candidate frameworks are developed and evaluated by testing the correlation of their generated data sets against the target time series they are generated for. While summative research on microblog-data-based correlation analysis exists, it is mainly focused on summarizing the state of the field. This thesis adds to the body of research by applying the summarized findings and generating experimental evidence regarding the generalizability of microblog data mining approaches and their effectiveness.
There are many data structures and indices that speed up kNN queries on time series. The existing indices are designed to work on the full time series only. In this thesis we develop a data structure that allows speeding up kNN queries in an arbitrary time range, i.e., for an arbitrary subsequence.
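For reference, the naive baseline such a data structure would accelerate is a full scan that restricts the distance computation to the query range; the names and the Euclidean-distance choice below are illustrative.

```python
# Naive range-restricted kNN over a time-series database: compare only the
# slice [lo, hi) of every series. An index would avoid this full scan.
import numpy as np

def knn_in_range(db, query, lo, hi, k=3):
    """db: (n_series, length) array; query: a single series of equal length."""
    window = db[:, lo:hi]
    dists = np.linalg.norm(window - query[lo:hi], axis=1)  # Euclidean on the slice
    return np.argsort(dists)[:k], np.sort(dists)[:k]

db = np.random.default_rng(0).normal(size=(1000, 365))
ids, dists = knn_in_range(db, db[42], lo=100, hi=160, k=3)  # series 42 is its own 1-NN
```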