Search by property

This page provides a simple search interface for finding objects that contain a property with a certain value. Other available search interfaces are the property search and the query builder.


A list of all pages that have the property "Kurzfassung" with the value "TBA". Because only a few results were found, similar values are also listed.

Here are 75 results, beginning with number 1.


List of results

    • Preventing Refactoring Attacks on Software Plagiarism Detection through Graph-Based Structural Normalization  + (TBD)
    • Generation of Checkpoints for Hardware Architecture Simulators  + (TBD)
    • Konzept und Integration eines Deltachain Prototyps  + (TBD)
    • Data-Driven Approaches to Predict Material Failure and Analyze Material Models  + (The prediction of material failure is useful in many industrial contexts such as predictive maintenance, where it helps reduce costs by preventing outages. However, failure prediction is a complex task. Typically, material scientists need to create a physical material model to run computer simulations. In real-world scenarios, the creation of such models is often not feasible, as the measurement of exact material parameters is too expensive. Material scientists can use material models to generate simulation data. These data sets are multivariate sensor value time series. In this thesis we develop data-driven models to predict upcoming failure of an observed material. We identify and implement recurrent neural network architectures, as recent research indicated that these are well suited for predictions on time series. We compare the prediction performance with traditional models that do not directly predict on time series but involve an additional step of feature calculation. Finally, we analyze the predictions to find abstractions in the underlying material model that lead to unrealistic simulation data and thus impede accurate failure prediction. Knowing such abstractions empowers material scientists to refine the simulation models. The updated models would then contain more relevant information and make failure prediction more precise.)
    • Improving SAP Document Information Extraction via Pretraining and Fine-Tuning  + (Techniques for extracting relevant information from documents have made significant progress in recent years and have become a key task in the digital transformation. With deep neural networks, it became possible to process documents without specifying hard-coded extraction rules or templates for each layout. However, such models typically have a very large number of parameters. As a result, they require many annotated samples and long training times. One solution is to create a basic pretrained model using self-supervised objectives and then to fine-tune it using a smaller document-specific annotated dataset. However, implementing and controlling the pretraining and fine-tuning procedures in a multi-modal setting is challenging. In this thesis, we propose a systematic method that consists of pretraining the model on large unlabeled data and then fine-tuning it with a virtual adversarial training procedure. For the pretraining stage, we implement an unsupervised informative masking method, which improves upon standard Masked-Language Modelling (MLM). In contrast to randomly masking tokens as in MLM, our method exploits Point-Wise Mutual Information (PMI) to calculate individual masking rates based on statistical properties of the data corpus, e.g., how often certain tokens appear together on a document page. We test our algorithm in a typical business context at SAP and report an overall improvement of 1.4% on the F1-score for extracted document entities. Additionally, we show that the implemented methods improve the training speed, robustness and data-efficiency of the algorithm.)
    • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Grams  + (Temporal text corpora like the Google Ngram dataset usually incorporate a vast number of words and expressions, called ngrams, and their respective usage frequencies over the years. The large quantity of entries complicates working with the dataset, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus to have a sufficient dataset and achieve acceptable results. We propose various compression methods to reduce the absolute number of ngrams in the corpus. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram dataset serve as the basis for our compression method design and experimental validation. The goal is to find compression methods that reduce the complexity of queries on the corpus while still maintaining good results.)
    • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Gram  + (Temporal text corpora like the Google Ngram Data Set usually incorporate a vast number of words and expressions, called ngrams, and their respective usage frequencies over the years. The large quantity of entries complicates working with the data set, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus to have a sufficient data set and achieve acceptable query results. We propose various compression methods to reduce the total number of ngrams in the corpus. Specifically, we propose compression methods that, given an input dictionary of target words, find a compression tailored for queries on a specific topic. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram Data Set serve as the basis for our compression method design and experimental validation.)
    • Implementation and Evaluation of CHQL Operators in Relational Database Systems  + (The IPD defined CHQL, a query algebra that makes it possible to formalize queries about conceptual history. CHQL is currently implemented in MapReduce, which offers less flexibility for query optimization than relational database systems do. The scope of this thesis is to implement the given operators in SQL and to analyze performance differences by identifying limiting factors and by query optimization on the logical and physical level. In the end, we will provide efficient query plans and fast operator implementations to execute CHQL queries in relational database systems.)
    • The Kconfig Variability Framework as a Feature Model  + (The Kconfig variability framework is used to develop highly variable software such as the Linux kernel, ZephyrOS and NuttX. Kconfig allows developers to break down their software into modules and define the dependencies between these modules, so that when a concrete configuration is created, the semantic dependencies between the selected modules are fulfilled, ensuring that the resulting software product can function. Kconfig has often been described as a tool to define software product lines (SPLs), which often occur within the context of feature-oriented programming (FOP). In this paper, we introduce methods to transform Kconfig files into feature models so that the semantics of the model defined in a Kconfig file are preserved. The resulting feature models can be viewed with FeatureIDE, which allows further analysis of the Kconfig file, such as the detection of redundant dependencies and cyclic dependencies.)
    • Review of data efficient dependency estimation  + (The amount and complexity of data collected in the industry is increasing, and data analysis rises in importance. Dependency estimation is a significant part of knowledge discovery and allows strategic decisions based on this information. There are multiple examples that highlight the importance of dependency estimation: knowing that there exists a correlation between the regular dose of a drug and the health of a patient helps to understand the impact of a newly manufactured drug. Knowing how the case material, brand, and condition of a watch influence the price on an online marketplace can help to buy watches at a good price. Material sciences can also use dependency estimation to predict many properties of a material before it is synthesized in the lab, so fewer experiments are necessary. Many dependency estimation algorithms require a large amount of data for a good estimation. But data can be expensive; for example, experiments in material sciences consume material and take time and energy. As we have the challenge of expensive data collection, algorithms need to be data efficient. But there is a trade-off between the amount of data and the quality of the estimation. With a lack of data comes an uncertainty of the estimation. However, the algorithms do not always quantify this uncertainty. As a result, we do not know if we can rely on the estimation or if we need more data for an accurate estimation. In this bachelor's thesis we compare different state-of-the-art dependency estimation algorithms using a list of criteria addressing these challenges and more. We partly developed the criteria ourselves and partly took them from relevant publications. The existing publications formulated many of the criteria only qualitatively; part of this thesis is to make these criteria quantitatively measurable where possible, and to come up with a systematic approach of comparison for the rest. From 14 selected criteria, we focus on criteria concerning data efficiency and uncertainty estimation, because they are essential for lowering the cost of dependency estimation, but we will also check other criteria relevant for the application of the algorithms. As a result, we will rank the algorithms in the different aspects given by the criteria, and thereby identify potential for improvement of the current algorithms. We do this in two steps: first, we check general criteria in a qualitative analysis. For this we check if the algorithm is capable of guided sampling, if it is an anytime algorithm, and if it uses incremental computation to enable early stopping, all of which lead to more data efficiency. We also conduct a quantitative analysis on well-established and representative datasets for the dependency estimation algorithms that performed well in the qualitative analysis. In these experiments we evaluate more criteria: the robustness, which is necessary for error-prone data; the efficiency, which saves time in the computation; the convergence, which guarantees we get an accurate estimation with enough data; and the consistency, which ensures we can rely on an estimation.)
    • Identifying Security Requirements in Natural Language Documents  + (The automatic identification of requirements, and their classification according to their security objectives, can be helpful to derive insights into the security of a given system. However, this task requires significant security expertise to perform. In this thesis, the capability of modern Large Language Models (such as GPT) to replicate this expertise is investigated. This requires the transfer of the model's understanding of language to the given specific task. In particular, different prompt engineering approaches are combined and compared, in order to gain insights into their effects on performance. GPT ultimately performs poorly for the main tasks of identification of requirements and of their classification according to security objectives. Conversely, the model performs well for the sub-task of classifying the security-relevance of requirements. Interestingly, prompt components influencing the format of the model's output seem to have a higher performance impact than components containing contextual information.)
    • Predicting System Dependencies from Tracing Data Instead of Computing Them  + (The concept of Artificial Intelligence for IT Operations combines big data and machine learning methods to replace a broad range of IT operations, including availability and performance monitoring of services. In large-scale distributed cloud infrastructures a service is deployed on different separate nodes. As the size of the infrastructure increases in production, the analysis of metrics parameters becomes computationally expensive. We address the problem by proposing a method to predict dependencies between metrics parameters of system components instead of computing them. To predict the dependencies we use time windowing with different aggregation methods and distributed tracing data that contain detailed information for the system execution workflow. In this bachelor thesis, we inspect the different representations of distributed traces, from simple counting of events to more complex graph representations. We compare them with each other and evaluate the performance of such methods.)
    • Change Detection in High Dimensional Data Streams  + (The data collected in many real-world scenarios such as environmental analysis, manufacturing, and e-commerce are high-dimensional and come as a stream, i.e., data properties evolve over time – a phenomenon known as "concept drift". This brings numerous challenges: data-driven models become outdated, and one is typically interested in detecting specific events, e.g., the critical wear and tear of industrial machines. Hence, it is crucial to detect change, i.e., concept drift, to design a reliable and adaptive predictive system for streaming data. However, existing techniques can only detect "when" a drift occurs and neglect the fact that various drifts may occur in different dimensions, i.e., they do not detect "where" a drift occurs. This is particularly problematic when data streams are high-dimensional. The goal of this Master's thesis is to develop and evaluate a framework to efficiently and effectively detect "when" and "where" concept drift occurs in high-dimensional data streams. We introduce stream autoencoder windowing (SAW), an approach based on the online training of an autoencoder, while monitoring its reconstruction error via a sliding window of adaptive size. We will evaluate the performance of our method against synthetic data, in which the characteristics of drifts are known. We then show how our method improves the accuracy of existing classifiers for predictive systems compared to benchmarks on real data streams.)
    • Automated Test Selection for CI Feedback on Model Transformation Evolution  + (The development of the transformation model also comes with the appropriate system-level testing to verify its changes. Due to the complex nature of the transformation model, the number of tests increases as the structure and feature description become more detailed. However, executing all test cases for every change is costly and time-consuming. Thus, it is necessary to conduct a selection for the transformation tests. In this presentation, you will be introduced to a change-based test prioritization and transformation test selection approach for early fault detection.)
    • Statistical Generation of High Dimensional Data Streams with Complex Dependencies  + (The evaluation of data stream mining algorithms is an important task in current research. The lack of a ground truth data corpus that covers a large number of desirable features (especially concept drift and outlier placement) is the reason why researchers resort to producing their own synthetic data. This thesis proposes a novel framework ("streamgenerator") that allows creating data streams with finely controlled characteristics. The focus of this work is the conceptualization of the framework; however, a prototypical implementation is provided as well. We evaluate the framework by testing our data streams against state-of-the-art dependency measures and outlier detection algorithms.)
    • Statistical Generation of High-Dimensional Data Streams with Complex Dependencies  + (The extraction of knowledge from data streams is one of the most crucial tasks of modern-day data science. Due to their nature, data streams are ever evolving, and knowledge derived at one point in time may be obsolete in the next period. The need for specialized algorithms that can deal with high-dimensional data streams and concept drift is prevalent. A lot of research has gone into creating these kinds of algorithms. The problem here is the lack of data sets with which to evaluate them. A ground truth for a common evaluation approach is missing. A solution to this could be the synthetic generation of data streams with controllable statistical properties, such as the placement of outliers and the subspaces in which special kinds of dependencies occur. The goal of this Bachelor thesis is the conceptualization and implementation of a framework which can create high-dimensional data streams with complex dependencies.)
    • Theory-guided Load Disaggregation in an Industrial Environment  + (The goal of Load Disaggregation (or Non-intrusive Load Monitoring) is to infer the energy consumption of individual appliances from their aggregated consumption. This facilitates energy savings and efficient energy management, especially in the industrial sector. However, previous research showed that Load Disaggregation underperforms in the industrial setting compared to the household setting. Also, the domain knowledge available about industrial processes remains unused. The objective of this thesis was to improve load disaggregation algorithms by incorporating domain knowledge in an industrial setting. First, we identified and formalized several domain knowledge types that exist in the industry. Then, we proposed various ways to incorporate them into the Load Disaggregation algorithms, including Theory-Guided Ensembling, Theory-Guided Postprocessing, and Theory-Guided Architecture. Finally, we implemented and evaluated the proposed methods.)
    • Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis  + (The goal of this bachelor thesis was to analyse classification results using SHAP, a method published in 2017. Explaining how an artificial neural network makes a decision is an interdisciplinary research subject combining computer science, math, psychology and philosophy. We analysed these explanations from a psychological standpoint, and after presenting our findings we will propose a method to improve the interpretability of text explanations using text hierarchies, without losing much, if any, accuracy. A secondary goal was to test a framework developed to analyse a multitude of explanation methods. This framework will be presented alongside our findings, together with instructions on how to use it to create your own analysis. This bachelor thesis is addressed to people familiar with artificial neural networks and other machine learning methods.)
    • Specifying and Maintaining the Correspondence between Architecture Models and Runtime Observations  + (The goal of this thesis is to provide a generic concept of a correspondence model (CM) to map high-level model elements to corresponding low-level model elements, and to generate this mapping during implementation of the high-level model using a correspondence model generator (CMG). In order to evaluate our approach, we implement and integrate the CM for the iObserve project. Further, we implement the proposed CMG and integrate it into ProtoCom, the source code generator used by the iObserve project. We first evaluate the feasibility of this approach by checking whether such a correspondence model can be specified as desired and generated by the CMG. Secondly, we evaluate the accuracy of the approach by checking the generated correspondences against a reference model.)
    • Intelligent Match Merging to Prevent Obfuscation Attacks on Software Plagiarism Detectors  + (The increasing number of computer science students has prompted educators to rely on state-of-the-art source code plagiarism detection tools to deter the submission of plagiarized coding assignments. While these token-based plagiarism detectors are inherently resilient against simple obfuscation attempts, recent research has shown that obfuscation tools empower students to easily modify their submissions, thus evading detection. These tools automatically use dead code insertion and statement reordering to avoid discovery. The emergence of ChatGPT has further raised concerns about its obfuscation capabilities and the need for effective mitigation strategies. Existing defence mechanisms against obfuscation attempts are often limited by their specificity to certain attacks or dependence on programming languages, requiring tedious and error-prone reimplementation. In response to this challenge, this thesis introduces a novel defence mechanism against automatic obfuscation attacks called match merging. It leverages the fact that obfuscation attacks change the token sequence to split up matches between two submissions so that the plagiarism detector discards the broken matches. Match merging reverts the effects of these attacks by intelligently merging neighboring matches based on a heuristic designed to minimize false positives. Our method's resilience against classic obfuscation attacks is demonstrated through evaluations on diverse real-world datasets, including undergrad assignments and competitive coding challenges, across six different attack scenarios. Moreover, it significantly improves detection performance against AI-based obfuscation. What sets our method apart is its language- and attack-independence, while its minimal runtime overhead makes it seamlessly compatible with other defence mechanisms.)
    • Efficient k-NN Search of Time Series in Arbitrary Time Intervals  + (The k nearest neighbors (k-NN) of a time series are the k closest sequences within a dataset regarding a distance measure. Often, not the entire time series, but only specific time intervals are of interest, e.g., to examine phenomena around special events. While numerous indexing techniques support the k-NN search of time series, none of them is designed for an efficient interval-based search. This work presents the novel index structure Time Series Envelopes Index Tree (TSEIT), which significantly speeds up the k-NN search of time series in arbitrary user-defined time intervals.)
    • Reinforcement Learning for Solving the Knight’s Tour Problem  + (The knight’s tour problem is an instance of the Hamiltonian path problem, a typical NP-hard problem. A knight makes L-shaped moves on a chessboard and tries to visit all the squares exactly once. The tour is closed if the knight can finish a complete tour and end on a square that neighbours its starting square; otherwise, it is open. Many algorithms and heuristics have been proposed to solve this problem. The most well-known one is Warnsdorff’s heuristic. Warnsdorff’s idea is to move, in a greedy fashion, to the square with the fewest possible onward moves. Although this heuristic is fast, it does not always return a closed tour. Also, it only works on boards of certain dimensions. Due to its greedy behaviour, it can easily get stuck in a local optimum, as can the other existing approaches. Our goal in this thesis is to come up with a new strategy based on reinforcement learning. Ideally, it should be able to find a closed tour on chessboards of any size. We will consider several approaches: value-based methods, policy optimization and actor-critic methods. Compared to previous work, our approach is non-deterministic and sees the problem as a single-player game with a tradeoff between exploration and exploitation. We will evaluate the effectiveness and efficiency of the existing methods and new heuristics.)
    • Discovering data-driven Explanations  + (The main goal of knowledge discovery is to increase knowledge using some set of data. In many cases it is crucial that the results are human-comprehensible. Subdividing the feature space into boxes with unique characteristics is a commonly used approach for achieving this goal. The patient-rule-induction method (PRIM) extracts such "interesting" hyperboxes from a dataset by generating boxes that maximize some class occurrence within them. However, the quality of the results varies when applied to small datasets. This work will examine to which extent data generators can be used to artificially increase the amount of available data in order to improve the accuracy of the results. Secondly, it will be tested whether probabilistic classification can improve the results when using generated data.)
    • Conception and Design of Privacy-preserving Software Architecture Templates  + (The passing of new regulations like the European GDPR has made clear that in the future it will be necessary to build privacy-preserving systems to protect the personal data of their users. This thesis will introduce the concept of privacy templates to help software designers and architects in this matter. Privacy templates are at their core similar to design patterns and provide reusable and general architectural structures which can be used in the design of systems to improve privacy in early stages of design. In this thesis we will conceptualize a small collection of privacy templates to make it easier to design privacy-preserving software systems. Furthermore, the privacy templates will be categorized and evaluated to classify them and assess their quality across different quality dimensions.)
    • Modellierung und Verifikation von Mehrgüterauktionen als Workflows am Beispiel eines Auktionsdesigns  + (The presentation will be in English. The goal of this work was the development of a system for the verification of multi-item auctions as workflows, using an auction design as an example. Building on various prior works, the Clock-Proxy auction design was modelled as a workflow and prepared for verification with process verification methods. A large number of analysis approaches for auction design already exist, but they are ultimately based on models that allow little variation. For more complex auction procedures, such as the multi-item auctions considered in this work, these approaches do not offer satisfactory possibilities. Based on the existing methods, an approach was developed whose focus lies on the data-centric extension of the modelling and of the verification approaches. In a first step, the rules and data were therefore integrated into the workflow model. The challenge was to extract the control and data flow, as well as the data and rules, from the workflow model via an algorithm, and to sufficiently extend existing transformation algorithms. The evaluation of the approach shows that, with the developed software, the work has achieved the global goal of verifying a workflow by means of properties.)
    • Measuring the Privacy Loss with Smart Meters  + (The rapid growth of renewable energy sources and the increased sales of electric vehicles contribute to a more volatile power grid. Energy suppliers rely on data to predict the demand and to manage the grid accordingly. The rollout of smart meters could provide the necessary data. On the other hand, smart meters can leak sensitive information about the customer. Several solutions were proposed to mitigate this problem. Some depend on privacy measures to calculate the degree of privacy one could expect from a solution. This bachelor thesis constructs a set of experiments which help to analyse some privacy measures and thereby determine whether the value of a privacy measure increases or decreases with an increase in privacy.)
    • Standardized Real-World Change Detection Data  + (The reliable detection of change points is a fundamental task when analysing data across many fields, e.g., in finance, bioinformatics, and medicine. To define "change points", we assume that there is a distribution, which may change over time, generating the data we observe. A change point then is a change in this underlying distribution, i.e., the distribution coming before a change point is different from the distribution coming after. The principled way to compare distributions, and to find change points, is to employ statistical tests. While change point detection is an unsupervised problem in practice, i.e., the data is unlabelled, the development and evaluation of data analysis algorithms requires labelled data. Only a few labelled real-world data sets are publicly available, and many of them are either too small or have ambiguous labels. Further issues are that reusing data sets may lead to overfitting, and preprocessing (e.g., removing outliers) may manipulate results. To address these issues, van den Burg et al. publish 37 data sets annotated by data scientists and ML researchers and use them for an assessment of 14 change detection algorithms. Yet, there remain concerns due to the fact that these are labelled by hand: Can humans correctly identify changes according to the definition, and can they be consistent in doing so? The goal of this Bachelor's thesis is to algorithmically label their data sets following the formal definition and to also identify and label larger and higher-dimensional data sets, thereby extending their work. To this end, we leverage a non-parametric hypothesis test which builds on Maximum Mean Discrepancy (MMD) as a test statistic, i.e., we identify changes in a principled way. We will analyse the labels so obtained and compare them to the human annotations, measuring their consistency with the F1 score. To assess the influence of the algorithmic and definition-conform annotations, we will use them to reevaluate the algorithms of van den Burg et al. and compare the respective performances.)
    • Standardized Real-World Change Detection Data Defense  + (The reliable detection of change points is a fundamental task when analyzing data across many fields, e.g., in finance, bioinformatics, and medicine. To define “change points”, we assume that there is a distribution, which may change over time, generating the data we observe. A change point then is a change in this underlying distribution, i.e., the distribution coming before a change point is different from the distribution coming after. The principled way to compare distributions, and thus to find change points, is to employ statistical tests. While change point detection is an unsupervised problem in practice, i.e., the data is unlabeled, the development and evaluation of data analysis algorithms requires labeled data. Only a few labeled real-world data sets are publicly available, and many of them are either too small or have ambiguous labels. Further issues are that reusing data sets may lead to overfitting, and preprocessing may manipulate results. To address these issues, Burg et al. publish 37 data sets annotated by data scientists and ML researchers and assess 14 change detection algorithms on them. Yet, there remain concerns due to the fact that these are labeled by hand: Can humans correctly identify changes according to the definition, and can they be consistent in doing so?)
    • Assessing Word Similarity Metrics For Traceability Link Recovery  + (The software development process usually involves different artifacts that each describe different parts of the whole software system. Traceability Link Recovery is a technique that aids the development process by establishing relationships between related parts from different artifacts. Artifacts that are expressed in natural language are more difficult for machines to understand and therefore pose a challenge to this link recovery process. A common approach to link elements from different artifacts is to identify similar words using word similarity measures. ArDoCo is a tool that uses word similarity measures to recover trace links between natural language software architecture documentation and formal architectural models. This thesis assesses the effect of different word similarity measures on ArDoCo. The measures are evaluated using multiple case studies. Precision, recall, and encountered challenges for the different measures are reported as part of the evaluation.)
    • Feedback Mechanisms for Smart Systems  + (The talk will be held remotely from Zurich at https://global.gotomeeting.com/join/935923965 and will be streamed to room 348. You can attend via GotoMeeting or in person in room 348. Feedback mechanisms have not yet been sufficiently researched in the context of smart systems. From the research and the industrial perspective, this motivates investigations on how users could be supported to provide appropriate feedback in the context of smart systems. A key challenge for providing such feedback means in the smart system context might be to understand and consider the needs of smart system users for communicating their feedback. Thesis Goal: The goal of this thesis is the creation of innovative feedback mechanisms that are tailored to a specific context within the domain of smart systems. Already existing feedback mechanisms for software in general and smart systems in particular will be assessed, and the users' needs regarding those mechanisms will be examined. Based on this, improved feedback mechanisms will be developed, either by improving on existing ones or by inventing and implementing new concepts. The overall aim of these innovative feedback mechanisms is to enable smart system users to effectively and efficiently give feedback in the context of smart systems.)
    • Flexible User-Friendly Trip Planning Queries  + (The users of location-based services often want to find short routes that pass through multiple Points-of-Interest (PoIs); consequently, developing trip planning queries that can find the shortest routes passing through user-specified categories has attracted considerable attention. If multiple PoI categories, e.g., restaurant and shopping mall, are in an ordered list (i.e., a category sequence), the trip planning query searches for a sequenced route that passes PoIs matching the user-specified categories in order. Existing approaches find the shortest route based on the user query. A major problem with the existing approaches is that they only take the order of PoIs into account and output the routes which match the sequence perfectly. However, users who are interested in applying more constraints, like considering the hierarchy of the PoIs and the relationships among sequence points, cannot express their wishes in the form of user queries. The following example illustrates the problem: A user is interested in visiting three department stores (DS), but she needs to have some food after each visit. It is important for the user to visit three different department stores, but the restaurants could be the same. How could the user express her needs to a trip planning system? The topic of this bachelor thesis is to design such a language for a trip planning system, which enables the user to express her needs in the form of user queries in a flexible manner.)
    • Development and evaluation of efficient kNN search of time series subsequences using the example of the Google Ngram data set  + (There are many data structures and indices that speed up kNN queries on time series. The existing indices are designed to work on the full time series only. In this thesis we develop a data structure that allows speeding up kNN queries in an arbitrary time range, i.e., for an arbitrary subsequence.)
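A brute-force baseline for the range-restricted queries such a data structure is meant to accelerate might look like the sketch below. The z-normalized Euclidean distance and the synthetic example series are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def znorm(s):
    """Z-normalize a window so kNN compares shapes, not offsets/scales."""
    sd = s.std()
    return (s - s.mean()) / sd if sd > 0 else s - s.mean()

def knn_subsequences(series, query, k, start, end):
    """Brute-force k-nearest subsequences of len(query) whose window
    lies fully inside series[start:end] (the queried time range)."""
    m = len(query)
    q = znorm(np.asarray(query, float))
    dists = []
    for i in range(start, end - m + 1):
        w = znorm(np.asarray(series[i:i + m], float))
        dists.append((float(np.linalg.norm(q - w)), i))
    dists.sort()
    return dists[:k]

series = np.sin(np.linspace(0, 20, 400))
hits = knn_subsequences(series, series[100:120], k=3, start=0, end=400)
print(hits[0])  # the exact match at offset 100 has distance 0
```

An index over subsequences would replace the linear scan in the loop while returning the same result set.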
    • Evaluation of a Reverse Engineering Approach in the Context of Component-Based Software Systems  + (This thesis aims to evaluate the component architecture generated for component-based software systems by reverse engineering. The evaluation method involves performing a manual analysis of the respective software systems and then comparing the component architecture obtained through the manual analysis with the results of reverse engineering. The goal is to evaluate a number of parameters, with a focus on correctness, related to the results of reverse engineering. This thesis presents the specific steps and considerations involved in manual analysis. It also performs manual analysis on selected software systems that have already undergone reverse engineering analysis and compares the results to evaluate the differences between reverse engineering and ground truth. In summary, this thesis evaluates the accuracy of reverse engineering by contrasting manual analysis with reverse engineering in the analysis of software systems, and provides some direction and support for the future development of reverse engineering.)
    • Blueprint for the Transition from Static to Dynamic Deployment  + (This thesis defines a blueprint describing a successful ad-hoc deployment with generally applicable rules, thus providing a basis for further developments. The blueprint itself is based on the experience of developing a Continuous Deployment system, the subsequent tests, and continuous user feedback. In order to evaluate the blueprint, the blueprint-based dynamic system was compared with the previously static deployment, and a user survey was conducted. The results of the study show that the rules described in the blueprint have far-reaching consequences and generate additional value for the users during deployment.)
    • Developing a Database Application to Compare the Google Books Ngram Corpus to German News Corpora  + (This thesis focuses on the development of a database application that enables a comparative analysis between the Google Books Ngram Corpus (GBNC) and German news corpora. The GBNC provides a vast collection of books spanning various time periods, while the German news corpora encompass up-to-date linguistic data from news sources. Such a comparison aims to uncover insights into language usage patterns, linguistic evolution, and cultural shifts within the German language. Extracting meaningful insights from the compared corpora requires various linguistic metrics, statistical analyses, and visualization techniques. By identifying patterns, trends, and linguistic changes, we can uncover valuable information on the evolution of language usage over time. This thesis provides a comprehensive framework for comparing the GBNC to other corpora, showcasing the development of a database application that not only enables valuable linguistic analyses but also sheds light on the composition of the GBNC by highlighting linguistic similarities and differences.)
    • Feature-Based Time Series Generation  + (To build highly accurate and robust machine learning algorithms, practitioners require data in high quality, quantity, and diversity. Available time series data sets often lack at least one of these attributes. In cases where collecting more data is not possible or too expensive, data-generating methods help to extend existing data. Generation methods are challenged to add diversity to existing data while providing the user control over what type of data is generated. Modern methods only address one of these challenges. In this thesis we propose a novel generation algorithm that relies on characteristics of time series to enable control over the generation process. We combine classic interpretable features with unsupervised representation learning by modern neural network architectures. Further, we propose a measure and visualization for diversity in time series data sets. We show that our approach can create a controlled set of time series as well as add diversity by recombining characteristics across available instances.)
    • Assessing Human Understanding of Machine Learning Models  + (To deploy an ML model in practice, a stakeholder needs to understand the behaviour and implications of this model. To help stakeholders develop this understanding, researchers propose a variety of technical approaches, so-called eXplainable Artificial Intelligence (XAI). Current XAI approaches follow very task- or model-specific objectives. There is currently no consensus on a generic method to evaluate most of these technical solutions. This complicates comparing different XAI approaches and choosing an appropriate solution in practice. To address this problem, we formally define two generic experiments to measure human understanding of ML models. From these definitions we derive two technical strategies to improve understanding, namely (1) training a surrogate model and (2) translating inputs and outputs to effectively perceivable features. We think that most existing XAI approaches only focus on the first strategy. Moreover, we show that established methods to train ML models can also help stakeholders to better understand ML models. In particular, they help to mitigate cognitive biases. In a case study, we demonstrate that our experiments are practically feasible and useful. We suggest that future research on XAI should use our experiments as a template to design and evaluate technical solutions that actually improve human understanding.)
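Strategy (1), training a surrogate model, can be illustrated with a toy sketch: an interpretable model is fitted to the predictions of a black box and judged by its fidelity to them. The black-box function, the data, and the single-threshold ("decision stump") surrogate below are all hypothetical choices for illustration, not the study's setup.

```python
import numpy as np

# Hypothetical black box: labels 1 iff a nonlinear score exceeds 1.
def black_box(x):
    return (np.sin(x) + x > 1.0).astype(int)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 1000)
y = black_box(X)                    # surrogate is fitted to these outputs

# Fit the simplest interpretable surrogate, a single threshold rule,
# by grid search over candidate thresholds; "accuracy" here is
# fidelity to the black box, not accuracy on ground-truth labels.
thresholds = np.linspace(-3, 3, 601)
acc = [((X > t).astype(int) == y).mean() for t in thresholds]
best = thresholds[int(np.argmax(acc))]
print(f"surrogate: predict 1 iff x > {best:.2f}, fidelity {max(acc):.2f}")
```

A stakeholder can inspect the one-line surrogate rule directly, which is the point of the strategy; richer surrogates (e.g., shallow trees) follow the same fit-to-the-black-box pattern.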
    • Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Final BA Presentation)  + (To evaluate the loss of cognitive ML models, e.g., text or image classifiers, accurately, one usually needs a lot of test data annotated manually by experts. For an accurate estimate, the test data should be representative, or else it would be hard to assess whether a model overfits, i.e., whether it significantly uses spurious features of the images to decide on its predictions. With techniques such as feature attribution, one can then compare the important features that the model sees with one's own expectations and can therefore be more confident about whether or not to trust the model. In this work, we propose a method that estimates the loss of image classifiers based on feature attribution techniques. We use the classic approach for loss estimation as our benchmark to evaluate the proposed method. Our analysis reveals that the proposed method seems to yield a loss estimate similar to that of the classic approach given a good image classifier and representative test data. Based on our experiment, we expect that the proposed method could give a better loss estimate than the classic approach in cases where one has biased test data and an image classifier which overfits.)
    • Integrated Reliability Analysis of Business Processes and Information Systems  + (Today it is hardly possible to find a business process (BP) that does not involve working with an information system (IS). In order to better plan and improve such BPs, a lot of research has been done on modeling and analysis of BPs. Given the dependency between BPs and IS, such assessment of BPs should take the IS into account. Furthermore, most assessments of BPs only consider the functionality, but not the so-called non-functional requirements (NFR). This is not adequate, since NFRs influence BPs just as they influence IS. In particular, the NFR reliability is interesting for the planning of BPs in business environments. Therefore, the presented approach provides an integrated reliability analysis of BPs and IS. The proposed analysis takes humans, device resources, and the impact of the IS into account. In order to model reliability information, it has to be determined which metrics will be used for each BP element. Thus, a structured literature search on reliability modeling and analysis was conducted across seven resources. Through the structured search, 40 papers on modeling and analysis of BP reliability were found; ten of them were classified as relevant for the topic. The structured search revealed that no approach allows for modeling the reliability of activities and resources separately from each other. Moreover, there is no common answer on how to model human resources in BPs. In order to enable such an integrated approach, the reliability information of BPs is modeled as an extension of the IntBIIS approach. BP actions get a failure probability, and the resources are extended with two reliability-related attributes. For device resources, the commonly used MTTF and MTTR are added in order to provide reliability information. Roles, which are associated with actor resources, are annotated with MTTF and a newly developed MTTRepl. The next step is a reliability analysis of a BP including the IS. Markov chains and reduction rules are used to analyze the BP reliability. This approach is implemented exemplarily in Java in the context of PCM, which already provides analysis for IS. The result of the analysis is the probability of successful execution of the BP including the IS. An evaluation of the implemented analysis shows that it is possible to analyze the reliability of a BP including all resources and the involved IS. The results show that the reliability prediction is more accurate when BP and IS are assessed through a combined analysis.)
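The reduction-rule style of analysis described above can be sketched as follows. The availability formula is the standard steady-state expression MTTF/(MTTF+MTTR); the action probabilities and the two reduction rules shown (sequence and XOR split) are illustrative assumptions, not the IntBIIS implementation.

```python
def availability(mttf, mttr):
    """Steady-state availability of a resource from MTTF and MTTR."""
    return mttf / (mttf + mttr)

def sequence(*probs):
    """Reduction rule: sequential actions must all succeed."""
    r = 1.0
    for p in probs:
        r *= p
    return r

def alternative(branch_probs, weights):
    """Reduction rule: XOR split, weighted by branch probabilities."""
    return sum(p * w for p, w in zip(branch_probs, weights))

# Hypothetical BP: two actions on one device (MTTF=980 h, MTTR=20 h),
# followed by an XOR split between a reliable and a weaker path.
a_dev = availability(980, 20)                       # 0.98
p = sequence(0.999 * a_dev, 0.995 * a_dev,
             alternative([0.99, 0.90], [0.7, 0.3]))
print(round(p, 4))  # probability of successful execution
```

Chaining such rules bottom-up collapses the whole process graph to a single success probability, which is the output the analysis produces.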
    • Deriving Twitter Based Time Series Data for Correlation Analysis  + (Twitter has been identified as a relevant data source for modelling purposes in the last decade. In this work, our goal was to model the conversational dynamics of inflation development in Germany through Twitter data mining. To accomplish this, we summarized and compared Twitter data mining techniques for time series data from pertinent research. Then, we constructed five models for generating time series from topic-related tweets and user profiles of the last 15 years. Evaluating the models, we observed that several approaches, like modelling for user impact or adjusting for automated Twitter accounts, show promise. Yet, in the scenario of modelling inflation expectation dynamics, these more complex models could not contribute to a higher correlation between German CPI and the resulting time series compared to a baseline approach.)
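Scoring a candidate model by its correlation with the reference indicator, as described above, reduces to a Pearson correlation between two aligned monthly series; the numbers below are made up for illustration.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between a tweet-derived monthly series and a
    reference indicator such as CPI, used to score each model."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

# Illustrative aligned monthly series (hypothetical values).
cpi    = [99.1, 99.4, 100.0, 100.8, 101.9, 103.2]
tweets = [120, 135, 150, 180, 240, 330]
print(round(pearson(tweets, cpi), 3))
```

Each of the five generation models would produce its own `tweets`-style series; the model whose series correlates best with CPI wins the comparison.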
    • Entwurf und Umsetzung von Zugriffskontrolle in der Sichtenbasierten Entwicklung  + (To cope with the increasing complexity of technical systems, view-based development processes are used. The views defined in such processes show only the data about the system that is relevant for a specific information need, such as the architecture, the implementation, or an excerpt thereof, thus reducing the amount of information and simplifying work with the system. Besides reducing information, it may also be necessary to restrict access due to missing access permissions, for example in cross-organizational collaboration to fulfil contractual agreements. Implementing such restrictions requires access control. Existing works use access control to generate a view; defining further views on top of it is not supported. Moreover, a general treatment of integrating access control into a view-based development process is missing. Therefore, in this work we present a concept for integrating role-based access control into a view-based development process for arbitrary systems. The concept enables the fine-grained definition and evaluation of access rights for individual model elements for arbitrary metamodels. We implement the concept prototypically in Vitruv, a framework for view-based development, and evaluate the prototype's functionality by means of case studies, in which the access control could be applied successfully. We also discuss the prototype's integrability into a general view-based development process.)
    • Konzept eines Dokumentationsassistenten zur Erzeugung strukturierter Anforderungen basierend auf Satzschablonen  + (Maintaining the quality and credibility of a product requires systematic requirements management, in which a product's characteristics are described by requirements. Therefore, this thesis develops a concept for a documentation assistant with which users can create structured requirements based on the SOPHIST sentence templates. This includes a linguistic processing approach that extracts semantic roles from free text. During the documentation process, the semantic roles are used to identify the best-fitting sentence template and to present it to the user as guidance. In addition, a further aid was offered, namely autocompletion, which can predict the next word using Markov chains. In total, around 500 requirements from various sources were used to assess the integrity of the concept. Classifying the text input into a sentence template achieves an F1 score of 0.559; the sentence template for functional requirements was identified best, with an F1 score of 0.908. Furthermore, the interplay of the aids was evaluated in a workshop, which showed that applying the concept improves the completeness of requirements and thus increases the quality of the documented requirements.)
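The Markov-chain autocompletion mentioned above can be sketched as a first-order (bigram) model: count which word follows which, then suggest the most frequent successor. The corpus fragments and helper names below are hypothetical, not the thesis' data.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """First-order Markov chain over words: count word -> successor."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for w, nxt in zip(words, words[1:]):
            model[w][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent successor of `word`, or None if unseen."""
    succ = model.get(word.lower())
    return succ.most_common(1)[0][0] if succ else None

# Hypothetical requirement fragments in sentence-template style.
corpus = [
    "the system shall provide the user with the ability to log in",
    "the system shall provide an interface",
    "the system must store the data",
]
model = train_bigrams(corpus)
print(predict_next(model, "shall"))  # "provide" is the most common successor
```

Trained on template-conformant requirements, such a model nudges authors toward the expected template wording as they type.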
    • Überführen von Systemarchitekturmodellen in die datenschutzrechtliche Domäne durch Anwenden der DSGVO  + (To protect the personal data that is ubiquitous in the digital space from misuse, the EU introduced the General Data Protection Regulation (GDPR), with which every company handling personal data in the digital space must comply. Implementing it in software systems, however, is laborious because the legal domain is involved. In this bachelor's thesis, a transformation from Palladio into a GDPR model was therefore developed to ease the communication between the different disciplines.)
    • Kritische Workflows in der Fertigungsindustrie  + (To identify possible inconsistencies between technical models and the workflows causing them in the manufacturing industry, the entire manufacturing process of an exemplary precision manufacturer was divided into individual workflows. Nine expert interviews were then conducted to identify possible inconsistencies between technical models and to categorize them into the respective causing workflows. In total, 13 possible inconsistencies were presented and their respective origins explained. In a second interview iteration, the company's experts were asked again about each previously identified inconsistency in order to determine the estimated probabilities of occurrence of the inconsistencies and possible effects on previously executed or subsequent workflows.)
    • Modellierung von Annahmen in Softwarearchitekturen  + (Undocumented security assumptions can lead to software vulnerabilities being neglected, since the responsibility for and reference points of security assumptions are often unclear. The goal of this work is therefore to integrate security assumptions into component-based design. Based on expert interviews and Constructive Grounded Theory, a model for this purpose was derived. A feasibility study demonstrates the use of the assumption model.)
    • Komplexe Abbildungen von Formularelementen zur Generierung von aktiven Ontologien  + (Our daily lives are increasingly eased by assistance systems, including the ever more widely used intelligent voice assistants such as Apple's Siri. Instead of tediously comparing flights on various web portals, voice assistants can do the same work. To process information and forward it to the appropriate web service, the assistance system must understand natural language and be able to represent it formally. For this purpose, Siri uses active ontologies (AOs), which currently have to be created manually with great effort. The EASIER framework architecture developed at KIT deals with the automatic generation of active ontologies from web forms. One challenge in creating AOs from web forms is mapping differently realized form elements with the same semantics, since semantically equal but differently realized concepts should be merged into one AO node. It is therefore necessary to identify semantically similar form elements. This work deals with the automatic identification of such similarities and the construction of mappings between form elements.)
    • Erstellung eines Benchmarks zum Anfragen temporaler Textkorpora zur Untersuchung der Begriffsgeschichte und historischen Semantik  + (Studies in conceptual history are experiencing an upswing. New technological possibilities make it feasible to examine large amounts of text for important passages with machine support. To this end, the methodological practices of historians and linguists were studied in order to satisfy their information needs as well as possible. On this basis, new query operators were developed and presented, in combination with existing operators, in a functional benchmark. A query language in particular offers the parameterizability needed to support the historians' variable way of working.)
    • Elicitation and Classification of Security Requirements for Everest  + (Incomplete and unverified requirements can lead to misunderstandings and misconceptions. Especially in the security domain, violated requirements can point to potential vulnerabilities. To check software for vulnerabilities, security requirements are tied to its implementation; for this, specific requirement attributes must be identified and linked to the design. In this work, 93 design-level security requirements are elicited for the open-source software EVerest, a full-stack environment for charging stations. Using prompt engineering and fine-tuning, design elements are classified with GPT and their respective mentions are extracted from the elicited requirements. The results suggest that classifying design elements in requirements works well with both prompt engineering and fine-tuning (F1 score: 0.67-0.73). Regarding the extraction of design elements, however, fine-tuning (F1 score: 0.7) outperforms prompt engineering (F1 score: 0.52). When both tasks are combined, fine-tuning (F1 score: 0.87) likewise outperforms prompt engineering (F1 score: 0.61).)
    • Detecting Outlying Time-Series with Global Alignment Kernels  + (Using outlier detection algorithms, e.g., Support Vector Data Description (SVDD), for detecting outlying time series usually requires extracting domain-specific attributes. However, this indirect way needs expert knowledge, making SVDD impractical for many real-world use cases. Incorporating "Global Alignment Kernels" directly into SVDD to compute the distance between time series data bypasses the attribute-extraction step and makes the application of SVDD independent of the underlying domain. In this work, we propose a new time series outlier detection algorithm, combining "Global Alignment Kernels" and SVDD. Its outlier detection capabilities will be evaluated on synthetic data as well as on real-world data sets. Additionally, our approach's performance will be compared to state-of-the-art methods for outlier detection, especially with regard to the types of detected outliers.)
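The dynamic program behind the global alignment kernel can be sketched as follows. This is a minimal, unoptimized version of Cuturi's kernel with a Gaussian local similarity; the normalization step and the example series are illustrative assumptions, and the SVDD part that would consume the kernel matrix is omitted.

```python
import math

def ga_kernel(x, y, sigma=1.0):
    """Global alignment kernel: sums the scores of all monotone
    alignments between two series via dynamic programming."""
    n, m = len(x), len(y)
    M = [[0.0] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Gaussian similarity of the aligned pair (x_i, y_j)
            k = math.exp(-((x[i - 1] - y[j - 1]) ** 2) / (2 * sigma ** 2))
            M[i][j] = k * (M[i - 1][j] + M[i][j - 1] + M[i - 1][j - 1])
    return M[n][m]

def normalized(x, y):
    """Rescale so self-similarity is 1, making series comparable."""
    return ga_kernel(x, y) / math.sqrt(ga_kernel(x, x) * ga_kernel(y, y))

a, b, c = [0, 1, 2, 3], [0, 1, 2, 3.1], [5, 0, 5, 0]
print(normalized(a, b) > normalized(a, c))  # similar series score higher
```

SVDD would then work on the pairwise kernel matrix of all training series, flagging series far from the learned center as outliers.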
    • Efficient Verification of Data-Value-Aware Process Models  + (Verification methods detect unexpected behavior of business process models before their execution. In many process models, verification depends on data values. A data value is a value in the domain of a data object, e.g., $1000 as the price of a product. However, verification of process models with data values often leads to state-space explosion. This problem is more serious when the domain of data objects is large. The existing works to tackle this problem often abstract the domain of data objects. However, the abstraction may lead to a wrong diagnosis when process elements modify the value of data objects. In this thesis, we provide a novel approach to enable verification of process models with data values, so-called data-value-aware process models. A distinctive feature of our approach is to support modification of data values while preserving the verification results. We show the functionality of our approach by conducting the verification of a real-world application: the German 4G spectrum auction model.)
    • On the Interpretability of Anomaly Detection via Neural Networks  + (Verifying anomaly detection results when working on an unsupervised use case is challenging. For large datasets, manual labelling is economically infeasible. In this thesis we create explanations to help verify and understand the detected anomalies. We develop a rule generation algorithm that describes frequent patterns in the output of autoencoders. The number of rules is significantly lower than the number of anomalies; thus, finding explanations for these rules is much less effort compared to finding explanations for every single anomaly. The performance is evaluated on a real-world use case, where we achieve a significant reduction of the effort required for domain experts to understand the detected anomalies, but cannot specify the usefulness in exact numbers due to the missing labels. Therefore, we also evaluate the approach on a benchmark dataset.)
    • A Mobility Case Study Framework for Validating Uncertainty Impact Analyses regarding Confidentiality  + (Confidentiality is an important security requirement for information systems. Already in early design, uncertainties exist, both about the system and its environment, that can affect confidentiality. There are approaches that support software architects in investigating uncertainties and their impact on confidentiality, thereby reducing effort. However, these approaches have not yet been evaluated extensively. A uniform procedure is important in such an evaluation to obtain consistent results. Although there is general work in this area, it is not specific enough to meet this requirement. In this work, we present a framework intended to close this gap. It consists of an investigation process and a case study protocol, which are meant to help researchers conduct further case studies for validating uncertainty impact analyses in a structured way, and thereby also to study uncertainties and their impact on confidentiality. We evaluate our approach by conducting a mobility case study.)
    • Erhaltung des Endanwenderflows in PREEvision durch asynchrone Job-Verarbeitung  + (Many model-driven development environments follow a purely sequential approach: model transformations are executed sequentially, and only one model transformation may run at a time. On sufficiently large amounts of data, however, this imposes considerable limitations. Users may have to wait several minutes or even hours for a model transformation to complete, during which the software does not accept user input, even if the transformation accesses only a part of the model. This can interrupt the user's flow, a mental state that is both productive and perceived as rewarding. One way to reduce the risk of interrupting the user's flow is to shorten the waiting time by executing model transformations asynchronously in the background; the user can then continue working, with some restrictions, while the transformation is carried out. In the context of model-driven software development there is little research on concurrency. While there are some ambitions to parallelize model transformations, there is no research on executing model transformations asynchronously so that further model transformations can be performed simultaneously. Using the model-driven software PREEvision by Vector Informatik GmbH as an example, this work presents mechanisms and possible implementations with which simultaneous model transformations can be realized. For four operations in PREEvision, it is also described exemplarily how they can be modified using the presented mechanisms so that they execute asynchronously. The prototypes of the described modifications are then evaluated with respect to interruptions of the user's flow and to correctness. Finally, the work draws a conclusion on the applicability of the presented mechanisms and on whether the prototypes let the user encounter waiting dialogs less often.)
    • Ein Ansatz zur Wiederherstellung von Nachverfolgbarkeitsverbindungen für natürlichsprachliche Softwaredokumentation und Quelltext  + (Maintainability plays a central role in the longevity of software projects. An important part of maintainability is that the natural-language documentation of the source code provides good insight into the project and its associated source code. To improve the maintainability of these two software artifacts, the task of this thesis is to establish links between their elements. These links are called trace links and can serve various maintainability purposes: they enable, for example, the detection of inconsistencies between the two software artifacts, and they can also be used for various analyses. To recover these trace links retroactively from the two software artifacts, natural-language documentation and source code, the existing ArDoCo framework is used and extended to the source code artifact. ArDoCo's existing decision criteria are also adapted to the new context. The novel context leads to challenges regarding the amount of data, which are addressed by new decision criteria. The results of this thesis clearly show potential, which is why further work should build on it.)
    • Uncertainty-aware Confidentiality Analysis Using Architectural Variations  + (When examining software systems for confidentiality violations, uncertainties lead to incorrect statements about the architecture. Confidentiality statements can hardly be made at design time without handling these uncertainties. We develop a combination algorithm that takes information about the uncertainties into account when analyzing the architecture scenarios and can derive from this a statement about the confidentiality of the system. We evaluate whether it is possible to assess a system non-binarily using additional information, how accurate the combination algorithm is, and whether the additional information remains minimal enough that a software architect can actually use the combination algorithm.)
    • Surrogate models for crystal plasticity - predicting stress, strain and dislocation density over time  + (When engineers design structures, prior knowledge of how they will react to external forces is crucial. Applied forces introduce stress, leading to dislocations of individual molecules that may ultimately cause material failure, such as cracks, if the internal strain of the material exceeds a certain threshold. We can observe this by applying increasing physical forces to a structure and measuring the stress, strain and dislocation density curves. The finite element method (FEM) enables the simulation of a material deforming under external forces, but it comes with very high computational costs. This makes it infeasible to conduct a large number of simulations with varying parameters. In this thesis, we use neural-network-based sequence models to build a data-driven surrogate model that predicts the stress, strain and dislocation density curves produced by an FEM simulation based on the simulation's input parameters.)
    • Context Generation for Code and Architecture Changes Using Large Language Models  + (While large language models have succeeded in generating code, they struggle to modify large existing code bases. The Generated Code Alteration (GCA) process is designed, implemented, and evaluated in this thesis. Given a natural-language task, the GCA process can automatically modify a large existing code base. Different variations and instantiations of the process are evaluated in an industrial case study, and the code generated by the GCA process is compared to code written by human developers. The language-model-based GCA process was able to generate 13.3 lines per error, while the human baseline generated 65.8 lines per error. While the generated code did not match overall human performance in modifying large code bases, it could still provide assistance to human developers.)
    • Generating Causal Domain Knowledge for Cloud Systems Monitoring  + (While standard machine learning approaches rely solely on data to learn relevant patterns, in certain fields this may not be sufficient. Researchers in the healthcare domain have successfully applied causal domain knowledge to improve the prediction quality of machine learning models, especially for rare diseases: the causal domain knowledge informs the model about similar diseases, thus improving the quality of its predictions. However, some domains, such as cloud systems monitoring, lack readily available causal domain knowledge, so the knowledge must be approximated. It is therefore important to systematically investigate the processes and design decisions that affect the knowledge generation process. In this study, we showed how causal discovery algorithms can be employed to generate causal domain knowledge from raw textual logs in the cloud systems monitoring domain. We also investigated the impact of various design choices on the domain knowledge generation process through systematic testing across multiple datasets, and shared the insights we gained. To our knowledge, this is the first time such an investigation has been conducted.)
    • Model-Based Rule Engine for the Reconstruction of Component-Based Software Architectures for Quality Prediction  + (With architecture models, software developers and architects are able to enhance their documentation and communication, perform architecture analyses, support design decisions and, finally, with the PCM, start quality predictions. However, manually creating component architecture models for complex systems is difficult and time-consuming; automatically generating architecture models from existing projects instead saves time and effort. For this purpose, a new approach is proposed which uses technology-specific rule artifacts and a rule engine that transforms the source code of software projects into a model representation, applies the given rules, and then automatically generates a static software architecture model. The resulting architecture model is then usable for quality prediction purposes inside the PCM context. The concepts for this approach are presented, and a software system is developed which can easily be extended with new rule artifacts to be useful for a broader range of technologies used in different projects. With the implementation of a prototype, the collection of technology-specific rule sets, and an evaluation including different reference systems, the proposed functionality is proven and a solid foundation for future improvements is given.)
    • Enabling the Collaborative Collection of Uncertainty Sources Regarding Confidentiality  + (With digitalization in progress, the amount of sensitive data stored in software systems is increasing. However, the confidentiality of this data often cannot be guaranteed, as uncertainties with an impact on confidentiality exist, especially in the early stages of software development. As the consideration of uncertainties regarding confidentiality is still novel, there is a lack of awareness of the topic among software architects. Additionally, the existing knowledge is scattered among researchers and institutions, making it challenging for software architects to comprehend and utilize. Current research on uncertainties regarding confidentiality has focused on analyzing software systems to assess the possibilities of confidentiality violations, as well as on developing methods to classify uncertainties. However, these approaches are limited to the researchers' observed uncertainties, limiting the generalizability of classification systems, the validity of analysis results, and the development of mitigation strategies. This thesis presents an approach to enable the collection and management of knowledge on uncertainties regarding confidentiality, enabling software architects to better comprehend and identify such uncertainties. Furthermore, the proposed approach strives to enable collaboration between researchers and practitioners to manage the effort of collecting and maintaining the knowledge. To validate this approach, a prototype was developed and evaluated in a user study with 17 participants from software engineering, including 7 students, 5 researchers, and 5 practitioners. Results show that the approach can support software architects in identifying and describing uncertainties regarding confidentiality, even with limited prior knowledge, as they could identify and describe uncertainties correctly in a close-to-real-world scenario in 94.4% of the cases.)
    • Automated Classification of Software Engineering Papers along Content Facets  + (With existing search strategies, specific paper contents can only be searched for indirectly: keywords are used to describe the searched content as accurately as possible, but many of the results are not related to what was searched for. A classification of these contents, if automated, could extend the search process, allow the searched content to be specified directly, and enhance the current state of scholarly communication. In this thesis, we investigated the automatic classification of scientific papers in the software engineering domain. In doing so, a classification scheme of paper contents with regard to Research Object, Statement, and Evidence was consolidated. We then compared, in a comparative analysis, the machine learning algorithms Naïve Bayes, Support Vector Machine, Multi-Layer Perceptron, Logistic Regression, Decision Tree, and BERT applied to the classification task.)
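A comparative setup like the one described above can be sketched in a few lines; note that the toy sentences, facet labels, and pipeline choices below are illustrative assumptions, not the thesis's actual corpus or configuration:

```python
# Hypothetical sketch: comparing two of the classifiers named above
# (naive Bayes and logistic regression) on a toy facet-classification task.
# Sentences and labels are invented placeholders, not the real data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: paper sentences labeled with an assumed content facet.
sentences = [
    "We propose a new architecture description language.",
    "Our tool analyzes Java source code for defects.",
    "We conducted a controlled experiment with students.",
    "A case study evaluates the modeling approach.",
]
labels = ["research_object", "research_object", "evidence", "evidence"]

results = {}
for name, clf in [("nb", MultinomialNB()), ("lr", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(sentences, labels)
    # Training accuracy only -- a real comparison would use held-out data.
    results[name] = pipe.score(sentences, labels)

print(results)
```

In a real evaluation each algorithm would of course be scored on unseen papers; the sketch only shows the shared vectorize-train-score skeleton that makes the algorithms interchangeable.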
    • Location sharing with secrecy guarantees in mobile social networks  + (With the increasing popularity of location-based services and mobile online social networks (mOSNs), secrecy concerns have become one of the main worries of their users due to location information exposure. Users are required to store their location, i.e., their physical position, and the relationships they have with other users, e.g., friends, to access the services offered by these networks. This information, however, is sensitive and has to be protected from unauthorized access. In this thesis, we aim to offer location-based services to users of mOSNs while guaranteeing that an adversary, including the service provider, will not be able to learn the locations of the users (location secrecy) or the relationships existing between them (relationship secrecy). We consider both linking attacks and collusion attacks. We propose two approaches, R-mobishare and V-mobishare, which combine existing cryptographic techniques. Both approaches use, among others, private broadcast encryption and homomorphic encryption: private broadcast encryption protects the relationships between users, and homomorphic encryption protects their locations. Our system allows users to query their nearby friends. Next, we prove that our proposed approaches fulfill our secrecy guarantees, i.e., location and relationship secrecy. Finally, we evaluate the query performance of our proposed approaches using real online social networks. Our experiments show that in a region with low population density, such as suburbs, our first approach, R-mobishare, performs better than V-mobishare. On the contrary, in a region with high population density, such as downtown, our second approach, V-mobishare, performs better than R-mobishare.)
    • Traceability of Telemetry Data in Hybrid Architectures  + (With the rise of Software-as-a-Service products, the software development landscape has transformed into a more agile and data-driven environment. The amount of telemetry data collected from users' actions is rapidly increasing, and with it the possibilities, but also the challenges, of using the collected data for quality improvement purposes. LogMeIn Inc. is a global company offering Software-as-a-Service solutions for remote collaboration and IT management; an example product is GoToMeeting, which allows users to create and join virtual meeting rooms. This Master's thesis presents the JoinTracer approach, which enables the telemetry-data-based traceability of join-flows across the GoToMeeting architecture. The approach combines new mechanics and existing traceability techniques from different traceability communities to leverage synergies and enable the traceability of individual join-flows. In this work, the JoinTracer approach is designed and implemented, as well as evaluated regarding its functionality, performance and acceptance. The results are discussed to analyze the future development and the applicability of this approach to other contexts.)
    • Worteinbettungen für die Anforderungsdomäne  + (Word embeddings are used in many ways in tasks from the requirements domain. In this thesis, word embeddings are trained for the requirements domain and tested to see whether they achieve better results in such tasks than generic word embeddings. For this purpose, a corpus of documents common in the requirements domain is built, comprising 21,458 requirement descriptions and 1,680 user stories. Various word embedding models are analyzed for their suitability for training on the corpus. The domain-specific word embeddings are trained with the fastText model, which represents rare words better by taking subwords into account. They are evaluated intrinsically by examining word similarities and through cluster analyses. The domain-specific word embeddings capture some domain-specific subtleties better, while the examined generic word embeddings represent some words better. To exploit the advantages of both, various combination methods are analyzed and evaluated. In a task classifying sentences from requirement descriptions, weighted averaging with a weight of 0.7 in favor of the generic word embeddings achieves the best results; its best value is an accuracy of 0.83 using an LSTM as classifier and a train-test split as evaluation procedure. The domain-specific and generic word embeddings alone achieve only 0.75 and 0.72, respectively.)
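The weighted averaging of two embeddings (weight 0.7 for the generic vectors, as in the best-performing combination above) amounts to a per-word linear blend; a minimal NumPy sketch, where the vocabulary and the random vectors are invented placeholders standing in for real fastText embeddings:

```python
# Hypothetical sketch of weighted averaging of two word embeddings,
# with weight 0.7 for the generic and 0.3 for the domain-specific vectors.
# The vocabulary and the random vectors are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["system", "user", "requirement"]
dim = 4  # real embeddings would be e.g. 300-dimensional

generic = {w: rng.normal(size=dim) for w in vocab}  # stand-in generic embedding
domain = {w: rng.normal(size=dim) for w in vocab}   # stand-in domain embedding

def combine(word, w_generic=0.7):
    """Weighted average of the generic and domain-specific vector for a word."""
    return w_generic * generic[word] + (1.0 - w_generic) * domain[word]

combined = {w: combine(w) for w in vocab}
```

In practice the two vocabularies must first be aligned (and the vector spaces made comparable), which is exactly why the thesis also evaluates other combination methods.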
    • Bayesian Optimization for Wrapper Feature Selection  + (Wrapper feature selection can lead to highly accurate classifications, but its computational costs are generally very high. Bayesian optimization, on the other hand, has already proven to be very efficient at optimizing black-box functions. This approach uses Bayesian optimization to minimize the number of evaluations, i.e., the training of models with different feature subsets. We propose four different ways to set up the objective function for the Bayesian optimization. On 14 classification datasets, the approach is compared against 14 other established feature selection methods, including other wrapper methods, but also filter methods and embedded methods. We use Gaussian processes and random forests for the surrogate model. The classifiers applied to the selected feature subsets are logistic regression and naive Bayes. We compare all the feature selection methods against each other by their classification accuracy and runtime. Our approach keeps up with the most established feature selection methods, but the evaluation also shows that the experimental setup does not value the feature selection enough. Concluding, we give guidelines for a more appropriate experimental setup and provide several concepts for developing Bayesian optimization for wrapper feature selection further.)
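The core loop of such an approach, an expensive wrapper objective queried sparingly under the guidance of a surrogate, can be sketched as follows. This is a simplified stand-in, not the thesis's implementation: it uses a random-forest surrogate over binary feature masks and a greedy "pick the best of a few random candidates" acquisition, and all parameters (round counts, candidate counts, dataset) are illustrative:

```python
# Hypothetical sketch of Bayesian-optimization-style wrapper feature selection:
# a random-forest surrogate predicts classifier accuracy from a binary feature
# mask, so only a few masks ever need the expensive cross-validation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]
rng = np.random.default_rng(0)

def evaluate(mask):
    """Expensive wrapper objective: CV accuracy on the selected feature subset."""
    cols = np.flatnonzero(mask)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

# Initial design: a few random feature masks, evaluated for real.
masks = [rng.integers(0, 2, n_features) for _ in range(5)]
scores = [evaluate(m) for m in masks]

# Surrogate-guided rounds: fit on (mask, accuracy) pairs, then evaluate
# only the most promising of 50 cheap random candidate masks per round.
surrogate = RandomForestRegressor(random_state=0)
for _ in range(5):
    surrogate.fit(np.array(masks), scores)
    candidates = rng.integers(0, 2, (50, n_features))
    best = candidates[int(np.argmax(surrogate.predict(candidates)))]
    masks.append(best)
    scores.append(evaluate(best))

best_mask = masks[int(np.argmax(scores))]
```

A full Bayesian optimization would replace the greedy pick with a proper acquisition function (e.g. expected improvement) that also uses the surrogate's uncertainty, which is where the Gaussian-process variant mentioned above comes in.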
    • Praktikumsbericht: Toolentwicklung zur Bearbeitung und Analyse von High-Speed-Videoaufnahmen  + (During the internship, as part of the further development of a comprehensive software environment for video editing and the synchronization of engine data, my tasks consisted of familiarizing myself with the MATLAB environment and then with the existing software, in order to refresh it in many respects and extend it with new functions.)
    • Development of an Approach to Describe and Compare Simulators  + (The goal of this thesis is the description of simulators and their comparison. To describe simulators, it is necessary to identify the elements that, taken together, enable a complete description of a simulator. Based on the description, comparison methods are then developed so that described simulators can be compared with one another. The comparison serves to determine the similarity of simulators. Since similarity between simulators cannot be defined universally, defining and describing these similarity measures is also part of this thesis. The focus of this work is on discrete event-oriented simulators. The overarching goal is to find simulators again in already existing simulations in order to enable reuse; the comparison methods are therefore developed such that parts of simulations can also be found again. The developed tool DesComp implements both the description and the procedures required for comparing simulators. To evaluate the suitability of the developed procedures, a case study is carried out on the EventSim simulator.)
    • Vorhersage und Optimierung von Konzernsteuerquoten auf Basis der SEC-Edgar-Datenbank  + (The goal of this thesis is to predict corporate group tax rates (effective tax rate, ETR) using data-mining prediction models. Analyzing and comparing ETRs can improve competitiveness and thus plays a major role in corporate planning in the coming years. A prerequisite for successfully training the models is a reliable basis of example data from real key performance indicators (KPIs) of annual reports. The SEC EDGAR database (Electronic Data Gathering, Analysis, and Retrieval) offers such a data basis: all documents filed since 1994 are accessible through it. An SEC filing is a formal, standardized document that American companies have had to submit to the SEC since the Securities Exchange Act of 1934. Following the tax expert's specification, the KPIs are extracted with a Python script over several years (2012-2016). Because of inconsistent uploading by the companies, the 10-K reports are very sparse (more than 60% of the data is missing); such random sparsity arises from irregular or hard-to-predict filing. A proven approach for the missing values is online sparse linear regression (OSLR), which on the one hand fills in the missing values and on the other hand remedies deficiencies in data quality. The thesis investigates the applicability of multiple linear regression, time series analysis (ARIMA), and artificial neural networks (ANN) in the field of financial reporting, with the ETR taken into account. These data-mining methods are then used to predict the ETR from control variables.)
    • Modellierung und Export von Multicore-Eigenschaften für Simulationen während der Steuergeräteentwicklung für Fahrzeuge  + (Future automotive applications, such as autonomous driving or the ongoing electrification of vehicles, result in a constantly growing number of functions and an ever greater demand for computing power in electronic control units. To make such applications possible, the development of safety-critical, real-time-capable embedded systems has led to processors with multiple cores (multicore processors). On the one hand, this reduces the complexity of the network within the vehicle; however, it increases the complexity of both the hardware architecture of the control unit and the software architecture, owing to the timing behavior of the system, shared resource usage, shared memory access, etc. This also creates new requirements for the tools in the development process for multicore systems. To design a seamless toolchain for this development process, it must be possible at an early phase of function development to model the required multicore properties of the system so that they can be evaluated later.)
    • Multiwort-Bedeutungsaufösung für Anforderungen  + (To automatically generate traceability information, the intent of the requirements must first be understood. The basic prerequisite for this is understanding the meanings of the words within requirements. Although classical word sense disambiguation systems already exist for this purpose, they mostly operate only at the word level and ignore so-called multiword expressions (MWEs), whose meaning differs from the meaning of their individual component words. Within the INDIRECT project, a system is therefore being developed that recognizes MWEs using a linear-chain conditional random field and then performs knowledge-based sense disambiguation with the knowledge bases DBpedia and WordNet 3.1. To evaluate the system, a data set is created from freely available requirements. The subsystem for MWE recognition achieves an F1 score of at most 0.81. Sense disambiguation with DBpedia achieves an F1 score of at most 0.496; with WordNet 3.1, an F1 score of at most 0.547 is achieved.)
    • Test-Vortrag  + (erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet,)
    • Patrick Deubel  + (forthcoming)