Search by property

This page provides a simple search interface for finding entities that have a property with a particular value. Other available search interfaces are the property search and the ask query builder.


A list of all pages that have the property "Kurzfassung" with the value "TBA". Since only a few results were found, similar values are also listed.

Below are 12 results, starting with #1.

List of results

  • GUI-basiertes Testen einer Lernplattform-Anwendung durch Nutzung von Neuroevolution  + (Software testing is necessary to ensure the quality and functionality of software artifacts. There are both automated and manual testing approaches. However, automated approaches as well as human and script-based testing scale poorly in terms of time and cost. Monkey testing, which is characterized by random clicks on the user interface, often does not sufficiently take the application logic into account. This bachelor's thesis focuses on an automated neuroevolutionary testing approach that uses neural networks as test agents and improves them over several generations by means of evolutionary algorithms. To enable the training of the agents and a comparison with monkey testing, a simulated version of the learning platform Anki was implemented. A reward structure was developed in the simulated application to assess the test agents. The results show that the neuroevolutionary testing approach performs significantly better than monkey testing in terms of the rewards achieved, and thus takes the application logic into account more effectively during testing.)
  • Entity Linking für Softwarearchitekturdokumentation  + (Software architecture documentation contains technical terms from the software engineering domain. Finding these terms and linking them to the matching entries in a database allows humans and text-processing systems to use this information to better understand the documentation. The technical terms in the documentation correspond to entity mentions in the text. In this work we present our domain-specific entity linking system. The system links entity mentions within software architecture documentation to the corresponding entities in a knowledge base. It comprises a domain-specific knowledge base, a preprocessing module, and the entity linking component.)
  • Entwicklung einer Entwurfszeit-DSL zur Formalisierung von Runtime Adaptationsstrategien für SAS zum Zweck der Strategie-Optimierung  + (Today's software systems are becoming increasingly complex and are subject to ever more varying conditions. Self-adaptive systems are therefore gaining importance, as they can adapt to new conditions dynamically by making changes to themselves. Domain-specific modeling languages (DSLs) for formalizing adaptation strategies are an important means of modeling and optimizing the design of feedback loops in self-adaptive software systems. This proposal outlines a bachelor's thesis that addresses the question of how the optimization of adaptation strategies can be described in a DSL at design time.)
  • Preventing Code Insertion Attacks on Token-Based Software Plagiarism Detectors  + (Some students tasked with mandatory programming assignments lack the time or dedication to solve the assignment themselves. Instead, they plagiarize a peer's solution by slightly modifying the code. However, numerous tools exist that assist in detecting these kinds of plagiarism. Instructors can use these tools to identify plagiarized programs. The most widely used type of plagiarism detection tool is the token-based plagiarism detector. Such detectors are resilient against many types of obfuscation attacks, such as renaming variables or whitespace modifications. However, they are susceptible to the insertion of lines of code that do not affect the program flow or result. The working assumption used to be that successfully obfuscating plagiarism takes more effort and skill than solving the assignment itself. This assumption was broken by automated plagiarism generators, which exploit this weakness. This work aims to develop mechanisms against code insertions that can be integrated directly into existing token-based plagiarism detectors. To this end, we first develop mechanisms to negate the negative effect of many types of code insertion. We then implement these mechanisms prototypically in a state-of-the-art plagiarism detector. We evaluate our implementation by running it on a dataset consisting of real student submissions and automatically generated plagiarism. We show that with our mechanisms, the similarity rating of automatically generated plagiarism increases drastically. Consequently, the plagiarism generator we use fails to create usable plagiarisms.)
  • Software Plagiarism Detection on Intermediate Representation  + (Source code plagiarism is a widespread problem in computer science education. To counteract this, software plagiarism detectors can help identify plagiarized code. Most state-of-the-art plagiarism detectors are token-based. To support a new programming language, it is common to design and implement a new dedicated language module. This process can be time-consuming; furthermore, it is unclear whether it is even necessary. In this thesis, we evaluate the necessity of dedicated language modules for Java and C/C++ and derive conclusions for designing new ones. To achieve this, we create a language module for the intermediate representation of LLVM. For the evaluation, we compare it to two existing dedicated language modules in JPlag. While our results show that dedicated language modules are better for plagiarism detection, language modules for intermediate representations show better resilience to obfuscation attacks.)
  • Portables Auto-Tuning paralleler Anwendungen  + (Both offline and online tuning are common approaches for automatically optimizing parallel applications. Each has its own advantages and drawbacks: offline tuning has minimal negative impact on the application's runtime, but the tuned parameter values are only usable on hardware that is known in advance. Online tuning, in contrast, provides dynamic parameter values that are determined while the application runs and thus on the target hardware; this, however, can negatively affect the application's runtime. We attempt to combine the advantages of both approaches by reusing parameter configurations that were optimized in advance on the target hardware, possibly even with a different application. We evaluate both the hardware and the application portability of these configurations using five example applications.)
  • DomainML: A modular framework for domain knowledge-guided machine learning  + (Standard, data-driven machine learning approaches learn relevant patterns solely from data. In some fields, however, learning only from data is not sufficient. A prominent example is healthcare, where the problem of data insufficiency for rare diseases is tackled by integrating high-quality domain knowledge into the machine learning process. Despite the existing work in the healthcare context, making general observations about the impact of domain knowledge is difficult, as different publications use different knowledge types, prediction tasks and model architectures. It further remains unclear whether the findings in healthcare are transferable to other use cases, and how much intellectual effort this requires. With this thesis we introduce DomainML, a modular framework to evaluate the impact of domain knowledge on different data science tasks. We demonstrate the transferability and flexibility of DomainML by applying the concepts from healthcare to cloud system monitoring. We then observe how domain knowledge impacts the model's prediction performance across both domains, and suggest how DomainML could further be used to refine both the given domain knowledge and the quality of the underlying dataset.)
  • State of the Art: Multi Actor Behaviour and Dataflow Modelling for Dynamic Privacy  + (State-of-the-art talk given as part of Praxis der Forschung.)
  • Data-Preparation for Machine-Learning Based Static Code Analysis  + (Static Code Analysis (SCA) has become an integral part of modern software development, especially since the rise of automation in the form of CI/CD. It is an ongoing question how machine learning can best help improve the state of SCA and thus facilitate maintainable, correct, and secure software. However, machine learning needs a solid foundation to learn from. This thesis proposes an approach to build that foundation by mining data on software issues from real-world code. We show how we used that concept to analyze over 4000 software packages and generate over two million issue samples. Additionally, we propose a method for refining this data and apply it to an existing machine learning SCA approach.)
  • Creating Study Plans by Generating Workflow Models from Constraints in Temporal Logic  + (Students are confronted with a huge amount of regulations when planning their studies at a university. It is challenging for them to create a personalized study plan while still complying with all official rules. The STUDYplan software aims to overcome these difficulties by enabling an intuitive and individual modeling of study plans. To make use of existing work from the business process domain, a study plan can be interpreted as a sequence of business process tasks that represent courses. This thesis focuses on the idea of synthesizing business process models from declarative specifications that encode official and user-defined regulations for a study plan. We provide an elaborated approach for modeling study plan constraints and a generation concept specialized to study plans. This work motivates, discusses, partially implements and evaluates the proposed approach.)
  • A comparative study of subgroup discovery methods  + (Subgroup discovery is a data mining technique that is used to extract interesting relationships in a dataset related to a target variable. These relationships are described in the form of rules. Multiple subgroup discovery techniques have been developed over the years. This thesis establishes a comparative study between a number of these techniques in order to identify the state-of-the-art methods. It also analyses the effects discretization has on them as a preprocessing step. Furthermore, it investigates the effect of hyperparameter optimization on these methods. Our analysis showed that PRIM, DSSD, Best Interval and FSSD outperformed the other subgroup discovery methods evaluated in this study and are to be considered state-of-the-art. It also shows that discretization offers an efficiency improvement for methods that do not employ internal discretization, but has a negative impact on the quality of subgroups generated by methods that perform it internally. Finally, the results demonstrate that Apriori-SD and SD-Algorithm were the most positively affected by hyperparameter optimization.)
  • Preventing Refactoring Attacks on Software Plagiarism Detection through Graph-Based Structural Normalization  + (TBD)
  • Generation of Checkpoints for Hardware Architecture Simulators  + (TBD)
  • Konzept und Integration eines Deltachain Prototyps  + (TBD)
  • Data-Driven Approaches to Predict Material Failure and Analyze Material Models  + (The prediction of material failure is useful in many industrial contexts such as predictive maintenance, where it helps reduce costs by preventing outages. However, failure prediction is a complex task. Typically, material scientists need to create a physical material model to run computer simulations. In real-world scenarios, the creation of such models is often not feasible, as the measurement of exact material parameters is too expensive. Material scientists can use material models to generate simulation data. These data sets are multivariate sensor value time series. In this thesis we develop data-driven models to predict upcoming failure of an observed material. We identify and implement recurrent neural network architectures, as recent research indicated that these are well suited for predictions on time series. We compare the prediction performance with traditional models that do not predict directly on time series but involve an additional step of feature calculation. Finally, we analyze the predictions to find abstractions in the underlying material model that lead to unrealistic simulation data and thus impede accurate failure prediction. Knowing such abstractions empowers material scientists to refine the simulation models. The updated models would then contain more relevant information and make failure prediction more precise.)
  • Improving SAP Document Information Extraction via Pretraining and Fine-Tuning  + (Techniques for extracting relevant information from documents have made significant progress in recent years and have become a key task in the digital transformation. With deep neural networks, it became possible to process documents without specifying hard-coded extraction rules or templates for each layout. However, such models typically have a very large number of parameters. As a result, they require many annotated samples and long training times. One solution is to create a basic pretrained model using self-supervised objectives and then fine-tune it on a smaller document-specific annotated dataset. However, implementing and controlling the pretraining and fine-tuning procedures in a multi-modal setting is challenging. In this thesis, we propose a systematic method that consists of pretraining the model on large unlabeled data and then fine-tuning it with a virtual adversarial training procedure. For the pretraining stage, we implement an unsupervised informative masking method, which improves upon standard Masked-Language Modelling (MLM). In contrast to randomly masking tokens as in MLM, our method exploits Point-Wise Mutual Information (PMI) to calculate individual masking rates based on statistical properties of the data corpus, e.g., how often certain tokens appear together on a document page. We test our algorithm in a typical business context at SAP and report an overall improvement of 1.4% on the F1-score for extracted document entities. Additionally, we show that the implemented methods improve the training speed, robustness and data-efficiency of the algorithm.) A minimal sketch of the PMI-based masking idea appears after the result list.
  • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Grams  + (Temporal text corpora like the Google Ngram dataset usually incorporate a vast number of words and expressions, called ngrams, together with their respective usage frequencies over the years. The large quantity of entries complicates working with the dataset, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus in order to have a sufficient dataset and achieve acceptable results. We propose various compression methods to reduce the absolute number of ngrams in the corpus. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram dataset serve as the basis for our compression method design and experimental validation. The goal is to find compression methods that reduce the complexity of queries on the corpus while still maintaining good results.)
  • Analyse von Zeitreihen-Kompressionsmethoden am Beispiel von Google N-Gram  + (Temporal text corpora like the Google Ngram Data Set usually incorporate a vast number of words and expressions, called ngrams, together with their respective usage frequencies over the years. The large quantity of entries complicates working with the data set, as transformations and queries are resource and time intensive. However, many use cases do not require the whole corpus in order to have a sufficient data set and achieve acceptable query results. We propose various compression methods to reduce the total number of ngrams in the corpus. Specifically, we propose compression methods that, given an input dictionary of target words, find a compression tailored for queries on a specific topic. Additionally, we utilize time-series compression methods for quick estimations about the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram Data Set serve as the basis for our compression method design and experimental validation.)
  • Implementation and Evaluation of CHQL Operators in Relational Database Systems  + (The IPD defined CHQL, a query algebra that makes it possible to formalize queries about conceptual history. CHQL is currently implemented in MapReduce, which offers less flexibility for query optimization than relational database systems do. The scope of this thesis is to implement the given operators in SQL and to analyze performance differences by identifying limiting factors and by optimizing queries on the logical and physical level. In the end, we provide efficient query plans and fast operator implementations to execute CHQL queries in relational database systems.)
  • The Kconfig Variability Framework as a Feature Model  + (The Kconfig variability framework is used to develop highly variable software such as the Linux kernel, ZephyrOS and NuttX. Kconfig allows developers to break their software down into modules and to define the dependencies between these modules, so that when a concrete configuration is created, the semantic dependencies between the selected modules are fulfilled, ensuring that the resulting software product can function. Kconfig has often been described as a tool to define software product lines (SPLs), which often occur within the context of feature-oriented programming (FOP). In this paper, we introduce methods to transform Kconfig files into feature models so that the semantics of the model defined in a Kconfig file are preserved. The resulting feature models can be viewed with FeatureIDE, which allows further analysis of the Kconfig file, such as the detection of redundant and cyclic dependencies.)
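
The last entry above describes transforming Kconfig files into feature models. The fragment below is a purely illustrative, hedged sketch (not the thesis' actual method): it reads only a tiny subset of Kconfig, namely "config" entries and "depends on" lines, into a flat name-to-dependency map. Real Kconfig additionally has "choice", "select", "if" blocks and boolean expressions, all of which a semantics-preserving transformation must handle.

    # Hypothetical sketch, not the thesis' implementation: read a tiny subset of
    # Kconfig ("config" entries and "depends on" lines) into a flat feature map.
    import re

    KCONFIG_EXAMPLE = """
    config USB
        bool "USB support"

    config USB_STORAGE
        bool "USB mass storage"
        depends on USB
    """

    def parse_kconfig(text):
        """Return {feature name: list of raw 'depends on' expressions}."""
        features, current = {}, None
        for line in text.splitlines():
            config_match = re.match(r"\s*config\s+(\w+)", line)
            if config_match:
                current = config_match.group(1)
                features[current] = []
                continue
            depends_match = re.match(r"\s*depends on\s+(.+)", line)
            if depends_match and current is not None:
                features[current].append(depends_match.group(1).strip())
        return features

    print(parse_kconfig(KCONFIG_EXAMPLE))
    # {'USB': [], 'USB_STORAGE': ['USB']}

A map like this could then be handed to a feature-model exporter; the thesis itself targets FeatureIDE models, which this sketch does not produce.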
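
The entry "Improving SAP Document Information Extraction via Pretraining and Fine-Tuning" above mentions deriving per-token masking rates from Point-Wise Mutual Information over page-level co-occurrence statistics. The following is a minimal sketch of that idea under stated assumptions: the corpus format, the per-token averaging of pairwise PMI, and the linear mapping onto a masking-rate interval are all illustrative choices, not the thesis' actual procedure.

    # Hypothetical sketch of PMI-derived masking rates; corpus format, PMI averaging
    # and the rate mapping are illustrative assumptions, not the thesis' procedure.
    import math
    from collections import Counter
    from itertools import combinations

    def pmi_masking_rates(pages, min_rate=0.05, max_rate=0.30):
        """pages: list of token lists, one list per document page."""
        n_pages = len(pages)
        token_pages = Counter()   # number of pages in which a token occurs
        pair_pages = Counter()    # number of pages in which a token pair co-occurs
        for tokens in pages:
            unique = sorted(set(tokens))
            token_pages.update(unique)
            pair_pages.update(combinations(unique, 2))

        # PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ), estimated at page level;
        # each token gets the average PMI over all pairs it participates in.
        pmi_sum, pmi_count = Counter(), Counter()
        for (a, b), n_ab in pair_pages.items():
            pmi = math.log(n_ab * n_pages / (token_pages[a] * token_pages[b]))
            for token in (a, b):
                pmi_sum[token] += pmi
                pmi_count[token] += 1
        avg_pmi = {t: pmi_sum[t] / pmi_count[t] for t in pmi_count}

        # Map average PMI linearly onto [min_rate, max_rate]; tokens that never
        # co-occur with another token are simply left out of the mapping here.
        lo, hi = min(avg_pmi.values()), max(avg_pmi.values())
        span = (hi - lo) or 1.0
        return {t: min_rate + (max_rate - min_rate) * (v - lo) / span
                for t, v in avg_pmi.items()}

    rates = pmi_masking_rates([["invoice", "total", "eur"],
                               ["invoice", "date"],
                               ["total", "eur"]])
    print(rates)  # per-token masking probabilities in [0.05, 0.30]

In an MLM-style pretraining loop, such per-token rates would replace the single uniform masking probability when sampling which tokens to mask.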