Lesegruppe/2022-01-12

From SDQ-Wiki
Date 2022/01/12, 12:00 – 13:00
Location Teams
Speaker Tobias Hey
Research group
Title Information retrieval versus deep learning approaches for generating traceability links in bilingual projects
Authors Jinfeng Lin, Yalin Liu, Jane Cleland-Huang
PDF https://link.springer.com/content/pdf/10.1007/s10664-021-10050-0.pdf
URL https://doi.org/10.1007/s10664-021-10050-0
BibTeX https://citation-needed.springer.com/v2/references/10.1007/s10664-021-10050-0?format=bibtex&flavour=citation
Abstract Software traceability links are established between diverse artifacts of the software development process to support tasks such as compliance analysis, safety assurance, and requirements validation. However, practice has shown that it is difficult and costly to create and maintain trace links in non-trivially sized projects. For this reason, many researchers have proposed and evaluated automated approaches based on information retrieval and deep learning. Generating trace links automatically can also be challenging – especially in multi-national projects whose artifacts are written in multiple languages. The intermingled language use can reduce the efficiency of automated tracing solutions. In this work, we analyze patterns of intermingled language observed in several different projects and then comparatively evaluate different tracing algorithms. These include information retrieval techniques such as the Vector Space Model (VSM), Latent Semantic Indexing (LSI), and Latent Dirichlet Allocation (LDA); various models that combine mono- and cross-lingual word embeddings with the Generative Vector Space Model (GVSM); and a deep-learning approach based on a BERT language model. Our experimental analysis of trace links generated for 14 Chinese-English projects indicates that our MultiLingual Trace-BERT approach performed best in large projects, with close to twice the accuracy of the best IR approach, while the IR-based GVSM with neural machine translation and a monolingual word embedding performed best on small projects.
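As background for the discussion: the VSM baseline the paper compares against ranks candidate trace links by the cosine similarity of TF-IDF vectors of artifact texts. A minimal sketch of that idea, with entirely hypothetical artifact texts (the paper's actual corpora are bilingual Chinese-English projects and are not reproduced here):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical artifacts: one requirement (index 0) and two code comments
artifacts = [
    "the system shall encrypt user data before storage".split(),
    "encrypt data before writing to storage backend".split(),
    "render the login page header".split(),
]
vecs = tfidf_vectors(artifacts)
# Rank candidate targets for the requirement by similarity, best first
links = sorted(((cosine(vecs[0], vecs[j]), j) for j in (1, 2)), reverse=True)
```

Here the encryption-related comment outranks the unrelated one as a trace target for the requirement. GVSM-style variants replace the exact-term match inside the dot product with word-embedding similarity, which is what lets the paper's cross-lingual setups relate Chinese and English terms.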