Analysis of Time-Series Compression Methods Using the Example of Google N-Gram


Speaker: Jonas Bernhard
Talk type: Bachelor's thesis
Advisor: Martin Schäler
Date: Fri, 28 February 2020
Talk mode:
Abstract: Temporal text corpora like the Google Ngram Data Set usually incorporate a vast number of words and expressions, called ngrams, and their respective usage frequencies over the years. The large number of entries makes working with the data set difficult, as transformations and queries are resource- and time-intensive. However, many use cases do not require the whole corpus: a reduced data set is often sufficient and still achieves acceptable query results. We propose several compression methods that reduce the total number of ngrams in the corpus. Specifically, we propose compression methods that, given an input dictionary of target words, produce a compression tailored to queries on a specific topic. Additionally, we employ time-series compression methods for quick estimates of the properties of ngram usage frequencies. CHQL (Conceptual History Query Language) queries on the Google Ngram Data Set serve as the basis for our compression method design and experimental validation.
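
The abstract describes two techniques only in general terms: a dictionary-driven filter that keeps ngrams relevant to a topic, and time-series compression for quick estimates of usage-frequency properties. As a rough illustration, the Python sketch below combines a naive dictionary filter with Piecewise Aggregate Approximation (PAA), a standard time-series compression method chosen here purely as an example; the page does not name the thesis's concrete methods, and all function names and data below are hypothetical.

# Illustrative sketch only -- not the thesis implementation. It shows
# (1) a naive dictionary filter over an ngram corpus and (2) Piecewise
# Aggregate Approximation (PAA) as one example of time-series compression.

def dictionary_filter(corpus, target_words):
    # Keep only ngrams that contain at least one of the target words.
    return {ngram: series
            for ngram, series in corpus.items()
            if any(word in ngram.split() for word in target_words)}

def paa(series, segments):
    # PAA: replace the series by the mean of each of `segments`
    # (roughly) equally sized chunks.
    n = len(series)
    bounds = [i * n // segments for i in range(segments + 1)]
    return [sum(series[lo:hi]) / (hi - lo)
            for lo, hi in zip(bounds, bounds[1:])]

# Hypothetical corpus: ngram -> yearly relative usage frequencies.
corpus = {
    "liberty bell": [0.1, 0.2, 0.4, 0.3, 0.2, 0.1, 0.1, 0.2],
    "steam engine": [0.5, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1],
    "random noise": [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1],
}

filtered = dictionary_filter(corpus, target_words={"liberty", "engine"})
compressed = {ngram: paa(series, segments=4) for ngram, series in filtered.items()}
print(compressed)
# Up to float rounding: {'liberty bell': [0.15, 0.35, 0.15, 0.15],
#                        'steam engine': [0.55, 0.35, 0.2, 0.1]}

PAA serves here merely as a plausible stand-in: averaging fixed segments of a yearly frequency series yields cheap summaries of the kind of properties (level, coarse trend) that the abstract's "quick estimations" allude to.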