Search by property

This page provides a simple search interface for finding objects that have a property with a particular value. Other available search interfaces are the property search and the query builder.


A list of all pages that have the property "Kurzfassung" (German for "abstract") with the value "Conventional evaluation of an ML classifier uses test data to estimate its expected loss. For "cognitive" ML tasks like image or text classification, this requires that experts annotate a large and representative test data set, which can be expensive. In this thesis, we explore another approach for estimating the expected loss of an ML classifier. The aim is to enhance test data with additional expert knowledge. Inspired by recent feature attribution techniques, such as LIME or Saliency Maps, the idea is that experts annotate inputs not only with desired classes, but also with desired feature attributions. We then explore different methods to derive a large conventional test data set based on few such feature attribution annotations. We empirically evaluate the loss estimates of our approach against ground-truth estimates on large existing test data sets, with a focus on the tradeoff between the number of expert annotations and the achieved estimation accuracy." Since only a few results were found, similar values are also listed.

Here are 2 results, starting with number 1.

List of results

    • Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Proposal)  + (Conventional evaluation of an ML classifier uses test data to estimate its expected loss. For "cognitive" ML tasks like image or text classification, this requires that experts annotate a large and representative test data set, which can be expensive. In this thesis, we explore another approach for estimating the expected loss of an ML classifier. The aim is to enhance test data with additional expert knowledge. Inspired by recent feature attribution techniques, such as LIME or Saliency Maps, the idea is that experts annotate inputs not only with desired classes, but also with desired feature attributions. We then explore different methods to derive a large conventional test data set based on few such feature attribution annotations. We empirically evaluate the loss estimates of our approach against ground-truth estimates on large existing test data sets, with a focus on the tradeoff between the number of expert annotations and the achieved estimation accuracy.)
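
For context, the "conventional evaluation" that the abstract above contrasts against estimates a classifier's expected loss as the average loss over a labeled test set. Below is a minimal illustrative sketch of that baseline under the 0-1 loss; all names here (e.g. estimate_expected_loss, predict_proba) are assumptions chosen for illustration, not taken from the thesis.

```python
# Minimal sketch (not from the thesis): conventional evaluation estimates the
# expected loss of a classifier as the empirical average loss on test data.
import numpy as np

def estimate_expected_loss(predict_proba, X_test, y_test):
    """Empirical estimate of the expected 0-1 loss from labeled test data."""
    probs = predict_proba(X_test)             # shape: (n_samples, n_classes)
    y_pred = np.argmax(probs, axis=1)         # predicted class per sample
    return float(np.mean(y_pred != y_test))   # fraction of misclassifications

# Usage with a toy "classifier" that always predicts class 0:
X_test = np.zeros((4, 2))
y_test = np.array([0, 0, 1, 1])
loss = estimate_expected_loss(
    lambda X: np.tile([0.9, 0.1], (len(X), 1)), X_test, y_test
)
print(loss)  # 0.5 -- half of the test labels differ from the prediction
```

The accuracy of this estimate grows with the size of the annotated test set, which is exactly the expert-annotation cost that the proposed feature-attribution approach aims to reduce.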