Cost-Efficient Evaluation of ML Classifiers With Feature Attribution Annotations (Final BA Presentation)

From IPD-Institutsseminar
Speaker: Nobel Liaw
Talk type: Bachelor's thesis
Advisor: Moritz Renftle
Date: Fri, 25 June 2021
Abstract: To accurately estimate the loss of cognitive ML models, e.g., text or image classifiers, one usually needs a large amount of test data annotated manually by experts. For the estimate to be accurate, the test data should be representative; otherwise it is hard to assess whether a model overfits, i.e., whether it relies significantly on spurious features of the images for its predictions. With techniques such as feature attribution, one can compare the features the model considers important with one's own expectations, and thereby gain confidence about whether to trust the model. In this work, we propose a method that estimates the loss of image classifiers based on feature attribution techniques. We use the classic approach to loss estimation as the benchmark for evaluating our proposed method. Our analysis reveals that, given a good image classifier and representative test data, our proposed method yields a loss estimate similar to that of the classic approach. Based on our experiments, we expect that our method could give a better loss estimate than the classic approach when the test data is biased and the image classifier overfits.
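The two ingredients mentioned in the abstract can be sketched in a few lines: the classic benchmark is simply the mean loss over an annotated test set, and the attribution comparison measures how well the model's important features overlap with an expert's expectations. This is a minimal illustrative sketch assuming 0/1 loss and binary attribution masks; the function names are hypothetical, and the thesis's actual estimator is not reproduced here.

```python
import numpy as np

def classic_loss_estimate(y_true, y_pred):
    """Classic benchmark: mean 0/1 loss over a manually annotated test set."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def attribution_overlap(attribution_mask, expert_mask):
    """Intersection-over-union between the pixels a feature-attribution
    method highlights and the pixels an expert marks as relevant.
    Low overlap suggests the model relies on spurious features."""
    a = np.asarray(attribution_mask, dtype=bool)
    e = np.asarray(expert_mask, dtype=bool)
    union = np.logical_or(a, e).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially consistent
    return float(np.logical_and(a, e).sum() / union)

# Example: two of four predictions wrong -> classic estimate 0.5;
# attribution and expert masks agree on one of two marked pixels -> overlap 0.5.
print(classic_loss_estimate([1, 0, 1, 1], [1, 1, 1, 0]))
print(attribution_overlap([[1, 1], [0, 0]], [[1, 0], [0, 0]]))
```

A per-example overlap score like this is one plausible way to flag test images on which the classifier's loss estimate should be distrusted, which is the intuition behind combining attribution annotations with loss estimation.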