https://sdq.kastel.kit.edu/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&feed=atom&action=history
Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy - Revision history
2024-03-29T09:20:02Z
Revision history of this page in SDQ-Institutsseminar
MediaWiki 1.39.6
https://sdq.kastel.kit.edu/mediawiki-institutsseminar/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&diff=1751&oldid=prev
Kw5266 on 10 August 2021 at 11:16
2021-08-10T11:16:22Z
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="de">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 10 August 2021, 12:16</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l4">Line 4:</td>
<td colspan="2" class="diff-lineno">Line 4:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|vortragstyp=Bachelorarbeit</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|vortragstyp=Bachelorarbeit</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-<del style="font-weight: bold; text-decoration: none;">06</del>-<del style="font-weight: bold; text-decoration: none;">11</del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-<ins style="font-weight: bold; text-decoration: none;">08</ins>-<ins style="font-weight: bold; text-decoration: none;">20</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=Explainable artificial intelligence (XAI) offers a reasoning behind a model's behavior.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=Explainable artificial intelligence (XAI) offers a reasoning behind a model's behavior.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For many explainers this proposed reasoning gives us more information about </div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For many explainers this proposed reasoning gives us more information about </div></td></tr>
</table>
Kw5266
https://sdq.kastel.kit.edu/mediawiki-institutsseminar/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&diff=1750&oldid=prev
Kw5266 on 10 August 2021 at 11:15
2021-08-10T11:15:48Z
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="de">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 10 August 2021, 12:15</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l2">Line 2:</td>
<td colspan="2" class="diff-lineno">Line 2:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|vortragender=Martin Lange</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|vortragender=Martin Lange</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|email=martin.lange@student.kit.edu</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|email=martin.lange@student.kit.edu</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>|vortragstyp=<del style="font-weight: bold; text-decoration: none;">Proposal</del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>|vortragstyp=<ins style="font-weight: bold; text-decoration: none;">Bachelorarbeit</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=<del style="font-weight: bold; text-decoration: none;">Explainers for machine learning models help humans and models work together. They build trust in </del>a model's <del style="font-weight: bold; text-decoration: none;">decision by giving further insight into </del>the <del style="font-weight: bold; text-decoration: none;">decision making process</del>. <del style="font-weight: bold; text-decoration: none;">However, it </del>is <del style="font-weight: bold; text-decoration: none;">unclear </del>whether <del style="font-weight: bold; text-decoration: none;">this insight </del>can <del style="font-weight: bold; text-decoration: none;">also expose </del>private <del style="font-weight: bold; text-decoration: none;">information</del>. <del style="font-weight: bold; text-decoration: none;">The question </del>of <del style="font-weight: bold; text-decoration: none;">my </del>thesis <del style="font-weight: bold; text-decoration: none;">is whether there exists a conflict of objectives between explainability and </del>privacy <del style="font-weight: bold; text-decoration: none;">and how </del>to <del style="font-weight: bold; text-decoration: none;">measure </del>the <del style="font-weight: bold; text-decoration: none;">effects </del>of <del style="font-weight: bold; text-decoration: none;">this conflict.</del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=<ins style="font-weight: bold; text-decoration: none;">Explainable artificial intelligence (XAI) offers a reasoning behind </ins>a model's <ins style="font-weight: bold; text-decoration: none;">behavior.</ins></div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div> </div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">For many explainers this proposed reasoning gives us more information about </ins></div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">I propose two different possible types of attack that can be applied against explainers: </del>model extraction and <del style="font-weight: bold; text-decoration: none;">information about the </del>training data. <del style="font-weight: bold; text-decoration: none;">Differential privacy is introduced as a way to measure the privacy breach </del>of these <del style="font-weight: bold; text-decoration: none;">attacks. Finally, three </del>specific use cases <del style="font-weight: bold; text-decoration: none;">are presented where </del>explainers can <del style="font-weight: bold; text-decoration: none;">realistically </del>be <del style="font-weight: bold; text-decoration: none;">abused to breach differential privacy</del>.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">the inner workings of the model or even about </ins>the <ins style="font-weight: bold; text-decoration: none;">training data</ins>. <ins style="font-weight: bold; text-decoration: none;">Since data privacy </ins>is </div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">becoming an important issue the question arises </ins>whether <ins style="font-weight: bold; text-decoration: none;">explainers </ins>can <ins style="font-weight: bold; text-decoration: none;">leak </ins>private <ins style="font-weight: bold; text-decoration: none;">data</ins>.</div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">It is unclear what private data can be obtained from different kinds </ins>of <ins style="font-weight: bold; text-decoration: none;">explanation.</ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">In this </ins>thesis <ins style="font-weight: bold; text-decoration: none;">I adapt three </ins>privacy <ins style="font-weight: bold; text-decoration: none;">attacks in machine learning </ins>to the <ins style="font-weight: bold; text-decoration: none;">field </ins>of <ins style="font-weight: bold; text-decoration: none;">XAI: </ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>model extraction<ins style="font-weight: bold; text-decoration: none;">, membership inference </ins>and training data <ins style="font-weight: bold; text-decoration: none;">extraction</ins>. </div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">The different kinds </ins>of <ins style="font-weight: bold; text-decoration: none;">explainers are sorted into </ins>these <ins style="font-weight: bold; text-decoration: none;">categories argumentatively and I present </ins>specific use cases <ins style="font-weight: bold; text-decoration: none;">how an attacker can obtain private data from an </ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">explanation. I demonstrate membership inference and training data extraction for two specific </ins>explainers <ins style="font-weight: bold; text-decoration: none;">in experiments. Thus, privacy </ins>can be <ins style="font-weight: bold; text-decoration: none;">breached with the help of explainers</ins>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td></tr>
</table>
Kw5266
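The abstract in the revision above describes adapting membership inference to XAI: an attacker uses an explanation, rather than the model's output alone, to decide whether a record was in the training set. As a minimal illustrative sketch only (the toy model, the inverse-distance "attribution", and the threshold are all hypothetical and not taken from the thesis), the idea can be shown with a deliberately overfit model whose explanations blow up on memorized points:

```python
# Hypothetical sketch of membership inference against an explainer.
# The "model" memorizes its training points (extreme overfitting); the
# "explainer" returns per-feature attributions whose magnitude grows as a
# query approaches a memorized point. The attacker thresholds that
# magnitude to guess training-set membership.

import math


def train(points):
    # A memorizing toy model: just store the training data.
    return list(points)


def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def explain(model, x):
    # Toy attribution: weight the features of the nearest memorized point
    # by the inverse of its distance to the query. Near a training point,
    # the attribution magnitude explodes; far away, it stays small.
    nearest = min(model, key=lambda p: dist(p, x))
    weight = 1.0 / (dist(nearest, x) + 1e-6)
    return [weight * abs(f) for f in nearest]


def infer_membership(model, x, threshold=1000.0):
    # Large attribution magnitude => query coincides with a training point.
    return max(explain(model, x)) > threshold


model = train([(1.0, 2.0), (3.0, 4.0)])
print(infer_membership(model, (1.0, 2.0)))    # member: huge attribution
print(infer_membership(model, (10.0, 10.0)))  # non-member: small attribution
```

Real attacks work against far less pathological models, but the mechanism is the same: the explanation carries more information about the training data than the prediction alone, which is precisely the leakage the thesis examines.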
https://sdq.kastel.kit.edu/mediawiki-institutsseminar/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&diff=1689&oldid=prev
Uuwig on 8 June 2021 at 10:35
2021-06-08T10:35:25Z
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="de">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 8 June 2021, 11:35</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l5">Line 5:</td>
<td colspan="2" class="diff-lineno">Line 5:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=Explainers for machine learning models help humans and models work together. They build trust in a model's decision by giving further insight into the decision making process. However, it is unclear whether this insight can also expose private information. The question of <del style="font-weight: bold; text-decoration: none;">our </del>thesis is whether there exists a conflict of objectives between explainability and privacy and how <del style="font-weight: bold; text-decoration: none;">we </del>measure the effects of this conflict<del style="font-weight: bold; text-decoration: none;">. Specifically we are looking at local feature importance explainers</del>.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>|kurzfassung=Explainers for machine learning models help humans and models work together. They build trust in a model's decision by giving further insight into the decision making process. However, it is unclear whether this insight can also expose private information. The question of <ins style="font-weight: bold; text-decoration: none;">my </ins>thesis is whether there exists a conflict of objectives between explainability and privacy and how <ins style="font-weight: bold; text-decoration: none;">to </ins>measure the effects of this conflict.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">We </del>propose <del style="font-weight: bold; text-decoration: none;">a use case where the prediction </del>of <del style="font-weight: bold; text-decoration: none;">a </del>model <del style="font-weight: bold; text-decoration: none;">for a person is considered their private </del>data. <del style="font-weight: bold; text-decoration: none;">An attacker might be able </del>to <del style="font-weight: bold; text-decoration: none;">gain insight into </del>the <del style="font-weight: bold; text-decoration: none;">predictions for other people by abusing their own explanation to imitate the model's behavior</del>. <del style="font-weight: bold; text-decoration: none;">We will test this </del>use <del style="font-weight: bold; text-decoration: none;">case experimentally </del>to <del style="font-weight: bold; text-decoration: none;">determine whether such an attack is possible</del>.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">I </ins>propose <ins style="font-weight: bold; text-decoration: none;">two different possible types </ins>of <ins style="font-weight: bold; text-decoration: none;">attack that can be applied against explainers: </ins>model <ins style="font-weight: bold; text-decoration: none;">extraction and information about the training </ins>data. <ins style="font-weight: bold; text-decoration: none;">Differential privacy is introduced as a way </ins>to <ins style="font-weight: bold; text-decoration: none;">measure </ins>the <ins style="font-weight: bold; text-decoration: none;">privacy breach of these attacks</ins>. <ins style="font-weight: bold; text-decoration: none;">Finally, three specific </ins>use <ins style="font-weight: bold; text-decoration: none;">cases are presented where explainers can realistically be abused </ins>to <ins style="font-weight: bold; text-decoration: none;">breach differential privacy</ins>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td></tr>
</table>
Uuwig
https://sdq.kastel.kit.edu/mediawiki-institutsseminar/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&diff=1682&oldid=prev
Uuwig on 20 May 2021 at 10:31
2021-05-20T10:31:44Z
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="de">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 20 May 2021, 11:31</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l5">Line 5:</td>
<td colspan="2" class="diff-lineno">Line 5:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|betreuer=Clemens Müssener</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|termin=Institutsseminar/2021-06-11</div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">|kurzfassung=Explainers for machine learning models help humans and models work together. They build trust in a model's decision by giving further insight into the decision making process. However, it is unclear whether this insight can also expose private information. The question of our thesis is whether there exists a conflict of objectives between explainability and privacy and how we measure the effects of this conflict. Specifically we are looking at local feature importance explainers.</ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">We propose a use case where the prediction of a model for a person is considered their private data. An attacker might be able to gain insight into the predictions for other people by abusing their own explanation to imitate the model's behavior. We will test this use case experimentally to determine whether such an attack is possible.</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}</div></td></tr>
</table>
Uuwig
https://sdq.kastel.kit.edu/mediawiki-institutsseminar/index.php?title=Quantitative_Evaluation_of_the_Expected_Antagonism_of_Explainability_and_Privacy&diff=1676&oldid=prev
Kw5266: Created page with "{{Vortrag |vortragender=Martin Lange |email=martin.lange@student.kit.edu |vortragstyp=Proposal |betreuer=Clemens Müssener |termin=Institutsseminar/2021-06-11 }}"
2021-05-18T11:43:42Z
<p>Created page with "{{Vortrag |vortragender=Martin Lange |email=martin.lange@student.kit.edu |vortragstyp=Proposal |betreuer=Clemens Müssener |termin=Institutsseminar/2021-06-11 }}"</p>
<p><b>New page</b></p><div>{{Vortrag<br />
|vortragender=Martin Lange<br />
|email=martin.lange@student.kit.edu<br />
|vortragstyp=Proposal<br />
|betreuer=Clemens Müssener<br />
|termin=Institutsseminar/2021-06-11<br />
}}</div>
Kw5266