Fiorella Battaglia (LMU, Munich)
When: 17.06.2021, 5:00 PM
Where: The seminar will be held on Zoom at this link:
https://tufts.zoom.us/j/95065406997?pwd=MURrWGZDR1RlZnViallNaVlkOVQ3Zz09
Meeting ID: 950 6540 6997
Passcode: 674726
Predictive Algorithms and Epistemic Injustice
It is well acknowledged that decision support systems that hide their internal logic from the user raise both technical and ethical issues. It is less acknowledged that predictive decision support systems that guess the propositional attitudes of individuals might undermine those individuals' first-person authority. The subject is wronged in their capacity as a knower, and this is thus an issue of epistemic injustice arising from the introduction of decision systems into almost every domain of our social interactions.
The aim of this talk is to broaden the concept of epistemic injustice and to apply it to the debate on the ethics of AI, with a view to ensuring a comprehensive assessment of these new technologies. Extending the concept of epistemic injustice to this field is a precondition for accurately addressing the ethical assessment of predictive models.
Furthermore, I will argue that it is also in the interest of the machine-learning and data-mining communities that these questions not remain unaddressed.