Sociotechnical case study
AI Interpretability and Accountability in the Humanitarian Sector
Investigates pitfalls in the use of machine learning in the humanitarian sector
Technical | Design | Socio–
Dataset | Model | Application
This subproject deals with algorithmic systems deployed as part of conflict prediction efforts, that is, practices that aim to predict and pre-empt social conflict using statistical techniques such as machine learning. It aims to understand how social conflicts are productive of the very machines meant to pre-empt them, and to assess the techniques and logics by which, and the extent to which, these practices govern subjects. The thesis deals specifically with the history and current forms of conflict prediction in the humanitarian sector; it researches two challenges to these practices along the axes of:
- Interpretability (as per the AI Forensics project) and
- Accountability (as per the humanitarian sector's self-imposed ethical standards, articulated in principles such as ‘first do no harm’, and as per AI industry ethical standards).
The problematic of interpretability is tied to forensic methods being applied to predict future crimes rather than to explain past ones. The project places particular focus on the limits of technical interpretability, which scholars have discussed in terms of a ‘prediction-interpretation gap’.
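To make the ‘prediction-interpretation gap’ concrete, the sketch below is a generic illustration only (it is not part of the project and uses synthetic data and off-the-shelf scikit-learn models): an interpretable logistic regression exposes how it weights its input features, while a black-box gradient-boosted ensemble typically predicts more accurately but offers no comparably readable account of any individual forecast.

```python
# Hypothetical illustration of the "prediction-interpretation gap":
# a transparent linear model exposes its reasoning via coefficients,
# while a higher-capacity ensemble may predict better but provides no
# directly readable decision logic. Synthetic data only; this is not
# the project's actual model or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for tabular conflict indicators (e.g. event counts,
# economic and demographic features); labels mark a hypothetical "conflict onset".
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: each coefficient can be read as a feature's weight.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box model: often stronger predictions, but no per-feature
# explanation of any individual forecast.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:",
      accuracy_score(y_test, linear.predict(X_test)))
print("gradient boosting accuracy:  ",
      accuracy_score(y_test, boosted.predict(X_test)))
print("linear coefficients (readable):", linear.coef_[0][:5].round(2))
```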