Sociotechnical case study
AI Design Interventions for Social Diversity
How could new epistemologies and methodologies for AI systems design be developed based on critical insights from feminist and postcolonial science and technology studies?
Visual AI systems exhibit well-documented racial and gender biases [1]–[3]. Scholars have also pointed out the gendered, raced, classed and colonial dimensions both of AI development and of the effects of AI design and deployment [4]–[8], alongside calls for feminist, intersectional, postcolonial, decolonial and indigenous computing and AI [9]–[12]. The proliferation of such calls signals the need not only for a better understanding of how and where specific categorial and cultural biases appear in the AI pipeline, but also for the development of epistemologies and AI design methodologies that are explicitly oriented towards social justice and intersectional diversity.
This case study thus examines how "minor histories" of AI (i.e. less known or less canonized historical narratives) and critical intersectional epistemologies can provide a basis for new approaches to AI design and development, and how they can expand the utility of the AI forensics toolkit beyond single individuals or institutions to include disenfranchised communities and social justice work. Technically, the case study reflects on and builds on work that claims to predict criminality from faces, as well as psychological studies suggesting that people infer social class from faces [13], inferences that in turn shape downstream decisions. To date, however, no deep learning papers have examined social class bias in face recognition datasets.
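As a minimal illustration of what such a dataset audit could look like, the Python sketch below disaggregates a classifier's error rate across intersecting subgroup annotations, in the spirit of the intersectional accuracy analysis of [2]. The record fields (`gender`, `social_class`) and the toy data are hypothetical placeholders, not part of the case study itself; a real audit would require carefully sourced and consensually annotated evaluation sets.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Compute per-subgroup error rates for a classifier audit.

    `records` is an iterable of dicts with (hypothetical) keys:
      'gender', 'social_class'  -- annotated subgroup labels
      'y_true', 'y_pred'        -- ground-truth and predicted labels
    Returns a mapping {(gender, social_class): error_rate}.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        group = (r["gender"], r["social_class"])
        totals[group] += 1
        errors[group] += int(r["y_pred"] != r["y_true"])
    return {group: errors[group] / totals[group] for group in totals}

# Illustrative toy data only; real audits need annotated evaluation sets.
records = [
    {"gender": "female", "social_class": "working", "y_true": 1, "y_pred": 0},
    {"gender": "female", "social_class": "middle",  "y_true": 1, "y_pred": 1},
    {"gender": "male",   "social_class": "working", "y_true": 0, "y_pred": 0},
    {"gender": "male",   "social_class": "middle",  "y_true": 0, "y_pred": 1},
]
for group, rate in sorted(disaggregated_error_rates(records).items()):
    print(group, f"error rate = {rate:.2f}")
```

Disparities between the per-group rates, rather than a single aggregate accuracy figure, are what intersectional audits of this kind surface.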
References
1. R. Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
2. J. Buolamwini and T. Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," in Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, 2018.
3. O. Keyes, "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition," Proceedings of the ACM on Human-Computer Interaction, vol. 2, 2018. doi:10.1145/3274357
4. C. Draude, Computing Bodies: Gender Codes and Anthropomorphic Design at the Human-Computer Interface. Springer VS Research, 2017. doi:10.1007/978-3-658-18660-9
5. J. Zou and L. Schiebinger, "AI Can Be Sexist and Racist - It's Time to Make It Fair," Nature, vol. 559, no. 7714, pp. 324–326, 2018. doi:10.1038/d41586-018-05707-8
6. J. Thatcher, D. O'Sullivan, and D. Mahmoudi, "Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data," Environment and Planning D: Society and Space, vol. 34, no. 6, pp. 990–1006, 2016. doi:10.1177/0263775816633195
7. M. Kwet, "Digital Colonialism: US Empire and the New Imperialism in the Global South," Race & Class, vol. 60, no. 4, pp. 3–26, 2019. doi:10.1177/0306396818823172
8. C. O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, 2017.
9. N. Kumar and N. Karusala, "Intersectional Computing," Interactions, vol. 26, no. 2, p. 50, 2019. doi:10.1145/3305360
10. A. Schlesinger, W. Edwards, and R. Grinter, "Intersectional HCI: Engaging Identity through Gender, Race, and Class," in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), 2017, pp. 5412–5427. doi:10.1145/3025453.3025766
11. S. Mohamed, M. Png, and W. Isaac, "Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence," Philosophy & Technology, vol. 33, no. 4, pp. 659–684, 2020. doi:10.1007/s13347-020-00405-8
12. A. Abdilla, N. Arista, K. Baker, S. Benesiinaabandan, M. Brown, M. Cheung, M. Coleman, A. Cordes, J. Davison, K. Duncan, S. Garzon, D. Harrell, P. Jones, K. Kealiikanakaoleohaililani, M. Kelleher, S. Kite, O. Lagon, J. Leigh, M. Levesque, J. Lewis, K. Mahelona, C. Moses, I. Nahuewai, K. Noe, D. Olson, ʻŌ. Parker Jones, C. Running Wolf, M. Running Wolf, M. Silva, S. Fragnito, and H. Whaanga, "Indigenous Protocol and Artificial Intelligence Position Paper," 2020. doi:10.11573/SPECTRUM.LIBRARY.CONCORDIA.CA.00986506
13. R. Bjornsdottir and N. Rule, "The Visibility of Social Class from Facial Cues," Journal of Personality and Social Psychology, vol. 113, no. 4, 2017. doi:10.1037/pspa0000091
Outputs in connection with the subproject
- Keynote, 19 Sep 2024: Response-ability in Sociotechnical Systems Design
- Workshop, 3 Sep 2024: Participatory and socially responsible technology development
- Research Residency, 1 Jul 2024: Algorithms + Slimes
- Workshop talk, 28 Jun 2024: Diagrams, Transpositions, Implicated Futures. Some Strategies for Critical Technical Practice
- Workshop talk, 18 Jun 2024: Experiential Heuristics
- Workshop, 17 Jun 2024: After Explainability: AI Metaphors and Materialisations Beyond Transparency
- Workshop, 9 Nov 2023: Bayesian Knowledge: Situated and Pluriversal Perspectives
- Workshop paper, 16 Oct 2023: Feminist epistemology for machine learning systems design
- Workshop paper, 28 Apr 2023: Explaining the ghosts: Feminist intersectional XAI and cartography as methods to account for invisible labour
- Conference paper, 23 Apr 2023: Towards Feminist Intersectional XAI: From Explainability to Response-Ability