Forensics toolkit
Exploratory machine learning interfaces
This project produces several interactive pedagogical interfaces, each exploring a specific machine learning concept.
Technical | Design | Socio–
Dataset | Model | Application
How do we create the conditions for people to grasp the structural and ideological patterns of generative AI? Can we establish eruptive yet informed understandings outside of media hyperbole and corporate narratives, for instance narratives of solutionism or technological determinism?
Large image datasets, when built without careful consideration of societal implications, pose a threat to the welfare and well-being of individuals. Most often, vulnerable people and marginalised populations pay a disproportionately high price. [1]
How can this harm be prevented—by education, abolition, ‘fair’ AI, AI ethics, or toolkits? Deploying the ‘Generative AI Untoolkit’, Dare positions AI Forensics as a methodology and a contingent pedagogy—entangled with arts-based research and necessitating close attention to materiality, power structures, and historical as well as aesthetic investigation.
The project offers a pedagogic counter to the overwhelmingly uncritical mediation of statistical computing (as we should properly call 'AI'). This work was undertaken as part of the AI Forensics research project at Cambridge Digital Humanities (CDH), University of Cambridge. In the PDFs available here, Dare discusses and evaluates a range of arts-based workshops, and provides PDFs of synthetically created images, which formed the core of this research.
References
1. V. Prabhu and A. Birhane, 'Large image datasets: A pyrrhic win for computer vision?', 2020. doi:10.48550/arXiv.2006.16923