AI Forensics

Accountability through interpretability in AI systems

An interdisciplinary research project of HfG Karlsruhe, Universität Kassel, Durham University, Cambridge University, and UC Santa Barbara; funded by the VolkswagenStiftung, 2022–2025.

Event
The Predictive Turn of Language. Reading the Future in the Vector Space

Event
Questioning language models before and after AI

Event
Operativism, Mediation, and AI

Event
Empire Model Collapse: Information Entropy and the Toxic Implosion of Technofeudalism

Event
Artificial Intelligence in the History of Cultural Techniques

Event
Vector Media: Notes on the Epistemology of Machine Vision

Sociotechnical case study
A Pedagogy of Machines: Technology in Education and Universities in Translation

Sociotechnical case study
Interpretability and Accountability of AI Systems in the Sciences

Sociotechnical case study
Scopic regimes of neural wiring

Forensics toolkit
Latent mechanistic interpretability

Sociotechnical case study
AI Design Interventions for Social Diversity

Forensics toolkit
Exploratory machine learning interfaces

Sociotechnical case study
AI Interpretability and Accountability in the Humanitarian Sector

Toolkit + Case study
Exposing.ai – the production pipeline of facial recognition systems

Sociotechnical case study
AI Interpretability and Linear Cause-Effect Models in Medicine: Is Non-Linear Diagnosis Possible?