AI Forensics

Accountability through interpretability in AI systems

An interdisciplinary research project of HfG Karlsruhe, Universität Kassel, Durham University, Cambridge University, and UC Santa Barbara; funded by the VolkswagenStiftung, 2022–2025.

Publication: The Resource Debate in Machine Translation and Large Language Models

Publication: The Cultural Politics of Artificial Intelligence in China

Publication: BatchTopK Sparse Autoencoders

Publication: Stitching Sparse Autoencoders of Different Sizes

Event: Carceral Diffusion: From Galton's Criminal Composites to Horse-riding Astronauts and Beyond

Event: Response-ability in Sociotechnical Systems Design

Sociotechnical case study: A Pedagogy of Machines: Technology in Education and Universities in Translation

Forensics toolkit: Latent mechanistic interpretability

Forensics toolkit: Exploratory machine learning interfaces

Sociotechnical case study: AI Design Interventions for Social Diversity

Sociotechnical case study: AI Interpretability and Accountability in the Humanitarian Sector

Sociotechnical case study: Interpretability and Accountability of AI Systems in the Sciences

Sociotechnical case study: Scopic regimes of neural wiring

Toolkit + Case study: Exposing.ai – the production pipeline of facial recognition systems

Sociotechnical case study: AI Interpretability and Linear Cause-Effect Models in Medicine: Is Non-Linear Diagnosis Possible?