AI Forensics

Accountability through interpretability in AI systems

An interdisciplinary research project of HfG Karlsruhe, Universität Kassel, Durham University, Cambridge University, and UC Santa Barbara; funded by the VolkswagenStiftung, 2022–2025.

Publication

New Materialist Informatics: New Materialism/Computer Science/Technology Design

Publication

Posthuman Convergences: Transdisciplinary Methods and Practices

Publication

Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models

Event

Diagrammatics of AI: Tracing and Diffracting Epistemologies of Machine Learning Algorithms

Event

The Predictive Turn of Language. Reading the Future in the Vector Space

Sociotechnical case study

AI Design Interventions for Social Diversity

Forensics toolkit

Latent mechanistic interpretability

Sociotechnical case study

A Pedagogy of Machines: Technology in Education and Universities in Translation

Sociotechnical case study

Scopic regimes of neural wiring

Sociotechnical case study

Interpretability and Accountability of AI Systems in the Sciences

Forensics toolkit

Exploratory machine learning interfaces

Sociotechnical case study

AI Interpretability and Accountability in the Humanitarian Sector

Toolkit + Case study

Exposing.ai – the production pipeline of facial recognition systems

Sociotechnical case study

AI Interpretability and Linear Cause-Effect Models in Medicine: Is Non-Linear Diagnosis Possible?