AI Forensics

Accountability through interpretability in AI systems

An interdisciplinary research project of HfG Karlsruhe, Universität Kassel, Durham University, Cambridge University, and UC Santa Barbara; funded by the VolkswagenStiftung, 2022–2025.

Event

Vectors for Workers: Models of Automation and Autonomy in the Long AI Century

Publication

Matters of Explanation: Rethinking Explainability with Tangible, Embodied, Material Interactions

Event

Art x AI: Who Makes, Who Owns, Who Decides?

Event

Vector Media

Publication

New Materialist Informatics: New Materialism/Computer Science/Technology Design

Publication

Posthuman Convergences: Transdisciplinary Methods and Practices

Sociotechnical case study

AI Design Interventions for Social Diversity

Forensics toolkit

Latent mechanistic interpretability

Sociotechnical case study

A Pedagogy of Machines: Technology in Education and Universities in Translation

Sociotechnical case study

Scopic regimes of neural wiring

Sociotechnical case study

Interpretability and Accountability of AI Systems in the Sciences

Forensics toolkit

Exploratory machine learning interfaces

Sociotechnical case study

AI Interpretability and Accountability in the Humanitarian Sector

Toolkit + case study

Exposing.ai – The Production Pipeline of Facial Recognition Systems

Sociotechnical case study

AI Interpretability and Linear Cause-Effect Models in Medicine: Is Non-Linear Diagnosis Possible?