Sociotechnical case study
Interpretability and Accountability of AI Systems in the Sciences
Investigates potential strategies and methods to expose the epistemic failures of visual AI systems in the natural and social sciences
This case study investigates potential strategies and methods to expose the epistemic failures of AI systems in the natural and social sciences, with a particular focus on epistemic and inductive biases in pre-trained deep learning models.
Since protein folding has been claimed to be the one "win" for AI research that is not merely decorative (i.e. generative), we decided to look at this domain first. The models in this field are also based on the transformer architecture, which underlies almost all recent deep learning research and poses a central challenge for interpretability research. We are especially interested in how the language paradigm in biology (e.g. the "genetic code") and the language paradigm in machine learning (operationalized as the transformer) are mapped onto each other and historically linked.
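To make the mapping between the two "language" paradigms concrete, the following minimal sketch treats a protein sequence exactly like a sentence of text: it is tokenized residue by residue and passed through a transformer to obtain per-residue embeddings, analogous to per-token embeddings in a text model. The model name (facebook/esm2_t6_8M_UR50D, a small publicly available protein language model) and the example sequence are assumptions chosen for illustration only; they are not part of the case study's own tooling.

```python
# Illustrative sketch: a protein sequence handled as if it were a sentence.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "facebook/esm2_t6_8M_UR50D"  # assumption: small ESM-2 model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# One amino acid per "token", mirroring how a text tokenizer splits a sentence.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical example sequence
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# The last hidden state is a per-residue embedding, the usual starting point
# for interpretability analyses of such models.
residue_embeddings = outputs.hidden_states[-1]
print(residue_embeddings.shape)  # (batch, sequence length + special tokens, hidden size)
```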
Outputs in connection with the subproject
Workshop talk
17 Jun 2024: XAI as Science
Preprint
16 May 2024: Synthesizing Proteins on the Graphics Card. Protein Folding and the Limits of Critical AI Studies
Book chapter
19 Feb 2024: Projektbericht: Maschineninterpretation mit Interpretationsmaschinen. Explainable Artificial Intelligence als bildgebendes Verfahren und bildwissenschaftliches Problem [Project report: Machine Interpretation with Interpretation Machines. Explainable Artificial Intelligence as an Imaging Technique and a Problem for Image Studies]
Panel
11 Nov 2023: The politics and aesthetics of synthetic media
Podcast
25 Feb 2023: Latent Deep Space