Transformer Architecture

Interpretability and Accountability of AI Systems in the Sciences

Visual AI, Sciences, Generative AI, Transformer architecture, Large Language Models

Investigates strategies and methods for exposing the epistemic failures of visual AI systems in the natural and social sciences.

Synthesizing Proteins on the Graphics Card. Protein Folding and the Limits of Critical AI Studies

Transformer architecture, Protein folding, Language models, LLMs

Fabian Offert, Paul Kim, and Qiaoyu Cai look at Meta’s ESM-2 protein folding ‘language model’, asking what kind of knowledge the transformer architecture produces.

Scopic regimes of neural wiring

Visual AI, Transformer architecture, Synthetic data

Investigates the ideologies of vision inherent in the neural wiring itself.