Transformer Architecture

Scopic regimes of neural wiring

Visual AI, Transformer architecture, Synthetic data

Investigating the ideologies of vision embedded in the neural wiring itself

Interpreting intelligent machines?

Interpretation, AI Forensics, Transformer architecture

The two-day symposium features an array of presentations of papers, software, and subprojects, all accompanied by continuous, in-depth discussion.

Interpreting intelligent machines

AI forensics, methodology, machine translation, transformer architecture, vector media, embedding, cartography, general intellect

project retrospective / transformers—models of? / cartography / labour / knowledges and practices of interpretability

The Predictive Turn of Language. Reading the Future in the Vector Space

LLMs, Multidimensional space, Predictive turn, Linguistic turn, Prediction, Translation, Machine translation, Transformer architecture, Cartography

Paolo Caffoni presents on the long historical arc of the predictive turn in a panel chaired by Matteo Pasquinelli

Interpretability and Accountability of AI Systems in the Sciences

Visual AI, Sciences, Generative AI, Transformer architecture, Large Language Models

Investigates potential strategies and methods to expose the epistemic failures of visual AI systems in the natural and social sciences

Synthesizing Proteins on the Graphics Card. Protein Folding and the Limits of Critical AI Studies

Transformer architecture, Protein folding, Language models, LLMs

Fabian Offert, Paul Kim, and Qiaoyu Cai look at Meta's ESM-2 protein folding 'language model', asking what kind of knowledge the transformer architecture produces.