On 2 and 3 July, join the AI Forensics team for a two-day public symposium:

Interpreting intelligent machines?

What does it mean to interpret machines? What is language to a computer? What is the epistemology determining artificial intelligence? Is there an act of interpretation common to traditional hermeneutics and the nascent attempts to reverse engineer neural algorithms? According to which historical trajectories do social relations, processes of abstraction, definitions of intelligence, and predictive models configure one another? What epistemic horizons thereby impose themselves on us, and what modes of knowing could we mobilise in renewal?

Logistical information

The symposium will take place over two days—Wed., 2 July & Thu., 3 July—at Trust in Berlin. There will be a mix of lectures, panel discussions, and more participatory, workshop-like formats.

  • The symposium will be livestreamed.
    • The link is here (opens Zoom).
    • Please note: while it will of course be possible to ask questions or speak via stream, actively participating in the more open formats may be difficult.
    • Selected recordings will also be made available after the fact.
  • Changes to the schedule are possible! The programme on the current page will be kept up-to-date throughout.
  • You can request reminders and updates per email through this form.
  • Add the symposium to your calendar: Day 1 Day 2
...
Event poster, 16:9
Project symposium

Interpreting intelligent machines? pt. 1

Trust, Kluckstraße 25, 10785 Berlin

project retrospective / transformers—models of? / cartography / labour / knowledges and practices of interpretability

  • Arrival
    10:00
    Opening
  • Introduction
    10:20
    What is AI Forensics? A retrospective introduction

    Lead PI Matteo Pasquinelli (HfG Karlsruhe/U Venice) opens the symposium: Why forensics?—The search for a methodology commensurate to the societal transformations wrought by AI. Exploding (the view of the) AI production pipeline. Characterising the project.

    Discussion
    Talk
  • Transformer hegemony
    11:15

Revisiting milestones in the transformer’s ascent, these contributions offer critical reappraisals that complicate the famous catchphrase that attention is all you need. They put forth pieces of an ‘alternative chronology’ of these models.

Machine translation seq2seq, NLLB

    Paolo Caffoni (HfG Karlsruhe) investigates tokenization as a metric of labour.

Visual culture CLIP

Leonardo Impett (U Cambridge) looks into OpenAI’s multimodal CLIP embedding model of visual culture.
    Discussion
    Panel
  • Cartography—language models, labour, and the social
    12:20
    Cartography as methodology

    Goda Klumbytė (U Kassel) introduces cartography as a method.

    Intelligence, social relations, general intellect?

    Matteo Pasquinelli (HfG Karlsruhe/U Venice): from the Maschinenfragment via Postoperaismo to the internet-scale compression task that is LLM training…

    Discussion + collaborative mapping
Roundtable
  • Lunch
    13:15
    Break
  • Vector epistemology
    15:00

    The techniques and mechanisms underlying the shape of AI—and their form(s) of interpretation.

    Vector media. Towards a materialist epistemology of AI?

Presenting work forthcoming (2025) with meson press by Leonardo Impett (U Cambridge/B Hertziana) and Fabian Offert (UCSB)

    Mechanistic interpretability with sparse autoencoders

Patrick Leask (Durham U) contextualises the SAE approach within the wider frame of mechanistic interpretability, also discussing the significance of recent findings (Leask et al., 2025)

    Discussion

    What kinds of knowledge and practice is interpreting AI?

    Panel
  • Informal
    17:00
    Break
Project symposium

Interpreting intelligent machines? pt. 2

Trust, Kluckstraße 25, 10785 Berlin

historical epistemology / statistics / pedagogy / arts

  • Statistical normativities
    11:00
    Calculating health

    Giulia Gandolfi (HfG Karlsruhe) on the historical epistemology of correlational knowledge in medicine

    AI interpretability and accountability in the humanitarian sector

    Arif Kornweitz (HfG Karlsruhe) presents his PhD project—from the inception of prediction in political philosophy to the “hard problem of conflict prediction”

    Discussion
Panel
  • Lunch
    13:15
    Break
  • Learning otherwise
    14:30
    Worlding beyond benchmarking

    Eleanor Dare (U Cambridge) discusses worlding and arts-based research in the context of a series of pedagogical workshops on AI held in Cambridge, England

Tangibility and embodiment beyond the rational

    Goda Klumbytė (U Kassel) on the need for new modes of knowing

    Discussion
Panel
  • Adversariality
    15:30
    Concepts from Beyond Explainability

Goda Klumbytė (U Kassel) reports on the Beyond Explainability workshop

    Panel
  • Announcements: Outputs (forthcoming)
    16:00
    Two books, a workshop, and an anthology of boundary concepts—what the remainder of the project has in store
    _predictions_
  • Interpretability interfaces
    16:30
    Tutorials, followed by guided or freeform exploration with AI interpretability interfaces
    Workshop