Event description

The inevitable bias of multimodal machine learning models lies not only in what they represent, but also in the logic of representation itself. Their ideologies are often not directly visible in their generated outputs or even their training data, which have been the focus of almost all existing work. Instead, they emerge from how the model organizes and transforms information within itself. While previous media technologies created new formats or imitated existing ones, such models instead seek to dissolve prior media into a universal space of commensurability: the vector space. Cultural objects, once specific to a medium, are rendered fungible: commodities in a new neural economy, expressed only in terms of their neural exchange value. Algorithmic justice thus has to be sought not only through the inclusion or exclusion of data points, but through an unprecedentedly close analysis of the conditions of production established by such vector media.