This project investigates the notion of interpretability in the use of AI for medical diagnosis vis-à-vis concepts of linear vs. non-linear causality. It asks whether non-linear causality can be a useful tool for interpreting data, including anomalies, and for addressing the problem of interpretability in medicine.

The project is divided into three parts:

  1. Defining linear vs. non-linear explanation: It asks whether it is possible, starting from an inferentialist approach, to conceive of a non-linear causality that could be used in diagnosis.
  2. Viewing the medical knowledge generated by AI as shaped by societal influences: It looks at the environment in which input knowledge is formulated, as well as at how AI’s knowledge output is interpreted, in the context of the knowledge paradigms in which both are inscribed.
  3. Envisioning a diagnostic approach in medicine that, with the help of AI, moves away from traditional linear causality: A key point here is reconsidering the role of uncertainty in diagnosis through the lens of randomised model outputs (see the sketch after this list). Uncertainty can lead to three kinds of errors: those related to individual patient factors, those arising from the healthcare system itself, and those stemming from Black Swan events (cf. Taleb 2001). How can such error models contribute positively to the discussion on causality?
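
As a purely illustrative aside to point 3, the sketch below shows one minimal way "randomised model outputs" can be read as an uncertainty signal: repeated predictions from slightly perturbed models are aggregated, and their disagreement is treated as a cue for further scrutiny. The data, model, and perturbation scheme are hypothetical placeholders, not the project's method.

```python
# Minimal sketch (illustrative only): disagreement across randomised model
# outputs as an uncertainty signal for a single diagnostic prediction.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient feature vector (e.g. a handful of normalised lab values).
patient = np.array([0.2, -1.3, 0.7, 0.05])

def ensemble_predictions(x, n_members=200):
    """Stand-in for an ensemble of diagnostic models: each member applies
    slightly perturbed weights, so repeated predictions for the same patient differ."""
    base_weights = np.array([0.8, -0.5, 1.1, 0.3])
    preds = []
    for _ in range(n_members):
        w = base_weights + rng.normal(scale=0.15, size=base_weights.shape)
        logit = w @ x
        preds.append(1.0 / (1.0 + np.exp(-logit)))  # probability of diagnosis
    return np.array(preds)

preds = ensemble_predictions(patient)
mean_risk = preds.mean()   # the "point" diagnosis a clinician would be shown
spread = preds.std()       # disagreement across the randomised outputs

print(f"mean predicted risk: {mean_risk:.2f}")
print(f"uncertainty (std across ensemble): {spread:.2f}")

# A wide spread flags cases where the randomised outputs disagree: a prompt to
# ask whether the uncertainty stems from the individual patient, from the system
# that produced the data, or from something the models have never seen.
```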