[Special issue collection](https://link.springer.com/collections/becggdhbad) description:

“Reproducibility” and “explainability” are important methodological considerations in the sciences, and they are increasingly relevant in Digital Humanities. The discussions around Nan Z. Da’s The Computational Case against Computational Literary Studies (2019), Katherine Bode’s Why You Can’t Model Away Bias (2020), Beatrice Fazi’s Beyond Human: Deep Learning, Explainability and Representation (2020), and Fabian Offert and Peter Bell’s Perceptual Bias and Technical Metapictures (2020) foregrounded a crucial methodological desideratum for the interconnection of digital methods, humanities research, and critical approaches. Many methods and tools prevalent in Digital Humanities research are not reproducible. With the advent of AI, the predictions of many models are not explainable. Finally, Digital Humanities research has yet to develop standards, as well as a culture, of comprehensive documentation and “understanding by reproducing”.

This special issue of IJDH will explore practical, methodological, theoretical, and critical approaches to the reproducibility and explainability of digital scholarship in the humanities and adjacent disciplines (e.g. the social sciences, life sciences, and computer science). Its aim is to promote the development of best practices and standards that ensure transparency and accountability at the methodological level, to improve the integration of digital scholarship in the humanities, and to develop methods and source criticism as an integrated aspect of DH.