Situating “explainability”: Making sense of data-driven assemblages in organizational context

Type: Presentation
Author: Christine T. Wolf
Year of publication: 2019
Keywords:

The rise of data-driven technologies in recent years has sparked renewed attention to questions of technological sensemaking, in particular the explainability or interpretability of such systems (a growing technical subfield commonly called “Explainable AI” or “XAI”). These approaches tend to focus on the technology itself (often at the level of the model or its predictive output) and fail to consider the broader context of use and the situated nature of technological sensemaking. We engage with these concerns and report on an ongoing study of the design, development, and deployment of an intelligent workplace system in the IT services domain. We describe the system, which is designed to augment the complex design work of highly skilled IT architects through natural language processing (NLP) and optimization modelling. We outline results from our study, which analyzes feedback from architects as they interacted with various prototypes of the system. From architects’ feedback, we identify three layers in their sensemaking practices: the algorithm or model layer (how an algorithmic output is generated); the interactive or application layer (how different models interact within the context of a multi-model application); and the organizational layer (how system outputs can or should be integrated with established artifacts and workflows). These findings complicate the notion of “explainability,” pointing to its emergent and situated qualities – features that require careful articulation to support workers as they make sense of data-driven systems and successfully integrate them into their everyday work practice.