Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue

Authors
Larissa Chazette, Wasja Brunotte, Timo Speith
Abstract

The growing complexity of software systems and the influence of software-supported decisions on our society have created the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) with a significant impact on system quality. However, to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it affects other quality aspects in a system. Such an understanding enables an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.

Organisation(s)
PhoenixD: Photonics, Optics, and Engineering - Innovation Across Disciplines
Software Engineering Section
External Organisation(s)
Saarland University
Type
Conference contribution
Pages
197-208
No. of pages
12
Publication date
2021
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Computer Science (all), Engineering (all), Strategy and Management
Electronic version(s)
https://doi.org/10.1109/RE51729.2021.00025 (Access: Closed)