APR 2024
Abstract
Digital health solutions that operate with or without artificial intelligence (D/AI) raise several responsibility challenges. Though many frameworks and tools have been developed, which principles should be translated into practice remains under debate. This scoping review aims to provide policymakers with a rigorous body of knowledge by asking: (1) what kinds of practice-oriented tools are available; (2) on what principles do they predominantly rely; and (3) what are their limitations?
We searched six academic and three grey literature databases for practice-oriented tools, defined as frameworks and/or sets of principles with clear operational explanations, published in English or French from 2015 to 2021. Characteristics of the tools were qualitatively coded, and variations across the dataset were identified through descriptive statistics and a network analysis.
A total of 56 tools met our inclusion criteria: 19 health-specific tools (33.9%) and 37 generic tools (66.1%). They adopt a normative (57.1%), reflective (35.7%), operational (3.6%), or mixed approach (3.6%) to guide developers (14.3%), managers (16.1%), end users (10.7%), policymakers (5.4%) or multiple groups (53.6%). The frequency of 40 principles varies greatly across tools (from 0% for ‘environmental sustainability’ to 83.8% for ‘transparency’). While 50% or more of the generic tools promote up to 19 principles, 50% or more of the health-specific tools promote 10 principles, and 50% or more of all tools disregard 21 principles. In contrast to the scattered network of principles proposed by academia, the business sector emphasizes closely connected principles. Few tools rely on a formal methodology (17.9%).
Despite a lack of consensus, there is a solid knowledge base for policymakers to anchor their role in such a dynamic field. Because several tools lack rigour and ignore key social, economic, and environmental issues, an integrated and methodologically sound approach to responsibility in D/AI solutions is warranted.
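The principle frequencies and co-occurrence patterns summarised above could, for instance, be derived from a simple tool-by-principle matrix. The sketch below is purely illustrative and does not reproduce the review's actual coding scheme, dataset, or software: the tool names, principle labels, and counts are hypothetical, and it assumes the Python libraries pandas and networkx.

```python
"""Illustrative sketch only: principle labels, tool names, and counts below
are hypothetical and do not reproduce the review's dataset or methods."""
from itertools import combinations

import networkx as nx
import pandas as pd

# Hypothetical tool-by-principle matrix (1 = the tool promotes the principle).
data = pd.DataFrame(
    {
        "transparency":   [1, 1, 1, 0, 1],
        "accountability": [1, 0, 1, 1, 0],
        "privacy":        [1, 1, 0, 1, 1],
        "sustainability": [0, 0, 0, 0, 0],
    },
    index=["tool_A", "tool_B", "tool_C", "tool_D", "tool_E"],
)

# Descriptive statistics: percentage of tools promoting each principle.
frequencies = data.mean().mul(100).round(1)
print(frequencies.sort_values(ascending=False))

# Co-occurrence network: principles are nodes; an edge weight counts how
# many tools promote both of the principles it connects.
graph = nx.Graph()
graph.add_nodes_from(data.columns)
for p1, p2 in combinations(data.columns, 2):
    weight = int((data[p1] & data[p2]).sum())
    if weight > 0:
        graph.add_edge(p1, p2, weight=weight)

print(sorted(graph.edges(data="weight")))
```

In such a setup, isolated or weakly connected principles would correspond to the scattered academic pattern described above, while densely weighted edges would correspond to the closely connected principles emphasized by the business sector.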
Authors: P. Lehoux, L. Rivard, R. Rocha de Oliveira, C.M. Mörch, H. Alami
Other publications

Journal Article: Committing to the wrong artificial delegate in a collective-risk dilemma is better than directly committing mistakes (AUG 2024)
Journal Article: Assessing Responsibility in Digital Health Solutions that operate with or without AI - 3 Phase Mixed Methods Study (APR 2024)
Journal Article: Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma (MAY 2022)