Responsible AI and algorithm governance: An institutional perspective

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

While the discussion about ethical AI centers on conflicts between automated systems and individual human rights, those systems are often adopted to aid institutions rather than individuals. Starting from this observation, this chapter delineates the potential conflicts between institutions and ethical algorithms, with particular focus on two major attempts by the ML community—fair ML and interpretable ML—to make algorithms more responsible. Computer scientists, legal scholars, philosophers, and social scientists have presented both immanent and external critiques of the formalization of responsible AI/ML. These critiques rest on the computational or mathematical complexity of creating fair, transparent algorithms, as well as on the argument that computational solutions cannot fully account for social problems and could potentially worsen them. As an alternative, this chapter proposes an institutional perspective on responsible AI, framed in terms of polycentric governance over the sociotechnical platforms in which automated decision systems are embedded, where cooperation among users, civil society, regulatory entities, and related firms is required to secure the systems' regularity and integrity.

Original language: English
Title of host publication: Human-Centered Artificial Intelligence
Subtitle of host publication: Research and Applications
Publisher: Elsevier Inc.
Pages: 251-270
Number of pages: 20
ISBN (Electronic): 9780323856485
ISBN (Print): 9780323856492
State: Published - 1 Jan 2022

Keywords

  • Artificial intelligence
  • Black box
  • Human-computer interaction
  • Labeling
  • Machine learning
