Abstract
While the discussion about ethical AI centers on conflicts between automated systems and individual human rights, those systems are often adopted to aid institutions rather than individuals. Starting from this observation, this chapter delineates the potential conflicts between institutions and ethical algorithms, with particular focus on two major attempts by the ML community (fair ML and interpretable ML) to make algorithms more responsible. Computer scientists, legal scholars, philosophers, and social scientists have presented both immanent and external critiques of the formalization of responsible AI/ML. Such critiques point to the computational or mathematical complexity of creating fair, transparent algorithms, and to the argument that computational solutions cannot fully account for social problems and may even worsen them. As an alternative, this chapter proposes an institutional perspective on responsible AI, framed around polycentric governance of the sociotechnical platforms in which automated decision systems are embedded, where cooperation among users, civil society, regulatory entities, and the related firms is required to secure the systems' regularity and integrity.
| Original language | English |
|---|---|
| Title of host publication | Human-Centered Artificial Intelligence |
| Subtitle of host publication | Research and Applications |
| Publisher | Elsevier Inc. |
| Pages | 251-270 |
| Number of pages | 20 |
| ISBN (Electronic) | 9780323856485 |
| ISBN (Print) | 9780323856492 |
| DOIs | |
| State | Published - 1 Jan 2022 |
Keywords
- Artificial intelligence
- Black box
- Human-computer interaction
- Labeling
- Machine learning