Addressing discriminatory bias in artificial intelligence systems operated by companies: An analysis of end-user perspectives

Bibliographic Details
Published in: Technovation Vol. 138; p. 103118
Main Authors: Borba, Rafael Lucas; de Paula Ferreira, Iuri Emmanuel; Bertucci Ramos, Paulo Henrique
Format: Journal Article
Language: English
Published: Elsevier Ltd 01-12-2024
Description
Summary: The use of AI in different applications and for different purposes has raised concerns due to discriminatory biases identified in the technology. This paper aims to identify and analyze some of the main measures proposed by Bill No. 2338/23 of the Federative Republic of Brazil to combat discriminatory bias, measures that companies should adopt in order to provide and/or operate fair and non-discriminatory AI systems. To do so, it first measures and analyzes people's perceptions of the possibility that AI systems are discriminatory. For this purpose, a qualitative, descriptive, exploratory study was conducted using the inhabitants of the Southeast region of Brazil as a reference sample. The survey results suggest that people are increasingly aware that AIs are not neutral and that they may incorporate and reproduce prejudices and forms of discrimination present in society. The incorporation of such biases results from issues related to the quality and diversity of the data used, inaccuracies in the algorithms employed, and biases on the part of both developers and operators. Thus, this work sought to reduce this gap and, at the same time, to overcome the lack of dialogue with the public in order to contribute to a democratic debate with society.
• People recognize that artificial intelligence is not neutral.
• Artificial intelligence may incorporate and reproduce discriminatory biases.
• People understand the need for companies to adopt measures against biases.
ISSN: 0166-4972
DOI: 10.1016/j.technovation.2024.103118