MLOps as Enabler of Trustworthy AI


Bibliographic Details
Published in: 2024 11th IEEE Swiss Conference on Data Science (SDS), pp. 37-40
Main Authors: Billeter, Yann, Denzel, Philipp, Chavarriaga, Ricardo, Forster, Oliver, Schilling, Frank-Peter, Brunner, Stefan, Frischknecht-Gruber, Carmen, Reif, Monika, Weng, Joanna
Format: Conference Proceeding
Language: English
Published: IEEE, 30-05-2024
Description
Summary: As Artificial Intelligence (AI) systems are becoming ever more capable of performing complex tasks, their prevalence in industry, as well as society, is increasing rapidly. Adoption of AI systems requires humans to trust them, leading to the concept of trustworthy AI, which covers principles such as fairness, reliability, explainability, or safety. Implementing AI in a trustworthy way is encouraged by newly developed industry norms and standards, and will soon be enforced by legislation such as the EU AI Act (EU AIA). We argue that Machine Learning Operations (MLOps), a paradigm which covers best practices and tools to develop and maintain AI and Machine Learning (ML) systems in production reliably and efficiently, provides a guide to implementing trustworthiness into the AI development and operation lifecycle. In addition, we present an implementation of a framework based on various MLOps tools which enables verification of trustworthiness principles using the example of a computer vision ML model.
ISSN: 2835-3420
DOI: 10.1109/SDS60720.2024.00013