Assessor Feedback Mechanism for Machine Learning Model

Musulmon Lolaev, Anand Paul, Jeonghong Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Evaluating artificial intelligence (AI) systems is crucial for their successful deployment and safe operation in real-world applications. The assessor meta-learning model was recently introduced to assess AI system behavior, learning from the emergent characteristics of AI systems and their responses on a test set. However, the original approach does not cover continuous output ranges, such as those arising in regression problems, and it produces only a probability of success. In this work, to address these limitations and enhance practical applicability, we propose an assessor feedback mechanism designed to identify and learn from an AI system's errors, enabling the system to perform the target task more effectively while concurrently correcting its mistakes. Our empirical analysis demonstrates the efficacy of this approach. Specifically, we introduce a transition methodology that converts prediction errors into relative success, which is particularly beneficial for regression tasks. We then apply this framework to both neural network and support vector machine models on regression and classification tasks, testing its performance on a comprehensive suite of 30 diverse datasets. Our findings highlight the robustness and adaptability of the assessor feedback mechanism, showcasing its potential to improve model accuracy and reliability across varied data contexts.
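The abstract's transition methodology maps regression errors onto a relative-success scale that an assessor can learn from. The paper's exact formula is not given here, so the following is only a minimal sketch under one plausible assumption: success decays exponentially with the absolute error, normalized by the mean absolute error so that scores remain comparable across tasks. The function name and the default scale are illustrative, not the authors' definitions.

```python
import math

def errors_to_relative_success(y_true, y_pred, scale=None):
    """Map absolute regression errors to relative success scores in (0, 1].

    Hypothetical transition (the paper's exact formula may differ):
        success = exp(-|error| / scale)
    `scale` defaults to the mean absolute error, so a score of ~0.37
    corresponds to an error of average magnitude.
    """
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    if scale is None:
        # Guard against the degenerate all-zero-error case.
        scale = sum(errors) / len(errors) or 1.0
    return [math.exp(-e / scale) for e in errors]

# A perfect prediction maps to success 1.0; larger errors map lower.
scores = errors_to_relative_success([1.0, 2.0, 3.0], [1.0, 2.5, 5.0])
```

Under this sketch, an assessor trained on input features and these scores would predict graded success for continuous targets rather than a binary success probability, which is the gap in the original approach that the abstract describes.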

Original language: English
Pages (from-to): 4707-4726
Number of pages: 20
Journal: Computers, Materials and Continua
Volume: 81
Issue number: 3
State: Published - 2024

Keywords

  • Artificial Intelligence
  • assessor model
  • evaluation
  • explainable AI
  • meta-learning
  • trustworthy

