Parameterized Luenberger-Type $H_{\infty}$ State Estimator for Delayed Static Neural Networks

Yongsik Jin, Wookyong Kwon, Sangmoon Lee

Research output: Contribution to journal › Article › peer-review


Abstract

This article proposes a new Luenberger-type state estimator with parameterized observer gains that depend on the activation function, in order to improve the $H_{\infty}$ state estimation performance of static neural networks with time-varying delay. The nonlinearity of the activation function has a significant impact on both stability analysis and robustness/performance. In the proposed state estimator, a parameter-dependent estimator gain is constructed by exploiting the sector-bounded property of the activation functions, which are represented as linear combinations of weighting parameters. In this reformulated form, the constraints on the activation-function parameters are expressed as linear matrix inequalities. Based on a Lyapunov-Krasovskii functional and the improved reciprocally convex inequality, enhanced design conditions for a new state estimator that guarantees $H_{\infty}$ performance are derived through a parameterization technique. Comparisons with recent studies demonstrate the superiority and effectiveness of the presented method.
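The sketch below illustrates the structure described in the abstract: a Luenberger-type observer for a delayed static neural network whose injection gain is a linear combination of fixed gain matrices weighted by an activation-dependent parameter. All matrices, dimensions, the constant delay, and the specific weighting rule are hypothetical placeholders; in the paper the gains are obtained by solving the derived LMI conditions that guarantee $H_{\infty}$ performance, which is not reproduced here.

```python
import numpy as np

# Minimal simulation sketch of a parameter-dependent Luenberger-type estimator
# for a delayed static neural network. All numerical values are illustrative
# assumptions, not the design from the paper (which solves LMIs for the gains).

def tanh_act(v):
    """Sector-bounded activation (tanh lies in the sector [0, 1])."""
    return np.tanh(v)

# Hypothetical plant: x_dot(t) = -A x(t) + f(W x(t - d) + J),  y(t) = C x(t)
A = np.diag([1.5, 2.0])                  # positive-definite self-feedback
W = np.array([[0.4, -0.3], [0.2, 0.5]])  # delayed connection weights
J = np.array([0.1, -0.2])                # external input
C = np.array([[1.0, 0.0]])               # measurement matrix
d = 0.5                                   # constant delay (stand-in for a time-varying one)

# Parameterized observer gain: L(lmbda) = L0 + lmbda * L1, lmbda in [0, 1],
# where lmbda plays the role of the activation-dependent weighting parameter.
L0 = np.array([[0.8], [0.3]])
L1 = np.array([[0.4], [0.1]])

def estimator_step(x, xhat, x_del, xhat_del, dt):
    """One Euler step of the plant and the Luenberger-type estimator."""
    y, yhat = C @ x, C @ xhat
    v = W @ xhat_del + J
    # Illustrative weighting: average slope of tanh at the delayed estimate,
    # which stays in [0, 1] since tanh' = 1 - tanh^2.
    lmbda = float(np.mean(1.0 - np.tanh(v) ** 2))
    L = L0 + lmbda * L1                    # parameter-dependent gain
    x_dot = -A @ x + tanh_act(W @ x_del + J)
    xhat_dot = -A @ xhat + tanh_act(v) + L @ (y - yhat)
    return x + dt * x_dot, xhat + dt * xhat_dot

if __name__ == "__main__":
    dt, T = 1e-3, 10.0
    n_delay = int(d / dt)
    x, xhat = np.array([1.0, -1.0]), np.zeros(2)
    buf_x = [x.copy()] * n_delay           # delay buffers holding past states
    buf_xhat = [xhat.copy()] * n_delay
    for _ in range(int(T / dt)):
        x, xhat = estimator_step(x, xhat, buf_x[0], buf_xhat[0], dt)
        buf_x = buf_x[1:] + [x.copy()]
        buf_xhat = buf_xhat[1:] + [xhat.copy()]
    print("final estimation error:", np.linalg.norm(x - xhat))
```

In the paper, the gain matrices corresponding to L0 and L1 would come from the LMI-based conditions built on the Lyapunov-Krasovskii functional and the improved reciprocally convex inequality, rather than being chosen by hand as above.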

Original language: English
Pages (from-to): 2791-2800
Number of pages: 10
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 33
Issue number: 7
DOIs
State: Published - 1 Jul 2022

Keywords

  • Linear matrix inequalities (LMIs)
  • performance analysis
  • state estimator
  • static neural networks
  • time delay

