Abstract
Interpreting the learned policy in Deep Reinforcement Learning (DRL) is necessary for extracting the control rules of a nonlinear system from the learned data. This paper presents a novel interpretable Neuro-Fuzzy (NF) inference system based on a Modified Triplet-Average Deep Deterministic (MTADD) policy gradient reinforcement learning algorithm with a two-phase training method. The first phase explores the system and initializes the Takagi-Sugeno (T-S) fuzzy rules and premise parameters. The second phase trains the NF policy network by deep reinforcement learning, using the Modified Triplet-Average Deep Deterministic policy gradient algorithm. The experimental results demonstrate that the proposed approach reduces training time, improves control performance, and increases the interpretability of NF DRL.
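The abstract's two-phase idea can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it assumes a zero-order T-S fuzzy policy with Gaussian membership functions, and approximates phase one (rule and premise-parameter initialization from exploration data) with a simple k-means-style clustering. All class, function, and parameter names here are assumptions for illustration.

```python
import numpy as np

class TSFuzzyPolicy:
    """Sketch of a zero-order T-S fuzzy policy (hypothetical, not the paper's design)."""

    def __init__(self, centers, widths, consequents):
        self.centers = np.asarray(centers, float)          # (n_rules, state_dim)
        self.widths = np.asarray(widths, float)            # (n_rules, state_dim)
        self.consequents = np.asarray(consequents, float)  # (n_rules,) rule outputs

    def act(self, state):
        # Gaussian membership per rule, product over state dimensions
        d = (np.asarray(state, float) - self.centers) / self.widths
        firing = np.exp(-0.5 * np.sum(d * d, axis=1))
        w = firing / firing.sum()          # normalized firing strengths (sum to 1)
        return float(w @ self.consequents) # defuzzified action: weighted rule outputs

def init_premises(states, n_rules, iters=20, seed=0):
    """Phase-one sketch: cluster exploration data to place rule centers/widths."""
    rng = np.random.default_rng(seed)
    X = np.asarray(states, float)
    centers = X[rng.choice(len(X), n_rules, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_rules):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    widths = np.full_like(centers, X.std(axis=0) + 1e-6)
    return centers, widths
```

In phase two, the consequent (and optionally premise) parameters would be tuned by the MTADD gradient updates; since the action is a convex combination of rule consequents, it always stays within their range, which keeps each rule's contribution directly readable.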
| Original language | English |
|---|---|
| Article number | 107653 |
| Journal | Journal of the Franklin Institute |
| Volume | 362 |
| Issue number | 7 |
| DOIs | |
| State | Published - 1 May 2025 |
Keywords
- Interpretable neuro-fuzzy controller
- Inverted pendulum
- Reinforcement learning
- Twin-delay
- Two-phase training