TY - JOUR
T1 - Exponential filtering technique for Euclidean norm-regularized extreme learning machines
AU - Naik, Shraddha M.
AU - Subramani, Chinnamuthu
AU - Jagannath, Ravi Prasad K.
AU - Paul, Anand
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2023/8
Y1 - 2023/8
N2 - Extreme Learning Machine (ELM) is a feedforward neural network that uses a single hidden layer to address the slow learning speeds commonly associated with conventional gradient-based neural networks. ELM has been reported to achieve faster learning and better performance than traditional neural networks. However, it is susceptible to unreliable solutions when applied to real-world input data with inconsistent noise, resulting in overfitting. To mitigate these limitations, we investigate regularization techniques that can be employed in conjunction with ELM, including Tikhonov regularization, a well-established method in the field. A main drawback of Tikhonov regularization, however, is its assumption that the noise in the input data is white and Gaussian, which may not hold in real-world applications. This assumption can lead to suboptimal regularization and poor generalization performance of the model. We therefore propose applying an exponential filtering method to ELM to overcome this limitation and improve the model’s reliability. We compare our approach with Tikhonov regularization and other existing methods to evaluate its efficacy. Our experimental results demonstrate that the proposed strategy achieves superior accuracy and generalization capability compared to the other methods. Moreover, we provide statistical evidence to support the significance of our findings.
AB - Extreme Learning Machine (ELM) is a feedforward neural network that uses a single hidden layer to address the slow learning speeds commonly associated with conventional gradient-based neural networks. ELM has been reported to achieve faster learning and better performance than traditional neural networks. However, it is susceptible to unreliable solutions when applied to real-world input data with inconsistent noise, resulting in overfitting. To mitigate these limitations, we investigate regularization techniques that can be employed in conjunction with ELM, including Tikhonov regularization, a well-established method in the field. A main drawback of Tikhonov regularization, however, is its assumption that the noise in the input data is white and Gaussian, which may not hold in real-world applications. This assumption can lead to suboptimal regularization and poor generalization performance of the model. We therefore propose applying an exponential filtering method to ELM to overcome this limitation and improve the model’s reliability. We compare our approach with Tikhonov regularization and other existing methods to evaluate its efficacy. Our experimental results demonstrate that the proposed strategy achieves superior accuracy and generalization capability compared to the other methods. Moreover, we provide statistical evidence to support the significance of our findings.
KW - Classification
KW - Exponential filtering
KW - Extreme learning machine
KW - Regularization parameter
UR - http://www.scopus.com/inward/record.url?scp=85162675543&partnerID=8YFLogxK
U2 - 10.1007/s10044-023-01174-8
DO - 10.1007/s10044-023-01174-8
M3 - Article
AN - SCOPUS:85162675543
SN - 1433-7541
VL - 26
SP - 1453
EP - 1462
JO - Pattern Analysis and Applications
JF - Pattern Analysis and Applications
IS - 3
ER -