TY - JOUR
T1 - Multi-Task Learning with Self-Defined Tasks for Adversarial Robustness of Deep Networks
AU - Hyun, Changhun
AU - Park, Hyeyoung
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Despite the considerable progress made in the development of deep neural networks (DNNs), their vulnerability to adversarial attacks remains a major hindrance to their practical application. Consequently, there has been a surge of interest and investment in researching adversarial attacks and defense mechanisms, with a considerable focus on understanding the properties of adversarial robustness. Among these studies, a few works have shown that multi-task learning can enhance the adversarial robustness of DNNs. Building on these works, we propose an efficient way to improve the adversarial robustness of a given main task in a more practical multi-task learning scenario by leveraging self-defined auxiliary tasks. The core concept of our proposed approach lies not in jointly training predefined auxiliary tasks but in manually defining auxiliary tasks based on the built-in labels of the given data, which enables users to perform multi-task learning efficiently without the need for predefined auxiliary tasks. The newly generated self-defined tasks remain 'hidden' from attackers and serve a supplementary role in improving the adversarial accuracy of the main task. In addition, the hidden auxiliary tasks make it possible to build a rejection module that utilizes predictions from the auxiliary tasks to enhance the reliability of the prediction results. Through experiments conducted on five benchmark datasets, we confirm that multi-task learning with self-defined hidden tasks can be actively employed to enhance adversarial robustness and reliability.
AB - Despite the considerable progress made in the development of deep neural networks (DNNs), their vulnerability to adversarial attacks remains a major hindrance to their practical application. Consequently, there has been a surge of interest and investment in researching adversarial attacks and defense mechanisms, with a considerable focus on understanding the properties of adversarial robustness. Among these studies, a few works have shown that multi-task learning can enhance the adversarial robustness of DNNs. Building on these works, we propose an efficient way to improve the adversarial robustness of a given main task in a more practical multi-task learning scenario by leveraging self-defined auxiliary tasks. The core concept of our proposed approach lies not in jointly training predefined auxiliary tasks but in manually defining auxiliary tasks based on the built-in labels of the given data, which enables users to perform multi-task learning efficiently without the need for predefined auxiliary tasks. The newly generated self-defined tasks remain 'hidden' from attackers and serve a supplementary role in improving the adversarial accuracy of the main task. In addition, the hidden auxiliary tasks make it possible to build a rejection module that utilizes predictions from the auxiliary tasks to enhance the reliability of the prediction results. Through experiments conducted on five benchmark datasets, we confirm that multi-task learning with self-defined hidden tasks can be actively employed to enhance adversarial robustness and reliability.
KW - Adversarial attack
KW - adversarial robustness
KW - adversarial training
KW - multi-task learning
KW - self-defined auxiliary tasks
UR - http://www.scopus.com/inward/record.url?scp=85182937175&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3355924
DO - 10.1109/ACCESS.2024.3355924
M3 - Article
AN - SCOPUS:85182937175
SN - 2169-3536
VL - 12
SP - 83248
EP - 83259
JO - IEEE Access
JF - IEEE Access
ER -