Asymmetry between right and left fundus images identified using convolutional neural networks

Tae Seen Kang, Bum Jun Kim, Ki Yup Nam, Seongjin Lee, Kyonghoon Kim, Woong sub Lee, Jinhyun Kim, Yong Soep Han

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

We analyzed fundus images to determine whether convolutional neural networks (CNNs) can discriminate between right and left fundus images. We gathered 98,038 fundus photographs from Gyeongsang National University Changwon Hospital, South Korea, and augmented these with the Ocular Disease Intelligent Recognition dataset. We created eight combinations of image sets to train CNNs. Class activation mapping was used to identify the discriminative image regions used by the CNNs. The CNNs identified right and left fundus images with high accuracy (more than 99.3% on the Gyeongsang National University Changwon Hospital dataset and 91.1% on the Ocular Disease Intelligent Recognition dataset), regardless of whether the images were flipped horizontally. The depth and complexity of the CNN affected the accuracy (DenseNet121: 99.91%, ResNet50: 99.86%, VGG19: 99.37%). DenseNet121 could not discriminate within an image set composed only of left eyes (55.1%, p = 0.548). Class activation mapping identified the macula as the discriminative region used by the CNNs. Several previous studies have used horizontal flipping to augment fundus-photograph data; however, flipped photographs remain distinguishable from non-flipped images. This asymmetry could introduce undesired bias in machine learning. Therefore, when developing a CNN with fundus photographs, care should be taken when applying data augmentation with flipping.
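The two operations central to the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: `flip_horizontal` shows the augmentation the study cautions against (a mirrored right-eye image is not anatomically equivalent to a true left-eye image), and `class_activation_map` shows the standard CAM computation, a class-weighted sum over the final convolutional feature maps, of the kind used to localize the macula as the discriminative region.

```python
import numpy as np

def flip_horizontal(image: np.ndarray) -> np.ndarray:
    """Mirror a fundus image left-to-right (H, W, C array).

    Flipping a right-eye photograph is a common way to augment
    left-eye data, but the study shows CNNs can still distinguish
    such flips from genuine left-eye images, so this augmentation
    can introduce bias.
    """
    return image[:, ::-1, :]

def class_activation_map(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Standard CAM: weighted sum of feature maps.

    features: (K, H, W) activations of the last conv layer.
    weights:  (K,) classifier weights for the target class.
    Returns a (H, W) heat map of class-discriminative regions.
    """
    return np.tensordot(weights, features, axes=1)

# Toy 2x3 RGB "image": every pixel encodes its column index.
img = np.stack([np.tile(np.arange(3), (2, 1))] * 3, axis=-1)
flipped = flip_horizontal(img)
print(flipped[:, :, 0])  # columns reversed: [[2 1 0], [2 1 0]]

# Toy CAM: two 2x2 feature maps, class weights (1, 2).
feats = np.array([[[1., 0.], [0., 0.]],
                  [[0., 1.], [0., 0.]]])
print(class_activation_map(feats, np.array([1., 2.])))  # [[1. 2.], [0. 0.]]
```

Flipping twice recovers the original image, which makes the asymmetry finding notable: the information the CNNs exploit survives the mirror transform itself.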

Original language: English
Article number: 1444
Journal: Scientific Reports
Volume: 12
Issue number: 1
DOIs
State: Published - Dec 2022

