TY - GEN
T1 - Exploiting Residual Edge Information in Deep Fully Convolutional Neural Networks for Retinal Vessel Segmentation
AU - Khan, Tariq M.
AU - Naqvi, Syed S.
AU - Arsalan, Muhammad
AU - Khan, Muhammad Aurangzeb
AU - Khan, Haroon A.
AU - Haider, Adnan
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
AB - Accurate automatic segmentation of the retinal vessels is crucial for early detection and diagnosis of vision-threatening retinal diseases. A new supervised method using a variant of the fully convolutional neural network is proposed, with the advantages of fewer hyperparameters, reduced computational/memory requirements, and robust performance in capturing tiny vessel information. The fully convolutional architectures previously employed for vessel segmentation have multiple tunable hyperparameters and are difficult to train end-to-end due to their decoder structure. We resolve this problem by sharing information from the encoder for upsampling at the decoder stage, resulting in a significantly smaller number of tunable parameters and low computational overhead at the training and testing stages. Moreover, the need for pre- and post-processing steps is eliminated. Consequently, the detection accuracy is significantly improved, with scores of 0.9620, 0.9623, and 0.9620 on the DRIVE, STARE, and CHASE-DB1 datasets, respectively.
KW - Deep fully convolutional neural network
KW - Low-level semantic information
KW - Residual edge information
KW - Retinal vessel segmentation
KW - Semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85093867050&partnerID=8YFLogxK
U2 - 10.1109/IJCNN48605.2020.9207411
DO - 10.1109/IJCNN48605.2020.9207411
M3 - Conference contribution
AN - SCOPUS:85093867050
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 International Joint Conference on Neural Networks, IJCNN 2020
Y2 - 19 July 2020 through 24 July 2020
ER -