Journal of Digital Imaging | 2018-10-05

Towards Accurate Segmentation of Retinal Vessels and the Optic Disc in Fundoscopic Images with Generative Adversarial Networks.


Automatic segmentation of the retinal vasculature and the optic disc is a crucial task for accurate geometric analysis and reliable automated diagnosis. In recent years, Convolutional Neural Networks (CNNs) have shown outstanding performance compared to conventional approaches in segmentation tasks. In this paper, we experimentally measure the performance gain of the Generative Adversarial Network (GAN) framework when applied to these segmentation tasks. We show that GANs achieve statistically significant improvements in area under the receiver operating characteristic curve (AU-ROC) and area under the precision–recall curve (AU-PR) on two public datasets (DRIVE, STARE) by better segmenting fine vessels. We also found a model that surpasses the current state-of-the-art method by 0.2–1.0% in AU-ROC, 0.8–1.2% in AU-PR, and 0.5–0.7% in Dice coefficient. In contrast, no significant improvements in AU-ROC or AU-PR were observed for the optic disc segmentation task on the DRIONS-DB, RIM-ONE (r3), and Drishti-GS datasets.
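At a high level, a GAN-style segmenter is trained with a pixel-wise loss against the ground-truth mask plus an adversarial term from a discriminator that judges whether a predicted vessel map looks real. The following is a minimal numeric sketch of that combined objective; the function names, the weighting `lam`, and the toy values are illustrative assumptions, not the paper's actual formulation:

```python
import math

def bce(pred, target, eps=1e-7):
    # Per-pixel binary cross-entropy, averaged over pixels.
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def generator_loss(seg_probs, gt_mask, disc_score_on_fake, lam=10.0):
    # The segmenter (generator) minimizes a pixel-wise BCE against the
    # ground-truth mask plus an adversarial term that rewards fooling the
    # discriminator; disc_score_on_fake is the probability the discriminator
    # assigns to the predicted mask being a real (human-drawn) one.
    adv = -math.log(disc_score_on_fake + 1e-7)
    return lam * bce(seg_probs, gt_mask) + adv

# Tiny toy example: a 4-pixel "vessel map".
seg = [0.9, 0.1, 0.8, 0.2]   # predicted vessel probabilities
gt  = [1,   0,   1,   0]     # ground-truth mask
loss = generator_loss(seg, gt, disc_score_on_fake=0.7)
```

The adversarial term is what pushes the generator toward masks with realistic global structure (e.g., connected fine vessels), which a purely pixel-wise loss does not enforce.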
MICCAI LABELS 2018 | 2018-09-18

An Efficient and Comprehensive Labeling Tool for Large-Scale Annotation of Fundus Images


Computerized labeling tools are often used to systematically record assessments of fundus images. Carefully designed labeling tools not only save time and enable comprehensive and thorough assessment in clinics, but also streamline the large-scale data collection needed to develop automatic algorithms. To realize efficient and thorough fundus assessment, we developed a new labeling tool with two novel schemes: stepwise labeling and regional encoding. We have used our tool in a large-scale annotation project in which 318,376 annotations for 109,885 fundus images were gathered over a total duration of 421 hours. We believe the fundamental concepts in our tool can inform data collection and annotation procedures in other domains.
MICCAI OMIA 2018 | 2018-09-18

Classification of Findings with Localized Lesions in Fundoscopic Images using a Regionally Guided CNN


Fundoscopic images are often investigated by ophthalmologists to spot abnormal lesions and make diagnoses. Recent successes of convolutional neural networks have been confined to diagnoses of a few diseases, without proper localization of lesions. In this paper, we propose an efficient annotation method for localizing lesions and a CNN architecture that can classify an individual finding and localize its lesions at the same time. We also introduce a new loss function that guides the network to learn meaningful patterns from the regional annotations. In experiments, we demonstrate that our network outperforms a widely used baseline network and that the guidance loss improves AUROC by up to 4.1% while yielding superior localization capability.
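One simple way to realize such a regionally guided loss is to penalize the fraction of class-activation mass that falls outside the annotated lesion region. This is a sketch under our own assumptions (names and formulation are illustrative, not necessarily the paper's exact loss):

```python
def guidance_loss(activation_map, region_mask, eps=1e-7):
    """Fraction of activation mass outside the annotated region.

    activation_map: non-negative per-pixel class activations (flattened).
    region_mask: 1 inside the annotated lesion region, 0 outside.
    Minimizing this term pushes the CNN to activate only where the
    regional annotation indicates the lesion is.
    """
    outside = sum(a for a, m in zip(activation_map, region_mask) if m == 0)
    total = sum(activation_map) + eps
    return outside / total

# Perfectly localized activations incur no penalty...
inside_only = guidance_loss([0.9, 0.0, 0.7, 0.0], [1, 0, 1, 0])
# ...while activations leaking outside the annotated region are penalized.
leaking = guidance_loss([0.9, 0.5, 0.7, 0.5], [1, 0, 1, 0])
```

In training, a term like this would typically be added to the standard classification loss with a weighting hyperparameter, so the network learns to classify and localize simultaneously.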
Acute and Critical Care | 2018-08-31

Deep Learning in the Medical Domain: Predicting Cardiac Arrest Using Deep Learning


With the wider adoption of electronic health records, rapid response teams initially expected mortality to fall significantly, but low accuracy and frequent false alarms have left such systems fraught with challenges. Rule-based methods (e.g., the Modified Early Warning Score) and machine learning methods (e.g., random forests) have been proposed as solutions but have not proven effective. In this article, we introduce DeepEWS (Deep learning based Early Warning Score), which is based on a novel deep learning algorithm. Relative to the standard of care and current solutions in the marketplace, DeepEWS achieves higher accuracy, and its accuracy remains superior in the clinical setting even when the number of alarms is taken into account.
Journal of Korean Medical Science | 2018-08-08

A Novel Fundus Image Reading Tool for Efficient Generation of a Multi-dimensional Categorical Image Database for Machine Learning Algorithm Training


Background: We describe a novel multi-step retinal fundus image reading system for providing high-quality, large-scale data for machine learning algorithms, and assess grader variability in the large-scale dataset generated with this system.

Methods: A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders, and agreement among graders was evaluated for each decision.

Results: A total of 234,242 readings of 79,458 images were collected from 55 licensed ophthalmologists over 6 months. 34,364 images were graded as abnormal by at least one rater. Of these, all three raters agreed on abnormality for 46.6% of images, while 69.9% were rated abnormal by two or more raters. The agreement rate of at least two raters on a given finding was 26.7%–65.2%, and the complete agreement rate of all three raters was 5.7%–43.3%. For diagnoses, agreement of at least two raters was 35.6%–65.6%, and the complete agreement rate was 11.0%–40.0%. Agreement on findings and diagnoses was higher when restricted to images with prior complete agreement on abnormality. Retinal and glaucoma specialists showed higher agreement on findings and diagnoses within their subspecialties.

Conclusion: This novel reading tool for retinal fundus images generated a large-scale dataset with a high level of information, which can be utilized in the future development of machine learning algorithms for automated identification of abnormal conditions and clinical decision support systems. These results emphasize the importance of addressing grader variability in algorithm development.