While computer-aided detection (CADe) systems can achieve high sensitivity, their relatively low specificity has limited their implementation in the clinical setting. One of the major limiting factors for false-positive reduction is the lack of good-quality labeled data (with lesion labels). Our approach to this problem is to utilize unlabeled data (with unknown lesion and class labels), which tends to be more readily available. The goal of this study is to develop a semi-supervised learning method that finds pseudo-negative labeled data within unlabeled data and uses it to improve the specificity of the detection task. We then compare this to the false-positive reduction achieved using clinically verified negative data, which is the theoretical optimum within our model and data setting.
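The core idea of mining pseudo-negatives from unlabeled data can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the toy scores, and the confidence threshold are all assumptions for demonstration purposes.

```python
import numpy as np

def mine_pseudo_negatives(scores, threshold=0.05):
    """Select unlabeled candidates whose predicted lesion probability
    falls below `threshold`, treating them as pseudo-negative examples.
    (Illustrative sketch; names and threshold are hypothetical.)"""
    scores = np.asarray(scores)
    return np.where(scores < threshold)[0]

# Toy detector scores for five unlabeled candidate regions
scores = [0.01, 0.80, 0.03, 0.40, 0.02]
pseudo_neg_idx = mine_pseudo_negatives(scores)
print(pseudo_neg_idx.tolist())  # -> [0, 2, 4]
```

The selected pseudo-negatives can then be added to the training set of a false-positive-reduction classifier, approximating what clinically verified negatives would provide.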
Retinal vessel segmentation is an indispensable step in the automatic detection of retinal diseases from fundoscopic images. Although many approaches have been proposed, existing methods tend to miss fine vessels or produce false positives at terminal branches. Beyond under-segmentation, over-segmentation is also problematic when quantitative studies need to measure vessel width precisely. In this paper, we present a method that generates a precise map of retinal vessels using generative adversarial training. Our method achieves a Dice coefficient of 0.829 on the DRIVE dataset and 0.834 on the STARE dataset, which is state-of-the-art performance on both datasets.
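The Dice coefficient used for evaluation measures overlap between the predicted and reference vessel masks, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x3 vessel masks: 2 overlapping pixels, 3 positives in each mask
pred   = [[1, 1, 0], [0, 1, 0]]
target = [[1, 0, 0], [0, 1, 1]]
print(round(dice_coefficient(pred, target), 3))  # -> 0.667
```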
To evaluate the accuracy of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice.
Bone age assessment is a critical procedure in pediatric radiology for the diagnosis of many disorders and the assessment of response to treatment. It can be performed by either the Greulich-Pyle (GP) or the Tanner-Whitehouse (TW) method. Although the GP method is convenient to use, it can lead to subjective results. To overcome these limitations, there have been several attempts at automated bone age assessment, including commercialized systems. To verify the validity of these attempts, we developed a Korean-population-based bone age assessment system built on a convolutional neural network (CNN) and compared it with a commercialized system (CS) and Harvard's system (HS).
Interstitial lung diseases (ILDs) represent a major cause of morbidity and mortality. High-resolution computed tomography (HRCT) has become critical for characterizing the imaging patterns of ILD, but this approach remains vulnerable to inter- and intra-observer variation. To overcome human variation, automated techniques have been applied to differentiate a variety of obstructive lung diseases based on density-histogram features and texture analyses [1-8]. Quantitative assessment of lung parenchymal texture is important for analyzing and differentiating regional diseased patterns of ILD, which in turn enables content-based image retrieval (CBIR). Using a deep learning technique with a Siamese convolutional neural network (CNN) applied to the raw images and classified disease patterns, 3D CBIR on HRCT is potentially useful for diagnosis and decision support: it retrieves HRCT scans similar to the referring patient's from previously diagnosed patients with known treatment response and survival. To address these unmet clinical needs, we have developed the DILD CBIR platform, a deep learning-based CBIR system, together with an evaluation tool using 100 known paired HRCTs from the same patients, thereby providing an efficient and reliable quantification of CBIR performance for ILD patients.
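The retrieval step of such a CBIR system can be sketched as a nearest-neighbor search in the embedding space produced by the Siamese CNN. This is a simplified illustration under assumed names and toy 2-D embeddings; the actual platform's embedding dimensionality and distance metric are not specified in the source.

```python
import numpy as np

def retrieve_similar(query_emb, db_embs, k=3):
    """Rank database cases by Euclidean distance to the query embedding
    (as produced by a Siamese network) and return the top-k indices.
    (Illustrative sketch; function name and inputs are hypothetical.)"""
    distances = np.linalg.norm(db_embs - query_emb, axis=1)
    return np.argsort(distances)[:k]

# Toy 2-D embeddings for five previously diagnosed HRCT cases
db = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.2]])
query = np.array([0.0, 0.0])
print(retrieve_similar(query, db, k=3).tolist())  # -> [0, 2, 4]
```

In the evaluation setting described above, the paired HRCT of the same patient should ideally rank near the top of the retrieved list.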
To determine whether deep learning-based artificial intelligence can be used to automatically estimate and screen for osteoporosis on abdominal CT examinations.
The purpose of this demonstration is to showcase the Aview ILD texture platform for automatic differentiation of subregional diseased patterns of diffuse infiltrative lung diseases and quantitative analysis on HRCT. The educational demonstration of the Aview ILD texture platform will use a computer-based hands-on demonstration at RSNA. We will set up a cloud deployment of the Aview ILD texture platform using multiple computers, one as the thin-client server and the other for the thin-client and mobile interfaces. The demonstration will cover the entire workflow, ranging from image acquisition protocol, automated postprocessing, interactive reviewing, and automated measurements to advanced analysis and structured reporting, and will use patient cases selected from our clinical study approved by the institutional review board of Asan Medical Center, anonymized in accordance with the HIPAA Privacy Rule.
To introduce a deep learning-based feature extraction method that adaptively learns the most significant features for a given task, using a deep structure to classify six kinds of regional patterns in diffuse lung disease.