This study aimed to compare shallow and deep learning methods for classifying the patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1200 regions of interest (ROIs): 600 ROIs were acquired with a GE scanner and 600 with a Siemens scanner, and each group of 600 comprised 100 ROIs for each of six subregion types, namely normal tissue and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers, consisting of four convolution layers and two fully connected layers. The classification results were compared with those of a shallow learning method, a support vector machine (SVM). The CNN classifier showed significantly better accuracy than the SVM classifier, by 6–9%. As the number of convolution layers increased, the classification accuracy of the CNN improved from 81.27% to 95.12%. In particular, for pathologically ambiguous cases, such as normal versus emphysema or honeycombing versus reticular opacity, adding convolution layers markedly reduced the misclassification rate between the corresponding classes. In conclusion, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results reflected structural characteristics inherent to the specific ILD patterns.
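The architecture above stacks four convolution layers before two fully connected layers. A minimal sketch of how the feature-map size flows through such a stack is shown below; the ROI patch size, kernel sizes, padding, and pooling are illustrative assumptions, not values taken from the study.

```python
# Illustrative sketch of feature-map sizes through a CNN with four
# convolution layers followed by two fully connected layers.
# Patch size, kernels, padding, and pooling are assumed for illustration.

def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

size = 32                        # assumed square ROI patch, 32x32 pixels
for kernel in (3, 3, 3, 3):      # four convolution layers
    size = conv_out(size, kernel, padding=1)  # 'same' padding keeps size
    size //= 2                   # assumed 2x2 max pooling halves each side
print(size)                      # the flattened map feeds the two FC layers
```

Deeper stacks of this kind let later layers see progressively larger contexts of the ROI, which is consistent with the reported accuracy gains as convolution layers are added.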
Recent advances in deep learning have brought many breakthroughs in medical image analysis by providing more robust and consistent tools for the detection, classification, and quantification of patterns in medical images. Analysis of advanced modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) has benefited most from the data-driven nature of deep learning, because the need for a knowledge- and experience-driven feature engineering process can be circumvented: representative features are derived automatically from complex, high-dimensional medical images with respect to the target tasks. In this paper, we review recent applications of deep learning in the analysis of CT and MR images across a range of tasks and target organs. While most applications focus on enhancing the productivity and accuracy of current diagnostic analysis, we also introduce some promising applications that could significantly change the current workflow of medical imaging. We conclude by discussing opportunities and challenges of applying deep learning to advanced imaging and suggest future directions in this domain.
While computer-aided detection (CADe) systems can achieve high sensitivity, their relatively low specificity has limited their implementation in the clinical setting. One of the major limiting factors for false-positive reduction is the lack of good-quality labeled data (with lesion labels). Our approach to this problem is to utilize unlabeled data (with unknown lesion and class labels), which tends to be more readily available. The goal of this study is to develop a semi-supervised learning method that allows us to find pseudo-negative labeled data within the unlabeled data and use these to improve the specificity of the detection task. We then compare this to the false-positive reduction achieved using clinically verified negative data, which is the theoretical optimum within our model and data setting.
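The pseudo-negative mining step can be sketched as follows, assuming a detector that outputs a lesion probability for each unlabeled candidate; the threshold and scores are illustrative, not values from the study.

```python
# Minimal sketch of pseudo-negative mining from unlabeled data.
# Assumes a detector (trained on the labeled set) has already scored
# each unlabeled candidate with a lesion probability.

def select_pseudo_negatives(unlabeled_scores, threshold=0.05):
    """Return indices of candidates confidently scored as non-lesion.

    unlabeled_scores: predicted lesion probabilities for unlabeled
    candidates; threshold is an assumed confidence cutoff.
    """
    return [i for i, p in enumerate(unlabeled_scores) if p < threshold]

# Toy example: candidates 1 and 3 score well below the cutoff.
scores = [0.40, 0.01, 0.70, 0.03, 0.20]
print(select_pseudo_negatives(scores))  # [1, 3]
```

The selected pseudo-negatives would then be added to the negative training set and the detector retrained, with the aim of reducing false positives without requiring additional lesion annotations.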
Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases in fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Beyond under-segmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates a precise map of retinal vessels using generative adversarial training. Our method achieves a Dice coefficient of 0.829 on the DRIVE dataset and 0.834 on the STARE dataset, which is the state-of-the-art performance on both datasets.
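The Dice coefficient reported above measures the overlap between the predicted vessel map and the manual annotation. A minimal version for flattened binary masks (the masks here are toy values, not segmentation outputs):

```python
# Dice coefficient for two equal-length binary masks (flattened).
# Dice = 2 * |pred AND truth| / (|pred| + |truth|)

def dice(pred, truth):
    """Overlap measure in [0, 1]; 1.0 means perfect agreement."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 1, 0]   # toy predicted vessel pixels
truth = [1, 0, 0, 1, 1]   # toy annotated vessel pixels
print(round(dice(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

Because the denominator counts both masks' foreground pixels, Dice penalizes over-segmentation and under-segmentation symmetrically, which matches the paper's concern with precise vessel widths.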
To evaluate the accuracy of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice.
Bone age assessment is a critical procedure in pediatric radiology for the diagnosis of many disorders and the assessment of response to treatment. It can be performed by either the Greulich-Pyle (GP) or the Tanner-Whitehouse (TW) method. Although the GP method is convenient to use, it can lead to subjective results. To overcome these limitations, there have been a few trials of automated bone age assessment, including commercial systems. To verify the validity of these attempts, we developed a bone age assessment system based on a convolutional neural network (CNN) trained on Korean data and compared our system with a commercial system (CS) and Harvard's system (HS).
Interstitial lung diseases (ILDs) represent a major cause of morbidity and mortality. High-resolution computed tomography (HRCT) has become critical to characterizing the imaging patterns of ILD, but this approach remains vulnerable to inter- and intra-observer variation. To overcome human variation, automated techniques have been applied to differentiate a variety of obstructive lung diseases based on density-histogram features and texture analyses [1-8]. Quantitative assessment of lung parenchymal texture is important for analyzing and differentiating regional diseased patterns of ILD, which leads to content-based image retrieval (CBIR). By applying a deep learning technique with a Siamese convolutional neural network (CNN) to the raw images and the classified disease patterns, 3D CBIR on HRCT is potentially useful for diagnosis and decision support, retrieving HRCT scans of similar, previously diagnosed patients with known treatment responses and survival. To address these unmet clinical needs, we have developed the DILD CBIR platform, a deep learning-based CBIR system, together with an evaluation tool using 100 known paired HRCT scans from the same patients, thus providing efficient and reliable quantification for assessing CBIR performance in ILD patients.
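The retrieval step of such a CBIR system can be sketched as ranking database scans by the similarity of their learned embeddings to the query's embedding (e.g., features produced by a Siamese CNN). The embeddings below are toy vectors, not real HRCT features, and the cosine-similarity ranking is one common choice rather than the platform's documented method.

```python
# Hedged sketch of CBIR retrieval: rank database embeddings by cosine
# similarity to a query embedding. Vectors are toy values.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query, database, top_k=2):
    """Return indices of the top_k most similar database embeddings."""
    ranked = sorted(range(len(database)),
                    key=lambda i: cosine(query, database[i]),
                    reverse=True)
    return ranked[:top_k]

query = [1.0, 0.0, 1.0]          # embedding of the query HRCT
database = [[0.9, 0.1, 1.1],     # very similar scan
            [0.0, 1.0, 0.0],     # dissimilar scan
            [1.0, 0.2, 0.8]]     # similar scan
print(retrieve(query, database))  # [0, 2]
```

The 100 known paired HRCT scans mentioned above provide a natural evaluation signal for this step: a retrieval is correct when the other scan of the same patient ranks near the top.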
To determine whether deep learning artificial intelligence can be used to automatically estimate and screen for osteoporosis on abdominal CT exams.