Publications


RSNA 2019 | 2019-12-03

A Deep Learning-Based CAD that Can Reduce False Negative Reports: A Preliminary Study in Health Screening Center

Hyunho Park, MD, Soo-Young Ham, MD, PhD, Hwa-Young Kim, MD, Hyon Joo Kwag, MD, Seungho Lee, MS, Gwangbeen Park, MS, Sangkeun Kim, MS, Minsuk Park, MS, Jin-Kyeong Sung, MD, PhD, Kyu-Hwan Jung, PhD

Abstract

PURPOSE
To evaluate the clinical value of a deep learning-based computer-aided detection (DLCAD) model that can reduce false negative reports on screening chest CTs previously read as normal.

METHOD AND MATERIALS
A DLCAD consisting of a 2.5D CNN for candidate detection and a 3D CNN for false positive reduction was trained on the public LIDC-IDRI dataset. Preliminary validation on the same dataset yielded 90.7% sensitivity at a threshold of one false positive per scan. Ten thousand low-dose chest CT cases reported as normal were collected from a single-center screening cohort spanning 2011 to 2015, where 'normal' was defined as containing no malignant or benign lesions. The DLCAD analyzed these cases and detected nodule candidates, and four radiologists independently reviewed the CAD results. When a candidate nodule was accepted, its type (solid, part-solid, or ground-glass nodule [GGN]) and size were annotated.
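
The two-stage design described above can be outlined as follows; this is a minimal PyTorch sketch under assumed layer sizes and channel counts, not the authors' implementation.

```python
# Minimal sketch of a two-stage nodule CAD (assumed architecture): a 2.5D CNN
# scores candidates on stacks of adjacent axial slices, then a 3D CNN
# classifies a volumetric patch around each candidate to reduce false positives.
import torch
import torch.nn as nn

class CandidateDetector2p5D(nn.Module):
    """2.5D detector: treats N adjacent axial slices as input channels."""
    def __init__(self, n_slices=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_slices, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # per-pixel candidate score map

    def forward(self, x):                # x: (B, n_slices, H, W)
        return torch.sigmoid(self.head(self.features(x)))

class FalsePositiveReducer3D(nn.Module):
    """3D classifier applied to small volumetric patches around candidates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, patch):            # patch: (B, 1, D, H, W)
        return torch.sigmoid(self.net(patch))
```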

RESULTS
The DLCAD analyzed 9952 cases (48 cases with inappropriate parameters, scan range, or field of view were excluded) and detected 471 nodule candidates. Among them, 283 nodules from 269 patients were judged to be true nodules by at least three radiologists. After excluding 67 nodules without sufficient consensus, 216 nodules were assigned the same diameter range and nodule type by at least three radiologists. Of these 216 nodules, 151 (69.9%) were solid, three (1.4%) were part-solid, and 62 (28.7%) were GGNs. Among the 151 solid nodules, 10 (6.6%) were 6 mm or larger (eight [5.3%] 6 to 8 mm, two [1.3%] 8 to 15 mm) and 141 (93.4%) were smaller than 6 mm. All three part-solid nodules were smaller than 6 mm, and all 62 GGNs were smaller than 20 mm. According to Lung-RADS, two solid nodules were category 4A, eight were category 3, and the remaining 206 nodules were category 2.

CONCLUSION
The deep learning-based CAD detected neglected nodules, i.e., false negative reports, in 2.7% (269/9952) of screening cases previously read as normal. Of the 216 consensus nodules, 10 (4.6%) were Lung-RADS category 3 or higher and therefore require follow-up scans.

CLINICAL RELEVANCE/APPLICATION
The deep learning-based CAD can play an ancillary role as a safeguard and a competent second reader by reducing false negative rates.

RSNA 2019 | 2019-12-03

Evaluation of the Performance of Deep Learning Models Trained on a Combination of Major Abnormal Patterns on Chest Radiographs for Major Chest Diseases at International Multi-Centers

Woong Bae, MS, Beohee Park, MS, Minki Jung, MS, Jin-Kyeong Sung, MD, PhD, Kyu-Hwan Jung, PhD, Sang Min Lee, MD, PhD, Joon Beom Seo, MD, PhD

Abstract

PURPOSE
To evaluate the abnormal classification performance for major chest diseases of a deep learning model trained on a combination of major abnormal patterns on chest radiographs.

METHOD AND MATERIALS
We evaluated the abnormal classification performance of a deep learning model for major diseases (tuberculosis and pneumonia) trained on combinations of different patterns (nodule, consolidation, and interstitial opacity) on chest radiographs (CRs). To evaluate the effect of each pattern combination, we tested five cases: nodule only, consolidation only, interstitial opacity only, consolidation plus interstitial opacity, and all three patterns combined. All normal data were used when training each case. The training datasets, collected from two hospitals, consisted of 2095 nodule, 2401 consolidation, 1290 interstitial opacity, and 3000 normal CRs; all abnormal CRs were clinically confirmed by CT. For an unbiased evaluation, public datasets were used for testing: the Shenzhen dataset (normal: 326, tuberculosis: 336) for tuberculosis and a random selection from the PadChest dataset (normal: 300, pneumonia: 127) for pneumonia.
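
The five training configurations amount to pooling all normal CRs with different subsets of the abnormal patterns; a hypothetical sketch of how the cases might be enumerated, with the counts taken from the abstract:

```python
# Hypothetical enumeration of the five training configurations: each case
# pools all normal CRs with one subset of abnormal patterns. Counts are the
# per-pattern training-set sizes reported above.
PATTERN_COUNTS = {"nodule": 2095, "consolidation": 2401,
                  "interstitial_opacity": 1290}
NORMAL_COUNT = 3000

CASES = [
    ["nodule"],
    ["consolidation"],
    ["interstitial_opacity"],
    ["consolidation", "interstitial_opacity"],
    ["nodule", "consolidation", "interstitial_opacity"],
]

def build_training_set(case):
    """Return (label, count) pairs: all normals plus the selected patterns."""
    return [("normal", NORMAL_COUNT)] + [(p, PATTERN_COUNTS[p]) for p in case]

for case in CASES:
    print(" + ".join(case), "->", build_training_set(case))
```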

RESULTS
On the test datasets, the models trained with the five pattern cases achieved AUCs (tuberculosis / pneumonia) of 0.58 / 0.69 for nodule only, 0.76 / 0.82 for consolidation only, 0.52 / 0.76 for interstitial opacity only, 0.79 / 0.83 for consolidation plus interstitial opacity, and 0.79 / 0.82 for all three patterns combined.

CONCLUSION
Our experiments show that a deep learning model trained on data with major patterns (nodule, consolidation, interstitial opacity) can classify major diseases (tuberculosis, pneumonia) as abnormal. Consolidation was highly correlated with both tuberculosis and pneumonia, whereas interstitial opacity was more strongly correlated with pneumonia and nodule with tuberculosis.

CLINICAL RELEVANCE/APPLICATION
Diagnosis based on the patterns of abnormal findings allows detection of various diseases.

RSNA 2019 | 2019-12-02

Deep Learning-Based Automated Segmentation of Prostate Cancer on Multiparametric MRI: Comparison with Experienced Uroradiologists

Wonmo Jung, MD, PhD, Sung Il Hwang, MD, Sejin Park, MS, Jin-Kyeong Sung, MD, PhD, Kyu-Hwan Jung, PhD, Hyungwoo Ahn, MD, PhD, Hak Jong Lee, MD, PhD, Sang Youn Kim, Myoung Seok Lee, MD, Younggi Kim

Abstract

PURPOSE
To compare the performance of deep learning-based prostate cancer (PCa) segmentation with manual segmentation by experienced uroradiologists.

METHOD AND MATERIALS
From January 2011 to April 2018, 350 patients who underwent prostatectomy for prostate cancer were enrolled retrospectively. To establish the histopathological ground truth, pathology slides of the whole resected prostate were scanned and PCa lesions were outlined by a uropathologist with 25 years of experience. With reference to the histopathological lesions, the radiological ground truth of PCa was drawn on T2-weighted images by a uroradiologist with 19 years of experience. A U-Net-type deep neural network, in which the encoder has more convolution blocks than the decoder, was trained for segmentation. Four MR sequences, T2-weighted images, diffusion-weighted images (b = 0, 1000), and apparent diffusion coefficient (ADC) maps, were used as input after affine registration. In addition to the automatic segmentation by the deep neural network, two experienced uroradiologists marked suspected PCa sectors among the 39 sectors defined by PI-RADS v2 after reviewing the same four MR sequences. Their manual segmentation performance was measured by the number of sectors that coincided with the ground-truth PCa lesion.
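
The asymmetric encoder-decoder and the four-channel multiparametric input described above might look roughly like the following PyTorch sketch; the stage depths and channel widths are illustrative assumptions, not the authors' network.

```python
# Sketch of a U-Net-style net whose encoder has more convolution blocks than
# its decoder, taking a 4-channel input (T2, DWI b=0, DWI b=1000, ADC) stacked
# after affine registration. Sizes are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(cin, cout, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU()]
    return nn.Sequential(*layers)

class AsymmetricUNet(nn.Module):
    def __init__(self, in_ch=4):
        super().__init__()
        # Encoder: three convs per stage (deeper than the decoder).
        self.enc1 = conv_block(in_ch, 32, n_convs=3)
        self.enc2 = conv_block(32, 64, n_convs=3)
        self.pool = nn.MaxPool2d(2)
        # Decoder: a single conv per stage.
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32, n_convs=1)  # 64 = skip(32) + up(32)
        self.out = nn.Conv2d(32, 1, 1)             # PCa probability map

    def forward(self, x):                          # x: (B, 4, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d1))
```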

RESULTS
The Dice coefficient scores (DCSs) achieved by the two uroradiologists, calculated from the number of sectors, were 0.490 and 0.310, respectively. The DCS of the automatic segmentation by the deep neural network, calculated from the number of pixels, was 0.558, slightly better than the uroradiologists' average DCS of 0.40.
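
For reference, the DCS is the standard Dice overlap, applied here at sector level for the radiologists and at pixel level for the network; a minimal sketch with made-up sector indices:

```python
# Dice coefficient score (DCS): 2|A ∩ B| / (|A| + |B|). The same formula is
# used over the 39 PI-RADS v2 sectors (radiologists) or over pixels (network).
import numpy as np

def dice(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Sector-level example (indices are made up): 1 = marked as suspicious.
pred_sectors = np.zeros(39); pred_sectors[[3, 4, 5]] = 1
true_sectors = np.zeros(39); true_sectors[[4, 5, 6]] = 1
print(f"sector DCS = {dice(pred_sectors, true_sectors):.3f}")  # 0.667
```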

CONCLUSION
Automated segmentation of PCa on multiparametric MRI based on histopathologically confirmed lesion labels achieved performance comparable to that of experienced uroradiologists.

CLINICAL RELEVANCE/APPLICATION
Automated segmentation of prostate cancer using a deep neural network not only reduces time-consuming work but also provides the reliable location and size information required for treatment decisions.

RSNA 2019 | 2019-12-01

Deep Learning Algorithm for Reducing CT Slice Thickness: Effect on Reproducibility of Radiomics in Lung Cancer

Sohee Park, MD, Sang Min Lee, MD, PhD, Kyu-Hwan Jung, PhD, Hyunho Park, MD, Woong Bae, MS, Joon Beom Seo, MD, PhD

Abstract

PURPOSE 
To retrospectively assess the effect of CT slice thickness on the reproducibility of radiomic features (RFs) of lung cancer, and to investigate if convolutional neural network (CNN)-based super-resolution (SR) algorithms can improve the reproducibility of RFs obtained from different slice thicknesses. 

METHOD AND MATERIALS
CT images from 100 pathologically proven lung cancers acquired between July 2017 and December 2017 were evaluated, including 1, 3, and 5 mm slice thicknesses. CNN-based SR algorithms using residual learning were developed to convert thick-slice images into 1 mm slices. Lung cancers were semi-automatically segmented and a total of 702 RFs (tumor intensity, texture, and wavelet features) were extracted from 1, 3, and 5 mm slices, as well as the 1 mm slices generated from the 3 and 5 mm images. The stabilities of the RFs were evaluated using concordance correlation coefficients (CCCs).
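
Residual learning here means the network predicts only the difference between an interpolated thick-slice volume and the true thin-slice volume; a minimal PyTorch sketch of that idea, with the depth, width, and upsampling strategy as assumptions rather than the authors' model:

```python
# Sketch of a residual-learning SR CNN for through-plane slice conversion
# (e.g., 5 mm -> 1 mm). The network refines a trilinear upsampling by
# predicting only the residual detail, which stabilizes training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSR(nn.Module):
    def __init__(self, n_layers=8, width=64):
        super().__init__()
        layers = [nn.Conv3d(1, width, 3, padding=1), nn.ReLU()]
        for _ in range(n_layers - 2):
            layers += [nn.Conv3d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv3d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, thick, scale):     # thick: (B, 1, D, H, W)
        # Coarse thin-slice volume by trilinear upsampling along z only.
        coarse = F.interpolate(thick, scale_factor=(scale, 1, 1),
                               mode='trilinear', align_corners=False)
        return coarse + self.body(coarse)  # add the learned residual

# e.g., 5 mm -> 1 mm slices:
# out = ResidualSR()(torch.randn(1, 1, 20, 64, 64), scale=5)
```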

RESULTS 
All CT scans were successfully converted to 1 mm slice images at a rate of 2.5 s/slice. The mean CCCs for the comparisons of original 1 vs 3 mm, 1 vs 5 mm, and 3 vs 5 mm images were 0.41, 0.27, and 0.65, respectively (all P < 0.001). Tumor intensity features showed the best reproducibility and wavelet features the lowest. Only a minority of RFs were reproducible (CCC >= 0.85): 3.6%, 1.0%, and 21.5% for the three pairings, respectively. By nodule type, GGNs showed better reproducibility than solid nodules in all RF classes and all slice-thickness pairings (P < 0.001 for 1 vs 3 mm and 1 vs 5 mm; P = 0.002 for 3 vs 5 mm). After applying the CNN-based SR algorithm, reproducibility improved significantly in all three pairings (mean CCCs: 0.58, 0.45, and 0.72; all P < 0.001), an improvement also observed in subgroups based on RF class and nodule type. The proportion of reproducible RFs likewise increased (to 36.3%, 17.4%, and 36.9%, respectively).
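
The agreement measure used above is Lin's concordance correlation coefficient; for reference, a minimal implementation with made-up feature values:

```python
# Lin's concordance correlation coefficient (CCC), used to quantify the
# agreement of a radiomic feature measured at two slice thicknesses.
import numpy as np

def ccc(x, y):
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Made-up example: one feature measured on 1 mm vs 5 mm images of 5 tumors.
f_1mm = np.array([1.0, 2.1, 3.0, 4.2, 5.1])
f_5mm = np.array([1.3, 1.9, 3.4, 3.8, 5.5])
print(f"CCC = {ccc(f_1mm, f_5mm):.3f}")
```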

CONCLUSION 
The reproducibility of RFs in lung cancer is significantly influenced by CT slice thickness and can be improved by CNN-based SR algorithms.

CLINICAL RELEVANCE/APPLICATION 
On the basis of our findings, comparisons of radiomics results derived from CT images with different slice thicknesses may be unreliable. As our convolutional neural network-based image conversion algorithm is easily applicable and reliable, it may be used to enhance the reproducibility of radiomic features when CT slice thicknesses differ.

Scientific Reports | 2019-11-26

DeNTNet: Deep Neural Transfer Network for the detection of periodontal bone loss using panoramic dental radiographs

Jaeyoung Kim, Hong-Seok Lee, In-Seok Song, and Kyu-Hwan Jung

Abstract

In this study, we propose a deep learning-based method for an automated diagnostic support system that detects periodontal bone loss in panoramic dental radiographs. The proposed method, called DeNTNet, not only detects lesions but also reports the corresponding tooth numbers according to dental federation notation. DeNTNet applies deep convolutional neural networks (CNNs) with transfer learning and clinical prior knowledge to overcome the morphological variation of the lesions and the imbalanced training dataset. With 12,179 panoramic dental radiographs annotated by experienced dental clinicians, DeNTNet was trained, validated, and tested on 11,189, 190, and 800 radiographs, respectively. Each experimental model was subjected to a comparative study to demonstrate the validity of each phase of the proposed method. On the test set, DeNTNet achieved an F1 score of 0.75, compared with an average of 0.69 for dental clinicians.
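
The transfer-learning step can be pictured as reusing a backbone trained on a related dental task to initialize the lesion detector; a hedged PyTorch sketch, where the module shapes and the choice of pretraining task are assumptions for illustration only:

```python
# Hedged sketch of the transfer-learning idea: a convolutional backbone
# trained on an auxiliary dental task is reused to initialize the periodontal
# bone loss detector. Shapes and the auxiliary task are assumptions.
import torch.nn as nn

def make_backbone():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    )

backbone = make_backbone()
# Stage 1: train `backbone` plus an auxiliary head on the pretraining task.
# Stage 2: transfer the pretrained backbone into the lesion-detection model.
lesion_model = nn.Sequential(
    backbone,             # transferred feature extractor
    nn.Conv2d(64, 1, 1),  # per-pixel lesion score map
)
```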
