Publications


Journal of the American Heart Association | 2018-06-26

An Algorithm Based on Deep Learning for Predicting In‐Hospital Cardiac Arrest


Abstract

In‐hospital cardiac arrest is a major burden to public health, which affects patient safety. Although traditional track‐and‐trigger systems are used to predict cardiac arrest early, they have limitations, with low sensitivity and high false‐alarm rates. We propose a deep learning–based early warning system that shows higher performance than the existing track‐and‐trigger systems. This retrospective cohort study reviewed patients who were admitted to 2 hospitals from June 2010 to July 2017. A total of 52 131 patients were included. Specifically, a recurrent neural network was trained using data from June 2010 to January 2017. The result was tested using the data from February to July 2017. The primary outcome was cardiac arrest, and the secondary outcome was death without attempted resuscitation. As comparative measures, we used the area under the receiver operating characteristic curve (AUROC), the area under the precision–recall curve (AUPRC), and the net reclassification index. Furthermore, we evaluated sensitivity while varying the number of alarms. The deep learning–based early warning system (AUROC: 0.850; AUPRC: 0.044) significantly outperformed a modified early warning score (AUROC: 0.603; AUPRC: 0.003), a random forest algorithm (AUROC: 0.780; AUPRC: 0.014), and logistic regression (AUROC: 0.613; AUPRC: 0.007). Furthermore, the deep learning–based early warning system reduced the number of alarms by 82.2%, 13.5%, and 42.1% compared with the modified early warning system, random forest, and logistic regression, respectively, at the same sensitivity. An algorithm based on deep learning had high sensitivity and a low false‐alarm rate for detection of patients with cardiac arrest in the multicenter study.
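The two headline metrics in this study, AUROC and AUPRC, can be illustrated with a minimal sketch. The code below is hypothetical and not the authors' implementation: it generates synthetic risk scores for a rare positive class (mimicking the low incidence of in-hospital cardiac arrest) and computes both areas from scratch.

```python
# Hypothetical sketch of AUROC and AUPRC on synthetic scores (pure Python;
# none of this is the study's code, and all numbers are made up).
import random

random.seed(0)

# Synthetic data: rare positives, as with in-hospital cardiac arrest.
y_true = [1 if random.random() < 0.02 else 0 for _ in range(5000)]
y_score = [random.gauss(1.5 if y else 0.0, 1.0) for y in y_true]

def auroc(labels, scores):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """Average precision: precision averaged at each true-positive rank."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, ap = 0, 0.0
    for i, (_, y) in enumerate(ranked, start=1):
        if y:
            tp += 1
            ap += tp / i
    return ap / max(tp, 1)

print(f"AUROC={auroc(y_true, y_score):.3f}  AUPRC={auprc(y_true, y_score):.3f}")
```

With a rare outcome, AUPRC is far more demanding than AUROC, which is why the study reports both: a random classifier scores 0.5 on AUROC but only the prevalence rate on AUPRC.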

Journal of Digital Imaging | 2018-06-12

Laterality Classification of Fundus Images using Interpretable Deep Neural Networks


Abstract

In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in the image and evaluates the uncertainty of the decision with proper analytic tools. Our model was trained and tested with 25,911 fundus images (43.4% macula-centered images and 28.3% each of superior and nasal retinal fundus images). Activation maps were generated to mark the regions in the image that were important for the classification, and uncertainties were quantified to help explain why certain images were incorrectly classified under the proposed model. Our model achieved a mean training accuracy of 99%, which is comparable to the performance of clinicians. Strong activations were detected at the optic disc and the retinal blood vessels around it, which matches the regions clinicians attend to when judging laterality. Uncertainty analysis showed that misclassified images tend to carry high prediction uncertainty and are likely ungradable. We believe that visualizing informative regions and estimating uncertainty, alongside the prediction result itself, enhances the interpretability of neural network models in a way that clinicians can benefit from when using an automatic classification system.
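The paper's uncertainty idea can be sketched with one common proxy: the entropy of the softmax output. This is a hypothetical illustration, not the authors' method; it shows how a confident left/right call and a near coin-flip call separate on a single scalar that could flag likely ungradable images for review.

```python
# Hypothetical sketch: softmax entropy as a prediction-uncertainty proxy.
# The logit values below are made up for illustration.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy in nats; 0 = fully confident, ln(2) = max for 2 classes."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([4.0, -2.0])   # clearly one laterality
ambiguous = softmax([0.1, -0.1])   # nearly a coin flip

print(entropy(confident), entropy(ambiguous))
```

Thresholding such a score is one simple way to route high-uncertainty images to a clinician rather than trusting the automatic label.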

Journal of Digital Imaging

Comparison of Shallow and Deep Learning Methods on Classifying the Regional Pattern of Diffuse Lung Disease


Abstract

This study aimed to compare shallow and deep learning for classifying the regional patterns of interstitial lung disease (ILD). Using high-resolution computed tomography images, two experienced radiologists marked 1,200 regions of interest (ROIs), in which 600 ROIs were each acquired using a GE or Siemens scanner and each group of 600 ROIs consisted of 100 ROIs per subregion class: normal and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers, consisting of four convolution layers and two fully connected layers. The classification results were compared with those of a shallow learning method, a support vector machine (SVM). The CNN classifier was significantly more accurate than the SVM classifier, by 6–9%. As the number of convolution layers increased, the classification accuracy of the CNN improved from 81.27% to 95.12%. Especially in pathologically ambiguous cases, such as normal versus emphysema or honeycombing versus reticular opacity, adding convolution layers sharply reduced the misclassification rate between the classes. In conclusion, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results imply structural characteristics inherent to the specific ILD patterns.
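The core operation that separates the CNN from the hand-crafted-feature SVM is the learned convolution filter. A minimal sketch of a 2-D valid convolution (strictly, cross-correlation, as in most deep learning frameworks) in pure Python, with a toy edge filter of the kind that responds to local texture cues in ILD patterns; the image and kernel values are made up:

```python
# Hypothetical sketch of a 2-D valid cross-correlation, the building block
# of the CNN's convolution layers. Lists of lists stand in for tensors.
def conv2d(image, kernel):
    """Slide `kernel` over `image` (no padding) and sum elementwise products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter responds where intensity changes left-to-right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # strongest response at the 0→1 boundary column
```

Stacking such layers lets later filters respond to combinations of earlier responses, which is one intuition for why accuracy rose as convolution layers were added.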

Hanyang Medical Reviews

Deep Learning for Medical Image Analysis: Application to Computed Tomography and Magnetic Resonance Imaging


Abstract

Recent advances in deep learning have brought many breakthroughs in medical image analysis by providing more robust and consistent tools for the detection, classification, and quantification of patterns in medical images. Analysis of advanced modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) has benefited most from the data-driven nature of deep learning, because the knowledge- and experience-driven feature engineering process can be circumvented: representative features are derived automatically from complex, high-dimensional medical images with respect to the target tasks. In this paper, we review recent applications of deep learning to the analysis of CT and MR images across a range of tasks and target organs. While most applications focus on enhancing the productivity and accuracy of current diagnostic analysis, we also introduce some promising applications that will significantly change the current workflow of medical imaging. We conclude by discussing the opportunities and challenges of applying deep learning to advanced imaging and suggest future directions in this domain.

CMMI2017

False Positive Reduction by Actively Mining Negative Samples for Pulmonary Nodule Detection in Chest Radiographs


Abstract

While CADe (computer-aided detection) systems can achieve high sensitivity, their relatively low specificity has limited their adoption in the clinical setting. One of the major limiting factors for false-positive reduction is the lack of good-quality labeled data (with lesion labels). Our approach to this problem is to utilize unlabeled data (with unknown lesion and class labels), which tends to be more readily available. The goal of this study is to develop a semi-supervised learning method that finds pseudo-negative labeled data among the unlabeled data and uses it to improve the specificity of the detection task. We then compare this with the false-positive reduction achieved using clinically verified negative data, which is the theoretical optimum within our model and data setting.
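The pseudo-negative mining step described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the scores, and the threshold are all made up, but the mechanism is the one the abstract describes, i.e. treating the unlabeled images the current detector scores as most confidently nodule-free as pseudo-negative training data.

```python
# Hypothetical sketch of pseudo-negative mining for false-positive reduction.
# `unlabeled_scores` are the detector's predicted probabilities that each
# unlabeled image contains a nodule; the threshold is illustrative.
def mine_pseudo_negatives(unlabeled_scores, threshold=0.05):
    """Return indices of unlabeled samples scored as almost certainly
    nodule-free; these become pseudo-negative labels for retraining."""
    return [i for i, s in enumerate(unlabeled_scores) if s < threshold]

scores = [0.01, 0.40, 0.03, 0.90, 0.02, 0.12]
print(mine_pseudo_negatives(scores))  # indices of the confident negatives
```

Retraining on these mined negatives approximates, at much lower labeling cost, what clinically verified negative data would provide.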
