
Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation

  1. Authors:
    Zhang, Ling
    Wang, Xiaosong
    Yang, Dong
    Sanford, Thomas
    Harmon, Stephanie
    Turkbey, Baris
    Wood, Bradford J.
    Roth, Holger
    Myronenko, Andriy
    Xu, Daguang
    Xu, Ziyue
  2. Author Address

    Nvidia Corp, Bethesda, MD 20814 USA. PAII Inc, Bethesda, MD 20817 USA. NIH, Ctr Clin, Bethesda, MD 20892 USA. NCI, Clin Res Directorate, Frederick Natl Lab Canc Res, Bethesda, MD 20892 USA.
    1. Year: 2020
    2. Date: JUL
  1. Journal: IEEE TRANSACTIONS ON MEDICAL IMAGING
  2. Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
    1. Volume: 39
    2. Issue: 7
    3. Pages: 2531-2540
  3. Type of Article: Article
  4. ISSN: 0278-0062
  1. Abstract:

    Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, and patient populations. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that works uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations is applied to each image during network training. The underlying assumption is that the "expected" domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented "big" data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) and involving eight publicly available challenge datasets.
    The results show that when training on a relatively small dataset (n = 10 to 32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade by an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and a CycleGAN-based domain adaptation method (degrading 25%); (ii) BigAug is better than "shallower" stacked transforms (i.e., those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to that of a model trained from scratch on that domain with the same number of training samples. When training on a large dataset (n = 465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their own source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
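    The core idea of stacking randomized transformations during training can be sketched as below. This is a minimal illustration only, not the authors' implementation: the transform names, parameter ranges, and application probability are all hypothetical placeholders standing in for the paper's nine transformations over image quality, appearance, and spatial characteristics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each function perturbs one image characteristic. Parameter
    # ranges here are illustrative, not the paper's actual settings.
    def add_noise(img):         # image quality: additive Gaussian noise
        return img + rng.normal(0.0, 0.05, img.shape)

    def shift_brightness(img):  # appearance: global intensity shift
        return img + rng.uniform(-0.1, 0.1)

    def scale_contrast(img):    # appearance: contrast change about the mean
        return (img - img.mean()) * rng.uniform(0.8, 1.2) + img.mean()

    def gamma_correct(img):     # appearance: nonlinear intensity remapping
        lo, hi = img.min(), img.max()
        norm = (img - lo) / (hi - lo + 1e-8)
        return norm ** rng.uniform(0.7, 1.5) * (hi - lo) + lo

    def random_flip(img):       # spatial: mirror along a random axis
        return np.flip(img, axis=rng.integers(img.ndim))

    def stacked_transform(img, transforms, p=0.5):
        """Apply each transform in sequence, each with probability p."""
        out = img.astype(np.float32)
        for t in transforms:
            if rng.random() < p:
                out = t(out)
        return out

    volume = rng.random((8, 16, 16)).astype(np.float32)  # toy 3D volume
    augmented = stacked_transform(
        volume,
        [add_noise, shift_brightness, scale_contrast,
         gamma_correct, random_flip],
    )
    ```

    At training time, such a stack would be re-sampled for every image so the network sees a broad spread of simulated domain shifts from a single source domain.
    
    
    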


  1. Keywords:

External Sources

  1. DOI: 10.1109/TMI.2020.2973595
  2. WOS: 000545410200022

Library Notes

  1. Fiscal Year: FY2019-2020
NCI at Frederick
