
MVDG: A Unified Multi-view Framework for Domain Generalization

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13687)

Abstract

To generalize a model trained on source domains to unseen target domains, domain generalization (DG) has recently attracted much attention. Since target domains cannot be involved in training, overfitting to the source domains is inevitable. As a popular regularization technique, the meta-learning training scheme has shown its ability to resist overfitting. However, in the training stage, current meta-learning-based methods utilize only one task along a single optimization trajectory, which may produce a biased and noisy optimization direction. Beyond the training stage, overfitting can also cause unstable predictions in the test stage. In this paper, we propose a novel multi-view DG framework to effectively reduce overfitting in both the training and test stages. Specifically, in the training stage, we develop a multi-view regularized meta-learning algorithm that employs multiple optimization trajectories to produce a suitable optimization direction for model updating. We also theoretically show that the generalization bound can be reduced by increasing the number of tasks in each trajectory. In the test stage, we utilize multiple augmented images to yield a multi-view prediction, which alleviates unstable predictions and significantly improves model reliability. Extensive experiments on three benchmark datasets validate that our method can find a flat minimum that enhances generalization, and that it outperforms several state-of-the-art approaches. The code is available at https://github.com/koncle/MVRML.
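The abstract describes two ingredients: a training-stage multi-view regularized meta-learning update that aggregates several optimization trajectories, and a test-stage multi-view prediction that averages the outputs over augmented copies of each test image. The PyTorch sketch below is only an illustrative reading of those two ideas, assuming a Reptile-style outer update and plain test-time augmentation; `sample_episode`, `augment`, and all hyper-parameter values are placeholders rather than the authors' implementation (see the repository linked above for the official code).

```python
# Illustrative sketch only; NOT the authors' implementation (https://github.com/koncle/MVRML).
# `sample_episode`, `augment`, and the hyper-parameters below are hypothetical placeholders.
import copy
import torch
import torch.nn.functional as F


def multi_view_meta_update(model, sample_episode, n_trajectories=3, n_tasks=2,
                           inner_lr=1e-3, outer_lr=0.5):
    """Update `model` toward the average endpoint of several optimization
    trajectories, each built from a sequence of sampled meta-learning tasks
    (a Reptile-style outer update, used here purely for illustration)."""
    base_params = [p.detach().clone() for p in model.parameters()]
    avg_direction = [torch.zeros_like(p) for p in base_params]

    for _ in range(n_trajectories):
        fast_model = copy.deepcopy(model)              # independent trajectory
        opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
        for _ in range(n_tasks):                       # several tasks per trajectory
            x, y = sample_episode()                    # e.g. a batch drawn from a random
            loss = F.cross_entropy(fast_model(x), y)   # subset of the source domains
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Accumulate this trajectory's displacement from the base weights.
        for d, p_fast, p_base in zip(avg_direction, fast_model.parameters(), base_params):
            d += (p_fast.detach() - p_base) / n_trajectories

    # Move the base model along the averaged (less noisy) direction.
    with torch.no_grad():
        for p, d in zip(model.parameters(), avg_direction):
            p += outer_lr * d


@torch.no_grad()
def multi_view_predict(model, image, augment, n_views=8):
    """Average softmax predictions over several augmented views of one image
    (test-time augmentation) to stabilize the final prediction."""
    model.eval()
    views = torch.stack([augment(image) for _ in range(n_views)])
    return F.softmax(model(views), dim=1).mean(dim=0)
```

Averaging the trajectory endpoints is one simple way to turn "multiple optimization trajectories" into a single update direction; the aggregation actually used in the paper may differ.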


Change history

  • 28 April 2023

    A correction has been published.


Acknowledgement

This work was supported by NSFC Major Program (62192783), CAAI-Huawei MindSpore Project (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Foundation Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and CCF-Lenovo Blue Ocean Research Fund.

Author information

Corresponding authors

Correspondence to Lei Qi or Yinghuan Shi.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF, 904 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, J., Qi, L., Shi, Y., Gao, Y. (2022). MVDG: A Unified Multi-view Framework for Domain Generalization. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13687. Springer, Cham. https://doi.org/10.1007/978-3-031-19812-0_10

  • DOI: https://doi.org/10.1007/978-3-031-19812-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19811-3

  • Online ISBN: 978-3-031-19812-0

  • eBook Packages: Computer Science, Computer Science (R0)
