
Class conditional distribution alignment for domain adaptation

Control Theory and Technology

Abstract

In this paper, we study domain adaptation, a crucial ingredient of transfer learning between two domains: a source domain with labeled data and a target domain with no or few labels. Domain adaptation aims to extract knowledge from the source domain to improve performance on the learning task in the target domain. A popular approach is adversarial training, whose behavior is explained by the HΔH-distance theory. However, traditional adversarial network architectures only align the marginal feature distribution in the feature space; alignment of the class conditional distributions is not guaranteed. We therefore propose a novel method based on pseudo labels and the cluster assumption that avoids incorrect class alignment in the feature space. Experiments demonstrate that our framework improves accuracy on typical transfer learning tasks.
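The abstract combines three ingredients: adversarial alignment of the marginal feature distributions, pseudo labels on the target domain, and a cluster-assumption (conditional entropy) regularizer. The PyTorch sketch below is only meant to illustrate that general recipe; the network sizes, the confidence threshold tau, and the weight lam are illustrative assumptions, not the authors' exact architecture or training procedure.

    # Minimal sketch of the recipe described in the abstract (assumed, not the paper's code):
    # adversarial marginal alignment + pseudo-label and conditional-entropy terms on the target.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Gradient reversal layer used for adversarial feature alignment."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    # Illustrative networks for a flattened 28x28 input with 10 classes.
    feature = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
    classifier = nn.Linear(64, 10)                                                  # class predictor
    discriminator = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))   # domain critic

    def adaptation_loss(x_s, y_s, x_t, lam=0.1, tau=0.9):
        f_s, f_t = feature(x_s), feature(x_t)

        # 1) Supervised loss on labeled source data.
        loss_cls = F.cross_entropy(classifier(f_s), y_s)

        # 2) Adversarial loss aligning the marginal feature distributions
        #    (source labeled 1, target labeled 0); gradients are reversed into the extractor.
        d_in = GradReverse.apply(torch.cat([f_s, f_t]), lam)
        d_label = torch.cat([torch.ones(len(x_s), 1), torch.zeros(len(x_t), 1)])
        loss_adv = F.binary_cross_entropy_with_logits(discriminator(d_in), d_label)

        # 3) Cluster assumption on the target: conditional entropy pushes the decision
        #    boundary away from dense regions, and confident predictions are reused as
        #    pseudo labels to encourage class conditional alignment.
        logits_t = classifier(f_t)
        p_t = F.softmax(logits_t, dim=1)
        loss_ent = -(p_t * torch.log(p_t + 1e-8)).sum(dim=1).mean()
        conf, pseudo = p_t.max(dim=1)
        mask = conf > tau
        loss_pseudo = F.cross_entropy(logits_t[mask], pseudo[mask]) if mask.any() else torch.tensor(0.0)

        return loss_cls + loss_adv + lam * (loss_ent + loss_pseudo)

In practice such regularizers are usually weighted and ramped up over training so that early, unreliable pseudo labels do not dominate the loss.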



Acknowledgments

The authors would like to thank Prof. Hong for his constructive comments.

Author information


Corresponding author

Correspondence to Yang Ming.

Additional information

This work was supported by the National Key Research and Development Program of China (No. 2016YFB0901902) and the National Natural Science Foundation of China (No. 61733018).

Kai CAO received the B.Sc. degree from the University of Science and Technology of China in 2017. He is currently working towards the Ph.D. degree at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. His research interests include transfer learning, big data analysis and biocomputing.

Zhipeng TU received his B.Sc. degree from Xi'an Jiaotong University in 2018. He is currently working towards the Ph.D. degree at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. His research interests include distributed machine learning and transfer learning.

Yang MING received his B.Sc. degree from Northwest University in 2015. He is currently a Ph.D. candidate at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. His research interests include machine learning and random matrix theory.


About this article


Cite this article

Cao, K., Tu, Z. & Ming, Y. Class conditional distribution alignment for domain adaptation. Control Theory Technol. 18, 72–80 (2020). https://doi.org/10.1007/s11768-020-9126-1
