Article title

Finding robust transfer features for unsupervised domain adaptation

Publication languages
EN
Abstracts
EN
An insufficient number, or complete lack, of training samples is a bottleneck in traditional machine learning and object recognition. Recently, unsupervised domain adaptation has been proposed and widely applied to cross-domain object recognition: it uses labeled samples from a source domain to improve classification performance in a target domain where no labeled samples are available. The two domains share the same feature and label spaces but have different distributions. Most existing approaches learn new representations of the source and target samples by reducing the distribution discrepancy between the domains while maximizing the covariance of all samples. However, they ignore subspace discrimination, which is essential for classification. Some recent approaches incorporate discriminative information from the source samples, but the learned space tends to overfit these samples because the structure information of the target samples is not taken into account. We therefore propose a feature reduction approach that learns robust transfer features by reducing the distribution discrepancy between the domains while preserving the discriminative information of the source domain and the local structure of the target domain. Experimental results on several well-known cross-domain datasets show that the proposed method outperforms state-of-the-art techniques in most cases.
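The abstract combines three ingredients: matching the source and target distributions, keeping the source classes separable, and preserving the local neighborhood structure of the target samples. The sketch below is a minimal, hypothetical illustration of how such terms are commonly combined in MMD-based subspace learning via a generalized eigenproblem; it is not the authors' exact formulation, and the function name `learn_transfer_features` and the parameters `dim`, `k`, `mu`, and `gamma` are illustrative assumptions.

```python
# Hypothetical sketch of MMD + source-scatter + target-Laplacian subspace learning.
# Not the paper's exact method; names and weights are illustrative assumptions.
import numpy as np
import scipy.linalg
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def learn_transfer_features(Xs, ys, Xt, dim=30, k=5, mu=1.0, gamma=0.1):
    """Return projected source and target features of shape (ns, dim) and (nt, dim)."""
    ys = np.asarray(ys)
    d = Xs.shape[1]
    X = np.vstack([Xs, Xt]).T                      # d x (ns + nt), columns are samples
    ns, nt = Xs.shape[0], Xt.shape[0]
    n = ns + nt

    # (i) MMD matrix M: minimizing tr(W^T X M X^T W) pulls the projected
    # source and target means together (marginal-distribution alignment).
    e = np.vstack([np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt])
    M = e @ e.T

    # (ii) Between-class scatter of the source samples; maximizing it keeps
    # the source classes separable in the learned subspace.
    Sb = np.zeros((d, d))
    mean_s = Xs.mean(axis=0)
    for c in np.unique(ys):
        Xc = Xs[ys == c]
        diff = (Xc.mean(axis=0) - mean_s)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)

    # (iii) k-NN graph Laplacian over the target samples to preserve local structure.
    Wt = kneighbors_graph(Xt, n_neighbors=k, mode='connectivity', include_self=False)
    Lt = laplacian(0.5 * (Wt + Wt.T)).toarray()
    L = np.zeros((n, n))
    L[ns:, ns:] = Lt

    # Combine: minimize the MMD and target-structure terms relative to the
    # source scatter plus total variance (centering matrix H).
    H = np.eye(n) - np.ones((n, n)) / n
    A = X @ (M + gamma * L) @ X.T + 1e-3 * np.eye(d)
    B = mu * Sb + X @ H @ X.T + 1e-3 * np.eye(d)
    _, vecs = scipy.linalg.eigh(A, B)              # ascending generalized eigenvalues
    W = vecs[:, :dim]                              # smallest eigenvectors form the projection
    return Xs @ W, Xt @ W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(80, 50)); ys = rng.integers(0, 3, size=80)
    Xt = rng.normal(loc=0.5, size=(60, 50))
    Zs, Zt = learn_transfer_features(Xs, ys, Xt, dim=10)
    print(Zs.shape, Zt.shape)                      # (80, 10) (60, 10)
```

Under these assumptions, the projection comes from the smallest generalized eigenvectors, so the distribution-discrepancy and target-structure terms are minimized while the source discriminative scatter and overall variance are retained.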
Pages
99–112
Physical description
Bibliography: 29 items, figures, tables, charts
Authors
author
  • School of Computer Science and Technology, Harbin Institute of Technology, No. 92 Xidazhi Street, Harbin 150000, China
author
  • School of Computer Science and Technology, Harbin Institute of Technology, No. 92 Xidazhi Street, Harbin 150000, China
author
  • School of Computer Science and Technology, Harbin Institute of Technology, No. 92 Xidazhi Street, Harbin 150000, China
author
  • School of Computer Science and Technology, Harbin Institute of Technology, No. 92 Xidazhi Street, Harbin 150000, China
  • School of Computer Science and Technology, Harbin Institute of Technology, No. 92 Xidazhi Street, Harbin 150000, China
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: Popularisation of Science and Promotion of Sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-5948cffc-1230-47dc-adba-9dfb9b0c5f41