Article title

3D face reconstruction with region based best fit blending using mobile phone for virtual reality based social media

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The use of virtual reality (VR) has been increasing exponentially, and as a result many researchers have started to develop new VR-based social media. For this purpose it is important that an avatar resembling the user can be generated easily on widely accessible devices, such as mobile phones. In this paper, we propose a novel method for recreating a 3D human face model from image or video data captured with a phone camera. The method focuses more on the model's shape than on its texture in order to make the face recognizable. We detect 68 facial feature points and use them to separate the face into four regions. For each region, the best-fitting models are found and morphed; the results are then combined and further morphed to restore the original facial proportions. We also present a texturing method in which the aforementioned feature points are used to generate a texture for the resulting model.
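As an illustration of the region split described in the abstract, the sketch below partitions the standard 68-point facial landmark set (the common iBUG/dlib annotation convention) into four groups. The abstract does not name the four regions, so the grouping here (jaw contour, brows and eyes, nose, mouth) is an assumption for illustration only, not the authors' implementation.

```python
# Illustrative sketch, not the paper's code: split the 68-point facial
# landmark set into four face regions. Index ranges follow the common
# iBUG 68-point annotation convention (0-based).
REGIONS = {
    "jaw":        list(range(0, 17)),                        # face contour
    "brows_eyes": list(range(17, 27)) + list(range(36, 48)),  # eyebrows + eyes
    "nose":       list(range(27, 36)),
    "mouth":      list(range(48, 68)),
}

def split_landmarks(points):
    """Group 68 (x, y) landmark points by assumed face region."""
    if len(points) != 68:
        raise ValueError("expected 68 landmarks")
    return {name: [points[i] for i in idx] for name, idx in REGIONS.items()}
```

In practice the 68 points themselves would come from a landmark detector such as the one in OpenFace [41]; per-region model fitting and blending would then operate on each group separately.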
Year
Pages
125--132
Physical description
Bibliography: 43 items; figures.
Authors
  • iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
  • Department of Electrical and Electronic Engineering, Hasan Kalyoncu University, Gaziantep, Turkey
author
  • iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
author
  • iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
author
  • iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
author
  • iCV Research Group, Institute of Technology, University of Tartu, Tartu 50411, Estonia
Bibliography
  • [1] J.L. Olson, D.M. Krum, E.A. Suma, and M. Bolas, “A design for a smartphone-based head mounted display,” in Virtual Reality Conference (VR), 2011 IEEE, pp. 233–234.
  • [2] B.S. Santos, P. Dias, A. Pimentel, J.-W. Baggerman, C. Ferreira, S. Silva, and J. Madeira, “Head-mounted display versus desktop for 3d navigation in virtual reality: a user study,” Multimedia Tools and Applications, 41(1), p. 161 (2009).
  • [3] J.-S. Kim and S.-M. Choi, “A virtual environment for 3d facial makeup,” Virtual Reality, pp. 488–496 (2007).
  • [4] G. Anbarjafari, “An objective no-reference measure of illumination assessment,” Measurement Science Review, 15(6), 319–322 (2015).
  • [5] B.J. Fernández-Palacios, D. Morabito, and F. Remondino, “Access to complex reality-based 3d models using virtual reality solutions,” Journal of Cultural Heritage, 23, 40–48 (2017).
  • [6] D. Zeng, H. Chen, R. Lusch, and S.-H. Li, “Social media analytics and intelligence,” IEEE Intelligent Systems, 25(6), 13–16, (2010).
  • [7] D. Trenholme and S.P. Smith, “Computer game engines for developing first-person virtual environments,” Virtual reality, 12(3), 181–187 (2008).
  • [8] E. Avots, M. Daneshmand, A. Traumann, S. Escalera, and G. Anbarjafari, “Automatic garment retexturing based on infrared information,” Computers & Graphics, 59, 28–38 (2016).
  • [9] T. Yamasaki, I. Nakamura, and K. Aizawa, “Fast face model reconstruction and synthesis using an rgb-d camera and its subjective evaluation,” in Multimedia (ISM), IEEE International Symposium on, pp. 53–56 (2015).
  • [10] S. Ding, Y. Li, S. Cao, Y.F. Zheng, and R.L. Ewing, “From rgbd image to hologram,” in Aerospace and Electronics Conference (NAECON) and Ohio Innovation Summit (OIS), 2016 IEEE National, pp. 387–390.
  • [11] X. Huang, J. Cheng, and X. Ji, “Human contour extraction from rgbd camera for action recognition,” in Information and Automation (ICIA), IEEE International Conference on, pp. 1822–1827 (2016).
  • [12] L. Valgma, M. Daneshmand, and G. Anbarjafari, “Iterative closest point based 3d object reconstruction using rgb-d acquisition devices,” in Signal Processing and Communication Application Conference (SIU), 2016 24th, pp. 457‒460.
  • [13] C. Ding and L. Liu, “A survey of sketch based modeling systems,” Frontiers of Computer Science, 10(6), 985‒999 (2016).
  • [14] I. Lüsi and G. Anbarjafari, “Mimicking speaker’s lip movement on a 3d head model using cosine function fitting,” Bul. Pol. Ac.: Tech., 65(5), 733–739 (2017).
  • [15] M. Daneshmand, E. Avots, and G. Anbarjafari, “Proportional error back-propagation (peb): Real-time automatic loop closure correction for maintaining global consistency in 3d reconstruction with minimal computational cost,” Measurement Science Review, 18(3), 86–93 (2018).
  • [16] V. Blanz, K. Scherbaum, and H.-P. Seidel, “Fitting a morphable model to 3d scans of faces,” in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pp. 1–8.
  • [17] A. Traumann, M. Daneshmand, S. Escalera, and G. Anbarjafari, “Accurate 3d measurement using optical depth information,” Electronics Letters, 51(18), 1420–1422 (2015).
  • [18] M. Daneshmand, A. Aabloo, C. Ozcinar, and G. Anbarjafari, “Real-time, automatic shape-changing robot adjustment and gender classification,” Signal, Image and Video Processing, 10(4), 753–760 (2016).
  • [19] I. Fateeva, M.A. Rodriguez, S.R. Royo, and C. Stiller, “Applying 3d least squares matching technique for registration of data taken with an 3d scanner of human body,” in Sensors and Measuring Systems 2014; 17. ITG/GMA Symposium; Proceedings of, VDE, pp. 1–5 (2014).
  • [20] M. Daneshmand, A. Helmi, E. Avots, F. Noroozi, F. Alisinanoglu, H.S. Arslan, J. Gorbova, R.E. Haamer, C. Ozcinar, and G. Anbarjafari, “3d scanning: A comprehensive survey,” arXiv preprint arXiv:1801.08863, 2018.
  • [21] K. Kolev, P. Tanskanen, P. Speciale, and M. Pollefeys, “Turning mobile phones into 3d scanners,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3946–3953 (2014).
  • [22] P. Ondrúška, P. Kohli, and S. Izadi, “Mobilefusion: Real-time volumetric surface reconstruction and dense tracking on mobile phones,” IEEE transactions on visualization and computer graphics, 21(11), 1251–1258 (2015).
  • [23] P. Tanskanen, K. Kolev, L. Meier, F. Camposeco, O. Saurer, and M. Pollefeys, “Live metric 3d reconstruction on mobile phones,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, 65–72 (2013).
  • [24] H. Zhu, Y. Nie, T. Yue, and X. Cao, “The role of prior in image based 3d modeling: a survey,” Frontiers of Computer Science, 11(2), 175–191 (2017).
  • [25] F. Maninchedda, C. Häne, M.R. Oswald, and M. Pollefeys, “Face reconstruction on mobile devices using a height map shape model and fast regularization,” in 3D Vision (3DV), 2016 Fourth International Conference on, IEEE, pp. 489–498 (2016).
  • [26] H. Jain, O. Hellwich, and R. Anand, “Improving 3d face geometry by adapting reconstruction from stereo image pair to generic morphable model,” in Information Fusion (FUSION), 2016 19th International Conference on, IEEE, pp. 1720‒1727 (2016).
  • [27] V. Blanz and T. Vetter, “A morphable model for the synthesis of 3d faces,” in Proceedings of the 26th annual conference on Computer graphics and interactive techniques, ACM Press/Addison-Wesley Publishing Co., pp. 187–194 (1999).
  • [28] E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson, and A. Bulling, “A 3d morphable eye region model for gaze estimation,” in European Conference on Computer Vision, Springer, pp. 297–313 (2016).
  • [29] V. Blanz and T. Vetter, “Face recognition based on fitting a 3d morphable model,” IEEE Transactions on pattern analysis and machine intelligence, 25(9), 1063–1074 (2003).
  • [30] J.P. Lewis, K. Anjyo, T. Rhee, M. Zhang, F.H. Pighin, and Z. Deng, “Practice and theory of blendshape facial models,” Eurographics (State of the Art Reports), 1, 8 (2014).
  • [31] C. Baumberger, M. Reyes, M. Constantinescu, R. Olariu, E. de Aguiar, and T.O. Santos, “3d face reconstruction from video using 3d morphable model and silhouette,” in Graphics, Patterns and Images (SIBGRAPI), 2014 27th SIBGRAPI Conference on, IEEE, pp. 1–8 (2014).
  • [32] P. Dou, Y. Wu, S.K. Shah, and I.A. Kakadiaris, “Robust 3d face shape reconstruction from single images via two-fold coupled structure learning,” in Proc. British Machine Vision Conference, pp. 1–13 (2014).
  • [33] J. Choi, G. Medioni, Y. Lin, L. Silva, O. Regina, M. Pamplona, and T.C. Faltemier, “3d face reconstruction using a single or multiple views,” in Pattern Recognition (ICPR), 2010 20th International Conference on, IEEE, pp. 3959–3962 (2010).
  • [34] I. Kemelmacher-Shlizerman and R. Basri, “3d face reconstruction from a single image using a single reference face shape,” IEEE transactions on pattern analysis and machine intelligence, 33(2), 394–405 (2011).
  • [35] Q. Zhang and L. Shi, “3d face model reconstruction based on stretching algorithm,” in Cloud Computing and Intelligent Systems (CCIS), 2012 IEEE 2nd International Conference on, IEEE, 1, 197–200 (2012).
  • [36] W. Lin, H. Weijun, C. Rui, and W. Xiaoxi, “Three-dimensional reconstruction of face model based on single photo,” in Computer Application and System Modeling (ICCASM), 2010 International Conference on, IEEE, 3, V3–674 (2010).
  • [37] X. Fan, Q. Peng, and M. Zhong, “3d face reconstruction from single 2d image based on robust facial feature points extraction and generic wire frame model,” in Communications and Mobile Computing (CMC), 2010 International Conference on, IEEE, 3, 396–400 (2010).
  • [38] T. Wu, F. Zhou, and Q. Liao, “A fast 3d face reconstruction method from a single image using adjustable model,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 1656–1660 (2016).
  • [39] C. Qu, E. Monari, T. Schuchert, and J. Beyerer, “Fast, robust and automatic 3d face model reconstruction from videos,” in Advanced Video and Signal Based Surveillance (AVSS), 2014 11th IEEE International Conference on, pp. 113‒118 (2014).
  • [40] C. van Dam, R. Veldhuis, and L. Spreeuwers, “Landmark-based model-free 3d face shape reconstruction from video sequences,” in Biometrics Special Interest Group (BIOSIG), 2013 International Conference of the, IEEE, pp. 1–5 (2013).
  • [41] T. Baltrušaitis, P. Robinson, and L.-P. Morency, “Openface: an open source facial behavior analysis toolkit,” in Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pp. 1–10.
  • [42] Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, 13(4), 600–612 (2004).
  • [43] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7), 971–987 (2002).
Notes
EN
This work has been partially supported by Estonian Research Council Grants (PUT638), The Scientific and Technological Research Council of Turkey (TÜBİTAK) (Proje 1001‒116E097), the Estonian Centre of Excellence in IT (EXCITE) funded by the European Regional Development Fund and the European Network on Integrating Vision and Language (iV&L Net) ICT COST Action IC1307.
Record compiled under agreement 509/P-DUN/2018 with funds of the Polish Ministry of Science and Higher Education (MNiSW) allocated to science dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-27c19543-fd41-4753-ba62-8ab9af2a0f90