Article title

Deep-segmentation of plantar pressure images incorporating fully convolutional neural networks

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Comfort shoe-last design relies on the key points of last curvature. Traditional plantar pressure image segmentation methods are limited by local and global minimization issues. In this work, an improved fully convolutional network (FCN) employing SegNet (SegNet + FCN-8s) is proposed. The algorithm is designed and operated using the Visual Geometry Group (VGG) network. The method achieves high segmentation performance, with positive indices of global accuracy (0.8105) and average accuracy (0.8015), and negative indices of average cross-ratio (0.6110) and boundary F1 index (0.6200). The research has potential applications in improving the comfort of shoes.
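For orientation only, the sketch below is a minimal, hypothetical PyTorch rendering of a VGG16-backbone, FCN-8s-style segmentation network of the kind the abstract names; the class count, layer slicing, and upsampling choices are assumptions made for illustration and do not reproduce the authors' implementation (which additionally incorporates SegNet components).

```python
# Illustrative sketch: VGG16 backbone with FCN-8s-style skip fusion.
# num_classes, input size, and layer indices are assumptions, not the paper's code.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGFCN8s(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        features = vgg16(weights=None).features   # VGG16 convolutional backbone
        self.pool3 = features[:17]    # blocks 1-3 -> 1/8 resolution, 256 channels
        self.pool4 = features[17:24]  # block 4    -> 1/16 resolution, 512 channels
        self.pool5 = features[24:]    # block 5    -> 1/32 resolution, 512 channels
        self.score5 = nn.Conv2d(512, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        # Learned upsampling: 2x, 2x, then 8x back to input resolution (FCN-8s pattern)
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):
        p3 = self.pool3(x)                                 # 1/8 feature map
        p4 = self.pool4(p3)                                # 1/16 feature map
        p5 = self.pool5(p4)                                # 1/32 feature map
        s = self.up2a(self.score5(p5)) + self.score4(p4)   # fuse 1/16 skip connection
        s = self.up2b(s) + self.score3(p3)                 # fuse 1/8 skip connection
        return self.up8(s)                                 # per-pixel class logits

model = VGGFCN8s(num_classes=2)
logits = model(torch.randn(1, 3, 224, 224))  # -> shape (1, 2, 224, 224)
```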
Authors
author
  • Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, PR China
author
  • Wenzhou Polytechnic, Wenzhou, PR China
author
  • Dept. of IT, Techno India College of Technology, West Bengal, India
  • Faculty of Engineering, Tanta University, Egypt
  • Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunarea de Jos University of Galati, Galati, Romania
  • Department of Biomedical Engineering, the University of Reading, Reading, UK
author
  • Cancer Institute of New Jersey, Rutgers University, NJ, USA
Notes
PL
Record compiled with funds from MNiSW, agreement No. 461252, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: Popularisation of Science and Promotion of Sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e0ab0fcd-e3dd-4a9c-9cb0-da5aa93ee1a3