Article title

SareeNet: saree texture classification via region-based patch generation with an optimized efficient Aquila network

Publication languages
EN
Abstracts
EN
Online saree shopping has become a popular way for adolescents to shop for fashion, and purchasing through e-commerce saves considerable time. Female apparel has many qualities that are difficult to describe, such as texture, form, colour, print, and length, and research on online shopping often centres on consumer behaviour and preferences. Fashion image analysis for product search still struggles to detect textures from query images. To address this problem, a novel deep learning-based SareeNet is presented to quickly classify the texture of a saree according to the user's query. The proposed work consists of three phases: i) saree image pre-processing, ii) patch generation, and iii) texture detection and optimization for efficient classification. The input image is first denoised using a contrast stretching adaptive bilateral (CSAB) filter. A mask region-based convolutional neural network (Mask R-CNN) then divides the regions of interest into saree patches. An improved EfficientNet-B3, which includes an optimized squeeze-and-excitation block, is introduced to categorise 25 saree textures. The Aquila optimizer is applied within the squeeze-and-excitation block of the improved EfficientNet to normalise its parameters and improve saree texture classification accuracy. The experimental results show that SareeNet categorises saree textures with 98.1% accuracy, and the proposed improved EfficientNet-B3 improves overall accuracy by 2.54%, 0.17%, 2.06%, 1.78%, and 0.63% relative to baseline networks including MobileNet, DenseNet201, ResNet152, and InceptionV3.
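No reference implementation accompanies this record, so the Python sketch below is only a rough illustration of two of the building blocks the abstract describes, under assumed details: the contrast stretching adaptive bilateral (CSAB) pre-processing is approximated with a plain min-max contrast stretch followed by OpenCV's bilateral filter, and the optimized squeeze-and-excitation stage is shown as a standard SE module whose parameters SareeNet reportedly tunes with the Aquila optimizer (that tuning step is not reproduced here). Function names and parameter values are hypothetical.

```python
# Illustrative sketch only; not the authors' implementation.
import cv2
import numpy as np
import torch
import torch.nn as nn


def csab_like_preprocess(bgr_image: np.ndarray, d: int = 9,
                         sigma_color: float = 75, sigma_space: float = 75) -> np.ndarray:
    """Approximate CSAB denoising: min-max contrast stretching followed by
    an edge-preserving bilateral filter (assumed parameter values)."""
    stretched = cv2.normalize(bgr_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.bilateralFilter(stretched, d, sigma_color, sigma_space)


class SqueezeExcitation(nn.Module):
    """Standard squeeze-and-excitation block; in SareeNet its parameters are
    reportedly normalised by the Aquila optimizer (not reproduced here)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # per-channel attention
        return x * weights
```

In a complete pipeline of the kind described, such pieces would sit around a Mask R-CNN patch-generation stage and an EfficientNet-B3 backbone (e.g. torchvision.models.efficientnet_b3) trained on the 25 saree texture classes.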
Pages
art. no. e149164
Physical description
Bibliography: 28 items, figures, tables, charts, photographs.
Authors
  • Department of Electronics and Communication Engineering, Coimbatore Institute of Engineering and Technology, Coimbatore, India
  • Department of Electronics and Communication Engineering, Thiagarajar College of Engineering, Tamilnadu, India
  • Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamilnadu, India
  • Centre for Future Networks and Digital Twin, Department of Computer Science and Engineering, Sri Eshwar Engineering College, Coimbatore, Tamilnadu, India
Notes
The record was prepared with funds from the Polish Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-dad7666a-96b2-436c-bd00-cfcd030337bf