2021 | Vol. 26 | 3--11
Article title

An impact of tensor-based data compression methods on deep neural network accuracy

Conference
Federated Conference on Computer Science and Information Systems (16 ; 02-05.09.2021 ; online)
Publication languages
EN
Abstract
EN
The emergence of deep neural architectures greatly influenced the contemporary big data revolution. However, the requirement for large datasets has further increased the need for efficient data storage. The storage problem is present at all stages, from dataset creation up to the training and prediction stages. At the same time, compression algorithms can significantly deteriorate the quality of the data and, in effect, of the classification models. In this article, an in-depth analysis of the influence of tensor-based lossy data compression on the performance of various deep neural architectures is presented. We show that the Tucker and the Tensor Train decomposition methods, with properly selected parameters, allow for very high compression ratios while conveying enough information in the decompressed data to cause only a negligible or very small drop in accuracy. The measurements were performed on the popular deep neural architectures AlexNet, ResNet, VGG, and MNASNet. We show that further augmentation of the tensor decompositions with the ZFP floating-point compression algorithm allows for finding optimal parameters and even higher compression ratios at the same recognition accuracy. Our experiments show data compression of 94%-97% that results in less than a 1% accuracy drop.
Pages
3--11
Physical description
Bibliography: 45 items; formulas, charts, tables.
Authors
  • Department of Electronics, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland, jakub.grabek@qed.pl
  • QED Software Sp. z o.o., Mazowiecka 11/49, 00-052 Warszawa, Poland
  • Department of Electronics, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland, cyganek@agh.edu.pl
  • QED Software Sp. z o.o., Mazowiecka 11/49, 00-052 Warszawa, Poland
Bibliography
  • 1. Cococcioni, M., et al. Novel arithmetics in deep neural networks signal processing for autonomous driving: Challenges and opportunities. In IEEE Signal Processing Magazine, 2020, 38.1: 97-110. http://dx.doi.org/10.1109/MSP.2020.2988436
  • 2. Cyganek, B. Object Detection and Recognition in Digital Images: Theory and Practice; John Wiley & Sons: New York, NY, USA, 2013. http://dx.doi.org/10.1002/9781118618387
  • 3. Kolda, T.; Bader, B. Tensor Decompositions and Applications. SIAM Rev. 2009, 51(3), 455-500. http://dx.doi.org/10.1137/07070111X
  • 4. Cyganek, B., Thumbnail Tensor-A Method for Multidimensional Data Streams Clustering with an Efficient Tensor Subspace Model in the Scale-Space, Sensors, 19(19), 4088, 2019, http://dx.doi.org/10.3390/s19194088
  • 5. Li, J., Liu, Z., Multispectral transforms using convolution neural networks for remote sensing multispectral image compression. In Remote Sensing, 11(7), 759, 2019. http://dx.doi.org/10.3390/rs11070759
  • 6. Choi, Y., El-Khamy, M., Lee, J., Universal deep neural network compression. In IEEE Journal of Selected Topics in Signal Processing, 14(4), 2020, pp. 715-726. http://dx.doi.org/10.1109/JSTSP.2020.2975903
  • 7. Przyborowski M., et al. Toward Machine Learning on Granulated Data - a Case of Compact Autoencoder-based Representations of Satellite Images. In 2018 IEEE International Conference on Big Data (Big Data), 2018, pp. 2657-2662, http://dx.doi.org/10.1109/BigData.2018.8622562.
  • 8. Wang, N.; Yeung, D. Y., Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, 2013.
  • 9. Lindstrom, P., Fixed-Rate Compressed Floating-Point Arrays. In IEEE Transactions on Visualization and Computer Graphics 20(12) 2014, pp. 2674-2683, http://dx.doi.org/10.1109/TVCG.2014.2346458
  • 10. Ziv, J., Lempel, A., Compression of individual sequences via variable-rate coding. In IEEE transactions on Information Theory, 1978, 24.5: 530-536. http://dx.doi.org/10.1109/TIT.1978.1055934
  • 11. Cyganek, B., A Framework for Data Representation, Processing, and Dimensionality Reduction with the Best-Rank Tensor Decomposition. Proceedings of the ITI 2012 34th International Conference Information Technology Interfaces, June 25-28, 2012, Cavtat, Croatia, pp. 325-330, http://dx.doi.org/10.2498/iti.2012.0466, 2012.
  • 12. De Lathauwer, L.; De Moor, B.; Vandewalle, J. On the best rank-1 and rank-(R1, R2,..., Rn) approximation of higher-order tensors. Siam J. Matrix Anal. Appl. 2000, 21, 1324-1342. http://dx.doi.org/10.1137/S0895479898346995
  • 13. Ballé, J., Laparra, V., Simoncelli, E. P., End-to-end optimized image compression. In arXiv preprint https://arxiv.org/abs/1611.01704, 2016.
  • 14. Zhang, L., et al. Compression of hyperspectral remote sensing images by tensor approach. In Neurocomputing, 147, 2015, pp. 358-363. http://dx.doi.org/10.1016/j.neucom.2014.06.052
  • 15. Aidini, A., Tsagkatakis, G., Tsakalides, P., Compression of high-dimensional multispectral image time series using tensor decomposition learning. In: 2019 27th European Signal Processing Conference (EUSIPCO). IEEE, 2019. p. 1-5. http://dx.doi.org/10.23919/EUSIPCO.2019.8902838
  • 16. Watkins, Y. Z., Sayeh, M. R., Image data compression and noisy channel error correction using deep neural network. In Procedia Computer Science, 95, 2016, pp. 145-152. http://dx.doi.org/10.1016/j.procs.2016.09.305
  • 17. Friedland, G., et al. On the Impact of Perceptual Compression on Deep Learning. In 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2020, p. 219-224. http://dx.doi.org/10.1109/MIPR49039.2020.00052
  • 18. Dejean-Servières, M., et al. Study of the impact of standard image compression techniques on performance of image classification with a convolutional neural network. 2017. PhD Thesis. INSA Rennes; Univ Rennes; IETR; Institut Pascal.
  • 19. Ullrich, K., Meeds, E., Welling, M., Soft weight-sharing for neural network compression. In arXiv preprint https://arxiv.org/abs/1702.04008, 2017.
  • 20. Jin, S., et al. DeepSZ: A novel framework to compress deep neural networks by using error-bounded lossy compression. In Proceedings of the 28th International Symposium on High-Performance Parallel and Distributed Computing, 2019, pp. 159-170. http://dx.doi.org/10.1145/3307681.3326608
  • 21. Deng, Lei, et al. Model compression and hardware acceleration for neural networks: A comprehensive survey. In Proceedings of the IEEE, 2020, 108.4: 485-532. http://dx.doi.org/10.1109/JPROC.2020.2976475
  • 22. Muti, D.; Bourennane, S. Multidimensional filtering based on a tensor approach. Signal Process. 2005, 85, 2338-2353. http://dx.doi.org/10.1016/j.sigpro.2004.11.029
  • 23. Cyganek, B.; Smołka, B. Real-time framework for tensor-based image enhancement for object classification. Proc. SPIE 2016, 9897, 98970Q. http://dx.doi.org/10.1117/12.2227797
  • 24. Cyganek, B.; Krawczyk, B.; Wozniak, M. Multidimensional Data Classification with Chordal Distance Based Kernel and Support Vector Machines. Eng. Appl. Artif. Intell. 2015, 46, 10-22. http://dx.doi.org/10.1016/j.engappai.2015.08.001
  • 25. Cyganek, B.; Wozniak, M. Tensor-Based Shot Boundary Detection in Video Streams. New Gener. Comput. 2017, 35, 311-340. http://dx.doi.org/10.1007/s00354-017-0024-0
  • 26. Marot, J.; Fossati, C.; Bourennane, S. Fast subspace-based tensor data filtering. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7-10 November 2009; pp. 3869-3872. http://dx.doi.org/10.1109/ICIP.2009.5414048
  • 27. Khoromskij, B. N., Khoromskaia, V., Multigrid accelerated tensor approximation of function related multidimensional arrays. In SIAM J. Sci. Comput., 31, 2009, pp. 3002-3026. http://dx.doi.org/10.1137/080730408
  • 28. Oseledets, I. V., Savostianov, D. V., Tyrtyshnikov, E. E., Tucker dimensionality reduction of three-dimensional arrays in linear time. In SIAM J. Matrix Anal. Appl., 30, 2008, pp. 939-956. http://dx.doi.org/10.1137/060655894
  • 29. Lee, N., Cichocki, A., Fundamental tensor operations for large-scale data analysis using tensor network formats. In Multidimensional Syst. Signal Process., vol. 29, no. 3, 2017, pp. 921-960 http://dx.doi.org/10.1007/s11045-017-0481-0
  • 30. Hübener, R., Nebendahl, V., Dür, W., Concatenated tensor network states. In New J. Phys., 12, 2010, 025004. http://dx.doi.org/10.1088/1367-2630/12/2/025004
  • 31. Van Loan, C. F., Tensor network computations in quantum chemistry Technical report, available online at www.cs.cornell.edu/cv/OtherPdf/ZeuthenCVL.pdf, 2008.
  • 32. Oseledets, I., Tensor-Train Decomposition. In SIAM J. Scientific Computing. 33., 2011, pp. 2295-2317. http://dx.doi.org/10.1137/090752286.
  • 33. Lindstrom, P., Fixed-Rate Compressed Floating-Point Arrays. In IEEE Transactions on Visualization and Computer Graphics vol. 20; 2014, http://dx.doi.org/10.1109/TVCG.2014.2346458.
  • 34. Lemley, J., Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision. In IEEE Consumer Electronics Magazine vol. 6, Iss. 2; 2017 http://dx.doi.org/10.1109/MCE.2016.2640698
  • 35. Krizhevsky, A., Sutskever, I., Hinton, G. E., ImageNet classification with deep convolutional neural networks. In Communications of the ACM, 60(6), 2017, pp. 84-90. http://dx.doi.org/10.1145/3065386
  • 36. Simonyan, K., Zisserman, A. Very deep convolutional networks for large-scale image recognition. In arXiv preprint https://arxiv.org/abs/1409.1556. 2014
  • 37. He, Kaiming, et al. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition 2016, pp. 770-778 http://dx.doi.org/10.1109/CVPR.2016.90
  • 38. Krizhevsky, A., et al., ImageNet classification with deep convolutional neural networks. In Proc. 25th Int. Conf. Neural Inf. Process. Syst. (NIPS), vol. 1., Red Hook, NY, USA: Curran Associates, 2012, pp. 1097-1105. http://dx.doi.org/10.1145/3065386
  • 39. Simonyan K. and Zisserman A., Very deep convolutional networks for large-scale image recognition. In Proc. 3rd Int. Conf. Learn. Represent. (ICLR), San Diego, CA, USA, Y. Bengio and Y. LeCun, Eds., 2015, pp. 1-14.
  • 40. Xie, S., et al., Aggregated residual transformations for deep neural networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 5987-5995. http://dx.doi.org/10.1109/CVPR.2017.634
  • 41. Szegedy, C., et al., Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proc. 31st AAAI Conf. Artif. Intell., San Francisco, CA, USA, S. P. Singh and S. Markovitch, Eds., 2017, pp. 4278-4284.
  • 42. Tan, M., et al. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2820-2828. http://dx.doi.org/10.1109/CVPR.2019.00293
  • 43. Kossaifi, J.; Panagakis, Y.; Kumar, A.; Pantic, M. TensorLy: Tensor Learning in Python. arXiv preprint 2018, https://arxiv.org/abs/1610.09555.
  • 44. Howard, J., imagenette dataset, https://github.com/fastai/imagenette/
  • 45. Oseledets, I. V., Tensor-train decomposition. In SIAM J. Sci. Comput., vol. 33, no. 5, 2011, pp. 2295-2317 http://dx.doi.org/10.1137/090752286
Notes
1. This research was co-funded by the Smart Growth Operational Programme 2014-2020, financed by the European Regional Development Fund, within project POIR.01.01.01-00-0570/19, operated by the National Centre for Research and Development in Poland.
2. Preface
3. Session: 15th International Symposium Advances in Artificial Intelligence and Applications
4. Communication Papers
Document type
Identifiers
YADDA identifier
bwmeta1.element.baztech-b89cce4c-1db7-41a0-92d8-987b131b6549