Article title

Autoenkodery. Podstawy budowy wydajnych modeli uczenia maszynowego

Full text
Identifiers
Title variants
EN
Autoencoders. Fundamentals of building efficient machine learning models
Publication languages
PL
Abstracts
PL
An autoencoder is a neural network composed of an encoder-decoder pair. The encoder reduces the dimensionality of the data in the model while preserving the key features needed for the decoder to reconstruct the input. Based on their internal architecture, autoencoders are divided into deterministic and probabilistic ones. There are specialised versions of autoencoders suited to the subject matter of the machine learning models being built, for example denoising, recurrent, convolutional, variational or sparse autoencoders. This article presents only the most important topics related to autoencoders.
EN
An autoencoder is a neural network composed of an encoder-decoder pair. The encoder reduces the dimensionality of the data, retaining only the key features that allow the decoder to reconstruct the input. Based on their internal architecture, a distinction can be made between deterministic autoencoders and probabilistic autoencoders; only the latter are generative in nature. There are specialised versions of autoencoders corresponding to the subject matter of the machine learning models implemented, for example denoising, recurrent, convolutional, variational or sparse autoencoders. This paper aims to present the most relevant issues related to autoencoders.
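The encoder-decoder pair described in the abstract can be illustrated with a few lines of code. The sketch below is not taken from the article: it is a minimal dense autoencoder written with Keras (cited in [32]) and trained on the MNIST digits [24]. The layer sizes and the 32-dimensional bottleneck are assumptions chosen for illustration; a real model would tune the architecture, loss, and optimizer to the task.

```python
# Minimal illustrative sketch (assumed architecture, not from the article):
# a dense autoencoder that compresses each 784-pixel MNIST image to a
# 32-dimensional latent code and reconstructs the input from it.
from tensorflow import keras
from tensorflow.keras import layers

# Encoder: reduce dimensionality while keeping the key features.
encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),   # bottleneck (latent code)
])

# Decoder: reconstruct the input from the latent code.
decoder = keras.Sequential([
    keras.Input(shape=(32,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])

autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the network learns to reproduce its own input.
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

# The encoder alone now yields the compressed representation.
codes = encoder.predict(x_test[:10])
```

After training, the encoder can be used on its own as a dimensionality-reduction step, while the full encoder-decoder pair serves reconstruction tasks such as denoising.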
Year
Volume
Pages
21–60
Physical description
Bibliography: 43 items, photographs, figures
Authors
  • Warszawska Wyższa Szkoła Informatyki
Bibliography
  • [1] D. Bank, N. Koenigstein, and R. Giryes, “Autoencoders,” https://arxiv.org/abs/2003.05991, 2021.
  • [2] A. Géron, Uczenie maszynowe z użyciem Scikit-Learn i TensorFlow. Helion, 2020.
  • [3] D. Foster, Deep learning i modelowanie generatywne. Helion, 2021.
  • [4] E. Kan, “What the heck are vae-gans?” https://towardsdatascience.com/what-the-heck-are-vae-gans-17b86023588a, 2018.
  • [5] A. Creswell and A. A. Bharath, “Denoising adversarial autoencoders,” https://arxiv.org/abs/1703.01220, 2018.
  • [6] S. Irhum, “Intuitively understanding variational autoencoders,” https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf, 2018.
  • [7] https://thispersondoesnotexist.com/.
  • [8] https://thiscatdoesnotexist.com.
  • [9] https://thishorsedoesnotexist.com/.
  • [10] J. Thompson, “Neural style transfer with swift for tensorflow,” https://medium.com/@build_it_for_fun/neural-style-transfer-with-swift-for-tensorflow-b8544105b854, 2019.
  • [11] H. Liang, L. Yu, G. Xu, B. Raj, and R. Singh, “Controlled autoencoders to generate faces from voices,” https://arxiv.org/abs/2107.07988, 2021.
  • [12] C. Doersch, “Tutorial on variational autoencoders,” https://arxiv.org/abs/1606.05908, 2021.
  • [13] D. P. Kingma and M. Welling, “An introduction to variational autoencoders,” https://arxiv.org/abs/1906.02691, 2019.
  • [14] L. Regenwetter, A. H. Nobari, and F. Ahmed, “Deep generative models in engineering design: A review,” https://arxiv.org/abs/2110.10863, 2021.
  • [15] http://thisxdoesnotexist.com.
  • [16] T. Park, J.-Y. Zhu, O. Wang, J. Lu, E. Shechtman, A. A. Efros, and R. Zhang, “Swapping autoencoder for deep image manipulation,” https://arxiv.org/abs/2007.00653, 2020.
  • [17] L. Weng, “From autoencoder to beta-vae,” https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae, 2018.
  • [18] K. Cho, “Boltzmann machines and denoising autoencoders for image denoising,” https://arxiv.org/abs/1301.3468, 2013.
  • [19] I. Zenbout, A. Bouramoul, and S. Meshoul, “Stacked sparse autoencoder for unsupervised features learning in pancancer mirna cancer classification,” http://ceur-ws.org/Vol-2589/Paper3.pdf, 2020.
  • [20] A. Ng, “Sparse autoencoder,” https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf.
  • [21] E. Blanco-Mallo, B. Remeseiro, V. Bolón-Canedo, and A. Alonso-Betanzos, “On the effectiveness of convolutional autoencoders on image-based personalized recommender systems,” https://arxiv.org/abs/2003.06205, 2020.
  • [22] R. Kumar, “Faster image classification using tensorflow’s graph mode,” https://medium.com/artificialis/faster-image-classification-using-tensorflows-graph-mode-67098154808b, 2021.
  • [23] J. Krohn, G. Beyleveld, and A. Bassens, Uczenie głębokie i sztuczna inteligencja. Helion, 2021.
  • [24] Y. LeCun, C. Cortes, and C. J. C. Burges, “The MNIST database of handwritten digits,” http://yann.lecun.com/exdb/mnist/.
  • [25] P. Ganesh, “Types of convolution kernels: Simplified,” https://towardsdatascience.com/types-of-convolution-kernels-simplified-f040cb307c37, 2019.
  • [26] S. Saha, “A comprehensive guide to convolutional neural networks - the eli5 way,” https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53, 2018.
  • [27] “Convolutional autoencoders for image noise reduction,” https://towardsdatascience.com/convolutional-autoencoders-for-image-noise-reduction-32fce9fc1763, 2019.
  • [28] L. Moroney, Sztuczna inteligencja i uczenie maszynowe dla programistów. Helion, 2021.
  • [29] I. Aoukli, “Application of rnn autoencoders: Translation,” https://medium.com/@ikhlass.aoukli/application-of-rnn-autoencoders-translation-b4da019e81ea, 2020.
  • [30] T. Wong and Z. Luo, “Recurrent auto-encoder model for large-scale industrial sensor signal analysis,” https://arxiv.org/abs/1807.03710, 2018.
  • [31] G. Seif, “Understanding the 3 most common loss functions for machine learning regression,” https://towardsdatascience.com/understanding-the-3-most-common-loss-functions-for-machine-learning-regression-23e0ef3e14d3, 2019.
  • [32] “Keras api reference / losses / regression losses,” https://keras.io/api/losses/regression_losses/.
  • [33] A. Opidi, “Pytorch loss functions: The ultimate guide,” https://neptune.ai/blog/pytorch-loss-functions, 2021.
  • [34] P. Sharma, “Pytorch optimizers - complete guide for beginner,” https://machinelearningknowledge.ai/pytorch-optimizers-complete-guide-for-beginner/, 2021.
  • [35] R. Schmidt, F. Schneider, and P. Hennig, “Descending through a crowded valley - benchmarking deep learning optimizers,” https://arxiv.org/abs/2007.01547, 2020.
  • [36] J. Howard and S. Gugger, Deep learning dla programistów. Helion, 2021.
  • [37] “Journey of gradient descent - from local to global,” https://laptrinhx.com/journey-of-gradient-descent-from-local-to-global-2829573297/, 2021.
  • [38] “Neural networks 3,” https://cs231n.github.io/neural-networks-3/, Stanford University, 2020.
  • [39] G. Tanner, “Understanding optimization algorithms,” https://mlexplained.com/blog/gradient-descent-explained, 2021.
  • [40] S. Weidman, Uczenie głębokie od zera. Helion, 2020.
  • [41] “Adam optimizer,” https://machinelearningjourney.com/index.php/2021/01/09/adam-optimizer/.
  • [42] D. Choi, C. J. Shallue, Z. Nado, J. Lee, C. J. Maddison, and G. E. Dahl, “On empirical comparisons of optimizers for deep learning,” https://arxiv.org/abs/1910.05446, 2020.
  • [43] A. Torfi, R. A. Shirvani, Y. Keneshloo, N. Tavaf, and E. A. Fox, “Natural language processing advancements by deep learning: A survey,” https://arxiv.org/abs/2003.01200, 2020.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-cf3369c2-307e-4542-9c5b-f867085b3a20