Results found: 8

Search results
Searched in keywords: QR decomposition
1
This paper presents a parallel approach to the Levenberg-Marquardt (LM) algorithm. The use of the Levenberg-Marquardt algorithm to train neural networks entails significant computational complexity, and thus computation time. As a result, when the neural network has a large number of weights, the algorithm becomes practically ineffective. This article presents a new parallel approach to the computations in the Levenberg-Marquardt neural network learning algorithm. The proposed solution is based on vector instructions to effectively reduce the high computational time of this algorithm. The new approach was tested on several examples involving classification and function approximation problems, and then compared with the classical computational method. The article presents the idea of parallel neural network computations in detail and reports the acceleration obtained for different problems.
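As background for this result, a minimal sketch of the classic Levenberg-Marquardt weight update that such parallelization targets is shown below. NumPy is used as a stand-in for explicit vector instructions (its matrix operations dispatch to vectorized BLAS kernels), and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def lm_step(J, e, w, mu):
    """One Levenberg-Marquardt update: w - (J^T J + mu*I)^{-1} J^T e.

    J  -- Jacobian of network errors w.r.t. the weights, shape (N, W)
    e  -- error vector over the training set, shape (N,)
    mu -- damping factor blending Gauss-Newton and gradient descent
    """
    H = J.T @ J + mu * np.eye(J.shape[1])  # damped approximate Hessian
    g = J.T @ e                            # gradient of the squared-error loss
    return w - np.linalg.solve(H, g)       # solve; never form the inverse
```

The W x W solve in the last line is the step whose cost grows fastest with the number of weights, which is why the sequential algorithm becomes practically ineffective for large networks.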
2
In this paper, an asymmetric audio and image encryption mechanism using QR decomposition and random modulus decomposition (RD) in the Fresnel domain is proposed. The audio file is recorded as a vector and reshaped into a two-dimensional array that acts as an image, or sound map. This sound map is encrypted using the image encryption algorithm proposed in this paper. The proposed cryptosystem is validated for both audio and grayscale images. The Fresnel parameters and the two private keys obtained from QR decomposition and random modulus decomposition form the key space. Computer simulations have been carried out to prove the validity and authenticity of the scheme. The simulation results confirm that the scheme is robust and efficient against various attacks and is sensitive to the input parameters.
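For readers unfamiliar with the building block, the sketch below shows the QR factorization such schemes rely on; the toy matrix and variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

A = np.random.rand(4, 4)   # toy stand-in for the 2-D "sound map"
Q, R = np.linalg.qr(A)     # A = Q R: Q orthogonal, R upper triangular

assert np.allclose(Q @ R, A)             # the factors recover the original
assert np.allclose(Q.T @ Q, np.eye(4))   # Q is orthogonal
```

Per the abstract, the private keys derived from this factorization, together with the RD keys and the Fresnel parameters, form the key space of the cryptosystem.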
3
A novel asymmetric scheme for double image encryption, compression, and watermarking based on QR decomposition in the Fresnel domain is presented. The QR decomposition provides a permutation matrix as the ciphertext, and the product of the orthogonal and triangular matrices as a key. The ciphertext obtained through this process is a sparse matrix, which is compressed by the compressed sparse row (CSR) method to give compressed encrypted data; when combined with a host image, this gives a watermarked image. Thus, a cryptosystem that involves compression and watermarking is proposed. The proposed scheme is validated for grayscale images. To check the efficacy of the proposed scheme, histograms, statistical parameters, and key sensitivity are analyzed. The scheme is also tested against various attacks. Numerical simulations are performed to validate the security of the scheme.
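As an aside on the compression step, the sketch below illustrates why CSR storage suits a sparse, permutation-style ciphertext; the matrix size is an illustrative assumption, and SciPy's csr_matrix stands in as a generic CSR implementation, not the paper's.

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.eye(512)[np.random.permutation(512)]  # toy permutation ciphertext
sparse = csr_matrix(dense)   # stores only nonzeros plus index arrays

print(dense.nbytes)          # 2097152 bytes for the dense array
print(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes)
                             # a few kilobytes in CSR form
```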
4
This paper presents a novel fast algorithm for training feedforward neural networks. It is based on the Recursive Least Squares (RLS) method commonly used for designing adaptive filters. In addition, it utilizes two techniques of linear algebra, namely the orthogonal transformation method called Givens Rotations (GR) and the QR decomposition, creating the GQR procedure (symbolically, GR + QR = GQR) for solving the normal equations in the weight update process. In this paper, a novel approach to the GQR algorithm is presented. The main idea revolves around reducing the computational cost of a single rotation by eliminating the square root calculation and reducing the number of multiplications. The proposed modification is based on the scaled version of the Givens rotations, denoted SGQR, and is expected to bring a significant reduction in training time compared to the classic GQR algorithm. The paper begins with an introduction and a description of the classic Givens rotation. Then, the scaled rotation and its use in the QR decomposition are discussed. The main section of the article presents the neural network training algorithm, which utilizes scaled Givens rotations and the QR decomposition in the weight update process. Next, experimental results for the proposed algorithm are presented and discussed. The experiments utilize several benchmarks combined with neural networks of various topologies. It is shown that the proposed algorithm outperforms several other commonly used methods, including the well-known Adam optimizer.
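For contrast with the scaled variant, the classic Givens rotation that the abstract starts from is sketched below; the square root it computes is exactly the operation the SGQR modification is described as eliminating. This is a generic textbook rotation, not code from the paper.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] == [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)       # sqrt(a^2 + b^2): the cost that the scaled
    return a / r, b / r      # (SGQR-style) rotation avoids

c, s = givens(3.0, 4.0)
G = np.array([[c, s], [-s, c]])
print(G @ np.array([3.0, 4.0]))  # -> [5., 0.]: the second entry is zeroed
```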
5
In this paper, a new neural network training algorithm is presented. The algorithm originates from the Recursive Least Squares (RLS) method commonly used in adaptive filtering. It uses the QR decomposition in conjunction with Givens rotations to solve the normal equations resulting from minimization of the loss function. An important factor in neural network training is the training time. Many commonly used algorithms require a large number of iterations to achieve a satisfactory outcome, while other algorithms are effective only for small neural networks. The proposed solution is characterized by a very short convergence time compared to the well-known backpropagation method and its variants. The paper contains a complete mathematical derivation of the proposed algorithm. Extensive simulation results are presented for various benchmarks, including function approximation, classification, encoder, and parity problems. The obtained results show the advantages of the proposed algorithm, which outperforms commonly used state-of-the-art neural network training algorithms, including the Adam optimizer and Nesterov's accelerated gradient.
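To make the linear-algebra core concrete, the sketch below solves a least-squares problem in the manner the abstract describes, via QR factorization and back-substitution rather than forming the normal equations explicitly; the data and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))   # stand-in design matrix
b = rng.standard_normal(100)         # stand-in targets

# Minimizing ||A x - b||^2 leads to the normal equations A^T A x = A^T b.
# With A = Q R they reduce to the triangular system R x = Q^T b, which is
# numerically better behaved than forming A^T A.
Q, R = np.linalg.qr(A)               # reduced QR: Q is 100x10, R is 10x10
x = solve_triangular(R, Q.T @ b)     # back-substitution

assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

In the paper's setting the triangular factor is built with Givens rotations, which are well suited to the recursive, sample-by-sample updates of RLS-style methods.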
6
This paper presents a local modification of the Levenberg-Marquardt (LM) algorithm. First, the mathematical basics of the classic LM method are shown. The classic LM algorithm is very efficient for learning small neural networks, but for bigger networks its computational complexity grows so significantly that the method becomes practically inefficient. In order to overcome this limitation, a local modification of LM is introduced in this paper. The main goal of this paper is to develop a computationally more efficient modification of the LM method by using local computations. The introduced modification has been tested on function approximation and classification benchmarks, and the obtained results have been compared with the performance of the classic LM method. The paper shows that the local modification of the LM method significantly improves the algorithm's performance for bigger networks. Several proposals for future work are suggested.
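For context, the step that dominates the classic method's cost (a standard property of LM, not a detail quoted from this paper) is the damped linear solve in each update:

\[
\left( J^{\top} J + \mu I \right)\,\Delta w = J^{\top} e ,
\]

where $J$ is the $N \times W$ Jacobian of the $N$ training errors with respect to the $W$ weights. Solving this system costs on the order of $W^{3}$ operations, so the cost grows rapidly with network size, which is the limitation the local modification is designed to sidestep.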
7
A novel method for feature extraction and recognition called Kernel Fuzzy Discriminant Analysis (KFDA) is proposed in this paper to deal with recognition problems, e.g., for images. The KFDA method is obtained by combining the advantages of fuzzy methods with the kernel trick. Based on the orthogonal-triangular (QR) decomposition of a matrix and the Singular Value Decomposition (SVD), two variants of KFDA, KFDA/QR and KFDA/SVD, are obtained. In the proposed method, the membership degree is incorporated into the definitions of the between-class and within-class scatter matrices to obtain fuzzy between-class and within-class scatter matrices. The membership degree is obtained by combining measures of the features of the sample data. In addition, the effects of employing different measures are investigated from a purely mathematical point of view, and the t-test statistical method is used to compare the robustness of the learning algorithms. Experimental results on the ORL and FERET face databases show that KFDA/QR and KFDA/SVD are more effective and feasible than Fuzzy Discriminant Analysis (FDA) and Kernel Discriminant Analysis (KDA) in terms of the mean correct recognition rate.
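One common way to write the membership-weighted scatter matrices the abstract refers to (a plausible sketch of the standard fuzzy-Fisher construction, not necessarily the exact definitions used in the paper) is

\[
S_{B}^{F} \;=\; \sum_{i=1}^{c} \sum_{j=1}^{N} u_{ij}\,(m_{i}-\bar{x})(m_{i}-\bar{x})^{\top},
\qquad
S_{W}^{F} \;=\; \sum_{i=1}^{c} \sum_{j=1}^{N} u_{ij}\,(x_{j}-m_{i})(x_{j}-m_{i})^{\top},
\]

where $u_{ij}$ is the membership degree of sample $x_{j}$ in class $i$, $m_{i}$ is the membership-weighted mean of class $i$, and $\bar{x}$ is the global mean. The discriminant directions then maximize the ratio of between-class to within-class scatter, computed via QR or SVD in the two variants.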
8
Numerical solutions of symmetric saddle point problem by direct methods
The numerical stability of the two main direct methods for solving the symmetric saddle point problem is analyzed. The first is a generalization of Golub's method for the augmented system formulation (ASF) and uses the Householder QR decomposition. The second method relies on the singular value decomposition (SVD). A numerical comparison of the direct methods is given.
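For reference, the symmetric saddle point problem has the standard block form (a textbook statement, not specific to this paper)

\[
\begin{pmatrix} A & B \\ B^{\top} & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]

with $A$ symmetric (and, in the least-squares case, positive definite). Setting $A = I$, $f = b$, and $g = 0$ recovers the augmented system formulation (ASF) of a least-squares problem that the first method targets.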