Results found: 18
Search results
Searched in keywords: vector quantization
EN
When constructing a new data classification algorithm, relevant quality indices such as classification accuracy (ACC) or the area under the receiver operating characteristic curve (AUC) should be investigated. End-users of these algorithms are interested in high values of the metrics as well as the proposed algorithm’s understandability and transparency. In this paper, a simple evolving vector quantization (SEVQ) algorithm is proposed, which is a novel supervised incremental learning classifier. Algorithms from the family of adaptive resonance theory and learning vector quantization inspired this method. Classifier performance was tested on 36 data sets and compared with 10 traditional and 15 incremental algorithms. SEVQ scored very well, especially among incremental algorithms, and it was found to be the best incremental classifier if the quality criterion is the AUC. The Scott–Knott analysis showed that SEVQ is comparable in performance to traditional algorithms and the leading group of incremental algorithms. The Wilcoxon rank test confirmed the reliability of the obtained results. This article shows that it is possible to obtain outstanding classification quality metrics while keeping the conceptual and computational simplicity of the classification algorithm.
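A minimal sketch of the kind of prototype-based incremental learning described above; this is not the published SEVQ algorithm itself, and the vigilance threshold and learning rate are illustrative assumptions:

import numpy as np

class PrototypeIncrementalClassifier:
    """Toy incremental, prototype-based classifier in the spirit of LVQ/ART.
    Not the published SEVQ algorithm; vigilance and learning rate are assumptions."""

    def __init__(self, vigilance=1.0, lr=0.1):
        self.vigilance = vigilance        # max distance for a prototype to absorb a sample
        self.lr = lr                      # prototype adaptation rate
        self.protos, self.labels = [], []

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        same = [i for i, l in enumerate(self.labels) if l == y]
        if same:
            d = [np.linalg.norm(x - self.protos[i]) for i in same]
            best = same[int(np.argmin(d))]
            if min(d) <= self.vigilance:
                # attract the winning same-class prototype towards the sample
                self.protos[best] += self.lr * (x - self.protos[best])
                return
        # no sufficiently close prototype of this class: create a new one
        self.protos.append(x.copy())
        self.labels.append(y)

    def predict(self, x):
        d = [np.linalg.norm(np.asarray(x, dtype=float) - p) for p in self.protos]
        return self.labels[int(np.argmin(d))]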
EN
Audio data compression is used to reduce the transmission bandwidth and storage requirements of audio data. It is the second stage in the audio mastering process, audio equalization being the first. Compression algorithms such as BSAC, MP3 and AAC are used as reference standards in this paper. The main challenge in audio compression is coding the signal at low bit rates: algorithms that work well at low bit rates are usually not competitive at higher bit rates, and vice versa. This paper proposes a modified vector quantization algorithm that produces a scalable bit stream with a number of fine layers of audio fidelity. The modified algorithm is used to build a scalable perceptual audio coder whose quantization and encoding stages are responsible for the psychoacoustic and arithmetic reductions; practically all of the data removed during the prediction phases on the encoder side is added back to the audio signal at the decoder, so it is the quantization phase that is modified to produce the scalable bit stream. The modified algorithm works well at both lower and higher bit rates. Subjective evaluations were carried out by audio professionals using the MUSHRA test, and the mean normalized scores at various bit rates were recorded and compared with the reference algorithms.
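One common way to obtain a bit stream with successive fidelity layers is residual (multi-stage) vector quantization; the sketch below illustrates only that general idea and is not the paper's modified algorithm (frame length, codebook size and number of stages are assumptions):

import numpy as np
from scipy.cluster.vq import kmeans2

def train_stages(frames, n_stages=3, k=16, seed=0):
    """Train one small codebook per refinement layer on the running residual."""
    residual, stages = frames.astype(float), []
    for s in range(n_stages):
        codebook, idx = kmeans2(residual, k, minit='++', seed=seed + s)
        stages.append(codebook)
        residual = residual - codebook[idx]
    return stages

def encode(frames, stages):
    """One index layer per stage; dropping trailing layers trades fidelity for bit rate."""
    residual, layers = frames.astype(float), []
    for codebook in stages:
        d = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        layers.append(idx)
        residual = residual - codebook[idx]
    return layers

def decode(layers, stages):
    return sum(cb[idx] for cb, idx in zip(stages, layers))

rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 8))                         # placeholder audio frames
stages = train_stages(frames)
layers = encode(frames, stages)
print(np.abs(decode(layers[:1], stages) - frames).mean(),   # coarse layer only
      np.abs(decode(layers, stages) - frames).mean())       # all layers: lower error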
EN
Learning vector quantization (LVQ) is one of the most powerful approaches for prototype-based classification of vector data, intuitively introduced by Kohonen. The prototype adaptation scheme relies on attraction and repulsion during learning, providing an easy geometric interpretation of both the learning and the classification decision scheme. Although deep learning architectures and support vector classifiers frequently achieve comparable or even better results, LVQ models are smart alternatives with low complexity and computational cost, making them attractive for many industrial applications such as intelligent sensor systems or advanced driver assistance systems. Nowadays, the mathematical theory developed for LVQ delivers sufficient justification of the algorithm, making it an appealing alternative to other approaches such as support vector machines and deep learning techniques. This review article reports current developments and extensions of LVQ, starting from the generalized LVQ (GLVQ), which is known as the most powerful cost-function-based realization of the original LVQ. The cost function minimized in GLVQ is a soft approximation of the standard classification error, allowing gradient descent learning techniques. The GLVQ variants considered in this contribution cover many aspects such as border-sensitive learning, the application of non-Euclidean metrics such as kernel distances or divergences, relevance learning, and the optimization of advanced statistical classification quality measures beyond accuracy, including sensitivity and specificity or the area under the ROC curve. For these topics, the paper highlights the basic motivation for the variants and extensions together with the mathematical prerequisites and the treatments needed to integrate them into the standard GLVQ scheme, and compares them to other machine learning approaches. For the detailed descriptions and the mathematical theory behind them, the reader is referred to the respective original articles. Thus, the intention of the paper is to provide a comprehensive overview of the state of the art, serving both as a starting point for finding an appropriate LVQ variant for a given classification problem and as a reference to recently developed variants and improvements of the basic GLVQ scheme.
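The GLVQ cost mentioned above is built on the relative distance difference mu(x) = (d+ - d-)/(d+ + d-), where d+ is the squared distance to the closest prototype with the correct label and d- to the closest prototype with a wrong label. A minimal numpy sketch of one stochastic gradient step, using the identity squashing function and an illustrative learning rate:

import numpy as np

def glvq_step(x, y, prototypes, proto_labels, lr=0.05):
    """One stochastic GLVQ update with the identity squashing function.
    prototypes: (K, D) float array, proto_labels: (K,) array; both the correct
    class and at least one other class must be represented among the prototypes."""
    d = ((prototypes - x) ** 2).sum(axis=1)                   # squared Euclidean distances
    correct = proto_labels == y
    j_plus = np.where(correct)[0][d[correct].argmin()]        # closest correct prototype
    j_minus = np.where(~correct)[0][d[~correct].argmin()]     # closest wrong prototype
    d_plus, d_minus = d[j_plus], d[j_minus]
    denom = (d_plus + d_minus) ** 2
    # descending mu(x) attracts the correct prototype and repels the wrong one
    prototypes[j_plus] += lr * 4 * d_minus / denom * (x - prototypes[j_plus])
    prototypes[j_minus] -= lr * 4 * d_plus / denom * (x - prototypes[j_minus])
    return (d_plus - d_minus) / (d_plus + d_minus)            # mu(x) in (-1, 1); negative = correct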
EN
Planning and optimizing distribution center locations and the routes between them and their recipients is one of the fundamental issues in logistics and directly influences the operating costs of enterprises. This article describes how to optimize the selection of distribution center locations. In a real environment, transporting goods requires moving between many locations and delivering products to several places within a city and its surrounding area. In this case, the goods distribution problem can be split into optimizing route selection between groups of locations and grouping the locations into clusters, at whose focal points local warehouses and distribution centers will be created. The authors propose a novel approach based on clustering by means of vector quantization methods.
PL
Planning and optimizing the locations of distribution centers and the routes between them and their recipients is one of the fundamental problems in logistics and affects the operating costs of enterprises. This article describes how the selection of distribution center locations can be optimized. In a real environment, the transport of goods requires moving between many cities and delivering products to several places in a city and its surroundings. In this case, the goods distribution problem can be divided into optimizing route selection between groups of locations and determining the individual points at which local warehousing and distribution centers will be created. The authors propose a new approach to determining such centers based on clustering with vector quantization algorithms.
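One way to realize the proposed grouping is plain k-means/LBG-style vector quantization of customer coordinates, with the resulting centroids serving as candidate distribution-center sites; the coordinates and the number of centers below are made up for illustration:

import numpy as np
from scipy.cluster.vq import kmeans2

# made-up customer coordinates (projected x/y in km) and a made-up number of centres
rng = np.random.default_rng(0)
customers = rng.uniform(0, 100, size=(200, 2))
n_centres = 5

# VQ / k-means: centroids are candidate distribution-centre sites,
# labels assign each customer location to its local centre
centres, labels = kmeans2(customers, n_centres, minit='++', seed=1)

for c in range(n_centres):
    served = customers[labels == c]
    print(f"centre {c} at {centres[c].round(1)} serves {len(served)} locations")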
EN
This paper presents an analysis of issues related to the fixed-point implementation of speech signal processing for biometric purposes. To prepare the system for automatic speaker identification and to run the experimental tests, we used the Matlab computing environment and Code Composer Studio (CCS), the development software for Texas Instruments digital signal processors. The tested speech signals were processed on the TMS320C5515 processor. The paper examines the limitations of the realized embedded system, demonstrates the advantages and disadvantages of automatic software conversion from Matlab to CCS, and shows the impact of the fixed-point representation on identification effectiveness.
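The kind of fixed-point effect studied here can be illustrated by quantizing a signal frame to the 16-bit Q15 format commonly used on C55x-class DSPs and measuring the resulting error; the frame below is synthetic, not real speech:

import numpy as np

def to_q15(x):
    """Quantize floats in [-1, 1) to 16-bit Q15 fixed point and back to float."""
    q = np.clip(np.round(x * 32768), -32768, 32767).astype(np.int16)
    return q.astype(float) / 32768.0

# synthetic test frame: two tones at an 8 kHz sampling rate (not real speech)
t = np.arange(400) / 8000.0
frame = 0.4 * np.sin(2 * np.pi * 180 * t) + 0.1 * np.sin(2 * np.pi * 1200 * t)

err = frame - to_q15(frame)
snr_db = 10 * np.log10(np.sum(frame ** 2) / np.sum(err ** 2))
print(f"Q15 quantization SNR: {snr_db:.1f} dB")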
6
Similarity detection of image using vector quantization and compression
EN
Every day, a large number of new images and photos appear on the internet, and the problem of image similarity remains an important topic in image retrieval. There are many methods for comparing images. We use vector quantization together with the NCD (normalized compression distance) method to look for similar images in a collection: vector quantization prepares the image files for NCD. In this paper we show how to convert a 2D image into a 1D string by means of vector quantization and how the NCD method is then used for image similarity detection.
PL
The article analyses the problem of image similarity. Vector quantization and the NCD method were used. It is shown how to convert a 2D image into a 1D stream.
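The pipeline can be reproduced roughly as follows: quantize image blocks against a shared codebook to obtain a 1D index string per image, then compare the strings with the normalized compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). The block size, codebook size and the zlib compressor are assumptions of this sketch:

import zlib
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def blocks(img, b=4):
    """Split a grayscale image (H, W) into flattened b x b blocks."""
    h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
    v = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return v.reshape(-1, b * b).astype(float)

def index_string(img, codebook, b=4):
    """2D image -> 1D byte string of codeword indices (the VQ signature)."""
    idx, _ = vq(blocks(img, b), codebook)
    return idx.astype(np.uint8).tobytes()

def ncd(x, y):
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# toy usage: a shared codebook, then NCD between a picture and a slightly altered copy
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64))
img_b = np.clip(img_a + rng.integers(-8, 8, (64, 64)), 0, 255)
codebook, _ = kmeans2(blocks(img_a), 32, minit='++', seed=1)
print(ncd(index_string(img_a, codebook), index_string(img_b, codebook)))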
EN
This paper presents the effectiveness of speaker identification based on short Polish sequences. The impact of automatic silence removal on speaker recognition accuracy is considered. Several methods of detecting the beginnings and ends of the voice signal have been used. Experimental research was carried out in the Matlab environment with the use of a specially prepared database of short speech sequences in Polish. Speaker models were built with two techniques: Vector Quantization (VQ) and Gaussian Mixture Models (GMM). We also tested the influence of sampling-rate reduction on speaker recognition performance.
PL
The article presents a study of the effectiveness of speaker recognition based on short utterances in Polish. The influence of automatic silence detection and removal on speaker recognition quality was examined. Several different methods of detecting the beginning and end of speech fragments in the uttered sequences were tested. The experiments were carried out in the Matlab environment with a specially created database of short utterances in Polish. Vector quantization (VQ) and Gaussian Mixture Models (GMM) were used to build the speaker models. The study also examined the influence of reducing the sampling rate on speaker identification effectiveness.
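In the VQ approach, each speaker is typically modelled by a codebook trained on his or her feature vectors, and an unknown utterance is assigned to the speaker whose codebook quantizes it with the lowest average distortion; the sketch below uses random placeholder features instead of real MFCC/LPCC vectors:

import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_speaker_codebook(features, k=32, seed=0):
    """features: (n_frames, n_coeffs) array of one speaker's feature vectors."""
    codebook, _ = kmeans2(features, k, minit='++', seed=seed)
    return codebook

def identify(utterance, codebooks):
    """Pick the speaker whose codebook quantizes the utterance with least distortion."""
    scores = {}
    for speaker, cb in codebooks.items():
        _, dist = vq(utterance, cb)
        scores[speaker] = dist.mean()
    return min(scores, key=scores.get), scores

# placeholder features standing in for MFCC/LPCC vectors of three speakers
rng = np.random.default_rng(0)
codebooks = {s: train_speaker_codebook(rng.normal(m, 1.0, (500, 12)), seed=s)
             for s, m in [(0, -1.0), (1, 0.0), (2, 1.0)]}
test = rng.normal(1.0, 1.0, (80, 12))     # frames resembling speaker 2
print(identify(test, codebooks)[0])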
8
Improved vector quantization scheme for grayscale image compression
EN
This paper proposes an improved image coding scheme based on vector quantization. It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the mean value of the image block is used as an alternative block-encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping low bit rates.
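The alternative block-encoding rule can be read as: encode a block by its VQ index when the best codeword matches well enough, otherwise fall back to the block mean. The threshold and block size below are assumptions, and the lossless prediction/Huffman stage is omitted:

import numpy as np
from scipy.cluster.vq import vq

def encode_block(block, codebook, thresh=100.0):
    """block: flattened pixel block; returns ('vq', index) or ('mean', grey value)."""
    idx, dist = vq(block[None, :].astype(float), codebook)
    if dist[0] ** 2 / block.size <= thresh:        # per-pixel squared error of best codeword
        return ('vq', int(idx[0]))
    return ('mean', int(round(block.mean())))       # fallback for poorly matched blocks

def decode_block(code, codebook, block_size=16):
    kind, value = code
    if kind == 'vq':
        return codebook[value]
    return np.full(block_size, float(value))        # flat block at the mean grey level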
9
Efficient greyscale image compression technique based on vector quantization
EN
In this paper, a novel greyscale image coding technique based on vector quantization (VQ) is proposed. In VQ, the reconstructed image quality is restricted by the codebook used in the image encoding/decoding procedures. To provide better image quality with a fixed-sized codebook, a codebook expansion technique is introduced in the proposed method. In addition, block prediction and relative addressing techniques are employed to cut down the storage cost of the compressed codes. The results show that the proposed technique adaptively provides better image quality at low bit rates than plain VQ.
10
A novel data hiding and reversible scheme using SMN blocks in VQ-compressed images
EN
Data hiding is a technique for embedding secret data into cover media. It is important to multimedia security and has been widely studied. Reversible data hiding methods are becoming prevalent in the area because they can reconstruct the original cover image while extracting the embedded data. In this paper, we propose a new reversible method for vector quantization (VQ) compressed images. Our method takes advantage of the relationship among side match neighbouring (SMN) blocks to achieve reversibility. The experimental results show that the proposed method achieves a higher compression rate and a larger capacity than other existing reversible methods.
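The side-match idea underlying SMN blocks can be illustrated as follows: the border pixels of the already-decoded upper and left neighbours are used to rank the codewords, so that well-matching indices become cheap to signal. This is a generic side-match VQ sketch, not the proposed reversible scheme:

import numpy as np

def side_match_order(codebook, upper_block, left_block, b=4):
    """Rank codewords by how well their top row / left column continue the bottom row
    of the upper neighbour and the right column of the left neighbour.
    codebook: (K, b*b); upper_block, left_block: (b, b) reconstructed neighbours."""
    cw = codebook.reshape(-1, b, b)
    top_err = ((cw[:, 0, :] - upper_block[-1, :]) ** 2).sum(axis=1)
    left_err = ((cw[:, :, 0] - left_block[:, -1]) ** 2).sum(axis=1)
    return np.argsort(top_err + left_err)           # best-matching codeword indices first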
11
Codebook-linked Watermarking Scheme for Digital Images
EN
Digital watermarking techniques are an important method of providing copyright protection for multimedia. In this paper, we propose a non-embedded watermarking scheme, based on vector quantization (VQ), to protect image copyright. Our approach applies a codebook to generate a relationship between image blocks and watermark bits, and this relationship is output as a key stream (called KS). With the KS, the relationship between the image and the watermark is confirmed and the copyright of the image is declared. In our method, the number of bits related to a block is adaptive. Compared to the scheme of Lin et al., our approach needs only one codebook, while theirs requires seven. Moreover, in our approach every block can be connected with watermark bits, while some blocks cannot be connected with watermark bits in their method. Finally, a method of adjusting the robustness of the proposed approach is given. Our approach not only reduces the length of the key stream but also allows for more flexible application. In addition, our experimental results show that the proposed approach runs faster and is more robust than that of Lin et al.
12
A Fast VQ Codebook Generation Algorithm Based on Otsu Histogram Threshold
EN
In vector quantization, the codebook generation problem can be formulated as the problem of classifying N_p training vectors into N_c clusters, where N_p is the number of training vectors and N_c is the number of codewords in the codebook. For large N_p and N_c, a traditional search algorithm such as the LBG method can hardly find the globally optimal classification and requires a great deal of computation. In this paper, a novel VQ codebook generation method based on the Otsu histogram threshold is proposed. The computational complexity of the squared Euclidean distance calculations can be reduced to O(N_p log_2 N_c) for a codebook built from grey levels. Our method provides better image quality than recently proposed schemes at high compression ratios. The experimental results and comparisons show that this method can not only reduce the computational complexity of the squared Euclidean distance calculations but also find better codewords, improving the quality of the resulting VQ codebook.
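Otsu's histogram threshold, on which the proposed codebook generation builds, picks the grey level that maximizes the between-class variance of the image histogram. A minimal implementation of the threshold itself (how the paper then uses it to split training vectors is not reproduced here):

import numpy as np

def otsu_threshold(gray_image):
    """gray_image: integer array with values 0-255; returns the grey level
    maximizing the between-class variance of its histogram."""
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))          # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0            # levels where one class is empty
    return int(np.argmax(sigma_b))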
EN
A low bit rate image coding scheme based on vector quantization is proposed. In this scheme, block prediction coding and relative addressing techniques are employed to cut down the bit rate required by vector quantization. In block prediction coding, neighbouring encoded blocks are used to encode the current block if a high degree of similarity exists between them. In the relative addressing technique, the redundancy among neighbouring indices is exploited to reduce the bit rate. The results show that the proposed scheme significantly reduces the bit rate of VQ while keeping good quality of the compressed images.
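A generic reading of the two techniques: if the current block's index repeats an already-encoded neighbour, only a short prediction flag is emitted; if it lies close to the left neighbour's index, a small relative offset is sent; otherwise the absolute index is sent. The flags and offset range below are illustrative, not the paper's exact bit-stream format:

def encode_index_map(index_map, max_offset=7):
    """index_map: 2D array/list of VQ indices, scanned row by row.
    Emits ('P', 'up'/'left') when the index repeats a neighbour (block prediction),
    ('R', offset) when it is close to the left neighbour (relative addressing),
    and ('A', index) otherwise (absolute index)."""
    codes = []
    for r, row in enumerate(index_map):
        for c, idx in enumerate(row):
            left = row[c - 1] if c > 0 else None
            up = index_map[r - 1][c] if r > 0 else None
            if up is not None and idx == up:
                codes.append(('P', 'up'))
            elif left is not None and idx == left:
                codes.append(('P', 'left'))
            elif left is not None and abs(idx - left) <= max_offset:
                codes.append(('R', idx - left))     # small signed offset
            else:
                codes.append(('A', idx))            # full codeword index
    return codes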
EN
The paper describes a method of automatic identification of different music performers playing identical pieces of music on the same instrument. Performer models based on LPCC features and vector quantization are proposed as the classification method. The presented approach was verified with a database of experimental samples of Bach's 1st Cello Suite recorded especially for this study and the original audio CD recordings of Bach's 6 Cello Suites performed by six famous cellists.
15
Wavelet based speaker recognition
EN
In this article we present a wavelet-based method for designing speaker recognition features. The proposed method is compared to the linear prediction method. As a classifier we used the LBG algorithm, which is one of the vector quantization (VQ) algorithms.
PL
In this article we present a method of finding wavelet-based features for the speaker recognition problem. To better assess the proposed method, it is compared with linear prediction. A vector quantization algorithm was used as the classifier. Keywords: speaker recognition, wavelets, linear prediction, vector quantization.
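The LBG algorithm used as the classifier here is the classical splitting variant of the generalized Lloyd procedure; a compact sketch (the perturbation size and iteration count are illustrative, and the target codebook size is assumed to be a power of two):

import numpy as np

def lbg(training, codebook_size, iters=20, eps=0.01):
    """Train a VQ codebook by repeated codeword splitting plus Lloyd refinement.
    training: (N, D) array; codebook_size is assumed to be a power of two."""
    codebook = training.mean(axis=0, keepdims=True)          # start at the global centroid
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split step
        for _ in range(iters):                               # Lloyd iterations
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = training[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook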
EN
A video compression technique for very low bit rate coding applications is presented. The technique is based on vector quantization of chrominance and compression of the image with a scalar representation of chrominance. The general idea is to represent the two components of the chrominance vector (CB, CR) by a single scalar value. Experimental results of computer simulations using standard video sequences are presented.
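The core idea of representing each (CB, CR) pair by a single scalar can be illustrated by quantizing the chrominance vectors with a small codebook and using the codeword index as the scalar chrominance signal; the codebook size and the synthetic chrominance data are assumptions:

import numpy as np
from scipy.cluster.vq import kmeans2, vq

# made-up chrominance plane: (H, W, 2) array of (Cb, Cr) samples
rng = np.random.default_rng(0)
chroma = rng.integers(16, 240, size=(72, 88, 2)).astype(float)

# train a small codebook on (Cb, Cr) pairs and replace each pair by its codeword index
pairs = chroma.reshape(-1, 2)
codebook, _ = kmeans2(pairs, 64, minit='++', seed=1)    # 64-level scalar chrominance signal
scalar_chroma, _ = vq(pairs, codebook)
scalar_chroma = scalar_chroma.reshape(chroma.shape[:2]) # one scalar value per pixel

# decoder side: map each scalar back to a (Cb, Cr) pair
reconstructed = codebook[scalar_chroma]
print(np.abs(reconstructed - chroma).mean())            # mean absolute chrominance error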
17
Vector quantized pattern learning for neural network-based image restoration
EN
This paper presents a hybrid scheme for image restoration with edge-preserving regularization and an artificial neural network based on vector-quantized pattern learning. The edge information is extracted from the source image as a priori knowledge to recover the details and reduce the ringing artifacts of the subband-coded image. The spatially independent vector patterns are generated from source images using vector quantization to de-correlate the image patterns for more effective and efficient pattern learning and to minimize the number of training patterns while retaining their representativeness. The vector-quantized patterns are then used to train the multilayer perceptron model for the restoration process. To evaluate the performance of the proposed scheme, a comparative study with set partitioning in hierarchical trees (SPIHT) and a neural network trained on the full pattern set has been conducted using a set of grey-scale digital images. The experimental results show that the proposed scheme achieves better performance than SPIHT in terms of both objective and subjective quality for subband-coded images at lower compression ratios.
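The pattern-selection step can be imitated by clustering degraded/clean patch pairs with vector quantization and keeping one representative pair per cluster before training a small MLP on the representatives; scikit-learn's MLPRegressor stands in for the paper's multilayer perceptron, and the patch data is synthetic:

import numpy as np
from scipy.cluster.vq import kmeans2
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, size=(5000, 16))               # synthetic 4x4 "clean" patches
degraded = clean + rng.normal(0, 0.05, clean.shape)      # stand-in for subband-coding loss

# VQ pattern selection: keep one representative (degraded, clean) pair per cluster
codebook, labels = kmeans2(degraded, 256, minit='++', seed=1)
reps = [np.flatnonzero(labels == k)[0] for k in range(256) if np.any(labels == k)]

mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(degraded[reps], clean[reps])                     # train only on the selected patterns

test = clean[:5] + rng.normal(0, 0.05, (5, 16))
print(np.abs(mlp.predict(test) - clean[:5]).mean())      # restoration error on new patches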
18
Modified H.263 codec with improved color reproduction
EN
A modification of the H.263 intraframe mode is proposed. The modification is restricted to chrominance processing only. The two input chrominance components are converted into one scalar chrominance signal which is then processed by an H.263 coder with modified quantization of DCT coefficients. The original chrominance components are recovered using postprocessing after the H.263 decoding.