Search results
Searched in keywords: image compression
Results found: 66
1
Content available remote Variable-rate colour image quantization based on quadtree segmentation
100%
EN
A novel variable-sized block encoding scheme with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality: typically, a larger palette yields higher image quality at a larger storage cost. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold-control policy for quadtree segmentation is used. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality than CIQ using multiple palettes of different sizes.
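As a rough illustration of threshold-controlled quadtree segmentation, the sketch below splits a block while its variance exceeds a threshold; the variance criterion, names and parameters are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def quadtree_segment(img, x, y, size, threshold, min_size=2):
    """Return (x, y, size) leaf blocks of a square, power-of-two image."""
    block = img[y:y + size, x:x + size].astype(float)
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]      # homogeneous enough: keep as one block
    half = size // 2               # otherwise split into four quadrants
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_segment(img, x + dx, y + dy, half,
                                       threshold, min_size)
    return leaves
```

Raising the threshold produces fewer, larger blocks and hence a lower bit rate, which is the adaptivity the abstract describes.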
2
EN
Image quality assessment methods are used in different image processing applications, among them image compression and image super-resolution in wireless capsule endoscopy (WCE). The existing image compression algorithms for WCE employ general-purpose image quality assessment (IQA) methods to evaluate the quality of the compressed image. Due to the specific nature of the images captured by WCE, the general-purpose IQA methods are not optimal and correlate poorly with subjective IQA (visual perception). This paper presents improved image quality assessment techniques for wireless capsule endoscopy applications. The proposed objective IQA methods are obtained by modifying existing full-reference image quality assessment techniques, excluding the non-informative regions of endoscopic images from the computation of the IQA metrics. The experimental results demonstrate that the proposed IQA method gives an improved peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The proposed image quality assessment methods are more reliable for compressed endoscopic capsule images.
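A minimal sketch of the central modification, PSNR restricted to an informative-region mask, is given below; the function name and mask handling are assumptions, since no code is published here.

```python
import numpy as np

def masked_psnr(reference, distorted, mask, peak=255.0):
    """PSNR computed only over pixels where mask is True,
    i.e. with non-informative regions excluded."""
    diff = (reference.astype(float) - distorted.astype(float))[mask]
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```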
3
EN
Based on compressive sensing and the log operation, a new image compression-encryption algorithm is proposed, which accomplishes encryption and compression simultaneously. The proposed algorithm takes advantage not only of the physical realizability of the partial Hadamard matrix, but also of its resistance to the chosen-plaintext attack, since all the elements of the partial Hadamard matrix are 1, –1 or log 1 = 0. The proposed algorithm is sensitive to the key and can resist various common attacks. The simulation results verify the validity and reliability of the proposed image compression-encryption algorithm.
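The sketch below illustrates compressive-sensing measurement with a partial Hadamard matrix, whose entries are all ±1; the key-driven row selection is an illustrative assumption, not the paper's key schedule.

```python
import numpy as np
from scipy.linalg import hadamard

def partial_hadamard_measure(signal, m, seed=0):
    """Take m compressed measurements of a length-n signal (n a power of two)."""
    n = signal.size
    H = hadamard(n)                          # entries are +1/-1 only
    rng = np.random.default_rng(seed)        # key-dependent row selection
    rows = rng.choice(n, size=m, replace=False)
    return H[rows] @ signal                  # m << n measurements
```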
4
Content available remote JCURVE : Multiscale Curve Coding via Second Order Beamlets
100%
EN
The paper presents JCURVE, an algorithm for the compression of binary images with linear or curvilinear features, which is a generalization of the JBEAM coder. The proposed algorithm is based on a second order beamlet representation, where second order beamlets are defined as hierarchically organized segments of conic curves. The algorithm can compress images in both a lossy and a lossless way, and it is also progressive. Experiments performed on benchmark images have shown that the proposed algorithm significantly outperforms the well-known JBIG2 standard and the base JBEAM algorithm in both lossless and lossy compression. It is additionally characterized by the same time complexity as JBEAM, namely O(N² log₂ N) for an image of size N × N pixels.
5
Content available remote Facial images dimensionality reduction and recognition by means of 2DKLT
100%
EN
The paper presents an efficient dimensionality reduction method for images (e.g. human face databases). It does not require the usual pre-processing stage (such as down-scaling or filtering). Its main advantage is an efficient representation of images that leads to accurate recognition. The analysis is performed using two-dimensional Principal Component Analysis and Linear Discriminant Analysis, with reduction by means of the two-dimensional Karhunen-Loeve Transform. The paper presents the mathematical principles together with results of recognition experiments on popular facial databases. The experiments, performed on several facial image databases (BioID [11], ORL/AT&T [3], FERET [8], Face94 [4] and Face95 [5]), showed that face recognition using this type of feature space dimensionality reduction is particularly convenient and efficient, giving high recognition performance.
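For orientation, here is a minimal two-dimensional PCA sketch in the spirit of the method: the covariance matrix is built from image matrices directly (no vectorisation) and each image is projected onto the leading eigenvectors. It illustrates the generic 2D-subspace idea, not the paper's exact 2DKLT pipeline.

```python
import numpy as np

def twodpca_fit(images, k):
    """images: list of HxW arrays; returns a W x k projection matrix."""
    mean = np.mean(images, axis=0)
    cov = sum((im - mean).T @ (im - mean) for im in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, -k:]                   # keep the k leading eigenvectors

def twodpca_project(image, proj):
    return image @ proj                      # H x k feature matrix
```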
6
100%
EN
A low bit rate image coding scheme based on vector quantization (VQ) is proposed. In this scheme, block prediction coding and relative addressing techniques are employed to cut down the bit rate required by vector quantization. In block prediction coding, neighbouring encoded blocks are used to compress the current block if a high degree of similarity exists between them. In the relative addressing technique, the redundancy among neighbouring indices is exploited to reduce the bit rate. The results show that the proposed scheme significantly reduces the bit rate of VQ while keeping good quality of the compressed images.
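A hedged sketch of the block-prediction step: reuse a similar neighbouring block when possible, otherwise emit a plain VQ index. The MSE test, the threshold and the flag layout are assumptions for illustration.

```python
import numpy as np

def encode_block(block, left, above, codebook, threshold):
    """codebook: (K, n) array of flattened block codewords."""
    block = block.astype(float)
    for flag, neighbour in (('left', left), ('above', above)):
        if neighbour is not None and np.mean((block - neighbour) ** 2) < threshold:
            return ('predicted', flag)            # reuse a neighbouring block
    dists = ((codebook - block.ravel()) ** 2).sum(axis=1)
    return ('vq', int(np.argmin(dists)))          # plain VQ index otherwise
```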
7
PL
The paper presents an image compression algorithm intended for use in a wireless endoscopic capsule. Besides the highest possible compression ratio, an algorithm for this kind of application must exhibit very low power consumption, a requirement that rules out standard methods. The proposed algorithm is based on integer versions of the DCT and the wavelet transform, together with a Huffman coder. Compared with competing algorithms, it offers a considerably higher compression ratio at a somewhat higher (mainly memory) complexity.
EN
The paper describes an image compression algorithm suitable for wireless capsule endoscopy. Due to power limitations and small size constraints, traditional image compression techniques are not appropriate and dedicated ones are necessary. The proposed algorithm is based on integer versions of the discrete cosine transform (DCT) and the wavelet transform (DWT) with a Huffman entropy coder. Thanks to the integer DCT/wavelet implementation it has low complexity and power consumption. Additionally, the algorithm can provide lossless compression as well as high-quality lossy compression.
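As an illustration of the entropy-coding stage, a minimal Huffman code construction is sketched below; the integer DCT/DWT stages are omitted, and the heap-based construction is a standard textbook method, not the authors' implementation.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a dict mapping each symbol to its prefix-free bit string."""
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate one-symbol input
        return {next(iter(heap[0][2])): '0'}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        codes = {s: '0' + c for s, c in lo[2].items()}   # merge two lightest
        codes.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], codes])
    return heap[0][2]

# Example: huffman_code([3, 3, 1, 2, 3, 2]) gives the shortest code to symbol 3.
```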
8
Content available remote Correction of satellite images deformations using tropospheric aerosols
100%
EN
Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers [1, 2]. Wiener theory gives the filter which minimizes the residual error (the difference between the real output and the desired output); thus, the 2D Wiener filter provides a solution to many problems of two-dimensional signal processing, such as the restoration of degraded images. However, since the determination of this filter implies the solution of a linear equation system of great dimension, fast algorithms are necessary. The computational effort for determining the coefficients of this filter depends primarily on the statistical nature of the input signal. The images provided by sensors are intended for various applications, yet the geometrical deformations which accompany them make them difficult to exploit. The goal of geometrical correction is to generate an image presented according to one of the cartographic projections in everyday use; an image whose geometry is superposable onto another, already corrected image. The method proposed in this paper follows the analytical approach, and the optimal filter is the bicubic filter. Our study concerns the development of a sequential algorithm for numerical filter synthesis. To realize an Infinite Impulse Response (IIR) filter according to a model, we apply the concept of parallelism and conceive a parallel algorithm intended for execution on a multiprocessor board.
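A sketch of geometric correction by resampling is given below; it uses cubic-spline interpolation (order=3 in scipy.ndimage.map_coordinates) as a stand-in for the bicubic filter, and the translation used as inverse map is a placeholder for the paper's analytical deformation model.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_geometry(image, inverse_map):
    """inverse_map: (rows, cols) -> source coordinate arrays."""
    rows, cols = np.indices(image.shape)
    src_r, src_c = inverse_map(rows, cols)
    # Resample with cubic splines (a bicubic-like reconstruction filter).
    return map_coordinates(image, [src_r, src_c], order=3)

# Example: undo a sub-pixel translation of (2.5, -1.25) pixels.
corrected = correct_geometry(np.random.rand(64, 64),
                             lambda r, c: (r + 2.5, c - 1.25))
```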
9
Content available remote Segmentation of motion field for scalable video compression
100%
EN
In this article we present a new approach to video compression based on a hybrid codec with segmentation. We propose a video segmentation algorithm based on block matching and Markov Random Fields. The system works in two steps: first, motion vectors are calculated; next, the motion vectors are used in the segmentation process. The second stage is an image classification based on the motion vectors and the intensity function, together with compression of the individual segmented regions of the image at different rates. The work also examines the influence of different macroblock shapes on the estimated vector field, the segmentation process and the compression rate.
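The first step, motion-vector estimation, can be sketched as a full-search block-matching routine; the block size, search radius and SAD criterion are illustrative choices rather than the article's exact settings.

```python
import numpy as np

def motion_vector(prev, curr, y, x, bs=8, radius=4):
    """Find the (dy, dx) displacement minimising SAD for one block."""
    block = curr[y:y + bs, x:x + bs].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + bs <= prev.shape[0] \
                    and xx + bs <= prev.shape[1]:
                sad = np.abs(prev[yy:yy + bs, xx:xx + bs].astype(int) - block).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv
```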
10
EN
Interactive medical teleconsultations are an important tool in modern medical practice. Their applications include remote diagnostics, conferences, workshops, and classes for students. In many cases, standard medium or low-end machines are employed, and the teleconsultation systems must be able to provide a high quality of user experience with very limited resources. Particularly problematic are large datasets consisting of image sequences that need to be accessed fluently. The main issue is insufficient internal memory; therefore, proper compression methods are crucial. However, a scenario where image sequences are kept in a compressed format in the internal memory and decompressed on-the-fly when displayed is difficult to implement due to performance issues. In this paper, we present methods for both lossy and lossless compression of medical image sequences that only require compatibility with the Pixel Shader 2.0 standard, which is present even on relatively old, low-end devices. Based on the evaluation of quality, size reduction, and performance, these methods have been proven suitable and beneficial for medical teleconsultation applications.
11
Content available remote Image coding based on flexible contour model
100%
EN
This paper presents a new model-based image coding scheme. First, a new image model called the Flexible Contour Model, which can extract features of non-rigid objects in images, is proposed; then fast algorithms for calculating the parameters of the model and for matching the model to images are derived. Furthermore, the combination of the model with multiscale analysis and the triangulation of the model have been studied. As a result, reconstruction of the original images with a high compression rate and unnoticeable distortion was obtained.
12
Content available remote Predictive Grayscale Image Coding Scheme Using VQ and BTC
100%
EN
A predictive image compression scheme that combines the advantages of vector quantization and moment-preserving block truncation coding is introduced in this paper. To exploit the similarities among neighbouring image blocks, a block prediction technique is employed. If a similar compressed image block can be found in the neighbourhood of the currently processed block, it is used to encode this block; otherwise, the image block is encoded either by vector quantization or by moment-preserving block truncation coding. A bit-rate-reduced version of the proposed scheme is also introduced. The experimental results show that the proposed scheme provides better image quality at a low bit rate than the comparative schemes.
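A minimal sketch of the moment-preserving BTC component, using the classic construction that preserves the block mean and variance; the VQ and prediction stages are omitted.

```python
import numpy as np

def btc_encode(block):
    """Encode a block as a bit plane plus two levels preserving mean/variance."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q, m = bitmap.sum(), block.size          # q pixels at or above the mean
    if q in (0, m):
        return bitmap, mean, mean            # flat block: one level suffices
    a = mean - std * np.sqrt(q / (m - q))    # low reconstruction level
    b = mean + std * np.sqrt((m - q) / q)    # high reconstruction level
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)
```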
13
Content available remote Efficient greyscale image compression technique based on vector quantization
100%
EN
In this paper, a novel greyscale image coding technique based on vector quantization (VQ) is proposed. In VQ, the reconstructed image quality is restricted by the codebook used in the image encoding/decoding procedures. To provide better image quality with a fixed-sized codebook, a codebook expansion technique is introduced. In addition, the block prediction technique and the relative addressing technique are employed to cut down the storage cost of the compressed codes. The results show that the proposed technique adaptively provides better image quality at low bit rates than VQ.
14
Content available remote Daubechies filters in wavelet image compression
100%
EN
The wavelet compression method is one of the most effective techniques of digital image compression. The efficiency of this method strongly depends on the filters used in the two-dimensional wavelet transform. The fundamental method of constructing suitable finite impulse response filters was given by I. Daubechies. The paper covers the foundations of wavelet image compression and presents a new proof of the fact that the Daubechies filters satisfy the conditions of perfect signal reconstruction.
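The perfect-reconstruction conditions the paper proves can be checked numerically: for an orthonormal Daubechies low-pass filter h, Σₙ h[n]h[n+2k] = δ(k) and Σₙ h[n] = √2. A sketch with the db2 (D4) coefficients:

```python
import numpy as np

h = np.array([0.48296291314469025, 0.8365163037378079,    # db2 (D4) filter
              0.22414386804185735, -0.12940952255092145])

assert np.isclose(h.sum(), np.sqrt(2))                    # lowpass condition
for k in range(len(h) // 2 + 1):
    corr = sum(h[n] * h[n + 2 * k] for n in range(len(h) - 2 * k))
    assert np.isclose(corr, 1.0 if k == 0 else 0.0)       # orthonormality
```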
15
Content available remote Analysis of fractal operator convergence by graph methods
100%
EN
The convergence of the fractal operator F used in image compression is investigated by analysis of the block influence graph and the pixel influence graph. The graph stability condition in the block influence graph implies the eventual contractivity condition, which is sufficient for the convergence of the operator iteration. The graph stability condition in the pixel influence graph turns out to be both sufficient and necessary for the convergence of selecting fractal operators.
16
Content available remote Entropy-constrained multiresolution vector quantisation for image coding
100%
EN
A new image coding scheme based on multiresolution image representation and entropy-constrained vector quantisation is proposed. The proposed technique is evaluated on a series of images. The results show a performance improvement over the previously proposed multiresolution scheme at relatively high bit rates. The reasons for the worse performance at low bit rates are discussed.
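Entropy-constrained codeword selection can be sketched as minimising the Lagrangian cost J = D + λR rather than pure distortion; the ideal code-length model −log₂ p is an illustrative assumption.

```python
import numpy as np

def ecvq_encode(vector, codebook, probs, lam):
    """Pick the codeword minimising distortion + lambda * rate."""
    dist = ((codebook - vector) ** 2).sum(axis=1)   # squared-error distortion
    rate = -np.log2(probs)                          # ideal codeword lengths
    return int(np.argmin(dist + lam * rate))
```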
17
Content available remote A study on Partitioned Iterative Function Systems for image compression
100%
EN
The technique of image compression using Iterative Function Systems (IFS) is known as fractal image compression. An extension of IFS theory, called the Partitioned (or local) Iterative Function System (PIFS), is used for coding grey-level images. The theory of PIFS differs from that of IFS in its application domain. Several image compression techniques have been developed under the assumption that the theory of PIFS is the same as that of IFS. In the present article we study the PIFS scheme separately and propose a mathematical formulation for the existence of its attractor. Moreover, the results of a Genetic Algorithm (GA) based PIFS technique are presented. This technique appears to be efficient in terms of computational cost.
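A hedged sketch of the core PIFS fitting step: for a range block and a candidate domain block, a least-squares affine grayscale map r ≈ s·d + o is fitted. The search over domain blocks, which the article drives with a GA, is omitted here.

```python
import numpy as np

def fit_domain(range_block, domain_block):
    """Least-squares contrast s and offset o mapping domain onto range."""
    d, r = domain_block.ravel().astype(float), range_block.ravel().astype(float)
    s, o = np.polyfit(d, r, 1)               # degenerates if d is constant
    err = ((s * d + o - r) ** 2).sum()       # residual collage error
    return s, o, err
```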
18
EN
This paper presents a fast, adaptive, lossless greyscale image compression algorithm based on simple prediction and LZW dictionary compression. The LZW variant uses a dictionary in the form of a simplified trie structure and a couple of original modifications of the basic LZW scheme. In the paper we report the effects of applying the modifications, i.e. mainly the effects of introducing a new adaptation mechanism which permits a large improvement in compression speed at the cost of a small degradation of the compression ratio. We also present a description and analysis of the algorithm, a description of the large set of test images, and a comparison with a standard LZW implementation.
PL
The article deals with the LZW algorithm applied to fast, lossless, adaptive compression of greyscale images. The variant used in the experiments employs a dictionary in the form of a simplified vector tree (trie) together with several original modifications of the basic LZW algorithm. The article presents the effects of the modifications, above all the consequences of a new adaptation mechanism that allows large changes in compression speed at the cost of slight changes in the compression ratio. It also contains a description and analysis of the algorithm, a description of the extensive set of test data, and the results of comparisons with a standard implementation of the LZW algorithm.
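For reference, the basic LZW scheme that the modified variant builds on can be sketched as follows; a plain dict stands in for the simplified trie, and the paper's adaptation mechanisms are not reproduced.

```python
def lzw_encode(data: bytes):
    """Return the list of LZW codes for a byte sequence."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b'', []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                            # extend the current phrase
        else:
            out.append(dictionary[w])         # emit code for the known phrase
            dictionary[wc] = len(dictionary)  # register the new phrase
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```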
19
Content available remote Improved vector quantization scheme for grayscale image compression
88%
EN
This paper proposes an improved image coding scheme based on vector quantization. It is well known that the quality of a VQ-compressed image is poor when a small-sized codebook is used. To solve this problem, the mean value of the image block is used as an alternative block encoding rule to improve the image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than vector quantization while keeping bit rates low.
20
EN
An image compression and encryption algorithm combining the advanced encryption standard (AES) with a hyper-chaotic system is designed, in which the Arnold map is employed to eliminate part of the block effect in the image compression process. The original image is compressed with the help of a discrete cosine transform, and the transform coefficients are then encrypted with the AES algorithm. Besides, the hyper-chaotic system is adopted to introduce a nonlinear process into the image encryption. Numerical simulations and theoretical analyses demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
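The Arnold (cat) map used for scrambling can be sketched directly: pixel (x, y) of an N × N image moves to ((x + y) mod N, (x + 2y) mod N). The iteration count is an illustrative parameter.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Permute pixels of a square image with the Arnold cat map."""
    n = img.shape[0]                          # assumes a square image
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]         # the map is a bijection on the grid
        out = scrambled
    return out
```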