Search results: 22 results found for the keyword "JPEG".

2
PL
The article presents current trends in digital image compression. The advantages and disadvantages of the individual compression methods (JPEG, wavelet, and fractal) are given, together with the essential differences between these approaches. The experimental material covers the compression of aerial photographs and satellite imagery using the JPEG and JPEG 2000 standards and the ECW wavelet method. The tests were carried out on selected aerial photographs and satellite images. The results show that the compression ratio depends on the image scale and on the texture present in the image. Various measures were used to assess the quality loss of compressed images relative to the non-degraded originals.
EN
This paper presents current trends in image compression. The advantages and disadvantages of particular compression methods (JPEG, wavelet, and fractal) are given, and the essential differences between them are discussed. The tests were performed on selected aerial and satellite images using JPEG, JPEG 2000, and ECW wavelet compression. The results show that the compression ratio depends on the image scale and texture. Several measures of quality loss, computed on compressed images with reference to the uncompressed originals, have been used.
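As a rough illustration of how such compression-ratio measurements can be scripted (this is not the paper's procedure; the file name and quality levels are placeholder assumptions):

```python
# Hypothetical input file; JPEG qualities chosen arbitrarily.
import io
from PIL import Image

def jpeg_compression_ratios(path, qualities=(90, 70, 50, 30, 10)):
    img = Image.open(path).convert("RGB")
    raw_bytes = img.width * img.height * 3  # uncompressed 8-bit RGB size
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        print(f"quality={q:3d}  ratio={raw_bytes / buf.tell():6.1f}:1")

jpeg_compression_ratios("aerial_photo.png")
```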
3
Prior Image JPEG-Compression Detection
EN
The paper presents two methods of detecting prior JPEG compression. In the first method, the chrominance histogram is analysed: in JPEG-compressed images the histogram contains significantly more local maxima than in uncompressed files. The second method is based on differences between neighbouring pixel values: in a JPEG-compressed image, the distribution of these differences inside blocks differs from the distribution computed at the boundaries of the 8x8 compression blocks. These differences are combined into a classifier that makes it possible to assess whether the image was compressed.
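A minimal sketch of the first detection idea, counting local maxima in a chrominance histogram; the channel choice and the peak test are assumptions, not the paper's exact procedure:

```python
import numpy as np
from PIL import Image

def chroma_histogram_maxima(path):
    ycbcr = np.asarray(Image.open(path).convert("YCbCr"))
    cb = ycbcr[..., 1].ravel()                      # one chrominance channel
    hist, _ = np.histogram(cb, bins=256, range=(0, 256))
    # a bin is a local maximum if it exceeds both of its neighbours
    peaks = (hist[1:-1] > hist[:-2]) & (hist[1:-1] > hist[2:])
    return int(peaks.sum())

# Per the abstract, JPEG-compressed images yield markedly more local maxima
# than never-compressed ones, so a threshold on this count can classify them.
```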
EN
To reduce blocking artifacts and enhance visual quality, a simple and efficient DWT-based technique is proposed for post-processing of block-based encoded images. Basically, the technique proposed in this paper removes the blocking artifacts that appear in smooth blocks within the DWT frequency domain. From the experimental results, we see that the proposed technique can easily and effectively remove the blocking artifacts as well as enhance the visual quality of an image. In addition, compared with some other current deblocking methods, the deblocked images created by using the proposed technique have higher PSNR.
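The core of this approach, suppressing spurious high-frequency detail inside smooth blocks in the DWT domain, could be sketched roughly as follows, assuming the PyWavelets package; the smoothness test and threshold are illustrative assumptions, not the paper's:

```python
import numpy as np
import pywt

def dwt_deblock(gray, var_thresh=25.0):
    # classify 8x8 coding blocks as "smooth" by their pixel variance
    h, w = gray.shape
    smooth = np.zeros((h, w), dtype=bool)
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            if gray[y:y+8, x:x+8].var() < var_thresh:
                smooth[y:y+8, x:x+8] = True
    # one-level DWT; zero the detail coefficients inside smooth regions,
    # where blocking artifacts appear as spurious high frequencies
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    mask = smooth[::2, ::2][:cH.shape[0], :cH.shape[1]]
    for d in (cH, cV, cD):
        d[mask] = 0.0
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")[:h, :w]
```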
EN
The problem investigated in this paper is image retrieval based on the compressed form of an image, which gives considerable advantages over traditional methods involving image decompression. The main goal of this paper is to discuss a unified visual descriptor for images stored in the two most popular image formats – JPEG/JFIF and JPEG-2000 – in the context of content-based image retrieval (CBIR). Since the problem of CBIR attracts special interest nowadays, it is clear that new approaches should be discussed. To achieve this goal, a unified descriptor based on low-level visual features is proposed. The algorithm operates in both the DCT and DWT compressed domains to build a uniform, format-independent index, represented by a three-dimensional color histogram computed in the CIE L*a*b* color space. A sample software implementation employs a compact descriptor calculated for each image and stored in a database-like structure. For a particular query image, a comparison in the feature space is performed, giving information about image similarity. Finally, the images with the highest scores are retrieved and presented to the user. The paper provides an analysis of this approach as well as initial results of its application in the field of CBIR.
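A minimal sketch of such a descriptor; the bin count and the similarity measure are assumptions, not the paper's exact choices:

```python
import numpy as np
from skimage import color

def lab_histogram(rgb, bins=8):
    # rgb: float array in [0, 1] of shape (H, W, 3)
    lab = color.rgb2lab(rgb)
    h, _ = np.histogramdd(lab.reshape(-1, 3), bins=bins,
                          range=[(0, 100), (-128, 127), (-128, 127)])
    h = h.ravel()
    return h / h.sum()                # normalize so image size does not matter

def similarity(h1, h2):
    return np.minimum(h1, h2).sum()   # histogram intersection, in [0, 1]
```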
PL
The article describes the development of an accelerator for selected still-image compression algorithms. The hardware description language VHDL was used for its hardware implementation. The result of the work was a successful implementation, on a programmable device, of a decompressor for still images stored in the JPEG ISO/IEC 10918-1 (1993) standard, Baseline mode, which is the basic and mandatory mode of this standard. Particular attention was paid to the selection and implementation of the two algorithms of this standard that are, in the author's opinion, the most important.
EN
Image compression is one of the most important topics in industry, commerce and scientific research. Image compression algorithms need to perform a large number of operations on a large amount of data. In the case of compression and decompression of still images, the time needed to process a single image is not critical. However, the assumption of this project was to build a solution that would be fully parallel, sequential and synchronous. The paper describes the development of an accelerator for selected still-image compression algorithms. The hardware description language VHDL was used for its hardware implementation. The result of this work was a successful implementation, on a programmable device, of a decompressor of still images saved in the JPEG standard ISO/IEC 10918-1 (1993), Baseline mode, which is the primary, fundamental and mandatory mode of this standard. The modular design and the method of connecting modules allow a continuous input data stream. Particular attention was paid to the selection and implementation of the two algorithms in this standard that are, in the authors' opinion, the most important. The IDCT module uses the IDCT-SQ transform algorithm, modified by the authors of this paper; it is fully pipelined thanks to applying the same kind of arithmetic operations between each stage. The module used to decode Huffman codes proved to be a bottleneck.
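For reference, the exact 2-D 8x8 IDCT that such a hardware decoder approximates can be expressed compactly in software and used as a golden model when verifying a VHDL implementation; this is not the paper's IDCT-SQ algorithm, just the standard separable inverse transform:

```python
import numpy as np
from scipy.fft import idct

def idct_8x8(coeffs):
    # separable inverse DCT-II: columns, then rows (orthonormal scaling)
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.zeros((8, 8))
block[0, 0] = 1024.0            # DC-only block
print(idct_8x8(block))          # a constant 128.0 in every sample
```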
PL
A short overview of image compression methods is presented, with particular emphasis on the coding of video sequences. The most important expected directions of development and significant unsolved problems are indicated. The state of standardization in compression and the most recent standardization work are briefly described.
EN
The paper presents a short overview of image compression methods, with particular stress laid on video compression. The main expected directions of further development, as well as open problems, are indicated. The state of the art in image compression standardization is also described.
8
Experimental analysis of picture quality after compression by different methods
EN
In this paper we present experimental results comparing the quality of still black-and-white (B/W) images compressed using four methods: JPEG, JPEG2000, EZW and SPIHT. The compression was performed on three pictures with differing levels of detail, at several bit rates (bpp), using the VCDemo software. The quality of the compressed pictures is determined by the values of MSE, SNR and PSNR, which are presented in tables and diagrams. By comparing the values obtained, we have identified the methods that give the best results depending on the picture bit rate and level of detail.
PL
The article describes the results of an experimental study of black-and-white image compression using four methods: JPEG, JPEG2000, EZW and SPIHT. Compression was performed on three images with different levels of detail and different densities.
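The three quality measures used in the study have standard definitions, sketched below; the paper's exact conventions (e.g. the peak value) may differ in detail:

```python
import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def snr_db(ref, test):
    return 10 * np.log10(np.mean(ref.astype(float) ** 2) / mse(ref, test))

def psnr_db(ref, test, peak=255.0):
    return 10 * np.log10(peak ** 2 / mse(ref, test))
```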
PL
Several important problems related to the improvement of image compression methods are highlighted, including the need for practical use of semantic information theory, particularly in the context of compression with information selection. The importance of universal lossless data coders and of hierarchical ordering of information in flexible coders with information progression is emphasized.
EN
An analysis of, and suggestions for, image compression improvements are presented. A characterization of current development conditions reveals limitations and prospects in the following areas: utilization of semantic information theory, selection of useful information by interactive user-oriented algorithms, application of universal lossless archivers to image compression, and flexible, hierarchically ordered representation of information in multiscale image coders. Selected lossless and lossy coders were verified experimentally, and new compression paradigms are proposed.
EN
In the paper we present a method of direct access to single blocks of JPEG files containing textures, with on-the-fly decompression. Anisotropic, adaptive filtering is applied in order to minimize visual defects appearing mainly at block borders. The main purpose of the method is to enable fast extraction of only those parts of the entire image that are currently needed, instead of keeping the whole decompressed texture in main memory. This approach enables effective use of high-quality textures with low memory consumption. Its benefits are demonstrated mainly in rendering complex 3D scenes using a nondeterministic ray-tracing algorithm. The algorithms have been encapsulated in a DLL and a static library.
PL
The article presents a method of random access to individual blocks of JPEG images containing textures, with decompression performed on the fly. The anisotropic adaptive filters used were selected to minimize the observed distortions, which appear mainly at block boundaries. The main goal of the proposed method is to enable fast access to only those fragments of the image that are currently required, without keeping the entire decompressed texture in the computer's memory. This approach allows the effective use of large, high-resolution textures with economical memory usage. Its advantages are demonstrated mainly in rendering 3D scenes using ray tracing. The proposed algorithms have been built into a DLL and a static library.
PL
The article discusses the problem of determining the influence of scanning and lossy compression of digital images on the detection of linear and point objects. The rapid increase in scanner resolution has led to growth in the volume of raster images and, consequently, to the need for lossy compression to store and manipulate them efficiently. A fundamental question thus arises: where is the limit of lossy compression at which images can still be used for photogrammetric purposes? This article attempts to answer that question. The essential problem is defining a measure of the loss. Two measures are proposed: a global measure, computed as a correlation coefficient, and a local measure, the mean positional error of a point determined as the intersection of edges in the compressed image relative to the image before compression. The JPEG, JPEG2000 and ECW (ER Mapper) methods were used in the study. A professional Intergraph Photoscan TD photogrammetric scanner was used, also in order to examine the influence of the scanning aperture size. Images scanned with pixel sizes of 7, 14 and 21 micrometers were tested. The authors' own FES (Feature Extraction Software) was used for the research.
EN
This article discusses the problem of determining the influence of scanning and compression on the extraction of linear and point features. The massive increase in the resolution of scanners has caused raster files to grow and made lossy compression necessary to manage and store the data effectively. Consequently, a question arises: where is the limit of lossy compression below which images can still be used for photogrammetric purposes? The aim of this article is to answer that question. The main difficulty is how to define a measure of the loss. Two measures have been suggested: a global measure, computed as a correlation coefficient, and a local measure, the RMS error of the position of a point determined as an intersection of edges in the compressed image relative to the image before compression. The JPEG, JPEG2000 and ECW methods were used in the research. A professional Intergraph Photoscan TD photogrammetric scanner was used, also in order to determine the influence of the scanning aperture size. Images scanned with pixel sizes of 7, 14 and 21 micrometers were examined, using the authors' own FES (Feature Extraction Software).
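The global measure can be sketched in a few lines; the local edge-intersection measure requires edge extraction and matching and is not reproduced here:

```python
import numpy as np

def global_measure(before, after):
    a = before.astype(float).ravel()
    b = after.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]   # 1.0 would mean no detectable loss
```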
EN
In today’s highly computerized world, data compression is a key issue in minimizing the costs associated with data storage and transfer. In 2019, more than 70% of the data sent over networks were images. This paper analyses the feasibility of using the SVD algorithm in image compression and shows that it improves the efficiency of JPEG and JPEG2000 compression. Image matrices were decomposed using the SVD algorithm before compression. It is also shown that as the image dimensions increase, the fraction of singular values that must be used to reconstruct the image in good quality decreases. The study was carried out on a large and diverse set of more than 2500 images. The results were analyzed using criteria typical for the evaluation of numerical matrix algorithms and of image compression: compression ratio, size of the compressed file, MSE, number of bad pixels, complexity, numerical stability, and ease of implementation.
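A minimal sketch of the rank-k SVD approximation step applied to an image matrix before compression; the rank-selection rule shown is illustrative, not the paper's:

```python
import numpy as np

def svd_approx(gray, k):
    # rank-k reconstruction from the k largest singular values
    U, s, Vt = np.linalg.svd(gray.astype(float), full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def rank_for_energy(gray, fraction=0.99):
    # smallest rank whose singular values carry the requested energy share
    s = np.linalg.svd(gray.astype(float), compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, fraction)) + 1
```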
EN
Watermarking is the process of embedding watermarks into an image in such a way that the embedded watermark can be extracted later. Lossy compression attacks are one of the major issues in digital watermarking. Cheddad et al. proposed a robust, secured self-embedding method which is resistant to a certain amount of JPEG compression. Our experimental results show that the self-embedding method is resistant to JPEG compression attacks but not to other lossy compression attacks such as Block Truncation Coding (BTC) and Singular Value Decomposition (SVD). Therefore, we improved Cheddad et al.'s method to give better protection against BTC and SVD compression attacks.
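For illustration only, a generic spread-spectrum DCT watermark and a JPEG-recompression attack can be scripted as below; this is not Cheddad et al.'s self-embedding method, and the coefficient band, strength and file name are arbitrary assumptions:

```python
import io
import numpy as np
from PIL import Image
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)

def embed(gray, strength=8.0):
    coeffs = dctn(gray.astype(float), norm="ortho")
    wm = rng.choice([-1.0, 1.0], size=1000)       # random watermark signs
    flat = coeffs.ravel()
    flat[100:1100] += strength * wm               # an arbitrary fixed band
    return idctn(flat.reshape(coeffs.shape), norm="ortho"), wm

def detect(gray, wm):
    band = dctn(gray.astype(float), norm="ortho").ravel()[100:1100]
    return np.corrcoef(band, wm)[0, 1]            # correlation detector

img = np.asarray(Image.open("test.png").convert("L"))   # hypothetical file
marked, wm = embed(img)
buf = io.BytesIO()
Image.fromarray(np.clip(marked, 0, 255).astype(np.uint8)).save(
    buf, format="JPEG", quality=50)               # the compression attack
print("detector response:", detect(np.asarray(Image.open(buf)), wm))
```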
15
Selective low bit-rate image compression using wavelet transform
EN
A novel image compression approach reaching very low bit rates by utilizing region-of-interest information in the transform domain of the discrete wavelet transform (DWT) is introduced. The content-dependent image coding scheme yields improved quality in the regions of interest, while the background regions distant from the center of interest are smooth. Coding distortions are visible at very low bit rates outside the regions of interest, which is tolerable for the purpose of content-dependent image compression. The significance of particular image regions can be selected interactively or determined by appropriate image segmentation schemes. Qualitative results are presented for several compression ratios, and a comparison with JPEG compression is made.
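The ROI idea can be sketched with PyWavelets as below: detail coefficients outside the region of interest are quantized much more coarsely. The wavelet, level count and step sizes are assumptions, not the paper's:

```python
import numpy as np
import pywt

def roi_compress(gray, roi_mask, levels=3, q_roi=4.0, q_bg=64.0):
    coeffs = pywt.wavedec2(gray.astype(float), "haar", level=levels)
    out = [coeffs[0]]                   # approximation band kept untouched
    for i, details in enumerate(coeffs[1:]):      # coarsest details first
        scale = 2 ** (levels - i)
        new = []
        for d in details:
            m = roi_mask[::scale, ::scale][:d.shape[0], :d.shape[1]]
            q = np.where(m, q_roi, q_bg)    # fine step in ROI, coarse outside
            new.append(np.round(d / q) * q) # uniform scalar quantization
        out.append(tuple(new))
    return pywt.waverec2(out, "haar")[:gray.shape[0], :gray.shape[1]]
```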
16
EN
The idea of cancelable biometrics is widely used nowadays for user authentication. It is based on encrypted or intentionally distorted templates. These templates can be used for user verification while keeping the original user biometrics safe. Multiple biometric traits can be used to enhance the security level; these traits can be merged together for cancelable template generation. In this paper, a new system for cancelable template generation is presented, based on discrete cosine transform (DCT) merging and joint photographic experts group (JPEG) compression concepts. The DCT has an energy compaction property: the low-frequency quartile in the DCT domain maintains most of the image energy. Hence, the first quartile of each of the four biometrics for the same user is kept and the other quartiles are removed. All kept coefficients from the four biometric images are concatenated to form a single template. JPEG compression of this single template with a high compression ratio induces some intended distortion in the template. Hence, it can be used as a cancelable template for the user, derived from four of the user's biometric traits. It can be changed according to the arrangement of the biometric quartiles and the compression ratio used. The proposed system has been tested by merging face, palmprint, iris, and fingerprint images. It achieves a high user verification accuracy of up to 100%. It is also robust in the presence of noise.
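A simplified sketch of the described pipeline, assuming equally sized grayscale trait images; the quality setting and the 8-bit normalization are assumptions:

```python
import io
import numpy as np
from PIL import Image
from scipy.fft import dctn

def cancelable_template(traits, quality=10):
    # traits: four equally sized grayscale images
    # (face, palmprint, iris, fingerprint)
    quarters = []
    for t in traits:
        c = dctn(t.astype(float), norm="ortho")
        h, w = c.shape
        quarters.append(c[:h // 2, :w // 2])      # low-frequency quarter
    merged = np.hstack(quarters)
    # scale to 8 bits so the template can be JPEG-distorted and stored
    lo, hi = merged.min(), merged.max()
    img = ((merged - lo) / (hi - lo) * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf))            # the cancelable template
```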
17
A review of image and video coding standards
EN
We present the most popular image and video coding standards, concentrating on lossy compression techniques and on ISO and ITU-T standards. The JPEG image compression standard is described, and its successor, the currently developed JPEG-2000, is mentioned. For video, there are several standards aimed at different applications. We describe them starting with ISO MPEG-1 and continuing with MPEG-2, MPEG-4, H.261 and H.263.
PL
The article presents a lossy image compression algorithm based on linear approximation. Compression results for sample bitmaps are discussed, and conclusions are drawn about the usefulness of this algorithm for certain kinds of images.
EN
This paper presents a lossy image compression algorithm based on linear approximation. The compression results for example bitmaps are discussed, and conclusions are drawn about the usefulness of the algorithm for certain kinds of images.
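Since the abstract does not specify the scheme, the following is only a toy sketch of one form of compression by linear approximation: each image row is replaced by straight-line segments kept within an error tolerance.

```python
import numpy as np

def approx_row(row, tol=8.0):
    # greedy: extend each segment while the straight line stays within tol
    pts, start = [0], 0
    for end in range(2, len(row)):
        x = np.arange(start, end + 1)
        line = np.interp(x, [start, end], [row[start], row[end]])
        if np.max(np.abs(row[start:end + 1] - line)) > tol:
            pts.append(end - 1)
            start = end - 1
    pts.append(len(row) - 1)
    return pts              # only breakpoints (x, row[x]) need storing

def reconstruct_row(row, pts):
    return np.interp(np.arange(len(row)), pts, [row[p] for p in pts])
```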
EN
Although both the JPEG and JPEG2000 compression methods are based on properties of human visual perception, classical quality metrics are commonly used to evaluate them. In this paper, S-CIELAB filtering and the associated metric are applied to evaluate the quality of compression in the JPEG standards, measured as a frequency-weighted signal-to-noise ratio.
PL
Although both JPEG and JPEG2000 compression are based on certain features of human perception, simple metrics are most often used to assess image quality. In this article, a frequency-weighted signal-to-noise ratio using S-CIELAB filtering is used to assess the quality of compression with the JPEG standards.
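A heavily simplified stand-in for this kind of frequency-weighted measure is sketched below; the real S-CIELAB pipeline (opponent color channels, per-channel spatial CSF filters, CIELAB difference) is considerably more involved than this single Gaussian weighting:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weighted_snr_db(ref, test, sigma=1.5):
    # low-pass the error with a Gaussian as a crude stand-in for the
    # contrast-sensitivity filtering used by S-CIELAB
    err = gaussian_filter(ref.astype(float) - test.astype(float), sigma)
    sig = gaussian_filter(ref.astype(float), sigma)
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(err ** 2))
```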
20
EN
In this paper the authors propose a new low-complexity approximation of the 8-point discrete cosine transform (DCT) that requires 18 additions and two bit-shift operations. It is shown that the proposed transform significantly outperforms the known transform of the same computational complexity when applied to a JPEG compression stream in practical cases of encoding and decoding of still images. As such, the proposed transform can be effectively used in practical applications where significant limitations exist on the computational capabilities of coding and/or decoding devices, e.g. mobile devices or industrial imaging devices.
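The proposed 18-addition transform itself is not reproduced here; the sketch below only illustrates the general approximate-DCT idea using the classical signed DCT, whose forward pass needs no multiplications (the diagonal scaling folds into the quantization step):

```python
import numpy as np
from scipy.fft import dct

C = dct(np.eye(8), axis=0, norm="ortho")   # exact 8-point DCT-II matrix
T = np.sign(C)                             # signed DCT: entries are +/-1
D = np.diag(1.0 / np.sqrt((T * T).sum(axis=1)))  # row-norm scaling only;
                                           # the signed DCT is not orthogonal

def approx_dct_2d(block):                  # multiplication-free up to D
    return D @ T @ block @ T.T @ D

x = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
exact = dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")
print(np.corrcoef(exact.ravel(), approx_dct_2d(x).ravel())[0, 1])
```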