Search results
Keyword: human visual system
Results found: 10
EN
Modern medical imaging techniques produce huge volumes of data from the stacks of images generated in a single examination. Several volumetric compression techniques have been proposed to compress them. The performance of these compression schemes can be improved further by exploiting the anatomical symmetry present in medical images and incorporating the characteristics of the human visual system. In this paper a volumetric medical image compression algorithm is presented in which a perceptual model is integrated with a symmetry-based lossless scheme. The symmetry-based lossless and perceptually lossless algorithms were evaluated on a set of three-dimensional medical images. Experimental results show that the symmetry-based perceptually lossless coder gives an average improvement of 8.47% in bits per pixel over the lossless scheme, without any perceivable degradation in visual quality.
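The symmetry idea above can be sketched in a few lines: predict one half of a (roughly) bilaterally symmetric slice from the mirrored other half, and keep only the residual that a symmetry-based coder would entropy-code. This is a simplified illustration, not the paper's exact scheme:

```python
import numpy as np

def symmetry_residual(image):
    """Predict the right half of a slice from the mirrored left half
    and return the residual. The smaller (more symmetric) the image,
    the cheaper the residual is to entropy-code."""
    img = np.asarray(image, dtype=float)
    half = img.shape[1] // 2
    left = img[:, :half]
    right = img[:, img.shape[1] - half:]
    return right - left[:, ::-1]

# For a perfectly symmetric row the residual vanishes entirely.
perfect = np.array([[1, 2, 3, 3, 2, 1]])
print(symmetry_residual(perfect))  # all zeros
```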
EN
In this paper we emphasize a similarity between the logarithmic type image processing (LTIP) model and the Naka–Rushton model of the human visual system (HVS). LTIP is a derivation of logarithmic image processing (LIP), which further replaces the logarithmic function with a ratio of polynomial functions. Based on this similarity, we show that it is possible to present a unifying framework for the high dynamic range (HDR) imaging problem, namely, that performing exposure merging under the LTIP model is equivalent to standard irradiance map fusion. The resulting HDR algorithm is shown to provide high quality in both subjective and objective evaluations.
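The Naka–Rushton response underlying this similarity is itself a ratio of polynomials, which is what makes the link to LTIP natural. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def naka_rushton(intensity, sigma=0.18, n=1.0, r_max=1.0):
    """Naka-Rushton response: a saturating ratio of polynomials.
    sigma is the semi-saturation constant (response equals r_max/2
    when intensity == sigma); sigma, n, r_max here are illustrative."""
    i_n = np.power(intensity, n)
    return r_max * i_n / (i_n + sigma**n)

# The response saturates: at the semi-saturation point it is exactly 0.5,
# and it approaches but never reaches r_max for bright inputs.
print(naka_rushton(0.18))
```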
EN
Perceptual quality assessment of 3D triangular meshes is crucial for a variety of applications. In this paper, we present a new objective metric for assessing the visual difference between a reference triangular mesh and its distorted version produced by lossy operations, such as noise addition, simplification, compression and watermarking. The proposed metric is based on the measurement of the distance between curvature tensors of the two meshes under comparison. Our algorithm uses not only tensor eigenvalues (i.e., curvature amplitudes) but also tensor eigenvectors (i.e., principal curvature directions) to derive a perceptually-oriented tensor distance. The proposed metric also accounts for the visual masking effect of the human visual system, through a roughness-based weighting of the local tensor distance. A final score that reflects the visual difference between two meshes is obtained via a Minkowski pooling of the weighted local tensor distances over the mesh surface. We validate the performance of our algorithm on four subjectively-rated visual mesh quality databases, and compare the proposed method with state-of-the-art objective metrics. Experimental results show that our approach achieves high correlation between objective scores and subjective assessments.
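The final pooling step described above can be illustrated as follows. The weights stand in for the paper's roughness-based masking weights, and the Minkowski exponent p is illustrative:

```python
import numpy as np

def minkowski_pool(local_distances, weights, p=3.0):
    """Pool weighted local tensor distances into one global score.
    A large p lets strong local distortions dominate the score;
    p -> 1 approaches a plain weighted mean."""
    d = np.asarray(local_distances, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (np.sum(w * d**p) / np.sum(w)) ** (1.0 / p)

# Pooling uniform distances returns that distance unchanged.
print(minkowski_pool([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))
```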
4
Bayesian Segmentation Based Local Geometrically Invariant Image Watermarking
EN
Robust digital watermarking has been an active research topic over the last decade. As one of the promising approaches, feature-point-based image watermarking has attracted many researchers. However, related work usually suffers from the following limitations: 1) the feature point detector is sensitive to textured regions, where spurious feature points are often detected; 2) the feature points concentrate in high-contrast regions and are therefore distributed unevenly. Based on Bayesian image segmentation, we propose a local geometrically invariant image watermarking scheme with good visual quality. Firstly, Bayesian image segmentation is used to segment the host image into several homogeneous regions. Secondly, for each homogeneous region, image feature points are extracted using the multiscale Harris-Laplace detector, and the corresponding invariant local image regions are constructed adaptively. Finally, taking the human visual system (HVS) into account, the digital watermark is repeatedly embedded into the local image regions by modulating the magnitudes of DFT coefficients. By binding the digital watermark to the invariant local image regions, watermark detection can be performed without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition, and JPEG compression, but also robust against geometric distortions.
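The magnitude-modulation step can be sketched with a common quantisation-based rule; the paper's exact modulation rule may differ, and the coefficient position and step size below are illustrative:

```python
import numpy as np

def embed_bit_in_magnitudes(block, bit, delta=4.0):
    """Embed one watermark bit by quantising a low-frequency DFT
    magnitude so its quantiser-index parity matches the bit."""
    spec = np.fft.fft2(np.asarray(block, dtype=float))
    mag, phase = np.abs(spec), np.angle(spec)
    q = int(np.round(mag[1, 1] / delta))
    if q % 2 != bit:          # force index parity to carry the bit
        q += 1
    mag[1, 1] = q * delta
    # keep conjugate symmetry so the inverse transform stays real
    mag[-1, -1] = mag[1, 1]
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

# Detection just reads the parity back from the marked block's spectrum.
block = np.arange(64.0).reshape(8, 8)
marked = embed_bit_in_magnitudes(block, 1)
detected = int(np.round(np.abs(np.fft.fft2(marked)[1, 1]) / 4.0)) % 2
print(detected)  # 1
```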
EN
This paper proposes a human visual system based data hiding method that takes the local complexity of images into account. Since human vision is more sensitive to changes in smooth areas than in complex ones, we embed less data into blocks with low complexity and more data into blocks with rich texture. We use modified diamond encoding (MDE) as the embedding technique, and employ a sophisticated pixel-pair adjustment process to keep the complexity of each block consistent before and after embedding the data bits. Since the proposed method is robust to LSB-based steganalysis, it is more secure than existing methods that use LSB replacement as their embedding technique. The experimental results reveal that the proposed method not only offers better embedding performance, but is also secure under attack by LSB-based steganalysis tools.
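The complexity-adaptive capacity rule can be sketched as follows; the standard deviation and the thresholds are illustrative stand-ins for the paper's complexity measure:

```python
import numpy as np

def block_capacity(block, thresholds=(5.0, 20.0), bits=(1, 2, 3)):
    """Choose an embedding capacity for a pixel block from its local
    complexity. Smooth blocks (low std) get fewer bits, because the
    eye notices changes there; textured blocks get more."""
    complexity = float(np.std(block))
    if complexity < thresholds[0]:
        return bits[0]
    if complexity < thresholds[1]:
        return bits[1]
    return bits[2]

smooth = np.full((4, 4), 100.0)                 # flat -> low capacity
textured = np.arange(16.0).reshape(4, 4) * 10   # high std -> high capacity
print(block_capacity(smooth), block_capacity(textured))  # 1 3
```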
PL
The statistics of natural images, defined as unprocessed images registered by a human observer, exhibit striking regularity. Their properties are exploited in many computer graphics applications such as denoising and compression. This article presents an algorithm for fast computation of higher-order statistics from wavelet coefficients using a programmable graphics processor. The results report the speedup obtained with the GPU implementation compared to a CPU implementation.
EN
A natural image is an unprocessed reproduction of a natural scene observed by a human. The Human Visual System (HVS) has, over the course of its evolution, adapted to the information encoded in natural images. Computer images are interpreted best by a human when they fit natural image statistics, which model the information in natural images. The main property of such statistics is their striking regularity. It helps separate information from noise and reconstruct information that is unavailable, or only partially available, in an image. Other applications of the statistics include compression, texture synthesis, and estimating distortion models in an image, such as a blur kernel. The statistics are translation and scale invariant, so their distribution does not depend on the position of an object in the image or on its size. In this paper, GPU-based calculations of higher-order natural image statistics are presented. A characteristic of these statistics is that they are independent of scale and rotation transformations, which makes them suitable for many graphics applications. Images are analyzed with statistics computed in the wavelet domain, and image contrast is taken into account. The computation speedup is presented in the results. The paper is organized as follows: an overview of natural image statistics is given in Section 2; Section 3 describes the GPU-based implementation; the obtained results are given in Section 4; finally, concluding remarks are presented.
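The higher-order statistics in question can be sketched with a one-level Haar detail band and its skewness and kurtosis; this is a minimal CPU illustration of the kind of quantity the paper computes on the GPU, not its implementation:

```python
import numpy as np

def haar_detail(image):
    """One-level horizontal Haar detail coefficients (normalised
    differences of neighbouring column pairs) -- a minimal stand-in
    for a full wavelet decomposition."""
    img = np.asarray(image, dtype=float)
    return (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)

def higher_order_stats(coeffs):
    """Skewness and kurtosis of the coefficient distribution.
    Kurtosis is 3 for a Gaussian; natural-image wavelet coefficients
    are typically heavier-tailed (kurtosis > 3)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    c = c - c.mean()
    s = c.std()
    skew = np.mean(c**3) / s**3
    kurt = np.mean(c**4) / s**4
    return skew, kurt
```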
7
Optimal polar image sampling
EN
In this paper, the problem of efficient image sampling (deployment of image sensors) is considered. The problem is solved using techniques of two-dimensional quantization in polar coordinates, taking the human visual system (HVS) and the eye sensitivity function into account. The optimal radial compression function for polar quantization is derived, and the number of phase levels for each amplitude level is optimized. Using the optimal radial compression function and the optimal number of phase levels for each amplitude level, optimal polar quantization is defined. The deployment of quantization cells for the optimal polar quantization determines the deployment of image sensors, yielding optimal polar image sampling. It is shown that this solution has many advantages over the log-polar sampling in use today. The optimal polar sampling gives a higher SNR (signal-to-noise ratio) than log-polar sampling for the same number of sensors, and requires fewer sensors to achieve the same SNR. Furthermore, with the optimal polar sampling, points in the middle of the image can be sampled, which is not possible with log-polar sampling. This is important because the human eye is most sensitive at these points, so the optimal polar sampling also gives better subjective quality.
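The key advantage over log-polar sampling, coverage of the image middle, can be illustrated by comparing ring radii. The compression function g below is illustrative, not the optimum derived in the paper:

```python
import numpy as np

def log_polar_radii(n_rings, r_min=1.0, r_max=64.0):
    """Log-polar ring radii: geometrically spaced, so the region
    r < r_min around the image centre is never sampled."""
    return r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))

def compressed_radii(n_rings, r_max=64.0, g=lambda u: u**0.5):
    """Ring radii from a radial compression function g: [0,1]->[0,1].
    Any g with g(0) = 0 places a ring at the image centre."""
    u = np.arange(n_rings) / (n_rings - 1)
    return r_max * g(u)

print(log_polar_radii(5)[0])   # centre excluded (innermost radius > 0)
print(compressed_radii(5)[0])  # centre included (innermost radius = 0)
```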
EN
Research on human reading technique and text perception strategies needs objective evaluation methods. We propose using scan-path recording and eye-track processing results as estimators of reading skills and of the capability of error compensation. Visual tasks, the principal investigation tool, were prepared by arranging texts for presentation, acquiring eye trajectories, and assessing the degree of comprehension. Our results show that gaze-point statistics represent the observer's performance and fast-reading skills well, even in free-spotting visual tasks. Additionally, we reveal a very high human tolerance for errors, outperforming any known optical character recognition software.
9
EN
A new segmentation technique suitable for object-based coding schemes is proposed in this paper. Focusing on DCT frequency coefficients, the proposed technique assigns each frequency coefficient a rate value according to the importance of its edge information. In addition, it weights each frequency coefficient using frequency sensitivity and edge contrast information, together with the properties of the human visual system. As a result, the proposed segmentation technique can effectively reveal the significant edges that make up the shape of objects, and thus facilitates the extraction of meaningful objects.
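The HVS-based frequency weighting can be sketched with a toy sensitivity fall-off; the exponential decay and the constant f0 are illustrative, not the paper's model:

```python
import numpy as np

def hvs_weights(n=8, f0=3.0):
    """Toy frequency-sensitivity weights for an n x n DCT block:
    the weight falls off with the radial frequency index, a crude
    stand-in for a contrast sensitivity function."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    radial = np.sqrt(u**2 + v**2)
    return np.exp(-radial / f0)

w = hvs_weights()
# The DC / low-frequency coefficients are weighted more heavily
# than the high-frequency corner of the block.
print(w[0, 0] > w[7, 7])  # True
```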
10
The universal quality index for medical images
EN
The aim of this paper is to propose a new quality index which measures the distance between a reference (source) image and its corrupted copy in the way the Human Visual System (HVS) does. The new quality index, called the Mean Weighted Quality Index (MW), is defined with the help of well-known, easily calculated indexes. Experiments performed on a number of medical images confirmed the usefulness of the new index.
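One of the well-known, easily calculated indexes that such a measure can build on is the Wang–Bovik universal quality index; the exact combination and weights used by the MW index are defined in the paper itself:

```python
import numpy as np

def uqi(x, y):
    """Wang-Bovik universal quality index of two equal-size images:
    1.0 for identical images, lower for any distortion. It jointly
    penalises loss of correlation, luminance shift, and contrast change."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

a = np.arange(16.0).reshape(4, 4)
print(uqi(a, a))  # 1.0 for a perfect copy
```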