Results found: 8

Search results
Searched in keywords: image-based rendering
1
Content available Per-pixel extrusion mapping with correct silhouette
EN
Per-pixel extrusion mapping consists of creating virtual geometry, stored in a texture, over a polygon model without increasing its density. There are four types of extrusion mapping: basic extrusion, outward extrusion, beveled extrusion, and chamfered extrusion. These techniques produce satisfactory results on planar surfaces; on curved surfaces, however, no silhouette is visible at the edges of the extruded forms on the 3D surface geometry, because they do not take the curvature of the 3D mesh into account. In this paper, we present an improvement that uses curved ray-tracing to correct the silhouette problem by combining per-pixel extrusion-mapping techniques with a quadratic approximation computed at each vertex of a 3D mesh.
EN
Image Based Rendering (IBR) is one of the most efficient approaches to real-time computer visualisation. Applying the warping equation, which is the essence of this method, it is possible to render the image observed by a virtual camera given the image and depth map taken by another camera, or by cameras located at different positions. Nevertheless, depending on the geometrical configuration of the reference and destination (virtual) cameras, some holes may be observed in the destination image. They arise because some fragments of objects visible from the destination camera may not be present in the reference images. A typical approach to filling such holes is splatting, but typical algorithms usually cause a loss of detail. Using sub-pixel IBR based on the combination of images taken from two or more cameras, the problem of missing data can be partially solved through 2D or 3D interpolation, in the latter case also considering the depth values of the projected points. The results obtained with the proposed approach have been verified using several recently proposed full-reference image quality assessment methods, with a synthetic image of the 3D object as the reference image.
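The warping step described above can be sketched in a few lines of numpy. This is a minimal per-pixel version under the usual pinhole model; the matrix names and the function itself are illustrative assumptions, not the paper's actual notation:

```python
import numpy as np

def warp_pixel(u, v, depth, K_ref, K_dst, R, t):
    """Forward-warp one reference pixel into the destination (virtual) view.

    K_ref, K_dst: 3x3 intrinsic matrices; R, t: rotation and translation
    from the reference to the destination camera frame (illustrative names).
    """
    # Back-project the pixel to a 3D point in the reference camera frame.
    ray = np.linalg.inv(K_ref) @ np.array([u, v, 1.0])
    X_ref = depth * ray
    # Transform into the destination frame and re-project.
    X_dst = R @ X_ref + t
    p = K_dst @ X_dst
    return p[0] / p[2], p[1] / p[2]
```

Pixels of the destination image that no reference pixel maps to are exactly the holes the abstract discusses; with two or more reference cameras, several warped samples per destination pixel enable the sub-pixel interpolation mentioned above.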
EN
In this paper, the application of Image Based Rendering as a supplementary method for PCA-based face recognition is discussed. The presented results are based on synthetic images of side views of human faces obtained from 3D models and 300 faces taken from the FERET database. The application of Image Based Rendering allows the use of en face images rendered from two side views, so the recognition accuracy can be improved.
PL (translated)
The article discusses the application of Image Based Rendering (IBR) as a supplementary technique useful in face recognition based on Principal Component Analysis (PCA). A typical application of IBR is the fast synthesis of an image of quality comparable to the reference image, based on information obtained from a real camera located at a position different from that of the target virtual camera. Knowledge of the depth map of the reference image is also necessary for such synthesis. Images obtained this way can be particularly useful when they must be compared against templates stored in a database, which is typical of classification and pattern recognition methods, including image recognition. The presented results were obtained from synthetic images of faces observed from the side and 300 faces taken from the FERET database. PCA was chosen as a representative face recognition technique allowing the additional use of IBR, and a noticeable improvement in recognition accuracy was obtained with the proposed method. The application of IBR makes it possible to use frontal face images rendered even from a single reference camera image, which increases face recognition accuracy. Using images from two side cameras requires precise registration and compensation for illumination effects.
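The PCA (eigenface) stage that such a pipeline relies on can be sketched as follows; this is a generic eigenface computation, not the paper's implementation, and all names and shapes are illustrative:

```python
import numpy as np

def fit_eigenfaces(images, n_components):
    """PCA over a stack of flattened face images (one image per row)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of Vt are the principal components (eigenfaces).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def project(image, mean, components):
    """Coefficients of a face in the eigenface basis; recognition then
    compares these coefficient vectors, e.g. by nearest neighbour."""
    return components @ (image - mean)
```

Rendering an en face view with IBR before projecting it simply normalises the pose, so the gallery and probe coefficients come from comparable viewpoints.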
PL (translated)
The paper concerns the algorithmic foundations of a free-viewpoint 3D television system (FVP-3D-TV). Improvements are proposed for several basic system components: essential matrix identification, disparity map construction, and spatial navigation as an element of the user interface. Using the Levenberg-Marquardt (LM) optimization method together with an epipolar angular factorisation of the essential matrix allowed a 90% error reduction compared with the initial model obtained by the eight-point algorithm. Disparity map construction is supported by polar rectification, obtained by a linear transformation in the image domain. This reduces distortion compared with the rectification technique that maps the epipoles to infinity. Finally, a user-friendly 3D navigation model is proposed, organised around the epipolar baselines of the real cameras. This solution allows smooth view switching between cameras, and a virtual camera trajectory crossing a baseline does not generate noticeable artifacts despite the matrix singularities that arise.
EN
A general scheme of a free-viewpoint 3D television system (FVP-3D-TV) is considered. It is based on image-based rendering and the epipolar geometry of cameras. Several enhancements are proposed for the system's basic modules: essential matrix identification, disparity map construction, and a 3D navigation model for the user interface. The epipolar angular factorisation of the essential matrix is used for nonlinear least-squares optimization; it reduces the error by about 90% with respect to the initial model obtained by the eight-point algorithm. Disparity map construction is supported by polar rectification, produced by a 2D linear transformation of the image domains; for the camera setups applied in FVP-3D-TV systems, it exhibits less distortion than rectification by mapping the epipoles to infinity. Finally, a user-friendly 3D navigation model for the GUI is proposed, organised around the baselines of the real cameras. Despite the singularity of the essential matrix equations, the trajectory of the virtual camera can intersect baselines without noticeable artifacts, and smooth switching between cameras is provided.
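The eight-point initial estimate that the paper refines can be sketched as a linear least-squares problem; this is the textbook version with numpy, assuming normalized image coordinates, and the LM refinement over the angular factorisation is not shown:

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Linear eight-point estimate of the essential matrix from N >= 8
    normalized correspondences x1[i] <-> x2[i] (each a 2-vector).

    Each pair contributes one row of the epipolar constraint
    x2^T E x1 = 0, linear in the nine entries of E.
    """
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    # Null vector of A (smallest singular value) gives E up to scale.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix structure: two equal singular
    # values and one zero singular value.
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt
```

In the system described above, this linear solution serves only as the starting point; the reported 90% error reduction comes from the subsequent LM optimization over the factorised parameters.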
5
Content available remote The motion of impostors
EN
We describe a method that brings impostor-based environments to life. In a typical scene, supporting objects are rendered as two-dimensional texture maps that always face the camera. Billboards reduce object complexity to a great extent, but in such a representation the spatial properties of the depicted objects are lost. Billboards are usually kept motionless in order to save video memory. In our technique, we introduce 2.5D morphing that respects the memory footprint. A minimum of two textures is required to animate a billboard. Moreover, the whole process is automated and exploits a programmable GPU; as a result, the main application overhead is reduced. The method is designed for vegetation modeling but can easily be extended to far- and middle-distance shots of humans.
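The camera-facing property of a billboard reduces to building an orthonormal basis per impostor each frame. A minimal CPU-side sketch, with illustrative names (the paper does this on the GPU):

```python
import numpy as np

def billboard_basis(billboard_pos, camera_pos, world_up=np.array([0.0, 1.0, 0.0])):
    """Orthonormal basis whose forward axis points from the billboard to
    the camera, so the textured quad always faces the viewer.

    Degenerate when the view direction is parallel to world_up.
    """
    forward = camera_pos - billboard_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack([right, up, forward])  # columns: x, y, z axes
```

The quad's corners are expressed in this basis before projection; the 2.5D morphing described above then blends between at least two such textured quads to animate the impostor.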
6
Content available remote 3D plenoptic representation of a linear scene
EN
This paper presents a novel 3D plenoptic function. We constrain camera motion to a line and create a linear mosaic using a manifold mosaic. The plenoptic function is represented with three parameters: the camera position along the axis, the angle between the ray and the centric axis, and the rotation angle in the vertical plane. Novel views are rendered by efficiently combining the appropriate captured rays at rendering time. Like panoramas, our method does not require the recovery of geometric and photometric scene models. Moreover, it provides a much richer user experience by allowing the user to move freely in a linear region and observe significant parallax and lighting changes. Compared with either the Lightfield or the Lumigraph, it has a much smaller file size because only a 3D plenoptic function is constructed. Finally, an experiment with a synthetic environment is given to demonstrate its efficiency in the capture, construction, and rendering of a linear scene.
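The three-parameter indexing can be sketched as a mapping from a ray to (t, theta, phi). The axis layout below (camera path along x, y vertical) is an illustrative assumption, not the paper's exact convention:

```python
import numpy as np

def ray_to_params(origin, direction):
    """Map a ray to the (t, theta, phi) parameters of a 3D plenoptic
    function for a camera constrained to the x-axis.

    t: position along the path; theta: horizontal angle between the ray
    and the path axis; phi: elevation in the vertical plane.
    """
    t = origin[0]
    theta = np.arctan2(direction[2], direction[0])
    phi = np.arctan2(direction[1], np.hypot(direction[0], direction[2]))
    return t, theta, phi
```

Rendering then amounts to looking up, for each output pixel, the stored ray whose (t, theta, phi) is closest to the requested one, which is why no geometric scene model is needed.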
7
Content available remote Warping-based interactive visualization on PC
EN
Image-based rendering produces realistic-looking 3D graphics at relatively low cost. In this paper, an original post-warping rendering system that uses more than two sample views to derive a new view is presented. Owing to warp-based compression and incremental computation, the computational expense is no greater than that of conventional two-image synthesis approaches. The procedure consists of three steps. First, a set of sample images is selectively acquired with conventional geometry rendering, volume rendering, or from photographs of the real scene. Next, each neighboring image pair is compressed by a warping transformation based on the redundant pixels between them. Finally, the compressed sample images are directly re-projected to produce new images. To further improve speed, an incremental warping flow that is computationally less expensive is developed. With the method described above, animation faster than fifty frames (300×300) per second is achieved on a PC.
8
Content available remote Light field rendering of dynamic scene
EN
Image-based rendering has shown a speed advantage over traditional geometry-based rendering algorithms. With the four-dimensional light field description, a static scene can be rendered at interactive rates on an ordinary computer. The limitation of the scheme is that it can handle only a static scene and fixed illumination. This paper proposes decomposing the light field into sub-light-fields that do not change as the scene changes, extending the advantage of light field rendering to dynamic scenes where the position and orientation of objects, lights, and the viewpoint can be modified arbitrarily. The sub-light-fields are the ambient light field and the spot light field; the latter is actually an eight-dimensional space. Because diffuse reflection is independent of the view direction, this paper presents a four-dimensional representation of the spot light field. Considering the linearity of diffuse reflection with respect to different spot lights, the spot light fields of an object can be represented by the reflection light field for a pure-color light of unit intensity, which decreases storage and preprocessing. Owing to the coherency in their data structures, the data of corresponding points in the ambient light field, diffuse light field, and depth field are combined into a 5-dimensional vector that can be compressed efficiently with vector quantization. The algorithm given in this paper accurately computes typical characteristics of a dynamic scene such as changes in surface color and shadow.
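The linearity argument above means recombining the precomputed sub-light-fields at render time is a single scaled sum. A minimal sketch with illustrative array shapes (per-ray RGB samples; the real representation is the 4D fields described in the abstract):

```python
import numpy as np

def shade_dynamic(ambient_lf, diffuse_unit_lf, light_color, light_intensity):
    """Recombine precomputed sub-light-fields for the current lighting.

    diffuse_unit_lf holds the reflection under a unit-intensity white
    spot light; by linearity of diffuse reflection it is simply scaled
    by the actual light colour and intensity, then added to the ambient
    term. Shapes: ambient (H, W, 3), diffuse (H, W, 1), colour (3,).
    """
    return ambient_lf + light_intensity * light_color * diffuse_unit_lf
```

Because only this scaling depends on the current light, moving or recolouring the light requires no re-sampling of the stored fields, which is what makes the scheme dynamic.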