3D models play an increasing role in today's computer applications. As a result, there is a need for flexible and easy-to-use measuring devices that produce 3D models of real-world objects. 3D scene reconstruction is a quickly evolving field of computer vision that aims to create 3D models from images of a scene. Although many problems of the reconstruction process have been solved, the use of photographs as an information source involves some practical difficulties, so accurate and dense 3D reconstruction remains a challenging task. We discuss dense matching of surfaces in the case when the images are taken from a wide-baseline camera setup. Some recent studies use a region-growing-based dense matching framework and improve accuracy by estimating the apparent distortion with local affine transformations. In this paper we present a way of using pre-calculated calibration data to improve precision, and we demonstrate that the new method produces a more accurate model.
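The core operation in such region-growing dense matchers is scoring a candidate correspondence by comparing a patch in one image against an affinely warped patch in the other. The sketch below is a minimal illustration of that idea, not the paper's actual method: grayscale images, nearest-neighbour sampling, and hypothetical function names are all our own assumptions.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def warp_patch(img, center, A, half):
    """Sample a (2*half+1)^2 patch around `center` (x, y), with a local
    2x2 affine matrix A modelling the apparent distortion between views
    (nearest-neighbour sampling for brevity)."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    coords = A @ np.vstack([xs.ravel(), ys.ravel()])   # affine offsets
    px = np.clip(np.round(center[0] + coords[0]).astype(int), 0, img.shape[1] - 1)
    py = np.clip(np.round(center[1] + coords[1]).astype(int), 0, img.shape[0] - 1)
    return img[py, px].reshape(ys.shape)

# A region grower would call ncc(warp_patch(img1, p1, np.eye(2), h),
# warp_patch(img2, p2, A_est, h)) and accept the match above a threshold.
```

An identity matrix for `A` reduces this to plain fixed-window matching; the affine correction matters precisely in the wide-baseline case the abstract discusses.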
Motion tracking is an important step in the analysis of flow image sequences. However, Digital Particle Image Velocimetry (DPIV) methods rarely use tracking techniques developed in computer vision: FFT and correlation are usually applied. Two major types of motion estimation algorithms exist in computer vision, namely optical flow and feature-based methods. Promising results have recently been obtained with optical flow techniques. In this paper, we examine the applicability of feature tracking algorithms to digital PIV. Two feature-based tracking algorithms and one optical-flow-based algorithm are compared. Flow measurement and visualisation results for standard DPIV sequences are presented.
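For context, the standard FFT/correlation approach that the abstract contrasts against can be sketched in a few lines: the displacement of particles between two interrogation windows is taken as the peak of their circular cross-correlation, computed via the FFT. This is a generic illustration of the baseline technique, not code from the paper; it assumes integer displacements (real DPIV codes add sub-pixel peak interpolation).

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer (dx, dy) displacement between two interrogation
    windows by FFT-based circular cross-correlation."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.real(np.fft.ifft2(fa.conj() * fb))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative displacements (FFT wrap-around).
    dy = peak[0] if peak[0] <= win_a.shape[0] // 2 else peak[0] - win_a.shape[0]
    dx = peak[1] if peak[1] <= win_a.shape[1] // 2 else peak[1] - win_a.shape[1]
    return dx, dy
```

Feature-based trackers instead detect distinctive points (corners, particle blobs) and match them individually, which is what the paper evaluates against this correlation baseline.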
We present a novel method for creating entirely textured 3D models of real objects by combining partial texture mappings using surface flattening (surface parametrisation). Texturing a 3D model is not trivial. Texture mappings can be obtained from optical images, but usually one image is not sufficient to show the whole object; multiple images are required to cover the surface entirely. Merging partial texture mappings in 3D is difficult. Surface flattening converts a 3D mesh into 2D space while preserving its structure. Transforming optical images into flattening-based texture maps allows them to be merged based on the structure of the mesh. In this paper we describe a novel method for merging texture mappings using flattening and show its results on synthetic and real data.
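Once each optical image has been resampled into the common flattened 2D domain, merging reduces to a per-texel combination of the partial maps wherever they overlap. The fragment below is a minimal sketch of that final step under our own simplifying assumptions (single-channel texture maps, binary coverage masks, plain averaging in overlaps rather than any view-dependent weighting the paper may use):

```python
import numpy as np

def merge_flat_textures(maps, masks):
    """Merge partial texture maps already resampled into a shared flattened
    (parametrisation) domain. `maps` and `masks` are lists of equal-shaped
    2D arrays; each mask is 1 where that map covers the texel, 0 elsewhere.
    Returns the merged map and a boolean coverage mask."""
    acc = np.zeros_like(maps[0], dtype=float)
    weight = np.zeros_like(masks[0], dtype=float)
    for tex, mask in zip(maps, masks):
        acc += tex * mask          # accumulate covered texels
        weight += mask             # count contributing maps per texel
    merged = np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)
    return merged, weight > 0
```

Because all partial maps share the mesh's flattening, no 3D reasoning is needed at this stage; texels are aligned by construction.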