Search results
Results found: 2
1. EDFCES: A new example-driven 3D face construction and editing system
This paper presents an automatic runtime system for generating varied, realistic face models by synthesizing a global face shape and local facial features according to intuitive, high-level control parameters. Our method takes 3D face scans as examples in order to exploit the parameter-to-geometry correlations present in real faces. To establish correspondences among the scanned models, we use a three-step model fitting approach to conform a generic head mesh onto each scanned model. We transform the obtained data sets of global face shapes and local feature shapes into vector space representations by applying principal component analysis (PCA). We compute a set of face anthropometric measurements to parameterize the exemplary shapes in the measurement spaces. Using PCA coefficients as a compact shape representation, we approach the shape synthesis problem by forming scattered data interpolation functions that take anthropometric parameters as input and generate the desired face shape. At runtime, the interpolation functions are evaluated for the input parameter values to produce new face geometries at an interactive rate. The correspondence among all exemplary face textures is obtained by parameterizing the 3D generic mesh over a 2D image domain. The new feature texture with the desired attributes is synthesized by interpolating the example textures. The resulting system offers intuitive, fine-grained control. We demonstrate our method by applying different parameters to generate a wide range of face models.
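As an illustration of the pipeline the abstract outlines (not the authors' code), the sketch below reduces exemplar shapes to PCA coefficients and fits a Gaussian radial-basis-function interpolant mapping anthropometric measurements to those coefficients, then evaluates it for new parameter values to rebuild a face shape. All array shapes, names, and the RBF kernel width are illustrative assumptions.

```python
# Minimal sketch of PCA-based shape synthesis driven by scattered-data
# interpolation over anthropometric measurements (illustrative only).
import numpy as np

def fit_pca(shapes, n_components):
    """shapes: (n_examples, n_vertices*3) flattened exemplar geometries."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered exemplars gives the principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # (k, n_vertices*3)
    coeffs = centered @ basis.T                # (n_examples, k)
    return mean, basis, coeffs

def fit_rbf(measurements, coeffs, eps=1.0):
    """Gaussian RBF interpolant: anthropometric measurements -> PCA coefficients."""
    d = np.linalg.norm(measurements[:, None] - measurements[None, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)              # (n_examples, n_examples) kernel matrix
    weights = np.linalg.solve(phi, coeffs)     # (n_examples, k) interpolation weights
    return weights

def synthesize(params, measurements, weights, mean, basis, eps=1.0):
    """Evaluate the interpolant at new parameters and reconstruct the geometry."""
    d = np.linalg.norm(measurements - params, axis=-1)
    c = np.exp(-(eps * d) ** 2) @ weights      # interpolated PCA coefficients
    return mean + c @ basis                    # new face shape (flattened)

# Usage with random stand-in data: 20 exemplars, 100 vertices, 5 measurements each
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))
measurements = rng.normal(size=(20, 5))
mean, basis, coeffs = fit_pca(shapes, n_components=8)
weights = fit_rbf(measurements, coeffs)
new_face = synthesize(measurements[0], measurements, weights, mean, basis)
```

Because the exemplars are encoded once as low-dimensional PCA coefficients, evaluating the interpolant at runtime is only a small matrix-vector product, which is consistent with the interactive rates the abstract reports.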
2. Anatomy-based 3D facial modeling for expression animation
In this paper we propose a new hierarchical 3D facial model that conforms to human facial anatomy for realistic facial expression animation. The facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. The deformable skin model has a multilayer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different kinds of muscle models have been developed to simulate the distribution of the muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model benefits both from more accurate facial deformation and from the consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated by numerically solving the governing dynamic equation. To improve computational efficiency, we use a localized, semi-implicit integration method that allows a larger time step to be taken in the simulation while retaining stability. The dynamic facial animation algorithm runs at an interactive rate while generating flexible and realistic facial expressions.
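The sketch below is a minimal, simplified stand-in for the kind of time stepping the abstract describes: a small mass-spring "skin" patch driven by a constant "muscle" force and advanced with a semi-implicit (symplectic) Euler step, where velocities are updated from the current forces and positions from the new velocities. The paper's localized semi-implicit scheme and layered biomechanical model are considerably more involved; the spring constants, masses, damping, and muscle force here are illustrative assumptions.

```python
# Minimal semi-implicit (symplectic) Euler step for a mass-spring skin patch
# pulled by an external "muscle" force (illustrative only).
import numpy as np

def step(pos, vel, springs, rest_len, mass, ext_force, k=50.0, damping=0.5, dt=0.01):
    """Advance a particle system with linear springs by one semi-implicit step."""
    force = ext_force - damping * vel                    # external + damping forces
    for (i, j), l0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - l0) * d / max(length, 1e-9)    # Hooke spring force on particle i
        force[i] += f
        force[j] -= f
    vel = vel + dt * force / mass[:, None]               # velocities from current forces
    pos = pos + dt * vel                                 # positions from the new velocities
    return pos, vel

# Usage: a triangular patch whose apex is pulled sideways by a constant force
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
vel = np.zeros_like(pos)
springs = [(0, 1), (1, 2), (2, 0)]
rest_len = [np.linalg.norm(pos[j] - pos[i]) for i, j in springs]
mass = np.ones(3)
muscle = np.array([[0.0, 0.0], [0.0, 0.0], [0.2, 0.0]])
for _ in range(100):
    pos, vel = step(pos, vel, springs, rest_len, mass, muscle)
```

Updating positions with the already-updated velocities is what distinguishes this from explicit Euler and gives better stability at larger time steps, which is the motivation the abstract gives for its semi-implicit scheme.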