The three-year research project "GYROVIZ" addresses the challenge of automatically modeling 3D physical scenes from located frames. The central motivation of our proposal stems from the current limitations of image-based modeling systems, which require a substantial amount of user interaction to match the images. This limitation becomes even more critical when dealing with massive datasets.

While considerable effort has previously been devoted to solving this issue from a purely algorithmic point of view, we propose to address it primarily from the technological side. Our solution hinges on a set of new, accurate inertial sensors (specific fiber optic gyroscopes and accelerometers) which, once coupled with an image acquisition device, provide a very precise measure of its physical location and pose.

Both the robustness and the efficiency of image matching are substantially improved this way: the solution space of the matching problem is drastically reduced, and the matching is always provided with good initial guesses.
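As an illustration of how a known pose constrains matching, consider the epipolar constraint: given the relative rotation R and translation t between two shots (as delivered by the inertial unit), any candidate correspondence (x1, x2) in normalized image coordinates must satisfy x2ᵀ E x1 ≈ 0 with E = [t]× R. The following is a minimal sketch assuming calibrated cameras; the function names are illustrative, not the project's implementation.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_matrix(R, t):
    """Essential matrix E = [t]x R from the sensor-supplied relative pose."""
    return skew(t) @ R

def epipolar_residual(E, x1, x2):
    """|x2^T E x1| for homogeneous normalized image points.
    Near zero for a geometrically consistent correspondence, so candidate
    matches far from the epipolar line can be rejected immediately."""
    return abs(x2 @ E @ x1)
```

In practice this reduces the search for a match of x1 in the second image to a narrow band around its epipolar line, which is exactly the shrinking of the solution space described above.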

The algorithmic aspect of our proposal relies on state-of-the-art computer vision and geometric computing techniques for extracting and matching characteristic points and features across images, and for reconstructing 3D models with color attributes. Real-time reconstruction as well as feature- and symmetry-preserving reconstruction are the topics of two Ph.D. theses (CEA/LIST and INRIA).

The complete system should be mobile and portable, and should allow fully automatic 3D modeling of complex scenes. Two complementary laboratory models using the same localization unit will be developed in the framework of the project, both to acquire various datasets for algorithm testing and to enable a practical evaluation of the technology's potential with end users:

  • A 3D real time video camera tracking system for virtual studio and postproduction purposes;
  • A 3D static scene and object modeling system for multimedia and engineering applications.

The GYROVIZ project is supported by the French National Research Agency (ANR).

Key data

  • 3D scene reconstruction from pictures
  • Picture and video acquisition
  • 3D tracking
  • Real-time 3D scene reconstruction
  • Self-localization module providing 6D pose information (3D position and 3D orientation) for each image
  • Geometric feature reconstruction from point clouds

Localized Picture Acquisition

The located image acquisition principle is illustrated by the figure below. It relies on a frame acquisition system mechanically attached to a localization unit. This unit integrates inertial sensors able to provide an accurate 6D pose of the system during the shooting, as well as a computing unit able to reconstruct the shooting kinematics.

GYROVIZ - Principle diagram

Together they constitute the operative part of the system, able to provide reliable pose metadata to be associated with each image frame.

A dedicated inertial measurement unit will therefore have to be designed. It will build on state-of-the-art FOG (fiber optic gyroscope) technologies and should comply with the required performance, weight, and size constraints.
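What such a unit computes can be illustrated, in deliberately simplified form, by one strapdown dead-reckoning step: gyroscope rates update the orientation, and accelerometer readings, rotated into the world frame with gravity removed, are integrated for velocity and position. This is a hypothetical sketch assuming first-order Euler integration and a z-up gravity convention; the real unit relies on dedicated FOG hardware and far more elaborate filtering.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (assumed z-up convention)

def skew(w):
    """Cross-product matrix [w]x used for the small-angle rotation update."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def strapdown_step(R, v, p, gyro, accel, dt):
    """One Euler integration step of a strapdown inertial navigator.
    R: body-to-world rotation, v: velocity, p: position,
    gyro: body angular rate [rad/s], accel: body specific force [m/s^2]."""
    R_new = R @ (np.eye(3) + skew(gyro) * dt)   # first-order rotation update
    a_world = R @ accel + GRAVITY               # remove gravity in the world frame
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return R_new, v_new, p_new
```

A sensor at rest, for example, measures a specific force exactly opposing gravity, and the integrated position stays put; any bias in the real sensors, by contrast, grows quadratically in position, which is why high-grade FOG technology matters here.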

The robust matching capabilities targeted by the project rely on appropriately merging data from technologically advanced sensors. Special care must therefore be taken in the analysis and specification of the technology to be implemented in the localization unit.

3D Scene Reconstruction

The output of located-frame processing is a set of characteristic points. For each of these points we know its 3D coordinates (its absolute location in space), its color attributes, and the located frames used to recover the 3D information. The number of characteristic points depends both on the content of the scene (the richer the scene, the more points) and on a user-defined parameter controlling the sensitivity of the feature-point extraction process.
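The 3D coordinates of a characteristic point are recovered from its observations in two or more located frames. A minimal linear (DLT) triangulation from two views can be sketched as follows, assuming normalized camera matrices P = [R | t] built from the sensor-supplied poses; this is an illustrative sketch, not the project's actual algorithm.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two normalized views.
    P1, P2: 3x4 camera matrices [R | t]; x1, x2: (u, v) image observations.
    Each observation contributes two linear constraints on the homogeneous
    3D point X; the solution is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to absolute (x, y, z) coordinates
```

Because the poses come from the localization unit rather than from image-based pose estimation, the recovered coordinates are directly expressed in the absolute frame mentioned above.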

By interpolation, or by increasing the number of characteristic points up to the total number of pixels within each frame, the system can generate one dense depth image per frame. From these images we will extract higher-level primitives (sharp edges, planar areas) and convert them into 3D segments and polygons to be preserved during reconstruction. Lastly, the camera motion and poses recorded during scene acquisition are also considered part of the data for reconstruction.
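Extracting planar areas from such dense depth data is commonly done with a robust estimator. The following is a minimal RANSAC plane-fitting sketch, assuming a simple point-to-plane inlier threshold; the parameter values are illustrative, not the project's.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through points: returns (unit normal n, offset d)
    such that n . x + d ~= 0 for points x on the plane."""
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                        # direction of least variance
    return n, -n @ centroid

def ransac_plane(pts, n_iters=200, tol=0.01, rng=None):
    """RANSAC: repeatedly fit a plane to 3 random points, keep the hypothesis
    with the most inliers, then refit on all inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        n, d = fit_plane(pts[rng.choice(len(pts), 3, replace=False)])
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(pts[best_inliers]), best_inliers
```

The resulting inlier sets delimit the planar areas, whose boundaries can then be converted into the 3D polygons preserved during reconstruction.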

The output of the reconstruction algorithm is a set of 3D surface triangle meshes with metadata and color attributes.

The second laboratory model (3D modeling laboratory model) will be directed towards automatic and robust real-time 3D modeling of objects or scenes using located frames. It will implement the entire processing chain, algorithms and hardware, on top of the localization unit used in the first laboratory model. Integrating the processing and real-time reconstruction components into the project's experimental platform mainly involves incorporating the real-time reconstruction algorithm into the acquisition software, in order to provide rapid coverage feedback to the user.



Contact Us

For more information: Tel. +33 4 94 11 57 00
