Localized Picture Acquisition
The located image acquisition principle is illustrated in the figure. It relies on a frame acquisition system mechanically attached to a localization unit. This unit integrates inertial sensors able to provide an accurate 6D pose (position and orientation) of the system during shooting, as well as a computing unit able to reconstruct the shooting kinematics.
Together, these components constitute the operative part of the system, able to provide reliable pose metadata to be associated with each image frame.
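To make the pairing of pose metadata with image frames concrete, the record below sketches one possible "located frame" layout. The class and field names (`Pose6D`, `LocatedFrame`, `image_path`) are illustrative assumptions, not part of the project specification:

```python
from dataclasses import dataclass

@dataclass
class Pose6D:
    # 3D position in metres plus orientation as roll/pitch/yaw in radians
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

@dataclass
class LocatedFrame:
    timestamp: float   # acquisition time in seconds
    pose: Pose6D       # 6D pose reported by the localization unit
    image_path: str    # reference to the stored image frame

# one frame shot at t = 0.04 s while the camera was turned ~90 degrees
frame = LocatedFrame(timestamp=0.04,
                     pose=Pose6D(1.0, 2.0, 0.5, 0.0, 0.0, 1.57),
                     image_path="frame_0001.png")
```

Keeping the pose alongside each frame, rather than in a separate log, is what lets downstream processing treat every image as self-describing.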
A dedicated inertial measurement unit will therefore have to be designed. It will build on state-of-the-art fibre-optic gyroscope (FOG) technologies and should comply with the required performance, weight and size constraints.
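The role of the inertial unit is to turn raw rate and acceleration measurements into a trajectory. The sketch below shows the principle with a deliberately simplified planar dead-reckoning integrator; a real strapdown algorithm works in 3D and must compensate for gravity, sensor bias and drift, none of which is modelled here:

```python
import math

def dead_reckon(samples, dt):
    """Integrate planar IMU samples (yaw rate in rad/s, forward
    acceleration in m/s^2) into a coarse (x, y, yaw) trajectory.
    Illustrative only: no gravity, bias or drift compensation."""
    x = y = yaw = v = 0.0
    trajectory = [(x, y, yaw)]
    for yaw_rate, accel in samples:
        yaw += yaw_rate * dt          # integrate angular rate to heading
        v += accel * dt               # integrate acceleration to speed
        x += v * math.cos(yaw) * dt   # project speed onto the heading
        y += v * math.sin(yaw) * dt
        trajectory.append((x, y, yaw))
    return trajectory

# straight-line motion: constant 1 m/s^2 acceleration, no rotation
traj = dead_reckon([(0.0, 1.0)] * 10, dt=0.1)
```

Because pure integration accumulates error without bound, the accuracy of the FOG sensors directly bounds how long the unit can dead-reckon between corrections.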
The robust matching capabilities targeted by the project rely on the appropriate fusion of technologically advanced sensors. Special care should therefore be devoted to the analysis and specification of the technology to be implemented in the localization unit.
3D Scene Reconstruction
The output of the located frame processing is a set of characteristic points. For each of these points we know its 3D coordinates (its absolute location in space), its color attributes, and the located frames used to recover the 3D information. The number of characteristic points depends both on the content of the scene (the richer the scene, the more points) and on a user-defined parameter controlling the sensitivity of the feature point extraction process.
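A minimal sketch of this output, assuming a simple per-point record (the names `CharacteristicPoint`, `xyz`, `rgb` and `frame_ids` are hypothetical, not taken from the project):

```python
from dataclasses import dataclass

@dataclass
class CharacteristicPoint:
    xyz: tuple        # absolute 3D coordinates (x, y, z) in metres
    rgb: tuple        # color attributes (r, g, b)
    frame_ids: list   # located frames that observed this point

points = [
    CharacteristicPoint((1.0, 0.5, 2.0), (200, 180, 90), [3, 4, 7]),
    CharacteristicPoint((1.1, 0.4, 2.1), (210, 185, 95), [4, 7]),
]
```

Keeping the observing frame identifiers with each point matters because triangulating a 3D position requires the point to be seen from at least two located frames.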
By interpolation, or by increasing the number of characteristic points up to the total number of pixels in each frame, the system can generate one dense depth image per frame. From these images we will extract higher-level primitives (sharp edges, planar areas) and convert them into 3D segments and polygons to be preserved during reconstruction. Finally, the motion and poses of the camera during scene acquisition are also considered part of the data for reconstruction.
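The interpolation step can be illustrated with a toy sparse-to-dense depth fill. This sketch uses nearest-neighbour filling for brevity; a production pipeline would use a proper interpolation or densification scheme, and the function name and argument layout are assumptions of this example:

```python
def densify_depth(sparse, width, height):
    """Fill a dense depth map from sparse samples.
    sparse: {(u, v): depth} for the characteristic points of one frame.
    Every pixel takes the depth of its closest known sample."""
    dense = [[0.0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            nearest = min(sparse,
                          key=lambda p: (p[0] - u) ** 2 + (p[1] - v) ** 2)
            dense[v][u] = sparse[nearest]
    return dense

# two known depths on a 4x4 frame: 1 m at (0, 0) and 2 m at (3, 3)
depth = densify_depth({(0, 0): 1.0, (3, 3): 2.0}, width=4, height=4)
```

The dense image is what makes extraction of edges and planar areas tractable, since those operators need depth values at every pixel, not only at the characteristic points.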
The output of the reconstruction algorithm is a set of 3D surface triangle meshes with metadata and color attributes.
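One plausible in-memory layout for such an output is an indexed triangle mesh; the dictionary keys below (`vertices`, `colors`, `triangles`, `metadata`) are hypothetical, chosen only to illustrate the structure:

```python
# a single colored triangle with metadata pointing back to its source frames
mesh = {
    "vertices":  [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    "colors":    [(255, 0, 0), (0, 255, 0), (0, 0, 255)],  # per-vertex RGB
    "triangles": [(0, 1, 2)],   # each triangle indexes the vertex list
    "metadata":  {"source_frames": [3, 4, 7]},
}

# basic validity check: every triangle index must reference a vertex
valid = all(0 <= i < len(mesh["vertices"])
            for tri in mesh["triangles"] for i in tri)
```

Indexed storage avoids duplicating shared vertices between adjacent triangles, which keeps large reconstructed surfaces compact.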
The second laboratory model (the 3D modeling laboratory model) will be directed towards the automatic, robust, real-time 3D modeling of objects or scenes using located frames. It will implement the complete processing chain (algorithms and hardware) on top of the localization unit used in the first laboratory model. Integrating the processing and real-time reconstruction components into the experimental platform of the project mainly involves incorporating the real-time reconstruction algorithm into the acquisition software, so as to provide rapid coverage feedback to the user.
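The coupling between acquisition and real-time reconstruction described above can be sketched as a simple loop; the function names and the toy `reconstruct`/`report` callables are stand-ins of this example, not the project's actual interfaces:

```python
def acquisition_loop(frames, reconstruct, report):
    """Sketch of acquisition/reconstruction coupling: each new located
    frame is fed to the incremental reconstructor, and the growing model
    size is reported back to the operator as rapid coverage feedback."""
    model = []
    for frame in frames:
        model.extend(reconstruct(frame))  # incremental reconstruction
        report(len(model))                # immediate coverage feedback
    return model

# toy stand-ins: each "frame" yields two new 3D points
points = acquisition_loop(
    range(3),
    reconstruct=lambda f: [(f, 0.0, 0.0), (f, 1.0, 0.0)],
    report=lambda n: None,
)
```

The essential design point is that reconstruction runs inside the acquisition loop rather than offline, so the operator can see during shooting which parts of the scene are still uncovered.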