Real-time vision-based tracking and reconstruction
Many recent real-time markerless camera tracking systems assume that a complete 3D model of the target scene exists. The system developed in the MATRIS project likewise assumes that a scene model is available. This can be a freeform surface model generated automatically from an image sequence using structure-from-motion techniques, or a textured CAD model built manually with commercial software. The offline model provides 3D anchors for the tracking: stable natural landmarks which are not updated and which therefore, by giving an absolute reference, prevent accumulating error (drift) in the camera registration.

However, it is sometimes not feasible to model the entire target scene in advance, e.g. for parts that are not static, or one may wish to employ existing CAD models that are incomplete. To allow camera movements beyond the parts of the environment modelled in advance, it is desirable to derive additional 3D information online. A markerless camera tracking system for calibrated perspective cameras has therefore been developed which employs 3D information about the target scene and complements this knowledge online by reconstructing 3D points. The proposed algorithm is robust and reduces drift, the most dominant problem of simultaneous localisation and mapping (SLAM), in real time by combining the following crucial points: (1) stable tracking of long-term features at the 2D level; (2) use of robust methods such as the well-known Random Sample Consensus (RANSAC) for all 3D estimation processes; (3) consistent propagation of errors and uncertainties; (4) careful feature selection and map management; (5) incorporation of epipolar constraints into the pose estimation. Validation results for the operation of the system on synthetic and real data are presented.
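To illustrate point (2), the RANSAC principle used for the 3D estimation processes, the following is a minimal sketch on a toy 2D line-fitting problem. It is not the paper's pose estimator; the function name, parameters, and data are illustrative assumptions. The core idea carries over: repeatedly fit a model to a minimal random sample and keep the hypothesis supported by the largest consensus set of inliers, so that gross outliers (e.g. mismatched features) cannot corrupt the estimate.

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=0):
    """Robustly fit y = a*x + b to (x, y) points via RANSAC (toy sketch).

    Repeatedly fits an exact line to a minimal 2-point sample and keeps
    the hypothesis with the largest set of inliers, i.e. points whose
    vertical distance to the line is within inlier_tol.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample, cannot fit a slope
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for (x, y) in points
                   if abs(y - (a * x + b)) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Usage: 20 points on y = 2x + 1 plus two gross outliers.
points = [(k / 10, 2 * (k / 10) + 1) for k in range(20)]
points += [(0.5, 10.0), (1.2, -5.0)]
model, inliers = ransac_line(points)
```

In the tracking context, the minimal sample would instead be e.g. the point correspondences needed for a pose or triangulation hypothesis, but the sample-score-select loop is identical.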