3D Trassenplanung (3D Route Planning)

Overview:

Three working groups from TU München and two from KIT are involved in the research unit "3D Trassenplanung", which was newly established in 2011.

The working group of Prof. Rank and Dr. Mundani at the Chair for Computation in Engineering of TU München contributes to the research unit by developing a collaboration platform for multidisciplinary route planning based on multi-scale models. The team of Prof. Borrmann at the Chair of Computational Modeling and Simulation of TU München investigates methods of multi-scale representation in 3D city and building models. Finally, Prof. Schilcher and Dr. Donaubauer of the Chair of Geoinformation Systems are responsible for the development of novel geo web services.

At KIT, the group at the Geodetic Institute around the spokesman of the research unit, Prof. Martin Breunig, together with Mathias Menninghaus, investigates methods for mobile and internet-based access to spatio-temporal databases. Finally, the team of Prof. Stefan Hinz at the Institute of Photogrammetry and Remote Sensing researches tracking for mobile image-based systems for on-site visualization in the context of collaborative route planning.

 

Project of the IPF:

In this project, a mobile, camera- and image-based system is being developed that allows planners to compare, document and inspect multi-scale 3D building models and construction plans on-site and during the construction phase. Beyond that, the system is intended to be used in facility management to overlay 3D models with the camera images of a mobile device.

 

 

Overview of the first project period

During the first project period, a camera system was built and calibrated. Together with a tablet PC, it forms part of the mobile system that is intended to allow planners to compare and visualize construction plans and planning alternatives both on-site and in the office.

System design and calibration: At the beginning of the project, a prototype of a helmet camera system was assembled. It consisted of three fisheye cameras fixed to a wooden board. Their radial arrangement guarantees 360° coverage of the environment at all times while keeping the amount of data processable. In order to project 3D models into the camera images, the relative orientation between the cameras was determined and a vertex shader was developed that projects the 3D models geometrically correctly into the fisheye camera mapping geometry.
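To make the fisheye projection step more concrete, the following sketch projects a single 3D point into an image using the equidistant fisheye model r = f * theta. This is only an illustration under assumed intrinsics; the actual camera model and calibration of the helmet system, as well as the vertex shader used in the project, are not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

def project_equidistant_fisheye(X_world, R, t, f, cx, cy):
    """Project a 3D point into a fisheye image using the equidistant
    model r = f * theta (assumed model; the project's calibrated camera
    model may differ)."""
    # Transform the point from world into camera coordinates
    X_cam = R @ X_world + t
    x, y, z = X_cam

    # Angle between the optical axis and the viewing ray
    theta = np.arctan2(np.hypot(x, y), z)

    # Equidistant mapping: radial image distance proportional to theta
    r = f * theta

    # Direction of the projected point in the image plane
    phi = np.arctan2(y, x)
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Example with an identity pose and assumed intrinsics (placeholders)
u, v = project_equidistant_fisheye(
    np.array([1.0, 0.0, 2.0]),   # point 2 m in front, 1 m to the right
    np.eye(3), np.zeros(3),      # camera at the world origin
    f=400.0, cx=640.0, cy=480.0)
```

A vertex shader applies the same per-point mapping to every vertex of the 3D model, so that the rendered model lines up with the fisheye images.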

 

 
 
Figure 1: Top: the new helmet camera system, with three fisheye cameras and the data processing unit fully integrated into one helmet. Bottom: the prototype of the camera system from the beginning of the project.

 

Estimation of an initial camera pose within a textureless building model

One major challenge for all augmented reality applications is the robust estimation of an initial camera pose that can then be used for subsequent camera tracking. In outdoor applications, this can be achieved with the GPS/GNSS sensor of the mobile device, which provides a coarse but absolute estimate of the position. In close-range indoor environments this option does not exist, and different methods have to be developed. Since multi-scale building models are available within this research unit, we developed methods that use them directly to obtain an initial pose.

 

 

 
Figure 2: Top row: the initialized camera pose, from which pose refinement and model-based tracking can commence. Bottom row: the test environment in the basement of the building.

 

The video shows how the particle filter initialization works. The right side shows the particles (pose hypotheses), the left side the initialized poses. Whenever the model is reprojected into the camera images, the system has been initialized. No tracking is involved at this stage.
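To make the initialization step more tangible, the sketch below outlines a generic particle filter initialization over a pose hypothesis space. All function names and parameters (score_pose, the search bounds, the noise scales) are hypothetical placeholders; the scoring function actually used in the project to compare the reprojected building model with the camera images is not specified here and may differ.

```python
import numpy as np

def score_pose(pose, model, images):
    """Placeholder: rate how well the building model, reprojected with the
    candidate pose, matches the camera images (e.g. an edge-based measure)."""
    raise NotImplementedError

def initialize_pose(model, images, bounds, n_particles=1000, n_iterations=10):
    """Particle filter initialization: each particle is a pose hypothesis
    (x, y, z, heading); roll and pitch could come from an inclination sensor."""
    rng = np.random.default_rng()
    # Draw the initial hypotheses uniformly within the search bounds
    particles = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 4))

    for _ in range(n_iterations):
        # Weight each hypothesis by the image-based score
        weights = np.array([score_pose(p, model, images) for p in particles])
        weights /= weights.sum()

        # Resample the hypotheses proportionally to their weights
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]

        # Add a small diffusion so the filter keeps exploring locally
        particles += rng.normal(scale=[0.2, 0.2, 0.05, 0.05], size=particles.shape)

    # The weighted mean (or the best particle) is used as the initial pose
    return particles.mean(axis=0)
```

Once the particles have collapsed onto a consistent pose, that pose can be handed over to the pose refinement and to the model-based tracking mentioned above.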

 

 

Contact