Egomotion Estimation and Multi-Run Depth Data Integration for 3D Reconstruction of Street Scenes
aut.embargo | No | en_NZ |
aut.thirdpc.contains | No | en_NZ |
dc.contributor.advisor | Klette, Reinhard | |
dc.contributor.advisor | Chen, Chia-Yen | |
dc.contributor.author | Chien, Hsiang-Jen | |
dc.date.accessioned | 2018-02-04T21:55:05Z | |
dc.date.available | 2018-02-04T21:55:05Z | |
dc.date.copyright | 2018 | |
dc.date.issued | 2018 | |
dc.date.updated | 2018-02-02T06:35:36Z | |
dc.description.abstract | Digitisation of a 3D scene has been a fundamental yet highly active topic in the field of computer science. The acquisition of detailed 3D information on street sides is essential to many applications such as driver assistance, autonomous driving, and urban planning. Over the decades, many techniques, including active scanning and passive reconstruction, have been developed and applied to achieve this goal. A state-of-the-art passive technique uses a moving stereo camera to record a video sequence along a street; the sequence is later analysed to recover the scene structure and the sensor's egomotion, which together contribute to a 3D scene reconstruction in a consistent coordinate system. As a single reconstruction may be incomplete, the scene needs to be scanned multiple times, possibly with different types of sensors, to fill in the missing data. This thesis studies the egomotion estimation problem from a wider perspective and proposes a framework that unifies multiple alignment models which are generally considered individually by existing methods. The integrated models lead to an energy-minimisation-based egomotion estimation algorithm that is applicable to a wide range of sensor configurations, including monocular cameras, stereo cameras, and LiDAR-equipped vision systems. This thesis also studies the integration of 3D street-side models reconstructed from multiple video sequences based on the proposed framework. A keyframe-based bag-of-words pipeline for matching sequences is proposed. To integrate depth data from different sequences, an initial alignment is found from established cross-sequence landmark-feature observations using the aforementioned outlier-aware pose estimation algorithm. The solution is then refined using an improved bundle adjustment technique, and the aligned point clouds are finally integrated into a 3D mesh of the scanned street scene. | en_NZ |
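As a rough illustration of the outlier-aware, energy-minimisation-based pose estimation the abstract describes, the following is a minimal Python sketch, not the thesis's actual algorithm: it minimises a robust reprojection energy over a six-degree-of-freedom camera pose. The function names, the choice of a Cauchy loss, and the synthetic data are all illustrative assumptions.

```python
# Hypothetical sketch: outlier-aware pose estimation by energy minimisation.
# Names, loss choice, and data are illustrative, not taken from the thesis.
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reprojection_residuals(pose, points_3d, points_2d, K):
    """Residuals of projecting 3D landmarks under pose = (rvec, tvec)."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = points_3d @ R.T + t            # world -> camera frame
    proj = cam @ K.T                     # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]    # perspective division
    return (proj - points_2d).ravel()

def estimate_pose(points_3d, points_2d, K, init=None):
    x0 = np.zeros(6) if init is None else init
    # The Cauchy loss downweights outlier correspondences, which is one
    # common way to make the energy "outlier-aware".
    res = least_squares(reprojection_residuals, x0, loss='cauchy',
                        f_scale=2.0, args=(points_3d, points_2d, K))
    return res.x

# Example: recover a known pose from synthetic landmark observations.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.random.default_rng(0).uniform([-2, -2, 4], [2, 2, 10], (50, 3))
true_pose = np.array([0.05, -0.02, 0.01, 0.1, -0.2, 0.3])
obs = reprojection_residuals(true_pose, pts3d, np.zeros((50, 2)), K).reshape(-1, 2)
print(np.round(estimate_pose(pts3d, obs, K), 3))  # ~ true_pose
```

The thesis's framework additionally unifies several alignment models (e.g. photometric and geometric terms for stereo or LiDAR data) in one energy; the sketch above shows only the single reprojection-error term for a monocular configuration.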
dc.identifier.uri | https://hdl.handle.net/10292/11149 | |
dc.language.iso | en | en_NZ |
dc.publisher | Auckland University of Technology | |
dc.rights.accessrights | OpenAccess | |
dc.subject | Computer vision | en_NZ |
dc.subject | Egomotion estimation | en_NZ |
dc.subject | Pose recovery | en_NZ |
dc.subject | Image features | en_NZ |
dc.subject | 3D reconstruction | en_NZ |
dc.subject | Depth data integration | en_NZ |
dc.subject | Visual odometry | en_NZ |
dc.subject | SLAM | en_NZ |
dc.subject | Structure from motion | en_NZ |
dc.subject | Street scenes | en_NZ |
dc.title | Egomotion Estimation and Multi-Run Depth Data Integration for 3D Reconstruction of Street Scenes | en_NZ |
dc.type | Thesis | en_NZ |
thesis.degree.grantor | Auckland University of Technology | |
thesis.degree.level | Doctoral Theses | |
thesis.degree.name | Doctor of Philosophy | en_NZ |