All Library Books, Journals and Electronic Records, Telegrafenberg


  • 1
    Call number: S 99.0139(380)
    In: Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover, Nr. 380
    Description / Table of Contents: Semantic segmentation is an important task in computer vision that helps machines gain a high-level understanding of their environment, similar to the human visual system. For example, it is used in self-driving cars, which are equipped with various sensors such as cameras and 3D laser scanners to gain a complete understanding of their surroundings. In recent years the field has been dominated by Deep Neural Networks (DNNs), which are notorious for requiring large amounts of training data. Creating these datasets is very time-consuming and costly, and each dataset applies only to a specific type of sensor. The present work addresses this problem: it shows that knowledge from publicly available image datasets can be reused to minimize the labeling costs for 3D point clouds. For this purpose, the labels from classified images are transferred to 3D point clouds. To bridge the gap between sensor modalities, the geometric relationship of the sensors in a fully calibrated system is used. Due to various error sources, this naive label transfer can lead to a significant number of incorrect class label assignments in 3D. The thesis analyses these error sources and presents solutions that improve the label transfer.
    Type of Medium: Series available for loan
    Pages: v, 175 pages, illustrations, diagrams
    ISBN: 978-3-7696-5301-4 , 9783769653014
    ISSN: 0174-1454
    Series Statement: Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover Nr. 380
    Language: English
    Note: Dissertation, Gottfried Wilhelm Leibniz Universität Hannover, 2022
    Contents:
      1 Introduction
      2 Theoretical Background: 2.1 Cameras and Laserscanning (2.1.1 Cameras; 2.1.2 Laserscanning); 2.2 Machine Learning Fundamentals (2.2.1 Types of Learning; 2.2.2 Supervised Learning, Illustrated by Decision Trees; 2.2.3 Boosting); 2.3 Deep Learning (2.3.1 Basics; 2.3.2 Self-Attention; 2.3.3 Generative Adversarial Networks)
      3 Related Work: 3.1 Classification and Semantic Segmentation (2D); 3.2 Semantic Segmentation (3D); 3.3 Semi-Supervised Learning; 3.4 Conditional Generative Adversarial Networks; 3.5 Multi-View Fusion, Prediction and Labeling; 3.6 Shape Completion
      4 Multi-View Label Transfer and Correction: 4.1 2D to 3D Label Transfer (4.1.1 Regular and Self-Occlusions; 4.1.2 Dynamic Occlusions; 4.1.3 Naive Label Transfer and Label Policy-Based Noise); 4.2 Label Noise Correction (4.2.1 Scanstrip-Based Noise Correction; 4.2.2 Semi-Supervised Scanstrip-Based Noise Correction; 4.2.3 Conclusion); 4.3 Multi-View Outlier Correction and Label Transfer (4.3.1 Multi-View Network; 4.3.2 Label Transfer Network; 4.3.3 Conclusion)
      5 Self-Supervised Point Cloud Rendering and Completion: 5.1 Photo-Realistic Point Cloud Rendering (5.1.1 Network Architecture; 5.1.2 Loss Function; 5.1.3 Image Stitching); 5.2 Self-Supervised Shape Completion (5.2.1 Subregion-Based GAN Model; 5.2.2 Loss Function; 5.2.3 Network Architecture)
      6 Preparation of MMS Data: 6.1 Preprocessing of the Mobile Mapping Dataset (6.1.1 Semantic Segmentation of the MMS-Dataset; 6.1.2 Human-Annotated MMS-Dataset); 6.2 Massively Parallel Point Cloud Rendering Using Hadoop; 6.3 Datasets of Self-Occluded Objects (6.3.1 Real Dataset; 6.3.2 Synthetic Datasets)
      7 Experiments and Results for Multi-View Label Transfer: 7.1 Introduction; 7.2 Baseline; 7.3 Training, Validation and Test Set; 7.4 Scanstrip-Based Correction (7.4.1 Point-Wise Correction; 7.4.2 Supervised Scanstrip-Based Correction; 7.4.3 Semi-Supervised Scanstrip-Based Correction; 7.4.4 Qualitative Evaluation; 7.4.5 Conclusion and Discussion); 7.5 Multi-View Error Correction (7.5.1 Baseline; 7.5.2 Training, Validation and Test Sets; 7.5.3 Training Procedure; 7.5.4 Ablation Studies and Results; 7.5.5 Qualitative Evaluation; 7.5.6 Retraining Semantic Segmentation Network; 7.5.7 Results of the Retraining Process; 7.5.8 Conclusion and Discussion); 7.6 Multi-View Label Transfer Learning (7.6.1 Training Procedure; 7.6.2 Ablation Studies and Results; 7.6.3 Qualitative Evaluation; 7.6.4 Conclusion and Discussion); 7.7 Summary and Conclusion
      8 Experiments and Results for Self-Supervised Completion: 8.1 Photorealistic Point Cloud Rendering (8.1.1 Training Procedure; 8.1.2 Quantitative Evaluation; 8.1.3 Qualitative Evaluation; 8.1.4 Multi-View Error Correction in GAN Images; 8.1.5 Conclusion and Discussion); 8.2 Self-Supervised Shape Completion (8.2.1 Training Procedure; 8.2.2 Quantitative Evaluation; 8.2.3 Qualitative Evaluation; 8.2.4 Conclusion and Discussion)
      9 Conclusion and Discussion: 9.1 Summary and Discussion (9.1.1 Scanstrip-Based Label Error Correction; 9.1.2 End-To-End Multi-View Label Transfer; 9.1.3 Self-Supervised Completion; 9.1.4 Conclusion); 9.2 Outlook
      List of Figures; List of Tables; Bibliography; Resume; Acknowledgements
    Language of abstracts: English, German
    Location: Lower compact magazine
    Branch Library: GFZ Library
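The 2D-to-3D label transfer described in the abstract above amounts to projecting each 3D point into a classified image with the calibrated camera model and copying the pixel's class label. The sketch below is purely illustrative (the function name, the pinhole model, and the -1 "unlabeled" convention are assumptions, not the thesis's actual code), and it deliberately ignores the occlusion and calibration errors that make the naive transfer noisy, which is exactly the problem the thesis addresses.

```python
import numpy as np

def transfer_labels(points_world, label_image, K, R, t):
    """Naive 2D-to-3D label transfer: project 3D points into a
    classified image and copy the per-pixel class labels back.

    points_world : (N, 3) 3D points
    label_image  : (H, W) integer class labels from a 2D classifier
    K            : (3, 3) camera intrinsics
    R, t         : (3, 3) rotation and (3,) translation, world -> camera
    Returns an (N,) label array; points behind the camera or outside
    the image get the label -1 (unlabeled).
    """
    H, W = label_image.shape
    p_cam = points_world @ R.T + t            # world -> camera frame
    labels = np.full(len(points_world), -1, dtype=int)
    in_front = p_cam[:, 2] > 0                # keep points in front of the camera
    uvw = p_cam[in_front] @ K.T               # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]             # normalize by depth
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = label_image[v[inside], u[inside]]
    return labels
```

A point occluded in the image but visible to the laser scanner would silently receive the wrong label here; that kind of error is what the scanstrip-based and multi-view corrections in the thesis are designed to repair.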
  • 2
    Publication Date: 2018-07-23
    Description: Global Navigation Satellite Systems (GNSS) deliver absolute position and velocity, as well as time information (P, V, T). In urban areas, however, GNSS navigation performance is degraded by signal obstructions and multipath. This is especially true for applications dealing with highly automated or even autonomous driving. Consequently, multi-sensor platforms including laser scanners and cameras, as well as map data, are used to enhance the navigation performance in terms of accuracy, integrity, continuity and availability. Although well-established integrity monitoring procedures exist for aircraft navigation, corresponding concepts are still lacking for the sensors and fusion algorithms used in automotive navigation. The research training group i.c.sens, "Integrity and Collaboration in Dynamic Sensor Networks", aims to fill this gap and to contribute to the relevant topics. These include alternative integrity concepts for space and time based on set theory and interval mathematics, new types of maps that report on the trustworthiness of the represented information, and improved filters that exploit collaboration by incorporating person and object tracking. In this paper, we describe our approach and summarize the preliminary results.
    Electronic ISSN: 1424-8220
    Topics: Chemistry and Pharmacology , Electrical Engineering, Measurement and Control Technology
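The abstract above mentions integrity concepts based on set theory and interval mathematics. A minimal sketch of that idea, under the assumption that each sensor bounds the position by an axis-aligned interval box (the representation and function name are illustrative, not the research group's actual method): consistent sensors must share a non-empty box intersection, and an empty intersection acts as a simple integrity alarm.

```python
import numpy as np

def intersect_boxes(boxes):
    """Intersect axis-aligned interval boxes, one per sensor.

    boxes : (S, D, 2) array; boxes[s, d] = [lower, upper] bound
            reported by sensor s in dimension d.
    Returns the (D, 2) intersection box, or None if the bounds are
    mutually inconsistent (empty intersection) -- an integrity fault.
    """
    lower = boxes[:, :, 0].max(axis=0)   # tightest lower bound per dimension
    upper = boxes[:, :, 1].min(axis=0)   # tightest upper bound per dimension
    if np.any(lower > upper):
        return None                      # sensors contradict each other
    return np.stack([lower, upper], axis=1)
```

Unlike a Gaussian fusion, the interval result is a hard guarantee under the stated bounds, which is why set-based formulations are attractive for integrity monitoring.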
  • 3
    Publication Date: 2020-07-07
    Description: We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 point cloud–image pairs, recorded with our mobile mapping system over capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can colourize point clouds without using any camera images. Additionally, by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
    Print ISSN: 2512-2789
    Electronic ISSN: 2512-2819
    Topics: Geography , Geosciences
    Published by Springer
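The last abstract states only that the recording date is parameterized as a conditioning input; it does not say how. One plausible way, shown below purely as an assumption, is a cyclic sine/cosine encoding of the day of year appended as constant feature maps to the generator input, so that late December and early January receive nearby codes.

```python
import numpy as np

def season_encoding(day_of_year):
    """Map a day of year onto the unit circle so the encoding wraps:
    day 365 ends up next to day 1 instead of maximally far away."""
    angle = 2.0 * np.pi * day_of_year / 365.25
    return np.array([np.sin(angle), np.cos(angle)])

def add_condition_channels(rendering, day_of_year):
    """Append the two season values as constant feature maps to a
    (C, H, W) point cloud rendering, yielding a (C + 2, H, W) input
    that a conditional generator could consume."""
    _, h, w = rendering.shape
    maps = np.broadcast_to(season_encoding(day_of_year)[:, None, None], (2, h, w))
    return np.concatenate([rendering, maps], axis=0)
```

With such a conditioning channel the generator sees identical geometry but a different season code, which is consistent with the paper's claim of predicting different seasonal appearances from identical input point clouds.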