ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Call number: S 90.0081(443)
    In: Reports of the Department of Geodetic Science and Surveying
    Type of Medium: Series available for loan
    Pages: VIII, 59 S.
    Series Statement: Report / Department of Geodetic Science and Surveying, the Ohio State University 443
    Language: English
    Location: Lower compact magazine
    Branch Library: GFZ Library
  • 2
Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
The Photogrammetric Record 20 (2005), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
Notes: Imagery resampled according to epipolar geometry, usually denoted as normalised imagery, is characterised by having conjugate points along the same row (or column). Such a characteristic makes normalised imagery an important prerequisite for many photogrammetric activities such as image matching, automatic aerial triangulation, automatic digital elevation model and orthophoto generation, and stereo viewing. The normalisation process requires having the input imagery in a digital format, which can be obtained by scanning analogue photographs or by direct use of digital cameras. To reduce the time gap between the data acquisition and product delivery, many small-scale mapping projects now rely on digital cameras. Digital frame cameras still, in general, provide imagery with geometric resolution and ground coverage inferior to scanned images from analogue cameras. Linear array scanners (line cameras) have therefore emerged as a possible alternative to digital frame cameras, especially for high-resolution space-borne imaging, with performance comparable to that of analogue frame cameras. The normalisation process of frame images is a well-established and straightforward procedure. On the other hand, the normalisation process of linear array scanner scenes is not as straightforward and is sometimes mysterious. For example, providers of space-borne imagery furnish normalised line scanner imagery while the user community is not aware of the underlying process. This paper presents a comprehensive analysis of the epipolar geometry in linear array scanner scenes. Special emphasis is directed towards scanners moving with constant velocity and attitude since such a trajectory closely resembles the imaging geometry of the majority of current space-borne scanners. The research presented highlights the factors affecting the shape of the resulting epipolar lines such as the stereo coverage configuration and the geometric specifications of the imaging system.
In addition, the paper outlines a comparative analysis of the normalisation process for frame and line cameras. The presented concepts are verified through experimental results with synthetic data.
    Type of Medium: Electronic Resource
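For frame cameras, the normalisation the abstract describes amounts to resampling both images with a rectifying rotation whose x-axis follows the stereo baseline, so conjugate points end up on the same row. A minimal sketch of building such a rotation (the function names and the use of the original optical axis as a reference direction are assumptions for illustration, not the paper's implementation):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c/n for c in v)

def rectifying_rotation(baseline, optical_axis=(0.0, 0.0, 1.0)):
    """Rows of the returned matrix are the axes of the normalised
    (epipolar) image frame: x along the baseline, so conjugate
    points share an image row after resampling."""
    r1 = normalize(baseline)                 # new x-axis: baseline direction
    r2 = normalize(cross(optical_axis, r1))  # new y-axis: perpendicular to both
    r3 = cross(r1, r2)                       # new z-axis completes the frame
    return (r1, r2, r3)
```

Applying this rotation to both images of a stereo pair yields the row-aligned geometry; as the abstract notes, no such single rotation exists in general for linear array scanner scenes.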
  • 3
Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
The Photogrammetric Record 18 (2003), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
    Notes: Automatic single photo resection (SPR) remains one of the challenging problems in digital photogrammetry. Visibility and uniqueness of distinct control points in the input imagery limit robust automation of the space resection procedure. Recent advances in photogrammetry mandate adopting higher-level primitives, such as free-form control linear features, for replacing traditional control points. Linear features can be automatically extracted from the image space. On the other hand, object space control linear features can be obtained from an existing GIS layer containing 3D vector data such as road networks or from newly developed terrestrial mobile mapping systems (MMS). In this paper, two different approaches are presented for simultaneously determining the position and attitude of the imagery as well as the correspondence between image and object space linear features. These approaches are based on two representation schemes of the linear features. The first one represents the linear feature by a sequence of 2D and 3D points along the linear feature in the image and object space, respectively. The second scheme assumes that the feature is modelled by polylines (a sequence of straight-line segments). Neither approach requires one-to-one correspondence between image and object space primitives, which makes the suggested methodology robust against changes and/or discrepancies between the data-sets involved. This characteristic will be helpful in detecting and dealing with changes between object and image space linear features (due to temporal effects for example). The parameter estimation and matching follow an optimal sequential procedure that is developed and described within this paper, which depends on the sensitivity of the mathematical model relating corresponding primitives at various image regions to incremental changes in the exterior orientation parameters (EOP). 
Experiments are conducted to compare the algorithms’ efficiency and the accuracy of the estimated EOP using both approaches. Experimental results using real data demonstrate the feasibility and robustness of both representation schemes as well as the methodologies developed. Moreover, different generalisation levels of the polylines representing the free-form linear features are compared.
    Type of Medium: Electronic Resource
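The polyline representation in the second scheme lends itself to a distance measure that needs no one-to-one point correspondence: the distance from an image point to the nearest segment of the projected object-space polyline. A small sketch in 2D (the function names are illustrative, not from the paper):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx*dx + dy*dy
    if seg_len2 == 0.0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    # clamp the projection of p onto the segment's supporting line
    t = max(0.0, min(1.0, ((px - ax)*dx + (py - ay)*dy) / seg_len2))
    return math.hypot(px - (ax + t*dx), py - (ay + t*dy))

def point_polyline_dist(p, polyline):
    """Distance from p to the nearest segment of a polyline
    (a sequence of straight-line segments, as in the second scheme)."""
    return min(point_segment_dist(p, polyline[i], polyline[i + 1])
               for i in range(len(polyline) - 1))
```

Driving such distances to zero over all extracted image points is what lets the matching tolerate missing or extra segments in either data-set.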
  • 4
Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
The Photogrammetric Record 19 (2004), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
    Notes: Image registration aims at combining imagery from multiple sensors to achieve higher accuracy and derive more information than that obtained from a single sensor. The enormous increase in the volume of remotely sensed data that is being acquired by an ever-growing number of earth observation satellites mandates the development of accurate, robust, and automated registration procedures. An effective automatic image registration has to deal with four issues: registration primitives, transformation function, similarity measure, and matching strategy. This paper introduces a new approach for automatic image registration using linear features as the registration primitives. Linear features have been chosen because they can be reliably extracted from imagery with significantly different geometric and radiometric properties. The modified iterated Hough transform (MIHT), which manipulates the registration primitives and similarity measure, is used as the matching strategy for automatically deriving an estimate of the parameters involved in the transformation function as well as the correspondence between conjugate primitives. The MIHT procedure follows an optimal sequence for parameter estimation that takes into account the contribution of linear features with different orientations at various locations within the imagery towards the estimation of the transformation parameters in question. Experimental results using real data proved the feasibility and robustness of the suggested approach.
    Type of Medium: Electronic Resource
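The MIHT matching strategy can be illustrated, in a drastically simplified form, by Hough-style voting for a 2D shift between two primitive sets: every candidate pairing votes for the transformation parameters it implies, the correct parameters accumulate the most votes, and wrong pairings scatter across the accumulator. A toy sketch with a translation-only transformation function (the real MIHT handles richer transformations and an optimal parameter-estimation sequence):

```python
from collections import Counter

def hough_translation(src, dst, cell=1.0):
    """Accumulate a vote for the shift implied by every candidate
    pairing of src and dst points; return the winning shift."""
    votes = Counter()
    for sx, sy in src:
        for dx, dy in dst:
            key = (round((dx - sx) / cell), round((dy - sy) / cell))
            votes[key] += 1
    (tx, ty), _ = votes.most_common(1)[0]
    return (tx * cell, ty * cell)
```

Because only consistent pairings reinforce the same accumulator cell, the estimate is robust to outliers and needs no prior correspondence, which is the property the abstract exploits.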
  • 5
Electronic Resource
Oxford, UK and Boston, USA : Blackwell Publishers Ltd.
The Photogrammetric Record 17 (2001), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
Notes: Many photogrammetric and GIS applications, such as city modelling, change detection and object recognition, deal with surfaces. Change detection involves looking for differences between two surface models that are obtained from different sensors, for example an optical sensor and a laser scanner, or by the same sensor at different epochs. Surfaces obtained through a sampling process may also have to be compared for future processing (for example transformation parameter estimation and change detection). Surface matching is therefore an essential task in these applications. The matching of surfaces involves two steps. The first step deals with finding the correspondences between two surface points and/or patches. The second step requires the determination of transformation parameters between the two surfaces. However, since most surfaces consist of randomly distributed discrete points and may have different reference systems, finding the correspondences cannot be achieved without knowing the transformation parameters between the two surfaces. Conversely, deriving the transformation parameters requires the knowledge of the correspondence between the two point sets. The suggested approach for surface matching deals with randomly distributed data sets without the need for error-prone interpolation and requires no point-to-point correspondence between the two surfaces under consideration. This research simultaneously solves for the correspondence and the transformation parameters using a Modified Iterated Hough Transform for robust parameter estimation. Several experiments are conducted to prove the feasibility and the robustness of the suggested approach, even when a high percentage of change exists.
    Type of Medium: Electronic Resource
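The chicken-and-egg problem the abstract describes (correspondences need the transformation, the transformation needs correspondences) can be sidestepped for a toy one-parameter case: estimating a vertical shift between two irregularly sampled surfaces from nearest-neighbour height differences, with the median lending robustness to changed regions. An illustrative sketch only; the paper solves for full transformation parameters with a Modified Iterated Hough Transform:

```python
import statistics

def vertical_shift(surf_a, surf_b):
    """Median height difference between surf_a points and their
    planimetrically nearest neighbours in surf_b; the median is
    robust to areas where the surface has genuinely changed."""
    diffs = []
    for xa, ya, za in surf_a:
        # nearest neighbour in (x, y), no point-to-point correspondence assumed
        nb = min(surf_b, key=lambda p: (p[0] - xa)**2 + (p[1] - ya)**2)
        diffs.append(nb[2] - za)
    return statistics.median(diffs)
```

Once the shift is known, points whose individual difference deviates strongly from it can be flagged as change, mirroring the simultaneous matching-and-change-detection idea.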
  • 6
Electronic Resource
    Oxford, UK and Boston, USA : Blackwell Publishers Ltd.
The Photogrammetric Record 17 (2002), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
Notes: Increased use of digital imagery has facilitated the opportunity to use features, in addition to points, in photogrammetric applications. Straight lines are often present in object space, and prior research has focused on incorporating straight-line constraints into bundle adjustment for frame imagery. In the research reported in this paper, object-space straight lines are used in a bundle adjustment with self-calibration. The perspective projection of straight lines in the object space produces straight lines in the image space in the absence of distortions. Any deviations from straightness in the image space are attributed to various distortion sources, such as radial and decentric lens distortions. Before incorporating straight lines into a bundle adjustment with self-calibration, the representation and perspective transformation of straight lines between image space and object space should be addressed. In this investigation, images of straight lines are represented as a sequence of points along the image line. Also, two points along the object-space straight line are used to represent that line. The perspective relationship between image- and object-space lines is incorporated in a mathematical constraint. The underlying principle in this constraint is that the vector from the perspective centre to an image point on a straight-line feature lies on the plane defined by the perspective centre and the two object points defining the straight line. This constraint has been embedded in a software application for bundle adjustment with self-calibration that can incorporate point as well as straight-line features. Experiments with simulated and real data have proved the feasibility and the efficiency of the algorithm proposed.
    Type of Medium: Electronic Resource
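The constraint stated in the abstract, that the vector from the perspective centre to an image point on a straight-line feature lies in the plane through the perspective centre and the two object points defining the line, reduces to a scalar triple product that the adjustment drives to zero. A minimal sketch (the image ray is assumed to be already rotated into the object coordinate frame; names are illustrative):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def coplanarity_residual(cam_pos, ray_dir, p1, p2):
    """Scalar triple product (p1 - C) x (p2 - C) . ray: zero exactly
    when the image ray lies in the plane through the perspective
    centre C and the two object points p1, p2 defining the line."""
    u = tuple(p1[i] - cam_pos[i] for i in range(3))
    v = tuple(p2[i] - cam_pos[i] for i in range(3))
    n = cross(u, v)                       # normal of the plane through C, p1, p2
    return sum(n[i] * ray_dir[i] for i in range(3))
```

In a self-calibrating adjustment, nonzero residuals of this kind along an observed line are what carry the information about radial and decentring distortion parameters.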
  • 7
Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
The Photogrammetric Record 17 (2002), S. 0
    ISSN: 1477-9730
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Architecture, Civil Engineering, Surveying
Notes: Increased use of digital imagery has facilitated the opportunity to use features, in addition to points, in photogrammetric applications. Straight lines are often present in object space, and prior research has focused on incorporating straight-line constraints into bundle adjustment for frame imagery. In the research reported in this paper, object-space straight lines are used in a bundle adjustment with self-calibration. The perspective projection of straight lines in the object space produces straight lines in the image space in the absence of distortions. Any deviations from straightness in the image space are attributed to various distortion sources, such as radial and decentric lens distortions. Before incorporating straight lines into a bundle adjustment with self-calibration, the representation and perspective transformation of straight lines between image space and object space should be addressed. In this investigation, images of straight lines are represented as a sequence of points along the image line. Also, two points along the object-space straight line are used to represent that line. The perspective relationship between image- and object-space lines is incorporated in a mathematical constraint. The underlying principle in this constraint is that the vector from the perspective centre to an image point on a straight-line feature lies on the plane defined by the perspective centre and the two object points defining the straight line. This constraint has been embedded in a software application for bundle adjustment with self-calibration that can incorporate point as well as straight-line features. Experiments with simulated and real data have proved the feasibility and the efficiency of the algorithm proposed.
    Type of Medium: Electronic Resource
  • 8
    Publication Date: 2020-07-15
    Description: Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). 
Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
    Electronic ISSN: 2072-4292
Topics: Architecture, Civil Engineering, Surveying, Geography
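The LiDAR-control-point step, accepting a LiDAR neighbourhood as control only where it fits a plane that supports the image-derived point, can be sketched with a least-squares plane fit and a residual test. (The z = a*x + b*y + c parameterisation and the function names are assumptions for this sketch; the paper's procedure is iterative and GNSS/INS-assisted.)

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested tuples."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through the given 3D
    points (normal equations solved with Cramer's rule)."""
    n = len(points)
    sx  = sum(x for x, y, z in points); sy  = sum(y for x, y, z in points)
    sz  = sum(z for x, y, z in points)
    sxx = sum(x*x for x, y, z in points); syy = sum(y*y for x, y, z in points)
    sxy = sum(x*y for x, y, z in points)
    sxz = sum(x*z for x, y, z in points); syz = sum(y*z for x, y, z in points)
    A = ((sxx, sxy, sx), (sxy, syy, sy), (sx, sy, n))
    b = (sxz, syz, sz)
    d = det3(A)
    # Cramer's rule: replace column k of A with b
    cols = [tuple(tuple(b[r] if c == k else A[r][c] for c in range(3))
                  for r in range(3)) for k in range(3)]
    return tuple(det3(cols[k]) / d for k in range(3))

def plane_residual(point, plane):
    """Vertical distance of a point from the fitted plane."""
    a, b, c = plane
    x, y, z = point
    return z - (a*x + b*y + c)
```

In the spirit of the abstract, only LiDAR neighbourhoods whose residuals stay below a threshold would be promoted to LiDAR control points (LCPs) for the bundle adjustment.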
  • 9
    Publication Date: 2017-04-10
    Print ISSN: 1866-9298
    Electronic ISSN: 1866-928X
    Topics: Architecture, Civil Engineering, Surveying
    Published by Springer
  • 10
    Publication Date: 2020-04-27
    Description: Lane markings are one of the essential elements of road information, which is useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, original intensity thresholding and deep learning using manually established labels strategies are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and reporting lane marking gaps along various highways. The normalized intensity thresholding leads to a better lane marking extraction with an F1-score of 78.9% in comparison to the original intensity thresholding with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score of 85.9% than the one trained on manually established labels with an F1-score of 75.1%. In concrete pavement area, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway have been extracted) than the original intensity thresholding approach. For the lane width results, more estimates are observed, especially in areas with poor edge lane marking, using the two deep learning models when compared with the intensity thresholding strategies due to the higher recall rates for the former. 
The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can be subsequently visualized in RGB imagery to identify their cause.
    Electronic ISSN: 2072-4292
Topics: Architecture, Civil Engineering, Surveying, Geography
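The normalized-intensity strategy and its evaluation can be sketched in a few lines: per-scanner min-max rescaling puts heterogeneous intensity values on a common [0, 1] scale before a single threshold is applied, and extraction quality is summarised with an F1-score. (The threshold value and function names are illustrative; the paper's unsupervised normalization is more elaborate than a min-max rescale.)

```python
def normalize_intensity(values):
    """Rescale one scanner's intensities to [0, 1] so a single
    threshold works across scanners with different radiometry."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def extract_markings(intensities, thresh=0.8):
    """Indices of points whose normalised intensity exceeds the
    threshold (candidate lane-marking returns)."""
    norm = normalize_intensity(intensities)
    return {i for i, v in enumerate(norm) if v > thresh}

def f1_score(pred, truth):
    """Harmonic mean of precision and recall over index sets."""
    tp = len(pred & truth)
    if not pred or not truth or tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)
```

Per-scanner normalization is what allows one threshold to serve an MMS carrying multiple LiDAR scanners, the configuration evaluated in the paper.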