ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 2
    Publication Date: 2018-01-04
    Description: Marine researchers continue to create large quantities of benthic images, e.g., using AUVs (Autonomous Underwater Vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimeter ratio is required for each image, often provided indirectly through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the geometrical layout and color features of the LPs vary between expeditions and projects, so a single algorithm tuned to one strictly defined LP pattern is likewise ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout for one image transect/collection from just a small number of hand-labeled LPs and applies this layout model to the rest of the data. This efficiency in adapting to new data allows the LPs and the pixel-to-centimeter ratio to be computed fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements, both in reducing the tuning effort for new LP patterns and in increasing detection performance. (An illustrative sketch of the pixel-to-centimeter scaling step is given after the result list.)
    Type: Article, PeerReviewed
    Format: text
  • 3
    Publication Date: 2019-09-23
    Description: Highlights
    • Marine image annotation software (MIAS) is used to assist the annotation of underwater imagery.
    • We compare 23 MIAS that assist human annotation, including some that also offer automated annotation.
    • MIAS can run in real time (50%), allow posterior annotation (95%), and interact with databases and data flows (44%).
    • MIAS differ in data input/output and display, customization, image analysis and re-annotation.
    • We provide important considerations when selecting MIAS and outline future trends.
    Abstract: Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with the video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating them posteriorly (after acquisition), and interacting with a database. They range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow the input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g., length and area measurement, image segmentation and point counts, and in a few cases the possibility of browsing and editing previous dive logs or analyzing annotation data. The interaction with a database allows the automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software package are outlined, the ideal software is discussed and future trends are presented.
    Type: Article, PeerReviewed
    Format: text
  • 4
    Frontiers
    In:  Frontiers in Artificial Intelligence, 3 (49).
    Publication Date: 2021-01-08
    Description: Deep artificial neural networks have become the go-to method for many machine learning tasks. In the field of computer vision, deep convolutional neural networks achieve state-of-the-art performance for tasks such as classification, object detection, or instance segmentation. As deep neural networks become more and more complex, their inner workings become more and more opaque, rendering them a “black box” whose decision-making process is no longer comprehensible. In recent years, various methods have been presented that attempt to peek inside the black box and to visualize the inner workings of deep neural networks, with a focus on deep convolutional neural networks for computer vision. These methods can serve as a toolbox to facilitate the design and inspection of neural networks for computer vision and the interpretation of the network's decision-making process. Here, we present the new tool Interactive Feature Localization in Deep neural networks (IFeaLiD), which provides a novel visualization approach to convolutional neural network layers. The tool interprets neural network layers as multivariate feature maps and visualizes the similarity between the feature vectors of individual pixels of an input image in a heat map display. The similarity display can reveal how the input image is perceived by different layers of the network and how the perception of one particular image region compares to the perception of the remaining image. IFeaLiD runs interactively in a web browser and can process even high-resolution feature maps in real time by using GPU acceleration with WebGL 2. We present examples from four computer vision datasets with feature maps from different layers of a pre-trained ResNet101. IFeaLiD is open source and available online at https://ifealid.cebitec.uni-bielefeld.de. (An illustrative sketch of the per-pixel similarity computation is given after the result list.)
    Type: Article, PeerReviewed
    Format: text
    Format: other
  • 5
    Publication Date: 2020-06-26
    Description: Highlights
    • The proposed method automatically assesses the abundance of poly-metallic nodules on the seafloor.
    • No manually created feature reference set is required.
    • Large collections of benthic images from a range of acquisition gear can be analysed efficiently.
    Abstract: Underwater image analysis is a new field for computational pattern recognition. In academia as well as in industry, it is increasingly common to use camera-equipped stationary landers, autonomous underwater vehicles, ocean floor observatory systems or remotely operated vehicles for image-based monitoring and exploration. The resulting image collections create a bottleneck for manual data interpretation owing to their size. In this paper, the problem of measuring the size and abundance of poly-metallic nodules in benthic images is considered. A foreground/background separation (i.e., separating the nodules from the surrounding sediment) is required to determine the targeted quantities. Poly-metallic nodules are compact (convex), but vary in size and appear as composites with different visual features (color, texture, etc.). Methods for automating nodule segmentation have so far relied on manual training data. However, a hand-drawn, ground-truthed segmentation of nodules and sediment is difficult (or even impossible) to obtain for a sufficient number of images. The new ES4C algorithm (Evolutionary tuned Segmentation using Cluster Co-occurrence and a Convexity Criterion) is presented, which can be applied to a segmentation task without a reference ground truth. First, a learning vector quantization groups the visual features in the images into clusters. Second, a segmentation function is constructed by assigning the clusters to classes automatically according to defined heuristics. Using evolutionary algorithms, a quality criterion is maximized to assign cluster prototypes to classes. This criterion integrates the morphological compactness of the nodules as well as feature similarity in different parts of nodules. To assess its applicability, the ES4C algorithm is tested on two real-world data sets. For one of these data sets a reference gold standard is available, and we report a sensitivity of 0.88 and a specificity of 0.65. Our results show that the applied heuristics, which combine patterns in the feature domain with patterns in the spatial domain, lead to good segmentation results and allow full automation of the resource-abundance assessment for benthic poly-metallic nodules. (A simplified sketch of the cluster-to-class assignment step is given after the result list.)
    Type: Article, PeerReviewed, info:eu-repo/semantics/article
    Format: text
  • 6
    Publication Date: 2024-02-07
    Description: Marine imaging has evolved from small, narrowly focussed applications to large-scale applications covering areas of several hundred square kilometers or time series covering observation periods of several months. The analysis and interpretation of the accumulating large volume of digital images or videos will continue to challenge the marine science community to keep this process efficient and effective. It is safe to say that any strategy will rely on some software platform supporting manual image and video annotation, either for a direct manual annotation-based analysis or for collecting training data to deploy a machine learning–based approach for (semi-)automatic annotation. This paper describes how computer-assisted manual full-frame image and video annotation is currently performed in marine science and how it can evolve to keep up with the increasing demand for image and video annotation and the growing volume of imaging data. As an example, observations are presented on how the image and video annotation tool BIIGLE 2.0 has been used by an international community of more than one thousand users over the last four years. In addition, new features and tools are presented to show how BIIGLE 2.0 has evolved over the same time period: video annotation, support for large images in the gigapixel range, machine-learning-assisted image annotation, improved mobility and affordability, application instance federation and enhanced label tree collaboration. The observations indicate that, despite the novel concepts and tools introduced by BIIGLE 2.0, full-frame image and video annotation is still mostly done in the same way as two decades ago, with single users annotating subsets of image collections or single video frames with limited computational support. We encourage researchers to review their protocols for education and annotation, making use of newer technologies and tools to improve the efficiency and effectiveness of image and video annotation in marine science.
    Type: Article, PeerReviewed
    Format: text
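
Illustrative sketch for entry 2 (DELPHI): once laser points (LPs) have been detected, the pixel-to-centimeter ratio follows from the known real-world spacing of the laser pointers on the camera rig. The Python sketch below is not DELPHI's implementation; it only illustrates this final scaling step, assuming LP detections are available as pixel coordinates and that the rig spacing (here a hypothetical 20 cm) is known.

    import itertools
    import numpy as np

    def pixel_to_cm_ratio(lp_pixels, lp_spacing_cm):
        """Estimate the pixel-to-centimeter ratio of a single benthic image.

        lp_pixels     -- (N, 2) array of detected laser point centers in pixels
        lp_spacing_cm -- known real-world distance between neighbouring laser
                         points in centimeters (rig-specific, assumed known)
        """
        lp_pixels = np.asarray(lp_pixels, dtype=float)
        if len(lp_pixels) < 2:
            raise ValueError("need at least two laser points")
        # Use the closest pair of detected LPs as the pixel equivalent of one
        # rig spacing (a simplification; real layouts may need per-pair mapping).
        pixel_dist = min(np.linalg.norm(a - b)
                         for a, b in itertools.combinations(lp_pixels, 2))
        return pixel_dist / lp_spacing_cm  # pixels per centimeter

    # Hypothetical example: three detected laser points, rig spacing of 20 cm.
    px_per_cm = pixel_to_cm_ratio([(512, 300), (812, 305), (660, 560)], 20.0)
    object_length_cm = 150 / px_per_cm  # convert a 150 px measurement to cm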
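
Illustrative sketch for entry 4 (IFeaLiD): the tool visualizes, for a chosen pixel, how similar its feature vector is to the feature vectors of all other pixels in a convolutional layer. The sketch below only illustrates that idea and is not IFeaLiD's WebGL 2 implementation; it assumes a feature map of shape (H, W, C) and uses cosine similarity as the comparison measure.

    import numpy as np

    def similarity_heatmap(feature_map, y, x, eps=1e-8):
        """Cosine similarity between the feature vector at (y, x) and all pixels.

        feature_map -- array of shape (H, W, C), e.g. one CNN layer's activations
        returns     -- array of shape (H, W) with values in [-1, 1]
        """
        ref = feature_map[y, x]                       # (C,)
        norms = np.linalg.norm(feature_map, axis=-1)  # (H, W)
        dots = feature_map @ ref                      # (H, W)
        return dots / (norms * np.linalg.norm(ref) + eps)

    # Hypothetical example with a random "layer"; a real feature map would come
    # from a network such as the pre-trained ResNet101 mentioned in the abstract.
    fmap = np.random.rand(64, 64, 256).astype(np.float32)
    heat = similarity_heatmap(fmap, y=32, x=32)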
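
Illustrative sketch for entry 5 (ES4C): the abstract describes clustering per-pixel visual features and then assigning clusters to the classes nodule/sediment so that a compactness-based quality criterion is maximized. The sketch below is a heavily simplified stand-in, not the published algorithm: it uses k-means instead of learning vector quantization, plain RGB values instead of richer visual features, a crude bounding-box fill ratio instead of the published convexity criterion, and an exhaustive search over cluster assignments instead of an evolutionary algorithm.

    import itertools
    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def es4c_like_segmentation(image, n_clusters=8):
        """Toy foreground/background separation in the spirit of entry 5."""
        h, w, c = image.shape
        features = image.reshape(-1, c).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(features)
        labels = labels.reshape(h, w)

        def compactness(mask):
            # Crude convexity proxy: mean fill ratio of component bounding boxes.
            comps, n = ndimage.label(mask)
            if n == 0:
                return 0.0
            ratios = [(comps[sl] > 0).mean() for sl in ndimage.find_objects(comps)]
            return float(np.mean(ratios))

        best_mask, best_score = None, -1.0
        # Try every non-trivial assignment of clusters to the foreground class.
        for k in range(1, n_clusters):
            for fg in itertools.combinations(range(n_clusters), k):
                mask = np.isin(labels, fg)
                score = compactness(mask)
                if score > best_score:
                    best_mask, best_score = mask, score
        return best_mask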