Publication Date:
2022-07-11
Description:
Underwater images are challenging for correspondence search algorithms, which are traditionally designed for images captured in air under uniform illumination. In water, however, interactions with the medium have a much stronger impact on light propagation: absorption and scattering cause wavelength- and distance-dependent color distortion, blurring, and contrast reduction. In deep or turbid waters, artificial illumination is required, which usually moves rigidly with the camera and thus increases the appearance differences of the same seafloor spot across images. Correspondence search, e.g. using image features, is nevertheless a core task in underwater visual navigation employed in seafloor surveys, and is also required for 3D reconstruction, image retrieval, and object detection. For underwater images it has to be robust against these challenging imaging conditions to avoid reduced accuracy or outright failure of computer vision algorithms. Explicitly accounting for underwater nuisances during feature extraction and matching is difficult, however. On the other hand, learned feature extraction models have achieved high performance on many in-air problems in recent years. We therefore investigate how such a learned robust feature model, D2Net, can be applied to the underwater environment, and in particular examine cross-domain transfer learning as a strategy to deal with the lack of annotated underwater training data.
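The wavelength- and distance-dependent attenuation mentioned above can be illustrated with a minimal sketch, assuming a simplified Beer-Lambert-style image formation model in which each color channel decays exponentially with distance; the per-channel coefficients below are hypothetical example values, not taken from the chapter:

```python
import numpy as np

# Assumed per-meter attenuation coefficients: in water, red light is
# absorbed fastest and blue slowest (illustrative values only).
BETA = {"r": 0.6, "g": 0.2, "b": 0.1}

def attenuate(j, d, beta):
    """Channel intensity j after traveling distance d (meters),
    under an exponential (Beer-Lambert-style) attenuation model."""
    return j * np.exp(-beta * d)

# A mid-gray pixel observed at increasing distances loses red first,
# producing the typical blue-green cast of underwater images.
for d in (1.0, 5.0, 10.0):
    rgb = {c: attenuate(0.5, d, b) for c, b in BETA.items()}
    print(d, {c: round(v, 3) for c, v in rgb.items()})
```

This distance dependence is one reason the same seafloor spot can look different across images, which degrades appearance-based correspondence search.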
Type:
Book chapter, NonPeerReviewed
Format:
text