Article

Identifying Vegetation in Arid Regions Using Object-Based Image Analysis with RGB-Only Aerial Imagery

Remote Sensing Laboratory, Jacob Blaustein Institutes for Desert Research, Ben Gurion University, Beer Sheva 84105, Israel
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(19), 2308; https://doi.org/10.3390/rs11192308
Submission received: 19 August 2019 / Revised: 19 September 2019 / Accepted: 30 September 2019 / Published: 3 October 2019
(This article belongs to the Special Issue Object Based Image Analysis for Remote Sensing)

Abstract

Vegetation state is usually assessed by calculating vegetation indices (VIs) derived from remote sensing systems, where the near infrared (NIR) band is used to enhance the vegetation signal. However, VIs are pixel-based and require both visible and NIR bands. Yet most archived photographs were obtained with cameras that record only the three visible bands. Attempts to construct VIs with the visible bands alone have shown only limited success, especially in drylands. The current study identifies vegetation patches in the hyperarid Israeli desert using only the visible bands from aerial photographs by adapting an alternative geographic object-based image analysis (GEOBIA) routine, together with recent improvements in preprocessing. The preprocessing step selects a balanced threshold value for image segmentation using unsupervised parameter optimization. Then the images undergo two processes: segmentation and classification. After tallying modeled vegetation patches that overlap true tree locations, both true positive and false positive rates are obtained from the classification, and receiver operating characteristic (ROC) curves are plotted. The results show successful identification of vegetation patches in multiple zones from each study area, with area under the ROC curve values between 0.72 and 0.83.


1. Introduction

As early as 1974, Rouse et al. [1] proposed the well-known normalized difference vegetation index (NDVI), which is based on the difference between the maximum absorption of radiation in the red band (620–680 nm) due to chlorophyll pigments and the maximum reflection of radiation in the near infrared (NIR) band (720–780 nm) caused by leaf cellular structure. With this basic tool, remote sensing has played a key role in vegetation mapping, even in arid regions. For example, a thorough population dynamics study of Acacia species (Isaacson et al. [2]) in the arid southern desert of Israel used both ground surveys and NIR band aerial images to follow changes in the canopy cover and tree size distribution. In another early paper, Wiegand et al. [3] compared NDVI from Landsat TM (Thematic Mapper) imagery to a spatial distribution of Acacia, also in southern Israel. Both of those research projects analyzed population distributions of Acacia by comparing NDVI-derived tree vitality to topography and ephemeral flooding in the dry river beds of their study areas.
More recently, both multispectral and hyperspectral imagery have also been used to identify and characterize vegetation. A review of applications of multispectral and hyperspectral imagery to the mapping of mangrove forests appeared in Pham et al. [4]. They covered spectral-based classifiers as well as object-based image analysis. Both Paz-Kagan et al. [5] and Hong et al. [6] have shown that hyperspectral images with limited spatial coverage can be used to train multispectral images with a larger spatial extent for vegetation mapping. Hong et al. [6] used small-scale hyperspectral images to train three classification models, then applied the models to multispectral images of a much larger area. Similarly, Paz-Kagan et al. [5] identified the phenology stage of an invasive species using hyperspectral data with a random forest classifier, then expanded the analysis to a much larger region using multispectral imagery. Hong et al. [7] addressed the issue of mixed pixels by expanding the classic linear mixing model to more accurately derive abundance maps. They applied their method using both synthetic data and hyperspectral images over an urban region and showed high-quality identification of urban vegetation areas and good separation from non-vegetation pixels.

1.1. Vegetation Indices

The NDVI has found extensive use in various applications such as space-time trend analysis of vegetation health (Shoshany and Karnibad [8]), mapping of invasive species (Paz-Kagan et al. [5]), and identification of environmental factors that influence vegetation (Karnieli et al. [9]). Despite its widespread adoption, several notable limitations of this index have been documented. For example, Mbow et al. [10] critically examined the correlation between NDVI and biomass, measured as above-ground net primary production. Théau et al. [11] compared several different vegetation indices using multispectral satellite imagery and reported inconsistencies between them. Peng et al. [12] reviewed the MODIS NDVI product in the context of “spring greenup” and discovered spatial heterogeneity compared to other vegetation index products. To overcome these drawbacks, alternative vegetation indices (VIs) have appeared, and their advantages have been demonstrated. Huete [13] introduced the soil-adjusted vegetation index (SAVI), and a few years later the modified SAVI (MSAVI) was proposed by Qi et al. [14]. Following that work, Huete et al. [15] presented a comparison of the NDVI to another adjusted index: the soil-adjusted and atmospherically resistant vegetation index (SARVI). This index, which uses the NIR and red bands after applying atmospheric correction, showed better results in desert regions. Importantly, almost all of these commonly used indices rely on the NIR band to distinguish vegetation.
However, a large archive of aerial imagery is available covering only the visible spectrum, i.e., the three red, green, and blue (RGB) bands. Among the vegetation indices, only a few attempt to differentiate vegetation with RGB bands only. Motohka et al. [16] presented and analyzed the green-red vegetation index (GRVI) on a seasonal time scale. The GRVI is derived similarly to the NDVI:
$$\mathrm{GRVI} = \frac{\rho_{green} - \rho_{red}}{\rho_{green} + \rho_{red}},$$
where each $\rho$ component refers to reflectance in a specific spectral band. McKinnon and Huff [17] also tested the accuracy of two RGB-only vegetation indices from drone-acquired images: the visible atmospherically resistant index (VARI) and the triangular greenness index (TGI).
$$\mathrm{VARI} = \frac{\rho_{green} - \rho_{red}}{\rho_{green} + \rho_{red} - \rho_{blue}}$$
$$\mathrm{TGI} = \rho_{green} - 0.39\,\rho_{red} - 0.61\,\rho_{blue}$$
Results in that work were inconsistent. They reported a good correlation between the RGB-only indices and the NDVI in healthy corn fields, but a less accurate match in rice fields, and concluded that these RGB-only indices matched actual crop health only sporadically. Moreover, desert plants are relatively sparse, their photosynthetic duration is short, and their color is more grey than green, further challenging a pixel-based vegetation index approach.
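For reference, the three RGB-only indices above can be computed directly from the color bands; a minimal numpy sketch, assuming the inputs are per-pixel reflectance values or scaled digital numbers (the function name and epsilon guard are illustrative):

```python
import numpy as np

def rgb_indices(red, green, blue):
    """GRVI, VARI, and TGI from the three visible bands, per the formulas above."""
    red, green, blue = (np.asarray(b, dtype=np.float64) for b in (red, green, blue))
    eps = 1e-12  # guard against division by zero on dark pixels
    grvi = (green - red) / (green + red + eps)
    vari = (green - red) / (green + red - blue + eps)
    tgi = green - 0.39 * red - 0.61 * blue
    return grvi, vari, tgi
```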
The recent advancement of drone technology for acquiring aerial imagery has revived interest in RGB-only methods to classify vegetation. The need to use both archived and new consumer-grade, drone-based, RGB-only imagery has led to a different approach. The pixel-based spectral signature classification, which underpins all VIs, is being replaced by object-based image analysis (OBIA). OBIA gained a foothold in the remote sensing discipline some two decades ago, where it became known as geographic OBIA (GEOBIA). By 2008, GEOBIA techniques had become a primary tool for image segmentation and classification (Blaschke et al. [18], and Cheng and Han [19]). The advantages of object-based over pixel-based classification were reported and summarized by Myint et al. [20] and Hussain et al. [21]. Feng et al. [22] applied drone images to the mapping of vegetation in an urban environment. In that work, using OBIA techniques, the researchers were able to differentiate trees and grass from the surroundings.

1.2. Image Texture

Image texture, a central component of OBIA, describes the relationship between a pixel and its surrounding neighbors within a given window size. By characterizing this relationship, it becomes possible to distinguish, for example, areas that are homogeneous from areas of high local contrast. Texture parameters are derived from a gray-level co-occurrence matrix (GLCM), first presented by Haralick et al. [23]. Alternative algorithms include the wavelet transform, the Gabor transform, the Laws energy filter (Laws [24]), and others. A comparison of these different algorithms in Selvarajah and Kodituwakku [25] found only minor differences in their ability to recognize content in generic images. Ruiz et al. [26] compared texture-based and spectral-based classification on satellite imagery using several different texture routines, including GLCM. They analyzed imagery covering three forested areas and one urban area and found no definitive difference among the texture-based classifications. The GLCM method was also chosen by Marceau et al. [27] in an early work to evaluate SPOT (Satellite Pour l’Observation de la Terre) satellite classification procedures over a mixed urban and forested coastal region of northern Quebec.
The GLCM method examines the relative frequency of pairs of gray-level values for neighboring pixels within a given window in an image. The matrix is a tabulation of how frequently different combinations of gray levels occur. Aerial photographs usually have 8-bit radiometric resolution, giving 256 gray levels, thus the co-occurrence matrix has 256 × 256 cells. Each cell (i, j) value equals the number of pixels in the image window with value i that have an adjacent pixel with value j. Furthermore, the GLCM cell values are normalized so that the final matrix contains values from 0 to 1.0.
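As an illustration of this tabulation, a minimal numpy sketch that builds a normalized, symmetric GLCM for a single window and one neighbor offset (the horizontal offset and function name are illustrative choices):

```python
import numpy as np

def glcm(window, levels=256):
    """Normalized, symmetric gray-level co-occurrence matrix for the
    horizontal neighbor offset within one image window.

    window: 2D integer array of gray levels in [0, levels).
    Returns a (levels, levels) matrix whose entries sum to 1.0.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    left = window[:, :-1].ravel()
    right = window[:, 1:].ravel()
    np.add.at(m, (left, right), 1.0)  # count each (i, j) neighbor pair
    m = m + m.T                       # count both directions (symmetry)
    return m / m.sum()
```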
Once the GLCM is calculated, texture parameters are derived from the matrix. Maillard [28] reviewed eleven GLCM-derived texture parameters and reported that five specific parameters are most often applied in the context of classification of vegetation: angular second moment, contrast, correlation, inverse difference moment, and entropy.

1.3. OBIA Applied to Vegetation Classification

Mapping of vegetation has been specifically targeted by researchers using OBIA. Yu et al. [29] applied OBIA to satellite imagery at 1 m resolution with four spectral bands, RGB and NIR. They used OBIA to create a set of ancillary data, then applied a procedure known as the classification and regression tree algorithm (CART) that successfully distinguished different types of vegetation. Lucas et al. [30] applied the proprietary eCognition© software to a time series of Landsat Thematic Mapper (TM) and Enhanced TM (ETM+) imagery to improve the habitat and agricultural area classifications in Wales. Their work utilized both visible as well as infrared bands. OBIA techniques have been applied to the characterization of forests by Blaschke et al. [31]. A work by Cleve et al. [32] also demonstrated a clear improvement in land-use/land-cover delineation at urban–wildland interfaces when OBIA was used. Another application of eCognition© software appears in Moffett and Gorelick [33], where they described the advantages of OBIA over the classic pixel-based segmentation methods. They mapped wetland vegetation using 1 m resolution satellite images taking advantage of four bands: RGB and NIR. A study in West Africa (Karlson et al. [34]) applied a multistep procedure to identify tree crowns and clusters. The GEOBIA procedure in their study included classification, OBIA to refine results, and then calculation of NDVI only in those identified pixels to characterize the wooded areas. Juel et al. [35] performed mapping of vegetation in a coastal region by combining OBIA with a random forest classifier. A report by Alsharrah et al. [36] details a comparison of three vegetation mapping techniques using 2 m resolution satellite imagery in an arid climate: classic VI, OBIA, and a vegetation shadow model. Their results suggest that combining a VI classification with OBIA achieves the best match to true vegetation locations. A large-scale landcover classification was carried out recently by Maxwell et al. [37] by applying GEOBIA to four-band (including NIR) aerial photographs. After creating texture rasters and employing several sets of ancillary data, they reported a very good match between the classification and known objects on the ground.
The majority of research applying GEOBIA to vegetation classification, including all the papers cited above, employed the NIR band. Furthermore, ancillary data layers such as topography (Kim et al. [38]) or LIDAR (light detection and ranging) (Weinstein et al. [39]) were sometimes added as well. In almost all cases, study areas were in a vegetation-rich temperate climate. Significant exceptions are Alsharrah et al. [36] and a study by Ozdemir and Karnieli [40] that focused on forest structure in the semi-arid Negev (see map in Figure 1) desert in southern Israel. Their work, based on multispectral (eight-band) WorldView-2 images, used image segmentation and derived image textures to determine forest structure. That example notwithstanding, almost all previous research with GEOBIA employed spectral bands beyond the visible range and focused on temperate climates.
The expanded application of GEOBIA was a direct result of improvements in aerial photography. OBIA can discriminate objects and lead to successful classification when the image pixel size is small compared to the object size. With the advent of high-resolution, multispectral aerial photography over the past few decades, application of GEOBIA grew. The importance of pixel size in OBIA was pointed out by Yu et al. [29], and in an older work, Marceau et al. [27] even predicted that with higher-resolution images the pixel-based, spectral approach would suffer due to “salt and pepper” effects. In a study applying GEOBIA to drone-acquired imagery for precision agriculture, Torres-Sánchez et al. [41] pointed to spectral heterogeneity as a limitation of classic image classification. OBIA, on the other hand, is ideally suited to high-resolution imagery. Now that drone aerial images with very small pixel sizes are becoming an accepted research tool, GEOBIA techniques are gaining more widespread use.

1.4. Segmentation and Classification

OBIA consists of two stages: segmentation and classification. The segmentation stage collects image pixels into clusters such that within each cluster the pixels are alike and between clusters the pixels are different. The measures of likeness and difference, as described by Espindola et al. [42], are variance within each cluster and spatial autocorrelation between clusters. The balance between these two measures determines how well segmentation identifies real-world objects. If intracluster variance is kept low, then clusters will contain only very similar pixels. This can lead to oversegmentation, where real-world objects become divided across several clusters. Conversely, if the spatial autocorrelation between different clusters is kept low, then intracluster variance increases and one cluster might expand to cover several real-world objects. This balance between intracluster variance and intercluster spatial autocorrelation is regulated by the threshold input parameter (sometimes referred to as scale) of the segmentation procedure. Choosing the best threshold, as described by Espindola et al. [42], is crucial for a successful match between segmented clusters and real-world objects.
The classification stage associates each segmented cluster of pixels with a certain class. Many machine learning algorithms, reviewed by Cánovas-García and Alonso-Sarría [43], use supervised classification with a training set of known classes. For example, Yu et al. [29] applied a CART algorithm to identify vegetation in a coastal area of California using image texture rasters derived from the spectral bands, along with ancillary environmental factors. Malatesta et al. [44] compared a maximum likelihood (ML) classifier and a sequential maximum a posteriori (SMAP) model (without OBIA) and reported better results from the SMAP model. Rapinel et al. [45] used an ML classifier together with OBIA to map vegetation in a coastal region of France. They also employed ancillary data and image texture rasters. In a recent work, Mboga et al. [46] applied a fully convolutional neural network to OBIA-derived segmentation to produce landcover maps in an urban setting.
Recent research has often shown a preference for random forest (RF) classifiers (for example, Li et al. [47] and Feng et al. [22]). A comparison of four classification algorithms was presented by Grippa et al. [48], where they performed GEOBIA segmentation and classification in two urban regions. They compared k-nearest neighbors, support vector machine, recursive partitioning, and RF, as well as combinations of the above, and found that RF outperformed all others. A theoretical analysis of RF by Biau [49] and other practical applications (i.e., Nicolas et al. [50], Cánovas-García and Alonso-Sarría [43], Juel et al. [35]) pointed to on-par or superior results compared with the more traditional maximum likelihood or support vector machine classifiers. RF was also applied successfully by Maxwell et al. [37] in a large-scale landcover classification project.

1.5. Objectives

This study adopts an object-based image analysis approach for mapping vegetation in arid regions, replacing the traditional pixel-based method that underlies VI calculations. The work attempts to derive an accurate spatial dataset of vegetation patches while restricting the input to the RGB visible bands of aerial imagery to enable full utilization of older, archived photographs as well as consumer-grade, drone-acquired imagery. In addition, recent advancements in GEOBIA are incorporated into the method. This approach to identifying vegetation in a hyperarid region, while limiting the technique to visible bands only, constitutes an innovation.

2. Materials and Methods

2.1. Study Areas

The GEOBIA technique was applied to three study areas along the hyperarid Rift Valley in southern Israel (Figure 1). These areas were chosen due to the availability of accurate tree locations from monitoring campaigns. Table 1 lists auxiliary data, in addition to tree locations, that were collected at each study area during the monitoring. The climate in all areas is conducive to a mix of vegetation including subspecies of Vachellia (Acacia) tortilis, a keystone species in these areas, as well as Retama raetam and Tamarix aphylla bushes. These study areas all fall in a hyperarid region with an aridity index, the quotient of precipitation and potential evapotranspiration, below 0.05, as defined by UNEP in the World Atlas of Desertification (Cherlet et al. [51]).
The northern site, Wadi (ephemeral riverbed) Ashalim, drains a watershed of approximately 26 km². The upstream reaches of the watershed are at an elevation of 250 m, and the outlet into the Dead Sea is at −350 m. The soil in the upstream region is loess, similar to the desert mountains in southern Israel, whereas near the outlet, the stream bed enters the marl soil that typifies the Dead Sea area. This area is classified in the Köppen–Geiger system as hot semi-arid, with an annual average rainfall of 100 mm and summer daily average temperatures of 41/26 °C (high/low). In addition to the trees and bushes mentioned above, species of Atriplex also appear at this site. Analysis was done on a 427 ha area of the hyperarid, lower extent of the wadi, covering three groups of monitored trees.
The Shizaf Nature Reserve was the location of the second study area. With only 40 mm of rainfall per year, this area is classified in the Köppen–Geiger system as a hot desert. The high/low daily summer average temperatures are similar to Ashalim, 41/26 °C. Unlike Ashalim, the nature reserve covers flat terrain. A large cluster of Acacia trees was geolocated in 2005. An area of 396 ha was selected for analysis, encompassing this group of monitored trees.
The southern site, Wadi Shitta, is located about 100 km further south and drains a small watershed of 16 km². This wadi exhibits uniform loamy/sandy soil and a moderate slope. The average daily summer temperatures are slightly lower than at the northern study areas (40/23 °C) since this wadi is higher in altitude. The ongoing tree monitoring project, part of a Long Term Ecological Research (LTER) site (https://data.lter-europe.net/deims/site/lter_eu_il_015), was carried out in the eastern section of the watershed, before the wadi enters the marl soil area. Monitoring covers two clusters of some 240 trees (Vachellia tortilis, Vachellia raddiana, and Acacia pachyceras), of which 43 are monitored continuously and 40 others are flagged as dead. The analysis region extended over 256 ha to cover the two clusters of trees.

Aerial Photographs

Ortho-rectified aerial photographs were acquired for each of the regions at a geometric resolution of 25 cm/pixel. The indigenous vegetation in the study areas included trees and bushes with diameters typically above 2 m. Lahav-Ginott et al. [52] report an average tree canopy of 10 m² in area, and Ward and Rohner [53] refer to 39 m². The paper by Ward and Rohner [53] included the species Acacia gerrardii, a much larger tree, explaining the difference in canopy size. In either case, tree canopies are covered by at least several tens of pixels in the 25 cm resolution aerial photographs available in the current research. The imagery contained only the visible RGB bands, with eight-bit radiometric resolution, thus each band spanned a gray-level range (digital numbers) from 0 to 255. The Ashalim aerial photograph was acquired in 2012. For the Shizaf Nature Reserve study area, an archived aerial photograph was obtained from 2010, some years after the tree monitoring campaign. Since these areas are nature reserves, no substantial changes are expected in the few years between mapping of the tree locations and the acquisition date of the photographs. Photographs from Wadi Shitta were available from 2017, coinciding with the tree monitoring campaign. In all three study areas, the aerial photographs were acquired during the late winter to early spring seasons.

2.2. Preprocessing

2.2.1. Image Texture

Referring to Figure 2a, five GLCM parameters (Section 1.2) were derived from the green color band of each of the original aerial photographs: angular second moment, contrast, correlation, inverse difference moment, and entropy. As recognized by Maillard [28], these five, derived with Equations (4) to (8), are the ones most often used in vegetation identification.
Contrast:
$$\mathrm{Contr} = \sum_{i,j=0}^{N_g} p_{i,j}\,(i - j)^2;$$
Angular second moment (ASM):
$$\mathrm{ASM} = \sum_{i,j=0}^{N_g} p_{i,j}^2;$$
Entropy:
$$\mathrm{Entr} = \sum_{i,j=0}^{N_g} p_{i,j}\,(-\ln p_{i,j});$$
Homogeneity (inverse difference moment):
$$\mathrm{IDM} = \sum_{i,j=0}^{N_g} \frac{p_{i,j}}{1 + (i - j)^2};$$
Correlation:
$$\mathrm{Corr} = \sum_{i,j=0}^{N_g} \frac{p_{i,j}\,(i - \mu)(j - \mu)}{\sigma^2},$$
where $p_{i,j}$ is the GLCM value at matrix location $(i, j)$, $N_g$ is the number of gray levels, $\mu$ is the mean, and $\sigma$ is the standard deviation of the gray-level values within the image window.
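These five measures can be computed directly from a normalized GLCM; a sketch, assuming p is a normalized, symmetric matrix such as the one sketched in Section 1.2:

```python
import numpy as np

def texture_measures(p):
    """Contrast, ASM, entropy, IDM, and correlation (Equations (4)-(8))
    from a normalized, symmetric GLCM p."""
    n = p.shape[0]
    i, j = np.indices((n, n))
    mu = (i * p).sum()                    # GLCM mean
    sigma2 = (((i - mu) ** 2) * p).sum()  # GLCM variance
    nz = p > 0                            # skip empty cells to avoid log(0)
    return {
        "contrast": (((i - j) ** 2) * p).sum(),
        "asm": (p ** 2).sum(),
        "entropy": -(p[nz] * np.log(p[nz])).sum(),
        "idm": (p / (1.0 + (i - j) ** 2)).sum(),
        "correlation": ((i - mu) * (j - mu) * p).sum() / sigma2,
    }
```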
GLCM parameter rasters derived from each of the color bands were very similar, and thus were very highly correlated to each other. Including GLCM texture rasters for all colors would have led to overfitting at the classification stage, thus GLCM texture rasters from only one color band (green) were included.
Choice of window size impacts the resulting texture rasters. A small window results in more speckled texture rasters, whereas a larger window smooths out fine texture. A reasonable window size should reflect the smallest object to be differentiated. Considering tree canopies of a few meters (and referring to Lahav-Ginott et al. [52]), a 7 × 7 pixel window (1.75 m) was chosen in this research.

2.2.2. Unsupervised Parameter Optimization

Unsupervised parameter optimization (USPO), as introduced by Espindola et al. [42], was implemented by Johnson et al. [54] and Georganos et al. [55]. The routine, applied in this work, performed segmentation repeatedly on small but representative subsets of the original image, while stepping through a range of threshold values. These subset polygons were delineated in advance, ensuring that each subset included a representative mix of the classes in the full analysis area. Then the parameter optimization routine was run in the extent of each subset.
The normalized values of variance and spatial autocorrelation at each iteration were summed (Figure 3), and the optimal threshold was that value that achieved the maximum sum of the two measures.
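A sketch of this selection step, assuming the per-threshold intrasegment variance and intersegment Moran's I values have already been computed by the repeated segmentations (the normalization shown follows the sum-based optimization described above):

```python
import numpy as np

def optimal_threshold(thresholds, variances, morans_i):
    """Return the threshold maximizing the sum of the two normalized
    quality measures; lower raw values of both measures are better."""
    v = np.asarray(variances, dtype=float)
    m = np.asarray(morans_i, dtype=float)

    def goodness(x):
        # rescale to [0, 1], assigning 1 to the lowest (best) raw value
        return (x.max() - x) / (x.max() - x.min())

    score = goodness(v) + goodness(m)
    return thresholds[int(np.argmax(score))]
```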
The optimized threshold for each study area was determined separately since variations in contrast and color balance among the aerial photographs (from different years and different seasons) led to distinct intracluster variance and intercluster spatial autocorrelation for each image. The final optimized threshold values for each study area appear in Table 2.

2.2.3. Superpixels

The concept of superpixels, introduced by Ren and Malik [56], allows for producing a quick preliminary segmentation by k-means clustering. This initial segmentation can be used as a seed for the full segmentation procedure, thus making the overall process more efficient. Among the algorithms for creating superpixels, reviewed recently by Stutz et al. [57], simple linear iterative clustering (SLIC) (Achanta et al. [58]) was shown to be relatively quick and as successful as the others. An innovative improvement to the SLIC algorithm, known as SLIC0 (pronounced “slick naught”), was demonstrated by Csillik [59]. This approach, implemented in the current work, initializes the regular k-means clustering with a distribution of cluster center points such that nearby center points do not fall on pixels that have similar spectral signatures. In this way, the superpixel clustering ensures that adjacent clusters are different. As shown in Csillik [59], using a seed produced by SLIC0 leads to a final segmentation that stabilizes quickly and more closely matches real-life objects.
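This work used the GRASS add-on i.superpixels.slic for this step (Section 2.6); for illustration, an equivalent SLIC0 seed layer can be produced with scikit-image, which exposes the SLIC0 variant through its slic_zero flag (the file name and segment count here are illustrative):

```python
from skimage import io, segmentation

# Read an RGB ortho-photo and derive a SLIC0 superpixel seed layer.
rgb = io.imread("ortho_rgb.tif")  # illustrative path
seeds = segmentation.slic(
    rgb,
    n_segments=5000,  # approximate number of superpixels requested
    slic_zero=True,   # SLIC0: adaptive per-cluster compactness
    start_label=1,
)
```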

2.3. Segmentation and Classification

As illustrated in Figure 2b, eight raster layers were used in the segmentation process: the three original color bands and the five texture rasters. These, together with the optimized threshold value and superpixel seed layer described above, were input into the segmentation routine. The resulting output grouped all similar pixels from the original image into clusters, where each cluster should represent some real-world object. The similarity (i.e., variance) within each cluster and the difference (spatial autocorrelation) between clusters were regulated by the threshold parameter. Furthermore, the superpixel preliminary segmentation, used as a seed, allowed the procedure to complete efficiently. By separating the initial raster layers into clusters, this segmentation stage identified real-world objects, allowing the following classification stage to correctly associate a class with each cluster. Thus, the segmentation stage was crucial to achieving positive model results overall.
Classification requires, in addition to the segmentation raster output, a dataset of training points. These datasets were prepared manually by on-screen digitization, with the aerial image as background, pinpointing 98, 73, and 74 training points for the Ashalim, Shizaf, and Shitta study areas, respectively. Points were digitized covering trees, sandy areas in the wadi, soil outside the wadi, and rocky areas on the slopes. Care was taken that no tree training point overlapped true tree locations from the monitoring campaigns, thus the validation (Section 2.5) tested tree locations that were kept independent of the training points.
The classification step took into account eleven rasters. First, following the segmentation step, the three initial color bands and five texture bands were used. In addition, three geometric data layers—the area, perimeter, and circle compactness—were prepared for all segmentation clusters. Given a polygon of perimeter P and area A, compactness is given by Equation (9):
$$\mathrm{Compact} = \frac{P}{2\sqrt{\pi A}}.$$
All segmentation clusters obtained values for each of these eleven rasters by averaging pixel values within each polygon from each raster. Thus, the classification step modeled one dependent variable, the class, using eleven independent variables. Classification of the segmented raster was then performed using a random forest (RF) classifier. This machine learning algorithm randomly chooses a subset of the independent variables at each tree split, making it more resistant to overfitting when variables are correlated. In the current work, the variables are mostly derived from the three RGB bands, so correlated variables might be a concern. Therefore, RF was a suitable choice because of both its widespread use (Section 1.4) and its avoidance of overfitting. The algorithm was configured with 200 trees and three variables tried at each split (mtry = 3). Forest sizes from 100 to 800 trees were tested in the Ashalim study area, and a visual examination showed no difference with higher numbers of trees, thus 200 trees were considered sufficient.
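A minimal scikit-learn sketch of this configuration (the study's classifier came from the r.learn.ml add-on, which wraps scikit-learn; the arrays here are random stand-ins for the eleven per-cluster predictors and four training classes):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X_train = rng.random((245, 11))      # stand-in: 11 predictors per training point
y_train = rng.integers(0, 4, 245)    # stand-in classes: vegetation/soil/sand/rock
X_clusters = rng.random((1000, 11))  # stand-in: predictors for all clusters

rf = RandomForestClassifier(n_estimators=200, max_features=3, n_jobs=-1)
rf.fit(X_train, y_train)
classes = rf.predict(X_clusters)     # hard class per cluster
probs = rf.predict_proba(X_clusters) # per-class probabilities (Section 2.3)
```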
The categorical raster output of the classification procedure assigned to each pixel one of the training category values: vegetation, soil, sand, or rock (Figure 4). In addition, the classification procedure also produced a probability raster, where each pixel was given a value between 0.0 and 1.0 indicating the probability that the pixel belongs to the assigned class. Finally, vegetation patches were obtained by filtering only the vegetation class from the full classification result, and that filtered raster was vectorized to produce a polygon dataset of vegetation patches.

2.4. Post-Processing

The geometric parameters of area and compactness (Section 2.3) allowed recognition of vegetation patches by their size and shape: long, thin areas have a high compactness value. A demographic study of the Acacia population, carried out by Lahav-Ginott et al. [52], used panchromatic aerial images to determine canopy cover and tree size distribution. Their work and the study by Ward and Rohner [53] were both based on the assumption of more or less round or oval-shaped tree canopies. They recognized Acacia trees in black and white images as darker, circular patches on the light background. However, dead trees and other non-vegetation dark patches do not maintain this round shape. Dead trees appear as very irregular dark shapes, and elongated dark shapes could represent asphalt-paved roads or shadows under cliffs. Thus, a high circular compactness parameter, as given in Equation (9), was a good indicator of dark shapes that were not vegetation. Using a maximum compactness cutoff value allowed these areas to be filtered out. Furthermore, very small patches were considered suspect and ignored. The empirically determined cutoff values chosen in this work were: maximum compactness = 2.6 and minimum area = 1.0 m². These steps appear in Figure 2c.
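A sketch of these filters with geopandas, assuming the vectorized patches sit in a projected coordinate system with meter units (the file name is illustrative; the two cutoff values are the ones reported above):

```python
import math
import geopandas as gpd

MAX_COMPACT = 2.6  # empirical maximum compactness cutoff
MIN_AREA = 1.0     # minimum patch area, m^2

patches = gpd.read_file("vegetation_patches.gpkg")  # illustrative path
# Circle compactness per Equation (9): perimeter / (2 * sqrt(pi * area))
compact = patches.length / (2.0 * (math.pi * patches.area) ** 0.5)
patches = patches[(compact <= MAX_COMPACT) & (patches.area >= MIN_AREA)]
```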

2.5. Validation

Validation was carried out separately for each group of monitored trees in each study area. These groups of monitored trees, referred to as validation zones, covered only a small portion of the total analyzed area in all study areas. For example, the analyzed area in the Shizaf reserve extended over 396 ha, while the two validation zones were of 1.8 and 2.5 ha. Analysis was carried out over the full extent in order to visually verify the derived vegetation patches; however, statistical validation was limited to these small zones since true tree locations were available only within them. The validation zone surrounding the monitored trees in each group was delineated by constructing a concave hull (Park and Oh [60]), implemented in R (R Development Core Team [61]) using the concaveman package (Gombin et al. [62]). An example validation zone from the Shizaf study area appears in Figure 5.
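The concave hulls in this study were built with the R concaveman package; for readers working in Python, Shapely (version 2.0 or later) offers a comparable function, sketched here with illustrative coordinates:

```python
import shapely
from shapely.geometry import MultiPoint

# Monitored tree locations for one zone (illustrative map coordinates)
trees = MultiPoint([(0, 0), (40, 5), (18, 30), (60, 22), (35, 55), (8, 42)])
# ratio closer to 0 yields a more tightly fitting (more concave) hull
zone = shapely.concave_hull(trees, ratio=0.4)
print(zone.wkt)
```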
The number of true tree locations in each validation zone appears in Table 3 (Section 3).
The true trees from the monitoring campaigns were compared to the vegetation patches determined by the GEOBIA procedure within each validation zone. Both true positives (trees correctly identified as vegetation patches) and false positives (vegetation patches that did not overlap true trees) were tallied. In addition, the probability values for all vegetation patches were extracted from the classification probability raster. These probabilities, together with the counts of true positives, composed the true positive rate (TPR), sometimes called sensitivity. False positives with their probabilities became the false positive rate (FPR), equivalent to $1 - \mathrm{specificity}$. The receiver operating characteristic (ROC) curve plots the TPR against the FPR. The area under these ROC curves (AUC) then represents a measure of accuracy.
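A sketch of the per-zone accuracy computation with scikit-learn (the same library used in the implementation, Section 2.6), with illustrative tallies:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Illustrative inputs for one validation zone: 1 where a modeled patch
# overlaps a surveyed tree, 0 otherwise, with the classifier's
# vegetation-class probability for each patch.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.85, 0.7, 0.65, 0.6, 0.4, 0.8, 0.3, 0.75, 0.55])

fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.2f}")
```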

2.6. Implementation

Application of the method was straightforward. In addition to the three-band ortho-rectified image, preparation of certain vector data was required in advance:
  • a small, representative subset of the full study area for USPO;
  • a layer of training points for supervised classification (Section 2.3);
  • the true tree locations from monitoring campaigns;
  • the validation zones as described above in Section 2.5.
The steps described above in Section 2.2, Section 2.3, and Section 2.4 were implemented in the Python scripting language, run within the environment of GRASS-GIS (GRASS Development Team [63]). The choice of this open platform avoids the need for proprietary solutions and allows the details of implementation to be examined and developed further in the future. The code and an example implementation are available from a public repository (https://github.com/micha-silver/obia_vegetation.git). Several GRASS-GIS add-ons (https://grass.osgeo.org/grass76/manuals/addons/) were prerequisite: i.segment.uspo and r.neighborhoodmatrix for performing the USPO, i.superpixels.slic for preparing the superpixel seed, and r.learn.ml, which contains code for the random forest classifier.
The Python code called GRASS-GIS functions to perform all image analyses and segmentation steps. In the preprocessing stage, these functions created texture rasters, calculated the optimized threshold, and prepared the superpixel segmentation. Then, calls to additional functions performed full segmentation and classification. The classification step was executed with a call to the Python Scikit-learn (Pedregosa et al. [64]) library. This library also included routines for plotting the ROC curves and calculating the AUC as described in Section 2.5.
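For orientation, a condensed sketch of the main GRASS-GIS calls in this chain; map, group, and output names are illustrative, the threshold stands in for the USPO-optimized value, and parameters (including the texture output suffixes) should be checked against each module's manual:

```python
import grass.script as gs

# Texture rasters from the green band (r.texture appends method suffixes)
gs.run_command("r.texture", input="ortho.green", output="tex", size=7,
               method=["asm", "contrast", "corr", "idm", "entr"])

# Group the three color bands and five texture rasters for segmentation
gs.run_command("i.group", group="obia", input=["ortho.red", "ortho.green",
               "ortho.blue", "tex_ASM", "tex_Contr", "tex_Corr",
               "tex_IDM", "tex_Entr"])

# Superpixel seed, then full segmentation with the optimized threshold
gs.run_command("i.superpixels.slic", input="obia", output="seeds")
gs.run_command("i.segment", group="obia", output="segments",
               threshold=0.02, seeds="seeds")
```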

3. Results

The following visual representation of results includes:
  • sections of aerial photographs with modeled vegetation and true tree locations;
  • graphs showing receiver operating characteristic (ROC) curves;
  • a table summarizing AUC values for all validation zones.
The vegetation patches from classification are presented in Figure 6 for one validation zone from each study area. The true tree locations from monitoring campaigns appear in red, and classified vegetation patches are outlined in green. Visual inspection verifies that the GEOBIA procedure successfully located vegetation throughout each study area. Results from Ashalim (Figure 6a) show that some rock faces outside the dry riverbed were incorrectly identified as vegetation. The dark, slightly green shade of volcanic rock covering the hilltops might explain this misidentification. Model results from the Shizaf study area (Figure 6b) show very good identification of vegetation throughout. In the Shitta study area (Figure 6c), some dark patches south of the dry riverbed appear to be missed by the GEOBIA model; however, these are confirmed dead trees and thus correctly skipped, as illustrated in Figure 7. The ground photographs in this figure were taken in late spring; clearly, the large tree (panel a) is viable, whereas the tree in panel b shows no vegetation, and the post-processing filter correctly excluded it due to the irregular shape of the dead tree.
Three sample ROC curves appear in Figure 8, and the complete set of AUC values for all validation zones is presented in Table 3. The northern validation zone in the Shizaf nature reserve, showing the lowest AUC value, encloses many small bushes, especially Tamarix aphylla. However, the tree monitoring campaigns all focused on Acacia trees, the keystone species in this desert region. Thus, the model correctly identified vegetation patches that were not located in the monitoring campaign, leading to a somewhat high false positive rate and thus a lower AUC.

4. Discussion

Since the beginning of high-resolution, commercial, color aerial photography decades ago, a large archive of imagery consisting of only the RGB visible bands has accumulated. Recently, with the expansion of consumer-grade drone aerial photographs, even more imagery covering only the RGB bands has become available to environmental research. Numerous papers, reviewed briefly in Rapinel et al. [45], have used remote sensing for vegetation mapping. In research similar to the current work, Staben et al. [65] used high-resolution aerial imagery in an OBIA procedure to determine woody biomass in arid and semiarid regions of Australia. Shoshany and Karnibad [8] also examined biomass change and water-use efficiency in the semi-arid region of the eastern Mediterranean. However, each of these efforts used remote sensing data that included an infrared band. Much research has focused on time series analyses of desertification or forest decline (Peters et al. [66], Joshi et al. [67], and more recently Dorman et al. [68], Bajocco et al. [69], Fensholt et al. [70], and Zhang et al. [71]). Remote sensing data has aided research in tracking the health and distribution of certain species of vegetation (Escobar-Flores et al. [72], Pham et al. [4], Paz-Kagan et al. [5]). Yet, again, all of the above employed additional spectral bands.
However, as was demonstrated above, the classic pixel-based classification that takes into account only the spectral signature of the color bands achieves unsatisfactory results. The VI methods were shown to be especially unsuited to arid regions (i.e., Moleele et al. [73]) due to the weak reflectance of green and strong interference from the surrounding bright soil. Mbow et al. [10], working in the semi-arid Sahel region, showed only limited success in vegetation mapping, and only when they used soil moisture as an auxiliary variable.
GEOBIA has been a standard tool in remote sensing for over a decade. By first segmenting an image based on OBIA factors, including image texture, spectral signature, and geometry, real-world objects are correctly separated. The second, classification stage then identifies and classifies those objects. The demand to take advantage of RGB-only aerial imagery has reinforced the move to GEOBIA. Not only does GEOBIA overcome the shortcomings of VI methods, but it also deals very well with the high-resolution imagery now available by avoiding the “salt and pepper” problem.
The procedure in this work demonstrated successful mapping of vegetation in arid regions using imagery with only the RGB color bands. Initially, five texture rasters were prepared using the GLCM algorithm from one of the color bands. Two innovative preprocessing steps were adopted: a superpixel preliminary segmentation and optimized selection of the threshold parameter. With those inputs, segmentation was executed, followed by the classification step using a random forest classifier. The map images and tables presented in Section 3 suggest that accurate mapping of vegetation in arid regions with RGB-only imagery is achievable. The weak green coloring of desert vegetation is overcome by using OBIA texture factors and careful selection of the threshold parameter in segmentation. Furthermore, by adding the geometric measures of area and circle compactness before classification, the model filtered out clusters with irregular or elongated shapes that could not be vegetation patches.

5. Conclusions

The GEOBIA remote sensing tool demonstrated in this research can open the way to ecological investigations that were not easily achievable previously by utilizing archives of aerial imagery. Large-scale mapping of vegetation in arid regions potentially raises questions of tree canopy density, change detection, patch analysis, comparisons with explanatory environmental variables, and so on. Ground-based monitoring campaigns can cover only limited areas, so these avenues of research were mostly closed. Early applications of remote sensing, when based on classic vegetation indices, showed limited success in extensive mapping of trees in desert regions. By adopting and tuning the object-based method presented here, however, ecologists can obtain relatively accurate vegetation maps both from past archives of RGB-only aerial imagery and from new and inexpensive images acquired by drones. The current work, which applies recent advances in GEOBIA (Section 2.2), could revive ecological research on arid region vegetation by enabling use of archives of RGB-only aerial photographs, merged with recently acquired imagery from consumer-grade drones.
The procedure (Section 2) does not require costly proprietary software; rather, the steps are transparent and open to critical analysis. The authors believe that with careful testing and adjusting of the threshold parameters, highly reliable vegetation maps can be attained. Looking forward, application of the techniques offered herein could expand research and understanding of arid region ecosystems.

Author Contributions

Conceptualization, M.S., A.K., and A.T.; methodology, M.S.; software, M.S.; validation, M.S.; writing—original draft preparation, M.S.; writing—review and editing, A.K., A.T.; supervision, A.K.; funding acquisition, A.K.

Funding

The project leading to this research was partially funded by the Jewish National Fund (JNF) contract no. 10-02-002-17 and by the European Union’s Horizon 2020 Research and Innovation Program under grant agreements no. 641762 (Ecopotential) and no. 654359 (eLTER).

Acknowledgments

Several partners helped in the preparation of this research by supplying the locations of trees from the ground surveying campaigns. Without these data, validation of the technique would not have been possible. The head of the Arava Dead Sea Science Center, Eli Groner, connected the authors with their field staff to obtain tree locations in Wadi Shitta. In addition, Asaf Tsoar and Rotem Golan from the National Parks Authority oversee monitoring of the vegetation in Wadi Ashalim, and they willingly contributed their dataset with the help of their GIS department and Tal Polak. Many thanks to all parties involved for their assistance.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MDPI     Multidisciplinary Digital Publishing Institute
OBIA     object-based image analysis
GEOBIA   geographic object-based image analysis
NDVI     normalized difference vegetation index
VI       vegetation index
NIR      near infrared
LIDAR    light detection and ranging
GLCM     gray-level co-occurrence matrix
RF       random forest
RGB      red, green, blue
SLIC     simple linear iterative clustering
TPR      true positive rate
FPR      false positive rate
ROC      receiver operating characteristic
AUC      area under the curve

References

  1. Rouse, J.; Haas, R.; Schell, J.; Deering, D. Monitoring Vegetation Systems in the Great Plains with ERTS; Remote Sensing Center, Texas A&M University: College Station, TX, USA, 1974. [Google Scholar]
  2. Isaacson, S.; Rachmilevitch, S.; Ephrath, J.E.; Maman, S.; Blumberg, D.G. Monitoring tree population dynamics in arid zone through multiple temporal scales: Integration of spatial analysis change detection and field long term monitoring. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, XLI-B7, 513–515. [Google Scholar] [CrossRef]
  3. Wiegand, K.; Schmidt, H.; Jeltsch, F.; Ward, D. Linking a spatially-explicit model of acacias to GIS and remotely-sensed data. Folia Geobot. 2000, 35, 211–230. [Google Scholar] [CrossRef]
  4. Pham, T.D.; Yokoya, N.; Bui, D.T.; Yoshino, K.; Friess, D.A. Remote Sensing Approaches for Monitoring Mangrove Species, Structure, and Biomass: Opportunities and Challenges. Remote Sens. 2019, 11, 230. [Google Scholar] [CrossRef]
  5. Paz-Kagan, T.; Silver, M.; Panov, N.; Karnieli, A. Multispectral Approach for Identifying Invasive Plant Species Based on Flowering Phenology Characteristics. Remote Sens. 2019, 11, 953. [Google Scholar] [CrossRef]
  6. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. CoSpace: Common Subspace Learning From Hyperspectral-Multispectral Correspondences. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4349–4359. [Google Scholar] [CrossRef] [Green Version]
  7. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 1923–1938. [Google Scholar] [CrossRef] [PubMed]
  8. Shoshany, M.; Karnibad, L. Remote Sensing of Shrubland Drying in the South-East Mediterranean, 1995–2010: Water-Use-Efficiency-Based Mapping of Biomass Change. Remote Sens. 2015, 7, 2283–2301. [Google Scholar] [CrossRef]
  9. Karnieli, A.; Agam, N.; Pinker, R.T.; Anderson, M.; Imhoff, M.L.; Gutman, G.G.; Panov, N.; Goldberg, A. Use of NDVI and Land Surface Temperature for Drought Assessment: Merits and Limitations. J. Clim. 2010, 23, 618–633. [Google Scholar] [CrossRef]
  10. Mbow, C.; Fensholt, R.; Rasmussen, K.; Diop, D. Can vegetation productivity be derived from greenness in a semi-arid environment? Evidence from ground-based measurements. J. Arid Environ. 2013, 97, 56–65. [Google Scholar] [CrossRef]
  11. Théau, J.; Sankey, T.T.; Weber, K.T. Multi-sensor analyses of vegetation indices in a semi-arid environment. GISci. Remote Sens. 2010, 47, 260–275. [Google Scholar] [CrossRef]
  12. Peng, D.; Wu, C.; Li, C.; Zhang, X.; Liu, Z.; Ye, H.; Luo, S.; Liu, X.; Hu, Y.; Fang, B. Spring green-up phenology products derived from MODIS NDVI and EVI: Intercomparison, interpretation and validation using National Phenology Network and AmeriFlux observations. Ecol. Indic. 2017, 77, 323–336. [Google Scholar] [CrossRef]
  13. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  14. Qi, J.; Chehbouni, A.; Huete, A.; Kerr, Y.; Sorooshian, S. A Modified Soil Adjusted Vegetation Index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  15. Huete, A.R.; Liu, H.Q.; Batchily, K.; Van Leeuwen, W. A comparison of vegetation indices over a global set of TM images for EOS-MODIS. Remote Sens. Environ. 1997, 59, 440–451. [Google Scholar] [CrossRef]
  16. Motohka, T.; Nasahara, K.N.; Oguma, H.; Tsuchida, S. Applicability of Green-Red Vegetation Index for Remote Sensing of Vegetation Phenology. Remote Sens. 2010, 2, 2369–2387. [Google Scholar] [CrossRef] [Green Version]
  17. McKinnon, T.; Huff, P. Comparing RGB-Based Vegetation Indices with NDVI for Drone Based Agricultural Sensing. Agribotix.com 2017, 1–8. [Google Scholar]
  18. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef]
  19. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef] [Green Version]
  20. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  21. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  22. Feng, Q.; Liu, J.; Gong, J. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef] [Green Version]
  23. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  24. Laws, K.I. Goal-Directed Textured-Image Segmentation. In Applications of Artificial Intelligence II; SPIE: Bellingham, WA, USA, 1985; Volume 548. [Google Scholar]
  25. Selvarajah, S.; Kodituwakku, S.R. Analysis and comparison of texture features for content based image retrieval. Int. J. Latest Trends Comput. 2011, 2, 108–113. [Google Scholar]
  26. Ruiz, L.A.; Fdez-Sarría, A.; Recio, J.A. Texture feature extraction for classification of remote sensing data using wavelet decomposition: A comparative study. In Proceedings of the 20th ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume 35, pp. 1109–1114. [Google Scholar]
  27. Marceau, D.J.; Howarth, P.J.; Dubois, J.M.M.; Gratton, D.J. Evaluation of the Grey-Level Co-Occurrence Matrix Method For Land-Cover Classification Using SPOT Imagery. IEEE Trans. Geosci. Remote Sens. 1990, 28, 513–519. [Google Scholar] [CrossRef]
  28. Maillard, P. Comparing Texture Analysis Methods through Classification. Photogramm. Eng. Remote Sens. 2003, 69, 357–367. [Google Scholar] [CrossRef] [Green Version]
  29. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811. [Google Scholar] [CrossRef]
  30. Lucas, R.; Rowlands, A.; Brown, A.; Keyworth, S.; Bunting, P. Rule-based classification of multi-temporal satellite imagery for habitat and agricultural land cover mapping. ISPRS J. Photogramm. Remote Sens. 2007, 62, 165–185. [Google Scholar] [CrossRef]
  31. Blaschke, T.; Lang, S.; Hay, G.J. (Eds.) Pixels to Objects to Information: Spatial Context to Aid in Forest Characterization with Remote Sensing. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 345–363. [Google Scholar] [CrossRef]
  32. Cleve, C.; Kelly, M.; Kearns, F.R.; Moritz, M. Classification of the wildland–urban interface: A comparison of pixel- and object-based classifications using high-resolution aerial photography. Comput. Environ. Urban Syst. 2008, 32, 317–326. [Google Scholar] [CrossRef]
  33. Moffett, K.B.; Gorelick, S.M. Distinguishing wetland vegetation and channel features with object-based image segmentation. Int. J. Remote Sens. 2013, 34, 1332–1354. [Google Scholar] [CrossRef]
  34. Karlson, M.; Reese, H.; Ostwald, M. Tree Crown Mapping in Managed Woodlands (Parklands) of Semi-Arid West Africa Using WorldView-2 Imagery and Geographic Object Based Image Analysis. Sensors 2014, 14, 22643–22669. [Google Scholar] [CrossRef]
  35. Juel, A.; Groom, G.B.; Svenning, J.C.; Ejrnæs, R. Spatial application of Random Forest models for fine-scale coastal vegetation classification using object based analysis of aerial orthophoto and DEM data. Int. J. Appl. Earth Obs. Geoinf. 2015, 42, 106–114. [Google Scholar] [CrossRef]
  36. Alsharrah, S.A.; Bruce, D.A.; Bouabid, R.; Somenahalli, S.; Corcoran, P.A. High-Spatial Resolution Multispectral and Panchromatic Satellite Imagery for Mapping Perennial Desert Plants. In Proceedings of the SPIE; SPIE: Bellingham, WA, USA, 2015; p. 96440Z. [Google Scholar] [CrossRef]
  37. Maxwell, A.E.; Strager, M.P.; Warner, T.A.; Ramezan, C.A.; Morgan, A.N.; Pauley, C.E. Large-Area, High Spatial Resolution Land Cover Mapping Using Random Forests, GEOBIA, and NAIP Orthophotography: Findings and Recommendations. Remote Sens. 2019, 11, 1409. [Google Scholar] [CrossRef]
Figure 1. (a) Three study areas along the Afro-Syrian rift valley with aridity index data from https://cgiarcsi.community/data/global-aridity-and-pet-database/, (b) overview map.
Figure 2. Flow diagram of the object-based image analysis procedure.
Figure 3. Unsupervised parameter optimization. Graph (a) shows the decrease in spatial autocorrelation between clusters as the threshold increases; graph (b) shows the increase in variance within clusters as the threshold increases. The normalized combination of the two appears in graph (c), with the optimal threshold indicated by the vertical dotted line. These graphs were derived from the unsupervised parameter optimization (USPO) procedure in the Shizaf study area.
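The balancing act in Figure 3 can be illustrated with a few lines of Python. The sketch below assumes the normalization scheme commonly used for unsupervised segmentation parameter optimization: spatial autocorrelation between segments and variance within segments are each rescaled to [0, 1], inverted so that lower raw values score higher, and summed, after which the threshold with the highest combined score is selected. The candidate threshold range and the synthetic curves are illustrative stand-ins, not the study's measured values.

    import numpy as np

    def normalize(x):
        # Rescale a criterion to [0, 1] so the two measures can be
        # combined on a common scale.
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    # Candidate segmentation thresholds and synthetic stand-ins shaped
    # like Figure 3: autocorrelation between clusters falls (a) and
    # variance within clusters rises (b) as the threshold increases.
    thresholds = np.arange(0.05, 0.31, 0.01)
    autocorr = np.exp(-8.0 * thresholds)
    variance = thresholds ** 2

    # Both criteria are better when low, so invert after normalizing
    # and sum; the maximum of the combined score (c) balances the two.
    score = (1.0 - normalize(autocorr)) + (1.0 - normalize(variance))
    optimal = thresholds[np.argmax(score)]
    print(f"optimal threshold: {optimal:.2f}")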
Figure 4. Random Forest (RF) classification result (a) and RF probability raster (b) in a section of the Shitta study area.
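The two panels of Figure 4 correspond to the two standard outputs of a random forest classifier: hard class labels and per-class probabilities. The following minimal sketch shows how both can be produced with scikit-learn; the feature set, array shapes, and forest size are illustrative assumptions rather than the exact configuration used in the study.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Per-segment feature matrix (e.g., mean R, G, B plus shape and
    # texture statistics) and binary training labels (1 = vegetation,
    # 0 = background). Random stand-ins for illustration only.
    rng = np.random.default_rng(42)
    X = rng.random((200, 6))
    y = (X[:, 0] + 0.2 * rng.random(200) > 0.6).astype(int)

    rf = RandomForestClassifier(n_estimators=500, random_state=42)
    rf.fit(X, y)

    labels = rf.predict(X)                # hard classes, as in Figure 4a
    veg_prob = rf.predict_proba(X)[:, 1]  # vegetation probability, Figure 4b

In practice, X would be assembled per segment from the segmentation output, and the vegetation probabilities would be written back to the segments to form a raster like Figure 4b.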
Figure 5. Validation zone (blue dashed line) in the Shizaf study area. The monitored tree locations appear as red "Xs".
Figure 6. Three vegetation classification results: Wadi Ashalim (a), the northern zone in Shizaf (b), and the eastern zone in Shitta (c). Monitored tree locations appear as red crosses, and vegetation patches are outlined in green.
Figure 7. Closeup of the Shitta region. Two trees were photographed and the pictures georeferenced. Photo (a) shows a live tree correctly identified as a vegetation patch, while photo (b) shows a tree without green foliage that the model correctly excluded.
Figure 8. Three receiver operating characteristic (ROC) curves: the Wadi Amiaz zone in Ashalim (a), the northern zone in Shizaf (b), and the western zone in Shitta (c).
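The ROC curves in Figure 8 and the AUC values in Table 3 follow from comparing the model's vegetation probabilities against the monitored tree locations. A minimal sketch of that computation, assuming a binary ground-truth label and a probability score per validation point (both randomly generated here purely for illustration):

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # y_true: 1 where a monitored live tree is present, 0 otherwise;
    # scores: the classifier's vegetation probability at each point.
    # Random stand-ins for illustration only.
    rng = np.random.default_rng(7)
    y_true = rng.integers(0, 2, 150)
    scores = np.clip(0.5 * y_true + rng.random(150) * 0.6, 0.0, 1.0)

    fpr, tpr, _ = roc_curve(y_true, scores)
    print(f"AUC = {auc(fpr, tpr):.3f}")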
Table 1. Auxiliary data collected at each study area during monitoring campaigns.

                                 Shizaf    Shitta    Ashalim
    Year initialized             2007      2017      2012
    Species                      x         x         x
    Number of trunks             x
    Trunk circumference          x                   x
    Age (est.)                   x
    Canopy height (est.)         x         x         x
    Canopy area (est.)           x         x
    Canopy E–W                             x
    Canopy N–S                             x
    Mistletoe parasite (T/F)     x
    Status (live/dead)           x         x         x
    Monitoring date              x                   x
    Continuous monitoring (T/F)            x
    Flowering                              x
Table 2. Optimal threshold values for each study area.

    Study Area    Optimized Threshold
    Ashalim       0.11
    Shizaf        0.13
    Shitta        0.12
Table 3. Area under the curve (AUC) values and number of validation trees for all validation zones.

    Study Area    Validation Zone    AUC      Number of Trees
    Ashalim       Wadi Amiaz         0.818    62
    Ashalim       Wadi Ashalim       0.749    85
    Ashalim       south              0.850    66
    Shizaf        north              0.712    134
    Shizaf        south              0.731    159
    Shitta        east               0.830    72
    Shitta        west               0.730    82
