High resolution multisensor fusion of SAR, optical and LiDAR data based on crisp vs. fuzzy and feature vs. decision ensemble systems


Synthetic Aperture Radar (SAR) data are of high interest for different applications in remote sensing, especially land cover classification. SAR imaging is independent of solar illumination and weather conditions. It can even penetrate some of the Earth's surface materials to return information about subsurface features. However, the radar response is more a function of geometry and structure than of the surface reflection observed in optical images. In addition, the backscatter of objects in the microwave range depends on the frequency band used, and the grey values in SAR images differ from the usual assumption of spectral reflectance of the Earth's surface. Consequently, SAR imaging is often used as a complementary technique to traditional optical remote sensing. This study presents different ensemble systems for multisensor fusion of SAR, multispectral and LiDAR data. First, in the decision ensemble system, after extraction and selection of proper features from each data source, a crisp SVM (Support Vector Machine) and a fuzzy KNN (K Nearest Neighbor) are applied to each feature space. Bayesian theory is then applied to fuse the SVMs, while Decision Template (DT) and Dempster-Shafer (DS) are applied as fuzzy decision fusion methods on the KNNs. Second, in the feature ensemble system, the features from all data sources are stacked into a single cube, and classification is performed by SVM and FKNN as the crisp and fuzzy decision-making systems, respectively. A co-registered TerraSAR-X, WorldView-2 and LiDAR data set from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that combining SAR data with different sensors improves the classification results for most of the classes. © 2016 Elsevier B.V. All rights reserved.

1. Introduction and background

A huge amount of remote sensing data of different kinds has been acquired during recent years. Information extraction from these data is still a challenging task, for example through data classification.
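The feature ensemble system summarized in the abstract — stacking the features from all sensors into one cube and classifying it with a crisp (SVM) or fuzzy (FKNN) classifier — can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the feature counts are arbitrary and scikit-learn's SVC stands in for whatever SVM configuration was actually used:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-pixel feature blocks from each sensor (n pixels each).
n = 200
sar_feats = rng.normal(size=(n, 3))      # e.g. amplitude, intensity, phase
optical_feats = rng.normal(size=(n, 4))  # e.g. vegetation indices, texture
lidar_feats = rng.normal(size=(n, 2))    # e.g. nDSM, roughness

# Feature ensemble: stack everything into one feature vector per pixel.
X = np.concatenate([sar_feats, optical_feats, lidar_feats], axis=1)
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic labels for the demo

# Crisp classification of the stacked cube with an SVM.
clf = SVC(kernel="rbf").fit(X, y)
labels = clf.predict(X)
```

The fuzzy branch would replace the SVM with a fuzzy KNN that outputs class memberships instead of hard labels.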
Synthetic Aperture Radar (SAR), one of the most common types of remote sensing data, provides measurements in amplitude and phase related to the interaction of the Earth's surface with microwaves. SAR imaging is independent of solar illumination and weather conditions; it is not affected by rain, fog, hail, smoke, or, most importantly, clouds. It can even penetrate some of the Earth's surface materials to return information about subsurface features (Crisp, 2006). However, SAR images are difficult to interpret due to their special characteristics, i.e., the geometry and spectral range of SAR are different from optical imagery. In addition, the presence of speckle makes SAR images visually difficult to interpret.

∗ Corresponding author. E-mail addresses: bigdeli@ut.ac.ir (B. Bigdeli), pahlavani@ut.ac.ir (P. Pahlavani). http://dx.doi.org/10.1016/j.jag.2016.06.008 0303-2434/© 2016 Elsevier B.V. All rights reserved.

In recent years, significant attention has focused on multisensor data fusion for remote sensing applications and, more specifically, for land cover mapping. Data fusion techniques combine information from multiple sources, providing potential advantages over a single sensor in terms of classification accuracy. Van der Meer (1997) assessed the effects of sensor fusion systems with the aim of adding information content for visual interpretation. In this field, combining SAR and optical imagery increases the overall classification accuracy compared to a single-source classifier. It has also been shown that fusion of SAR and optical data can help to overcome the lack of information due to cloud cover. In most cases, data fusion provided higher accuracy than single sensors. Researchers have found that data fusion produced small to moderate increases in accuracy over multispectral image mapping and significant accuracy improvements over SAR data alone (Schistad Solberg et al., 1994).
For instance, Lozano-Garcia and Hoffer (1993) compared classification of SIR-B data (L-HH at three different incidence angles), Landsat TM, and multisensor fusion for mapping land cover in Florida. The best accuracy they achieved for the fused data was only slightly higher than the best result for TM alone, but a great improvement over SIR-B alone.

B. Bigdeli, P. Pahlavani / International Journal of Applied Earth Observation and Geoinformation 52 (2016) 126–136

Fig. 1. Proposed ensemble system for multisensor fusion of SAR, optical and LiDAR data.

Table 1. Feature spaces on MS, LiDAR and SAR data.

Multispectral
- Vegetation indices: Ratio Vegetation Index (RVI), Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index (SAVI), Modified Soil Adjusted Vegetation Index (MSAVI)
- Color spaces: YIQ, HSI, YCbCr
- First-order statistical descriptors: Mean, Variance, Skewness, Kurtosis
- Second-order statistical descriptors: Contrast, Homogeneity, Entropy, Correlation, Energy

LiDAR
- Topography features: nDSM, Normalized Difference Index (NDI), Slope, Aspect, Profile curvature, Plan curvature, Roughness, Smoothness, Variance, Laplacian
- Textural features: GLCM (entropy, correlation, contrast, mean, standard deviation, dissimilarity, homogeneity, second moment), Variogram (semi-variogram, radiogram, madogram)

SAR
- SAR features: Phase, Amplitude, Intensity

Le Hégarat-Mascle et al. (2000) observed similar trends when they compared image classifications of multitemporal European Remote Sensing (ERS) Satellite (C-VV) data, Landsat TM, and their fusion for identifying crops in France. They found that the SAR data (ERS) produced the poorest results, TM identified crop types better, and the fused data classification was an improvement over either single data source. Some of the recent research on the fusion of SAR and optical data has focused on low-level fusion systems such as the pixel level (Hong et al., 2009).
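Several of the multispectral features in Table 1 are simple band-ratio indices. As a rough sketch of how they are computed (the band arrays and the SAVI soil-adjustment factor L = 0.5 are illustrative assumptions, not values from the paper):

```python
import numpy as np

def vegetation_indices(nir, red, L=0.5):
    """RVI, NDVI and SAVI from near-infrared and red reflectance bands
    (three of the multispectral features listed in Table 1)."""
    nir = nir.astype(float)
    red = red.astype(float)
    rvi = nir / red                                   # Ratio Vegetation Index
    ndvi = (nir - red) / (nir + red)                  # Normalized Difference VI
    savi = (1.0 + L) * (nir - red) / (nir + red + L)  # Soil Adjusted VI
    return rvi, ndvi, savi

# Example on a tiny two-pixel "image"
nir = np.array([0.5, 0.4])
red = np.array([0.1, 0.2])
rvi, ndvi, savi = vegetation_indices(nir, red)
```

The same per-pixel pattern extends to MSAVI and, with a co-occurrence matrix, to the GLCM texture descriptors.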
In general, these techniques can be grouped into two categories: colour-related techniques and statistical/numerical methods. Hong et al. (2009) proposed a pixel-level image fusion as an alternative to improve the interpretability of SAR images by fusing in the colour information from moderate-spatial-resolution multispectral (MS) images. They applied a new fusion method based on the integration of the wavelet transform and the IHS (Intensity, Hue, and Saturation) transform for SAR and MS fusion, to maintain the spectral content of the original MS image while retaining the spatial detail of the high-resolution SAR image. Among more recent pixel-based fusion studies, Reiche et al. (2015) applied an optical-SAR time series fusion approach for detecting deforestation, using Landsat NDVI and ALOS PALSAR backscatter time series. They presented a novel pixel-based Multi-sensor Time-series correlation and Fusion approach (MulTiFuse) that exploits the full observation density of the optical and SAR time series; the fused results were significantly better than SAR-only and Landsat-only results. Zhouping (2015) presented an algorithm for fusion of SAR and optical data with fast sparse representation on low-frequency images for target detection. First, the source images undergo multi-scale decomposition based on the Support Vector Transform (SVT); the low-frequency sub-band images are then fused by sparse representation, the high-frequency sub-band images are fused based on regional energy, and finally the fused image is obtained by reconstruction. In addition, some studies have applied Neural Networks (NNs) to the fusion of SAR and optical data (Serpico and Roli, 1995; Kavzoglu and Mather, 1999; Cao and Jin, 2007). Kavzoglu and Mather applied three different techniques for pruning ANNs using microwave SAR and optical SPOT data to classify land cover. Serpico and Roli proposed structured neural networks for classification of SAR and optical data, also making comparisons with fully connected neural networks and with the k-nearest neighbour classifier. Cao and Jin (2007) fused SAR ERS-2 and Landsat ETM+ data for classification of urban terrain surfaces. They applied a hybrid algorithm of the back-propagation artificial neural network and genetic algorithm (BP-ANN/GA) to optimize the initial weights and to enable fast convergence of the BP-ANN. From all of these methods it can be concluded that fusion of optical and SAR imagery is more difficult than the fusion of two optical images, because the grey values of SAR imagery do not correlate with those of multispectral imagery; in contrast, the correlation between panchromatic and multispectral bands of the same sensor is high. Fusion of sensors with inherent differences, such as SAR and optical data, requires higher-level fusion strategies. The ability to fuse different types of data from different sensors, independence from errors in the data registration step, and accurate fusion results can be mentioned as benefits of decision-level fusion over lower-level fusion.

Fig. 2. San Francisco data sets, (a) LiDAR and (b) WorldView-2.

Table 2. Accuracy assessment of the crisp decision ensemble system on the first area.

Accuracy          LiDAR   SAR     WorldView   Fusion
Overall Accuracy  90.41   76.64   86.24       93.42
Kappa             86.2    70.1    82.32       89.88

Table 3. Results of the fuzzy decision ensemble system for the first area.

Accuracy          LiDAR   SAR     WorldView   Fuzzy Fusion (DT)   Fuzzy Fusion (DS)
Overall Accuracy  91.9    78.52   89.64       96.94               97.68
Kappa             88.2    74.22   85.82       93.84               93.44
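The Dempster-Shafer (DS) rule used above as a fuzzy decision fusion method combines the evidence of two classifiers by multiplying agreeing masses and renormalizing by the conflicting mass. A minimal sketch restricted to singleton class hypotheses (the full theory also allows compound sets and an ignorance mass, omitted here; the class names and mass values are hypothetical):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over singleton class hypotheses with
    Dempster's rule: product of agreeing masses, renormalized by 1 - K,
    where K is the total mass of conflicting (disagreeing) pairs."""
    classes = set(m1) | set(m2)
    combined = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    conflict = 1.0 - sum(combined.values())  # mass on disagreeing pairs
    if conflict >= 1.0:
        raise ValueError("classifiers are in total conflict")
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

# Two classifiers' soft outputs for one pixel (hypothetical values):
m_a = {"urban": 0.7, "vegetation": 0.2, "water": 0.1}
m_b = {"urban": 0.5, "vegetation": 0.4, "water": 0.1}
fused = dempster_combine(m_a, m_b)
```

Because both sources agree most strongly on "urban", the fused mass for that class exceeds either individual mass, which is the behaviour that drives the fusion gains reported in Table 3.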
An extensive literature is available on decision fusion approaches for the fusion of SAR and optical data (Benediktsson and Kanellopoulos, 1999; Waske and Benediktsson, 2007; Waske and van der Linden, 2008). Benediktsson and Kanellopoulos (1999) introduced the concept of a parallel use of neural and statistical techniques for classifying a multisensor data set, consisting
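The fuzzy KNN classifiers whose outputs feed the DT and DS fusion stages produce class membership degrees rather than crisp labels. A minimal membership sketch, assuming Keller-style inverse-distance weighting (the fuzzifier m, the neighbourhood size k, and the data are illustrative, not the paper's configuration):

```python
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, k=3, m=2.0):
    """Class membership degrees of sample x from its k nearest training
    samples, weighting each neighbour's vote by inverse distance
    (fuzzifier m > 1 controls how sharply distance discounts a vote)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] ** (2.0 / (m - 1.0)) + 1e-12)  # avoid div by zero
    classes = np.unique(y_train)
    u = np.array([w[y_train[nearest] == c].sum() for c in classes])
    return classes, u / u.sum()  # memberships sum to 1

# Toy 1-D training set with two classes, and a query near class 0:
X_train = np.array([[0.0], [0.1], [5.0], [5.1]])
y_train = np.array([0, 0, 1, 1])
classes, u = fuzzy_knn_memberships(X_train, y_train, x=np.array([0.05]))
```

Per-sensor membership vectors of this form are what a DT or DS combiner would then fuse into a final label.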

