This document is a hands-on exercise generated for the EUFAR Training Course Airborne Remote Sensing for Monitoring Essential Biodiversity Variables in Forest Ecosystems (RS4forestEBV). In particular, this hands-on exercise illustrates and provides the code for different exercises on Hyperspectral & LiDAR data fusion.
This document was generated using R Markdown (a format for writing reproducible, dynamic reports with R) in RStudio. The original R Markdown document can be used directly in RStudio to reproduce the results below. Consequently, before starting, make sure you have a recent version of R (and RStudio) installed and ready to use on your computer. In addition, the following packages are required to run the exercises below:
#please uncomment if not installed yet
#install.packages("raster")
#install.packages("rgdal")
#install.packages("caret")
library("raster")
library("rgdal")
library("caret")
This tutorial covers loading and exploring the different data sources, extracting explanatory variables for the ground reference points, and training and validating random forest classification models on the individual datasets as well as on their fusion.
Three different data sources are used in this tutorial, covering parts of the Wijnendale forest in Belgium. Wijnendale Forest is a managed forest consisting of a mixture of oak, maple, beech, larch, …
To access these datasets, please download the data from the group server (link sent by Roshanak) and extract the corresponding .zip file. Next, please change your working directory to this unzipped folder.
setwd("C:/Users/vdkerchr/Documents/DataFusion_VanDeKerchove/") #change to the extracted zip folder
1. APEX hyperspectral image: APEX.tif
This contains a preprocessed hyperspectral airborne APEX image obtained in June 2010. Pre-processing was done by the Flemish Institute for Technological Research (VITO). The dataset contains 244 spectral bands at a spatial resolution of 1.5 m (see the properties printed below):
#load the APEX hypercube
APEX <- brick("./input/APEX.tif")
#check the properties
APEX
## class : RasterBrick
## dimensions : 1079, 649, 700271, 244 (nrow, ncol, ncell, nlayers)
## resolution : 1.5, 1.5 (x, y)
## extent : 56256.5, 57230, 195293, 196911.5 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=lcc +lat_1=51.16666723333333 +lat_2=49.8333339 +lat_0=90 +lon_0=4.367486666666666 +x_0=150000.013 +y_0=5400088.438 +ellps=intl +towgs84=-106.8686,52.2978,-103.7239,0.3366,-0.457,1.8422,-1.2747 +units=m +no_defs
## data source : C:\Users\vdkerchr\Documents\DataFusion_VanDeKerchove\input\APEX.tif
## names : APEX.1, APEX.2, APEX.3, APEX.4, APEX.5, APEX.6, APEX.7, APEX.8, APEX.9, APEX.10, APEX.11, APEX.12, APEX.13, APEX.14, APEX.15, ...
#plot a true colour composite (TCC)
plotRGB(APEX,37,15,4,scale=65535,stretch="lin")
2. LiDAR: CHM.tif and HP.tif
LiDAR data were obtained with a TopoSys Harrier 56 sensor operated in full-waveform mode. The study area was covered by four different flight lines. The resulting point density was 13.81 points/m², with a point spacing of 0.27 m (using all returns).
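As a quick sanity check, the reported point spacing is consistent with the point density, since spacing is roughly the inverse square root of the density:

```r
# point spacing ~ 1/sqrt(point density); values taken from the text above
density <- 13.81          # points per m^2 (all returns)
spacing <- 1 / sqrt(density)
round(spacing, 2)         # 0.27 m
```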
The LiDAR data were processed in LAS format and converted to a raster grid with a cell size equal to the spatial resolution of the hyperspectral sensor (1.5 m). Two products were obtained and are provided: a canopy height model (CHM.tif) and height percentiles (HP.tif). The latter is represented as a multi-band image of 9 bands, containing respectively the 10th, 20th, 30th, …, 90th height percentiles.
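To illustrate how such products are derived, the sketch below computes the nine height percentiles and a simple canopy-top height for the returns falling in a single (hypothetical) grid cell, using synthetic heights; in practice the gridding is done per cell over the whole point cloud with dedicated LiDAR software.

```r
# synthetic heights (m) of the LiDAR returns in one 1.5 m cell (made-up values)
set.seed(42)
z <- runif(50, min = 0, max = 25)

# the 9 bands of HP.tif: 10th, 20th, ..., 90th height percentile
hp <- quantile(z, probs = seq(0.1, 0.9, by = 0.1))

# a simple CHM value for the cell: the height of the highest return
chm <- max(z)
```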
#load the CHM
CHM <- raster("./input/CHM.tif")
#check the properties
CHM
## class : RasterLayer
## dimensions : 1079, 649, 700271 (nrow, ncol, ncell)
## resolution : 1.5, 1.5 (x, y)
## extent : 56256.5, 57230, 195293, 196911.5 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=lcc +lat_1=51.16666723333333 +lat_2=49.8333339 +lat_0=90 +lon_0=4.367486666666666 +x_0=150000.013 +y_0=5400088.438 +ellps=intl +towgs84=-106.8686,52.2978,-103.7239,0.3366,-0.457,1.8422,-1.2747 +units=m +no_defs
## data source : C:\Users\vdkerchr\Documents\DataFusion_VanDeKerchove\input\CHM.tif
## names : CHM
## values : -0.1137505, 39.05622 (min, max)
#plot the CHM
plot(CHM,main="CHM [m]")
#load the HP layer
HP <- brick("./input/HP.tif")
#check the properties
HP
## class : RasterBrick
## dimensions : 1079, 649, 700271, 9 (nrow, ncol, ncell, nlayers)
## resolution : 1.5, 1.5 (x, y)
## extent : 56256.5, 57230, 195293, 196911.5 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=lcc +lat_1=51.16666723333333 +lat_2=49.8333339 +lat_0=90 +lon_0=4.367486666666666 +x_0=150000.013 +y_0=5400088.438 +ellps=intl +towgs84=-106.8686,52.2978,-103.7239,0.3366,-0.457,1.8422,-1.2747 +units=m +no_defs
## data source : C:\Users\vdkerchr\Documents\DataFusion_VanDeKerchove\input\HP.tif
## names : HP.1, HP.2, HP.3, HP.4, HP.5, HP.6, HP.7, HP.8, HP.9
## min values : -0.1770832, -0.1792676, -0.1830814, -0.2334484, -0.3433107, -0.3630987, -0.3801159, -0.4108579, -1.0213253
## max values : 35.32573, 36.86208, 37.08920, 37.51311, 37.65366, 38.71571, 38.79909, 39.01853, 39.21227
#plot a False Colour composite of the 90th, 50th and 10th percentile
plotRGB(HP,r=9,g=5,b=1,scale=50,stretch="lin")
3. Ground reference data: training_data.shp
This contains tree species information (points) available in vector format (shapefile).
#load the training data
training_data <- readOGR("./input","training_data")
## OGR data source with driver: ESRI Shapefile
## Source: "./input", layer: "training_data"
## with 1414 features
## It has 1 fields
#explore
training_data
## class : SpatialPointsDataFrame
## features : 1414
## extent : 56270.04, 57213.54, 195374.8, 196523.8 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=lcc +lat_1=51.16666723333333 +lat_2=49.8333339 +lat_0=90 +lon_0=4.367486666666666 +x_0=150000.013 +y_0=5400088.438 +ellps=intl +units=m +no_defs
## variables : 1
## names : class
## min values : Ash
## max values : Poplar
#show the number of points/class
table(training_data$class)
##
## Ash Beech Chestnut Copper.beech Larch
## 67 388 67 79 108
## Oak Poplar
## 301 404
#plot the training data
plotRGB(APEX,37,15,4,scale=65535,stretch="lin")
plot(training_data,add=T,col=training_data$class)
In this step we’ll extract the corresponding hyperspectral spectra and LiDAR structural parameters for each of the training data points.
#extract the explanatory variables
data.APEX <- extract(APEX,training_data)
data.CHM <- extract(CHM,training_data)
data.HP <- extract(HP,training_data)
#aggregate into a data frame
data <- cbind(as.data.frame(data.APEX),
as.data.frame(data.CHM),
as.data.frame(data.HP),
class=training_data$class)
names(data)[245] <- "CHM" #give the CHM column a meaningful name
Next we’ll plot the explanatory variables by class.
#plot the average APEX spectra by class (note that no wavelength information is provided)
mean.spec <- matrix(nrow=7,ncol=244)
for (i in 1:244){
mean.spec[,i]=tapply(data[,i],data$class,function(x)(mean(x,na.rm=T)))
}
plot(mean.spec[1,],ylim=c(0,max(mean.spec)),col=1,type="l",ylab="Scaled Reflectance",main="Average spectrum by species",lwd=2)
for (i in 2:7)(lines(mean.spec[i,],col=i,lwd=2))
legend("topright",levels(data$class),lty=rep(1,7),col=1:7,lwd=rep(2,7))
#boxplot with the CHM by class
boxplot(CHM~class,data=data,xaxt ="n",ylab="Height [m]",main="Height by species",col=1:7)
axis(1, at=1:7, labels=levels(data$class),las=2)
abline(h=median(data$CHM),col="red",lty=2)
#plot the HP by class
mean.HP <- matrix(nrow=7,ncol=9)
for (i in 1:9){
mean.HP[,i]=tapply(data.HP[,i],data$class,function(x)(mean(x,na.rm=T)))
}
plot(mean.HP[1,],ylim=c(min(mean.HP),max(mean.HP)),col=1,lwd=2,type="l",ylab="Height [m]",main="HP by species",xaxt="n")
axis(1,at=1:9,labels=paste0("HP",1:9))
for (i in 2:7)(lines(mean.HP[i,],col=i,lwd=2))
legend("bottomright",levels(data$class),lty=rep(1,7),col=1:7,lwd=rep(2,7))
To train, tune, and validate different classification techniques we will rely on the caret package.
The caret package (short for Classification And REgression Training) is a set of functions that attempt to streamline the process for creating predictive models. The package contains tools for:
+ data splitting
+ pre-processing
+ feature selection
+ model tuning using resampling
+ variable importance estimation
as well as other functionality.
The caret package currently contains about 233 different classification and regression models, including all of the well-known ones. We therefore recommend a thorough look at http://topepo.github.io/caret/index.html
In this exercise we’ll use random forest models and a separate testing dataset (1/3 of the reference data) to validate the different models. Don’t be confused that we’ll also use cross-validation: it is run on the training dataset only, and mainly serves to train the model.
Furthermore, it’s important to note that in the sections below we don’t aim to squeeze every percent of accuracy gain out of the data, but rather keep the methods simple. If interested, much more can be done with this dataset (e.g. by further tuning the models).
#create folds to split in 2/3 training & validation
set.seed(111)
folds <- createFolds(data$class,k=3)
#create training dataset
training <- data[-folds$Fold3,]
#create validation dataset
testing <- data[folds$Fold3,]
#show the number of points per class in the training and validation dataset
table(training$class)
##
## Ash Beech Chestnut Copper.beech Larch
## 45 259 45 53 72
## Oak Poplar
## 201 270
table(testing$class)
##
## Ash Beech Chestnut Copper.beech Larch
## 22 129 22 26 36
## Oak Poplar
## 100 134
#create folds to train the random forest models
folds.cv <- createMultiFolds(training$class,k=5,times=3)
control.cv <- trainControl(method = "repeatedcv", index = folds.cv)
Next we train two different random forest models: one using only the hyperspectral data and one using only the LiDAR data.
To optimize speed we use a fixed mtry (the number of variables randomly sampled as candidates at each split); we acknowledge that accuracies might improve when this parameter is optimized. Please have a look at http://machinelearningmastery.com/tune-machine-learning-algorithms-in-r/ for a good example
#Hyperspectral model
mtry <- round(sqrt(244)) #default value
tunegrid <- expand.grid(.mtry=mtry)
rfModel1 <- train(training[,1:244],
training$class,
method="rf",
trControl=control.cv,
tuneGrid=tunegrid)
#visualize model output. Accuracies are those obtained from cross validation on the training set alone
rfModel1
## Random Forest
##
## 945 samples
## 244 predictors
## 7 classes: 'Ash', 'Beech', 'Chestnut', 'Copper.beech', 'Larch', 'Oak', 'Poplar'
##
## No pre-processing
## Resampling: Cross-Validated (25 fold, repeated 25 times)
## Summary of sample sizes: 756, 757, 754, 757, 756, 756, ...
## Resampling results:
##
## Accuracy Kappa
## 0.7065702 0.6216122
##
## Tuning parameter 'mtry' was held constant at a value of 16
#LiDAR model
mtry <- round(sqrt(10))
tunegrid <- expand.grid(.mtry=mtry)
rfModel2 <- train(training[,245:254],
training$class,
method="rf",
trControl=control.cv,
tuneGrid=tunegrid)
#visualize model output. Accuracies are those obtained from cross validation on the training set alone
rfModel2
## Random Forest
##
## 945 samples
## 10 predictor
## 7 classes: 'Ash', 'Beech', 'Chestnut', 'Copper.beech', 'Larch', 'Oak', 'Poplar'
##
## No pre-processing
## Resampling: Cross-Validated (25 fold, repeated 25 times)
## Summary of sample sizes: 756, 757, 754, 757, 756, 756, ...
## Resampling results:
##
## Accuracy Kappa
## 0.5329422 0.3830449
##
## Tuning parameter 'mtry' was held constant at a value of 3
Next we calculate accuracies using our separate validation (testing) dataset.
#predict on test
pred.rfModel1 <- predict(rfModel1,testing[,-255])
pred.rfModel2 <- predict(rfModel2,testing[,-255])
#get confusion matrices
(conf.model1 <- confusionMatrix(testing$class,pred.rfModel1))
## Confusion Matrix and Statistics
##
## Reference
## Prediction Ash Beech Chestnut Copper.beech Larch Oak Poplar
## Ash 10 5 0 0 1 3 3
## Beech 1 87 1 0 2 24 14
## Chestnut 0 4 4 0 1 9 4
## Copper.beech 0 0 0 26 0 0 0
## Larch 0 0 1 0 29 1 5
## Oak 1 20 1 0 7 58 13
## Poplar 1 1 2 0 2 13 115
##
## Overall Statistics
##
## Accuracy : 0.7015
## 95% CI : (0.6578, 0.7426)
## No Information Rate : 0.3284
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.6155
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: Ash Class: Beech Class: Chestnut
## Sensitivity 0.76923 0.7436 0.444444
## Specificity 0.97368 0.8807 0.960870
## Pos Pred Value 0.45455 0.6744 0.181818
## Neg Pred Value 0.99329 0.9118 0.988814
## Prevalence 0.02772 0.2495 0.019190
## Detection Rate 0.02132 0.1855 0.008529
## Detection Prevalence 0.04691 0.2751 0.046908
## Balanced Accuracy 0.87146 0.8121 0.702657
## Class: Copper.beech Class: Larch Class: Oak
## Sensitivity 1.00000 0.69048 0.5370
## Specificity 1.00000 0.98361 0.8837
## Pos Pred Value 1.00000 0.80556 0.5800
## Neg Pred Value 1.00000 0.96998 0.8645
## Prevalence 0.05544 0.08955 0.2303
## Detection Rate 0.05544 0.06183 0.1237
## Detection Prevalence 0.05544 0.07676 0.2132
## Balanced Accuracy 1.00000 0.83704 0.7103
## Class: Poplar
## Sensitivity 0.7468
## Specificity 0.9397
## Pos Pred Value 0.8582
## Neg Pred Value 0.8836
## Prevalence 0.3284
## Detection Rate 0.2452
## Detection Prevalence 0.2857
## Balanced Accuracy 0.8432
(conf.model2 <- confusionMatrix(testing$class,pred.rfModel2))
## Confusion Matrix and Statistics
##
## Reference
## Prediction Ash Beech Chestnut Copper.beech Larch Oak Poplar
## Ash 0 8 0 0 0 7 7
## Beech 1 86 1 7 7 14 13
## Chestnut 0 4 1 0 2 12 3
## Copper.beech 0 23 0 0 0 3 0
## Larch 0 6 0 0 10 18 2
## Oak 0 19 5 0 5 56 15
## Poplar 1 8 0 0 2 16 107
##
## Overall Statistics
##
## Accuracy : 0.5544
## 95% CI : (0.5081, 0.6)
## No Information Rate : 0.3284
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.4112
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: Ash Class: Beech Class: Chestnut
## Sensitivity 0.000000 0.5584 0.142857
## Specificity 0.952891 0.8635 0.954545
## Pos Pred Value 0.000000 0.6667 0.045455
## Neg Pred Value 0.995526 0.8000 0.986577
## Prevalence 0.004264 0.3284 0.014925
## Detection Rate 0.000000 0.1834 0.002132
## Detection Prevalence 0.046908 0.2751 0.046908
## Balanced Accuracy 0.476445 0.7110 0.548701
## Class: Copper.beech Class: Larch Class: Oak
## Sensitivity 0.00000 0.38462 0.4444
## Specificity 0.94372 0.94131 0.8717
## Pos Pred Value 0.00000 0.27778 0.5600
## Neg Pred Value 0.98420 0.96305 0.8103
## Prevalence 0.01493 0.05544 0.2687
## Detection Rate 0.00000 0.02132 0.1194
## Detection Prevalence 0.05544 0.07676 0.2132
## Balanced Accuracy 0.47186 0.66296 0.6581
## Class: Poplar
## Sensitivity 0.7279
## Specificity 0.9161
## Pos Pred Value 0.7985
## Neg Pred Value 0.8806
## Prevalence 0.3134
## Detection Rate 0.2281
## Detection Prevalence 0.2857
## Balanced Accuracy 0.8220
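The overall accuracy and Cohen’s kappa reported in the confusion matrices above can be verified by hand. The sketch below does so for a small made-up 2x2 confusion matrix (rows = predicted, columns = reference, following the layout printed by confusionMatrix):

```r
# made-up confusion matrix for illustration
cm <- matrix(c(50, 10,
                5, 35), nrow = 2, byrow = TRUE)
n  <- sum(cm)
po <- sum(diag(cm)) / n                     # observed agreement = overall accuracy
pe <- sum(rowSums(cm) * colSums(cm)) / n^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)
round(c(accuracy = po, kappa = kappa), 4)   # 0.8500 0.6939
```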
We can conclude that, when each dataset is used on its own, the hyperspectral model (overall accuracy 0.70) clearly outperforms the LiDAR model (overall accuracy 0.55).
Here we stack the LiDAR and hyperspectral datasets together and use the combined dataset to train the RF model.
#build an RF model using both the hyperspectral and LiDAR explanatory variables
mtry <- round(sqrt(254))
tunegrid <- expand.grid(.mtry=mtry)
rfModel3 <- train(training[,1:254],
training$class,
method="rf",
trControl=control.cv,
tuneGrid=tunegrid)
#cross validation accuracy
rfModel3
## Random Forest
##
## 945 samples
## 254 predictors
## 7 classes: 'Ash', 'Beech', 'Chestnut', 'Copper.beech', 'Larch', 'Oak', 'Poplar'
##
## No pre-processing
## Resampling: Cross-Validated (25 fold, repeated 25 times)
## Summary of sample sizes: 756, 757, 754, 757, 756, 756, ...
## Resampling results:
##
## Accuracy Kappa
## 0.7291436 0.6504915
##
## Tuning parameter 'mtry' was held constant at a value of 16
#hold out validation accuracy
pred.rfModel3 <- predict(rfModel3,testing[,-255])
(conf.model3 <- confusionMatrix(testing$class,pred.rfModel3))
## Confusion Matrix and Statistics
##
## Reference
## Prediction Ash Beech Chestnut Copper.beech Larch Oak Poplar
## Ash 9 4 1 0 1 4 3
## Beech 1 98 1 0 2 14 13
## Chestnut 0 4 5 0 1 9 3
## Copper.beech 0 2 0 24 0 0 0
## Larch 0 0 1 0 27 1 7
## Oak 0 11 2 0 9 67 11
## Poplar 0 1 2 0 2 12 117
##
## Overall Statistics
##
## Accuracy : 0.7399
## 95% CI : (0.6977, 0.779)
## No Information Rate : 0.3284
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.6645
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: Ash Class: Beech Class: Chestnut
## Sensitivity 0.90000 0.8167 0.41667
## Specificity 0.97168 0.9112 0.96280
## Pos Pred Value 0.40909 0.7597 0.22727
## Neg Pred Value 0.99776 0.9353 0.98434
## Prevalence 0.02132 0.2559 0.02559
## Detection Rate 0.01919 0.2090 0.01066
## Detection Prevalence 0.04691 0.2751 0.04691
## Balanced Accuracy 0.93584 0.8639 0.68973
## Class: Copper.beech Class: Larch Class: Oak
## Sensitivity 1.00000 0.64286 0.6262
## Specificity 0.99551 0.97892 0.9088
## Pos Pred Value 0.92308 0.75000 0.6700
## Neg Pred Value 1.00000 0.96536 0.8916
## Prevalence 0.05117 0.08955 0.2281
## Detection Rate 0.05117 0.05757 0.1429
## Detection Prevalence 0.05544 0.07676 0.2132
## Balanced Accuracy 0.99775 0.81089 0.7675
## Class: Poplar
## Sensitivity 0.7597
## Specificity 0.9460
## Pos Pred Value 0.8731
## Neg Pred Value 0.8896
## Prevalence 0.3284
## Detection Rate 0.2495
## Detection Prevalence 0.2857
## Balanced Accuracy 0.8529
In this section we will use the fuzzy output of the two single models (i.e. the class membership probabilities) as input for a fused model.
#get probabilities
prob.rfModel1 <- predict(rfModel1, training[, -255], type="prob")
prob.rfModel2 <- predict(rfModel2, training[, -255], type="prob")
Next we build a combined model. Note that the probabilities above are in-sample predictions on the training data; since a random forest fits its training data almost perfectly, the cross-validated accuracy of the combined model will be overly optimistic, and the hold-out validation gives the more realistic estimate.
#train a model based on the probabilities from the individual models
names(prob.rfModel2) <- paste0("LIDAR_",names(prob.rfModel2))
mtry <- round(sqrt(14))
tunegrid <- expand.grid(.mtry=mtry)
rfModel.comb <- train(cbind(prob.rfModel1,prob.rfModel2),
training$class,
method="rf",
trControl=control.cv,
tuneGrid=tunegrid)
rfModel.comb
## Random Forest
##
## 945 samples
## 14 predictor
## 7 classes: 'Ash', 'Beech', 'Chestnut', 'Copper.beech', 'Larch', 'Oak', 'Poplar'
##
## No pre-processing
## Resampling: Cross-Validated (25 fold, repeated 25 times)
## Summary of sample sizes: 756, 757, 754, 757, 756, 756, ...
## Resampling results:
##
## Accuracy Kappa
## 1 1
##
## Tuning parameter 'mtry' was held constant at a value of 4
#hold out validation
testing.prob.rfModel1 <- predict(rfModel1,testing[,-255],type="prob")
testing.prob.rfModel2 <- predict(rfModel2,testing[,-255],type="prob")
testing.prob.rfModel3 <- predict(rfModel3,testing[,-255],type="prob")
names(testing.prob.rfModel2) <- paste0("LIDAR_",names(testing.prob.rfModel2))
pred.rfModel.comb <- predict(rfModel.comb,cbind(testing.prob.rfModel1,testing.prob.rfModel2))
(conf.model.comb <- confusionMatrix(testing$class,pred.rfModel.comb))
## Confusion Matrix and Statistics
##
## Reference
## Prediction Ash Beech Chestnut Copper.beech Larch Oak Poplar
## Ash 10 6 0 0 0 1 5
## Beech 8 91 2 1 2 12 13
## Chestnut 5 4 2 0 0 8 3
## Copper.beech 0 18 0 8 0 0 0
## Larch 1 5 1 0 18 10 1
## Oak 6 22 3 0 3 52 14
## Poplar 8 2 1 0 0 6 117
##
## Overall Statistics
##
## Accuracy : 0.6354
## 95% CI : (0.59, 0.679)
## No Information Rate : 0.3262
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.5265
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: Ash Class: Beech Class: Chestnut
## Sensitivity 0.26316 0.6149 0.222222
## Specificity 0.97216 0.8816 0.956522
## Pos Pred Value 0.45455 0.7054 0.090909
## Neg Pred Value 0.93736 0.8324 0.984340
## Prevalence 0.08102 0.3156 0.019190
## Detection Rate 0.02132 0.1940 0.004264
## Detection Prevalence 0.04691 0.2751 0.046908
## Balanced Accuracy 0.61766 0.7482 0.589372
## Class: Copper.beech Class: Larch Class: Oak
## Sensitivity 0.88889 0.78261 0.5843
## Specificity 0.96087 0.95964 0.8737
## Pos Pred Value 0.30769 0.50000 0.5200
## Neg Pred Value 0.99774 0.98845 0.8997
## Prevalence 0.01919 0.04904 0.1898
## Detection Rate 0.01706 0.03838 0.1109
## Detection Prevalence 0.05544 0.07676 0.2132
## Balanced Accuracy 0.92488 0.87112 0.7290
## Class: Poplar
## Sensitivity 0.7647
## Specificity 0.9462
## Pos Pred Value 0.8731
## Neg Pred Value 0.8925
## Prevalence 0.3262
## Detection Rate 0.2495
## Detection Prevalence 0.2857
## Balanced Accuracy 0.8555
Finally, we build a combined prediction based on the highest class probability: when the two single models disagree, the class with the single highest probability across both models is chosen.
#get highest probability
prob <- cbind(testing.prob.rfModel1,testing.prob.rfModel2)
names(prob)[8:14] <- names(prob)[1:7] #rename the LiDAR columns back to plain class names
class <- character(nrow(testing))
for (i in seq_len(nrow(testing))){
  if (pred.rfModel1[i]==pred.rfModel2[i]){
    #both models agree: keep the common prediction
    class[i] <- as.character(pred.rfModel1[i])
  } else {
    #models disagree: take the class with the single highest probability
    class[i] <- names(which.max(prob[i,]))
  }
}
(confusionMatrix(testing$class,factor(class,levels=levels(testing$class))))
## Confusion Matrix and Statistics
##
## Reference
## Prediction Ash Beech Chestnut Copper.beech Larch Oak Poplar
## Ash 8 7 0 0 0 3 4
## Beech 0 98 0 0 3 15 13
## Chestnut 0 5 0 0 2 11 4
## Copper.beech 0 3 0 23 0 0 0
## Larch 0 4 0 0 27 4 1
## Oak 0 20 3 0 4 59 14
## Poplar 0 1 0 0 0 10 123
##
## Overall Statistics
##
## Accuracy : 0.7207
## 95% CI : (0.6777, 0.7609)
## No Information Rate : 0.339
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.6354
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: Ash Class: Beech Class: Chestnut
## Sensitivity 1.00000 0.7101 0.000000
## Specificity 0.96963 0.9063 0.952790
## Pos Pred Value 0.36364 0.7597 0.000000
## Neg Pred Value 1.00000 0.8824 0.993289
## Prevalence 0.01706 0.2942 0.006397
## Detection Rate 0.01706 0.2090 0.000000
## Detection Prevalence 0.04691 0.2751 0.046908
## Balanced Accuracy 0.98482 0.8082 0.476395
## Class: Copper.beech Class: Larch Class: Oak
## Sensitivity 1.00000 0.75000 0.5784
## Specificity 0.99327 0.97921 0.8883
## Pos Pred Value 0.88462 0.75000 0.5900
## Neg Pred Value 1.00000 0.97921 0.8835
## Prevalence 0.04904 0.07676 0.2175
## Detection Rate 0.04904 0.05757 0.1258
## Detection Prevalence 0.05544 0.07676 0.2132
## Balanced Accuracy 0.99664 0.86461 0.7334
## Class: Poplar
## Sensitivity 0.7736
## Specificity 0.9645
## Pos Pred Value 0.9179
## Neg Pred Value 0.8925
## Prevalence 0.3390
## Detection Rate 0.2623
## Detection Prevalence 0.2857
## Balanced Accuracy 0.8691
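The decision rule above can be illustrated with two made-up probability vectors for a single pixel (hypothetical values; in the code above these columns come from the two random forest models):

```r
# hypothetical class probabilities from two single models for one pixel
p.hyper <- c(Oak = 0.60, Beech = 0.30, Larch = 0.10)
p.lidar <- c(Oak = 0.20, Beech = 0.30, Larch = 0.50)
# the models disagree (Oak vs Larch), so the single highest probability wins
p.all <- c(p.hyper, p.lidar)
names(p.all)[which.max(p.all)]   # "Oak"
```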
#to map tree species over the full study area, the fused model can be applied
#to the stacked image (uncomment to run; this may take a while)
#im.stack <- stack(APEX,CHM,HP)
#map <- predict(im.stack,rfModel3)