Optimization of SVM Classifier Using Genetic Algorithm
Paper ID: 1064-SMPR-FULL
Authors: |
Hamid Ghanbari *1, Saeid Homayouni 2
1 Kooy Daneshgah 260, University
Abstract: |
The presented work describes a methodology that employs a GA-SVM for image classification and compares it with an Artificial Neural Network. An image consisting of five classes (tree, building, road, shadow and bare land), together with the corresponding training and testing sites obtained from the ENVI software, has been used. The Support Vector Machine (SVM), presented by Vapnik (1995), was originally developed for linear two-class classification by constructing an optimal separating hyperplane with maximal margin. For training data that are not linearly separable, the SVM uses a kernel function, such as a linear, polynomial or radial basis function (RBF) kernel, to map the low-dimensional input features into a high-dimensional space. When using an SVM, the primary issue is to choose a proper kernel function and to set suitable kernel parameters. Since the RBF kernel is the most frequently used kernel in SVMs, we study only the parameter optimization of the RBF. The parameters to be optimized are the generalization factor C, which determines the trade-off between maximizing the classification rate and minimizing the training error, and the kernel parameter, which defines the nonlinear mapping from the original low-dimensional input space into a high-dimensional feature space. An alternative optimization approach is to estimate the generalization ability of the SVM using a Genetic Algorithm. The Genetic Algorithm (GA) is a stochastic, heuristic search algorithm inspired by natural evolution. During the evolution, candidate solutions are encoded as strings (called chromosomes) by some encoding method. Based on Darwin's principle of 'survival of the fittest', the optimal candidate solution is obtained after a series of iterative GA computations. In a GA-based SVM parameter optimization process, the most difficult task is to design a fitness function that produces SVM parameters that are reliable and effective for SVM models.
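The GA loop described above (chromosome encoding, selection, crossover, mutation) can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the fitness function here is a smooth surrogate peaked at C = gamma = 1 so the example runs standalone, whereas in the paper it would be the classification score of an RBF-SVM trained with the candidate (C, gamma); the bit widths, ranges and GA settings are assumptions.

```python
import math
import random

random.seed(0)

# Each chromosome encodes two SVM hyper-parameters as exponents:
# C = 2**c and gamma = 2**g, with c, g searched over [-5, 5].
BITS = 8            # bits per encoded parameter (assumption)
LO, HI = -5.0, 5.0  # exponent search range (assumption)

def decode(chrom):
    """Map a bit string to a (C, gamma) pair."""
    half = len(chrom) // 2
    vals = []
    for gene in (chrom[:half], chrom[half:]):
        x = int("".join(map(str, gene)), 2) / (2 ** BITS - 1)
        vals.append(2 ** (LO + x * (HI - LO)))
    return tuple(vals)

def fitness(chrom):
    # Surrogate fitness: in the paper this would be the accuracy of an
    # RBF-SVM trained with (C, gamma) and scored on the test set.
    C, gamma = decode(chrom)
    return math.exp(-(math.log(C) ** 2 + math.log(gamma) ** 2))

def evolve(pop_size=30, generations=40, p_mut=0.02):
    """One-point crossover, tournament selection, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(2 * BITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                              # elitism: keep two best
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)  # tournament of 3
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, 2 * BITS)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
C, gamma = decode(best)
```

With the surrogate fitness, the evolved (C, gamma) converges toward the peak at (1, 1); swapping in a real SVM evaluation changes only the body of `fitness`.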
K-fold cross-validation is a commonly used technique for assessing the generalization ability of an SVM classifier, but in this work we use the test data to define the fitness function. For each set of candidate parameters we obtain predicted labels for the testing data, indicating the class to which each pixel belongs, and compare them with the true class labels. The difference between the predicted labels and the true labels defines the fitness function. By combining the GA with the SVM, we aim at a classification of the objects that is closer to the original image. The results are then compared with a multilayer perceptron neural network classification. Artificial Neural Networks (ANNs) were initially developed according to the elementary principle of operation of the (human) neural system. The choice of network type depends on the problem to be solved; the backpropagation gradient network is the most frequently used. This network consists of three or more neuron layers: one input layer, one output layer and at least one hidden layer. In most cases, a network with only one hidden layer is used to limit the computation time. This paper proposes a three-layer backpropagation network for classification of the image with the same training sites. Finally, using the kappa coefficient, the overall accuracy index and the testing data, we show that the performance of the neural network is better than that of the GA-SVM classifier.
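The overall accuracy and kappa coefficient used for the comparison are both computed from a confusion matrix of true versus predicted classes. A minimal sketch (the matrix values below are purely illustrative, not the paper's results):

```python
def accuracy_metrics(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(len(confusion)))
    overall = diag / n                      # proportion correctly classified
    # Chance agreement expected from the row and column marginals.
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(col) for col in zip(*confusion)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, kappa

# Illustrative 3-class matrix (e.g. tree / building / road):
cm = [[50, 2, 3],
      [4, 40, 6],
      [5, 5, 35]]
oa, k = accuracy_metrics(cm)  # oa = 125/150, kappa is chance-corrected
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy when comparing the two classifiers.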
Keywords: |
Support Vector Machine (SVM), Genetic Algorithm (GA), Artificial Neural Network (ANN), classification
Status: Paper Accepted (Poster Presentation)