Notes On Support Vector Machines

Fernando Mira Da Silva
Fernando.Silva@inesc.pt
INESC
November 1998
                                            Abstract
    This report describes an empirical study of Support Vector Machines (SVM) which aims to
review the foundations and properties of SVM based on the graphical analysis of simple binary
classification tasks.
    SVM theory is briefly reviewed by discussing possible learning rules for linearly separable
problems and the role of the classification margin in generalization. This leads to the formulation
of the SVM learning procedure for linearly separable problems. It is then shown how the SVM
approach can be extended to non-separable problems by relaxing the strict separability constraint
of the original formulation.
    The application of SVM to nonlinear problems by feature expansion techniques is then briefly
reviewed. The effect of the dimension of the feature space on the SVM solution is considered
based on a simple bi-dimensional problem. Finally, it is discussed how explicit feature expansion
can be avoided by the use of kernels.
    In the last part of this document, practical problems regarding SVM optimization are briefly
discussed and some pointers and references regarding this issue are supplied.
Chapter 1

Introduction
The interest in supervised, automatic learning systems for pattern classification was greatly enhanced by the advances in connectionist and other non-symbolic learning paradigms. In supervised learning, the system is presented with examples of input patterns and the corresponding desired class. During the learning phase, the system is expected to develop a classification model which is able to correctly classify arbitrary input data patterns.
    Discussions of generalization and learning often involve the concepts of computational feasibility and computational capacity. For a system to perform a given classification task, its model structure must be able to implement it. This means that the classification process must be feasible for the computational model. This requirement may imply the use of a fairly complex computational model. But, on the other hand, one must consider the computational capacity of the model. The computational capacity of a given system may be formally quantified by the Vapnik-Chervonenkis (VC) dimension (Vapnik, 1995). Broadly speaking, the VC dimension of a binary classification system is the maximum number of training samples that can be learned by the system without error for all possible binary labellings of those samples (Haykin, 1994). The introduction of the VC dimension makes it possible to derive an upper bound for the misclassification rate in the application domain (risk) given the misclassification rate observed on the training set (empirical risk). While the theoretical bounds which result from the VC dimension are often considered too conservative (see for instance (Baum and Haussler, 1989; Baum, 1990; Cohn and Tesauro, 1991; Ji and Psaltis, 1992)), the VC dimension remains an important tool for the statistical analysis of learning and generalization. A too large VC dimension may lead to an overfitted system, which tends to work as a memory of the training samples, but which develops a poor generalization behavior.
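    For reference, the bound alluded to above can be written explicitly. The following is the standard form given by Vapnik (1995), quoted here as a complement to the text rather than taken from the original report: with probability $1-\eta$, a classifier with VC dimension $h$ trained on $N$ samples satisfies
\[
  R \;\le\; R_{\mathrm{emp}}
  + \sqrt{\frac{h\left(\ln\frac{2N}{h}+1\right) - \ln\frac{\eta}{4}}{N}}
\]
where $R$ is the expected risk and $R_{\mathrm{emp}}$ the empirical risk. The second term grows with $h$, which is the formal reason why controlling the capacity of the model matters for generalization.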
            In practice, good generalization often requires a trade-off between the computational feasibil-
       ity of the model and its computational capacity. This is often achieved by including regularization
techniques in the learning procedure. Several neural network learning models, such as weight sharing (Le Cun et al., 1990) or weight decay (Krogh and Hertz, 1992; Guyon et al., 1992), attempt to reduce the VC dimension without affecting the computational feasibility of the system.
           Recently, support vector machines (SVM) received a great deal of attention from the machine
       learning community. Possibly the most attractive feature of SVM stems from the attempt to derive
       a feasible classifier with a minimum VC-dimension for a given classification task. This report
       describes an empirical study of Support Vector Machines which aims to review the foundations
       and properties of SVM through the graphical analysis of simple binary classification tasks.
    This report is structured as follows. In chapter 1 the basic principles of linear classification systems are introduced. Chapter 2 presents the optimality principle of support vector machines for linear classification tasks and its application to separable and non-separable problems. Chapter 3 discusses the extension of SVM to non-linear classification problems by feature expansion. Chapter 4 considers the use of kernels in non-linear SVMs. In chapter 5 some details regarding the implementation of SVM are considered. Finally, in chapter 6, conclusions are presented.
Consider a linearly separable binary classification problem where the training set is defined by the pairs
\[
  T = \{ (\mathbf{x}_i, y_i), \quad i = 1, \dots, N \} .   \tag{1.1}
\]
Each $\mathbf{x}_i \in \mathbb{R}^n$ is an input pattern and $y_i \in \{-1, +1\}$ identifies the target class for input pattern $\mathbf{x}_i$.
    By definition, stating that the problem is linearly separable is equivalent to stating that there is a hyper-plane $\Pi$ in $\mathbb{R}^n$ splitting the input domain such that patterns of the same class lay in the same region of the input space. The solution of the classification problem reduces to finding any hyper-plane which has this property. Without further restrictions, there is in general an infinity of solutions to this problem. Examples of several possible solutions for a simple bi-dimensional case are shown in figure 1.1.
     If the domain of the classification problem is the training set alone, then any hyper-plane in
figure 1.1 provides a correct solution to the classification problem. However, further to the correct
classification of all patterns in the training set 1 , a good classification system is expected to provide
correct classes for previously unseen data patterns. Therefore, to achieve optimal generalization,
the classification problem must be restated as follows: among all possible solutions, choose the
one which hopefully provides the best generalization ability. Clearly, this is an ill-posed problem,
since it is impossible to guess the generalization behavior without making strong assumptions on
the underlying data distribution.
    A linear classification system halves the input space with a hyper-plane $\Pi$ defined by the equation
\[
  \mathbf{w} \cdot \mathbf{x} + b = 0 .   \tag{1.2}
\]
   1
    In the more general case, the optimal system that minimizes the classification error in the application domain may
even misclassify one or more patterns in the training set. However, we assume here that the problem is strictly linearly
separable and that there is no noise in the training set samples.
The original perceptron (Rosenblatt, 1958), the adaline (Widrow and Hoff, 1960) and the neuron unit with sigmoidal activation function (Rumelhart et al., 1986) are among the simplest binary classification systems with learning ability. The output of a neuron unit with an activation function $\sigma$ for an input pattern $\mathbf{x}$ is given by
\[
  y(\mathbf{x}) = \sigma\big( \mathbf{w} \cdot \mathbf{x} + b \big)   \tag{1.3}
\]
or, in terms of components,
\[
  y(\mathbf{x}) = \sigma\Big( \sum_{j=1}^{n} w_j x_j + b \Big)   \tag{1.4}
\]
where $w_j$ is the $j$-th component of $\mathbf{w}$ and $\mathbf{x}$ is a vector of components $x_j$. The function $\sigma$ is usually a monotonic sigmoidal type function. According to (1.3), the class of input pattern $\mathbf{x}$ is obtained from $y(\mathbf{x})$ by
\[
  c(\mathbf{x}) =
  \begin{cases}
    +1 & \text{if } y(\mathbf{x}) \ge \theta \\
    -1 & \text{otherwise}
  \end{cases}   \tag{1.5}
\]
where $\theta$ is a pre-specified threshold. In neural network terminology, the parameters $\mathbf{w}$ are the weights and $b$ is the bias input.
                                                                                        
    The learning phase of the sigmoidal unit reduces to finding the parameters $(\mathbf{w}, b)$ that minimize the total output error on the training set, given by
\[
  E = \sum_{i=1}^{N} d\big( y(\mathbf{x}_i), y_i \big)   \tag{1.6}
\]
where $d(\cdot,\cdot)$ is an arbitrary distortion measure. If $d$ refers to the quadratic error and $\sigma$ is an odd function with values in $[-1, +1]$, then the distortion measure can be simply defined as $d\big(y(\mathbf{x}_i), y_i\big) = \big(y_i - y(\mathbf{x}_i)\big)^2$. Since $E$ depends on all training samples, the parameters $(\mathbf{w}, b)$ found by the minimization process – and hence $\Pi$ – also depend on all training samples. This is a reasonable result, since $\Pi$ must correctly classify all training patterns.
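    A minimal sketch of this learning rule (an illustration added here, not code from the report), using a $\tanh$ unit trained by gradient descent on the quadratic error (1.6); the learning rate and iteration count are arbitrary choices:

    import numpy as np

    def train_sigmoidal_unit(X, y, lr=0.1, epochs=500):
        """Fit y(x) = tanh(w.x + b) by gradient descent on sum_i (y_i - y(x_i))^2."""
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        for _ in range(epochs):
            out = np.tanh(X @ w + b)             # y(x_i) for every training pattern
            err = y - out                        # residuals of the quadratic error
            grad = -2.0 * err * (1.0 - out**2)   # dE/d(net input), using tanh' = 1 - tanh^2
            w -= lr * (grad @ X) / n_samples
            b -= lr * grad.mean()
        return w, b

    # toy linearly separable data: class +1 above the line x1 + x2 = 0, class -1 below
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.where(X.sum(axis=1) > 0, 1.0, -1.0)
    w, b = train_sigmoidal_unit(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))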
    However, it is also reasonable to argue that, provided that all training patterns are correctly classified, the exact location of $\Pi$ should only depend on the patterns laying near the decision boundary. In fact, these patterns are the most critically relevant for the definition of the exact boundary of the decision region, and it is natural to expect that generalization is, above all, dependent on this boundary. Accordingly, one may expect that correctly classified patterns far away from $\Pi$ should be almost irrelevant regarding the generalization behavior of the classification system.
    The hypothesis that the optimal location of $\Pi$ must only depend on the data patterns near the decision boundary is a key principle of support vector machines. Under this assumption, the learning process must be able to identify this set of patterns – the support vectors – and it must be able to express the optimal parameters $(\mathbf{w}, b)$ of the classifier only as a function of these samples.
Chapter 2

Optimal linear classification

2.1 The separable case
2.1.1 Formulation
In its simplest form, the learning criterion of support vector machines requires that all patterns in the training set are correctly classified by the separating hyper-plane $\Pi$, that is,
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) > 0, \qquad i = 1, \dots, N .   \tag{2.1}
\]
    Let us define the classification margin $d_{\max}$ as the minimum distance between $\Pi$ and the closest pattern to $\Pi$. This means
\[
  d_{\max} = \min_i \, d(\mathbf{x}_i, \Pi)   \tag{2.2}
\]
where $d(\mathbf{x}_i, \Pi)$ is the Euclidean distance from pattern $\mathbf{x}_i$ to $\Pi$. This distance is given by
\[
  d(\mathbf{x}_i, \Pi) = \frac{ | \mathbf{w} \cdot \mathbf{x}_i + b | }{ \| \mathbf{w} \| } .   \tag{2.3}
\]
    The classification margin can be seen as a measure of how well the hyper-plane $\Pi$ performs in the task of separating the two binary classes. In fact, a small value of $d_{\max}$ means that $\Pi$ is close to one or more samples of one or both classes and, therefore, there is a reasonably high probability that samples not included in the training set may fall on the wrong side of the classification region. On the other hand, if $d_{\max}$ is large, such probability is significantly reduced.
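    As a small illustration (not part of the original report), the distance (2.3) and the margin (2.2) can be computed directly; the hyper-plane and the points below are arbitrary examples:

    import numpy as np

    def distance_to_hyperplane(x, w, b):
        """Euclidean distance from pattern x to the hyper-plane w.x + b = 0 (eq. 2.3)."""
        return abs(np.dot(w, x) + b) / np.linalg.norm(w)

    w = np.array([2.0, 1.0])
    b = -1.0
    X = np.array([[1.0, 1.0], [0.0, 0.5], [2.0, -1.0]])

    distances = [distance_to_hyperplane(x, w, b) for x in X]
    d_max = min(distances)    # classification margin (eq. 2.2)
    print(distances, d_max)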
Figure 2.1: Optimal classification hyper-plane. Note that the exact location of the hyper-plane only depends on two support vectors (both at distance $d_{\max}$ from $\Pi$).
    The learning criterion of support vector machines reduces to finding the set of parameters $(\mathbf{w}, b)$ that maximizes the classification margin under the constraints defined by (2.1). The optimal classification hyper-plane according to this criterion for a simple bi-dimensional example is depicted in figure 2.1. Note that, as discussed before, the exact location of $\Pi$ is only dependent on the support vectors closest to the hyper-plane (in this example, two support vectors). One of the relevant features of this classification criterion is that, under certain hypotheses, it can be related with the principle of structural risk minimization (Vapnik, 1992; Guyon et al., 1992), therefore providing theoretical grounds for the justification of the good generalization ability of these systems (Vapnik, 1995; Gunn, 1997; Burges, 1986).
    Note that there is one degree of freedom in the definition of $\Pi$ by the condition (1.2). In fact, all sets of parameters $(\lambda\mathbf{w}, \lambda b)$ with $\lambda > 0$ define the same hyper-plane and result in the same classification. This degree of freedom can be removed by requiring that the training patterns closest to $\Pi$ satisfy
\[
  \min_i \, | \mathbf{w} \cdot \mathbf{x}_i + b | = 1 .   \tag{2.5}
\]
Combined with the correct classification condition (2.1), this normalization leads to the set of constraints
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) \ge 1, \qquad i = 1, \dots, N .   \tag{2.6}
\]
Note, once more, that once condition (2.1) is satisfied for an arbitrary set of parameters $(\mathbf{w}, b)$, it is possible to choose a $\lambda$ such that $(\lambda\mathbf{w}, \lambda b)$ also verify (2.6).
    Let us consider again the classification margin. From (2.3) and (2.5) it results that
\[
  d_{\max} = \frac{1}{\| \mathbf{w} \|} .   \tag{2.7}
\]
    With this condition, the optimization problem can be stated as follows: maximize
\[
  \frac{1}{\| \mathbf{w} \|}   \tag{2.8}
\]
under the constraints
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) \ge 1, \qquad i = 1, \dots, N .   \tag{2.9}
\]
If there are $N$ patterns in the training set, the optimization will have $N$ constraints. Note that $b$, while absent from (2.8), is implicitly defined by the constraints (2.9).
    In order to avoid the square root associated to $\|\mathbf{w}\|$ and to simplify the overall formulation, instead of the maximization of $1/\|\mathbf{w}\|$ it is usually considered the problem of the minimization of
\[
  J(\mathbf{w}) = \frac{1}{2} \| \mathbf{w} \|^2   \tag{2.10}
\]
with respect to $\mathbf{w}$ and $b$, subject to the constraints
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) - 1 \ge 0, \qquad i = 1, \dots, N .   \tag{2.11}
\]
    Under this formulation, the optimization problem is reduced to finding the minimization, with respect to $\mathbf{w}$ and $b$, of the Lagrangian
\[
  L(\mathbf{w}, b, \boldsymbol{\alpha}) = \frac{1}{2} \| \mathbf{w} \|^2
  - \sum_{i=1}^{N} \alpha_i \big[ y_i ( \mathbf{w} \cdot \mathbf{x}_i + b ) - 1 \big]   \tag{2.12}
\]
where the $\alpha_i \ge 0$ are the Lagrange multipliers associated with the constraints (2.11). The solution is found at a saddle point of the Lagrangian: a minimum with respect to $(\mathbf{w}, b)$ and a maximum with respect to the Lagrange multipliers. This is a convex quadratic programming problem, since the objective function is convex and each constraint itself defines a convex set.
    While the above optimization problem can be directly solved using any standard optimization package, it requires the explicit computation of $\mathbf{w}$. While this can be considered a quite natural requirement at this stage, avoiding the explicit computation of $\mathbf{w}$ is a key requirement for the application of kernel functions to SVM (this subject will be discussed in further detail in chapter 4). In this problem, the explicit computation of $\mathbf{w}$ can be avoided using the dual formulation of Lagrangian theory (Fletcher, 1987). As a side result, this dual formulation will also lead to a simpler optimization problem.
    Classical Lagrangian duality enables one to replace a given optimization problem by its dual formulation. In this case, Lagrangian duality states that the minimum of $J$ that is found solving the primal problem
\[
  \min_{\mathbf{w}, b} \; \max_{\boldsymbol{\alpha} \ge 0} \; L(\mathbf{w}, b, \boldsymbol{\alpha})   \tag{2.13}
\]
is also obtained by solving the dual problem
\[
  \max_{\boldsymbol{\alpha} \ge 0} \; \min_{\mathbf{w}, b} \; L(\mathbf{w}, b, \boldsymbol{\alpha}) .   \tag{2.14}
\]
This particular dual formulation is called the Wolfe dual (Fletcher, 1987)$^1$.
    Solving (2.14) implies finding the points where the derivatives of $L$ with respect to $\mathbf{w}$ and $b$ vanish. Let us consider the derivative with respect to $\mathbf{w}$ first. From (2.12) it results
\[
  \frac{\partial L}{\partial \mathbf{w}} = \mathbf{w} - \sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i .   \tag{2.15}
\]
Hence, at the solution,
\[
  \mathbf{w} = \sum_{i=1}^{N} \alpha_i y_i \mathbf{x}_i .   \tag{2.16}
\]
   $^1$ In order to avoid confusion between the primal and dual problems, in the optimization literature the Lagrangian is sometimes denoted by $L_P$ or $L_D$ according to the problem being solved. This notation is not used in this document.
    To compute the derivative with respect to $b$, it is convenient to expand the Lagrangian (2.12) as
\[
  L = \frac{1}{2}\, \mathbf{w} \cdot \mathbf{w}
      - \sum_{i=1}^{N} \alpha_i y_i \, \mathbf{w} \cdot \mathbf{x}_i
      - b \sum_{i=1}^{N} \alpha_i y_i
      + \sum_{i=1}^{N} \alpha_i .   \tag{2.17}
\]
Hence,
\[
  \frac{\partial L}{\partial b} = - \sum_{i=1}^{N} \alpha_i y_i .   \tag{2.18}
\]
Therefore, at the solution,
\[
  \sum_{i=1}^{N} \alpha_i y_i = 0 .   \tag{2.19}
\]
Replacing (2.16) and (2.19) in (2.17), the term in $b$ vanishes and the Lagrangian reduces to
\[
  L(\boldsymbol{\alpha}) = - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N}
      \alpha_i \alpha_j y_i y_j \, ( \mathbf{x}_i \cdot \mathbf{x}_j )
      + \sum_{i=1}^{N} \alpha_i .   \tag{2.20}
\]
Note that the first term in (2.20) is a quadratic form in the coefficients $\alpha_i$. Defining the vectors
\[
  \boldsymbol{\alpha} = ( \alpha_1, \dots, \alpha_N )^T , \qquad
  \mathbf{u} = ( 1, \dots, 1 )^T   \tag{2.21}
\]
the Lagrangian can be written in matrix form as
\[
  L(\boldsymbol{\alpha}) = - \frac{1}{2}\, \boldsymbol{\alpha}^T H \boldsymbol{\alpha}
      + \mathbf{u}^T \boldsymbol{\alpha}   \tag{2.22}
\]
where $H$ is a $N \times N$ matrix whose elements $H_{ij}$ are given by
\[
  H_{ij} = y_i y_j \, ( \mathbf{x}_i \cdot \mathbf{x}_j ) .   \tag{2.24}
\]
    The dual formulation enables one to restate the optimization problem in the following simplified form: minimize
\[
  W(\boldsymbol{\alpha}) = \frac{1}{2}\, \boldsymbol{\alpha}^T H \boldsymbol{\alpha}
      - \mathbf{u}^T \boldsymbol{\alpha}   \tag{2.25}
\]
subject to
\[
  \sum_{i=1}^{N} \alpha_i y_i = 0   \tag{2.26}
\]
\[
  \alpha_i \ge 0, \qquad i = 1, \dots, N .   \tag{2.27}
\]
    Note that $b$ does not appear explicitly in the dual formulation and, moreover, the input patterns only appear in the optimization functional through dot products of the form $\mathbf{x}_i \cdot \mathbf{x}_j$. As it will be seen in chapter 4, this property is essential for the use of kernels in support vector machines.
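    As an illustration of the dual problem (2.25)–(2.27), the following sketch (not part of the original report) solves it numerically for a small linearly separable data set and then recovers $\mathbf{w}$ and $b$ as described in the next section (equations (2.32) and (2.33)). It uses the general-purpose SLSQP solver from SciPy, an arbitrary choice that is sufficient for a toy example:

    import numpy as np
    from scipy.optimize import minimize

    def train_linear_svm(X, y):
        """Solve the dual problem (2.25)-(2.27) for a separable data set."""
        N = X.shape[0]
        H = (y[:, None] * y[None, :]) * (X @ X.T)   # H_ij = y_i y_j (x_i . x_j), eq. (2.24)

        objective = lambda a: 0.5 * a @ H @ a - a.sum()       # W(alpha), eq. (2.25)
        gradient = lambda a: H @ a - np.ones(N)
        constraints = {"type": "eq", "fun": lambda a: a @ y}  # sum_i alpha_i y_i = 0, eq. (2.26)
        bounds = [(0.0, None)] * N                            # alpha_i >= 0, eq. (2.27)

        res = minimize(objective, np.zeros(N), jac=gradient,
                       bounds=bounds, constraints=constraints, method="SLSQP")
        alpha = res.x
        sv = alpha > 1e-6                                     # support vectors: alpha_i > 0
        w = (alpha[sv] * y[sv]) @ X[sv]                       # eq. (2.32)
        b = np.mean(y[sv] - X[sv] @ w)                        # averaged form of eq. (2.33)
        return w, b, alpha

    # toy separable problem: two well-separated gaussian clusters
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal([-2, -2], 0.5, (20, 2)),
                   rng.normal([+2, +2], 0.5, (20, 2))])
    y = np.hstack([-np.ones(20), np.ones(20)])
    w, b, alpha = train_linear_svm(X, y)
    print("support vectors:", int((alpha > 1e-6).sum()))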
2.1.2 Analysis
For the analysis of the solutions found by the optimization procedure it is convenient to make use of the Kuhn-Tucker conditions, which are observed at the solution of any convex optimization problem.
    Instead of recalling the whole set of Kuhn-Tucker conditions, we will only consider a particularly relevant one for the analysis of SVM. This condition states that, for constraints of the form $g_i \ge 0$ with associated multipliers $\alpha_i$, one must have $\alpha_i g_i = 0$ at the solution. Note that this is a quite natural condition. In fact, it just means that either $g_i > 0$ at the solution, and in this case the constraint is irrelevant for the specific solution found, and hence $\alpha_i = 0$, or, alternatively, $g_i = 0$ and, in this case, the constraint imposes a bound on the solution that implies the intervention of a non null multiplier $\alpha_i$.
    In the SVM case, the Kuhn-Tucker condition means that, at the solution, one must have
\[
  \alpha_i \big[ y_i ( \mathbf{w} \cdot \mathbf{x}_i + b ) - 1 \big] = 0, \qquad i = 1, \dots, N .   \tag{2.29}
\]
    After the optimization, $\alpha_i = 0$ means that the corresponding constraint can be eliminated from the Lagrangian without affecting the final solution. This means that if $\alpha_i = 0$ then pattern $\mathbf{x}_i$ is irrelevant for the definition of $\Pi$. Therefore, it is clear that only those patterns for which $\alpha_i \ne 0$ contribute to the definition of $\Pi$$^2$. The patterns in the set
\[
  S = \{ \mathbf{x}_i : \alpha_i > 0 \}   \tag{2.30}
\]
are the support vectors of the classifier. For all support vectors, condition (2.29) implies that
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) = 1   \tag{2.31}
\]
and, therefore, all support vectors correspond to the minimum classification margin and lay on the two hyper-planes, symmetrical to the decision boundary, that define the optimal classification margin.

   $^2$ Recall that since $\alpha_i \ge 0$, $\alpha_i \ne 0$ is equivalent to $\alpha_i > 0$.
                                                                                                
    In the dual formulation, only the Lagrange multipliers $\alpha_i$ of the primal problem are computed. However, the classifier parameters are $\mathbf{w}$ and $b$. But these can be derived from (2.16), resulting in
\[
  \mathbf{w} = \sum_{\mathbf{x}_i \in S} \alpha_i y_i \mathbf{x}_i .   \tag{2.32}
\]
    On the other hand, $b$ can be computed from any support vector by applying equation (2.29). In practice, it is usually better to compute $b$ from the average of the results obtained from all support vectors. If $N_S$ is the number of support vectors, and denoting by $\mathbf{x}_{(s)}$ and $y_{(s)}$ the $s$-th support vector and the corresponding class, one may compute $b$ from
\[
  b = \frac{1}{N_S} \sum_{s=1}^{N_S}
      \Big( y_{(s)} - \sum_{r=1}^{N_S} \alpha_{(r)} y_{(r)} \, \mathbf{x}_{(r)} \cdot \mathbf{x}_{(s)} \Big)   \tag{2.33}
\]
where $\alpha_{(r)}$ denotes the multiplier for the $r$-th support vector.
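    As a small aside (an illustration added here, not part of the report), the resulting classifier can also be evaluated directly from the support vectors and their multipliers, without forming $\mathbf{w}$ explicitly; this is the form that generalizes to kernels in chapter 4:

    import numpy as np

    def svm_decision(x, sv_x, sv_y, sv_alpha, b):
        """f(x) = sum_s alpha_s y_s (x_s . x) + b, using only the support vectors."""
        return np.sum(sv_alpha * sv_y * (sv_x @ x)) + b

    def svm_classify(x, sv_x, sv_y, sv_alpha, b):
        """Assign class +1 or -1 according to the sign of the decision function."""
        return 1 if svm_decision(x, sv_x, sv_y, sv_alpha, b) >= 0.0 else -1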
    A first example is shown in figure 2.2, which depicts the binary classification region found by a SVM in a simple bi-dimensional distribution. Note that only 2 support vectors are used in this case. The classification region found by a sigmoidal unit for the same input distribution is depicted in figure 2.3.
Figure 2.2: Binary classification region found by a SVM in a simple bi-dimensional distribution.

Figure 2.3: Binary classification region found by a simple sigmoidal unit in the distribution of fig. 2.2.

Figure 2.4: Binary classification region found by a SVM.

Figure 2.5: Binary classification region found by a sigmoidal unit in the same distribution of fig. 2.4.
    A second example, where the structure of the distribution leads to three support vectors, is depicted in figure 2.4. Note that, in the linearly separable case, if there is some degree of randomness in the input distribution the SVM is almost always defined by two or three support vectors. The solution found by a sigmoidal unit for the same input distribution is depicted in figure 2.5.
2.2 The non separable case

The above analysis assumes that all training samples can be correctly classified by a single decision hyper-plane. However, this is seldom the case. Even assuming that the structure and nature of the problem is purely linear, it is necessary to take into account that training samples often result from observations of (noisy) real world data, yielding training sets often populated by misclassified, ambiguous or outlier samples. In these cases, the constrained optimization problem outlined before does not converge, since it is not possible to satisfy the classification constraints (equation (2.9)). In such cases, the optimization process will result in an increasing and unbounded sequence of Lagrange multipliers for those samples that cannot be correctly classified.
    A simple solution to this problem was proposed in (Cortes and Vapnik, 1995). The idea is to soften the constraints defined by (2.9) so that some patterns are allowed to lay inside the classification margin or even on the wrong side of the classification boundary. This is done by including in the classification constraints slack variables which create room for a misclassification margin. More specifically, for each pattern $\mathbf{x}_i$ in the training set a slack variable $\xi_i$ is introduced and the constraints (2.9) are re-written as
\[
  y_i \big( \mathbf{w} \cdot \mathbf{x}_i + b \big) \ge 1 - \xi_i, \qquad i = 1, \dots, N   \tag{2.35}
\]
\[
  \xi_i \ge 0, \qquad i = 1, \dots, N .   \tag{2.36}
\]
                                                                  
    Of course, the optimization problem must now provide some way of computing also the new parameters $\xi_i$. Desirably, one would like all $\xi_i$ to vanish or, if this is not possible, that their values get as small as possible. One way to accomplish this is to include in the objective function a penalty term which weights and minimizes the contribution of the $\xi_i$ parameters. In these conditions, the objective function defined by (2.10) is rewritten as
\[
  J(\mathbf{w}, \boldsymbol{\xi}) = \frac{1}{2} \| \mathbf{w} \|^2 + C \sum_{i=1}^{N} \xi_i   \tag{2.37}
\]
where $C$ is a positive constant that weights the penalty term. Now the optimization problem must be re-stated as follows: minimize (2.37) subject to constraints (2.35) and (2.36).
As before, the constrained minimization of (2.37) can be performed through the Lagrangian
\[
  L(\mathbf{w}, b, \boldsymbol{\xi}, \boldsymbol{\alpha}, \boldsymbol{\mu})
  = \frac{1}{2} \| \mathbf{w} \|^2 + C \sum_{i=1}^{N} \xi_i
  - \sum_{i=1}^{N} \alpha_i \big[ y_i ( \mathbf{w} \cdot \mathbf{x}_i + b ) - 1 + \xi_i \big]
  - \sum_{i=1}^{N} \mu_i \xi_i   \tag{2.38}
\]
where the $\alpha_i \ge 0$ and $\mu_i \ge 0$ are Lagrange multipliers, the $\mu_i$ being associated with the constraints (2.36).
    At this point, we may restate the formulation of the primal and dual problems. The primal problem corresponds to the solution of
\[
  \min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \max_{\boldsymbol{\alpha} \ge 0, \, \boldsymbol{\mu} \ge 0} \;
  L(\mathbf{w}, b, \boldsymbol{\xi}, \boldsymbol{\alpha}, \boldsymbol{\mu}) .   \tag{2.39}
\]
                                                                                                          
    The partial derivatives of $L$ with respect to $\mathbf{w}$, $b$ and $\xi_i$ can be easily computed. From (2.38), it becomes clear that the partial derivatives with respect to $\mathbf{w}$ and $b$ are the same as in the separable case, therefore also providing conditions (2.16) and (2.19). It remains the derivative of $L$ with respect to $\xi_i$. This derivative can be computed as
\[
  \frac{\partial L}{\partial \xi_i} = C - \alpha_i - \mu_i   \tag{2.41}
\]
and, therefore, at the solution,
\[
  \mu_i = C - \alpha_i .   \tag{2.42}
\]
Replacement of this result, (2.16) and (2.19) in (2.38) makes all terms in $\xi_i$ vanish and results again in exactly the same objective function defined by (2.25). But one additional constraint must be taken into account:
the multipliers $\mu_i$ do not appear in the simplified objective function since they are implicitly defined by (2.42); however, since one must have $\mu_i \ge 0$, this implies that
\[
  0 \le \alpha_i \le C, \qquad i = 1, \dots, N .   \tag{2.43}
\]
    In synthesis, the optimization problem in the non separable case can now be restated as follows: minimize
\[
  W(\boldsymbol{\alpha}) = \frac{1}{2}\, \boldsymbol{\alpha}^T H \boldsymbol{\alpha}
      - \mathbf{u}^T \boldsymbol{\alpha}   \tag{2.44}
\]
subject to
\[
  \sum_{i=1}^{N} \alpha_i y_i = 0   \tag{2.45}
\]
\[
  0 \le \alpha_i \le C, \qquad i = 1, \dots, N .   \tag{2.46}
\]
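    In computational terms, the only difference with respect to the separable dual is the upper bound $C$ on the multipliers. Continuing the earlier sketch (again an illustration with SciPy, not the optimizer used in the report), the change amounts to the bounds passed to the solver:

    import numpy as np
    from scipy.optimize import minimize

    def train_soft_margin_svm(X, y, C=1.0):
        """Same dual as before, but with the box constraints (2.46): 0 <= alpha_i <= C."""
        N = X.shape[0]
        H = (y[:, None] * y[None, :]) * (X @ X.T)               # eq. (2.24)
        res = minimize(lambda a: 0.5 * a @ H @ a - a.sum(),     # eq. (2.44)
                       np.zeros(N),
                       jac=lambda a: H @ a - np.ones(N),
                       bounds=[(0.0, C)] * N,                   # the only change w.r.t. the separable case
                       constraints={"type": "eq", "fun": lambda a: a @ y},  # eq. (2.45)
                       method="SLSQP")
        return res.x                                            # the multipliers alpha_i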
2.2.3 Analysis
The analysis of the Kuhn-Tucker conditions in the non-separable case provides some further insight concerning the role and effect of the $\xi_i$ parameters in the optimization procedure. Further to the conditions for the $\alpha_i$ multipliers, there are now the conditions that relate the constraints $\xi_i \ge 0$ and the multipliers $\mu_i$. At the solution, one must have $\mu_i \xi_i = 0$. This means that one of two cases may occur:

    If $\xi_i = 0$, the sample $\mathbf{x}_i$ satisfies the margin constraint, that is, $y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1$. If, furthermore, these samples are far from the border, one will have $\alpha_i = 0$.

    If $\xi_i > 0$, then $\mu_i = 0$. This means that the optimization process needed to make use of the slack margin $\xi_i$. In this case, the sample $\mathbf{x}_i$ is either inside the classification margin (if $0 < \xi_i \le 1$) or misclassified (if $\xi_i > 1$). The Kuhn-Tucker condition imposes that $\mu_i = 0$ and, therefore, from (2.42), that $\alpha_i = C$. This means that a sample for which $\alpha_i$ reaches the upper bound $C$ in the optimization process corresponds to a misclassified sample or, at least, to an out of margin sample (in the sense that $y_i(\mathbf{w}\cdot\mathbf{x}_i + b) < 1$). Note that all samples in these conditions will correspond to support vectors of the classifier.
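    This distinction between "free" support vectors ($0 < \alpha_i < C$, exactly on the margin) and "bound" support vectors ($\alpha_i = C$, in-margin or misclassified) can be made concrete in a few lines (illustrative only; the tolerance is an arbitrary choice):

    import numpy as np

    def split_support_vectors(alpha, C, tol=1e-6):
        """Separate free support vectors (on the margin) from bound ones (alpha_i = C)."""
        support = alpha > tol                   # alpha_i > 0: support vectors
        bound = support & (alpha > C - tol)     # alpha_i = C: in-margin or misclassified samples
        free = support & ~bound                 # 0 < alpha_i < C: samples exactly on the margin
        return np.where(free)[0], np.where(bound)[0]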
    Let us consider again the problem of computing the classifier parameters $\mathbf{w}$ and $b$. As before, $\mathbf{w}$ can be computed from the support vectors by (2.32). However, for the computation of $b$ the Kuhn-Tucker conditions must be revisited. Note that, with the introduction of the slack variables, the condition (2.29) at the solution is now expressed as
\[
  \alpha_i \big[ y_i ( \mathbf{w} \cdot \mathbf{x}_i + b ) - 1 + \xi_i \big] = 0 .
\]
Since the slack variables are not explicitly computed, $b$ must now be computed from any support vector such that $\xi_i = 0$, that is, such that $\alpha_i < C$. As before, it is usually safer to use the average that results from all support vectors that verify such conditions. Therefore, in the non separable case, the average defined by (2.33) must exclude all support vectors for which $\alpha_i = C$.
    It remains the problem of choosing a good value of $C$ for a given classification problem. Unfortunately, the optimal value of $C$ is strongly dependent on the problem under analysis and, specifically, on the noise, ambiguities and/or outliers that occur in the training set. In practice, $C$ is usually fixed by a trial and error procedure.
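    One common, systematic form of this trial and error procedure (not described in the report, and shown here with scikit-learn purely for illustration) is a cross-validated grid search over $C$:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def choose_C(X, y, candidates=(0.01, 0.1, 1.0, 10.0, 100.0), folds=5):
        """Pick the value of C with the best cross-validated accuracy for a linear SVM."""
        scores = {C: cross_val_score(SVC(kernel="linear", C=C), X, y, cv=folds).mean()
                  for C in candidates}
        return max(scores, key=scores.get), scores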
    Figures 2.6 and 2.7 contribute to a better understanding of the effect of different values of $C$ in a non-linearly separable problem. Ideally, one would like to choose $C$ as large as possible, in order to try to reduce the effect of outliers and misclassified samples. However, letting $C \to \infty$ corresponds to not imposing any upper bound on the $\alpha_i$ and, as was already discussed, such a solution only exists if the problem is linearly separable. Therefore, in practice, it is necessary to find a trade-off between a large $C$ and a numerically feasible solution of the optimization problem.
Figure 2.6: Non-linearly separable problem. Classification regions found by a SVM.

Figure 2.7: Same distribution of figure 2.6. Classification regions found by a SVM with a different value of $C$.
Chapter 3

Non linear classification
As described before, support vector machines can only be applied to linear classification problems. However, the vast majority of real world classification problems are in fact non-linear by nature. Fortunately, the SVM framework can be elegantly extended to non-linear problems.
    One conventional solution to solve non-linear classification problems with linear classifiers is to generate intermediate features from the input data such that the input patterns become linearly separable in the feature space. In most cases, this result can be achieved by performing simple but adequate non-linear transformations of the input data. This is the basic strategy adopted, for example, by multilayer neural networks, or multilayer perceptrons (MLP). In any MLP, the output layer acts as a linear classifier. However, the pre-processing of the input by one or more non-linear hidden layers may perform a transformation of the input data such that the output layer works on a linearly separable feature space. This basic idea behind MLP networks can be traced back to the work of Minsky and Papert (Minsky and Papert, 1969).
    The same strategy can be used with support vector machines: the input pattern $\mathbf{x}$ is first mapped into a feature vector $\Phi(\mathbf{x})$ by a non-linear transformation, and a linear SVM is then trained in the feature space to produce the output $y(\mathbf{x})$, as depicted in figure 3.1.

Figure 3.1: Extension of SVM to non linear problems through feature expansion.
    In order to test this feature expansion approach on a non-linear classification problem, we considered the bi-dimensional classification problem represented in figure 3.2. The classes layout corresponds to a chess board pattern, thus resulting in a reasonably hard classification problem.
    In order to solve this problem with a SVM, several classification tests based on feature spaces of different dimensions were performed. In this group of tests, only polynomial transformations of the input data were used. More specifically, for each polynomial order $k$, the feature space was built from all features resulting from products of components of $\mathbf{x}$ of total order lower or equal to $k$. For example, denoting by $\Phi_k(\mathbf{x})$ the expansion of order $k$ and by $x_1, x_2$ the components of $\mathbf{x}$, the following transformations were used for $k = 1, 2, 3$:
\[
  \Phi_1(\mathbf{x}) = ( x_1, \; x_2 )   \tag{3.2}
\]
\[
  \Phi_2(\mathbf{x}) = ( x_1, \; x_2, \; x_1^2, \; x_1 x_2, \; x_2^2 )   \tag{3.3}
\]
\[
  \Phi_3(\mathbf{x}) = ( x_1, \; x_2, \; x_1^2, \; x_1 x_2, \; x_2^2, \;
                         x_1^3, \; x_1^2 x_2, \; x_1 x_2^2, \; x_2^3 ) .   \tag{3.4}
\]
Note that this feature expansion yields, for order $k$, an output space of dimension $k(k+3)/2$ and, therefore, the dimension of the feature space increases considerably with $k$. On the other extreme, if $k = 1$ the SVM reduces to the linear classifier considered before.
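    A sketch of this expansion for bi-dimensional inputs (illustrative, not the code used for the experiments reported here):

    import numpy as np
    from itertools import combinations_with_replacement

    def poly_expand(X, k):
        """Map patterns to all monomials of total degree 1..k (here used for 2-D inputs)."""
        features = []
        for degree in range(1, k + 1):
            for idx in combinations_with_replacement(range(X.shape[1]), degree):
                features.append(np.prod(X[:, idx], axis=1))
        return np.column_stack(features)

    X = np.array([[1.0, 2.0], [3.0, -1.0]])
    print(poly_expand(X, 2).shape)   # (2, 5): x1, x2, x1^2, x1*x2, x2^2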
    The components of $\Phi_k(\mathbf{x})$ are built from products of the components of $\mathbf{x}$ ($x_1$ and $x_2$) and, therefore, they are usually highly correlated. Moreover, the individual components of $\Phi_k(\mathbf{x})$ have quite different variances depending on the order of each output feature. Due to these facts, it is usually difficult to perform a successful optimization procedure directly on the feature space. In order to avoid these problems, an equalization of the feature space was performed by making
\[
  \mathbf{z} = A \big( \Phi_k(\mathbf{x}) - \mathrm{E}[ \Phi_k(\mathbf{x}) ] \big)   \tag{3.6}
\]
where $\mathrm{E}[\Phi_k(\mathbf{x})]$ denotes the ensemble mean of $\Phi_k(\mathbf{x})$ and $A$ is a transformation matrix defined as
\[
  A = \Sigma^{-1/2}   \tag{3.7}
\]
where $\Sigma$ is the covariance matrix of $\Phi_k(\mathbf{x})$:
\[
  \Sigma = \mathrm{E}\big[ \big( \Phi_k(\mathbf{x}) - \mathrm{E}[\Phi_k(\mathbf{x})] \big)
                           \big( \Phi_k(\mathbf{x}) - \mathrm{E}[\Phi_k(\mathbf{x})] \big)^T \big] .   \tag{3.8}
\]
Note that with this transformation the covariance matrix of the final features $\mathbf{z}$ is
\[
  \mathrm{E}[ \mathbf{z} \mathbf{z}^T ] = I   \tag{3.9}
\]
and, therefore, the feature space becomes orthonormalized. It was observed that this equalization of the feature space yielded a much easier SVM optimization procedure.
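    A compact sketch of this whitening step (one possible implementation, assuming the symmetric square root $A = \Sigma^{-1/2}$ as above; the small ridge term is an added safeguard for numerical stability):

    import numpy as np

    def whiten(F, eps=1e-8):
        """Center the feature matrix F (samples x features) and map its covariance to the identity."""
        mean = F.mean(axis=0)
        Fc = F - mean
        cov = np.cov(Fc, rowvar=False) + eps * np.eye(F.shape[1])
        eigval, eigvec = np.linalg.eigh(cov)                     # cov = V diag(e) V^T
        A = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T   # A = cov^(-1/2)
        return Fc @ A.T, mean, A                                 # z_i = A (phi_i - mean)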
                                               
 
                                                    
    In order to assess the properties of polynomial features of different orders, a first set of classification tests was performed with increasing values of $k$, all with the same value of $C$. The results of this first set of tests are depicted in figures 3.3 to 3.7. As would be expected, as $k$ (and hence the dimension of the feature space) increases, a more elaborate boundary line is built by the SVM operating on the expanded features. As a result, the number of misclassified samples decreases with the increase of $k$, and a correct classification of all training patterns is eventually reached.
                 
    A second set of tests was performed with higher values of $k$. Figures 3.8 and 3.9 show the results of these tests. As can be observed, increasing $k$ beyond the order required for correct classification yields an increasingly complex boundary line. In fact, a too high dimension of the feature space only contributes to the uniformization of the distances of each feature vector to $\Pi$ (note that this is an expected consequence of the topological properties of high dimensional spaces).
    For the highest orders tested, the SVM optimization procedure converges to solutions in which all samples lie on the boundary of the classification margin ($y_i(\mathbf{w}\cdot\Phi_k(\mathbf{x}_i) + b) = 1$) and, therefore, all data patterns become support vectors. These solutions correspond to increasingly fragmented classification regions and the regularization effect of the SVM is, in these cases, significantly lower. Note that for these orders the dimensionality of the feature space becomes higher than the number of training samples. This can be a possible justification for the type of solution found by the optimization procedure.
                                                                            T EST   RESULTS   23
                    
	      
    
Figure 3.3: SVM classification of the chess board distribution of figure 3.2. Linear case.

Figure 3.4: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.
                                                                    	   	 
Figure 3.5: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.

Figure 3.6: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.
                                             	          
Figure 3.7: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.

Figure 3.8: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.
                                                                   	
Figure 3.9: SVM classification of the chess board distribution of figure 3.2. Polynomial feature expansion.
                 	
 	             
Figure 3.10: MLP classification of the chess board distribution of figure 3.2.
The chess board problem was also addressed with an MLP network with 8 hidden units and an RBF network with 8 input gaussians. The classification regions found in each case are depicted in figures 3.10 and 3.11. Note that both networks share the same number of free parameters, but the particular choice of 8 hidden units was rather arbitrary at this stage. As would be expected, the solution found by the MLP network is based on a segmentation of the input space with hyper-planes such that all training samples are correctly classified. Apparently, the particular segmentation found by the MLP is quite arbitrary, and the solution depicted in figure 3.10 will hardly provide a reasonable generalization behavior (but it must be emphasized that input distributions of this type, punctuated by local data clusters, are among the most difficult problems for MLP classifiers). On the other hand, it becomes clear that the RBF network provides here a quite simple and natural result, based on centering one (or more) gaussians in each input cluster of one of the two classes¹. Apparently, this result will provide a better generalization behavior than the one obtained by the MLP network. But, in any case, the decision boundary built by the SVM seems a much better match for the classification regions that one would heuristically expect to be associated with the structure of this classification problem.
At this stage, one remark must be made regarding the tests performed throughout this document.
¹ Note that the desired value of the white square patterns was set to 0 for the RBF networks, while the classification threshold was set to 0.5. This strategy considerably simplifies the RBF task. On the other hand, it must be considered that, if the desired values of the target classes are interchanged, the solution found is based on gaussians centered on the complementary class, therefore yielding a rather different decision boundary.
Since our aim is to perform a qualitative comparison of typical solutions found by different classifiers under similar conditions, no test set was used to provide a criterion for early stopping of the learning procedure in the RBF and MLP networks. It is well known that early stopping may remarkably improve the generalization ability of MLP networks, namely when the number of free parameters is close to the number of training patterns (as happens in the majority of the tests described herein). This means that the solutions found by the MLP networks presented in this document are certainly affected by over-learning. But it must be considered that our goal is also to assess the regularization effect of the learning criterion itself. In this sense, all tests were performed under similar and reasonably fair conditions.
Chapter 4

Non linear SVM and kernels
4.1 Definition
As discussed in chapter 3, the feature expansion approach requires the explicit translation of the training patterns from a (usually) low dimensional input space to a (usually) much higher dimensional space where the problem is linearly separable. In this section, we discuss how the theory of kernels may avoid the explicit computation of the patterns in the high dimensional feature space.
Now, the optimization problem itself only requires the computation of the dot products $\mathbf{z}_i \cdot \mathbf{z}_j$. Therefore, assume that we define this dot product as a direct function of $\mathbf{x}_i$ and $\mathbf{x}_j$ or, in other words, that we define a function $K(\mathbf{x}_i, \mathbf{x}_j)$ (called kernel function) such that

    $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{z}_i \cdot \mathbf{z}_j$    (4.2)

If $K(\mathbf{x}_i, \mathbf{x}_j)$ can be directly computed from $\mathbf{x}_i$ and $\mathbf{x}_j$, we avoid the explicit computation of the features $\mathbf{z}_i$. In this case, the elements of the Hessian matrix can simply be defined as

    $H_{ij} = y_i \, y_j \, K(\mathbf{x}_i, \mathbf{x}_j)$    (4.3)
In the more general case, one may even define the kernel $K(\mathbf{x}_i, \mathbf{x}_j)$ directly, without the explicit definition of the feature expansion. Of course, some care must be taken when choosing kernel functions. A valid kernel requires that some transformation of $\mathbf{x}$ into $\mathbf{z}$ exists such that the kernel represents in fact a dot product in the feature space, even if such mapping does not need to be explicitly defined. Note that, for some kernel functions, the feature space may even live in an infinite dimensional Hilbert space.
                                                                                                                                
According to Mercer's condition, a symmetric function $K(\mathbf{u}, \mathbf{v})$ represents a dot product in some feature space, i.e., it can be written as

    $K(\mathbf{u}, \mathbf{v}) = \sum_i \phi_i(\mathbf{u}) \, \phi_i(\mathbf{v})$    (4.4)

($\phi_i$ denoting the $i$-th component of the feature expansion), if, for any function $g$ such that

    $\int g(\mathbf{x})^2 \, d\mathbf{x} < \infty$    (4.5)

the kernel satisfies

    $\int \int K(\mathbf{u}, \mathbf{v}) \, g(\mathbf{u}) \, g(\mathbf{v}) \, d\mathbf{u} \, d\mathbf{v} \ge 0$    (4.6)
Note that the validity of a kernel function requires that (4.6) is satisfied for any finite energy function $g$ (finite $L_2$ norm). This may not be easy to prove, even for quite simple kernel types. On the other hand, it must be emphasized that some frequently used kernels do not satisfy Mercer's condition but are nevertheless able to provide reasonable solutions to the optimization problem.
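Although Mercer's condition cannot, in general, be verified numerically, one of its consequences, namely that every kernel matrix built from the kernel must be positive semi-definite, is easy to check on sample data. A small Python sketch of such a check (the tolerance and the sigmoidal parameters below are arbitrary choices of mine, not values from this report):

    import numpy as np

    def gram_matrix(kernel, X):
        """Kernel (Gram) matrix K[i, j] = kernel(x_i, x_j)."""
        P = len(X)
        return np.array([[kernel(X[i], X[j]) for j in range(P)] for i in range(P)])

    def looks_positive_semidefinite(K, tol=1e-8):
        """True if the smallest eigenvalue of the symmetric matrix K is >= -tol."""
        return np.linalg.eigvalsh(K).min() >= -tol

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 2))

    rbf = lambda u, v: np.exp(-np.sum((u - v) ** 2) / 2.0)
    sigmoid = lambda u, v: np.tanh(2.0 * np.dot(u, v) - 1.0)   # tanh kernels may violate Mercer's condition

    print("RBF kernel matrix PSD:      ", looks_positive_semidefinite(gram_matrix(rbf, X)))
    print("Sigmoidal kernel matrix PSD:", looks_positive_semidefinite(gram_matrix(sigmoid, X)))

A failed check on any data set is enough to show that the kernel is not a valid Mercer kernel; a passed check, of course, proves nothing in general.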
Different types of SVM classifiers may be derived for the same problem by using different kernel functions. Some of the most used kernel functions include:

Polynomial:

    $K(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j)^d$    (4.7)

The previous kernel function may result in a Hessian with zero determinant. In order to avoid this problem, the following alternative polynomial kernel is often used:

    $K(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j + 1)^d$    (4.8)
                                                                     
   		 
   
Radial basis function:

    $K(\mathbf{x}_i, \mathbf{x}_j) = e^{-\|\mathbf{x}_i - \mathbf{x}_j\|^2 / 2\sigma^2}$    (4.9)

Sigmoidal:

    $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\kappa \, \mathbf{x}_i \cdot \mathbf{x}_j + \theta)$    (4.11)
The last two kernels (radial and sigmoidal) correspond to infinite dimensional feature spaces. It must be emphasized that the proper choice of the kernel function is one of the least clarified issues in the use of SVM.
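A minimal Python sketch of these kernel functions (the parameter names d, sigma, kappa and theta are mine, since the corresponding symbols are not fixed here):

    import numpy as np

    def poly_kernel(u, v, d=3):
        """Polynomial kernel (x_i . x_j + 1)^d, the variant of eq. (4.8)."""
        return (np.dot(u, v) + 1.0) ** d

    def rbf_kernel(u, v, sigma=1.0):
        """Gaussian radial basis function kernel."""
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

    def sigmoid_kernel(u, v, kappa=1.0, theta=0.0):
        """Sigmoidal (hyperbolic tangent) kernel; not a valid Mercer kernel for every kappa, theta."""
        return np.tanh(kappa * np.dot(u, v) + theta)

    # Kernel values play the role of dot products in the (implicit) feature space.
    u, v = np.array([1.0, 2.0]), np.array([0.5, -1.0])
    print(poly_kernel(u, v), rbf_kernel(u, v), sigmoid_kernel(u, v))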
   
It was previously shown how the kernel function can avoid the explicit computation of the features in the optimization process. However, it remains to be discussed how the SVM can be used for classification. In fact, the separating hyper-plane lives in the feature space, and the evaluation of the output class for an arbitrary input pattern $\mathbf{x}$ requires the computation of

    $f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{z} + b$    (4.14)

Since the weight vector can be expressed as a linear combination of the support vectors,

    $\mathbf{w} = \sum_{p \in SV} \alpha_p \, y_p \, \mathbf{z}_p$    (4.15)

the dot product $\mathbf{w} \cdot \mathbf{z}$ can be evaluated through the kernel function, and therefore

    $f(\mathbf{x}) = \sum_{p \in SV} \alpha_p \, y_p \, K(\mathbf{x}_p, \mathbf{x}) + b$    (4.17)

Equation (4.17) provides a simple and effective way of computing the output $f(\mathbf{x})$, avoiding the explicit knowledge of $\mathbf{w}$.
                                             
The computation of the parameter $b$ was not addressed yet, but it can be performed by a similar approach. From (4.17), it is clear that, choosing any support vector $\mathbf{x}_p$ lying on the boundary of the classification margin, one may evaluate

    $b = y_p - \sum_{q \in SV} \alpha_q \, y_q \, K(\mathbf{x}_q, \mathbf{x}_p)$    (4.18)

Once again, it is usually numerically safer to compute $b$ from the average of this estimate over all such support vectors.
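A small Python sketch of (4.17) and (4.18), assuming that the multipliers, the targets (coded as ±1), the support patterns and the kernel function are already available from the optimization step (all names below are mine):

    import numpy as np

    def svm_output(x, sv_x, sv_y, sv_alpha, kernel, b):
        """Evaluate f(x) = sum_p alpha_p y_p K(x_p, x) + b over the support vectors."""
        return sum(a * y * kernel(xp, x) for a, y, xp in zip(sv_alpha, sv_y, sv_x)) + b

    def svm_bias(sv_x, sv_y, sv_alpha, kernel, C=None):
        """Average y_p - sum_q alpha_q y_q K(x_q, x_p) over the margin support vectors
        (those with alpha_p strictly below the upper bound C, when such a bound is used)."""
        estimates = []
        for a_p, y_p, x_p in zip(sv_alpha, sv_y, sv_x):
            if C is not None and a_p >= C - 1e-8:
                continue                          # bound support vector: skip
            s = sum(a_q * y_q * kernel(x_q, x_p)
                    for a_q, y_q, x_q in zip(sv_alpha, sv_y, sv_x))
            estimates.append(y_p - s)
        return float(np.mean(estimates))

    # Tiny illustration with two hand-picked "support vectors" and a linear kernel.
    sv_x = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
    sv_y = [1.0, -1.0]
    sv_alpha = [0.25, 0.25]
    b = svm_bias(sv_x, sv_y, sv_alpha, np.dot)
    print(svm_output(np.array([2.0, 0.5]), sv_x, sv_y, sv_alpha, np.dot, b))

The predicted class is then simply the sign of the computed output.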
                                                                                        
   
	
     
For a radial basis function kernel, (4.17) assumes the form

    $f(\mathbf{x}) = \sum_{p \in SV} \alpha_p \, y_p \, e^{-\|\mathbf{x}_p - \mathbf{x}\|^2 / 2\sigma^2} + b$    (4.19)

and therefore the final classifier is equivalent to an RBF network with gaussians of identical variance, each one centered on a given support vector and with an output weight given by $\alpha_p \, y_p$.
                                                                                                                                                    
                                                                                               
For a sigmoidal kernel, the classification performed by the SVM is equivalent to the output of an MLP network with a single layer of hidden units and a linear output unit with bias $b$. In this equivalent network, the hidden units have a hyperbolic tangent activation function, with the weights from the inputs to hidden unit $p$ given by $\kappa \, \mathbf{x}_p$ and the weight from hidden unit $p$ to the output given by $\alpha_p \, y_p$.
In both cases, it is, however, remarkable that the SVM learning criterion avoids the explicit specification of the number of gaussians or hidden units, which results spontaneously from the optimization process.
At this point it is interesting to consider again the chess board problem and compare the classification region obtained by explicit feature expansion methods with the one resulting from a SVM with a kernel. The result is depicted in figure 4.1 for an RBF kernel. Note that this result is similar to the best result obtained with polynomial feature expansion. However, in this case, a quite good solution was found without the need to specify the dimension of the feature space or any learning parameter¹. This is certainly one of the most interesting features of the use of kernels in SVM.
¹ The weight coefficient is the only training parameter which must be set in the SVM optimization procedure. However, in this particular problem, an upper bound on the multipliers was not required.
Separable case
The second problem addressed with a SVM and a RBF kernel function was the quite simple distribution depicted in figure 4.2. This distribution has 31 and 65 patterns in each one of the two classes, in a total of 96 patterns.

The results for this problem using a SVM classifier with a RBF kernel and an MLP network are depicted in figures 4.3 and 4.4. Both classifiers were able to correctly classify all samples of the training set but, once again, the MLP solution is far less reasonable than the one found by the SVM.
The same classification problem was also solved with two RBF networks. The first one, with 8 gaussians (figure 4.5), has the same number of parameters as the MLP network, while the second, with 14 gaussians (figure 4.6), enables a solution with the same number of gaussians as the one found by the SVM. It is clear that both networks find much more reasonable solutions than the one found by the MLP network, and probably both would generalize well. However, the solution found by the SVM seems again much more balanced and plausible than those found by both RBF networks, which are apparently much more marked by the circular shape of the gaussians.
                      
Figure 4.3: Classification region for the cross problem with a SVM with a RBF kernel.

Figure 4.4: Classification region for the cross problem with an MLP network.
Figure 4.5: Classification region for the cross problem with an eight gaussian RBF network.

Figure 4.6: Classification region for the cross problem with a 14 gaussian RBF network.
Figure 4.7: Data distribution in the cross problem with 4 outliers (P=100).
Noisy case
In order to assess the performance of the several classifiers in the presence of noise or outliers, the cross problem was tested again but, this time, with four outliers added to the data distribution (see figure 4.7). The results for this problem using a SVM classifier with a RBF kernel and an MLP network are depicted in figures 4.8 and 4.9.

In this case, neither of the two classifiers was able to correctly classify all training samples. But, again, it must be emphasized how the MLP quickly reacted by trying to correctly classify all outlier samples, therefore yielding a quite implausible solution with only one misclassified sample. On the other hand, the SVM reacted in a much more moderate way, basically making use of the slack variables to build a classification boundary much closer to the one derived in the absence of outliers, and leaving the four outliers misclassified.
The same classification problem was again repeated with pure RBF networks. This time, the first test was again performed with eight gaussians (figure 4.10) and a second one with 23 gaussians (figure 4.11), in order to offer the same degrees of freedom as the solution found by the SVM. In both cases the RBF networks performed reasonably well, and it is remarkable that, in the first case, the classifier basically ignored the outliers and built a solution quite similar to the one found with the clean data distribution. However, with 23 gaussians, the solution found by the RBF network, with only one misclassified sample, is already clearly subject to over-learning, yielding a less reasonable decision boundary.
                                              	   
   
Figure 4.8: Classification region for the cross problem with 4 outliers and a SVM with a RBF kernel.

Figure 4.9: Classification region for the cross problem with 4 outliers and an MLP network.
Figure 4.12: Source data distribution for the sinusoidal problem. P=320
Separable case
The sinusoidal problem represents one case where the data distribution is certainly far away from those usually found in real world problems. But any classifier must build a quite complex boundary line to correctly classify this problem and, therefore, it was considered an interesting case study.

The source data distribution was built in a rather deterministic way and is represented in figure 4.12.

The solutions found in this classification problem by a SVM with an RBF kernel and an MLP network are depicted in figures 4.13 and 4.14. Again, both classifiers were able to correctly classify all data samples. However, while the SVM built an elegant and balanced boundary line between both classes, the MLP solution was again marked by the straight lines which correspond to the decision hyper-planes of the sigmoidal units. While this solution provides a reasonable decision boundary, it is clear that, for some samples, the classification margin is rather small.

Once again, two RBF networks were tried on this problem. The first one, based on the fixed reference of 8 gaussians, is depicted in figure 4.15. It is clear that, in this particular case, the number of gaussians in the RBF network was far from the minimum required to achieve a correct classification of all training patterns, therefore providing an odd – and useless – solution. In the second test, the RBF network was tested with the same number of gaussians (35) used by the SVM.
                          
        
Figure 4.13: Sinusoidal problem. Solution found by a SVM with a RBF kernel.
In this case, the RBF network was able to find a reasonable solution that correctly classifies all data patterns. However, once more, the effect of the circular shaped gaussians is still noticeable on the boundary line.
                                                                                  	 
Figure 4.14: Sinusoidal problem. Solution found by an MLP network.

Figure 4.15: Sinusoidal problem. Solution found by a RBF network (eight gaussians).

Figure 4.16: Sinusoidal problem. Solution found by a RBF network (35 gaussians).
                                                                                                        Chapter 5
Implementation issues
As discussed in the previous chapters, building a SVM requires the solution of a quadratic programming problem subject to fairly simple linear constraints. This problem can be easily solved using standard optimization tools.
When the size of the training set is small, the Matlab standard quadratic programming function qp can be directly used to build the SVM. The qp Matlab function has the syntax

    x = qp(H, f, A, b, vlb, vub)

and solves the quadratic programming problem

    $\min_{\mathbf{x}} \; \frac{1}{2}\,\mathbf{x}^T H\,\mathbf{x} + \mathbf{f}^T \mathbf{x}$    (5.2)

subject to

    $A\,\mathbf{x} \le \mathbf{b}$    (5.4)

    $\mathbf{vlb} \le \mathbf{x} \le \mathbf{vub}$    (5.5)
The qp Matlab function is supplied in source code, therefore providing a good starting point for the analysis and implementation of a simple quadratic programming package. Its main drawback remains the need to fully build the Hessian matrix before the optimization procedure. Therefore, this approach requires an amount of memory in the order of $P^2$ ($P$ being the training set size).
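For small training sets the same direct approach can also be sketched outside Matlab. The following Python fragment casts the SVM dual (multipliers bounded by $C$, equality constraint $\sum_p \alpha_p y_p = 0$) in this generic quadratic programming form and solves it with scipy's general purpose SLSQP routine in place of a dedicated qp function; the data, the kernel and the value of C are placeholders of mine:

    import numpy as np
    from scipy.optimize import minimize

    def train_svm_dual(X, y, kernel, C=10.0):
        """Solve the SVM dual QP for a small training set (O(P^2) memory)."""
        P = len(X)
        K = np.array([[kernel(X[i], X[j]) for j in range(P)] for i in range(P)])
        H = (y[:, None] * y[None, :]) * K                  # Hessian of the dual objective

        objective = lambda a: 0.5 * a @ H @ a - a.sum()    # minimize the negative dual
        gradient = lambda a: H @ a - np.ones(P)
        constraint = {"type": "eq", "fun": lambda a: a @ y, "jac": lambda a: y}

        res = minimize(objective, np.zeros(P), jac=gradient,
                       bounds=[(0.0, C)] * P, constraints=[constraint],
                       method="SLSQP")
        return res.x                                       # Lagrange multipliers alpha

    # Toy usage: a trivially separable two-pattern problem with a linear kernel.
    X = np.array([[1.0, 1.0], [-1.0, -1.0]])
    y = np.array([1.0, -1.0])
    alpha = train_svm_dual(X, y, kernel=np.dot)
    print(alpha)   # the patterns with alpha > 0 are the support vectors

Like the Matlab route, this only makes sense for small problems, since the full kernel matrix is built in memory.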
    Another approach to the implementation of SVM is to write dedicated code based on any
standard non-linear programming package. The DONLP2 package, available from the Netlib
       repository, is a powerful nonlinear programming package developed by Prof. Dr. Peter Spellucci
       (e-mail spellucci@mathematik.th-darmstadt.de). It uses an SQP algorithm and dense-matrix linear
       algebra. Source for DONLP2 is available by ftp from netlib ftp servers, e.g.,
ftp://netlib.bell-labs.com/netlib/opt/donlp2.tar.
            DONLP2 was originally written in Fortran, but it can be translated to C by the widely available
       f2c translator. The main DONLP2 module is an optimization routine which is parametrized via
       global variables included in standard Fortran common blocks. All the examples included in this
       document were computed by a dedicated C program which makes use of the DONLP2 package
       via a simplified set of C interface routines.
As stated before, the SVM optimization procedure implies solving a fairly large quadratic programming problem. DONLP2, for instance, requires an amount of memory that grows with the square of the training set size, easily exceeding 2 GBytes of memory with less than 10000 training samples. On the other hand, each iteration of the optimization procedure is a complex process which may require a large amount of time for large values of $P$.
Another procedure for building SVMs from large databases was proposed in (Osuna et al., 1997). The proposed method splits the optimization procedure into several smaller subtasks. Within each subtask, a given number of parameters is kept fixed and the optimization is performed over the remaining multipliers. This procedure is then iteratively solved for each parameter subset, until a stopping criterion is met. The authors report that this procedure made it possible to build a SVM with about 100,000 support vectors from a database with 110,000 samples.
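A schematic Python sketch of this decomposition idea follows. It is not the algorithm of Osuna et al. in detail: the random working set selection, the fixed number of sweeps and the generic SLSQP sub-solver are simplifications of mine, and the full kernel matrix is precomputed only to keep the sketch short:

    import numpy as np
    from scipy.optimize import minimize

    def decompose_train(K, y, C=10.0, q=16, sweeps=20):
        """Iteratively optimize the SVM dual over small working sets of size q,
        keeping the remaining Lagrange multipliers fixed."""
        P = len(y)
        H = (y[:, None] * y[None, :]) * K
        alpha = np.zeros(P)
        rng = np.random.default_rng(0)

        for _ in range(sweeps):
            B = rng.choice(P, size=min(q, P), replace=False)     # working set
            N = np.setdiff1d(np.arange(P), B)                    # fixed multipliers

            # Dual restricted to alpha_B: 0.5 a'H_BB a + (H_BN alpha_N - 1)'a + const.
            H_BB = H[np.ix_(B, B)]
            lin = H[np.ix_(B, N)] @ alpha[N] - 1.0
            eq_rhs = -(y[N] @ alpha[N])                          # keeps sum_p y_p alpha_p = 0

            res = minimize(lambda a: 0.5 * a @ H_BB @ a + lin @ a,
                           alpha[B],
                           jac=lambda a: H_BB @ a + lin,
                           bounds=[(0.0, C)] * len(B),
                           constraints=[{"type": "eq",
                                         "fun": lambda a: y[B] @ a - eq_rhs}],
                           method="SLSQP")
            alpha[B] = res.x
        return alpha

In a realistic large scale implementation, the working set would be chosen from the patterns that most violate the optimality conditions, the stop criterion would be based on those same conditions, and only the kernel rows of the current working set would be computed at each step.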
Chapter 6

Conclusions
Support Vector Machines rely on the concept of optimal margin classification. While this is a reasonably well understood concept in linear classification problems, its scope and conceptual basis are less trivial in nonlinear applications.

SVM address nonlinear problems by projecting the input space into a high dimensional feature space where the problem is assumed to be linearly separable. However, the application of the optimal margin concept to the feature space requires some further discussion. In fact, the classification boundary that is derived by the SVM theory in the input data space is strongly dependent on the transformation that is performed from the input space to the feature space. This is affected by the choice of the feature expansion or the kernel function. But since different feature expansions lead to different solutions, the classifier found by the SVM theory for nonlinear problems is not unique and does not comply with a universal optimality criterion.
It was also discussed how SVM classifiers depend on the upper bound defined for the Lagrange multipliers. While it is clear that the proper choice of this parameter depends on the amount of noise in the data set, there is not yet a well established procedure to set this parameter.
Nevertheless, the examples shown in this short report show that support vector machines exhibit a remarkable regularization behavior, often providing better classification boundaries than those found by other standard classifiers, namely MLP and RBF networks. When used with RBF kernels (Girosi, 1997), SVM provide an interesting solution to the choice of the centers of Radial Basis Functions, playing an important role in the choice of solutions which are sparse and, at the same time, can be related to the principle of Structural Risk Minimization. This provides a strong motivation for the analysis of SVM as classifiers with near optimal generalization properties.
                                                                         References
Baum, E. (1990). When are k-nearest neighbour and back propagation accurate for feasible sized
   sets of examples? In Almeida, L. B. and Wellekens, C. J., editors, Neural Networks, Lec-
   ture Notes in Computer Science, pages 69–80. EURASIP Workshop on Neural Networks,
   Sesimbra, Portugal, Springer-Verlag.
Baum, E. and Haussler, D. (1989). What size net gives valid generalization? In Touretzky, D.,
   editor, Advances in Neural Information Processing Systems 1, pages 81–90, San Mateo, CA.
   Morgan Kaufmann.
Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167. Kluwer Academic Publishers.
Cohn, D. and Tesauro, G. (1991). Can neural networks do better than the Vapnik-Chervonenkis
    bounds? In Lippman, R., Moody, J., and Touretzky, D., editors, Advances in Neural Infor-
    mation Processing Systems 3, pages 911–917, San Mateo, CA. Morgan Kaufmann.
Cortes, C. and Vapnik, V. (1995). Support vector networks. Machine Learning, 20:273–297.
Fletcher, R. (1987). Practical Methods of Optimization. John Wiley and Sons, Inc.
Girosi, F. (1997). An equivalence between sparse approximation and support vector machines.
    Technical Report A.I. Memo No. 1606, Massachusetts Institute of Technology, Artificial In-
    telligence Laboratory.
Gunn, S. (1997). Support vector machines for classification and regression. Technical Report
    http://www.isis.ecs.soton.ac.uk/research/svm/svm.pdf, University of Southampton.
Guyon, I., Vapnik, V., Bottou, L., and Solla, S. A. (1992). Structural risk minimization for char-
    acter recognition. In Hanson, S., Cowan, J., and Giles, C., editors, Advances in Neural
    Information Processing Systems 4, pages 471–479, San Mateo, CA. Morgan Kaufman.
     Ji, C. and Psaltis, D. (1992). The VC-dimension versus the statistical capacity of multilayer
          networks. In Hanson, S., Cowan, J., and Giles, C., editors, Advances in Neural Information
          Processing Systems 4, pages 928–935, San Mateo, CA. Morgan Kaufman.
     Krogh, A. and Hertz, J. A. (1992). A simple weight decay can improve generalization. In Hanson,
         S., Cowan, J., and Giles, C., editors, Advances in Neural Information Processing Systems 4,
         pages 950–957, San Mateo, CA. Morgan Kaufman.
Le Cun, Y., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. (1990).
         Handwritten digit recognition with a backpropagation network. In Touretzky, D., editor, Ad-
         vances in Neural Information Processing Systems 2, pages 396–404, San Mateo, CA. Morgan
         Kaufmann.
     Osuna, E., Freund, R., and Girosi, F. (1997). An improved training algorithm for support vector
         machines. In Proc. of IEEE NNSP 97, Amelia Island.
     Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organi-
         zation in the brain. Psychological Review, 65:386–408.
     Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by
        error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of
        Cognition. MIT Press, Cambridge, MA.
     Vapnik, V. N. (1992). Principles of risk minimization for learning theory. In Hanson, S., Cowan, J.,
         and Giles, C., editors, Advances in Neural Information Processing Systems 4, pages 831–838,
         San Mateo, CA. Morgan Kaufman.
Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer-Verlag, New York.
     Widrow, B. and Hoff, M. (1960). Adaptive switching circuits. In IRE WESCON Convention
Record, Part 4, pages 96–104.