SVM Detail

Support Vector Machine (SVM) is a supervised learning algorithm used for classification and regression, aiming to find the best hyperplane that separates data points with maximum margin. Key concepts include hyperplanes, margins, support vectors, and the use of kernels for handling non-linear classification. SVM has advantages such as effectiveness in high-dimensional spaces and versatility, but it can be slow with large datasets and requires careful selection of kernels and hyperparameters.

What is SVM?

Support Vector Machine (SVM) is a supervised learning algorithm used for:

 Classification

 Regression (called Support Vector Regression, or SVR)

SVM aims to find the best hyperplane that separates data points of different classes with the maximum margin.
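A minimal sketch of the idea, assuming scikit-learn's SVC is available (the library and the toy data are illustrative assumptions, not part of the original text):

```python
# Minimal sketch: training a linear SVM classifier on two separable clusters.
# Assumes scikit-learn is installed; data points are purely illustrative.
from sklearn.svm import SVC

# Two well-separated classes in 2-D
X = [[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]]
y = [0, 0, 0, 1, 1, 1]

# kernel="linear" searches for a flat separating hyperplane with maximum margin
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))  # -> [0 1]
```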

📏 Key Concepts

1. Hyperplane

In an n-dimensional space (n = number of features), a hyperplane is a flat affine subspace of (n-1) dimensions that separates the data.

2. Margin

The margin is the distance between the hyperplane and the nearest data
point from any class.
SVM maximizes this margin.

3. Support Vectors

These are the data points closest to the hyperplane — they define the
decision boundary. Removing them changes the model.
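This can be seen directly in code. The sketch below assumes scikit-learn's SVC, whose fitted attributes `support_vectors_` and `support_` expose exactly these points:

```python
# Sketch: inspecting which training points become support vectors.
# Assumes scikit-learn; the four collinear points are illustrative.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [3, 3], [4, 4]]
y = [0, 0, 1, 1]

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the points nearest the hyperplane are retained as support vectors;
# the decision boundary depends on these alone.
print(clf.support_vectors_)  # coordinates of the support vectors
print(clf.support_)          # their indices in X -> [1 2], i.e. (1,1) and (3,3)
```

The outer points (0,0) and (4,4) play no role in the fitted boundary, which is why SVMs are described as memory efficient later in this document.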

4. Linear vs. Non-linear SVM

 Linear SVM: used when the data is linearly separable.

 Non-linear SVM: uses the kernel trick to map data into a higher-dimensional space where it becomes separable.

🔁 Kernels in SVM

Kernels allow SVM to handle non-linear classification by transforming data into higher dimensions.

 Linear: No transformation. Use for linearly separable data.

 Polynomial: Uses polynomial functions. Use for complex but structured data.

 RBF (Gaussian): Maps to infinite dimensions via radial basis functions. General-purpose choice for non-linear data.

 Sigmoid: Similar to neural networks' activation functions. Less common; mainly for experimentation.

⚙️ Key Parameters

 C (regularization): Controls the trade-off between a smooth decision boundary and classification error.

 kernel: Specifies the kernel type ('linear', 'rbf', 'poly', etc.).

 gamma: Controls the influence of a single training example (only for RBF/poly kernels).

 degree: Degree of the polynomial kernel function (if using 'poly').
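Since choosing C and gamma by hand is difficult, they are commonly tuned with a grid search. A sketch assuming scikit-learn's GridSearchCV (the dataset and parameter grid are illustrative):

```python
# Sketch: tuning C and gamma via cross-validated grid search.
# Assumes scikit-learn; the synthetic dataset and grid values are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

param_grid = {
    "C": [0.1, 1, 10],        # softer -> harder margin
    "gamma": [0.01, 0.1, 1],  # wider -> narrower RBF influence per example
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # best (C, gamma) combination found
print(search.best_score_)    # mean cross-validated accuracy for that combination
```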

✅ Pros and ❌ Cons

✅ Pros:

 Effective in high-dimensional spaces

 Works well with clear margin of separation

 Memory efficient (only support vectors used)

 Versatile (with different kernel functions)

❌ Cons:

 Slow with large datasets

 Hard to choose the right kernel and hyperparameters

 No probabilistic interpretation (but can be approximated)
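On the last point: the standard approximation is Platt scaling, which scikit-learn exposes via `probability=True` on SVC. A sketch (library choice and toy data are assumptions):

```python
# Sketch: approximating class probabilities for an SVM via Platt scaling.
# Assumes scikit-learn; probability=True fits an internal sigmoid on
# cross-validated decision values, which adds cost at training time.
from sklearn.svm import SVC

X = [[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
     [5, 5], [5, 6], [6, 5], [6, 6], [5.5, 5.5]]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

clf = SVC(kernel="linear", probability=True).fit(X, y)

# Each row of predict_proba sums to 1 across the classes
print(clf.predict_proba([[0.5, 0.5]]))
```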
