
Data Mining & Machine Learning

Supervised Machine Learning: Support Vector Machine (SVM)


Often the classes in a dataset cannot be separated by a linear boundary, and finding a quadratic (or other explicit nonlinear) function to delineate the groups is impractical, which reduces prediction accuracy. In those cases, we can use another important classification method, one you will see applied to the analysis of RNA-seq data: the support vector machine (SVM). SVM differs from other classification methods in that it can find a complex class boundary in an efficient way. To do so, it transforms the feature space with a kernel function. For example, a circular class border can be matched by a straight hyperplane if the space is curved in such a way that the classes can be cut by a flat decision surface.
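The circular-border example above can be sketched in a few lines of Python. This is a minimal illustration (assuming scikit-learn is available; the dataset is synthetic, generated with `make_circles`): two concentric rings that no straight line can separate in the input space become separable once an RBF kernel implicitly curves the space.

```python
# Minimal sketch using scikit-learn (assumed available):
# a circular class boundary is not linearly separable in the input
# space, but an RBF-kernel SVM separates it easily.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: one class inside, one class outside.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)  # straight-line boundary only
rbf_svm = SVC(kernel="rbf").fit(X, y)        # kernel-transformed boundary

# The linear SVM fails on the rings; the RBF kernel fits them well.
print("linear accuracy:", round(linear_svm.score(X, y), 2))
print("rbf accuracy:", round(rbf_svm.score(X, y), 2))
```

Here the kernel does the space-curving for us: we never compute the transformed coordinates explicitly, yet the resulting decision surface in the original space is a closed curve around the inner ring.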

The original SVM algorithm was invented by Vladimir Vapnik and Alexey Chervonenkis in 1963. Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The current standard incarnation of SVM, using soft margins, was proposed by Corinna Cortes and Vapnik and published in 1995.

The idea of Support Vector Machines is to map the training data into a higher-dimensional feature space via a mapping 𝚽 (Phi) and to construct a separating hyperplane with maximum margin there. This yields a non-linear decision boundary in the original input space. The trained model is then used to predict labels for a test dataset, and its quality is judged by its prediction accuracy on that held-out data.
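The train-then-evaluate workflow described above can be sketched as follows. This is an illustrative example, not the lesson's actual pipeline: it assumes scikit-learn and uses the built-in iris dataset as a stand-in for real expression data.

```python
# Sketch of the supervised SVM workflow (assumed scikit-learn API):
# fit on training data, then measure prediction accuracy on a
# held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in dataset; in the lesson this would be RNA-seq-derived features.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Accuracy = fraction of test samples assigned the correct class.
accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Keeping the test set out of training is what makes the reported accuracy an honest estimate of how the classifier will behave on new samples.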

To explore the concept in greater detail, with step-by-step pipeline instructions and parameters, visit Lesson 14: Transcriptomics on the OmicsLogic Learn Portal: https://learn.omicslogic.com/Learn/course-5-transcriptomics/lesson/14-t3-supervised-machine-learning

 

In this lesson, you will learn about supervised machine learning methods, including LDA, Random Forest, SVM, and Naive Bayes.