Kaevyn04
wrote...
Posts: 353
A month ago
In 1992, Boser, Guyon, and Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. How does the resulting algorithm differ from the original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963?
Textbook: Analytics, Data Science, & Artificial Intelligence: Systems for Decision Support
Edition: 11th
Read 50 times
2 Replies
Replies
Answer verified by a subject expert
mnp2357
wrote...
Posts: 337
A month ago
The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional; thus, although the classifier is a hyperplane in that high-dimensional feature space, it may be nonlinear in the original input space.
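If you want to see the difference in practice, here is a minimal sketch assuming scikit-learn is installed; the concentric-circles dataset and the RBF kernel are illustrative choices on my part, not something from the textbook.

# Contrast the original linear maximum-margin classifier (1963-style) with the
# 1992 kernelized version, where every dot product is replaced by a kernel.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no hyperplane in the original 2-D input space separates them.
X, y = make_circles(n_samples=300, factor=0.4, noise=0.05, random_state=0)

# Linear optimal hyperplane: uses plain dot products, so the boundary stays linear.
linear_svm = SVC(kernel="linear").fit(X, y)

# Kernel trick: dot products are replaced by a nonlinear (RBF) kernel, so the
# maximum-margin hyperplane lives in a high-dimensional feature space and appears
# as a nonlinear boundary in the input space.
kernel_svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)

print("linear accuracy:", linear_svm.score(X, y))   # near chance on the rings
print("kernel accuracy:", kernel_svm.score(X, y))   # close to 1.0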
wrote...
A month ago
Brilliant