Seminar abstract

Power Mean SVM

Jianxin Wu
Assistant Professor
Nanyang Technological University

Abstract: In this talk, I will present PmSVM (Power Mean SVM), a classifier that trains significantly faster than state-of-the-art linear and non-linear SVM solvers on large-scale visual classification tasks, while also achieving higher accuracy. A scalable learning method for large vision problems (e.g., with millions of examples and/or millions of dimensions) is a key component of many current computer vision systems. Recent progress has enabled linear classifiers to process such large-scale problems efficiently. Linear classifiers, however, usually have inferior accuracy in vision tasks. Non-linear classifiers such as additive kernel SVMs, on the other hand, may take weeks to train. We propose a power mean kernel, prove its positive definiteness, and present an efficient learning algorithm based on gradient approximation. The power mean kernel includes many recently proposed additive kernels as special cases. Empirically, PmSVM is up to 5 times faster than LIBLINEAR and up to 10 times faster than additive kernel classifiers. In terms of accuracy, it outperforms state-of-the-art additive kernel implementations and has major advantages over linear classifiers.
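To make the "special cases" claim concrete, the additive power mean kernel applies the power mean M_p(a, b) = ((a^p + b^p)/2)^(1/p) coordinate-wise and sums over dimensions. The sketch below (an illustration assuming non-negative histogram-style features; the function name and example vectors are made up, not from the talk) shows how the chi-square kernel (p = -1, the harmonic mean) and the histogram intersection kernel (the limit p → -∞) fall out of one formula:

```python
import numpy as np

def power_mean_kernel(x, y, p):
    """Additive power mean kernel: sum over dimensions d of M_p(x_d, y_d),
    where M_p(a, b) = ((a**p + b**p) / 2) ** (1/p).
    Assumes non-negative inputs (e.g., normalized histograms)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if p == float("-inf"):
        # Limit p -> -inf of the power mean is min(a, b):
        # the histogram intersection kernel.
        return np.minimum(x, y).sum()
    if p == 0.0:
        # Limit p -> 0 is the geometric mean sqrt(a * b):
        # the Hellinger kernel.
        return np.sqrt(x * y).sum()
    return (((x ** p + y ** p) / 2.0) ** (1.0 / p)).sum()

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.4, 0.2])

hik = power_mean_kernel(x, y, float("-inf"))  # histogram intersection
chi2 = power_mean_kernel(x, y, -1.0)          # harmonic mean = 2ab/(a+b), chi-square kernel
```

Note that for p = -1 the power mean reduces term-by-term to 2ab/(a+b), which is exactly the additive chi-square kernel; this is the sense in which one parameter p interpolates across the common additive kernels.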

Bio: Jianxin Wu received his BS and MS degrees in computer science from Nanjing University, and his PhD degree in computer science from the Georgia Institute of Technology. He is currently an assistant professor in the School of Computer Engineering, Nanyang Technological University, Singapore. His research interests include computer vision, machine learning, and robotics, especially large-scale learning problems in vision and robotics.

Contact LAMDA: (email) (tel) +86-25-89685926. © LAMDA, 2016