Advances in large margin classifiers / edited by Alexander J. Smola ... [et al.].
Contributor(s): Smola, Alexander J | IEEE Xplore (Online Service) [distributor.] | MIT Press [publisher.].
Material type: Book
Series: Neural information processing series
Publisher: Cambridge, Massachusetts : MIT Press, c2000
Distributor: [Piscataway, New Jersey] : IEEE Xplore, [2000]
Description: 1 PDF (vi, 412 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9780262283977
Subject(s): Kernel functions | Algorithms | Machine learning
Genre/Form: Electronic books
Additional physical formats: Print version: No title
Online resources: Abstract with links to resource. Also available in print.
Summary: The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
Includes bibliographical references (p. [389]-407) and index.
Restricted to subscribers or individual electronic text purchasers.
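The summary's central idea, that the margin of a classification (a scale-normalized confidence) carries more information than the raw 0/1 training error, can be sketched numerically. A minimal NumPy illustration, where the weight vector, bias, and data points are invented for the example and the margin is the standard signed distance y * f(x) / ||w|| for a linear classifier f(x) = w.x + b:

```python
import numpy as np

# Invented linear classifier f(x) = w.x + b (illustrative values only)
w = np.array([2.0, -1.0])   # assumed weight vector
b = 0.5                     # assumed bias

# Three invented training points with labels in {-1, +1}
X = np.array([[1.0, 0.0], [0.2, 0.1], [-1.0, 1.0]])
y = np.array([1, 1, -1])

scores = X @ w + b
# Geometric margin: signed distance from the decision boundary,
# positive iff the point is classified correctly.
margins = y * scores / np.linalg.norm(w)

# All three points are classified correctly, so the raw training
# error is zero for each, yet their margins differ: the second
# point sits much closer to the boundary, i.e. the classifier is
# far less confident about it.
print(margins)
```

This is what the summary means by a "scale parameter": rescaling w and b leaves the 0/1 error untouched but changes the unnormalized score, so the normalized margin is the quantity that meaningfully ranks confidence across examples.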
Also available in print.
Mode of access: World Wide Web
Description based on PDF viewed 12/23/2015.