000 | 03991nam a2200493 i 4500 | ||
---|---|---|---|
001 | 6267226 | ||
003 | IEEE | ||
005 | 20220712204604.0 | ||
006 | m o d | ||
007 | cr |n||||||||| | ||
008 | 151223s2007 maua ob 001 eng d | ||
020 |
_z9780262026253 _qprint |
||
020 |
_a9780262255790 _qebook |
||
020 |
_z0262255790 _qelectronic |
||
035 | _a(CaBNVSL)mat06267226 | ||
035 | _a(IDAMS)0b000064818b41b8 | ||
040 |
_aCaBNVSL _beng _erda _cCaBNVSL _dCaBNVSL |
||
050 | 4 |
_aQA76.9.D35 _bL38 2007eb |
|
082 | 0 | 4 |
_a005.7/3 _222 |
245 | 0 | 0 |
_aLarge-scale kernel machines / _c[edited by] Léon Bottou ... [et al.]. |
264 | 1 |
_aCambridge, Massachusetts : _bMIT Press, _cc2007. |
|
264 | 2 |
_a[Piscataway, New Jersey] : _bIEEE Xplore, _c[2007] |
|
300 |
_a1 PDF (xii, 396 pages) : _billustrations. |
||
336 |
_atext _2rdacontent |
||
337 |
_aelectronic _2isbdmedia |
||
338 |
_aonline resource _2rdacarrier |
||
490 | 1 | _aNeural information processing series | |
504 | _aIncludes bibliographical references (p. [361]-387) and index. | ||
506 | 1 | _aRestricted to subscribers or individual electronic text purchasers. | |
520 | _aPervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction of the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically. Contributors: Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov. Léon Bottou is a Research Scientist at NEC Labs America. Olivier Chapelle is with Yahoo! Research.
He is editor of Semi-Supervised Learning (MIT Press, 2006). Dennis DeCoste is with Microsoft Research. Jason Weston is a Research Scientist at NEC Labs America. | ||
530 | _aAlso available in print. | ||
538 | _aMode of access: World Wide Web | ||
588 | _aDescription based on PDF viewed 12/23/2015. | ||
650 | 0 |
_aData structures (Computer science) _98188 |
|
650 | 0 |
_aMachine learning. _91831 |
|
655 | 0 |
_aElectronic books. _93294 |
|
700 | 1 |
_aBottou, Léon. _921627 |
|
710 | 2 |
_aIEEE Xplore (Online Service), _edistributor. _921628 |
|
710 | 2 |
_aMIT Press, _epublisher. _921629 |
|
776 | 0 | 8 |
_iPrint version: _z9780262026253 |
830 | 0 |
_aNeural information processing series _921630 |
|
856 | 4 | 2 |
_3Abstract with links to resource _uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=6267226 |
942 | _cEBK | ||
999 |
_c72884 _d72884 |