Machine learning
Revision as of 15:52, 20 January 2021
- For the journal, see Machine Learning (journal).
Machine learning is defined as "a type of artificial intelligence that enables computers to independently initiate and execute learning when exposed to new data"[1][2].
As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets.
The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related to data mining and statistics but also theoretical computer science.
Machine learning has a wide spectrum of applications including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion.
Human interaction
Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated since the designer of the system must specify how the data are to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method.
Some statistical machine learning researchers create methods within the framework of Bayesian statistics.
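The Bayesian framing mentioned above can be illustrated with the simplest conjugate case, a beta-binomial update of a belief about an unknown success probability. This is a minimal sketch; the prior pseudo-counts and the observed data below are invented for illustration.

```python
# Conjugate beta-binomial update: a minimal sketch of Bayesian learning.
# The prior and the observation counts are invented for this example.

def beta_binomial_update(alpha, beta, successes, failures):
    """Return the posterior Beta(alpha', beta') after observing the data."""
    return alpha + successes, beta + failures

# Start from a uniform prior Beta(1, 1) over the unknown success probability.
alpha, beta = 1, 1
# Observe 7 successes and 3 failures.
alpha, beta = beta_binomial_update(alpha, beta, 7, 3)
posterior_mean = alpha / (alpha + beta)  # 8 / 12, i.e. about 0.667
```

The posterior mean blends the prior with the data: with more observations it converges to the empirical frequency, which is the sense in which the method "learns" from exposure to data.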
Algorithm types
Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm. Common algorithm types include:[3]
- Supervised learning — in which the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner must approximate the behavior of a function that maps a vector (X₁, X₂, …, X_N) into one of several classes by looking at several input-output examples of the function.
- Support vector machine is a "supervised machine learning algorithm which learns to assign labels to objects from a set of training examples. Examples are learning to recognize fraudulent credit card activity by examining hundreds or thousands of fraudulent and non-fraudulent credit card activity, or learning to make disease diagnosis or prognosis based on automatic classification of microarray gene expression profiles drawn from hundreds or thousands of samples"[4].
- Unsupervised learning — which models a set of inputs: labeled examples are not available.
- Semi-supervised learning — which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
- Reinforcement learning — in which the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm.
- Transduction — similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and test inputs which are available while training.
- Learning to learn — in which the algorithm learns its own inductive bias based on previous experience.
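The supervised classification setting described above can be sketched with one of the simplest possible learners, a 1-nearest-neighbour classifier that assigns a query the label of the closest input-output example. This is a minimal pure-Python sketch; the toy points and class labels are invented for illustration.

```python
import math

def nearest_neighbor_classify(train, query):
    """Assign `query` the label of the closest training example.

    `train` is a list of (feature_vector, class_label) pairs, i.e. the
    input-output examples of the unknown function the learner approximates.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda example: dist(example[0], query))
    return label

# Toy training set: two classes of points in the plane (invented data).
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(nearest_neighbor_classify(train, (0.2, 0.1)))  # nearest examples are class "A"
```

Richer supervised learners (decision trees, support vector machines, neural networks) differ in how they generalize between the examples, but share this same input-output training interface.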
Deep learning, a family of methods built on artificial neural networks, is defined as "supervised or unsupervised machine learning methods that use multiple layers of data representations generated by nonlinear transformations, instead of individual task-specific algorithms, to build and train neural network models"[5].
- Convolutional neural network (CNN; ConvNet), also called shift-invariant or space-invariant artificial neural network (SIANN), is used for visual imagery such as retinal scans[6].
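The "shift invariant" behaviour named above comes from applying the same small filter at every position of the input, so a pattern produces the same response wherever it occurs. A minimal 1-D sketch (the signal and filter weights are invented for illustration):

```python
def convolve1d(signal, kernel):
    """Slide `kernel` across `signal` (valid positions only), reusing the
    same weights at every offset -- the source of shift invariance."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A simple edge-detecting filter responds wherever the 0 -> 1 step occurs,
# regardless of where in the signal it appears.
kernel = [-1, 1]
print(convolve1d([0, 0, 1, 1, 0], kernel))  # step detected at one position
print(convolve1d([0, 1, 1, 0, 0], kernel))  # same responses, shifted over
```

A 2-D version of the same weight-sharing idea, stacked in layers with nonlinearities, is what a CNN applies to images.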
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
Machine learning topics
- This list represents the topics covered on a typical machine learning course.
See also
- Artificial intelligence
References
- ↑ Anonymous (2024), Machine learning (English). Medical Subject Headings. U.S. National Library of Medicine.
- ↑ Liu Y, Chen PC, Krause J, Peng L (2019). "How to Read Articles That Use Machine Learning: Users' Guides to the Medical Literature". JAMA. 322 (18): 1806–1816. doi:10.1001/jama.2019.16489. PMID 31714992.
- ↑ Sidey-Gibbons JAM, Sidey-Gibbons CJ (2019). "Machine learning in medicine: a practical introduction". BMC Med Res Methodol. 19 (1): 64. doi:10.1186/s12874-019-0681-4. PMC 6425557. PMID 30890124.
- ↑ Anonymous (2024), Deep learning (English). Medical Subject Headings. U.S. National Library of Medicine.
- ↑ Anonymous (2024), Deep learning (English). Medical Subject Headings. U.S. National Library of Medicine.
- ↑ Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A; et al. (2016). "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs". JAMA. 316 (22): 2402–2410. doi:10.1001/jama.2016.17216. PMID 27898976.
Bibliography
- Ethem Alpaydın (2004) Introduction to Machine Learning (Adaptive Computation and Machine Learning), MIT Press, ISBN 0262012111
- Christopher M. Bishop (2007) Pattern Recognition and Machine Learning, Springer ISBN 0-387-31073-8.
- Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1983), Machine Learning: An Artificial Intelligence Approach, Tioga Publishing Company, ISBN 0-935382-05-4.
- Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1986), Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, ISBN 0-934613-00-1.
- Yves Kodratoff, Ryszard S. Michalski (1990), Machine Learning: An Artificial Intelligence Approach, Volume III, Morgan Kaufmann, ISBN 1-55860-119-8.
- Ryszard S. Michalski, George Tecuci (1994), Machine Learning: A Multistrategy Approach, Volume IV, Morgan Kaufmann, ISBN 1-55860-251-8.
- Bhagat, P. M. (2005). Pattern Recognition in Industry, Elsevier. ISBN 0-08-044538-1.
- Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2.
- Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern classification (2nd edition), Wiley, New York, ISBN 0-471-05669-3.
- Huang, T.-M., Kecman, V., Kopriva, I. (2006). Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning, Springer-Verlag, Berlin, Heidelberg. ISBN 3-540-31681-7.
- Kecman, Vojislav (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, The MIT Press, Cambridge, MA. ISBN 0-262-11255-8.
- MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms, Cambridge University Press. ISBN 0-521-64298-1.
- Mitchell, T. (1997). Machine Learning, McGraw Hill. ISBN 0-07-042807-7.
- Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann. ISBN 0-12-088407-0.
- Sholom Weiss and Casimir Kulikowski (1991). Computer Systems That Learn, Morgan Kaufmann. ISBN 1-55860-065-5.
- Mierswa, Ingo and Wurst, Michael and Klinkenberg, Ralf and Scholz, Martin and Euler, Timm: YALE: Rapid Prototyping for Complex Data Mining Tasks, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06), 2006.
- Trevor Hastie, Robert Tibshirani and Jerome Friedman (2001). The Elements of Statistical Learning, Springer. ISBN 0387952845 (companion book site).
- Vladimir Vapnik (1998). Statistical Learning Theory. Wiley-Interscience, ISBN 0471030031.
External links
- International Machine Learning Society
- UCI description
- MLnet Mailing List
- Index of Machine Learning Courses
- Kmining List of machine learning, data mining and KDD scientific conferences
- Book "Intelligent Systems and their Societies" by Walter Fritz
- MLpedia — wiki dedicated to machine learning.
- The Encyclopedia of Computational Intelligence