The goal of this course is to introduce students to the basic methods of machine learning. Students gain a theoretical understanding and practical working knowledge of regression and classification models in the supervised learning setting and of clustering models in the unsupervised setting. They learn about the relationship between model bias and variance, the fundamentals of assessing model quality, and the basic techniques of data preprocessing and multidimensional data visualization. Practical demonstrations use the pandas and scikit-learn libraries in Python.
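As an illustration of the kind of practical demonstration mentioned above, the following is a minimal sketch of a supervised-learning workflow with pandas-style data and scikit-learn. The dataset (scikit-learn's bundled Iris data), the choice of a decision tree model, and all parameter values are assumptions made for this example, not part of the course materials.

# Minimal sketch: train and evaluate a classifier on a small tabular dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# load_iris(as_frame=True) returns the features as a pandas DataFrame
# and the labels as a pandas Series.
iris = load_iris(as_frame=True)
X, y = iris.data, iris.target

# Hold out part of the data to assess model quality on unseen samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a decision tree classifier and evaluate it on the held-out data.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))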
The course aims to introduce students to the rapidly developing field of machine learning.
1. Deisenroth M. P.: Mathematics for Machine Learning. Cambridge University Press, 2020. ISBN 978-1108455145.
2. Alpaydin E.: Introduction to Machine Learning. MIT Press, 2020. ISBN 978-0262043793.
3. Murphy K. P.: Machine Learning: A Probabilistic Perspective. MIT Press, 2012. ISBN 978-0-262-01802-9.
4. Bishop C. M.: Pattern Recognition and Machine Learning. Springer, 2006. ISBN 978-0-387-31073-2.
5. Hastie T., Tibshirani R., Friedman J.: The Elements of Statistical Learning. Springer, 2009. ISBN 978-0-387-84857-0.
Further information: https://courses.fit.cvut.cz/BI-ML1/
Syllabus of lectures:
1. Introduction and basic concepts of machine learning
2. Supervised learning setup, classification setup, decision trees
3. Regression setup, k-nearest neighbors for classification and regression
4. Linear regression - ordinary least squares
5. Linear regression - geometrical interpretation, numerical issues
6. Ridge regression, bias-variance trade-off
7. Logistic regression
8. Ensemble methods (random forests, AdaBoost)
9. Model evaluation, cross-validation
10. Feature selection
11. Unsupervised learning setup, association rules
12. Hierarchical clustering, the k-means algorithm
Syllabus of tutorials:
1. Introduction, Python and Jupyter notebooks
2. Supervised learning setup, classification setup, decision trees
3. Regression setup, k-nearest neighbors for classification and regression
4. Linear regression - ordinary least squares
5. Linear regression - geometrical interpretation, numerical issues
6. Ridge regression, bias-variance trade-off
7. Logistic regression
8. Ensemble methods (random forests, AdaBoost)
9. Model evaluation, cross-validation
10. Feature selection
11. Unsupervised learning setup, association rules
12. Hierarchical clustering, the k-means algorithm
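As an illustrative sketch of how several of the tutorial topics fit together (ridge regression, the bias-variance trade-off, and cross-validation), the following example compares regularization strengths by cross-validated score. The dataset and the grid of alpha values are assumptions made for the example, not taken from the course materials.

# Minimal sketch: choose the ridge regularization strength by 5-fold cross-validation.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# A small regression dataset bundled with scikit-learn.
X, y = load_diabetes(return_X_y=True)

# Stronger regularization (larger alpha) trades variance for bias;
# cross-validation estimates which setting generalizes best.
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5)  # default scoring: R^2
    print(f"alpha={alpha:g}: mean CV R^2 = {scores.mean():.3f}")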
https://courses.fit.cvut.cz/BI-ML1/
Knowledge of calculus, linear algebra, and probability theory is assumed.
Last update: Lankaš Filip (21.02.2023)