## OVERVIEW

The course offers an introduction to the mathematical theory of machine learning, whose tools underpin modern machine learning algorithms and large-scale data analysis. The course is aimed at master's students in Mathematics.

## AIMS AND CONTENT

LEARNING OUTCOMES

The primary objective is to provide students with the basic language and tools of machine learning, with particular emphasis on the supervised setting. The approach formulates machine learning as a stochastic inverse problem. Students will also study some of the best-known algorithms, including their statistical and computational properties.

AIMS AND LEARNING OUTCOMES

At the end of the course, the student will have:

- a good understanding of the basic notions of machine learning and of the related mathematical tools;
- a good comprehension of the basic concepts and techniques of convex optimization;
- a good knowledge of the statistical and computational properties of some well-known machine learning algorithms;
- some ability to implement machine learning algorithms on synthetic and real data sets.

PREREQUISITES

Calculus 1 and 2, probability and linear algebra.

TEACHING METHODS

Lectures at the blackboard and lab activities

SYLLABUS/CONTENT

- introduction to supervised statistical learning and related concepts: expected error, learning algorithms and their consistency;
- classical supervised learning algorithms (regularized least squares, support vector machines, etc.) and their consistency;
- first-order methods for smooth functions: the gradient method;
- first-order methods for nonsmooth functions: the proximal gradient method, with applications to sparsity (Lasso and elastic net);
- stochastic gradient methods;
- introduction to deep learning.
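To illustrate the flavor of the first-order methods listed above, here is a minimal NumPy sketch of the proximal gradient method (ISTA) applied to the Lasso. This is not official course material; the function name, parameters, and data are illustrative, and the objective is taken as (1/2n)‖Xw − y‖² + λ‖w‖₁.

```python
import numpy as np

def ista_lasso(X, y, lam, step=None, n_iter=500):
    """Proximal gradient (ISTA) for the Lasso:
    min_w (1/2n)||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    if step is None:
        # 1/L, where L = ||X||_2^2 / n is the Lipschitz constant
        # of the gradient of the smooth (least-squares) term
        step = n / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n           # gradient of the smooth term
        z = w - step * grad                    # gradient step
        # proximal step: soft-thresholding, the prox of the l1 norm
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

# usage: sparse recovery on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = ista_lasso(X, y, lam=0.05)
```

The soft-thresholding step is what induces sparsity: coordinates whose gradient update falls below the threshold `step * lam` are set exactly to zero, which is the mechanism behind the Lasso and elastic net items above.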

RECOMMENDED READING/BIBLIOGRAPHY

- L. Rosasco, **Introductory Machine Learning Notes**, University of Genoa (http://lcsl.mit.edu/courses/ml/1718/MLNotes.pdf)
- I. Steinwart and A. Christmann, **Support Vector Machines**, Springer, ISBN 978-0-387-77241-7
- F. Cucker and D.-X. Zhou, **Learning Theory: An Approximation Theory Viewpoint**, Cambridge University Press, 2007, ISBN 978-0-521-86559-3
- S. Boyd and L. Vandenberghe, **Convex Optimization**, Cambridge University Press, 2004, ISBN 0-521-83378-7

## TEACHERS AND EXAM BOARD

**Office hours:** By appointment, which can be arranged in person or via email: villa@dima.unige.it

Exam Board

SILVIA VILLA (President)

ERNESTO DE VITO

LORENZO ROSASCO (President Substitute)

## LESSONS

LESSONS START

In agreement with the official academic calendar

Class schedule

All class schedules are posted on the EasyAcademy portal.

## EXAMS

EXAM DESCRIPTION

To pass the exam, students must write and present a short report (max 10 pages), choosing one of the following options:

- analyze and discuss a research article on themes close to the ones studied in class;
- implement an algorithm presented in class (in some programming language);
- use available code to analyze synthetic and/or real datasets and discuss the obtained results.

The topic of the report must be agreed upon in advance with the instructors.

ASSESSMENT METHODS

The preparation and discussion of the report are aimed at verifying that the student has achieved an independent, critical reasoning capability in the context of machine learning.

In addition, the written report will be used to assess the student's ability to present their ideas in written form.

The wide range of possible topics makes it possible to adapt the required skills to students at both the Bachelor's and Master's level.