Concentration inequalities provide quantitative insight into how functions of (independent) random variables deviate from their expectation. In this course, we will discuss classical concentration inequalities, such as Hoeffding's or McDiarmid's, as well as recent extensions. In addition to the mathematical treatment, we will discuss applications in machine learning and potentially other fields.
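As a small illustration of the flavor of the course, the following sketch compares the two-sided Hoeffding bound P(|X̄ − E[X̄]| ≥ t) ≤ 2·exp(−2nt²) for i.i.d. variables in [0, 1] against a Monte Carlo estimate of the actual tail probability. The function names and the choice of Uniform[0, 1] variables are illustrative, not taken from the course material.

```python
import math
import random

def hoeffding_bound(n, t):
    # Two-sided Hoeffding bound for the mean of n i.i.d. variables in [0, 1]:
    # P(|mean - E[mean]| >= t) <= 2 * exp(-2 * n * t^2)
    return 2 * math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials=20000, seed=0):
    # Monte Carlo estimate of P(|mean of n Uniform[0,1] draws - 1/2| >= t)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) >= t:
            hits += 1
    return hits / trials

n, t = 100, 0.1
print(empirical_tail(n, t), "<=", hoeffding_bound(n, t))
```

The empirical tail probability is typically far below the Hoeffding bound, which is the point: the bound is distribution-free, holding for any bounded variables, at the cost of some slack for any particular distribution.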

References:
Concentration Inequalities by S. Boucheron, G. Lugosi, P. Massart
Probability in High Dimension by R. van Handel
High-Dimensional Probability by R. Vershynin
The Concentration of Measure Phenomenon by M. Ledoux

Target group: PhD students from all years, postdocs, anyone else who is interested

Prerequisites: strong background in probability and analysis

Evaluation: pass/fail based on homework

Teaching format: None

ECTS: 3 Year: 2020

Track segment(s):
CS-AI Computer Science - Artificial Intelligence
DSSC-PROB Data Science and Scientific Computing - Probabilistic Models
MAT-PROB Mathematics - Probability

Teacher(s):
Christoph Lampert, Jan Maas

Teaching assistant(s):
