Distribution has been a major trend in computing over the last decade, and it affects the way we compute in several ways: microprocessor architectures are now multi-core, offering several parallel threads of computation, while large-scale systems distribute storage and computation across many processors, machines, or data centers. More recently, researchers have begun to implement distributed algorithms using synthetic DNA, which allows them to execute at enormous scales. The Alistarh group designs algorithms that take advantage of these developments by building software that scales, that is, software whose performance improves as more computation units become available.
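As an illustrative sketch (not course material), "scaling" can be seen in the simplest setting: a computation split across a configurable number of workers, so that adding workers can reduce the running time. The function names and chunking scheme below are our own assumptions, chosen only to make the idea concrete.

```python
# Sketch: a computation that can use more workers when available.
# Here we sum the integers in [0, n) by splitting the range into
# contiguous chunks and summing each chunk in a separate process.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers):
    # Split [0, n) into `workers` contiguous chunks; the last chunk
    # absorbs any remainder so the union of chunks covers the range.
    step = n // workers
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum(1_000_000, 4))  # same result as sum(range(1_000_000))
```

The result is independent of the number of workers; only the running time changes, which is exactly the property a scalable algorithm should preserve while distributing work.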

The course will serve as a hands-on introduction to distributed computing, with a focus on our group’s research. In particular, we will have three modules, covering the following topics:
Distributed Machine Learning
Relaxed Concurrent Data Structures
Molecular Computation

The course will be project-based: each module will consist of an introduction to the topic, followed by a small “research” or “literature understanding” project, which can be completed individually or in teams.

Target group: Any student remotely interested in distributed computing.

Prerequisites: Basic knowledge of mathematics and coding.

Evaluation: Project-based, pass/fail.

Teaching format: Lectures and recitations.

ECTS: 3

Year: 2020

Track segment(s):
CS-ALG Computer Science - Algorithms and Complexity

Teacher(s):
Dan Alistarh

Teaching assistant(s):
Giorgi Nadiradze
Lorenzo Portinale
