Spring 2022

Taught in English by Dengxin Dai, Alex Liniger, Christos Sakaridis, Martin Hahner, and Jan-Nico Zaech.
6 ECTS. Class size limited to 100 students.

ETH Course Catalogue


Lectures


Date         Time            Topic
25.02.2022   13:15 - 16:00   Fundamentals of a Self-Driving Car
04.03.2022   13:15 - 16:00   Fundamentals of Deep Learning
11.03.2022   13:15 - 16:00   Fundamentals of Deep Learning (continued)
18.03.2022   13:15 - 16:00   Semantic Segmentation
25.03.2022   13:15 - 16:00   Depth Estimation
01.04.2022   13:15 - 16:00   Multi-tasking and 2D Object Detection
08.04.2022   13:15 - 16:00   3D Object Detection
15.04.2022   -               Good Friday (no lecture)
22.04.2022   -               Easter Break (no lecture)
29.04.2022   13:15 - 16:00   Localization
06.05.2022   13:15 - 16:00   All-Season Semantic Scene Understanding
13.05.2022   13:15 - 16:00   Path Planning
20.05.2022   13:15 - 16:00   Motion Planning and Vehicle Control
27.05.2022   13:15 - 16:00   Imitation Learning and Reinforcement Learning
03.06.2022   13:15 - 16:00   Imitation Learning and Reinforcement Learning (continued)



Exercises


Date         Time            Topic
04.03.2022   10:15 - 12:00   Getting Started with Amazon Web Services (AWS)
11.03.2022   10:15 - 12:00   Project 1: Understanding Multimodal Driving Data
18.03.2022   11:00 - 12:00   Q&A for AWS & Project 1
25.03.2022   10:30 - 12:00   Q&A for Project 1
01.04.2022   10:15 - 12:00   Project 2: Multi-task Learning for Semantics and Depth
08.04.2022   10:15 - 12:00   Q&A for Project 2
15.04.2022   -               Good Friday (no exercise session)
22.04.2022   -               Easter Break (no exercise session)
29.04.2022   10:15 - 12:00   Q&A for Project 2
06.05.2022   10:15 - 12:00   Q&A for Project 2
13.05.2022   10:15 - 12:00   Project 3: 3D Object Detection
20.05.2022   10:15 - 12:00   Q&A for Project 3
27.05.2022   10:15 - 12:00   Q&A for Project 3
03.06.2022   10:15 - 12:00   Q&A for Project 3


Room Details

174 seats.

Abstract

Autonomous driving has moved from the realm of science fiction to a very real possibility over the past twenty years, largely due to rapid developments in deep learning, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with an emphasis on the practical use of deep learning throughout.

Objective

Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.

After attending this course, students will:

  1. understand the core technologies of building a self-driving car,
  2. have a good overview of the current state of the art in self-driving cars,
  3. be able to critically analyze and evaluate current research in this area,
  4. be able to implement basic systems for multiple autonomous driving tasks.

Content

We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.
The course covers the following main areas:

  1. Foundation
    1. Fundamentals of Deep Learning
    2. Fundamentals of a Self-Driving Car
  2. Perception
    1. Semantic and Instance Segmentation
    2. Depth Estimation with Images and Sparse LiDAR Data
    3. 3D Object Detection with Images and LiDAR Data
    4. Object Tracking and Motion Prediction
  3. Localization
    1. GPS-Based Localization
    2. Visual Localization and LiDAR-Based Localization
  4. Path Planning and Control
    1. Path Planning
    2. Motion Planning and Vehicle Control
    3. Imitation Learning and Reinforcement Learning
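As a concrete illustration of the multi-task learning topic above (Project 2 combines semantics and depth), the sketch below shows the standard shared-encoder/two-head pattern in PyTorch, the framework used in the exercises. All layer sizes, class counts, and input shapes are illustrative assumptions, not the course's reference architecture.

```python
# Hypothetical sketch of a multi-task network for semantics and depth:
# one shared encoder, two task-specific heads. Layer sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # Shared encoder: two strided convolutions stand in for a real backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task heads: per-pixel class logits and a single-channel depth map,
        # both upsampled back to the input resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        self.depth_head = nn.Sequential(
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)          # features are computed once
        return self.seg_head(feats), self.depth_head(feats)

net = MultiTaskNet()
image = torch.randn(2, 3, 64, 128)       # a toy batch of RGB images
seg_logits, depth = net(image)
```

The design choice this illustrates is that both tasks reuse the same encoder features, so the per-task cost is only a lightweight head; training would combine a segmentation loss and a depth loss on the two outputs.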

Exercises

The exercise projects will involve training complex neural networks and applying them to real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:

  1. Sensor calibration and synchronization to obtain multimodal driving data,
  2. Semantic segmentation and depth estimation with deep neural networks,
  3. Learning to drive with images and map data directly (a.k.a. end-to-end driving).
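For the first problem above, a core operation when fusing multimodal data is projecting LiDAR points into the camera image using the calibration. The sketch below uses a pinhole camera model; the intrinsic matrix `K` and the LiDAR-to-camera transform `T` are illustrative placeholders, not the actual calibration of any course dataset.

```python
# Sketch: project LiDAR points into a camera image via calibration matrices.
# K (intrinsics) and T (LiDAR-to-camera extrinsics) are made-up placeholders.
import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 500.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
T = np.eye(4)                          # LiDAR frame -> camera frame (identity here)

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Map Nx3 LiDAR points to pixel coordinates; keep only points in front."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # Nx4 homogeneous coords
    cam = (T @ homog.T).T[:, :3]                         # transform into camera frame
    cam = cam[cam[:, 2] > 0]                             # drop points behind the camera
    uvw = (K @ cam.T).T                                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                      # perspective division -> pixels

pixels = project_lidar_to_image(np.array([[0.0,  0.0, 10.0],
                                          [1.0, -0.5,  5.0]]))
```

With these placeholder matrices, a point on the optical axis lands at the principal point (320, 240); synchronization would additionally require matching LiDAR sweeps to the nearest camera timestamps, which is omitted here.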

Prerequisites

This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.

Notice

Registration for this class requires the permission of the instructors. Preference is given to EEIT, INF and RSC students.

Exam

Examiners:
Dengxin Dai, Alex Liniger

The grade is based on

  1. the realization of three projects (10%, 25% and 15%), and
  2. a 30-minute oral exam during the session examination period (50%).

Successfully completing the projects is compulsory for attending the exam.
The projects will be group-based, but the contribution of each student is assessed individually.
The examination is based on the contents of the lectures, the associated reading materials and exercises.

The performance assessment is only offered in the session after the course unit. Repetition is only possible after re-enrolling.
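As a worked example of the weighting above (projects 10%, 25% and 15%, oral exam 50%), the final grade is a weighted sum of the component grades. The sample scores below are hypothetical, on the Swiss 1-6 grading scale.

```python
# Worked example of the stated grade weighting; the sample scores are made up.
weights = {"project1": 0.10, "project2": 0.25, "project3": 0.15, "oral_exam": 0.50}
scores  = {"project1": 5.5,  "project2": 5.0,  "project3": 6.0,  "oral_exam": 4.5}

# Weighted sum of components: 0.55 + 1.25 + 0.90 + 2.25
final_grade = sum(weights[k] * scores[k] for k in weights)
print(round(final_grade, 2))  # → 4.95
```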

Acknowledgement

We thank Amazon AWS and HESAI for sponsoring our education efforts,
and Toyota Motor Europe for sponsoring our autonomous driving research via the project TRACE Zurich.