Note

This is the website of the course taught in Spring 2021. The course taught in Spring 2022 has its own, separate website.


Spring 2021

Taught in English by Dengxin Dai and Alex Liniger.
6 ECTS. Class size limited to 105 students.

ETH Course Catalogue

Teaching Staff


Lectures


Date         Time            Topic
26.02.2021   13:15 - 16:00   Fundamentals of a Self-Driving Car
05.03.2021   13:15 - 16:00   Fundamentals of Deep Learning
12.03.2021   13:15 - 16:00   Fundamentals of Deep Learning (continued)
19.03.2021   13:15 - 16:00   Semantic Image Segmentation
26.03.2021   13:15 - 16:00   Depth Estimation
02.04.2021   no lecture      Good Friday
09.04.2021   no lecture      Easter Break
16.04.2021   13:15 - 16:00   Multi-tasking and 2D Object Detection
23.04.2021   13:15 - 16:00   3D Object Detection
30.04.2021   13:15 - 16:00   Localization
07.05.2021   13:15 - 16:00   Lane Detection
14.05.2021   13:15 - 16:00   Path Planning
21.05.2021   13:15 - 16:00   Motion Planning and Vehicle Control
28.05.2021   13:15 - 16:00   Imitation Learning and Reinforcement Learning
04.06.2021   13:15 - 16:00   Imitation Learning and Reinforcement Learning (continued)



Exercises


Date         Time            Topic
05.03.2021   10:15 - 12:00   Project 1: Understanding Multimodal Driving Data
12.03.2021   10:15 - 12:00   Q&A for Project 1
19.03.2021   10:15 - 12:00   AWS (1st pdf) and Q&A for Project 1 (2nd pdf)
26.03.2021   10:15 - 12:00   Project 2: Multitask Learning for Semantics and Depth
02.04.2021   no session      Good Friday
09.04.2021   no session      Easter Break
16.04.2021   10:15 - 12:00   Q&A for Project 2
23.04.2021   10:15 - 12:00   Q&A for Project 2
30.04.2021   10:15 - 12:00   Q&A for Project 2
07.05.2021   10:15 - 12:00   Q&A for Project 2
14.05.2021   10:15 - 12:00   Project 3: 3D Object Detection
21.05.2021   10:15 - 12:00   Q&A for Project 3
28.05.2021   10:15 - 12:00   Q&A for Project 3
04.06.2021   10:15 - 12:00   Q&A for Project 3



Abstract

Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required to build a self-driving car, with an emphasis on the practical use of deep learning throughout.

Objective

Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.

After attending this course, students will:

  1. understand the core technologies of building a self-driving car,
  2. have a good overview of the current state of the art in self-driving cars,
  3. be able to critically analyze and evaluate current research in this area,
  4. be able to implement basic systems for multiple autonomous driving tasks.

Content

We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.
The course covers the following main areas:

  1. Foundation
    1. Fundamentals of Deep Learning
    2. Fundamentals of a Self-Driving Car
  2. Perception
    1. Semantic and Instance Segmentation
    2. Depth Estimation with Images and Sparse LiDAR Data
    3. 3D Object Detection with Images and LiDAR Data
    4. Object Tracking and Motion Prediction
  3. Localization
    1. GPS-Based Localization
    2. Visual Localization and LiDAR-Based Localization
  4. Path Planning and Control
    1. Path Planning
    2. Motion Planning and Vehicle Control
    3. Imitation Learning and Reinforcement Learning

Exercises

The exercise projects involve training complex neural networks and applying them to real-world, multimodal driving datasets. In particular, students should be able to develop systems that address the following problems (a minimal PyTorch sketch of the second point follows the list):

  1. Sensor calibration and synchronization to obtain multimodal driving data,
  2. Semantic segmentation and depth estimation with deep neural networks,
  3. Learning to drive with images and map data directly (a.k.a. end-to-end driving).
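
A minimal PyTorch sketch of the multitask idea behind the second point, sharing one encoder between semantic segmentation and depth estimation. This is only an illustrative toy example: the class TinyMultiTaskNet, its layer sizes, the equal loss weighting, and the dummy data are assumptions made here for clarity, not the architecture or code used in the actual projects.

    # Toy multitask network: a shared encoder with one head for per-pixel
    # semantic class logits and one head for dense depth regression.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMultiTaskNet(nn.Module):
        def __init__(self, num_classes: int = 19):
            super().__init__()
            # Shared convolutional encoder (downsamples the input by a factor of 4).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.seg_head = nn.Conv2d(64, num_classes, 1)  # semantic logits
            self.depth_head = nn.Conv2d(64, 1, 1)          # dense depth

        def forward(self, x):
            feats = self.encoder(x)
            size = x.shape[-2:]
            seg = F.interpolate(self.seg_head(feats), size=size,
                                mode="bilinear", align_corners=False)
            depth = F.interpolate(self.depth_head(feats), size=size,
                                  mode="bilinear", align_corners=False)
            return seg, F.relu(depth)  # ReLU keeps predicted depth non-negative

    model = TinyMultiTaskNet()
    images = torch.randn(2, 3, 128, 256)              # dummy RGB batch
    seg_labels = torch.randint(0, 19, (2, 128, 256))  # dummy semantic labels
    depth_labels = torch.rand(2, 1, 128, 256) * 80.0  # dummy depth in metres

    seg_logits, depth_pred = model(images)
    # Joint loss: cross-entropy for semantics plus L1 for depth (equal weights).
    loss = F.cross_entropy(seg_logits, seg_labels) + F.l1_loss(depth_pred, depth_labels)
    loss.backward()
    print(f"toy multitask loss: {loss.item():.3f}")

In practice, how the two task losses are weighted or balanced is itself an important design choice in multitask learning.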

Prerequisites

This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.

Notice

Registration for this class requires the permission of the instructors. Preference is given to EEIT, INF and RSC students.

Exam

Examiners:
Dengxin Dai, Alex Liniger

The grade is based on

  1. the realization of three projects (10%, 20% and 20%), and
  2. a 30-minute oral exam during the session examination period (50%); a small worked example of these weights follows the list.
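
As a rough illustration of how the stated weights combine, assuming a simple linear weighting of the component grades and ignoring any rounding rules (which are not specified here; the example grades are made up):

    # Hypothetical component grades on the 1-6 scale, combined with the stated weights.
    weights = {"project_1": 0.10, "project_2": 0.20, "project_3": 0.20, "oral_exam": 0.50}
    grades  = {"project_1": 5.50, "project_2": 5.00, "project_3": 4.75, "oral_exam": 5.00}

    final_grade = sum(weights[k] * grades[k] for k in weights)
    print(f"weighted final grade: {final_grade:.2f}")  # 0.55 + 1.00 + 0.95 + 2.50 = 5.00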

Successfully completing the projects is compulsory for attending the exam.
The projects are group-based, but each student's contribution is assessed individually.
The examination is based on the contents of the lectures, the associated reading materials and exercises.

The performance assessment is only offered in the session after the course unit. Repetition is only possible after re-enrolling.

Acknowledgement

We thank Amazon AWS and HESAI for sponsoring our education efforts, and Toyota Motor Europe for sponsoring our autonomous driving research via the project TRACE Zurich.