Spring 2020

Taught in English by Dengxin Dai and Alex Liniger.
6 ECTS. Class size limited to 80 students.

ETH Course Catalogue

Teaching Staff


Announcements


10.02.2020
Lecture times have been extended until 16:00, so instead of two 45-minute blocks we will now have three.
As a result of this change, we have moved the exercise session to the morning: it will now take place from 10:00 to 12:00 in ETZ D 61.1 and D 61.2.
If you can't or don't want to use your personal laptop for the exercises, the two rooms together offer around 60 Linux workstations.




13.02.2020
There will be no exercise session on 21.02.2020; exercises will start on 28.02.2020.

19.02.2020
We will be using Piazza as the class discussion forum; all class participants will receive an email with the access code soon. The system is designed to get you help quickly and efficiently from fellow classmates, TAs and instructors. Rather than emailing questions to the TAs or instructors, we encourage you to post them directly on Piazza - you can even do so anonymously.
From now on, all course-relevant announcements will only be posted there.


Exercises


Date         Time            Topic
28.02.2020   10:15 - 12:00   Getting Started with Amazon Web Services (AWS)
06.03.2020   10:15 - 12:00   Project 1: Understanding Multimodal Driving Data (at home)
13.03.2020   10:15 - 12:00   PyTorch Tutorial and Q&A for Project 1
20.03.2020   10:15 - 11:00   Q&A for Project 1
23.03.2020   18:00 - 18:45   Q&A for Project 1
27.03.2020   10:15 - 11:00   Project 2: Multi-task learning for semantics and depth
31.03.2020                   Tutorial: AWS, Git, Training Instances
03.04.2020   10:15 - 12:00   Q&A for Project 2
10.04.2020                   Good Friday (no session)
17.04.2020                   Easter Break (no session)
24.04.2020   10:15 - 12:00   Q&A for Project 2
01.05.2020                   May Day (no session)
08.05.2020   10:15 - 12:00   Q&A for Project 2
15.05.2020   10:15 - 12:00   Q&A for Project 2
22.05.2020   10:15 - 12:00   Q&A for Project 2
29.05.2020   10:15 - 12:00   Q&A for Project 2


Lectures


Date         Time            Topic
21.02.2020   13:15 - 16:00   Fundamentals of a self-driving car
28.02.2020   13:15 - 16:00   Fundamentals of deep learning
06.03.2020   13:15 - 16:00   Fundamentals of deep learning (continued)
13.03.2020   13:15 - 16:00   Semantic Segmentation and Inertial Navigation System
20.03.2020   13:15 - 16:00   Depth Estimation
27.03.2020   13:15 - 16:00   Multi-tasking and 2D Object Detection
03.04.2020   13:15 - 16:00   3D Object Detection
10.04.2020                   Good Friday (no lecture)
17.04.2020                   Easter Break (no lecture)
24.04.2020   13:15 - 16:00   Localization
01.05.2020                   May Day (no lecture)
08.05.2020   13:15 - 16:00   Path planning for autonomous driving
15.05.2020   13:15 - 16:00   Trajectory planning and vehicle control
22.05.2020   13:15 - 16:00   Imitation learning and reinforcement learning for self-driving cars
29.05.2020   13:15 - 16:00   Lane Detection and Maps


Room Details


87 Seats

Abstract

Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with a particular focus on the practical use of deep learning throughout.

Objective

Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.

After attending this course, students will:

  1. understand the core technologies of building a self-driving car,
  2. have a good overview of the current state of the art in self-driving cars,
  3. be able to critically analyze and evaluate current research in this area,
  4. be able to implement basic systems for multiple autonomous driving tasks.

Content

We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.
The course covers the following main areas:

  1. Foundation
    1. Fundamentals of deep learning
    2. Fundamentals of a self-driving car
  2. Perception
    1. Semantic and instance segmentation
    2. Depth estimation with images and sparse LiDAR data
    3. 3D object detection with images and LiDAR data
    4. Object tracking and motion prediction
  3. Localization
    1. GPS-based localization
    2. Visual localization and LiDAR-based localization
  4. Path Planning and Control
    1. Path planning for autonomous driving
    2. Motion planning and vehicle control
    3. Imitation learning and reinforcement learning for self-driving cars

Exercises

The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:

  1. Sensor calibration and synchronization to obtain multimodal driving data,
  2. Semantic segmentation and depth estimation with deep neural networks,
  3. Learning to drive with images and map data directly (a.k.a. end-to-end driving).
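To give a flavour of the first problem, here is a minimal sketch of timestamp synchronization between a camera and a LiDAR stream. The function name, sampling rates and tolerance are illustrative assumptions, not course code: each camera frame is paired with the nearest LiDAR sweep, and pairs further apart than a tolerance are discarded.

```python
from bisect import bisect_left

def match_timestamps(cam_ts, lidar_ts, max_offset=0.05):
    """Pair each camera timestamp with its nearest LiDAR timestamp.

    Both lists must be sorted, in seconds. Pairs further apart than
    max_offset are dropped. Returns (camera_index, lidar_index) pairs.
    """
    pairs = []
    for i, t in enumerate(cam_ts):
        j = bisect_left(lidar_ts, t)
        # The nearest neighbour is either at the insertion point or just before it.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(lidar_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(lidar_ts[k] - t))
        if abs(lidar_ts[best] - t) <= max_offset:
            pairs.append((i, best))
    return pairs

# A 10 Hz camera against slightly offset LiDAR sweeps (made-up numbers):
print(match_timestamps([0.00, 0.10, 0.20], [0.01, 0.12, 0.35]))
# → [(0, 0), (1, 1)]  -- the last frame has no sweep within 50 ms
```

Real driving datasets additionally require extrinsic calibration (a rigid transform between the sensors), but nearest-neighbour matching on timestamps is the usual first step.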

Prerequisites

This is an advanced graduate-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises require basic knowledge of Python and use libraries such as PyTorch, scikit-learn and scikit-image.

Notice

Registration for this class requires the permission of the instructors. Preference is given to EEIT, INF and RSC students.

Exam

Examiners:
Dengxin Dai, Alex Liniger

The grade is based on

  1. the realization of two projects (15% and 30%), and
  2. a 30-minute oral exam during the session examination period (55%).

Successfully completing the projects is compulsory for admission to the exam.
The projects are group-based, but we assess the contribution of each student individually.
The examination covers the contents of the lectures, the associated reading materials and the exercises.

The performance assessment is only offered in the session after the course unit; repetition is only possible after re-enrolling.

Acknowledgement

We thank Amazon AWS and HESAI for sponsoring our education efforts,
and Toyota Motor Europe for sponsoring our autonomous driving research via the project TRACE Zurich.