Deep Learning for Autonomous Driving

Spring 2020

Taught in English by Dengxin Dai and Alex Liniger.
6 ECTS. Class size limited to 80 students.

ETH Course Catalogue

Abstract

Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments in deep learning, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, with an emphasis on the practical use of deep learning throughout.

Objective

Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.

After attending this course, students will:

  1. understand the core technologies of building a self-driving car,
  2. have a good overview of the current state of the art in self-driving cars,
  3. be able to critically analyze and evaluate current research in this area,
  4. be able to implement basic systems for multiple autonomous driving tasks.

Content

We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.
The course covers the following main areas:

  1. Foundation
    1. Fundamentals of deep learning
    2. Fundamentals of a self-driving car
  2. Perception
    1. Semantic and instance segmentation
    2. Depth estimation with images and sparse LiDAR data
    3. 3D object detection with images and LiDAR data
    4. Object tracking and motion prediction
  3. Localization
    1. GPS-based localization
    2. Visual and LiDAR-based localization
  4. Path Planning and Control
    1. Path planning for autonomous driving
    2. Motion planning and vehicle control
    3. Imitation learning and reinforcement learning for self-driving cars

Exercises

The exercise projects will involve training complex neural networks and applying them to real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:

  1. Sensor calibration and synchronization to obtain multimodal driving data,
  2. Semantic segmentation and depth estimation with deep neural networks,
  3. Learning to drive directly from images and map data (a.k.a. end-to-end driving).
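As a small illustration of the second task (not part of the official exercise material), semantic segmentation amounts to assigning each pixel the class with the highest predicted score. A minimal NumPy sketch, where the per-pixel scores and class names are purely hypothetical stand-ins for the output of a trained network:

```python
import numpy as np

# Hypothetical per-pixel class scores from a segmentation network:
# shape (num_classes, H, W), here 3 classes over a 2x2 image.
logits = np.array([
    [[2.0, 0.1], [0.3, 0.2]],   # class 0: e.g. "road"
    [[0.5, 3.0], [0.1, 0.4]],   # class 1: e.g. "car"
    [[0.2, 0.2], [4.0, 1.5]],   # class 2: e.g. "pedestrian"
])

# The segmentation map is the per-pixel argmax over the class axis.
segmentation_map = logits.argmax(axis=0)
print(segmentation_map.tolist())  # [[0, 1], [2, 2]]
```

In the exercises, the scores would of course come from a deep network (e.g. a PyTorch model) rather than a hand-written array, but the decision rule at the output is the same.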

Prerequisites

This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.

Notice

Registration for this class requires the permission of the instructors. Preference is given to EEIT, INF and RSC students.

Exam

Examiners:
Dengxin Dai, Alex Liniger

The grade is based on

  1. the realization of three projects (10%, 20% and 20%), and
  2. a 30-minute oral exam during the session examination period (50%).

Successfully completing the projects is compulsory for attending the exam.
The projects will be group-based, but the contribution of each student will be assessed individually.
The examination is based on the contents of the lectures, the associated reading materials and exercises.

The performance assessment is only offered in the session after the course unit. Repetition is only possible after re-enrolling.

Acknowledgement

We thank Amazon AWS and HESAI for sponsoring our education efforts,
and Toyota Motor Europe for sponsoring our autonomous driving research via the project TRACE Zurich.