An introduction to recurrent neural networks

Teacher
Alessio Micheli
Dipartimento di Informatica, Università di Pisa
micheli[at]di.unipd.it
INF/01

Aim
The aim of the course is to introduce the student to the architectures and learning methods of dynamical/recurrent neural networks for temporal data, and to the analysis of their properties.
The students will learn the differences among, and the advantages of, the different approaches to learning sequential data with neural networks. Basic models, which allow representing time in a neural network, will be detailed along with their learning algorithms. Advanced models, which are playing a key role in the current Artificial Intelligence “revolution”, will also be introduced, with an emphasis on simple and efficient approaches. Examples of applications in various fields, including signal processing, human language processing and human activity recognition, will accompany the presentation of the models.

Syllabus
- Introduction to the problem and methodology: Time representation in neural networks: explicit and implicit forms.
- Recurrent Neural Networks: Models and architectures; Properties (stationarity, causality, unfolding)
- Learning algorithms: BPTT, RTRL.
- Analysis: architectural bias. Reservoir Computing.
- Advanced RNN models. Related approaches and extensions.
- Applications: Case studies.
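As a flavor of the basic models listed in the syllabus, the state update of a simple (vanilla) recurrent network, with its unfolding over a sequence, can be sketched as follows. This is an illustrative sketch only, not course material: all names, sizes and initializations are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 3, 5  # input and hidden-state sizes (illustrative)
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input weights
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
b = np.zeros(n_hidden)                                    # bias

def rnn_states(sequence):
    """Hidden states h_t = tanh(W_in x_t + W_rec h_{t-1} + b), t = 1..T."""
    h = np.zeros(n_hidden)  # initial state h_0
    states = []
    # The same weights are applied at every time step (stationarity),
    # and h_t depends only on past inputs (causality).
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return np.stack(states)

seq = rng.normal(size=(7, n_in))  # a toy sequence of 7 input vectors
H = rnn_states(seq)
print(H.shape)  # (7, 5): one hidden state per time step
```

Unrolling this loop over the T steps of the input is the "unfolding" of the recurrent architecture into a feed-forward one, which is the view underlying the BPTT learning algorithm covered in the course.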

Introductory reading
Z. C. Lipton, C. Elkan.  “The Neural Network That Remembers”, IEEE Spectrum (magazine) 26 Jan 2016
H. Jaeger, H. Haas. “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication”, Science, vol. 304, no. 5667, pp. 78–80, 2004

Course requirements
The student is expected to have basic knowledge of mathematical calculus, machine learning, and feed-forward neural networks.

Examination modality
None

Course material, enrollment and last minute notifications
Made available by the teacher at this Moodle address

Schedule
30 Mar 2020, 15:00-18:00
31 Mar 2020, 10:00-13:00

Location
TBD
