Prerequisites
Elementary calculus and linear algebra; basics of probability theory and statistics.
Description
The course provides an introduction to key concepts and algorithms for neural networks, with a strong emphasis on Deep Learning and its applications. It covers the following topics:
• Part One: Classical Neural Networks
o Basics of statistical pattern recognition
o Linear models: Perceptron, Logistic Regression, Support Vector Machines
o Multi-layer Perceptron and Backpropagation
• Part Two: Deep Learning
o Convolutional Networks
o Restricted Boltzmann Machines
o Software and hardware for Deep Learning
Moreover, several state-of-the-art applications of Deep Learning to image recognition, computer vision, and language modeling will be discussed. The course consists of weekly lectures, a few programming assignments (in Python or Matlab), and a final written exam.
Course objectives
The objectives of this course are:
• to provide a general introduction to neural networks, deep learning, and their applications
• to develop practical skills for designing and training neural networks for tasks such as image classification, speech recognition, and forecasting
• to learn some popular tools for training deep architectures: Theano, Torch, and Pylearn2
Timetable
The most recent timetable can be found on the LIACS website.
Mode of instruction
Weekly lectures and programming assignments.
Assessment method
The final grade will be the weighted average of the grades for:
• programming assignments (60%)
• written exam (40%)
Blackboard
Course information and materials are available on Blackboard.
Reading list
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press (in preparation).
Registration
You have to sign up for classes and examinations (including resits) in uSis. See the uSis website for more information and activity codes.
Contact
Study coordinator Computer Science: Riet Derogee