I am collecting here a series of (guided) exercises, implemented in Jupyter notebooks, to explore several topics in deep learning, from automatic differentiation to explainability. This is a work in progress that will take a long time to complete, but most notebooks can already be used in standalone form.
<aside> 📌
Most exercises are relatively simple, so I suggest disabling any code autocomplete features before starting. 😅
</aside>
The notebooks in this first category cover the fundamentals: they are assumed as essential prerequisites for everything that follows, so they are not repeated in the preliminaries listed below.

ID | Name | Libraries | Preliminaries |
---|---|---|---|
Py01 | Introduction to Python | - | - |
NP01 | Introduction to NumPy | NumPy | - |
NP02 | Logistic regression in NumPy | NumPy | NP01 |
NP03 | k-NN in NumPy | NumPy | NP01 |
SK01 | Introduction to scikit-learn | scikit-learn | - |
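
As a taste of what these fundamentals cover, here is a minimal sketch of logistic regression trained by gradient descent in plain NumPy, in the spirit of NP02 (the data, learning rate, and variable names are illustrative, not taken from the notebook):

```python
import numpy as np

# Toy data (illustrative only): 100 samples, 2 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    # Sigmoid of the linear model.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Gradient of the average binary cross-entropy loss.
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

print(np.mean((p > 0.5) == y))  # training accuracy
```
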
The next group of notebooks introduces PyTorch, from low-level building blocks up to a Vision Transformer:

ID | Name | Libraries | Preliminaries |
---|---|---|---|
PT01 | Introduction to PyTorch | PyTorch | - |
PT02 | Logistic regression from scratch | PyTorch | PT01 |
PT03 | Convolutional networks from scratch | PyTorch | PT01 |
PT04 | Building a Vision Transformer | PyTorch, PyTorch Lightning | PT01 |
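
To give an idea of the level of abstraction, PT02 builds a logistic regression training loop from scratch; a minimal sketch of such a loop (toy data and hyperparameters are illustrative, not taken from the notebook) is:

```python
import torch

# Toy data (illustrative only): 100 samples, 2 features, binary labels.
torch.manual_seed(0)
X = torch.randn(100, 2)
y = (X[:, 0] + X[:, 1] > 0).float()

# Raw parameters instead of nn.Linear, to stay "from scratch".
w = torch.zeros(2, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=0.1)

for _ in range(200):
    logits = X @ w + b
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
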
The following notebooks cover JAX and part of its ecosystem (Keras, Equinox, Optax):

ID | Name | Libraries | Preliminaries |
---|---|---|---|
JAX01 | Logistic regression in JAX | JAX | - |
JAX02 | CNNs in Keras and JAX | Keras, JAX | JAX01 |
JAX03 | Introduction to Equinox | Equinox, JAX | JAX01 |
JAX04 | Text classification in Equinox | Equinox, Optax, JAX | JAX01, JAX02 |
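
The JAX notebooks revolve around its functional style (pure loss functions, `jax.grad`, `jax.jit`); a minimal sketch in the spirit of JAX01, with illustrative data and hyperparameters, is:

```python
import jax
import jax.numpy as jnp

# Toy data (illustrative only): 100 samples, 2 features, binary labels.
X = jax.random.normal(jax.random.PRNGKey(0), (100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(jnp.float32)

def loss_fn(params, X, y):
    logits = X @ params["w"] + params["b"]
    # Numerically stable binary cross-entropy on the logits.
    return jnp.mean(jnp.maximum(logits, 0) - logits * y
                    + jnp.log1p(jnp.exp(-jnp.abs(logits))))

@jax.jit
def step(params, X, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, X, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros(2), "b": jnp.zeros(())}
for _ in range(200):
    params = step(params, X, y)
```
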
Finally, these notebooks explore explainability techniques:

ID | Name | Libraries | Preliminaries |
---|---|---|---|
XAI01 | Saliency maps | PyTorch, Captum | PT04 |
XAI02 | Data attribution | PyTorch | PT04 |
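
As an example of the tools involved, XAI01 uses Captum for saliency maps; a minimal sketch (the model below is a stand-in, not the Vision Transformer trained in PT04) is:

```python
import torch
from captum.attr import Saliency

# Stand-in classifier (illustrative only); any image model works the same way.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.randn(1, 3, 32, 32, requires_grad=True)  # dummy 32x32 RGB image
saliency = Saliency(model)

# Gradient of the class-3 score with respect to the input pixels.
attributions = saliency.attribute(image, target=3)
print(attributions.shape)  # same shape as the input image
```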