
| Name | Modified | Size |
| --- | --- | --- |
| d2l-en-1.0.0-beta0.zip | 2022-12-15 | 177.9 MB |
| d2l-en-1.0.0-beta0-full-mxnet.pdf | 2022-12-15 | 37.4 MB |
| d2l-en-1.0.0-beta0-full-pytorch.pdf | 2022-12-15 | 38.8 MB |
| README.md | 2022-12-15 | 3.8 kB |
| Release v1.0.0-beta0.tar.gz | 2022-12-15 | 116.1 MB |
| Release v1.0.0-beta0.zip | 2022-12-15 | 116.4 MB |

Totals: 6 items, 486.5 MB

D2L has gone 1.0.0-beta0! We thank all 296 contributors for making this happen!

Forthcoming from Cambridge University Press

Chapters 1–11 will be published by Cambridge University Press (early 2023).

New JAX Implementation

We added a new JAX implementation. Get started with `import jax` at https://d2l.ai/chapter_preliminaries/ndarray.html
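As a taste of the new implementation, here is a minimal sketch of NumPy-style tensor manipulation with `jax.numpy`, in the spirit of the ndarray chapter linked above (the specific values are illustrative, not taken from the book):

```python
import jax.numpy as jnp

# Create a 1D tensor of values 0..11, then reshape it to 3x4.
x = jnp.arange(12).reshape(3, 4)

# Elementwise addition with a tensor of ones, as in NumPy.
y = jnp.ones((3, 4))
z = x + y

# Reductions work the same way: 0+1+...+11 plus twelve ones is 78.
total = z.sum()
```

Unlike NumPy arrays, JAX arrays are immutable, so operations like `reshape` and `+` return new arrays rather than modifying in place.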


Thanks to @AnirudhDagar!

New Vol.2 Chapter on Reinforcement Learning

With the advent of ChatGPT (a sibling model to InstructGPT, fine-tuned using reinforcement learning), you may be curious about how to enable ML to make decisions sequentially:

17. Reinforcement Learning
17.1. Markov Decision Process (MDP)
17.2. Value Iteration
17.3. Q-Learning
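To illustrate the kind of algorithm the chapter covers, here is a minimal sketch of value iteration on a toy two-state MDP (the transition model and rewards are illustrative, not from the book):

```python
import numpy as np

# P[s][a] = list of (probability, next_state, reward) transitions.
# Action 1 always yields reward 1; action 0 yields reward 0.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}
gamma = 0.9                      # discount factor
V = np.zeros(len(P))             # initial value estimates

# Repeatedly apply the Bellman optimality backup until (near) convergence.
for _ in range(100):
    V = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s])
        for s in P
    ])

# Always taking action 1 earns reward 1 per step, so the optimal value
# is the geometric series 1 / (1 - gamma) = 10 for both states.
```

Q-learning (Section 17.3) estimates the same optimal values from sampled transitions instead of a known model `P`.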


Thanks to Pratik Chaudhari (University of Pennsylvania and Amazon), Rasool Fakoor @rasoolfa (Amazon), and Kavosh Asadi (Amazon)!

New Vol.2 Chapter on Gaussian Processes

“Everything is a special case of a Gaussian process.” Gaussian processes and deep neural networks are highly complementary and can be combined to great effect:

18. Gaussian Processes
18.1. Introduction to Gaussian Processes
18.2. Gaussian Process Priors
18.3. Gaussian Process Inference
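As a flavor of Section 18.2, here is a minimal sketch of drawing samples from a Gaussian process prior with an RBF (squared-exponential) kernel; the lengthscale and grid are illustrative choices, not the book's:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    """Squared-exponential covariance between two sets of 1D inputs."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(-3, 3, 50)                    # test inputs
K = rbf_kernel(x, x) + 1e-6 * np.eye(50)      # jitter for numerical stability

# A GP prior with zero mean: function draws are samples from a
# multivariate normal with covariance K.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(50), K, size=3)  # 3 prior draws
```

Each row of `samples` is one smooth random function evaluated on the grid; GP inference (Section 18.3) conditions this prior on observed data.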


Thanks to Andrew Gordon Wilson @andrewgordonwilson (New York University and Amazon)!

New Vol.2 Chapter on Hyperparameter Optimization

Tired of setting hyperparameters in a trial-and-error manner? You may wish to check out the systematic hyperparameter optimization approach:

19. Hyperparameter Optimization
19.1. What Is Hyperparameter Optimization?
19.2. Hyperparameter Optimization API
19.3. Asynchronous Random Search
19.4. Multi-Fidelity Hyperparameter Optimization
19.5. Asynchronous Successive Halving
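To see the core idea behind random search (Section 19.3), here is a minimal sketch; the objective function and search space are stand-ins, not the chapter's API:

```python
import random

def objective(lr, batch_size):
    # Stand-in for validation error after training; in practice this
    # would train a model with the given hyperparameters.
    return (lr - 0.01) ** 2 + 0.001 * abs(batch_size - 64)

random.seed(0)
best = None
for _ in range(50):
    # Sample a configuration: log-uniform learning rate, categorical batch size.
    cfg = {"lr": 10 ** random.uniform(-4, -1),
           "batch_size": random.choice([16, 32, 64, 128])}
    err = objective(**cfg)
    if best is None or err < best[0]:
        best = (err, cfg)

# best now holds the lowest observed error and its configuration.
```

The asynchronous and multi-fidelity variants in the chapter build on this loop by evaluating configurations in parallel and stopping unpromising ones early.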


Thanks to Aaron Klein @aaronkl (Amazon), Matthias Seeger @mseeger (Amazon), and Cedric Archambeau (Amazon)!

Fixes and Improvements

Thanks to @finale80 @JojiJoseph @gab-chen @Excelsior7 @shanmo @kxxt @vBarbaros @gui-miotto @bolded @atishaygarg @tuelwer @gopalakrishna-r @qingfengtommy @Mohamad-Jaallouk @biswajitsahoo1111 @315930399 for improving this book!

Source: README.md, updated 2022-12-15