The autograd package provides automatic differentiation for all operations on Tensors.

  1. If you set a tensor's .requires_grad attribute to True, autograd will start to track all operations on it

  2. Call .backward() when you have finished your computation and all the gradients will be computed automatically

  3. You can retrieve the gradient from the .grad attribute

  4. .detach() returns a new tensor that shares the same data but is detached from the computation history, so future computations on it are not tracked
    • You can also wrap a block of code with torch.no_grad(). This is useful when evaluating the model, as you don't need to compute gradients during evaluation (see the sketch after this list)
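Here is a minimal sketch of the workflow described above, using a small made-up computation (the specific tensors and operations are just for illustration):

```python
import torch

# Step 1: track operations on x by setting requires_grad=True at creation
x = torch.ones(2, 2, requires_grad=True)

# Build a small computation; autograd records every operation on x
y = x + 2
z = (y * y * 3).mean()

# Step 2: compute all gradients automatically
z.backward()

# Step 3: retrieve the gradient of z with respect to x
print(x.grad)  # a 2x2 tensor, 4.5 in every entry for this example

# Step 4: stop tracking with .detach() ...
x_detached = x.detach()
print(x_detached.requires_grad)  # False

# ... or disable tracking for a whole block, e.g. during evaluation
with torch.no_grad():
    out = (x * 2).sum()
print(out.requires_grad)  # False
```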

Tensor and Function are interconnected and build up an acyclic graph which encodes the whole history of computation. Each tensor has a .grad_fn attribute that references a Function that created the Tensor.
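A small example (again with arbitrary tensors chosen for illustration) shows how .grad_fn records which operation created each tensor, while user-created tensors have no grad_fn:

```python
import torch

a = torch.ones(2, 2, requires_grad=True)
b = a + 2          # created by an addition
c = (b * b).sum()  # created by a sum

print(a.grad_fn)   # None: a was created by the user, not by an operation
print(b.grad_fn)   # <AddBackward0 object at ...>
print(c.grad_fn)   # <SumBackward0 object at ...>
```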
