
model.eval() VS model.train()

Naranjito 2023. 1. 17. 20:11

Some layers, such as Dropout and BatchNorm, behave differently depending on the mode: they are active in their training behaviour during train mode, and must be switched to their inference behaviour during evaluation mode.

model.train()
Sets the model in training mode:
  • activates Dropout layers
  • normalisation layers (e.g. BatchNorm) use per-batch statistics

model.eval()
Sets the model in evaluation (inference) mode:
  • deactivates Dropout layers
  • normalisation layers use running statistics
  Equivalent to model.train(False).
  Typically combined with torch.no_grad() during inference.
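The difference is easy to see with a single Dropout layer. A minimal sketch (the layer and input are made up for illustration): in train mode roughly half the values are zeroed and the survivors are scaled by 1/(1-p), while in eval mode Dropout is a no-op.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()            # training mode: ~half the entries are zeroed,
train_out = drop(x)     # the rest are scaled by 1/(1-0.5) = 2.0

drop.eval()             # evaluation mode: Dropout passes input through unchanged
eval_out = drop(x)

print(train_out)        # mix of 0.0 and 2.0
print(eval_out)         # all 1.0
```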

 

  • model.eval()

During model evaluation, layers that behave stochastically during training (Dropout) or that track batch statistics (BatchNorm) should not use their training behaviour, and calling .eval() switches them over for you.
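Under the hood, .eval() just sets the `training` flag to False on the model and every submodule. A small sketch with a made-up model:

```python
import torch.nn as nn

# hypothetical model, just to show what .eval() toggles
model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8), nn.Dropout(0.3))

model.eval()   # equivalent to model.train(False)
print([m.training for m in model.modules()])  # every flag is now False

model.train()  # switch back: every flag is True again
print([m.training for m in model.modules()])
```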

 

  • torch.no_grad()

Deactivates the autograd engine so that gradients are not computed or stored.

Using with torch.no_grad() reduces memory consumption and speeds up computation, because no computation graph is built.

It is typically used together with model.eval(): no_grad() turns off gradient tracking, while eval() switches layer behaviour, so one does not replace the other.
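What no_grad() changes can be checked directly: outside the block, the result of an operation on a tensor with requires_grad=True is tracked by autograd; inside the block it is not. A minimal sketch:

```python
import torch

w = torch.randn(3, requires_grad=True)
x = torch.ones(3)

y = (w * x).sum()
print(y.requires_grad)        # True: autograd built a graph for this op

with torch.no_grad():
    z = (w * x).sum()
print(z.requires_grad)        # False: no graph, less memory, faster
```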

model.eval()

with torch.no_grad():

    # fetch one batch of (noisy_image, image) pairs
    dataiter = iter(trainloader)
    sample = next(dataiter)   # dataiter.next() was removed in newer PyTorch

    noisy_image, image = sample

    index = 0

    # add a batch dimension before feeding a single image to the model
    pred_image = model(noisy_image[index].unsqueeze(0))
    print(pred_image.squeeze(0).shape)
    show_image(noisy_image[index], image[index], pred_image.squeeze(0))

 

  • model.train()

The model needs to be switched back to training mode after evaluation, otherwise Dropout and BatchNorm keep their inference behaviour during subsequent training.

model.train()
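Putting the three calls together, the usual per-epoch pattern looks like this. A minimal sketch with a made-up model, data, and optimizer:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
data = [(torch.randn(4, 2), torch.randn(4, 1))]
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(2):
    model.train()                 # training behaviour for Dropout/BatchNorm
    for x, target in data:
        opt.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()
        opt.step()

    model.eval()                  # inference behaviour for evaluation
    with torch.no_grad():         # no gradients needed for validation
        for x, target in data:
            val_loss = loss_fn(model(x), target)

model.train()                     # back to train mode before further training
```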

 
