==== Measure memory used by GPUs? ====

Use ''torch.cuda.[[https://pytorch.org/blog/understanding-gpu-memory-1/?hss_channel=tw-776585502606721024|memory]]'', e.g., to measure the effect of clearing gradients at the end of each [[https://towardsdatascience.com/epoch-vs-iterations-vs-batch-size-4dfb9c7ce9c9|iteration]]. With ignite, this can be done using an event handler that calls ''optimizer.[[https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch|zero_grad]](set_to_none=True)''.
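
A minimal sketch of how this might look. The model, optimizer, and random data are purely illustrative; the sketch attaches an ignite handler to ''Events.ITERATION_COMPLETED'' that releases the gradients with ''optimizer.zero_grad(set_to_none=True)'' and then reports ''torch.cuda.memory_allocated()'' and ''torch.cuda.max_memory_allocated()''.

<code python>
import torch
import torch.nn.functional as F
from ignite.engine import Engine, Events

# Toy model, optimizer, and data -- purely illustrative.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(engine, batch):
    x, y = batch
    loss = F.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

@trainer.on(Events.ITERATION_COMPLETED)
def clear_grads_and_log_memory(engine):
    # set_to_none=True frees the gradient tensors instead of zeroing them
    # in place, so their memory is released until the next backward pass.
    optimizer.zero_grad(set_to_none=True)
    print(f"iter {engine.state.iteration}: "
          f"allocated={torch.cuda.memory_allocated() / 2**20:.1f} MiB, "
          f"peak={torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")

# Random batches just to make the sketch runnable on a CUDA device.
data = [(torch.randn(32, 1024, device="cuda"),
         torch.randn(32, 1024, device="cuda")) for _ in range(10)]
trainer.run(data, max_epochs=1)
</code>

Comparing the reported numbers with and without the ''set_to_none=True'' call shows how much memory the gradient tensors occupy between iterations.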