Added advice for num_workers=0 in docs/speed (#10215)
parent 5f4ffdee41
commit ff2d7e8115
@@ -145,6 +145,16 @@ some references, [`1 <https://discuss.pytorch.org/t/guidelines-for-assigning-num
The best approach is to increase ``num_workers`` gradually and stop once you see no further improvement in your training speed.
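As a rough illustration (a minimal sketch, not part of the official docs), you can time one full pass over a dataloader at different ``num_workers`` values; the synthetic ``TensorDataset`` and batch size below are placeholders for your own data:

.. code-block:: python

    import os
    import time

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Synthetic stand-in for a real dataset.
    dataset = TensorDataset(torch.randn(10_000, 32), torch.randint(0, 2, (10_000,)))

    if __name__ == "__main__":  # guard required when workers are spawned (e.g. on Windows/macOS)
        for num_workers in range(0, (os.cpu_count() or 1) + 1, 2):
            loader = DataLoader(dataset, batch_size=64, num_workers=num_workers)
            start = time.perf_counter()
            for _ in loader:  # one full pass isolates data-loading time
                pass
            print(f"num_workers={num_workers}: {time.perf_counter() - start:.2f}s")

Throughput typically plateaus, or even regresses due to inter-process overhead, once the worker count approaches the number of CPU cores.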
For debugging purposes, or for dataloaders that load very small datasets, it is desirable to set ``num_workers=0``. However, this will always log a warning for every dataloader with ``num_workers <= min(2, os.cpu_count())``. In such cases, you can filter out this specific warning:
.. code-block:: python

    import warnings

    # The message argument is a regular expression matched against the start of the warning text.
    warnings.filterwarnings(
        "ignore", ".*does not have many workers.*Consider increasing the value of the `num_workers` argument.*"
    )
Spawn
"""""
When using ``strategy=ddp_spawn`` or training on TPUs, multiple GPUs/TPU cores are used by calling ``.spawn()`` under the hood.
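As a minimal sketch of what this looks like from the user's side (the exact ``Trainer`` arguments depend on your Lightning version):

.. code-block:: python

    from pytorch_lightning import Trainer

    # Lightning creates one process per device via torch.multiprocessing.spawn()
    # under the hood; no special launch command is required.
    trainer = Trainer(gpus=2, strategy="ddp_spawn")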