parent 079544a902
commit e0ba4d46e1
@@ -115,7 +115,7 @@ See the Full Example

 .. note::

-   When running a Lighting App on your local machine, any :class:`~lightning.app.utilities.packaging.cloud_compute.CloudCompute`
+   When running a Lightning App on your local machine, any :class:`~lightning.app.utilities.packaging.cloud_compute.CloudCompute`
    configuration (including a :class:`~lightning.app.storage.mount.Mount`) is ignored at runtime. If you need access to
    these files on your local disk, you should download a copy of them to your machine.
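Annotation (not part of the diff): a minimal sketch of the kind of configuration the note above refers to, assuming the public `lightning.app` API; the bucket URL, mount path, and `DataWork` class are hypothetical placeholders.

```python
# Sketch: a Mount attached to a CloudCompute config. Per the note above,
# this mount is ignored when the app runs on a local machine, so files
# must be downloaded manually for local access.
from lightning.app import CloudCompute, LightningWork
from lightning.app.storage import Mount


class DataWork(LightningWork):  # hypothetical work
    def run(self):
        # In the cloud, bucket contents appear under mount_path;
        # locally, this directory will not be populated.
        ...


work = DataWork(
    cloud_compute=CloudCompute(
        "cpu",
        mounts=Mount(source="s3://my-example-bucket/", mount_path="/content/data/"),
    )
)
```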
@@ -186,7 +186,7 @@ configuration files and automatic creation of objects, so you don't need to do i

 .. note::

-   Lighting automatically registers all subclasses of :class:`~lightning.pytorch.core.LightningModule`,
+   Lightning automatically registers all subclasses of :class:`~lightning.pytorch.core.LightningModule`,
    so the complete import path is not required for them and can be replaced by the class name.

 .. note::
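Annotation (not part of the diff): a sketch of what the auto-registration shorthand in this note means in practice, assuming `LightningCLI` in subclass mode; `MyModel` and `train.py` are hypothetical names.

```python
# Because LightningModule subclasses are registered automatically, a config
# or CLI flag can name the class directly instead of its full import path.
from lightning.pytorch import LightningModule
from lightning.pytorch.cli import LightningCLI


class MyModel(LightningModule):  # hypothetical subclass
    ...


# With no model class fixed, the CLI resolves registered subclasses by name:
#   python train.py fit --model MyModel
# instead of:
#   python train.py fit --model my_package.models.MyModel
cli = LightningCLI()
```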
@@ -175,7 +175,7 @@ Configure all aspects of Lightning for advanced usecases.

 .. displayitem::
    :header: Level 16: Own the training loop
-   :description: Learn all the ways of owning your raw PyTorch loops with Lighting.
+   :description: Learn all the ways of owning your raw PyTorch loops with Lightning.
    :col_css: col-md-6
    :button_link: levels/advanced_level_17.html
    :height: 150
@@ -31,7 +31,7 @@ Configure all aspects of Lightning for advanced usecases.

 .. displayitem::
    :header: Level 16: Own the training loop
-   :description: Learn all the ways of owning your raw PyTorch loops with Lighting.
+   :description: Learn all the ways of owning your raw PyTorch loops with Lightning.
    :col_css: col-md-6
    :button_link: advanced_level_17.html
    :height: 150
@@ -74,7 +74,7 @@
     - rely on Torch native AMP
     - `PR12312`_

-  * - used ``LightingModule.use_amp`` attribute
+  * - used ``LightningModule.use_amp`` attribute
     - rely on Torch native AMP
     - `PR12315`_
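Annotation (not part of the diff): a minimal sketch of the "Torch native AMP" that this upgrade row points to, written as raw PyTorch and assuming a CUDA device; within Lightning, the equivalent is the Trainer's `precision` flag rather than hand-rolled scaling.

```python
# Torch native AMP: autocast for mixed-precision forward passes plus a
# GradScaler to keep fp16 gradients from underflowing.
import torch

model = torch.nn.Linear(8, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(4, 8, device="cuda")
target = torch.randn(4, 2, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(data), target)

scaler.scale(loss).backward()  # scale the loss before backward
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()
```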
@@ -29,7 +29,7 @@ The main architecture is the following:
 <img src="https://pl-public-data.s3.amazonaws.com/assets_lightning/examples/fabric/reinforcement-learning/fabric_coupled.png">
 </p>

-where `N+1` processes (labelled *rank-0*, ..., *rank-N* in the figure above) will be spawned by Fabric/PyTorch, each of them running `M+1` independent copies of the environment (*Env-0*, ..., *Env-M*). Every rank has its own copy of the agent, represented by a [LightningModule](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html)/[Pytorch Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), which will be updated through distributed training.
+where `N+1` processes (labelled *rank-0*, ..., *rank-N* in the figure above) will be spawned by Fabric/PyTorch, each of them running `M+1` independent copies of the environment (*Env-0*, ..., *Env-M*). Every rank has its own copy of the agent, represented by a [LightningModule](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html)/[PyTorch Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), which will be updated through distributed training.

 ### Raw PyTorch:
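Annotation (not part of the diff): a minimal sketch of the process/environment layout the README paragraph describes, not the example's actual code; the device count, environment count, and agent network are hypothetical values.

```python
# Fabric spawns one process per rank; each rank builds its own vectorized
# stack of environment copies and its own copy of the agent.
import gymnasium as gym
import torch
from lightning.fabric import Fabric

NUM_ENVS = 4  # M + 1 environment copies per rank (hypothetical)

fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")  # N + 1 = 2 ranks
fabric.launch()

# Every rank owns NUM_ENVS independent environment copies...
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(NUM_ENVS)]
)

# ...and its own copy of the agent; setup() wraps it so gradient updates
# are synchronized across ranks during distributed training.
agent = torch.nn.Linear(4, 2)  # placeholder policy network
optimizer = torch.optim.Adam(agent.parameters(), lr=1e-3)
agent, optimizer = fabric.setup(agent, optimizer)
```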
@@ -46,7 +46,7 @@ def from_compiled(model: "torch._dynamo.OptimizedModule") -> "pl.LightningModule
     if not isinstance(orig_module, pl.LightningModule):
         _check_mixed_imports(model)
         raise ValueError(
-            f"`model` is expected to be a compiled LightingModule. Found a `{type(orig_module).__name__}` instead"
+            f"`model` is expected to be a compiled LightningModule. Found a `{type(orig_module).__name__}` instead"
         )

     orig_module._compiler_ctx = {
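Annotation (not part of the diff): a sketch of the invariant this guard enforces. `torch.compile` returns a `torch._dynamo.OptimizedModule` whose `_orig_mod` attribute holds the original module, and `from_compiled` requires that original to be a `LightningModule`; `BoringModel` here is a hypothetical minimal module.

```python
import torch
import torch._dynamo
import lightning.pytorch as pl


class BoringModel(pl.LightningModule):  # hypothetical minimal module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(2, 2)

    def forward(self, x):
        return self.layer(x)


compiled = torch.compile(BoringModel())
assert isinstance(compiled, torch._dynamo.OptimizedModule)
assert isinstance(compiled._orig_mod, pl.LightningModule)  # what the guard checks

# Compiling a plain nn.Module instead would trip the ValueError in the diff:
plain = torch.compile(torch.nn.Linear(2, 2))
assert not isinstance(plain._orig_mod, pl.LightningModule)
```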