Add Lightning Ecosystem to docs (#11399)

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Kaushik B 2022-03-01 19:05:35 +05:30 committed by GitHub
parent a110bbfe1a
commit 5a2b0d6319
6 changed files with 250 additions and 5 deletions

View File

@@ -1,6 +1,7 @@
-Bolts
-=====
+Lightning Bolts
+===============
-`PyTorch Lightning Bolts <https://lightning-bolts.readthedocs.io/en/stable/>`_, is our official collection
+`PyTorch Lightning Bolts <https://lightning-bolts.readthedocs.io/en/latest/>`_ is our official collection
 of prebuilt models across many research domains.

 .. code-block:: bash

View File

@@ -0,0 +1,28 @@
Ecosystem CI
============
`Ecosystem CI <https://github.com/PyTorchLightning/ecosystem-ci>`_ automates issue discovery for your projects against Lightning nightly and releases.
It is a lightweight repository that provides easy configuration of Continuous Integration runs on CPUs and GPUs.
Any user who wants to keep their project aligned with current and future Lightning releases can use the Ecosystem CI to configure their integrations.

Read more: `Stay Ahead of Breaking Changes with the New Lightning Ecosystem CI <https://devblog.pytorchlightning.ai/stay-ahead-of-breaking-changes-with-the-new-lightning-ecosystem-ci-b7e1cf78a6c7>`_
--------------
***********************
Integrate a New Project
***********************
Follow the instructions below to add a new project to the PyTorch Lightning ecosystem.
1. Fork the Ecosystem CI repository so that you can create a `new Pull Request <https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork>`_ and work within a specific branch.
2. Create a new config file in the ``configs/<Organization-name>`` folder and name it ``<project-name>.yaml``.
3. Define the runtime for CPU and link the config for GPU (see the sketch after this list):

   - For CPU integrations, list the OS and Python version combinations to run with GitHub Actions.
   - For GPU integrations, only add the path to the config (the OS/Linux and Python version are fixed) to run with Azure Pipelines.

4. Add a contact to the ``.github/CODEOWNERS`` list for your organization folder or just for a single project.
5. Create a draft PR that meets all of the requirements above.
6. (Optional) Join the ``#alerts-ecosystem-ci`` channel in our `Slack <https://join.slack.com/t/pytorch-lightning/shared_invite/zt-pw5v393p-qRaDgEk24~EjiZNBpSQFgQ>`_ workspace to be notified if your project breaks.
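
For orientation, a config might look roughly like the following. This is a hypothetical sketch: the field names (``target_repository``, ``dependencies``, ``testing``) are assumptions, so always check the `Ecosystem CI repo <https://github.com/PyTorchLightning/ecosystem-ci>`_ for the actual schema.

.. code-block:: yaml

    # configs/MyOrg/my-project.yaml -- hypothetical example, not the authoritative schema
    target_repository:
      HTTPS: https://github.com/MyOrg/my-project.git  # the project under test
    dependencies:
      - name: pytorch-lightning
        checkout: master  # test against the latest Lightning
    testing:
      dirs:
        - tests  # where the project's test suite lives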
To learn more about Ecosystem CI, please refer to the `Ecosystem CI repo <https://github.com/PyTorchLightning/ecosystem-ci>`_.
Note that some of the implementation details described above may evolve over time.

View File

@@ -0,0 +1,76 @@
Lightning Flash
===============
`Lightning Flash <https://lightning-flash.readthedocs.io/en/stable/>`_ is a high-level deep learning framework for fast prototyping, baselining, fine-tuning, and solving deep learning problems.
Flash makes complex AI recipes for over 15 tasks across 7 data domains accessible to all.
It is built for beginners with a simple API that requires very little deep learning background, and for data scientists, Kagglers, applied ML practitioners, and deep learning researchers who
want a quick way to get a deep learning baseline with the advanced features PyTorch Lightning offers.
.. code-block:: bash

    pip install lightning-flash
-----------------
*********************************
Using Lightning Flash in 3 Steps!
*********************************
1. Load your Data
-----------------
All data loading in Flash is performed via a ``from_*`` classmethod of a ``DataModule``.
Which ``DataModule`` to use and which ``from_*`` methods are available depend on the task you want to perform.
For example, for image segmentation where your data is stored in folders, you would use the ``SemanticSegmentationData``'s `from_folders <https://lightning-flash.readthedocs.io/en/latest/reference/semantic_segmentation.html#from-folders>`_ method:
.. code-block:: python

    from flash.image import SemanticSegmentationData

    dm = SemanticSegmentationData.from_folders(
        train_folder="data/CameraRGB",
        train_target_folder="data/CameraSeg",
        val_split=0.1,
        image_size=(256, 256),
        num_classes=21,
    )
------------
2. Configure your Model
-----------------------
Our tasks come loaded with pre-trained backbones and (where applicable) heads.
You can view the available backbones to use with your task using `available_backbones <https://lightning-flash.readthedocs.io/en/latest/general/backbones.html>`_.
Once you've chosen, create the model:
.. code-block:: python

    from flash.image import SemanticSegmentation

    print(SemanticSegmentation.available_heads())
    # ['deeplabv3', 'deeplabv3plus', 'fpn', ..., 'unetplusplus']
    print(SemanticSegmentation.available_backbones("fpn"))
    # ['densenet121', ..., 'xception'] # + 113 models
    print(SemanticSegmentation.available_pretrained_weights("efficientnet-b0"))
    # ['imagenet', 'advprop']

    model = SemanticSegmentation(
        head="fpn", backbone="efficientnet-b0", pretrained="advprop", num_classes=dm.num_classes
    )
------------
3. Finetune!
------------
.. code-block:: python

    from flash import Trainer

    trainer = Trainer(max_epochs=3)
    trainer.finetune(model, datamodule=dm, strategy="freeze")
    trainer.save_checkpoint("semantic_segmentation_model.pt")
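
Once saved, the checkpoint can be loaded back for inference. The following is a minimal sketch, assuming the standard ``load_from_checkpoint`` and ``Trainer.predict`` APIs and a hypothetical ``predict_folder`` of unlabeled images:

.. code-block:: python

    from flash import Trainer
    from flash.image import SemanticSegmentation, SemanticSegmentationData

    # reload the fine-tuned weights (standard LightningModule API)
    model = SemanticSegmentation.load_from_checkpoint("semantic_segmentation_model.pt")

    # "data/CameraRGB" is a hypothetical folder of images to segment
    datamodule = SemanticSegmentationData.from_folders(predict_folder="data/CameraRGB")

    predictions = Trainer().predict(model, datamodule=datamodule)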
To learn more about Lightning Flash, please refer to the `Lightning Flash documentation <https://lightning-flash.readthedocs.io/en/latest/>`_.

View File

@@ -0,0 +1,91 @@
TorchMetrics
============
`TorchMetrics <https://torchmetrics.readthedocs.io>`_ is a collection of machine learning metrics for distributed,
scalable PyTorch models, with an easy-to-use API for creating custom metrics. It includes 60+ PyTorch metric implementations and
is rigorously tested for all edge cases.
.. code-block:: bash

    pip install torchmetrics
In TorchMetrics, we offer the following benefits:
- A standardized interface to increase reproducibility
- Reduced boilerplate
- Distributed-training compatible
- Rigorously tested
- Automatic accumulation over batches
- Automatic synchronization across multiple devices
-----------------
Example 1: Functional Metrics
-----------------------------
Below is a simple example of calculating accuracy using the functional interface:
.. code-block:: python

    import torch
    import torchmetrics

    # simulate a classification problem
    preds = torch.randn(10, 5).softmax(dim=-1)
    target = torch.randint(5, (10,))

    acc = torchmetrics.functional.accuracy(preds, target)
------------
Example 2: Module Metrics
-------------------------
The example below shows how to use the class-based interface:
.. code-block:: python

    import torch
    import torchmetrics

    # initialize metric
    metric = torchmetrics.Accuracy()

    n_batches = 10
    for i in range(n_batches):
        # simulate a classification problem
        preds = torch.randn(10, 5).softmax(dim=-1)
        target = torch.randint(5, (10,))

        # metric on current batch
        acc = metric(preds, target)
        print(f"Accuracy on batch {i}: {acc}")

    # metric on all batches using custom accumulation
    acc = metric.compute()
    print(f"Accuracy on all data: {acc}")

    # Resetting internal state so that the metric is ready for new data
    metric.reset()
------------
Example 3: TorchMetrics with Lightning
--------------------------------------
The example below shows how to use a metric in your :doc:`LightningModule <../common/lightning_module>`:
.. code-block:: python

    import torchmetrics
    from pytorch_lightning import LightningModule


    class MyModel(LightningModule):
        def __init__(self):
            ...
            self.accuracy = torchmetrics.Accuracy()

        def training_step(self, batch, batch_idx):
            x, y = batch
            preds = self(x)
            ...
            # log step metric
            self.accuracy(preds, y)
            self.log("train_acc_step", self.accuracy, on_epoch=True)
            ...
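
TorchMetrics also makes it easy to create custom metrics by subclassing ``torchmetrics.Metric``. Below is a minimal sketch (``MyAccuracy`` is a hypothetical example; ``add_state``, ``update``, and ``compute`` are the hooks the ``Metric`` base class expects):

.. code-block:: python

    import torch
    import torchmetrics


    class MyAccuracy(torchmetrics.Metric):
        def __init__(self):
            super().__init__()
            # states are automatically accumulated over batches and synchronized across devices
            self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
            self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

        def update(self, preds: torch.Tensor, target: torch.Tensor):
            preds = preds.argmax(dim=-1)
            self.correct += (preds == target).sum()
            self.total += target.numel()

        def compute(self):
            return self.correct.float() / self.total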

View File

@@ -0,0 +1,45 @@
Lightning Transformers
======================
`Lightning Transformers <https://lightning-transformers.readthedocs.io/en/latest/>`_ offers a flexible interface for training and fine-tuning SOTA Transformer models
using the :doc:`PyTorch Lightning Trainer <../common/trainer>`.
.. code-block:: bash

    pip install lightning-transformers
In Lightning Transformers, we offer the following benefits:
- Powered by `PyTorch Lightning <https://www.pytorchlightning.ai/>`_ - Accelerators, custom Callbacks, Loggers, and high performance scaling with minimal changes.
- Backed by `HuggingFace Transformers <https://huggingface.co/transformers/>`_ models and datasets, spanning multiple modalities and tasks within NLP/Audio and Vision.
- Task Abstraction for Rapid Research & Experimentation - Build your own custom transformer tasks across all modalities with little friction.
- Powerful config composition backed by `Hydra <https://hydra.cc/>`_ - simply swap out models, optimizers, schedulers, tasks, and many more configurations without touching the code.
- Seamless Memory and Speed Optimizations - Out-of-the-box training optimizations such as `DeepSpeed ZeRO <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#deepspeed>`_ or `FairScale Sharded Training <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#sharded-training>`_ with no code changes.
-----------------
Using Lightning Transformers
----------------------------
Lightning Transformers has a collection of tasks for common NLP problems such as `language_modeling <https://lightning-transformers.readthedocs.io/en/latest/tasks/nlp/language_modeling.html#language-modeling>`_,
`translation <https://lightning-transformers.readthedocs.io/en/latest/tasks/nlp/translation.html#translation>`_ and more. To use, simply:
1. Pick a task to train (passed to ``train.py`` as ``task=``)
2. Pick a dataset (passed to ``train.py`` as ``dataset=``)
3. Customize the backbone, optimizer, or any component within the config
4. Add any :doc:`Lightning supported parameters and optimizations <../common/trainer>`
.. code-block:: bash

    # backbone.pretrained_model_name_or_path=<BACKBONE>  optionally changes the HF backbone
    # optimizer=<OPTIMIZER>                              optionally specifies the optimizer (default: AdamW)
    # trainer.<ANY_TRAINER_FLAGS>                        optionally specifies Lightning Trainer arguments
    python train.py \
        task=<TASK> \
        dataset=<DATASET> \
        backbone.pretrained_model_name_or_path=<BACKBONE> \
        optimizer=<OPTIMIZER> \
        trainer.<ANY_TRAINER_FLAGS>
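
For instance, a concrete invocation might look like the following; the exact ``task`` and ``dataset`` names here are illustrative assumptions, so check the Lightning Transformers docs for the available options:

.. code-block:: bash

    # fine-tune a language model on WikiText on one GPU (illustrative names)
    python train.py \
        task=nlp/language_modeling \
        dataset=nlp/language_modeling/wikitext \
        trainer.gpus=1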
To learn more about Lightning Transformers, please refer to the `Lightning Transformers documentation <https://lightning-transformers.readthedocs.io/en/latest/>`_.

View File

@@ -110,10 +110,14 @@ PyTorch Lightning
 .. toctree::
    :maxdepth: 1
-   :name: Bolts
-   :caption: Bolts
+   :name: Lightning Ecosystem
+   :caption: Lightning Ecosystem

    ecosystem/metrics
    ecosystem/flash
    ecosystem/bolts
    ecosystem/transformers
    ecosystem/ecosystem-ci

 .. toctree::
    :maxdepth: 1