:orphan:

Lightning Transformers
======================

`Lightning Transformers <https://lightning-transformers.readthedocs.io/en/latest/>`_ offers a flexible interface for training and fine-tuning SOTA Transformer models
using the :doc:`PyTorch Lightning Trainer <../common/trainer>`.

.. code-block:: bash

    pip install lightning-transformers

In Lightning Transformers, we offer the following benefits:

- Powered by `PyTorch Lightning <https://www.pytorchlightning.ai/>`_ - Accelerators, custom Callbacks, Loggers, and high performance scaling with minimal changes.
- Backed by `HuggingFace Transformers <https://huggingface.co/transformers/>`_ models and datasets, spanning multiple modalities and tasks within NLP/Audio and Vision.
- Task Abstraction for Rapid Research & Experimentation - Build your own custom transformer tasks across all modalities with little friction.
- Powerful config composition backed by `Hydra <https://hydra.cc/>`_ - swap out models, optimizers, schedulers, tasks, and many more configurations without touching the code.
- Seamless Memory and Speed Optimizations - Out-of-the-box training optimizations such as `DeepSpeed ZeRO <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#deepspeed>`_ or `FairScale Sharded Training <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#sharded-training>`_ with no code changes, as sketched below.
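
Because the configuration is composed with Hydra, these optimizations are command-line overrides rather than code changes. A minimal sketch of enabling DeepSpeed ZeRO Stage 2 with 16-bit precision (``deepspeed_stage_2`` and ``precision`` are standard Lightning Trainer options; the exact override paths for your version are listed in the Lightning Transformers docs):

.. code-block:: bash

    # Enable DeepSpeed ZeRO Stage 2 sharding and 16-bit precision purely
    # through Trainer overrides; no task or model code changes required.
    python train.py \
        task=<TASK> \
        dataset=<DATASET> \
        trainer.strategy=deepspeed_stage_2 \
        trainer.precision=16
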
-----------------

Using Lightning Transformers
----------------------------

Lightning Transformers has a collection of tasks for common NLP problems such as `language_modeling <https://lightning-transformers.readthedocs.io/en/latest/tasks/nlp/language_modeling.html#language-modeling>`_,
`translation <https://lightning-transformers.readthedocs.io/en/latest/tasks/nlp/translation.html#translation>`_ and more. To use, simply:

1. Pick a task to train (passed to ``train.py`` as ``task=``)
2. Pick a dataset (passed to ``train.py`` as ``dataset=``)
3. Customize the backbone, optimizer, or any component within the config
4. Add any :doc:`Lightning supported parameters and optimizations <../common/trainer>`

.. code-block:: bash

    # Optionally change the HF backbone, choose an optimizer (default: AdamW),
    # and pass any Lightning Trainer arguments.
    python train.py \
        task=<TASK> \
        dataset=<DATASET> \
        backbone.pretrained_model_name_or_path=<BACKBONE> \
        optimizer=<OPTIMIZER> \
        trainer.<ANY_TRAINER_FLAGS>

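For example, a hypothetical fine-tuning run for causal language modeling (the ``task`` and ``dataset`` config names below are illustrative; the configs shipped with your version are listed in the Lightning Transformers docs):

.. code-block:: bash

    # Fine-tune GPT-2 on a WikiText-style dataset across two GPUs
    python train.py \
        task=nlp/language_modeling \
        dataset=nlp/language_modeling/wikitext \
        backbone.pretrained_model_name_or_path=gpt2 \
        trainer.accelerator=gpu \
        trainer.devices=2
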
To learn more about Lightning Transformers, please refer to the `Lightning Transformers documentation <https://lightning-transformers.readthedocs.io/en/latest/>`_.