**Fabric is the fast and lightweight way to scale PyTorch models without boilerplate**
______________________________________________________________________
Website • Docs • Getting started • FAQ • Help • Discord
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lightning_fabric)](https://pypi.org/project/lightning_fabric/)
[![PyPI Status](https://badge.fury.io/py/lightning_fabric.svg)](https://badge.fury.io/py/lightning_fabric)
[![PyPI - Downloads](https://pepy.tech/badge/lightning_fabric)](https://pepy.tech/project/lightning_fabric)
[![Conda](https://img.shields.io/conda/v/conda-forge/lightning_fabric?label=conda&color=success)](https://anaconda.org/conda-forge/lightning_fabric)
## Maximum flexibility, minimum code changes
With just a few code changes, run any PyTorch model on any distributed hardware, no boilerplate required!
- Easily switch from running on CPU to GPU (Apple Silicon, CUDA, …), TPU, multi-GPU, or even multi-node training
- Use state-of-the-art distributed training strategies (DDP, FSDP, DeepSpeed) and mixed precision out of the box (see the configuration sketch after the example below)
- All the device logic boilerplate is handled for you
- Designed with multi-billion parameter models in mind
- Build your own custom Trainer using Fabric primitives for training, checkpointing, logging, and more (a checkpointing sketch follows the example below)
```diff
+ import lightning as L
  import torch
  import torch.nn as nn
  from torch.utils.data import DataLoader, Dataset


  class PyTorchModel(nn.Module):
      ...


  class PyTorchDataset(Dataset):
      ...


+ fabric = L.Fabric(accelerator="cuda", devices=8, strategy="ddp")
+ fabric.launch()

- device = "cuda" if torch.cuda.is_available() else "cpu"

  model = PyTorchModel(...)
  optimizer = torch.optim.SGD(model.parameters())

+ model, optimizer = fabric.setup(model, optimizer)
  dataloader = DataLoader(PyTorchDataset(...), ...)
+ dataloader = fabric.setup_dataloaders(dataloader)

  model.train()
  for epoch in range(num_epochs):
      for batch in dataloader:
          input, target = batch
-         input, target = input.to(device), target.to(device)
          optimizer.zero_grad()
          output = model(input)
          loss = loss_fn(output, target)
-         loss.backward()
+         fabric.backward(loss)
          optimizer.step()
          lr_scheduler.step()
```
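
Because the accelerator, devices, strategy, and precision are all arguments to `Fabric`, scaling up or down never touches the training loop itself. A minimal sketch of a few configurations (the strings shown are examples of values Fabric accepts; pick one `Fabric(...)` call per run):

```python
import lightning as L

# Debug locally on a single CPU process (or "mps" on Apple Silicon).
fabric = L.Fabric(accelerator="cpu", devices=1)

# Shard a large model across 8 CUDA GPUs with FSDP and bf16 mixed precision.
fabric = L.Fabric(accelerator="cuda", devices=8, strategy="fsdp", precision="bf16-mixed")

# Scale out to 4 nodes of 8 GPUs each with DeepSpeed.
fabric = L.Fabric(accelerator="cuda", devices=8, num_nodes=4, strategy="deepspeed")

fabric.launch()
```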
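
For building your own Trainer, Fabric also exposes checkpointing primitives that work the same way across strategies, including sharded ones. A minimal sketch, assuming a toy model and an illustrative checkpoint path:

```python
import lightning as L
import torch

fabric = L.Fabric()
fabric.launch()

model = torch.nn.Linear(32, 2)  # toy model, for illustration only
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)

# Collect everything that defines training progress in one state dict.
state = {"model": model, "optimizer": optimizer, "epoch": 0}

# fabric.save / fabric.load write and restore the state under any strategy,
# handling distributed and sharded checkpoints for you.
fabric.save("checkpoint.ckpt", state)
fabric.load("checkpoint.ckpt", state)
```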
______________________________________________________________________
# Getting started
## Install Lightning