diff --git a/README.md b/README.md
index d2f3cc397e..7df2bfe035 100644
--- a/README.md
+++ b/README.md
@@ -191,39 +191,62 @@ trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
```
+### Advanced features
+Lightning has [40+ advanced features](https://pytorch-lightning.readthedocs.io/en/stable/trainer.html#trainer-flags) designed for professional AI research at scale.
+
+Here are some examples:
+
Train on GPUs without code changes
- ```python
- # 8 GPUs
- trainer = Trainer(max_epochs=1, gpus=8)
+ ```python
+ # 8 GPUs
+ trainer = Trainer(max_epochs=1, gpus=8)
- # 256 GPUs
- trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
- ```
+ # 256 GPUs
+ trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
+ ```
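+
+  The training call itself does not change; a minimal sketch reusing `autoencoder`, `train`, and `val` from the quick start above:
+
+  ```python
+  # same call as on CPU -- Lightning handles device placement
+  trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
+  ```
+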
Train on TPUs without code changes
- ```python
- trainer = Trainer(tpu_cores=8)
- ```
+ ```python
+ trainer = Trainer(tpu_cores=8)
+ ```
-#### And even export for production via onnx or torchscript
-```python
-# torchscript
-autoencoder = LitAutoEncoder()
-torch.jit.save(autoencoder.to_torchscript(), "model.pt")
+
+ 16-bit precision
+
+ ```python
+ trainer = Trainer(precision=16)
+ ```
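+
+  16-bit precision is typically combined with GPU training; a minimal sketch combining the flags shown above:
+
+  ```python
+  # mixed precision on 8 GPUs
+  trainer = Trainer(gpus=8, precision=16)
+  ```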
+
-# onnx
-with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
- autoencoder = LitAutoEncoder()
- input_sample = torch.randn((1, 64))
- autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
- os.path.isfile(tmpfile.name)
-```
+
+  Export to torchscript (JIT) for production use
+
+ ```python
+ # torchscript
+ autoencoder = LitAutoEncoder()
+ torch.jit.save(autoencoder.to_torchscript(), "model.pt")
+ ```
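+
+  The saved TorchScript file can be reloaded with plain PyTorch, no Lightning required. A minimal sketch, assuming the `(1, 64)` input shape used in the ONNX example below:
+
+  ```python
+  # load the exported model and run inference without Lightning
+  model = torch.jit.load("model.pt")
+  model.eval()
+  with torch.no_grad():
+      embedding = model(torch.randn(1, 64))  # input shape assumed; adjust to your encoder
+  ```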
+
+  Export to ONNX for production use
+
+ ```python
+ # onnx
+ with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
+ autoencoder = LitAutoEncoder()
+ input_sample = torch.randn((1, 64))
+ autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
+ os.path.isfile(tmpfile.name)
+ ```
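+
+  The exported file can then be served with any ONNX runtime. A minimal sketch using the separate `onnxruntime` package (`pip install onnxruntime`, not a Lightning dependency), continuing from the snippet above:
+
+  ```python
+  import numpy as np
+  import onnxruntime
+
+  # run the exported model outside of PyTorch
+  session = onnxruntime.InferenceSession(tmpfile.name)
+  input_name = session.get_inputs()[0].name
+  outputs = session.run(None, {input_name: np.random.randn(1, 64).astype(np.float32)})
+  ```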
+
#### For advanced users, you can still own complex training loops