---
## FAQ

**Starting a new project?**

[Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)

**Why lightning?**

Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc., you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you.

Lightning has 3 goals in mind:

1. Maximal flexibility while abstracting out the common boilerplate across research projects.
2. Reproducibility. If all projects use the LightningModule template, it will be much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
3. Democratizing PyTorch power-user features. Distributed training? 16-bit precision? You know you need them but don't want to take the time to implement them? All good... these come built into Lightning (see the sketch below).
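
As a rough sketch (flag names have shifted across Lightning versions, so treat these exact arguments as assumptions and check the docs for your release), enabling these features is typically just a matter of Trainer arguments:

```python
from pytorch_lightning import Trainer

# illustrative configuration: multi-GPU distributed training with 16-bit precision
trainer = Trainer(
    gpus=8,                      # train on 8 GPUs
    distributed_backend='ddp',   # distributed data-parallel training
    precision=16,                # 16-bit (mixed) precision
)
trainer.fit(model)  # model is your LightningModule
```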
**Who is Lightning for?**

- Professional researchers
- Ph.D. students
- Corporate production teams

If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :)
**What does lightning control for me?**

Everything in Blue!
This is how lightning separates the science (red) from engineering (blue).

![Overview](docs/source/_images/general/pl_overview.gif)
**How much effort is it to convert?**

If your code is not a huge mess, you should be able to organize it into a LightningModule in less than 1 hour.
If your code IS a mess, then you needed to clean it up anyway ;)

[Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).
[Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU).
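
If it helps to see the target, here is a rough sketch of what the converted result typically looks like (the class name, layer sizes, and `train_loader` are illustrative, not part of any required API beyond the LightningModule hooks):

```python
import torch
from torch.nn import functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# train_loader is your existing PyTorch DataLoader:
# trainer = pl.Trainer()
# trainer.fit(LitClassifier(), train_loader)
```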
**How flexible is it?**

As you can see, you're just organizing your PyTorch code - there's no abstraction.

And for the stuff that the Trainer abstracts out, you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backward pass.

For example, here you could override how gradients are zeroed, without worrying about GPUs, TPUs or 16-bit precision, since we already handle it:

```python
class LitModel(LightningModule):

    def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
        optimizer.zero_grad()
```
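
And for a fully custom backward pass you can override the `backward` hook. A minimal sketch, assuming a hook signature along these lines (the exact arguments have varied between Lightning versions, so check the docs for yours):

```python
class LitModel(LightningModule):

    def backward(self, trainer, loss, optimizer, optimizer_idx):
        # replace the default loss.backward() with whatever you need,
        # e.g. gradient scaling or a higher-order backward
        loss.backward()
```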
For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer.
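
For instance, a minimal callback might look like this (the hooks shown follow the documented `Callback` interface; the print statements are just illustrative stand-ins for your own logic):

```python
from pytorch_lightning.callbacks import Callback


class PrintingCallback(Callback):

    def on_train_start(self, trainer, pl_module):
        print('Training is starting')

    def on_train_end(self, trainer, pl_module):
        print('Training is ending')


# hand it to the Trainer:
# trainer = Trainer(callbacks=[PrintingCallback()])
```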
**What types of research works?**

Anything! Remember that this is just organized PyTorch code.
The training step defines the core complexity found in the training loop.

##### Could be as complex as a seq2seq

```python
# define what happens for training here
def training_step(self, batch, batch_idx):
    x, y = batch

    # define your own forward and loss calculation
    hidden_states = self.encoder(x)

    # even as complex as a seq-2-seq + attn model
    # (this is just a toy, non-working example to illustrate)
    start_token = '<SOS>'
    last_hidden = torch.zeros(...)
    loss = 0
    for step in range(max_seq_len):
        attn_context = self.attention_nn(hidden_states, start_token)
        pred = self.decoder(start_token, attn_context, last_hidden)
        last_hidden = pred
        pred = self.predict_nn(pred)

        # compute the loss on the prediction, not the hidden state
        loss += self.loss(pred, y[step])

    # toy example as well
    loss = loss / max_seq_len
    return {'loss': loss}
```
##### Or as basic as CNN image classification

```python
# define what happens for validation here
def validation_step(self, batch, batch_idx):
    x, y = batch

    # or as basic as a CNN classification
    out = self(x)
    loss = my_loss(out, y)
    return {'loss': loss}
```

**Does Lightning slow down my PyTorch?**

No! Lightning is meant for research/production cases that require high performance.

We have tests to ensure we get the EXACT same results, within a 600 ms difference per epoch. In reality, Lightning adds about 300 ms of overhead per epoch.
[Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).

Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.
**How does Lightning compare with Ignite and fast.ai?**
[Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
**Is this another library I have to learn?**

Nope! We use pure PyTorch everywhere and don't add unnecessary abstractions!

**Are there plans to support Python 2?**
Nope.
**Are there plans to support virtualenv?**

Nope. Please use anaconda or miniconda.

```bash
conda activate my_env
pip install pytorch-lightning
```
---
## Licence
Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is Patent Pending.