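# Feature request issue form (GitHub issue forms YAML syntax), rendered when a contributor opens a "Feature request" issue.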
name: Feature request
description: Propose a feature for this project
labels: ["needs triage", "feature"]

body:
  - type: textarea
    attributes:
      label: Description & Motivation
      description: A clear and concise description of the feature proposal
      placeholder: |
        Please outline the motivation for the proposal.
        Is your feature request related to a problem? e.g., I'm always frustrated when [...].
        If this is related to another GitHub issue, please link it here

  - type: textarea
    attributes:
      label: Pitch
      description: A clear and concise description of what you want to happen.
    validations:
      required: false

  - type: textarea
    attributes:
      label: Alternatives
      description: A clear and concise description of any alternative solutions or features you've considered, if any.
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional context
      description: Add any other context or screenshots about the feature request here.
    validations:
      required: false

  - type: markdown
    attributes:
      value: >
        ### If you enjoy Lightning, check out our other projects! ⚡

        - [**Metrics**](https://github.com/Lightning-AI/metrics):
          Machine learning metrics for distributed, scalable PyTorch applications.

        - [**Fabric**](https://github.com/Lightning-AI/lightning/tree/master/src/lightning/fabric):
          Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.

        - [**GPT**](https://github.com/Lightning-AI/lit-GPT):
          Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
          Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.