mirror of https://github.com/explosion/spaCy.git

prettier changes

parent 02eb0e2f1a
commit 6f55065ed9
@@ -1402,7 +1402,7 @@ provider's API.

 Currently, these models are provided as part of the core library:

 | Model | Provider | Supported names | Default name | Default config |
-| ----------------------------- | ----------------- | ------------------------------------------------------------------------------------------------------------------ | ---------------------- | ------------------------------------ |
+| ----------------------------- | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------- | ------------------------------------ |
 | `spacy.GPT-4.v1` | OpenAI | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]` | `"gpt-4"` | `{}` |
 | `spacy.GPT-4.v2` | OpenAI | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]` | `"gpt-4"` | `{temperature=0.0}` |
 | `spacy.GPT-4.v3` | OpenAI | All names of [GPT-4 models](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) offered by OpenAI | `"gpt-4"` | `{temperature=0.0}` |
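For reference, a registered model from the table above is selected in the pipeline's `config.cfg`. A minimal sketch for `spacy.GPT-4.v2`, following the standard `spacy-llm` config layout (check the `spacy-llm` API docs for the exact section names in your version):

```ini
# Sketch of a spacy-llm model block in config.cfg; the section path
# [components.llm.model] follows spacy-llm's documented layout.
[components.llm.model]
@llm_models = "spacy.GPT-4.v2"
name = "gpt-4"
config = {"temperature": 0.0}
```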
@@ -1461,7 +1461,13 @@ different than working with models from other providers:
 `"completions"` or `"chat"`, depending on whether the deployed model is a
 completion or chat model.

-**⚠️ A note on `spacy.Ollama.v1`.** The Ollama models are all local models that run on your GPU-backed machine. Please refer to the [Ollama docs](https://ollama.com/) for more information on installation, but the basic flow will see you running `ollama serve` to start the local server that will route incoming requests from `spacy-llm` to the model. Depending on which model you want, you'll then need to run `ollama pull <MODEL_NAME>` which will download the quantised model files to your local machine.
+**⚠️ A note on `spacy.Ollama.v1`.** The Ollama models are all local models that
+run on your GPU-backed machine. Please refer to the
+[Ollama docs](https://ollama.com/) for more information on installation, but the
+basic flow will see you running `ollama serve` to start the local server that
+will route incoming requests from `spacy-llm` to the model. Depending on which
+model you want, you'll then need to run `ollama pull <MODEL_NAME>` which will
+download the quantised model files to your local machine.

 #### API Keys {id="api-keys"}
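The Ollama flow described in the note above amounts to two commands. This sketch assumes a local Ollama installation; `llama3` is an illustrative model name, not one mandated by `spacy-llm` — substitute any model Ollama offers:

```shell
# Start the local server that routes incoming spacy-llm requests to the model
ollama serve

# Download the quantised model files to the local machine
# ("llama3" is an illustrative example model name)
ollama pull llama3
```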