Commit

Update llamacpp.md (#1231)
accross -> across
eltociear authored Oct 27, 2024
1 parent a2fa1e0 commit dc31b9b
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/reference/models/llamacpp.md
@@ -47,7 +47,7 @@ model = models.llamacpp(
| `n_gpu_layers`| `int` | Number of layers to offload to GPU. If -1, all layers are offloaded | `0` |
| `split_mode` | `int` | How to split the model across GPUs. `1` for layer-wise split, `2` for row-wise split | `1` |
| `main_gpu` | `int` | Main GPU | `0` |
- | `tensor_split` | `Optional[List[float]]` | How split tensors should be distributed accross GPUs. If `None` the model is not split. | `None` |
+ | `tensor_split` | `Optional[List[float]]` | How split tensors should be distributed across GPUs. If `None` the model is not split. | `None` |
| `n_ctx` | `int` | Text context. Inference from the model if set to `0` | `0` |
| `n_threads` | `Optional[int]` | Number of threads to use for generation. All available threads if set to `None`.| `None` |
| `verbose` | `bool` | Print verbose outputs to `stderr` | `False` |
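
As a sketch of how the parameters documented in this table might be used (the settings dictionary below is illustrative; the repository and GGUF filenames are placeholders, and the `models.llamacpp(...)` call mirrors the hunk header above rather than a verified signature):

```python
# Hypothetical settings, using the parameter names and defaults from the table above.
llamacpp_kwargs = {
    "n_gpu_layers": -1,    # -1 offloads all layers to the GPU
    "split_mode": 1,       # 1 = layer-wise split across GPUs
    "main_gpu": 0,         # index of the main GPU
    "tensor_split": None,  # None = do not split tensors across GPUs
    "n_ctx": 0,            # 0 = infer the context size from the model
    "n_threads": None,     # None = use all available threads
    "verbose": False,      # do not print verbose output to stderr
}

# Placeholder model identifiers; substitute a real repository and GGUF file:
# model = models.llamacpp("<repo-id>", "<model-file>.gguf", **llamacpp_kwargs)
```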
