From 026c86e755b55fd4993eed3fbde03a106b64e92b Mon Sep 17 00:00:00 2001
From: hanouticelina <36770234+hanouticelina@users.noreply.github.com>
Date: Mon, 25 Nov 2024 03:13:11 +0000
Subject: [PATCH] Update API inference documentation (automated)

---
 docs/api-inference/tasks/chat-completion.md     | 8 ++++----
 docs/api-inference/tasks/feature-extraction.md  | 2 +-
 docs/api-inference/tasks/text-classification.md | 1 -
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/docs/api-inference/tasks/chat-completion.md b/docs/api-inference/tasks/chat-completion.md
index 15893bc48..2a1377e6b 100644
--- a/docs/api-inference/tasks/chat-completion.md
+++ b/docs/api-inference/tasks/chat-completion.md
@@ -440,12 +440,12 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | **        include_usage*** | _boolean_ | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. |
 | **temperature** | _number_ | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
 | **tool_choice** | _unknown_ | One of the following: |
-| **         (#1)** | _object_ | |
-| **         (#2)** | _string_ | |
-| **         (#3)** | _object_ | |
+| **         (#1)** | _enum_ | Possible values: auto. |
+| **         (#2)** | _enum_ | Possible values: none. |
+| **         (#3)** | _enum_ | Possible values: required. |
+| **         (#4)** | _object_ | |
 | **                function*** | _object_ | |
 | **                        name*** | _string_ | |
-| **         (#4)** | _object_ | |
 | **tool_prompt** | _string_ | A prompt to be appended before the tools |
 | **tools** | _object[]_ | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. |
 | **        function*** | _object_ | |
diff --git a/docs/api-inference/tasks/feature-extraction.md b/docs/api-inference/tasks/feature-extraction.md
index b9d0fb312..e4cbb5889 100644
--- a/docs/api-inference/tasks/feature-extraction.md
+++ b/docs/api-inference/tasks/feature-extraction.md
@@ -105,7 +105,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The text to embed. |
 | **normalize** | _boolean_ | |
-| **prompt_name** | _string_ | The name of the prompt that should be used by for encoding. If not set, no prompt will be applied. Must be a key in the `Sentence Transformers` configuration `prompts` dictionary. For example if ``prompt_name`` is "query" and the ``prompts`` is {"query": "query: ", ...}, then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?" because the prompt text will be prepended before any text to encode. |
+| **prompt_name** | _string_ | The name of the prompt that should be used for encoding. If not set, no prompt will be applied. Must be a key in the `sentence-transformers` configuration `prompts` dictionary. For example, if ``prompt_name`` is "query" and ``prompts`` is {"query": "query: ", ...}, then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?" because the prompt text will be prepended before any text to encode. |
 | **truncate** | _boolean_ | |
 | **truncation_direction** | _enum_ | Possible values: Left, Right. |
diff --git a/docs/api-inference/tasks/text-classification.md b/docs/api-inference/tasks/text-classification.md
index f88fb3f89..63cfcbd25 100644
--- a/docs/api-inference/tasks/text-classification.md
+++ b/docs/api-inference/tasks/text-classification.md
@@ -28,7 +28,6 @@ For more details about the `text-classification` task, check out its [dedicated
 - [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert): A sentiment analysis model specialized in financial sentiment.
 - [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest): A sentiment analysis model specialized in analyzing tweets.
 - [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection): A model that can classify languages.
-- [meta-llama/Prompt-Guard-86M](https://huggingface.co/meta-llama/Prompt-Guard-86M): A model that can classify text generation attacks.

 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=text-classification&sort=trending).