Add initial Hugging Face support #359
base: main
@@ -0,0 +1,58 @@
#' Chat with a model hosted on Hugging Face Serverless Inference API
#'
#' @description
#' [Hugging Face](https://huggingface.co/) hosts a variety of open-source
#' and proprietary AI models available via their Inference API.
#' To use the Hugging Face API, you must have an Access Token, which you can
#' obtain from your [Hugging Face account](https://huggingface.co/settings/tokens).
#'
#' This function is a lightweight wrapper around [chat_openai()], with
#' the defaults adjusted for Hugging Face. The model defaults to
#' `meta-llama/Llama-3.1-8B-Instruct`.
#'
#' ## Known limitations
#'
#' * Some models do not support the chat interface, or parts of it; for
#'   example, `google/gemma-2-2b-it` does not support a system prompt. You
#'   will need to choose the model carefully.
#'
#' @family chatbots
#' @param api_key The API key to use for authentication. You should not

Review comment: Can you please mimic the existing structure from the other chat functions? Or if you think it's worth being more explicit, it'd be useful to create a […]

#' supply this directly; instead, store your Hugging Face API key as an
#' environment variable (`HUGGINGFACE_API_KEY`) in your `.Renviron` file.
#' Use `usethis::edit_r_environ()` to modify it.
#' @export
#' @inheritParams chat_openai
#' @inherit chat_openai return
#' @examples
#' \dontrun{
#' chat <- chat_hf()
#' chat$chat("Tell me three jokes about statisticians")
#' }
chat_hf <- function(system_prompt = NULL,

Review comment: I'd call this […]

                    turns = NULL,
                    base_url = "https://api-inference.huggingface.co/models/",
                    api_key = hf_key(),
                    model = NULL,
                    seed = NULL,
                    api_args = list(),
                    echo = NULL) {
  model <- set_default(model, "meta-llama/Llama-3.1-8B-Instruct")
  echo <- check_echo(echo)

  chat_openai(
    system_prompt = system_prompt,
    turns = turns,
    # modify base_url for Hugging Face compatibility with the OpenAI API
    base_url = paste0(base_url, model, "/v1"),
    api_key = api_key,
    model = model,
    seed = seed,
    api_args = api_args,
    echo = echo
  )
}

hf_key <- function() {
  key_get("HUGGINGFACE_API_KEY")
}
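The only substantive adjustment the wrapper makes is the endpoint: Hugging Face exposes an OpenAI-compatible route under each model's own path, so the model id is spliced into the URL before the `/v1` suffix. A minimal sketch of that composition in plain base R (no package internals assumed):

```r
# Hugging Face serves an OpenAI-compatible endpoint per model, so the
# effective base URL embeds the model id before the "/v1" suffix.
base_url <- "https://api-inference.huggingface.co/models/"
model <- "meta-llama/Llama-3.1-8B-Instruct"
paste0(base_url, model, "/v1")
#> [1] "https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct/v1"
```

This is why `model` must be resolved (via `set_default()`) before `chat_openai()` is called: the URL cannot be built from a `NULL` model.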
@@ -1,4 +1,5 @@
Version: 1.0
ProjectId: cd6d3e25-62f1-4041-a95a-6833286990e5

RestoreWorkspace: No
SaveWorkspace: No
Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.
@@ -0,0 +1,6 @@
test_that("can make simple request", {
  chat <- chat_hf(
    "Be as terse as possible; no punctuation",
    model = "meta-llama/Llama-3.1-8B-Instruct"
  )
  resp <- chat$chat("What is 1 + 1?", echo = FALSE)
  expect_match(resp, "2")
  expect_equal(chat$last_turn()@tokens > 0, c(TRUE, TRUE))
})
Review comment: Do you want to add any other tests to verify that (e.g.) tool calling works?
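One way to answer the reviewer's question is a tool-calling smoke test mirroring the pattern used for the other providers. This is only a sketch: it assumes the package's `tool()` and `register_tool()` helpers accept a function plus a description as shown (their exact signatures should be checked against the other provider tests), and the hard-coded date is a stand-in for a real fixture.

```r
# Hypothetical tool-calling smoke test, modeled on the other providers'
# tests; tool() / register_tool() usage is assumed, not verified here.
test_that("can use tools", {
  chat <- chat_hf(model = "meta-llama/Llama-3.1-8B-Instruct")
  chat$register_tool(tool(
    function() "2024-01-01",          # stub "current date" tool
    "Returns the current date"
  ))
  resp <- chat$chat("What's the current date? Answer with the date only.")
  expect_match(resp, "2024-01-01")
})
```

Given the "Known limitations" note above, such a test would also document which Hugging Face models actually support tool calls.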