Commit
[feat] Optional LLM API provider
wenjie1991 committed Dec 25, 2024
1 parent 16a7cf3 commit b9853d6
Showing 4 changed files with 71 additions and 23 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -5,7 +5,7 @@ Authors@R:
person("Wenjie", "SUN", , "[email protected]", role = c("aut", "cre"),
comment = c(ORCID = "0000-0002-3100-2346"))
Description: When you prepare a presentation or a report, you often need to manage a large number of 'ggplot' figures. You need to change the figure size, modify the title, label, themes, etc. It is inconvenient to go back to the original code to make these changes. This package provides a simple way to manage 'ggplot' figures. You can easily add the figure to the database and update them later using CLI (command line interface) or GUI (graphical user interface).
-License: GPL-3
+License: MIT
Encoding: UTF-8
Depends: R (>= 4.0.0)
Imports:
56 changes: 50 additions & 6 deletions R/server.R
@@ -166,27 +166,71 @@ serveFile <- function(filepath) {
#' @param dir The directory of the ggfigdone database.
#' @param host Server host name or IP address; the default is "0.0.0.0".
#' @param port The port on which the server will run; the default is 8080.
+#' @param llm_api_key The API key for the large language model service; the
+#' default is NULL. When NULL, the service will not be available.
+#' @param llm_api_url The URL for the OpenAI language model; the default is
+#' "https://api.openai.com/v1/chat/completions".
+#' @param llm_model The model to use for the large language model; the default
+#' is "gpt-4o-mini".
+#' @param llm_max_tokens The maximum number of tokens to generate; the default is 1000.
+#' @param llm_temperature The temperature for the language model; the default is 0.5.
+#' The temperature is a hyperparameter that controls the randomness of the
+#' generated text. Lower temperatures produce more predictable text, while
+#' higher temperatures produce more varied text.
#' @param token A logical value indicating whether a token should be used to access the server.
#' @param auto_open A logical value indicating whether the server should be
#' opened in a web browser; the default is FALSE.
#' @return No return value, the function is called for its side effects.
#' @examples
#' \dontrun{
#' library(ggplot2)
#' ## Initialize the database
#' fo = fd_init("./fd_dir")
#' ## Draw a ggplot figure
#' g = ggplot(mtcars, aes(x=wt, y=mpg)) + geom_point()
#'
#' ## Add the figure to the database
#' fd_add(name = "fig1")
#'
#' ## Start the server
#' fd_server("./fd_dir")
#' }
#' @export
fd_server = function(
dir,
host = getOption("ggfigdone.host", "0.0.0.0"),
port = getOption("ggfigdone.port", 8080),
+llm_api_key = getOption("ggfigdone.llm_api_key", NULL),
+llm_api_url = getOption("ggfigdone.llm_api_url", "https://api.openai.com/v1/chat/completions"),
+llm_model = getOption("ggfigdone.llm_model", "gpt-4o-mini"),
+llm_max_tokens = getOption("ggfigdone.llm_max_tokens", 1000),
+llm_temperature = getOption("ggfigdone.llm_temperature", 0.5),
token = getOption("ggfigdone.token", TRUE),
-openai_api_key = Sys.getenv("OPENAI_API_KEY"),
auto_open = getOption("ggfigdone.auto_open", FALSE)
) {

-fo = fd_load(dir)
+## Large language model configuration
+llm_config =
+    if (is.null(llm_api_key)) {
+        message("The large language model service is not available.")
+        NULL
+    } else {
+        list(
+            api_url = llm_api_url,
+            api_key = llm_api_key,
+            model = llm_model,
+            max_tokens = llm_max_tokens,
+            temperature = llm_temperature
+        )
+    }

+## Load the database
+fo = fd_load(dir)
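On the client side, serializing this list with `auto_unbox` enabled turns each one-element R vector into a plain JSON scalar. A sketch (not part of the commit) of the payload the browser would then parse from `/llm_config`; the `api_key` value here is a placeholder:

```javascript
// Hypothetical example of the /llm_config response body, assuming the
// documented defaults; each R scalar arrives as a plain JSON value rather
// than a one-element array.
const exampleConfig = JSON.parse(`{
  "api_url": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-placeholder",
  "model": "gpt-4o-mini",
  "max_tokens": 1000,
  "temperature": 0.5
}`);
```

Without `auto_unbox`, each field would instead deserialize as an array (e.g. `["gpt-4o-mini"]`), and the client code below would break.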
# print(fd_ls(fo))
# print(format(fo))

# on.exit(fd_save(fo))

## Directory of the web application
www_dir = system.file("www", package = "ggfigdone")

## Token to access the server
@@ -237,11 +281,11 @@ fd_server = function(
response_fd_download_data(fo, req)
} else if (path == "/fd_str_data") {
response_fd_str_data(fo, req)
-} else if (path == "/openai_api_key") {
+} else if (path == "/llm_config") {
list(
status = 200L,
-headers = list('Content-Type' = "text/plain"),
-body = openai_api_key
+headers = list('Content-Type' = "application/json"),
+body = toJSON(llm_config, auto_unbox = TRUE)
)
} else {
serveFile(file.path(www_dir, "404.html"))
4 changes: 3 additions & 1 deletion README.md
@@ -14,6 +14,7 @@ When you prepare a presentation or a report, you often need to manage a large nu
You need to change the figure size, modify the title, label, themes, etc.
It is inconvenient to go back to the original code to make these changes.
This package provides a simple way to manage ggplot figures.

You can easily add figures to the database and update them later using the CLI (command line interface) or GUI (graphical user interface).

![ggfigdone_demo](https://github.com/user-attachments/assets/a0d4d01d-105a-4fc0-bda5-c7cc3e6dbd48)
@@ -34,6 +35,7 @@ remotes::install_github("wenjie1991/ggfigdone")
```



## Demo

### Initialize the database
@@ -83,4 +85,4 @@ This package is being developed. Feel free to contribute to the package by sendi

## License

-GPL-3
+MIT
32 changes: 17 additions & 15 deletions inst/www/js/main.js
@@ -647,15 +647,18 @@ $("#chat_container .btn_close").click(function () {



-// Get the API key from the url
-var apiKey = "";
-$.get("/openai_api_key?" + token_query, function (data) {
-// hide the LLM button if the API key is empty
-console.log("LLM key" + data);
-if (data == "") {
+// Get the llm_config from the url
+// NOTE: The ajax is async, so the llm_config is not available immediately
+// How to solve this problem?
+let llm_config = null;
+$.get("/llm_config?" + token_query, function (data) {
+// if data is not null or empty, show the LLM button
+if (data == null) {
$("#btn_llm").css("display", "none");
+console.log("LLM is not available");
+} else {
+llm_config = data;
}
-apiKey = data;
});
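The NOTE in the committed code asks how to avoid reading `llm_config` before the AJAX response arrives. One hedged sketch of an answer (the names `initLlmConfig`, `fetchImpl`, and `onUnavailable` are hypothetical, not part of this commit): fetch the config with `async`/`await` during startup, so any UI wiring that depends on it runs only after the config is known.

```javascript
// Hypothetical helper: resolve the config before enabling any LLM UI.
// fetchImpl is injected so the function can be exercised without a server;
// onUnavailable is a callback, e.g. one that hides the LLM button.
async function initLlmConfig(fetchImpl, onUnavailable) {
  const resp = await fetchImpl("/llm_config");
  const config = await resp.json();
  if (config == null) {
    onUnavailable(); // no API key configured on the server
    return null;
  }
  return config;
}
```

Callers would then `await initLlmConfig(...)` before binding the send button, instead of reading a mutable `llm_config` variable that may still be null.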


@@ -669,7 +672,7 @@ function initConversationHistory() {
conversationHistory = [
{
role: "system",
-content: "You are an assistant to translate nature language to R ggplot2 code, by modifying the given ggplot2 code include ggplot obj g by '\n--code--\n'. Output the explanation, which is followed by R programming language code. Both are seperated by '\n--code--\n'."}
+content: "You are an assistant to translate natural language to R ggplot2 code, by modifying the given ggplot2 code, which includes the ggplot object `g`, followed by '\n--code--\n'. Output an explanation followed by R code, separated by '\n--code--\n'. Do not quote the updated code."}
];
}
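The `'\n--code--\n'` separator requested by the system prompt implies the client must split each reply into its explanation and code parts. A sketch of such a helper (`parseLlmReply` is a hypothetical name; the committed code does this splitting inline):

```javascript
// Split an assistant reply into explanation and code using the
// '\n--code--\n' separator the system prompt asks the model to emit.
// Replies without the separator are treated as explanation only.
function parseLlmReply(message) {
  const sep = "\n--code--\n";
  const idx = message.indexOf(sep);
  if (idx === -1) {
    return { explanation: message.trim(), code: null };
  }
  return {
    explanation: message.slice(0, idx).trim(),
    code: message.slice(idx + sep.length).trim(),
  };
}
```

Handling the no-separator case matters because the model does not always honor the prompt's output format.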

@@ -724,7 +727,6 @@ sendButton.addEventListener("click", async () => {
codeBlock = codeBlock.trim();
}
// Update the code block with the new code
-// TODO: Update the code block with the new code
addMessageToChatLog(message, 'gpt');
conversationHistory.push({ role: 'assistant', content: message });
}
@@ -751,16 +753,16 @@ function addMessageToChatLog(message, sender) {
async function getChatGPTResponse() {
trimConversationHistory(); // Trim history to fit within token limits

-const endpoint = 'https://api.openai.com/v1/chat/completions';
+const endpoint = llm_config.api_url;
const headers = {
'Content-Type': 'application/json',
-'Authorization': `Bearer ${apiKey}`
+'Authorization': `Bearer ${llm_config.api_key}`
};
const body = {
-model: 'gpt-4o-mini', // Using the ChatGPT model
+model: llm_config.model,
messages: conversationHistory,
-max_tokens: 1000,
-temperature: 0.7,
+max_tokens: +llm_config.max_tokens,
+temperature: +llm_config.temperature,
};

try {
@@ -773,7 +775,7 @@ async function getChatGPTResponse() {
const data = await response.json();
return data.choices[0].message.content.trim();
} catch (error) {
-console.error('Error fetching response from GPT-3:', error);
+console.error('Error fetching response from LLM:', error);
return 'Sorry, I am having trouble connecting to the server. Please try again later.';
}
}
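The unary `+` in the request body coerces `max_tokens` and `temperature` to numbers in case they arrive from the server as strings; the chat completions API expects numeric values for both. A small sketch isolating that coercion (`buildChatRequestBody` is a hypothetical name, not part of the commit):

```javascript
// Build the request body for the chat completions endpoint from a config
// object; unary + coerces max_tokens/temperature to numbers so the body is
// valid even if the config delivered them as strings.
function buildChatRequestBody(llmConfig, messages) {
  return {
    model: llmConfig.model,
    messages: messages,
    max_tokens: +llmConfig.max_tokens,
    temperature: +llmConfig.temperature,
  };
}
```

With `auto_unbox` enabled on the server these fields should already be numbers, so the coercion is a cheap safeguard rather than a requirement.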
