
Add support for OllamaFunctions chat model from the official langchain library #149

Open · prabirshrestha opened this issue May 10, 2024 · 15 comments

prabirshrestha (Collaborator) commented May 10, 2024

Ollama currently doesn't support OpenAI-compatible function calling, but there are models such as Hermes 2 Pro that support function calling - https://ollama.com/adrienbrault/nous-hermes2pro.

LangChain Python has OllamaFunctions (src), and LangChain JavaScript has an equivalent OllamaFunctions (src). We should have something similar to unblock Ollama users who want function calling before Ollama has official support.
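
For illustration, a minimal sketch of the approach those wrappers take: inject the tool schemas into the system prompt, ask the model to reply with JSON only, and parse the reply into a function call. This is plain Rust with serde_json, not langchain-rust's actual API; the tool definition and the parse_tool_call helper are hypothetical.

// Hypothetical sketch (not the langchain-rust API): same idea as the Python/JS
// OllamaFunctions wrappers - embed tool schemas in the system prompt and parse
// the model's JSON reply into a function call.
use serde_json::{json, Value};

/// Build a system prompt that advertises the available tools to the model.
fn function_calling_system_prompt(tools: &Value) -> String {
    format!(
        "You have access to the following tools:\n{}\n\
         When a tool is needed, respond ONLY with JSON of the form \
         {{\"function\": \"<name>\", \"arguments\": {{ ... }}}}.",
        serde_json::to_string_pretty(tools).unwrap()
    )
}

/// Parse the model's reply into (function name, arguments), if it is a tool call.
fn parse_tool_call(reply: &str) -> Option<(String, Value)> {
    let v: Value = serde_json::from_str(reply.trim()).ok()?;
    let name = v.get("function")?.as_str()?.to_string();
    let args = v.get("arguments").cloned().unwrap_or(Value::Null);
    Some((name, args))
}

fn main() {
    let tools = json!([{
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
        }
    }]);
    println!("{}", function_calling_system_prompt(&tools));

    // Example of what a Hermes-2-Pro-style model might return:
    let reply = r#"{ "function": "get_current_weather", "arguments": { "city": "Seattle" } }"#;
    println!("{:?}", parse_tool_call(reply));
}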

erhant (Contributor) commented May 13, 2024

Perhaps this and #148 can be handled together.

prabirshrestha (Collaborator, Author) commented

The first part will be to get ollama-rs integrated, and then function calling can follow.

It seems ollama-rs does plan to support function calling natively: pepperoni21/ollama-rs#50 (comment). This will allow us to use their implementation directly instead of creating our own wrapper similar to LangChain's.

erhant (Contributor) commented May 17, 2024

Yep, @andthattoo and I actually work at the same place. We thought it would be better to add the necessary functionality to ollama-rs first and then access it from here; two birds, one stone.

With #148 we can simply have the ollama-rs integration and wrap around the functionality there, as you said. After we are done with the ollama-rs PR, we will come back to integrating it here!

prabirshrestha (Collaborator, Author) commented

Saw these tweets on how to use function calling in Ollama via raw mode.

https://x.com/ollama/status/1793392887612260370
https://x.com/Dev__Digest/status/1793419875685367919

Mistral 0.3 with function calling - https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
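
For anyone who wants to try the raw-mode trick from those links, here is a rough sketch in Rust (assumes reqwest with the blocking and json features, plus serde_json). The [AVAILABLE_TOOLS]/[INST] layout and the mistral:v0.3 model tag are taken from the linked posts and should be double-checked against the model card; /api/generate with "raw": true is Ollama's documented way to bypass its own prompt template.

// Rough sketch of the raw-mode approach: hand-build the Mistral v0.3 prompt
// with [AVAILABLE_TOOLS]/[INST] tags and call Ollama's /api/generate with
// "raw": true so Ollama applies no template of its own.
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let tools = json!([{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": { "city": { "type": "string" } },
                "required": ["city"]
            }
        }
    }]);

    // Mistral v0.3 style raw prompt (approximate tag layout).
    let prompt = format!(
        "[AVAILABLE_TOOLS] {}[/AVAILABLE_TOOLS][INST] What is the weather in Seattle? [/INST]",
        serde_json::to_string(&tools)?
    );

    let body = json!({
        "model": "mistral:v0.3",
        "prompt": prompt,
        "raw": true,
        "stream": false
    });

    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&body)
        .send()?
        .json()?;

    // A tool-calling reply starts with [TOOL_CALLS] followed by a JSON array.
    println!("{}", resp["response"]);
    Ok(())
}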

erhant (Contributor) commented May 23, 2024

Huh, that's cool. I wonder, though: instead of providing the [AVAILABLE_TOOLS] block in a raw prompt, can it be given as a system prompt, e.g. within a Modelfile? Would those two be equivalent?

cc. @andthattoo

andthattoo commented May 23, 2024

It comes down entirely to the way they trained Mistral-7B-v0.3: with just the [AVAILABLE_TOOLS] and [TOOL_CALLS] tags, it works with the OpenAI tool format out of the box. Which is cool - I may add a default function calling pipeline for such models, but pipelines like NousHermes and Gorilla need specific prompts and tool formats.

prabirshrestha (Collaborator, Author) commented

Haven't tried it yet, but saw this new LocalAI release - https://github.com/mudler/LocalAI/releases/tag/v2.16.0 - which seems to support function calling. Though they did fine-tune Llama 3: https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2

prabirshrestha (Collaborator, Author) commented

I was able to use local-ai to perform function calling using curl. No need for custom system prompts.

  • Download local-ai 2.16.0 from https://github.com/mudler/LocalAI/releases/tag/v2.16.0 (needs to be 2.16+).
  • brew install abseil grpc - these don't appear to be statically linked yet.
  • Download the LocalAI-llama3-8b-function-call-v0.2 model for function calling support.
  • Run the following curl command.
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
   "model": "LocalAI-llama3-8b-function-call-v0.2",
   "messages": [{"role": "user", "content": "create a birthday for John on 05/20/2024"}],
   "temperature": 0.1,
   "grammar_json_functions": {
      "oneOf": [
          {
              "type": "object",
              "properties": {
                  "function": {"const": "create_event"},
                  "arguments": {
                      "type": "object",
                      "properties": {
                          "title": {"type": "string"},
                          "date": {"type": "string"},
                          "time": {"type": "string"}
                      }
                  }
              }
          },
          {
              "type": "object",
              "properties": {
                  "function": {"const": "search"},
                  "arguments": {
                      "type": "object",
                      "properties": {
                          "query": {"type": "string"}
                      }
                  }
              }
          }
      ]
  }
 }'

Response:

{
  "created": 1717109075,
  "object": "chat.completion",
  "id": "fe683da1-2fdc-4fed-ab74-bd93183fb5cb",
  "model": "LocalAI-llama3-8b-function-call-v0.2",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "{ \"arguments\": {\"date\": \"05/20/2024\", \"time\": \"12:00:00\", \"title\": \"John's Birthday\"} , \"function\": \"create_event\"}"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 43,
    "total_tokens": 68
  }
}
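
Since grammar_json_functions constrains the output, the content field is itself valid JSON. A small serde sketch for turning it into a typed call (the FunctionCall struct name is ours; assumes serde with the derive feature and serde_json):

// Deserialize the assistant message content from the LocalAI response above
// into a typed function call.
use serde::Deserialize;
use serde_json::Value;

#[derive(Debug, Deserialize)]
struct FunctionCall {
    function: String,
    arguments: Value,
}

fn main() -> Result<(), serde_json::Error> {
    // The "content" string from the LocalAI response above.
    let content = r#"{ "arguments": {"date": "05/20/2024", "time": "12:00:00", "title": "John's Birthday"} , "function": "create_event"}"#;

    let call: FunctionCall = serde_json::from_str(content)?;
    match call.function.as_str() {
        "create_event" => println!("create_event with {}", call.arguments),
        "search" => println!("search with {}", call.arguments),
        other => println!("unknown function: {other}"),
    }
    Ok(())
}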

andthattoo commented May 31, 2024

When it comes to function calling with local models, having options is essential. That was the primary reason I implemented this feature in ollama-rs. My tests also showed that the phi3:14b-medium-128k-instruct-q4_1 model performs well at function calling and has a 128k context size. Both nous-hermes2theta-llama3-8b and nous-hermes2pro models work better with their custom prompts. Performance varies in different cases.

langchain-rust might use this easy method for direct function calling capabilities anyway. Also, llama.cpp (e.g. via brew) can more or less replace Ollama with less overhead and has function calling - it's a viable option.

CypherpunkSamurai commented Jun 7, 2024

I needed function calling to work yesterday, so I created a fork of Ollama. I seem to have gotten it working as of now.

Should I create a PR for this?

Changelog

  • Added a new Modelfile command, FUNCTIONTMPL, to declare a function calling template. This template is merged into the system prompt.
  • Added ollama show commands for the same.
  • Edited the /api/show route to be compatible with show requests.
  • Added a template renderer for the function prompt.
  • Edited the ChatRequest and OpenAI ChatRequest JSON structs to accept a function key.
  • Edited the ChatRequest handler to render the functions using the model's FUNCTIONTMPL template.

Results

[screenshot of results omitted]

Code

Shell Command

$ ollama show --functiontmpl nous-hermes-llama3

You have access to the following functions:

<tools>
{{ . | tojsoni "" "  " }}
</tools>

When the user asks you a question, if you need to use functions, provide ONLY the function calls, and NOTHING ELSE, in the format:
<function_calls>
[
    { "name": "function_name_1", "params": { "param_1": "value_1", "param_2": "value_2" }, "output": "The output variable name, to be possibly used as input for another function},
    { "name": "function_name_2", "params": { "param_3": "value_3", "param_4": "output_1"}, "output": "The output variable name, to be possibly used as input for another function"},
    ...
]
</function_calls>

Nous-Hermes-2-Pro-LLAMA3.Modelfile

FROM hermes-2-pro-llama-3.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant"""

FUNCTIONTMPL """
You have access to the following functions:

<tools>
{{/* a template function to convert interface to indented json */}}
{{ . | tojsoni "" "  " }}
</tools>

When the user asks you a question, if you need to use functions, provide ONLY the function calls, and NOTHING ELSE, in the format:
<function_calls>
[
    { "name": "function_name_1", "params": { "param_1": "value_1", "param_2": "value_2" }, "output": "The output variable name, to be possibly used as input for another function},
    { "name": "function_name_2", "params": { "param_3": "value_3", "param_4": "output_1"}, "output": "The output variable name, to be possibly used as input for another function"},
    ...
]
</function_calls>
"""

SYSTEM "You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

prabirshrestha (Collaborator, Author) commented

@erhant @andthattoo Do you plan to send PRs for function calling support now that ollama-rs supports it?

andthattoo commented Jul 14, 2024

@prabirshrestha I delved into a few other things; I might do it in the coming weeks if there's no one to do it before me. Ayo @erhant?

prabirshrestha (Collaborator, Author) commented

Sounds good to me.

By the way, support for tools landed in Ollama: ollama/ollama#5284
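
For reference, a rough sketch of calling the new native tool support directly (plain reqwest + serde_json, not wired into langchain-rust): pass OpenAI-style tool definitions to /api/chat and read message.tool_calls from the reply. The model name and tool schema are illustrative; the request/response shape follows the Ollama API docs for the tools feature and is worth re-checking.

// Sketch of the native tool support referenced above (ollama/ollama#5284):
// send tool definitions to /api/chat and inspect message.tool_calls.
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        "model": "llama3.1",
        "stream": false,
        "messages": [
            { "role": "user", "content": "What is the weather in Seattle?" }
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": { "city": { "type": "string" } },
                    "required": ["city"]
                }
            }
        }]
    });

    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/chat")
        .json(&body)
        .send()?
        .json()?;

    // When the model decides to call a tool, the calls appear here.
    if let Some(calls) = resp["message"]["tool_calls"].as_array() {
        for call in calls {
            println!(
                "call {} with {}",
                call["function"]["name"], call["function"]["arguments"]
            );
        }
    } else {
        println!("{}", resp["message"]["content"]);
    }
    Ok(())
}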

CypherpunkSamurai commented

> @prabirshrestha I delved into a few other things; I might do it in the coming weeks if there's no one to do it before me. Ayo @erhant?

I might give it a try. Can you link me the PR and required resources? :)

andthattoo commented

@CypherpunkSamurai These are the PRs: PR1, PR2, and some examples in the test folder.
