diff --git a/Contribution.md b/Contribution.md
new file mode 100644
index 000000000..9958feaa4
--- /dev/null
+++ b/Contribution.md
@@ -0,0 +1,167 @@
+# Contribution Guide for Gorilla API Store
+
+Welcome to the **Gorilla API Store** contribution guide! We appreciate your interest in enhancing the capabilities of LLMs with API integration. This guide will help you contribute APIs to Gorilla and make sure your API is well-documented and functional within our ecosystem.
+
+## Table of Contents
+
+- [Introduction](#introduction)
+- [Contribution Methods](#contribution-methods)
+ - [Option 1: API JSON Contribution](#option-1-api-json-contribution-preferred)
+ - [Option 2: URL JSON Contribution](#option-2-url-json-contribution)
+- [Submission Process](#submission-process)
+- [Contribution Format](#contribution-format)
+ - [API JSON Format](#api-json-format)
+ - [URL JSON Format](#url-json-format)
+- [Example Submissions](#example-submissions)
+ - [API JSON Example](#api-json-example)
+ - [URL JSON Example](#url-json-example)
+- [Best Practices](#best-practices)
+- [Contact and Support](#contact-and-support)
+
+---
+
+## Introduction
+
+The **Gorilla API Store** is designed to enhance the ability of large language models (LLMs) to interact with various APIs, improving their accuracy and reducing hallucinations in function calls. We aim to build an open-source, one-stop-shop for all APIs that LLMs can invoke effectively.
+
+[Gorilla](https://gorilla.cs.berkeley.edu/) currently supports **1,600+ APIs** and counting. By contributing your APIs to this ecosystem, you'll help expand this store and enable more accurate and context-aware API calls by LLMs. We offer multiple ways to contribute, making it easy to get involved.
+
+---
+
+## Contribution Methods
+
+You can contribute to the Gorilla API Store in two ways:
+
+### Option 1: API JSON Contribution (Preferred)
+
+This method allows for full control and customization over your API contribution. You will create a JSON file describing your API in detail, including function calls, arguments, and example code. This method is the preferred option because it ensures a higher level of accuracy in documenting your API.
+
+### Option 2: URL JSON Contribution
+
+If you're short on time or resources, you can simply provide a URL to your API documentation, and we will generate the API JSON for you using an LLM. Please note that this method may require additional verification to ensure the generated JSON is accurate.
+
+---
+
+## Submission Process
+
+To submit your API, follow these steps:
+
+1. **Prepare Your API JSON** or **API URL JSON** as described below.
+2. **Fork** the Gorilla repository.
+3. **Open an issue** (if one doesn't already exist) in the [Issues section](https://github.com/ShishirPatil/gorilla/issues), wait for it to be assigned to you, and then start working on it.
+4. **Create a Pull Request** in the [Pull requests section](https://github.com/ShishirPatil/gorilla/pulls) with your API JSON file added under the appropriate directory (`data/apizoo`).
+5. We'll review your submission, and once approved, your API will become part of the Gorilla API Store.
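Before committing, it helps to confirm that the file you add under `data/apizoo` parses as valid JSON. The sketch below is a hypothetical pre-flight check, not an official repository tool; the filename and sample contents are illustrative only:

```python
import json
import pathlib
import sys

# Illustrative filename; name the file after your GitHub username or your API.
path = pathlib.Path("data/apizoo/example_username_api.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text('[{"user_name": "example_username_api", "api_name": "Demo API"}]')

try:
    # Submissions are a JSON list of entries.
    entries = json.loads(path.read_text())
except json.JSONDecodeError as err:
    sys.exit(f"{path} is not valid JSON: {err}")

print(f"{path} parses cleanly ({len(entries)} entry/entries)")
```

Running this before step 4 catches syntax errors (trailing commas, unescaped quotes) that would otherwise delay review.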
+
+---
+
+## Contribution Format
+
+### API JSON Format
+
+For **API JSON** contributions, follow this format:
+
+| Field | Type | Description | Required |
+| ------------------ | ------------ | ------------------------------------------------------------------- | -------- |
+| `user_name` | String | Name of the contributor. | ✅ |
+| `api_name` | String | Name of the API (max 20 words). | ✅ |
+| `api_call` | String | A one-line function call with arguments and values. | ✅ |
+| `api_version` | String | Version of the API. | ✅ |
+| `api_arguments` | JSON | JSON object listing the function's arguments and valid options. | ✅ |
+| `functionality` | String | Short description of the function (max 20 words). | ✅ |
+| `env_requirements` | List[String] | List of any required libraries or dependencies. | Optional |
+| `example_code` | String | Example code showing how to use the API. | Optional |
+| `meta_data` | JSON | Additional information such as descriptions or performance metrics. | Optional |
+| `questions` | List[String] | Questions that describe real-life scenarios for using this API. | Optional |
+
+---
+
+### URL JSON Format
+
+For **URL JSON** contributions, follow this format:
+
+| Field | Type | Description | Required |
+| ----------- | ------------ | --------------------------------------------------------------- | -------- |
+| `user_name` | String | Name of the contributor. | ✅ |
+| `api_name` | String | Name of the API (max 20 words). | ✅ |
+| `api_url` | String | URL to the API documentation. | ✅ |
+| `questions` | List[String] | Questions that describe real-life scenarios for using this API. | Optional |
+
+---
+
+## Example Submissions
+
+### API JSON Example
+
+```json
+[
+ {
+ "user_name": "example_username_api",
+ "api_name": "Torch Hub Model snakers4-silero",
+    "api_call": "torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_stt', *args, source, trust_repo, force_reload, verbose, skip_validation, **kwargs)",
+ "api_version": "2.0",
+ "api_arguments": {
+ "repo_or_dir": "snakers4/silero-models",
+ "model": "silero_stt",
+ "language": ["en", "de", "es"]
+ },
+ "functionality": "Speech to Text",
+ "env_requirements": ["torchaudio", "torch", "omegaconf", "soundfile"],
+    "example_code": "import torch\nmodel, decoder, utils = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_stt', language='en')",
+    "meta_data": {
+      "description": "Silero Speech-To-Text models provide enterprise-grade STT in a compact form factor.",
+      "performance": { "dataset": "Open STT", "metric": "WER" }
+    },
+ "questions": [
+ "I am a doctor and I want to dictate what my patient is saying and put it into a text doc in my computer.",
+ "My math students need an API to write down what I am saying for reviewing."
+ ]
+ }
+]
+```
+
+### URL JSON Example
+
+```json
+[
+ {
+ "user_name": "example_username_url",
+ "api_name": "Torch Hub ultralytics_yolov5",
+ "api_url": "https://pytorch.org/hub/ultralytics_yolov5/",
+ "questions": [
+ "I am a doctor and I want to dictate what my patient is saying and put it into a text doc in my computer.",
+ "My math students need an API to write down what I am saying for reviewing."
+ ]
+ }
+]
+```
+
+## Best Practices
+
+### Clear Documentation
+
+- Ensure that the URL points to clear and well-documented API information. This helps us generate accurate API JSON.
+
+### Accurate API Calls
+
+- Double-check the API calls in the documentation to ensure they are syntactically and semantically correct.
+
+### Dependencies
+
+- Make sure that the URL lists all necessary dependencies and environment requirements for the API to function properly.
+
+### Test Your API
+
+- Whenever possible, provide working examples or sample code in your API documentation to demonstrate how the API functions.
+
+## Contact and Support
+
+For any questions or issues with your contribution, please reach out through one of the following:
+
+- **Discord**: Join our [community](https://discord.com/invite/grXXvj9Whz) for real-time support.
+- **Paper**: Check out our [paper](https://arxiv.org/abs/2305.15334) for more information.
+- **CLI**: Use [Gorilla in your CLI](https://github.com/gorilla-llm/gorilla-cli) with `pip install gorilla-cli`.
+- **Email**: Contact us at support@gorilla-apistore.com.
+
+**We look forward to your contributions!**
diff --git a/README.md b/README.md
index ea251a0f7..52e5c4141 100644
--- a/README.md
+++ b/README.md
@@ -1,32 +1,31 @@
# Gorilla: Large Language Model Connected with Massive APIs [[Project Website](https://shishirpatil.github.io/gorilla/)]
-
-**🚒 GoEx: A Runtime for executing LLM generated actions like code & API calls** GoEx presents “undo” and “damage confinement” abstractions for mitigating the risk of unintended actions taken in LLM-powered systems. [Release blog](https://gorilla.cs.berkeley.edu/blogs/10_gorilla_exec_engine.html) [Paper](https://arxiv.org/abs/2404.06921).
+**🚒 GoEx: A Runtime for executing LLM generated actions like code & API calls** GoEx presents “undo” and “damage confinement” abstractions for mitigating the risk of unintended actions taken in LLM-powered systems. [Release blog](https://gorilla.cs.berkeley.edu/blogs/10_gorilla_exec_engine.html) [Paper](https://arxiv.org/abs/2404.06921).
-**🎉 Berkeley Function Calling Leaderboard** How do models stack up for function calling? :dart: Releasing the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard). Read more in our [Release Blog](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html).
+**🎉 Berkeley Function Calling Leaderboard** How do models stack up for function calling? :dart: Releasing the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard). Read more in our [Release Blog](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html).
-**:trophy: Gorilla OpenFunctions v2** Sets new SoTA for open-source LLMs :muscle: On-par with GPT-4 :raised_hands: Supports more languages :ok_hand: [Blog](https://gorilla.cs.berkeley.edu/blogs/7_open_functions_v2.html).
+**:trophy: Gorilla OpenFunctions v2** Sets new SoTA for open-source LLMs :muscle: On-par with GPT-4 :raised_hands: Supports more languages :ok_hand: [Blog](https://gorilla.cs.berkeley.edu/blogs/7_open_functions_v2.html).
**:fire: Gorilla OpenFunctions** is a drop-in alternative for function calling! [Release Blog](https://gorilla.cs.berkeley.edu/blogs/4_open_functions.html)
-**🟢 Gorilla is Apache 2.0** With Gorilla being fine-tuned on MPT, and Falcon, you can use Gorilla commercially with no obligations! :golf:
+**🟢 Gorilla is Apache 2.0** With Gorilla being fine-tuned on MPT, and Falcon, you can use Gorilla commercially with no obligations! :golf:
-**:rocket: Try Gorilla in 60s** [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
+**:rocket: Try Gorilla in 60s** [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
:computer: Use [Gorilla in your CLI](https://github.com/gorilla-llm/gorilla-cli) with `pip install gorilla-cli`
-**:fax: Checkout our [blogs](https://gorilla.cs.berkeley.edu/blog.html) for all things tools-use/function-calling!**
+**:fax: Checkout our [blogs](https://gorilla.cs.berkeley.edu/blog.html) for all things tools-use/function-calling!**
**:newspaper_roll: Checkout our paper!** [![arXiv](https://img.shields.io/badge/arXiv-2305.15334-.svg?style=flat-square)](https://arxiv.org/abs/2305.15334)
**:wave: Join our Discord!** [![Discord](https://img.shields.io/discord/1111172801899012102?label=Discord&logo=discord&logoColor=green&style=flat-square)](https://discord.gg/grXXvj9Whz)
-
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, or open a PR, or email us if you would like to have your API incorporated as well.
## News
+
- ⏰: [04/01] Introducing cost and latency metrics into [Berkeley function calling leaderboard](https://gorilla.cs.berkeley.edu/leaderboard)!
- :rocket: [03/15] RAFT: Adapting Language Model to Domain Specific RAG is live! [[MSFT-Meta blog](https://techcommunity.microsoft.com/t5/ai-ai-platform-blog/bg-p/AIPlatformBlog)] [[Berkeley Blog](https://gorilla.cs.berkeley.edu/blogs/9_raft.html)]
- :trophy: [02/26] [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard) is live!
@@ -41,11 +40,11 @@
- :fire: [05/25] We release the APIBench dataset and the evaluation code of Gorilla!
## Gorilla Gradio
+
**Try Gorilla LLM models in [HF Spaces](https://huggingface.co/spaces/gorilla-llm/gorilla-demo/) or [![Gradio Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ktnVWPJOgqTC9hLW8lJPVZszuIddMy7y?usp=sharing)**
![gorilla_webUI_2](https://github.com/TanmayDoesAI/gorilla/assets/85993243/f30645bf-6798-4bd2-ac6e-6943840ae095)
-
-## Get Started
+## Get Started
Inference: Run Gorilla locally [`inference/README.md`](inference/README.md)
@@ -53,15 +52,15 @@ Evaluation: We have included prompts and responses for the APIBench with and wit
## Repository Organization
-Our repository organization is shown below.
+Our repository organization is shown below.
- - The `berkeley-function-call-leaderboard` folder contains scripts for evaluating function-calling ability of models.
- - The `data` folder contains all the evaluation APIs `(APIBench)` and the community contributed APIs.
- - The `eval` folder contains all our evaluation code as well as the Gorilla outputs.
- - The `inference` folder contains all the inference code for running Gorilla locally.
- - The `openfunctions` folder contains the inference code for the OpenFunctions model(s).
+- The `berkeley-function-call-leaderboard` folder contains scripts for evaluating function-calling ability of models.
+- The `data` folder contains all the evaluation APIs `(APIBench)` and the community contributed APIs.
+- The `eval` folder contains all our evaluation code as well as the Gorilla outputs.
+- The `inference` folder contains all the inference code for running Gorilla locally.
+- The `openfunctions` folder contains the inference code for the OpenFunctions model(s).
-For our dataset collections, all the 1640 API documentation is in `data/api`. We also include the `APIBench` dataset created by self-instruct in `data/apibench`. For evaluation, we convert this into a LLM-friendly chat format, and the questions are in `eval/eval-data/questions`, and the corresponding responses are in `eval/eval-data/responses`. We have also included the evaluation scripts are in `eval/eval-scripts`. This would be entirely sufficient to train Gorilla yourself, and reproduce our results. Please see [evaluation](https://github.com/ShishirPatil/gorilla/tree/main/eval) for the details on how to use our evaluation pipeline.
+For our dataset collections, all 1,640 API documents are in `data/api`. We also include the `APIBench` dataset created by self-instruct in `data/apibench`. For evaluation, we convert this into an LLM-friendly chat format: the questions are in `eval/eval-data/questions`, and the corresponding responses are in `eval/eval-data/responses`. The evaluation scripts are in `eval/eval-scripts`. This is entirely sufficient to train Gorilla yourself and reproduce our results. Please see [evaluation](https://github.com/ShishirPatil/gorilla/tree/main/eval) for details on how to use our evaluation pipeline.
Additionally, we have released all the model weights. `gorilla-7b-hf-v0` lets you invoke over 925 Hugging Face APIs. Similarly, `gorilla-7b-tf-v0` and `gorilla-7b-th-v0` have 626 (exhaustive) Tensorflow v2, and 94 (exhaustive) Torch Hub APIs. `gorilla-mpt-7b-hf-v0` and `gorilla-falcon-7b-hf-v0` are Apache 2.0 licensed models (commercially usable) fine-tuned on MPT-7B and Falcon-7B respectively. We will release a model with all three combined with generic chat capability and community contributed APIs as soon as we can scale our serving infrastructure. You can run Gorilla locally from instructions in the `inference/` sub-directory, or we also provide a hosted Gorilla chat completion API (see Colab)! If you have any suggestions, or if you run into any issues please feel free to reach out to us either through Discord or email or raise a Github issue.
@@ -103,8 +102,8 @@ gorilla
```
## Contributing Your API
-We aim to build an open-source, one-stop-shop for all APIs, LLMs can interact with! Any suggestions and contributions are welcome! Please see the details on [how to contribute](https://github.com/ShishirPatil/gorilla/tree/main/data/README.md). THIS WILL ALWAYS REMAIN OPEN SOURCE.
+We aim to build an open-source, one-stop shop for all APIs that LLMs can interact with! Any suggestions and contributions are welcome! Please see the details on [how to contribute](https://github.com/ShishirPatil/gorilla/tree/main/data/README.md) or refer to the [Contribution Guide](https://github.com/ShishirPatil/gorilla/tree/main/data/Contribution.md). THIS WILL ALWAYS REMAIN OPEN SOURCE.
## FAQ(s)
@@ -112,39 +111,36 @@ We aim to build an open-source, one-stop-shop for all APIs, LLMs can interact wi
Yes! We now have models that you can use commercially without any obligations.
-
2. Can we use Gorilla with other tools like Langchain etc?
-Absolutely! You've highlighted a great aspect of our tools. Gorilla is an end-to-end model, specifically tailored to serve correct API calls (tools) without requiring any additional coding. It's designed to work as part of a wider ecosystem and can be flexibly integrated within agentic frameworks and other tools.
+Absolutely! You've highlighted a great aspect of our tools. Gorilla is an end-to-end model, specifically tailored to serve correct API calls (tools) without requiring any additional coding. It's designed to work as part of a wider ecosystem and can be flexibly integrated within agentic frameworks and other tools.
Langchain, is a versatile developer tool. Its "agents" can efficiently swap in any LLM, Gorilla included, making it a highly adaptable solution for various needs.
-The beauty of these tools truly shines when they collaborate, complementing each other's strengths and capabilities to create an even more powerful and comprehensive solution. This is where your contribution can make a difference. We enthusiastically welcome any inputs to further refine and enhance these tools.
+The beauty of these tools truly shines when they collaborate, complementing each other's strengths and capabilities to create an even more powerful and comprehensive solution. This is where your contribution can make a difference. We enthusiastically welcome any inputs to further refine and enhance these tools.
Check out our blog on [How to Use Gorilla: A Step-by-Step Walkthrough](https://gorilla.cs.berkeley.edu/blogs/5_how_to_gorilla.html) to see all the different ways you can integrate Gorilla in your projects.
-
-
## Project Roadmap
In the immediate future, we plan to release the following:
-- [ ] BFCL metrics to evaluate contamination
+- [ ] BFCL metrics to evaluate contamination
- [ ] BFCL systems metrics including cost and latency
- [ ] BFCL update with "live" data and user-votes
-- [ ] Openfunctions-v3 model to support more languages and multi-turn capability
+- [ ] Openfunctions-v3 model to support more languages and multi-turn capability
- [x] Berkeley Function Calling leaderboard (BFCL) for evaluating tool-calling/function-calling models [Feb 26, 2024]
- [x] Openfunctions-v2 with more languages (Java, JS, Python), relevance detection [Feb 26, 2024]
- [x] API Zoo Index for easy access to all APIs [Feb 16, 2024]
- [x] Openfunctions-v1, Apache 2.0, with parallel and multiple function calling [Nov 16, 2023]
- [x] Openfunctions-v0, Apache 2.0 function calling model [Nov 16, 2023]
-- [X] Release a commercially usable, Apache 2.0 licensed Gorilla model [Jun 5, 2023]
-- [X] Release weights for all APIs from APIBench [May 28, 2023]
-- [X] Run Gorilla LLM locally [May 28, 2023]
-- [X] Release weights for HF model APIs [May 27, 2023]
-- [X] Hosted Gorilla LLM chat for HF model APIs [May 27, 2023]
-- [X] Opening up the APIZoo for contributions from community
-- [X] Dataset and Eval Code
+- [x] Release a commercially usable, Apache 2.0 licensed Gorilla model [Jun 5, 2023]
+- [x] Release weights for all APIs from APIBench [May 28, 2023]
+- [x] Run Gorilla LLM locally [May 28, 2023]
+- [x] Release weights for HF model APIs [May 27, 2023]
+- [x] Hosted Gorilla LLM chat for HF model APIs [May 27, 2023]
+- [x] Opening up the APIZoo for contributions from community
+- [x] Dataset and Eval Code
Propose a new task you would like to work on :star_struck:
@@ -158,5 +154,5 @@ If you use Gorilla or APIBench, please cite our paper:
author={Shishir G. Patil and Tianjun Zhang and Xin Wang and Joseph E. Gonzalez},
year={2023},
journal={arXiv preprint arXiv:2305.15334},
-}
+}
```