Update version to 2.0 and add Groq-AI models
haseeb-heaven committed Feb 26, 2024
1 parent 705a150 commit 2ab1ecd
Showing 6 changed files with 91 additions and 6 deletions.
26 changes: 23 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
@@ -84,7 +84,7 @@ pip install -r requirements.txt

*Step 1:* **Obtain the HuggingFace API key.**

*Step 2:* Visit the following URL: *https://huggingface.co/settings/tokens* and get your Access Token.
*Step 2:* Visit the following URL: [HuggingFace Tokens](https://huggingface.co/settings/tokens) and get your Access Token.

*Step 3:* Save the token in a `.env` file as:<br>
```bash
@@ -95,7 +95,7 @@ echo "HUGGINGFACE_API_KEY=Your Access Token" > .env

*Step 1:* **Obtain the Google Palm API key.**

*Step 2:* Visit the following URL: *https://makersuite.google.com/app/apikey*
*Step 2:* Visit the following URL: [Google AI Studio](https://makersuite.google.com/app/apikey)

*Step 3:* Click on the **Create API Key** button.

@@ -109,7 +109,7 @@ echo "GEMINI_API_KEY=Your API Key" > .env

*Step 1:* **Obtain the OpenAI API key.**

*Step 2:* Visit the following URL: *https://platform.openai.com/account/api-keys*
*Step 2:* Visit the following URL: [OpenAI Dashboard](https://platform.openai.com/account/api-keys)

*Step 3:* Sign up for an account or log in if you already have one.

@@ -122,6 +122,23 @@ echo "GEMINI_API_KEY=Your API Key" > .env
echo "OPENAI_API_KEY=Your API Key" > .env
```

## Groq AI API Key setup.

*Step 1:* **Obtain the Groq AI API key.**

*Step 2:* Visit the following URL: [Groq AI Console](https://console.groq.com/keys)

*Step 3:* Sign up for an account or log in if you already have one.

*Step 4:* Navigate to the API section in your account.

*Step 5:* Click on the **Create API Key** button.

*Step 6:* The generated key is your API key. Please make sure to **copy** it and **paste** it in the required field below.
```bash
echo "GROQ_API_KEY=Your API Key" > .env
```
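At startup the interpreter reads this key back (the diff below uses `python-dotenv`). As an illustration only — a minimal standard-library sketch, not the project's actual loader — the lookup and the `gsk` prefix check added in this commit can be mimicked like this:

```python
import os
import tempfile

def load_groq_key(env_path: str) -> str:
    """Return GROQ_API_KEY from the environment or a .env file, with basic validation."""
    key = os.environ.get("GROQ_API_KEY", "")
    if not key and os.path.exists(env_path):
        with open(env_path) as f:
            for line in f:
                name, _, value = line.strip().partition("=")
                if name == "GROQ_API_KEY":
                    key = value.strip().strip('"')
    if not key:
        raise RuntimeError("GroqAI Key not found in .env file.")
    if not key.startswith("gsk"):
        raise RuntimeError("GroqAI token should start with 'gsk'.")
    return key

# Demo with a throwaway .env file (the key below is fake).
os.environ.pop("GROQ_API_KEY", None)
with tempfile.TemporaryDirectory() as tmp:
    env_file = os.path.join(tmp, ".env")
    with open(env_file, "w") as f:
        f.write('GROQ_API_KEY="gsk_example_not_real"\n')
    print(load_groq_key(env_file))  # prints gsk_example_not_real
```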

# Offline models setup.<br>
This interpreter supports offline models via **LM Studio**, so download it from [here](https://lmstudio.ai/) and follow the steps below.
- Download any model from **LM Studio**, such as _Phi-2_, _Code-Llama_, or _Mistral_.
@@ -185,6 +202,8 @@ To use Open-Code-Interpreter, use the following command options:
- `gpt-4` - Generates code using the GPT 4 model.
- `gemini-pro` - Generates code using the Gemini Pro model.
- `palm-2` - Generates code using the PALM 2 model.
- `groq-mixtral` - Generates code using the Groq Mixtral model.
- `groq-llama2` - Generates code using the Groq Llama2 model.
- `code-llama` - Generates code using the Code-llama model.
- `code-llama-phind` - Generates code using the Code-llama Phind model.
- `mistral-7b` - Generates code using the Mistral 7b model.
@@ -320,6 +339,7 @@ If you're interested in contributing to **Open-Code-Interpreter**, we'd love to
- **v1.9.1** - Fixed **Unit Tests** and **History Args** <br>
- **v1.9.2** - Updated **Google Vision** to use LiteLLM instead of **Google GenAI**.<br>
- **v1.9.3** - Added **Local Models** Support via **LM Studio**.<br>
- **v2.0** - Added **Groq-AI** models, the _fastest LLMs_ at up to **500 tokens/sec**, with the _Llama2_ and _Mixtral_ models.<br>

## 📜 **License**

17 changes: 17 additions & 0 deletions configs/groq-llama2.config
@@ -0,0 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = groq-llama2
17 changes: 17 additions & 0 deletions configs/groq-mixtral.config
@@ -0,0 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = groq-mixtral
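These `.config` files are flat `key = value` text without a section header. Hypothetically — this is an illustration, not the project's actual config loader — they can be read with the standard library by prepending a dummy section for `configparser`:

```python
import configparser

# A fragment of configs/groq-mixtral.config in the flat key = value format.
raw = """\
temperature = 0.1
max_tokens = 1024
skip_first_line = True
HF_MODEL = groq-mixtral
"""

# configparser requires a section header, which these files lack,
# so a [DEFAULT] section is prepended before parsing.
parser = configparser.ConfigParser()
parser.read_string("[DEFAULT]\n" + raw)
cfg = parser["DEFAULT"]

print(float(cfg["temperature"]), int(cfg["max_tokens"]), cfg["HF_MODEL"])
```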
2 changes: 1 addition & 1 deletion interpreter
@@ -29,7 +29,7 @@ def main():
parser.add_argument('--save_code', '-s', action='store_true', default=False, help='Save the generated code')
parser.add_argument('--mode', '-md', choices=['code', 'script', 'command', 'vision', 'chat'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single-line commands, `vision` for generating text from images, `chat` for chatting)')
parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.9.3')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 2.0')
parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
parser.add_argument('--display_code', '-dc', action='store_true', default=False, help='Display the code in output')
parser.add_argument('--history', '-hi', action='store_true', default=False, help='Use history as memory')
2 changes: 1 addition & 1 deletion interpreter.py
@@ -28,7 +28,7 @@ def main():
parser.add_argument('--save_code', '-s', action='store_true', default=False, help='Save the generated code')
parser.add_argument('--mode', '-md', choices=['code', 'script', 'command', 'vision', 'chat'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single-line commands, `vision` for generating text from images, `chat` for chatting)')
parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.9.3')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 2.0')
parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
parser.add_argument('--display_code', '-dc', action='store_true', default=False, help='Display the code in output')
parser.add_argument('--history', '-hi', action='store_true', default=False, help='Use history as memory')
33 changes: 32 additions & 1 deletion libs/interpreter_lib.py
@@ -28,7 +28,7 @@
class Interpreter:
logger = None
client = None
interpreter_version = "1.9.3"
interpreter_version = "2.0"

def __init__(self, args):
self.args = args
@@ -108,6 +108,7 @@ def initialize_client(self):

self.logger.info(f"Using model {hf_model_name}")

# checking if the model is from OpenAI
if "gpt" in self.INTERPRETER_MODEL:
if os.getenv("OPENAI_API_KEY") is None:
load_dotenv()
@@ -121,7 +122,23 @@ def initialize_client(self):
raise Exception("OpenAI Key not found in .env file.")
elif not hf_key.startswith('sk-'):
raise Exception("OpenAI token should start with 'sk-'. Please check your .env file.")
# checking if the model is from Groq.
elif "groq" in self.INTERPRETER_MODEL:
if os.getenv("GROQ_API_KEY") is None:
load_dotenv()
if os.getenv("GROQ_API_KEY") is None:
# if there is no .env file, try to load from the current working directory
load_dotenv(dotenv_path=os.path.join(os.getcwd(), ".env"))

# Read the token from the .env file
groq_key = os.getenv('GROQ_API_KEY')
if not groq_key:
raise Exception("GroqAI Key not found in .env file.")
elif not groq_key.startswith('gsk'):
raise Exception("GroqAI token should start with 'gsk'. Please check your .env file.")

# checking if the model is from Google AI.
model_api_keys = {
"palm": "PALM_API_KEY",
"gemini-pro": "GEMINI_API_KEY"
@@ -143,6 +160,7 @@ def initialize_client(self):
raise Exception(f"{api_key_name} not found in .env file.")
elif " " in api_key or len(api_key) <= 15:
raise Exception(f"{api_key_name} should have no spaces, length greater than 15. Please check your .env file.")

else:
if os.getenv("HUGGINGFACE_API_KEY") is None:
load_dotenv()
@@ -283,7 +301,20 @@ def generate_content(self,message, chat_history: list[tuple[str, str]], temperat
self.INTERPRETER_MODEL = "gemini/gemini-pro"
response = litellm.completion(self.INTERPRETER_MODEL, messages=messages,temperature=temperature)
self.logger.info("Response received from completion function.")

# Check if the model is Groq-AI
elif 'groq' in self.INTERPRETER_MODEL:

if 'groq-llama2' in self.INTERPRETER_MODEL:
self.logger.info("Model is Groq/Llama2.")
self.INTERPRETER_MODEL = "groq/llama2-70b-4096"
elif 'groq-mixtral' in self.INTERPRETER_MODEL:
self.logger.info("Model is Groq/Mixtral.")
self.INTERPRETER_MODEL = "groq/mixtral-8x7b-32768"

response = litellm.completion(self.INTERPRETER_MODEL, messages=messages,temperature=temperature,max_tokens=max_tokens)
self.logger.info("Response received from completion function.")

# Check if the model is Local Model
elif 'local' in self.INTERPRETER_MODEL:
self.logger.info("Model is Local model")
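The alias-to-LiteLLM mapping that `generate_content` gains in this commit can be sketched on its own. The model identifiers are taken from the diff; the dict and helper names are illustrative, not part of the codebase:

```python
# LiteLLM model ids for the groq-* aliases added in this commit.
GROQ_MODEL_MAP = {
    "groq-llama2": "groq/llama2-70b-4096",
    "groq-mixtral": "groq/mixtral-8x7b-32768",
}

def resolve_model(name: str) -> str:
    """Map a groq-* alias to its LiteLLM id; other names pass through unchanged."""
    return GROQ_MODEL_MAP.get(name, name)

print(resolve_model("groq-mixtral"))  # prints groq/mixtral-8x7b-32768
```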
