
feature: Use the GPT 3.5-instruct model #9

Merged: 1 commit merged into main on Jan 17, 2024

Conversation

kmesiab (Owner) commented on Jan 17, 2024

Summary by CodeRabbit

  • New Features
    • Updated the default AI model to enhance performance and user experience.

coderabbitai (bot) commented on Jan 17, 2024

Walkthrough

This update changes the software's default AI model from "gpt-4-1106-preview" to "gpt-3.5-turbo-instruct", possibly to optimize performance, cost, or compatibility.

Changes

File(s): main.go
Summary of Changes: Updated the default OpenAI model from "gpt-4-1106-preview" to "gpt-3.5-turbo-instruct".
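Judging from the summary, the change in main.go likely amounts to a one-line constant update. A minimal sketch (only the constant name `DefaultOpenAIModel` appears in the review below; the surrounding code is an assumption):

```go
package main

import "fmt"

// DefaultOpenAIModel is the model used when no model is supplied at
// runtime. The PR swaps the GPT-4 preview identifier for the GPT-3.5
// instruct variant.
const DefaultOpenAIModel = "gpt-3.5-turbo-instruct"

func main() {
	fmt.Println("default model:", DefaultOpenAIModel)
}
```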

🐇✨
To code anew, we hop and leap,
From GPT-four to three-point-five,
With every line, we make the change,
Our codebase grows, it comes alive.

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository from git and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

@kmesiab kmesiab merged commit 520f773 into main Jan 17, 2024
5 checks passed
@kmesiab kmesiab deleted the feat/set-default-model-to-3.5-instruct branch January 17, 2024 16:24
coderabbitai (bot) left a comment


Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 0dc6cd2 and c694354.
Files selected for processing (1)
  • main.go (1 hunks)
Additional comments: 3
main.go (3)
  • 18-18: The constant DefaultOpenAIModel has been updated to use "gpt-3.5-turbo-instruct". Ensure that all parts of the codebase that rely on this constant are compatible with the GPT-3.5 model's API and behavior.
Verification successful

The provided script output reveals the usage of the gptModel parameter within the CreateChatCompletion call. The gptModel is directly assigned to the Model field in the ChatCompletionRequest struct, which is then used to make the API call.

Given that the gptModel parameter is used in a straightforward manner to specify the model for the API request, and there are no additional conditions or configurations tied to a specific model version, we can infer that the code is designed to work with any model that is compatible with the OpenAI API's CreateChatCompletion endpoint.

Therefore, as long as the "gpt-3.5-turbo-instruct" model is compatible with the CreateChatCompletion endpoint of the OpenAI API, the change in the DefaultOpenAIModel constant to "gpt-3.5-turbo-instruct" should be compatible with the existing codebase.
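The flow described above can be sketched with simplified stand-in types. The struct below mirrors the shape of the go-openai request the review refers to, but it is an assumption for illustration, not the project's actual definition:

```go
package main

import "fmt"

// ChatCompletionRequest is a simplified stand-in for the library's
// request struct; per the review, gptModel is copied straight into
// its Model field.
type ChatCompletionRequest struct {
	Model  string
	Prompt string
}

// buildRequest mirrors how the model parameter is passed through:
// there is no model-specific branching, so any identifier the
// endpoint accepts will work.
func buildRequest(gptModel, prompt string) ChatCompletionRequest {
	return ChatCompletionRequest{Model: gptModel, Prompt: prompt}
}

func main() {
	req := buildRequest("gpt-3.5-turbo-instruct", "Review this diff.")
	fmt.Println(req.Model) // prints "gpt-3.5-turbo-instruct"
}
```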

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the new model is compatible with the existing codebase
rg --type go 'DefaultOpenAIModel'

Length of output: 143



Script:

#!/bin/bash
# Check where `cliArgs.CompletionModel` is used to ensure compatibility with the GPT-3.5 model's API and behavior.
rg --type go 'cliArgs.CompletionModel'

Length of output: 137



Script:

#!/bin/bash
# Search for OpenAI API calls or interactions that specify the model to ensure compatibility with the new GPT-3.5 model.
rg --type go 'OpenAI.*Model'

Length of output: 247



Script:

#!/bin/bash
# Examine the GetCompletion function in openai.go to verify compatibility with the "gpt-3.5-turbo-instruct" model.
rg --type go -A 10 'func \(s \*OpenAICompletionService\) GetCompletion' openai.go

Length of output: 1000



Script:

#!/bin/bash
# Retrieve more of the GetCompletion function body to see how the gptModel parameter is used.
rg --type go -A 20 'func \(s \*OpenAICompletionService\) GetCompletion' openai.go

Length of output: 1658



Script:

#!/bin/bash
# Attempt to retrieve the complete GetCompletion function body to examine the usage of the gptModel parameter.
rg --type go -C 30 'func \(s \*OpenAICompletionService\) GetCompletion' openai.go

Length of output: 2107



Script:

#!/bin/bash
# Retrieve the rest of the CreateChatCompletion call to see how the gptModel parameter is used.
rg --type go -A 30 'CreateChatCompletion' openai.go

Length of output: 395

  • 18-18: The `coalesceConfiguration` function correctly defaults to `DefaultOpenAIModel` when no model is provided. This change should be tested to ensure that the new default model is being used when expected.
  • 18-18: The `getCodeReviews` function uses the `CompletionModel` argument, which will now default to the new `DefaultOpenAIModel`. It is important to verify that the GPT-3.5 model's output is handled correctly in the context of code reviews.
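The defaulting behaviour that `coalesceConfiguration` is said to implement can be sketched as follows (the function and parameter names here are simplified assumptions; the real function operates on a full CLI-args struct):

```go
package main

import "fmt"

const DefaultOpenAIModel = "gpt-3.5-turbo-instruct"

// coalesceModel falls back to DefaultOpenAIModel when the caller
// provides no model, mirroring what the review says
// coalesceConfiguration does for the CompletionModel argument.
func coalesceModel(model string) string {
	if model == "" {
		return DefaultOpenAIModel
	}
	return model
}

func main() {
	fmt.Println(coalesceModel(""))       // falls back to the default
	fmt.Println(coalesceModel("gpt-4o")) // an explicit model wins
}
```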
Verification successful

The getCodeReviews function is confirmed to use the CompletionModel specified at runtime, with the DefaultOpenAIModel set as the fallback when no model is provided. The code correctly defaults to the new model, and there is no evidence of incompatibility with the GPT-3.5 model in the provided context.
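A simplified view of how the resolved model reaches getCodeReviews (the signature and body are assumptions; only the function name and the CompletionModel argument appear in the review):

```go
package main

import "fmt"

// getCodeReviews is a sketch: it receives whatever CompletionModel
// resolved to after defaulting and forwards it to the completion
// call for each diff. Here the API call is replaced by a string so
// the flow is visible.
func getCodeReviews(completionModel string, diffs []string) []string {
	reviews := make([]string, 0, len(diffs))
	for _, d := range diffs {
		// The real code would call the OpenAI completion service
		// with completionModel; here we just record which model
		// would have been used.
		reviews = append(reviews, "["+completionModel+"] review of: "+d)
	}
	return reviews
}

func main() {
	out := getCodeReviews("gpt-3.5-turbo-instruct", []string{"main.go"})
	fmt.Println(out[0]) // prints "[gpt-3.5-turbo-instruct] review of: main.go"
}
```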

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the getCodeReviews function is compatible with the new model
rg --type go 'getCodeReviews'

Length of output: 200



Script:

#!/bin/bash
# Inspect the getCodeReviews function to understand how it handles the model parameter
rg --type go -A 20 'func getCodeReviews' main.go

Length of output: 581



Script:

#!/bin/bash
# Find the call to getCodeReviews and check how the DefaultOpenAIModel is passed to it
rg --type go 'getCodeReviews\(' main.go -A 3 -B 3

Length of output: 454



Script:

#!/bin/bash
# Find where argv.CompletionModel is set to DefaultOpenAIModel
rg --type go 'CompletionModel' main.go -A 3 -B 3

Length of output: 842
