
refactor: Enhance prompt, abstract const #14

Merged: kmesiab merged 1 commit into main from feat/enhance-prompt on Jan 17, 2024

Conversation

@kmesiab kmesiab (Owner) commented Jan 17, 2024

Update model; instruct wasn't working

Summary by CodeRabbit

  • New Features

    • Updated the default AI model to the "gpt-3.5-turbo" version.
    • Integrated a new prompt parameter to enhance code review processes.
  • Refactor

    • Modified the code review functionality to accept custom prompts for improved guidance and output quality.


coderabbitai bot commented Jan 17, 2024

Walkthrough

The codebase has been updated to refine the AI model interaction and enhance the code review process. The AI model default has been switched to a more specific version, and a new prompt parameter has been integrated into the code review functionality to guide the review's focus on crucial software engineering principles.

Changes

File(s) and change summary (a minimal sketch of how these pieces fit together follows the list):
  • main.go: Changed the DefaultOpenAIModel constant, added a prompt parameter to getCodeReviews, and updated its usage in main.
  • openai.go: Updated the GetCompletion method to accept a new prompt parameter.
  • prompt.go: Introduced a constant Prompt with instructions for code reviews.
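
For orientation, here is a minimal sketch of how these pieces might be wired together. The identifiers DefaultOpenAIModel, Prompt, and getCodeReviews come from the summary above; the prompt text, the signatures, and the simplified call flow are illustrative assumptions, not the repository's actual code:

package main

import "fmt"

// Hypothetical stand-in for the Prompt constant introduced in prompt.go.
// Only the use of a %s placeholder for the diff is suggested by the review below.
const Prompt = "Review the following diff for sound software engineering practices " +
	"and respond with Markdown feedback:\n\n%s"

// DefaultOpenAIModel as updated in main.go.
const DefaultOpenAIModel = "gpt-3.5-turbo"

// getCodeReviews now receives the prompt explicitly (assumed, simplified signature).
func getCodeReviews(diff string, prompt string) string {
	// The real implementation calls the completion service; this only shows
	// the prompt being combined with a diff.
	return fmt.Sprintf(prompt, diff)
}

func main() {
	fmt.Println(getCodeReviews("diff --git a/main.go b/main.go", Prompt))
}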

Poem

🐇 "In the realm of code, changes are afoot,
🌟 With prompts so clear, reviews take root.
🚀 To the stars we aim, with AI so bright,
📝 Markdown feedback, to make our code right."


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository from git and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

@kmesiab kmesiab merged commit e7b13ad into main Jan 17, 2024
5 checks passed
@kmesiab kmesiab deleted the feat/enhance-prompt branch January 17, 2024 17:37

@coderabbitai coderabbitai bot left a comment


Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between b927110 and 8b0638a.
Files selected for processing (3)
  • main.go (3 hunks)
  • openai.go (1 hunks)
  • prompt.go (1 hunks)
Files skipped from review due to trivial changes (1)
  • prompt.go
Additional comments: 4
openai.go (2)
  • 11-11: The method signature for GetCompletion has been updated to include a prompt parameter. Ensure that all implementations and usages of this interface method are updated accordingly.
  • 16-16: The implementation of GetCompletion in OpenAICompletionService now uses the prompt parameter. Verify that the prompt format string is correctly designed to accept the diff as a formatting argument.
Verification successful

The prompt parameter in the GetCompletion method of the OpenAICompletionService is correctly designed to accept the diff as a formatting argument, as evidenced by the line fullPrompt := fmt.Sprintf(prompt, diff) in the method implementation.
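
The verified pattern can be illustrated with a small, self-contained sketch. Apart from the prompt parameter and the fullPrompt := fmt.Sprintf(prompt, diff) line quoted above, the receiver, parameter names, and return value below are assumptions rather than the actual openai.go code:

package main

import (
	"context"
	"fmt"
)

// Stand-in for the service defined in openai.go.
type OpenAICompletionService struct{}

// GetCompletion sketches the updated signature: prompt is a format string and
// the diff is substituted into it before the API call.
func (s *OpenAICompletionService) GetCompletion(ctx context.Context, model, diff, prompt string) (string, error) {
	fullPrompt := fmt.Sprintf(prompt, diff)
	// The real implementation would send fullPrompt to the OpenAI API using the given model.
	return fullPrompt, nil
}

func main() {
	svc := &OpenAICompletionService{}
	out, _ := svc.GetCompletion(context.Background(), "gpt-3.5-turbo",
		"diff --git a/main.go b/main.go", "Please review this diff:\n%s")
	fmt.Println(out)
}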

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the prompt format string is correctly designed to accept the diff as a formatting argument.
# This will search for the prompt variable definition and usage to ensure it's correctly formatted.
rg --type go --context 5 'prompt :='

Length of output: 38



Script:

#!/bin/bash
# Search for the usage of the `prompt` parameter within the `GetCompletion` method in `openai.go`.
ast-grep --lang go --pattern 'func ($_ *OpenAICompletionService) GetCompletion($_, $_, $_, prompt string) (string, error) { $$$ }'

Length of output: 1031

main.go (2)
  • 62-62: The getCodeReviews function now accepts a prompt parameter. Ensure that all calls to this function have been updated to pass the new parameter.
  • 117-125: The concurrency implementation in getCodeReviews uses goroutines and channels. Ensure that there is proper synchronization and error handling for concurrent operations; one common pattern is sketched below.
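
On the concurrency point, one properly synchronized shape for this kind of fan-out is sketched here. It assumes each diff is reviewed in its own goroutine and results are collected over a channel; it is a generic pattern, not the repository's actual getCodeReviews implementation:

package main

import (
	"fmt"
	"sync"
)

type reviewResult struct {
	review string
	err    error
}

// reviewAll fans out one goroutine per diff and collects results over a buffered
// channel. The WaitGroup closes the channel once every goroutine has finished,
// so the consumer loop terminates and no goroutine is leaked.
func reviewAll(diffs []string, review func(string) (string, error)) ([]string, error) {
	results := make(chan reviewResult, len(diffs))
	var wg sync.WaitGroup

	for _, diff := range diffs {
		wg.Add(1)
		go func(d string) {
			defer wg.Done()
			r, err := review(d)
			results <- reviewResult{review: r, err: err}
		}(diff)
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	var reviews []string
	for res := range results {
		if res.err != nil {
			return nil, res.err // surface errors instead of silently dropping them
		}
		reviews = append(reviews, res.review)
	}
	return reviews, nil
}

func main() {
	reviews, err := reviewAll([]string{"diff A", "diff B"}, func(d string) (string, error) {
		return "LGTM: " + d, nil
	})
	fmt.Println(reviews, err)
}

Because the channel is buffered to len(diffs), any goroutines still running after an early error return can finish without blocking.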

@@ -15,7 +15,7 @@ import (
 
 const (
 	DefaultFilePerms   = 0o644
-	DefaultOpenAIModel = "gpt-3.5-turbo-instruct"
+	DefaultOpenAIModel = "gpt-3.5-turbo"


The search results confirm that the DefaultOpenAIModel constant is used in main.go and has been updated to "gpt-3.5-turbo". Additionally, the constant is referenced in the assignment of cliArgs.CompletionModel, which suggests that the change is indeed reflected in the code where it's used.

However, the search also reveals that the old model name "gpt-3.5-turbo-instruct" is still present in the README.md file. This indicates that the documentation has not been updated to reflect the change in the default model.

Based on these findings:

  • The change to the DefaultOpenAIModel constant is reflected in the codebase.
  • The documentation in README.md needs to be updated to align with the new default model.
Analysis chain

The DefaultOpenAIModel constant has been updated. Confirm that this change is reflected wherever the constant is used and that the new model is compatible with existing functionality.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Confirm that the DefaultOpenAIModel constant change is reflected throughout the codebase.
rg --type go 'DefaultOpenAIModel'
# Additionally, check if there's any documentation or configuration that needs to be updated with the new model.
rg --type md --type yaml --type json 'gpt-3.5-turbo-instruct'

Length of output: 286
