
Different output generated for same prompt in chat mode and API mode using Claude Sonnet 3.5 #11

Open
Saeedr85 opened this issue Sep 5, 2024 · 4 comments

Comments


Saeedr85 commented Sep 5, 2024

Dear Dev,
I hope this finds you well,

I am having a problem getting a high-quality translation through the API.
When I use a prompt in chat mode with Claude Sonnet 3.5, I get a high-quality translation. When I use the same prompt through the plugin (via the API), I get an ordinary translation similar to Google Translate, even though I am using the same temperature and model.
The most expensive workaround was writing a very long prompt (about 300 words) to teach the model how to translate the text and produce a high-quality translation, but even that did not always work.
I searched the web for a solution, but in vain.
What I did find was a number of threads discussing editing the calling code and the System Role option.
I also found a plugin that uses an AI model for translation within memoQ.

Claude Prompt Engineering: Give Claude a role with a system prompt
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts#how-to-use-system-prompts

Threads:
https://community.openai.com/t/how-to-reproduce-the-response-from-the-chat-in-the-api/297690/2
https://community.openai.com/t/api-completions-not-really-matching-with-chat-openai-gpt-3-5-completions/305270
https://community.openai.com/t/chatgpt-and-api-results-are-quite-different/314892
https://community.openai.com/t/different-output-generated-for-same-prompt-in-chat-mode-and-api-mode-using-gpt-3-5-turbo/318246/2

Plugin that uses GPT for translation:
https://custom.mt/memoq-connector/

Hopefully you can make some changes to the code or interface so the API produces results nearly identical to chat mode.


JuchiaLu commented Sep 5, 2024

The plugin currently uses the System Prompt instead of the User Prompt. You can test both at Anthropic Console to see which one produces better output.

Then let me know the results; I'd be happy to modify the code accordingly.
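To make the comparison concrete, here is a minimal sketch (no network call, model ID and prompt text are placeholders) contrasting the two request shapes for the Anthropic Messages API: translation instructions as a top-level system prompt versus prepended to the user message.

```python
# Hypothetical placeholders for illustration only.
TRANSLATION_INSTRUCTIONS = "You are an expert literary translator. Preserve tone and register."
SOURCE_TEXT = "Text to translate."

def build_request(use_system_prompt: bool) -> dict:
    """Build a Messages API request body in one of the two shapes."""
    body = {
        "model": "claude-3-5-sonnet-20240620",  # illustrative model ID
        "max_tokens": 1024,
        "temperature": 0.7,  # keep identical to the chat-mode setting
    }
    if use_system_prompt:
        # Instructions go in the top-level "system" field;
        # only the source text goes in the user turn.
        body["system"] = TRANSLATION_INSTRUCTIONS
        body["messages"] = [{"role": "user", "content": SOURCE_TEXT}]
    else:
        # Instructions are prepended to the user turn instead.
        body["messages"] = [{
            "role": "user",
            "content": f"{TRANSLATION_INSTRUCTIONS}\n\n{SOURCE_TEXT}",
        }]
    return body
```

Sending each body to the API (or pasting the equivalent into the Console) with everything else held constant isolates the effect of prompt placement.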


JuchiaLu commented Sep 5, 2024

Anthropic has published Claude's system prompts, which are very long. You can paste one into the System Prompt section of the Console, place the content to be translated in the User Prompt section, and test whether it produces the same results.


Saeedr85 commented Sep 7, 2024

Thank you for your prompt reply,

Firstly, I discovered that the issue was caused by placing the prompt in the user role instead of the system role within the workbench. That was one mistake. Secondly, I had not noticed the temperature setting in the workbench, which is why I saw a difference between the output quality of the workbench and of the API call through the plugin.
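Mismatches like the temperature one above are easy to catch by diffing the two request bodies before sending them. A small helper (hypothetical, for illustration; values are placeholders) could look like this:

```python
def diff_requests(a: dict, b: dict) -> dict:
    """Return {key: (value_in_a, value_in_b)} for every top-level
    parameter that differs between two API request bodies."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Example: temperature set in the workbench but left at default in the plugin.
workbench = {"model": "claude-3-5-sonnet-20240620", "temperature": 0.7,
             "system": "translation instructions here"}
plugin = {"model": "claude-3-5-sonnet-20240620", "temperature": 1.0,
          "system": "translation instructions here"}

print(diff_requests(workbench, plugin))  # {'temperature': (0.7, 1.0)}
```

An empty diff means the two calls are parameter-identical, so any remaining quality gap must come from something outside the request, such as the extra context Claude.ai adds.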

In addition, I found that the output quality of the chat on Claude.ai differs from that of the API for reasons related to context, built-in settings, or system prompts that are not applied to API calls.

Thanks again and best regards.

JuchiaLu commented

The new version of the code has significant changes and still needs some time to be perfected, so I've decided to provide a temporary version first. This version already has all the functionality, but its stability cannot be guaranteed yet. If you encounter any bugs, please report them.

Note that existing configurations will become invalid. If needed, please back them up before installing the new version.

Multi-Supplier-MT-Plugin-1.2.9.zip
