
o1-preview is not supported #255

Open
7 tasks done
FlintyLemming opened this issue Oct 30, 2024 · 6 comments
Labels
bug Something isn't working

Comments

@FlintyLemming

  • I have confirmed that there is no similar existing issue
  • I have confirmed that I have upgraded to the latest version
  • I have read the project README and documentation in full and found no solution
  • I understand and am willing to follow up on this issue, assist with testing, and provide feedback
  • I will ask questions politely and respectfully and will not use uncivil language (this also applies to everyone commenting here; those who do not comply will be blocked)
  • If this is a new provider format, I have confirmed that the provider has a reasonable user base and reputation; requests for relay sites made for advertising or promotion will be closed directly
  • I understand and accept the above, and I understand that the maintainers' time is limited; issues that do not follow the rules may be ignored or closed directly

Provider name

OpenAI o1-preview

Description

An error is returned when calling the o1 API:

Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead. (type: invalid_request_error)

Reference issue:
janhq/jan#3698
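On the proxy side, the fix amounts to renaming the parameter for o1-family models before forwarding the request. A minimal sketch, assuming hypothetical struct and function names (not the project's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Message is one chat turn in the OpenAI request body.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// ChatRequest mirrors the subset of the chat-completions body
// relevant here (hypothetical struct, not the project's own).
type ChatRequest struct {
	Model               string    `json:"model"`
	Messages            []Message `json:"messages"`
	MaxTokens           int       `json:"max_tokens,omitempty"`
	MaxCompletionTokens int       `json:"max_completion_tokens,omitempty"`
}

// isO1Model reports whether the target model rejects max_tokens.
func isO1Model(model string) bool {
	return strings.HasPrefix(model, "o1")
}

// adaptForO1 moves max_tokens into max_completion_tokens for
// o1-family models; omitempty then drops the old field from JSON.
func adaptForO1(req *ChatRequest) {
	if isO1Model(req.Model) && req.MaxTokens > 0 {
		req.MaxCompletionTokens = req.MaxTokens
		req.MaxTokens = 0
	}
}

func main() {
	req := ChatRequest{
		Model:     "o1-preview",
		Messages:  []Message{{Role: "user", Content: "hello"}},
		MaxTokens: 2000,
	}
	adaptForO1(&req)
	body, _ := json.Marshal(&req)
	// {"model":"o1-preview","messages":[{"role":"user","content":"hello"}],"max_completion_tokens":2000}
	fmt.Println(string(body))
}
```

Non-o1 models are left untouched, so the rewrite is safe to apply on every outgoing request.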

@FlintyLemming FlintyLemming added the feature New feature or request label Oct 30, 2024

@zmh-program zmh-program added bug Something isn't working and removed feature New feature or request labels Nov 2, 2024
@BruceLee569

BruceLee569 commented Dec 4, 2024

+1, o1-mini doesn't work either.

Error message shown in the frontend: Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 1, but got 0 instead. (type: invalid_request_error)

Detailed log:

[DEBUG] - [2024-12-04 13:51:35] - [tiktoken] error encoding messages: no encoding for model o1-mini (model: o1-mini), using default model instead
[DEBUG] - [2024-12-04 13:51:35] - [tiktoken] num tokens from messages: 18 (tokens per message: 3, model: gpt-3.5-turbo-0613)
[DEBUG] - [2024-12-04 13:51:35] - [sse] event source: POST https://api.vveai.com/v1/chat/completions
headers: {"Authorization":"Bearer sk-***","Content-Type":"application/json"}
body: {"model":"o1-mini","messages":[{"role":"user","content":"你好"}],"max_tokens":2000,"stream":true,"presence_penalty":0,"frequency_penalty":0,"temperature":0.6,"top_p":1}
[DEBUG] - [2024-12-04 13:51:36] - [sse] request failed with status: 400 Bad Request
response: {"error":{"message":"Invalid 'max_completion_tokens': integer below minimum value. Expected a value \u003e= 1, but got 0 instead. (request id: 2024120413513629558589302506421) (request id: 2024120413513625503153063080363)","type":"invalid_request_error","param":"max_completion_tokens","code":"integer_below_min_value"}}
[WARNING] - [2024-12-04 13:51:36] - [channel] caught error Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 1, but got 0 instead.   (type: invalid_request_error) for model o1-mini at channel api-vveai-com
[INFO] - [2024-12-04 13:51:36] - [channel] channels are exhausted for model o1-mini
[WARNING] - [2024-12-04 13:51:36] - Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 1, but got 0 instead.   (type: invalid_request_error) (model: o1-mini, client: 182.99.59.210)
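The log above shows a second failure mode: the parameter gets renamed, but a zero `max_tokens` is forwarded as `max_completion_tokens: 0`, which the upstream rejects with `integer_below_min_value`. A sketch of the missing guard, building the body as a map so a non-positive limit is dropped entirely (hypothetical helper, not the project's real code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildBody assembles the outgoing o1 request so that a non-positive
// token limit is omitted instead of serialized as 0, letting the
// upstream apply its own default.
func buildBody(model string, maxTokens int) []byte {
	body := map[string]any{"model": model}
	if maxTokens >= 1 { // OpenAI requires >= 1; 0 triggers integer_below_min_value
		body["max_completion_tokens"] = maxTokens
	}
	b, _ := json.Marshal(body)
	return b
}

func main() {
	// {"model":"o1-mini"}  — zero limit dropped, no 400 from upstream
	fmt.Println(string(buildBody("o1-mini", 0)))
	// {"max_completion_tokens":2000,"model":"o1-mini"}
	fmt.Println(string(buildBody("o1-mini", 2000)))
}
```

With this guard, a client that omits `max_tokens` (serialized as 0) no longer produces the 400 shown in the log.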

@XiaomaiTX
Contributor

Answer from the community: you can temporarily use a third-party tool for the conversion.
(screenshot)


@Har-Kuun

Har-Kuun commented Dec 7, 2024

> Answer from the community: you can temporarily use a third-party tool for the conversion. (screenshot)

Thanks; this is working.

@zmh-program
Member

#285 will fix this.
