New here, not really sure how this works
The fp16 model currently has some problems when called directly; fp32 works fine — there is a discussion in #4. As for the GPU not being fully utilized, I suspect it's because this implementation does not yet include the C++ CUDA io_binding. This is explained in the RVM official repository, in inference_zh_Hans.md. You might consider reopening this issue in lite.ai.toolkit, and I'll look into fixing it there, since the C++ implementation behind this demo project lives in lite.ai.toolkit~ Regarding "the CPU only uses a single core": I think that's expected, because when you use the CUDA build the computation runs mainly on the GPU.
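For reference, the missing piece mentioned above could look roughly like the sketch below: using `Ort::IoBinding` from the ONNX Runtime C++ API to keep tensors in CUDA memory between `Run()` calls, instead of copying host↔device every frame. This is only an illustrative sketch, not the actual lite.ai.toolkit code — the model path, tensor names ("src", "fgr"), and shapes are placeholders, not RVM's real ones.

```cpp
// Hypothetical sketch: CUDA io_binding with the ONNX Runtime C++ API.
// All names/paths below are placeholders, not the demo's actual values.
#include <onnxruntime_cxx_api.h>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "rvm-demo");
  Ort::SessionOptions opts;
  OrtCUDAProviderOptions cuda_opts{};            // device 0 by default
  opts.AppendExecutionProvider_CUDA(cuda_opts);
  Ort::Session session(env, "rvm_fp32.onnx", opts);  // placeholder path

  // Describe CUDA device memory so outputs can stay on the GPU.
  Ort::MemoryInfo cuda_mem("Cuda", OrtDeviceAllocator,
                           /*device_id=*/0, OrtMemTypeDefault);

  // Example input tensor created on the CPU (placeholder shape/name).
  std::vector<float> src(1 * 3 * 256 * 256, 0.f);
  std::vector<int64_t> shape{1, 3, 256, 256};
  Ort::MemoryInfo cpu_mem =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input = Ort::Value::CreateTensor<float>(
      cpu_mem, src.data(), src.size(), shape.data(), shape.size());

  // Bind input by name; bind the output to CUDA memory so ORT allocates
  // it on the device and no device->host copy happens per frame.
  Ort::IoBinding binding(session);
  binding.BindInput("src", input);    // placeholder tensor name
  binding.BindOutput("fgr", cuda_mem);

  session.Run(Ort::RunOptions{}, binding);
  return 0;
}
```

Building this requires linking against onnxruntime with the CUDA execution provider enabled; the point is simply that bound device-side outputs avoid per-call transfers, which is the usual reason GPU utilization stays low without io_binding.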
Also, if you are running on Windows, you can refer to DefTruth/lite.ai.toolkit#10 for GPU compatibility.
Just getting started, not really sure how — have you managed to get the Windows GPU version running?