A model trained in PyTorch was exported to ONNX, converted to an ncnn model with the onnx2ncnn tool, and then optimized with ncnnoptimize into both fp32 and fp16 ncnn models. At inference time, whether I load the fp32 or the fp16 model, the results differ only slightly as long as inference runs in fp32. But once fp16 inference is enabled, and especially with use_fp16_arithmetic=true, the inference results diverge significantly.
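The divergence with fp16 *arithmetic* (as opposed to fp16 storage alone) is expected behavior of the number format, not specific to ncnn: float16 has only a 10-bit mantissa, so intermediate accumulations lose small contributions once the running value grows. A minimal sketch of the effect, using a naive sequential fp16 accumulator purely for illustration:

```python
import numpy as np

# Illustration only (not ncnn's internals): summing many small values
# with a float16 accumulator shows the rounding error that fp16
# arithmetic introduces, even when fp32 handles the same data fine.
rng = np.random.default_rng(0)
x = rng.random(20000, dtype=np.float32) * 0.01  # true sum is roughly 100

s32 = float(x.sum(dtype=np.float32))

# Naive sequential accumulation entirely in float16: once the running
# sum exceeds ~32, each 0.01-sized addend is below half an ulp and is
# rounded away, so the sum stalls.
s16 = np.float16(0.0)
for v in x.astype(np.float16):
    s16 = np.float16(s16 + v)

rel_err = abs(float(s16) - s32) / s32
print(f"fp32 sum = {s32:.3f}, fp16 sum = {float(s16):.3f}, rel err = {rel_err:.2%}")
```

Real kernels use better accumulation orders than this loop, but the underlying precision limit is the same, which is why `use_fp16_arithmetic=true` amplifies output differences far more than fp16 weight storage does.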
Given that ncnn will run with use_fp16_arithmetic=true, is there any approach I can apply during PyTorch training to keep the accuracy drop at inference time from being too severe?
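One common mitigation, sketched below, is to simulate fp16 rounding during training so the network learns to be robust to it, analogous to quantization-aware training. This is a hedged sketch, not an ncnn-endorsed recipe: the `FakeFP16` module name is hypothetical, and rounding activations through float16 only approximates what fp16 arithmetic does to intermediate accumulations.

```python
import torch
import torch.nn as nn

class FakeFP16(nn.Module):
    """Hypothetical helper: rounds activations through float16 and back
    to float32. The cast is differentiable in PyTorch autograd, so
    gradients flow through while the forward pass sees fp16 rounding."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.half().float()

# Illustrative toy network: insert FakeFP16 after layers whose outputs
# will be computed in fp16 at deployment time.
model = nn.Sequential(
    nn.Linear(8, 16), FakeFP16(),
    nn.ReLU(),
    nn.Linear(16, 4), FakeFP16(),
)

x = torch.randn(2, 8)
out = model(x)          # forward pass with simulated fp16 rounding
loss = out.sum()
loss.backward()         # gradients still reach the fp32 master weights
```

Complementary, training-side measures that also help: keep activation magnitudes moderate (e.g. with normalization layers) so values stay well inside the float16 range, and avoid operations that accumulate over very long reductions without normalization, since those are where fp16 arithmetic loses the most precision.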