
MNN int8 quantized model: GetMNNInfo fails to load the model, returns Segmentation fault #3058

Open
codingSellena opened this issue Oct 22, 2024 · 6 comments
Labels
question Further information is requested

Comments

@codingSellena

Platform (if cross-compiling, please also note the cross-compile target platform):

Linux raspberrypi 6.6.51+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.51-1+rpt3 (2024-10-08) aarch64 GNU/Linux

GitHub version:

2.7.0 (building with 2.9.3 gives the same result)
Download date: 2024.10.13
Commit = 9e3cc72

Build command:

cmake .. -DMNN_BUILD_CONVERTER=ON -DMNN_BUILD_TOOL=ON -DMNN_BUILD_QUANTOOLS=ON -DMNN_EVALUATION=ON -DMNN_SUPPORT_BF16=ON -DMNN_ARM82=ON

After converting the model (onnx -> mnn -> int8-quantized mnn), running the command ./GetMNNInfo /home/pi/v5lite-e-mnnd-i8.mnn gives:

The device support i8sdot:0, support fp16:0, support i8mm: 0
The device support i8sdot:0, support fp16:0, support i8mm: 0
Segmentation fault

Debugging locates the failure to the model-loading call:

 std::shared_ptr<Module> module(Module::load(empty, empty, argv[1]));

During model inference, the following code is used:

    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile(model_name.c_str()));
    if (nullptr == net) {
        printf("nullptr == net");
        return 0;
    }

    MNN::ScheduleConfig config;
    config.numThread = 4;
    config.type = static_cast<MNNForwardType>(MNN_FORWARD_CPU);
    MNN::BackendConfig backendConfig;
    //backendConfig.precision = (MNN::BackendConfig::PrecisionMode)1;
    backendConfig.precision = MNN::BackendConfig::Precision_Low_BF16;
    config.backendConfig = &backendConfig;
    // crashing line
    printf("create Session Start\n");
    MNN::Session *session = net->createSession(config);

    if (nullptr == session) {
        printf("create Session Fail");
        return -1;
    }
    printf("create Session Finish");
    std::vector<BoxInfo> bbox_collection;
    cv::Mat image;
    MatInfo mmat_objection;
    mmat_objection.inpSize = 320;
    //printf("success init");

Running produces:

create Session Start
The device support i8sdot:0, support fp16:0, support i8mm: 0
The device support i8sdot:0, support fp16:0, support i8mm: 0
Segmentation fault

Following transformerDemo.cpp and pictureRecognition_module.cpp, inference with the Module API was also tried, with this code:

    printf("Module Create Start\n");
    std::shared_ptr<MNN::Express::Module> model;
    MNN::Express::Module::Config mdconfig;
    mdconfig.rearrange = true;
    model.reset(MNN::Express::Module::load(std::vector<std::string>{}, std::vector<std::string>{}, modelName, &mdconfig));
    printf("Module Create Finish\n");

Running produces:

Module Create Start
The device support i8sdot:0, support fp16:0, support i8mm: 0
The device support i8sdot:0, support fp16:0, support i8mm: 0
Segmentation fault

What could be causing this? The model itself should be correct (the int8 model comes from the YoloV5-lite project on GitHub).
Any reply is greatly appreciated!

@jxt1234
Collaborator

jxt1234 commented Oct 25, 2024

Could you send the model file?

jxt1234 added the question label Oct 25, 2024
@jxt1234
Collaborator

jxt1234 commented Oct 25, 2024

The int8 model was produced with the offline quantization tool, right?

@codingSellena
Author

> Could you send the model file?

I have sent it to your QQ email; please check your inbox. The subject line is "回复: [alibaba/MNN] MNN int8量化模型GetMNNInfo无法加载模型,返回Segmentation fault (Issue #3058)".

@codingSellena
Author

> The int8 model was produced with the offline quantization tool, right?

Yes!

@jxt1234
Collaborator

jxt1234 commented Nov 18, 2024

Received.

@jxt1234
Collaborator

jxt1234 commented Nov 18, 2024

Does the onnx-to-mnn conversion pass testing with the latest version?
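For reference, the re-conversion asked about above can be sketched with MNN's converter CLI; the file names below are illustrative placeholders, and the flags are the standard MNNConvert ONNX options:

```shell
# Illustrative paths; re-convert the float ONNX model with the latest MNNConvert
# before re-running the offline quantization step.
./MNNConvert -f ONNX \
    --modelFile v5lite-e.onnx \
    --MNNModel v5lite-e.mnn \
    --bizCode MNN
```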
