Export to ONNX #29
We have no plans to support ONNX or provide C++ inference for now; you could refer to these two repositories to implement ONNX export and C++ inference.
OK, I looked at those two, but I couldn't get them to run smoothly. Thanks anyway.
@jxhoh Did you manage to convert the model to ONNX?
Yes, I did get it to work.
As the author said, look at those two repositories.
OK, thanks, I'll give it a try.
@wyf2020 @MachineEyes001

```python
import torch

# You can choose model type in ['full', 'opt']
model_type = 'opt'  # 'full' for best quality, 'opt' for best efficiency

# You can choose numerical precision in ['fp32', 'mp', 'fp16']. 'fp16' for best efficiency
precision = 'fp32'  # Enjoy near-lossless precision with Mixed Precision (MP) / FP16 computation if you have a modern GPU (recommended NVIDIA architecture >= SM_70).

# You can also change the default values like thr. and npe (based on input image size)
if model_type == 'full':
    ...
if precision == 'mp':
    ...
print(_default_cfg)
matcher.load_state_dict(torch.load(r"D:\jx\ai\model2cpp\models\EfficientLoFTR-main\EfficientLoFTR-main\weights\eloftr_outdoor.ckpt")['state_dict'])
if precision == 'fp16':
    ...
matcher = matcher.eval().cuda()

# Load example images
img0_pth = r"D:\ImagesStitchCode\test2\Snipaste_2024-07-18_14-57-35.jpg"
if precision == 'fp16':
    ...

# Inference with EfficientLoFTR and get prediction
# Export path
eloftr_path = r"D:\jx\ai\model2cpp\models\EfficientLoFTR-main\EfficientLoFTR-main\weights\eloft_size_.onnx"
dynamic_axes = {
    ...
torch.onnx.export(
    ...
```

This is the code I used for the export, and below is my inference code:

```python
import time
import cv2
import numpy as np
import torch
import onnxruntime as ort
import matplotlib.cm as cm
import matplotlib.pyplot as plt

# Load ONNX models
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

# Load example images
img0_pth = r"D:\ImagesStitchCode\Image8\image8_resized\DSC_4254.JPG"
img1_pth = r"D:\ImagesStitchCode\Image8\image8_resized\DSC_4255.JPG"  # 3443 1591
img1_pth = r"D:\ImagesStitchCode\test_home_outdoor\resize\20240813102827_resize.jpg"

# Load example images
img0_pth = r"D:\ImagesStitchCode\test2\resized_Snipaste_2024-07-18_14-57-35.jpg"
img1_pth = r"D:\ImagesStitchCode\test2\resized_Snipaste_2024-07-18_14-57-50.jpg"
img0_pth = r"D:\jx\ai\featue_match\dataset\seach_dataset\uav\test_database\4.jpg"
img1_pth = r"D:\jx\ai\featue_match\dataset\seach_dataset\uav\test_database\1.jpg"
img0_raw = cv2.imread(img0_pth, cv2.IMREAD_GRAYSCALE)
img0_raw = cv2.resize(img0_raw, (img0_raw.shape[1]//32*32, img0_raw.shape[0]//32*32))  # input size should be divisible by 32

eloftr_path = r'D:\jx\ai\model2cpp\models\EfficientLoFTR-main\EfficientLoFTR-main\weights\eloft_size_.onnx'
sess_options = ort.SessionOptions()
eloftr = ort.InferenceSession(eloftr_path, sess_options=sess_options, providers=providers)

# Convert the input data into the format ONNX Runtime expects
input_name = eloftr.get_inputs()[0].name

# Assume eloftr and the other related variables are already defined and initialized
# kpts0, kpts1, mconf = ...

# Record the start time
start_time = time.time()

# Run the model
kpts0, kpts1, mconf = eloftr.run(
    ...

# Record the end time
end_time = time.time()

# Compute the total elapsed time
elapsed_time = end_time - start_time

# Print the execution time
print(f"Elapsed time: {elapsed_time} seconds")

mkpts0 = kpts0
H, mask = cv2.findHomography(mkpts0, mkpts1, method=cv2.RANSAC, ransacReprojThreshold=10.0)
inliers_count = np.sum(mask)
total_matches = mkpts0.shape[0]
inliers_ratio = inliers_count / total_matches
print(f"inliers_ratio:{inliers_ratio},inliers_count:{inliers_count},total_matches:{total_matches}")
mask = mask.squeeze()
mkpts0 = mkpts0[mask == 1]
mkpts1 = mkpts1[mask == 1]
mconf = mconf[mask == 1]
mconf = (mconf - min(20.0, mconf.min())) / (max(30.0, mconf.max()) - min(20.0, mconf.min()))

color = cm.jet(mconf)
```

The problems I had to fix were, specifically:
1. Change this spot as shown in the code above.
2. The ONNX bool problem: ONNX does not support tensor[bool].
3. "Support for 'aten::unflatten' operator when exporting to ONNX opset" (the exporter error).
Those are all the specific problems.
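For anyone trying to reproduce this, here is a minimal sketch of what the truncated `dynamic_axes` / `torch.onnx.export(...)` section could look like. The input/output names, dummy shapes, and opset version below are assumptions for illustration, not the exact code used above; it also assumes the matcher has been wrapped so that `forward()` takes two image tensors directly rather than a dict.

```python
import torch

# Hypothetical completion of the truncated export call above.
# Assumes `matcher` (eval-mode, on GPU) and `eloftr_path` from the
# snippet above, and that forward() takes two grayscale tensors.
dummy0 = torch.randn(1, 1, 480, 640).cuda()
dummy1 = torch.randn(1, 1, 480, 640).cuda()

# Mark height/width and the match count as dynamic so differently
# sized (32-divisible) inputs can be fed at inference time.
dynamic_axes = {
    "image0": {2: "height0", 3: "width0"},
    "image1": {2: "height1", 3: "width1"},
    "kpts0": {0: "num_matches"},
    "kpts1": {0: "num_matches"},
    "mconf": {0: "num_matches"},
}

torch.onnx.export(
    matcher,
    (dummy0, dummy1),
    eloftr_path,
    input_names=["image0", "image1"],
    output_names=["kpts0", "kpts1", "mconf"],
    dynamic_axes=dynamic_axes,
    # Newer PyTorch versions can export aten::unflatten directly;
    # on older ones, rewrite unflatten as view/reshape in the model.
    opset_version=17,
)
```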
@jxhoh Got it, thanks a lot for sharing ^v^
@jxhoh Thanks for your reply! I keep having some problems with inference of the .onnx model that is generated. Which environment did you use? The one provided by the authors? Could you please create a fork of eloftr with the onnx export notebook? Thanks 😊
Could I ask you something? I made all three changes you mentioned above, and the export raised no errors, but during inference I hit a mismatch between the number of feature points of the two images, at the add1 node in fine_matching. Have you ever run into this problem?
I used the environment provided by the authors. I'm not sure what problem you've encountered.
I don't think I ran into that one. It was a while ago, though.
I dug into it a bit. The problem is that if the model type is 'full' it throws an error, but 'opt' doesn't. Strange.
When you ran ONNX inference on the GPU, was it much slower than PyTorch?
It seems to be the same for me: PyTorch is quite fast, but calling the model from C++ was actually much slower.
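A common reason for ONNX Runtime appearing much slower than PyTorch on GPU is that the first `run()` call pays one-time CUDA initialization costs, or that the session silently fell back to the CPU provider. A minimal sketch of a fairer measurement, assuming a model exported with inputs named `image0`/`image1` as in the hypothetical export above:

```python
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    eloftr_path,  # path from the export step above (assumed)
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# Verify the CUDA provider was actually picked up; a silent CPU
# fallback is a frequent cause of "ONNX is slow" reports.
print(sess.get_providers())

feeds = {
    "image0": np.random.rand(1, 1, 480, 640).astype(np.float32),
    "image1": np.random.rand(1, 1, 480, 640).astype(np.float32),
}

# Warm up: the first run includes session/kernel setup.
for _ in range(3):
    sess.run(None, feeds)

# Time steady-state inference only.
n = 20
start = time.time()
for _ in range(n):
    sess.run(None, feeds)
print(f"avg: {(time.time() - start) / n * 1000:.1f} ms per run")
```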
By the way, have the authors ever considered exporting to ONNX and then calling the model from C++?