Failed to submit error #1
Actually, the REGTASK error shouldn't cause too many problems for inference.
Yes, I use 1.5.0 both on the host and on the board, as stated in your repo. The only difference is that my model has a 640x640 resolution instead of 640x480. After that error there are no results; here is the full log.
The converted model runs fine on the host via simulation, but on the board everything fails.
I also found the following messages in dmesg:
It looks like the RKNPU fails with a timeout. I have no idea how to deal with it :)
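(In case it helps, this is roughly how I pulled those lines out of the kernel log; a minimal sketch of my own, assuming dmesg is readable by the current user on the board.)

```python
# Minimal sketch (my own): filter RKNPU-related lines out of the kernel log.
# Assumes `dmesg` is readable by the current user; otherwise run it as root.
import subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in log.splitlines():
    if "rknpu" in line.lower():
        print(line)
```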
I think it may be an ONNX problem.
Yes, I used
Here are my models; yolov8n.pt is the default pre-trained YOLO model.
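(For reference, the export itself was nothing special; a minimal sketch assuming the ultralytics package, with the opset value being my guess rather than the exact one used:)

```python
# Minimal sketch (assumption, not taken from the repo): export the default
# pre-trained yolov8n.pt to ONNX at 640x640 before converting with rknn-toolkit2.
# The opset value here is an assumption.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                        # default pre-trained weights
model.export(format="onnx", imgsz=640, opset=12)  # writes yolov8n.onnx next to the .pt
```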
Which RK platform are you using, RK3588 or RK3568?
I use RK3568. So, what should I do to run the model? I commented out the line with

```python
import os, cv2, time, numpy as np
from utils import *
from rknnlite.api import RKNNLite

conf_thres = 0.25
iou_thres = 0.45
input_width = 640
input_height = 640

model_name = 'yolov8n'
model_path = "./model"
config_path = "./config"
result_path = "./result"
image_path = "./dataset/bus.jpg"
video_path = "test.mp4"
video_inference = False

RKNN_MODEL = f'{model_path}/{model_name}-{input_height}-{input_width}.rknn'
CLASSES = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']

isExist = os.path.exists(result_path)

if __name__ == '__main__':
    isExist = os.path.exists(result_path)
    if not isExist:
        os.makedirs(result_path)
    rknn_lite = RKNNLite(verbose=False)
    ret = rknn_lite.load_rknn(RKNN_MODEL)
    # ret = rknn_lite.init_runtime()
    if video_inference == True:
        cap = cv2.VideoCapture(video_path)
        while True:
            ret, image_3c = cap.read()
            if not ret:
                break
            print('--> Running model for video inference')
            image_4c, image_3c = preprocess(image_3c, input_height, input_width)
            ret = rknn_lite.init_runtime()
            start = time.time()
            outputs = rknn_lite.inference(inputs=[image_3c])
            stop = time.time()
            fps = round(1 / (stop - start), 2)
            outputs[0] = np.squeeze(outputs[0])
            outputs[0] = np.expand_dims(outputs[0], axis=0)
            colorlist = gen_color(len(CLASSES))
            results = postprocess(outputs, image_4c, image_3c, conf_thres, iou_thres, classes=len(CLASSES))  ## [box, mask, shape]
            results = results[0]  ## batch=1
            boxes, shape = results
            if isinstance(boxes, np.ndarray):
                vis_img = vis_result(image_3c, results, colorlist, CLASSES, result_path)
                cv2.imshow("vis_img", vis_img)
                print('--> Save inference result')
            else:
                print("No Detection result")
            cv2.waitKey(10)
    else:
        image_3c = cv2.imread(image_path)
        image_4c, image_3c = preprocess(image_3c, input_height, input_width)
        ret = rknn_lite.init_runtime()
        start = time.time()
        outputs = rknn_lite.inference(inputs=[image_3c])
        stop = time.time()
        fps = round(1 / (stop - start), 2)
        outputs[0] = np.squeeze(outputs[0])
        outputs[0] = np.expand_dims(outputs[0], axis=0)
        colorlist = gen_color(len(CLASSES))
        results = postprocess(outputs, image_4c, image_3c, conf_thres, iou_thres, classes=len(CLASSES))  ## [box, mask, shape]
        results = results[0]  ## batch=1
        boxes, shape = results
        print(boxes)
        print(shape)
        if isinstance(boxes, np.ndarray):
            vis_img = vis_result(image_3c, results, colorlist, CLASSES, result_path)
            print('--> Save inference result')
        else:
            print("No Detection result")
    print("RKNN inference finish")
    rknn_lite.release()
    cv2.destroyAllWindows()
```

And it outputs
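For reference, the minimal call order I believe rknn_toolkit_lite2 expects (a sketch of my own based on the toolkit examples, not taken from your repo; preprocessing is simplified and the model path is a placeholder):

```python
# Minimal RKNNLite sketch: load_rknn -> init_runtime -> inference -> release.
# On RK3568 init_runtime() takes no core_mask; multiple NPU cores exist only on RK3588.
import cv2
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite(verbose=False)
if rknn_lite.load_rknn("./model/yolov8n-640-640.rknn") != 0:
    raise RuntimeError("load_rknn failed")
if rknn_lite.init_runtime() != 0:
    raise RuntimeError("init_runtime failed")

img = cv2.imread("./dataset/bus.jpg")
img = cv2.resize(img, (640, 640))            # assumes a 640x640 input; channel order
outputs = rknn_lite.inference(inputs=[img])  # and normalization are left out here
print([o.shape for o in outputs])
rknn_lite.release()
```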
You should run
Yes, I used
Hello!
I'm using your repo to convert the default yolov8n model to RKNN format.
Running onnx2rknn_step2.py gives me the following errors:
Nevertheless, an .rknn model is still produced, but running it on the board fails with this error:
I saw your issue on the rknn-toolkit repo, where you said you had fixed this problem. As far as I understand, your repo performs two-step hybrid quantization, but I have no idea what you did afterwards to make it work.
Here is my generated config from step 1. I also tried changing float16 to int8 and got the same error on real hardware:
https://gist.github.com/JoshuaJakowlew/79be3060e1dd3fdb867c87d1ff5e7fd1
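My current understanding of the two-step flow in rknn-toolkit2 is roughly the sketch below (paths, mean/std values, and target platform are my own placeholders, not taken from your scripts):

```python
# Rough sketch of two-step hybrid quantization with rknn-toolkit2.
# In the repo this is split across onnx2rknn_step1.py and onnx2rknn_step2.py;
# all paths and values below are placeholders.
from rknn.api import RKNN

# --- step 1: analyse the ONNX model and emit the quantization proposal files ---
rknn = RKNN(verbose=True)
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3568")
rknn.load_onnx(model="./yolov8n.onnx")
rknn.hybrid_quantization_step1(dataset="./dataset.txt")
rknn.release()

# The generated .quantization.cfg can now be edited by hand, e.g. to keep
# problematic layers in float16 instead of int8.

# --- step 2: build the final .rknn from the (possibly edited) proposal ---
rknn = RKNN(verbose=True)
rknn.hybrid_quantization_step2(model_input="./yolov8n.model",
                               data_input="./yolov8n.data",
                               model_quantization_cfg="./yolov8n.quantization.cfg")
rknn.export_rknn("./yolov8n.rknn")
rknn.release()
```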