On an Intel CPU system, the fastest inference is achieved with the yolov8n model exported to OpenVINO format.
All benchmark analytics can be found here.
Start the server:

cd server
python3 server.py

Then install the dependencies and run the remote client:

pip3 install -r requirements.txt
python3 remote_run.py
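For reference, a minimal sketch of exporting yolov8n to OpenVINO with the Ultralytics API (this assumes the `ultralytics` package is installed; the exact export step used by this repo may differ):

```python
# Hedged sketch: export the pretrained yolov8n model to OpenVINO format.
# Requires `pip3 install ultralytics`; weights are downloaded on first use.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # load the pretrained nano model
model.export(format="openvino")  # writes an OpenVINO model directory next to the weights
```

The exported OpenVINO model can then be loaded the same way (`YOLO("yolov8n_openvino_model/")`) for CPU inference.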