- Ole Vegard Solberg (SINTEF, Trondheim, Norway)
- Janne Beate Bakeng (SINTEF, Trondheim, Norway)
Running trained deep learning networks with inference engines, with a focus on implementing this in CustusX.
- Start by implementing C++ support for running one pre-trained model.
- Use the FAST library for inference engine support.
The task of implementing support for multiple inference engines proved too large for Project Week, so we ended up using the OpenVINO Toolkit directly. The OpenVINO inference engine lets us run trained networks on various Intel devices (CPU, GPU, FPGA, Movidius Neural Compute Stick, ...), so this choice still gives us a decent multi-platform solution.
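As a rough illustration of what running a pre-trained model through the OpenVINO inference engine looks like from C++, the sketch below loads a model in OpenVINO's Intermediate Representation and runs one synchronous inference. The model path, device name, and the surrounding `main` are placeholders, not CustusX code; input/output preprocessing is elided.

```cpp
#include <inference_engine.hpp>
#include <string>

int main() {
    InferenceEngine::Core core;

    // Read a network from OpenVINO's Intermediate Representation
    // (a model.xml/model.bin pair produced by the Model Optimizer).
    auto network = core.ReadNetwork("model.xml");

    // Compile the network for a target device: "CPU", "GPU", "MYRIAD", ...
    auto execNetwork = core.LoadNetwork(network, "CPU");

    // Create an inference request and locate the input blob.
    auto request = execNetwork.CreateInferRequest();
    std::string inputName = network.getInputsInfo().begin()->first;
    auto inputBlob = request.GetBlob(inputName);
    // ... copy the (preprocessed) ultrasound image into inputBlob here ...

    // Run one synchronous inference; results land in the output blob.
    request.Infer();
    std::string outputName = network.getOutputsInfo().begin()->first;
    auto outputBlob = request.GetBlob(outputName);
    // ... read the network output (e.g. a segmentation) from outputBlob ...
    return 0;
}
```

Swapping the device string (e.g. "CPU" for "MYRIAD") is what gives the multi-device flexibility mentioned above, without changing the rest of the pipeline.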
CustusX is the toolbox we bring to the OR. It is our tool for reusing results from previous research projects.
We currently have several research projects in which deep learning networks are created: Examples from FAST
We want to be able to run these networks from inside CustusX to allow more seamless integration in the OR. Some projects require the deep learning networks to run in real time; in these cases they need to run on inference engines.
Video: Highlighting nerves and blood vessels on ultrasound images