Motion binary_sensor not working if Shinobi primary engine is not Pixel Array #61
When using another engine, Shinobi sends an object-detection event rather than a motion event, so the integration reports it as object detection.
In the meantime I was reading the previous issue (my fault: I didn't search closed issues before opening mine...). If I have time, I'll enable debug logging on the integration, walk in front of a pure pixel-motion camera and then an object-detection camera, and compare the JSON objects received, to further discuss this matter. Lastly, take into account that a user can even offload motion detection entirely to the camera, by disabling Pixel Array and plugins and having the camera push movement information to Shinobi via ONVIF events, mail, or FTP. However, we can discuss this better once I know more about what happens exactly.
There is support for object detection, described near the bottom of the README: the event in HA will be called shinobi/object, shinobi/face, etc. Regarding recording events:
Yes, I've fully read the docs and I've seen that you're passing events along, which is a good way to implement custom or non-standard logic. I wanted to express my point of view, for the good of this wonderful integration, hoping it makes sense to you too.
So, in my view, the motion binary_sensor should be triggered by "pixel" motion as well as when an object is detected. It's just a different way to define "a motion". Using the debug log, I analyzed which events arrive over the websocket when a pixel-array movement occurs (formatted here for easy reading):
And here is what arrives when an object is detected:
They have a lot in common. So, ultimately, I think it's a good idea to use every kind of movement or object detection to drive the integration's motion binary_sensor. What do you think? Can you agree with my point of view?
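The mapping proposed above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the integration's actual code: the event field names (`f`, `details`, `reason`) and the reason values are assumptions based on typical Shinobi websocket payloads like the ones discussed here.

```python
# Hypothetical sketch: treat any detector_trigger event as motion,
# no matter which Shinobi engine (Pixel Array, Tensorflow, etc.) produced it.
# Field names and reason values are assumptions, not the integration's real schema.

MOTION_REASONS = {"motion", "object", "face"}  # assumed set of trigger reasons


def is_motion_event(event: dict) -> bool:
    """Return True when a websocket event should turn the motion sensor on."""
    if event.get("f") != "detector_trigger":
        return False
    reason = event.get("details", {}).get("reason", "")
    return reason in MOTION_REASONS


# Illustrative payloads only (the real events carry many more fields):
pixel_event = {"f": "detector_trigger", "details": {"reason": "motion"}}
object_event = {"f": "detector_trigger", "details": {"reason": "object"}}

print(is_motion_event(pixel_event))   # True
print(is_motion_event(object_event))  # True
```

With this approach, both camera configurations end up flipping the same binary_sensor, while the raw shinobi/object and shinobi/face events could still be forwarded for users who want finer-grained automations.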
Sorry for the long time it took to respond.
What do you think?
Having thought about it more: I can create a customAutoLoad script in Shinobi that listens to Face / Object detection events. I would like to raise it first in the Shinobi Discord channel. Before I raise it: would that address your need?
Hey @elad-bar, thank you so much for your time! Sorry for being late myself too (there was a holiday eve in the middle ;). I'm sure I'm missing something, but...
I'm not saying you are only trying to satisfy your own need; I think it's a super valid need. Regarding the matrix of use cases above, the question is when to identify that there was motion: based on additional image processing, vs. the current approach of using the JSON of the event (which avoids extra image processing, meaning less impact on performance, instead of running two engines/models on the same image: pixel array and object detection). Although your need is super valid, the approach of treating object detection as motion does not sound right to me, for two reasons:
Eventually, as I see it, you will get the same result in terms of HA, but it will serve many more Shinobi users, and maybe later I will manage to convince the Shinobi developer to add it as default behavior, without an external script, when working with an external motion detector. I started with .NET many years ago; since I got into the IoT world, any programming language is welcome...
I have several cameras in Shinobi. Some of them use the integrated Pixel Array motion detection as a primary engine. Others use "TensorflowCoral Connected" (the provided Coral Tensorflow plugin) and no Pixel Array at all.
The integration correctly changes the motion binary_sensor for each camera that uses Pixel Array motion detection, according to the motion status found in Shinobi.
For cameras that use the Coral Tensorflow plugin, however, the integration doesn't change the motion binary_sensor at all.
The result is that the motion sensor is unusable on those cameras and unreliable overall, since its behavior depends on configuration.
I expect, however, that the motion binary_sensor will be evaluated for whatever kind of motion detection is configured in Shinobi.
In short: if Shinobi triggers a motion event (and records on a watch-only camera), then the binary_sensor in Home Assistant must be triggered.