drivers: video: esp32s3: Add set_signal video api function #78566
Conversation
Force-pushed from 8150788 to 0e3e7e3
It seems this PR is related to: #78499
If so, the first commit does not belong to this one. Could you please remove this commit from the history and link the PR above to this one as a dependency to get merged?
@epc-ake, I don't know if you are aware of it, but you can run the compliance checks locally in your workspace before pushing, and catch compliance issues without having to wait a long time for the CI to execute. Inside your zephyr folder, AFTER committing, just do:
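(The command itself was not captured in this transcript; assuming a standard Zephyr checkout with an `upstream` remote, the compliance script is typically invoked like this — a sketch, check the script's `--help` for the exact flags:)

```shell
# Run the same compliance checks the CI runs, against your new commits.
# Adjust the commit range to match your branch setup.
./scripts/ci/check_compliance.py -c upstream/main..HEAD
```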
I hope this helps you save some time making the CI happy :)
drivers/video/video_esp32_dvp.c (outdated)

```c
uint32_t status = cam_ll_get_interrupt_status(data->hal.hw);
cam_ll_clear_interrupt_status(data->hal.hw, status);

if (data->signal_all && status & VIDEO_ESP32_VSYNC_MASK) {
```
Should we check data->is_streaming here?
As noted by @uLipe, this depends on this PR: ...
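For reference, a minimal sketch of what the suggested guard could look like, based on the snippet quoted above (data->is_streaming is assumed to come from the dependency PR; the result value passed to the signal is an arbitrary placeholder):

```c
/* Raise the VSYNC signal only while the device is actually streaming.
 * data->is_streaming is assumed to be introduced by the dependency PR;
 * 0 is a placeholder result value.
 */
if (data->is_streaming && data->signal_all &&
    (status & VIDEO_ESP32_VSYNC_MASK)) {
	k_poll_signal_raise(data->signal_all, 0);
}
```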
I am "requesting changes" but maybe I simply did not understand this hardware...
The Video Endpoints (enum video_endpoint_id
) allow to address the individual logical stream or physical port within a same device.
For instance, a video/frame encoder would have 2 ports:
📥 one input stream uncompressed (selected by the wildcard VIDEO_EP_IN
or VIDEO_EP_ALL
)
📤 one output stream compressed (selected by the wildcard VIDEO_EP_OUT
or VIDEO_EP_ALL
)
There can also be some form of multiplexing into logical streams if the same device is used for multiple "channels" of some kind:
📥 input channel 1 with its endpoint number 0x01 (selected by passing 0x01
or VIDEO_EP_IN
or VIDEO_EP_ALL
)
📥 input channel 2 with its endpoint number 0x02 (selected by passing 0x02
or VIDEO_EP_IN
or VIDEO_EP_ALL
)
📤 output channel 1 with its endpoint number 0x11 (selected by passing 0x11
or VIDEO_EP_OUT
or VIDEO_EP_ALL
)
📤 output channel 2 with its endpoint number 0x12 (selected by passing 0x12
or VIDEO_EP_OUT
or VIDEO_EP_ALL
)
These numbers 0x01, 0x02, 0x11, 0x12, are here just examples, vendor might have their own encoding...
There can be separate physical input ports numbered likewise 1, 2, 3, 4... And these endpoints are there to reflect the diversity of what exists on hardware.
Conclusion: Given how I understood this endpoint API, I have the impression that there need to be only one signal: dat->signal
, used for both the IRQ or the regular signals
Does that make sense? Did you have some different vision?
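For illustration, a minimal, self-contained sketch of what this single-signal model could look like from the application side, using the k_poll API (the function name and the way the event is consumed are assumptions, not code from this PR):

```c
#include <zephyr/drivers/video.h>
#include <zephyr/kernel.h>

/* Hypothetical application-side usage of the single-signal model: one
 * k_poll_signal registered for all endpoints of the device, then waited
 * on with k_poll(). The raised `result` value could encode which event
 * (VSYNC, frame done, ...) occurred.
 */
void wait_for_video_event(const struct device *video_dev)
{
	struct k_poll_signal sig;
	struct k_poll_event evt;

	k_poll_signal_init(&sig);
	k_poll_event_init(&evt, K_POLL_TYPE_SIGNAL,
			  K_POLL_MODE_NOTIFY_ONLY, &sig);

	video_set_signal(video_dev, VIDEO_EP_ALL, &sig);

	k_poll(&evt, 1, K_FOREVER);
	k_poll_signal_reset(&sig);
	/* A buffer can now be dequeued, or a per-frame action taken. */
}
```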
Thanks, I'm aware of the tool, but I was not able to run it on my system...
There are two reasons I defined two signals.
I actually don't like the idea of polling on a signal inside the sensor driver. But with the current API, this is the only way I see to inform the sensor about the ...
So it keeps the video driver signatures, but changes the video driver semantics. Maybe the fix is to modify the API to enable the kind of performance you need? I tried to brainstorm the data path here but did not have time to put it into practice yet. I'd be glad to allocate time to build (discussions and code) the API you need, to keep the same API for all drivers!
I think I understood better the meaning given to these endpoints: rather than being wildcards, they are used as signal categories. That sounds much like a valid interpretation of the same API. In this interpretation, all drivers are still able to "categorize" their signals using these "endpoint categories". The endpoints used in Video4Zephyr seem like a mapping to devicetree ...
I tried to make use of this PR in my sensor driver now, and I came to the conclusion that it is not optimal to use the ...
I suspect Linux introduced subdevices because they had userland-facing devices and "underground" devices living below the kernel, connected through the devicetree. V4L2 devices and subdevices have different APIs.

In Zephyr, there are no subdevices; for instance, cameras like the Arducam Mega combine both functions of moving the data (device) and controlling an image sensor (subdevice). Related: #73013 (review)

Though, the current model is that for signals and buffers, the application does the glue across devices, rather than the devices talking to each other directly, and the API is built for a single ...

https://static.linaro.org/connect/san19/presentations/san19-503.pdf
In practice: I am wondering, regarding this...
...whether it was possible to move the ...
Zephyr driver API calls are lightweight, with no syscall overhead involved (unless Zephyr userspace is involved), so this could allow removing one thread? I might lack the full picture to understand the requirement of having a thread in the sensor itself.
Sure, I just hoped I could hide this in the sensor driver, since it's needed to operate the camera and it's not something the application should care much about.
Maybe the difference here is that there is no video subsystem which could cover all this extra piping.
Force-pushed from 0e3e7e3 to 6c7b7df
I removed the ...
There is no concept of subdevices: all capture (e.g. DSI) and control (e.g. sensor) devices are video devices. However, in practice a capture device has a reference to the sensor device so that it can propagate formats and controls (this should ideally be represented via remote-endpoint in DTS).

For the VSYNC stuff, it looks like what you need in the end is per-buffer sensor configuration, so maybe we lack the possibility to attach additional metadata/config to the video buffer struct, which would allow the video 'subsystem' or capture device to do the magic (getting the buffer, reading the new config, waiting for VSYNC, applying it to the sensor...).
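As a rough illustration of that idea, a purely hypothetical sketch of attaching per-frame sensor configuration to a buffer; none of these names exist in the current video API:

```c
#include <zephyr/drivers/video.h>

/* Hypothetical per-frame sensor configuration that a capture driver could
 * apply at the next VSYNC. All names here are made up for illustration.
 */
struct video_sensor_cfg {
	uint32_t exposure_us;  /* exposure to use for this frame */
	uint32_t analog_gain;  /* gain to use for this frame */
};

/* One possible attachment: wrap the existing buffer instead of changing
 * the upstream struct, leaving sensor_cfg NULL when no change is wanted.
 */
struct video_buffer_with_cfg {
	struct video_buffer vbuf;             /* existing Zephyr video buffer */
	struct video_sensor_cfg *sensor_cfg;  /* NULL = keep current config */
};
```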
Force-pushed from 6c7b7df to 4b8cf89
Can someone tell me how to silence the remaining warning without renaming the argument?
It looks like it is indeed listed as a reserved function, which helps in the case of mixing sources with the POSIX C API. How about switching to ...
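For context, here is a minimal sketch of the clash and a rename-based fix; the replacement name `sig` is an assumption, not necessarily what the truncated suggestion above proposed:

```c
#include <errno.h>
#include <zephyr/drivers/video.h>

/* A parameter named `signal` collides with the identifier of the C library
 * function signal() and trips the compliance check. Renaming the parameter
 * (here to `sig`, an assumed choice) silences the warning without changing
 * the callback's types.
 */
static int video_esp32_set_signal_sketch(const struct device *dev,
					 enum video_endpoint_id ep,
					 struct k_poll_signal *sig)
{
	ARG_UNUSED(dev);
	ARG_UNUSED(ep);

	return (sig == NULL) ? -EINVAL : 0;
}
```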
Some minor changes related to error handling; the rest seems good to go.
This PR implements the set_signal callback from the video API. A signal registered on VIDEO_EP_OUT raises once a new image has been received completely. Signed-off-by: Armin Kessler <[email protected]>
Force-pushed from 4b8cf89 to 97a49cc
Ping @sylvioalves
This PR implements the set_signal callback from the video API. It introduces two signals:
1. A signal registered on VIDEO_EP_OUT raises once a new image has been received completely and is ready to be dequeued.
2. A signal registered on VIDEO_EP_ALL raises with the VSYNC interrupt.
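To make that concrete, a small hypothetical usage sketch of the two signals (the registration function and its surroundings are assumptions, not code from the PR):

```c
#include <zephyr/drivers/video.h>
#include <zephyr/kernel.h>

/* Hypothetical application sketch registering one signal per event
 * category, matching the two-signal design described above.
 */
static struct k_poll_signal frame_done_sig;
static struct k_poll_signal vsync_sig;

void register_video_signals(const struct device *video_dev)
{
	k_poll_signal_init(&frame_done_sig);
	k_poll_signal_init(&vsync_sig);

	/* Raised once a complete frame is ready to be dequeued */
	video_set_signal(video_dev, VIDEO_EP_OUT, &frame_done_sig);

	/* Raised on each VSYNC interrupt */
	video_set_signal(video_dev, VIDEO_EP_ALL, &vsync_sig);
}
```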