drivers: video: API enhancement #72959
Extra material to investigate the rationale of the existing APIs, so that their semantics can be kept: https://static.linaro.org/connect/san19/presentations/san19-503.pdf
Thanks to @loicpoulain I rediscovered GStreamer, and it is great! The above is also the foundation for a zephyr-shell pipeline, as it is not possible to do it if rewriting the application every time is required. The difficulty, though, is getting there. Maybe it is enough to have "a gst-launch pipeline, but all devicetree-made" as a first step...
Right, you could treat an rtio_iodev as an endpoint (input or output stream), which is what sensors do today. There are effectively two output streams for sensors: one which is polled, and one which is driven by the sensor itself with an event. The sensing subsystem provides what amounts to a statically defined pipeline with fanout/fanin possibilities and builds on top of this. I could imagine video input/output endpoints being similar, though perhaps needing some additional information about image encoding and frame size to describe the metadata. Image sensors (video) are after all sensors :-)
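To make the parallel concrete, here is a minimal sketch of what reading one frame from a video input treated as an RTIO iodev could look like; `video_in`, the frame size, and the queue depths are placeholders, not an existing Zephyr API:

```c
#include <zephyr/rtio/rtio.h>

RTIO_DEFINE(r, 4, 4); /* 4 submission / 4 completion queue entries */

extern struct rtio_iodev video_in; /* hypothetical video input endpoint */

static uint8_t frame[320 * 240 * 2]; /* e.g. one QVGA RGB565 frame */

void read_one_frame(void)
{
	struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

	rtio_sqe_prep_read(sqe, &video_in, RTIO_PRIO_NORM,
			   frame, sizeof(frame), NULL);
	rtio_submit(&r, 1);

	struct rtio_cqe *cqe = rtio_cqe_consume_block(&r);

	/* cqe->result carries the outcome, e.g. bytes read or -errno */
	rtio_cqe_release(&r, cqe);
}
```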
So let's give it a try! :) I will pick some hardware known to work first and move on from there. To everyone working on a sample or a driver: I wish to avoid disturbing any ongoing effort, so in case I missed an issue, do feel free to ping me about it; I can provide a patch to merge on the PR, or delay another PR so that the actual driver contribution gets merged first!
Some insightful overviews of how the libcamera and V4L2 APIs came to be:
Prior work:
@josuah any progress on doing video with RTIO? Any help needed? I have an interest in seeing this work
@teburd many thanks! Not direct progress, but still interesting insights:
Where the most help is needed is probably in designing and discussing an API that gracefully fits the |
This is pretty common. If interrupts are involved, you can reduce the cost by using direct interrupts and calling into the scheduler only when you know things need to be rescheduled, e.g. on a completed transfer rather than on every little peripheral FIFO ding that just needs some refilling.
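For illustration, a minimal Zephyr direct-interrupt sketch of that pattern; the IRQ number and the two helper functions are hypothetical stand-ins for register-level driver code:

```c
#include <zephyr/kernel.h>
#include <zephyr/irq.h>

#define VIDEO_DMA_IRQ 27 /* hypothetical IRQ line */

extern void ack_and_refill_fifo(void); /* hypothetical hardware helpers */
extern bool frame_complete(void);

ISR_DIRECT_DECLARE(video_dma_isr)
{
	ack_and_refill_fifo();

	/* Returning 0 skips the scheduler entirely; return 1 only when a
	 * whole frame completed and a thread may need to be rescheduled.
	 */
	return frame_complete() ? 1 : 0;
}

void video_irq_setup(void)
{
	IRQ_DIRECT_CONNECT(VIDEO_DMA_IRQ, 0, video_dma_isr, 0);
	irq_enable(VIDEO_DMA_IRQ);
}
```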
Those APIs could be replaced with the read/write ops for RTIO. Ideally we'd then benefit from some common infrastructure around tracing/statistics/timeouts/etc.
It can be, but it'd perhaps be an added cost instead of a benefit; I don't know enough of the video API to make a judgement on that. It's quite possible the API and implementations already do what's needed.
I don't have many boards that support video other than the imxrt1060 evk, which I do have the camera and screen for.
Precious insight!
The API introduced by @loicpoulain looks similar to RTIO in practice:
7 drivers do I/O in total:
Sounds like it solved a very similar problem. Waiting for a completion could be done with a timeout; io_uring has a similar wait_cqe_timeout type of API call https://man.archlinux.org/man/io_uring_wait_cqe_timeout.3.en that would be easy to replicate, I'm sure.
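For reference, this is roughly how the liburing call linked above is used; a hypothetical RTIO equivalent could follow the same shape (wait for a completion, bounded by a timeout):

```c
#include <liburing.h>

int wait_for_completion(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };

	/* Returns 0 with a completion available, -ETIME on timeout */
	int ret = io_uring_wait_cqe_timeout(ring, &cqe, &ts);

	if (ret == 0) {
		io_uring_cqe_seen(ring, cqe); /* mark the CQE consumed */
	}
	return ret;
}
```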
Another challenge is to combine the RTIO and video buffers together:
- zephyr/include/zephyr/rtio/rtio.h, lines 232 to 260 in 6b7558a
- zephyr/include/zephyr/drivers/video.h, lines 105 to 119 in 6b7558a
along with the `uint32_t flags;` and `uint32_t bytesframe;` fields introduced in #66994.
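For readers comparing the two without the tree checked out, the relevant fields are roughly the following (paraphrased and abbreviated; the referenced header lines are authoritative):

```c
/* Abbreviated: see the referenced header lines for the real definitions */
struct rtio_sqe {
	uint8_t op;     /* operation, e.g. RX or TX */
	uint8_t prio;
	uint16_t flags;
	const struct rtio_iodev *iodev;
	void *userdata;
	uint32_t buf_len;
	uint8_t *buf;
	/* ... */
};

struct video_buffer {
	void *driver_data;
	uint8_t *buffer;
	uint32_t size;
	uint32_t bytesused;
	uint32_t timestamp;
	/* plus the flags and bytesframe fields proposed in #66994 */
};
```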
Is it planned to just wrap the current video struct?

```c
rtio_sqe.userdata = &video_buffer;
```

Or use RTIO instead of video_buffer, mapping the fields like this maybe?

```c
rtio_sqe.buf = video_buffer.buffer;
rtio_sqe.buf_len = video_buffer.size;      /* if OP_RX */
rtio_sqe.buf_len = video_buffer.bytesused; /* if OP_TX */
rtio_sqe.userdata = video_buffer.driver_data;
(void *)video_buffer.bytesframe; /* does not fit */
(void *)video_buffer.timestamp;  /* does not fit */
```

Or change the video buffer to only contain what does not fit into RTIO buffers, to avoid too much copying?

```c
struct video_buffer_extra_info {
	uint32_t timestamp;
	uint32_t bytesframe;
} video_buffer;

rtio_sqe.userdata = &video_buffer;
```

Or modify RTIO to support the extra fields so that
If RTIO replaces

Tagging @loicpoulain, who introduced the API, and contributors/reviewers @ArduCAM @CharlesDias @danieldegrasse @decsny @epc-ake @erwango @ngphibang (alphabetical order) in case anyone is interested in seeing this happen.
The way sensing deals with this is by encoding the extra info in the buffer itself and providing functions to get it back. Video can do something similar perhaps. It would require over-allocating the buffer itself to account for the metadata and not just the frame data then. E.g. something like...

```c
rtio_sqe_prep_read(video_input, buf, buf_len);
rtio_submit(r, 1);

struct video_metadata *metadata = video_buf_metadata(buf);
struct video_frame *frames = video_buf_frames(metadata, buf);
size_t frame_count = video_buf_frame_count(metadata, buf, buf_len);
```
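One possible reading of that sketch, assuming the metadata is a header placed at the start of the over-allocated buffer; `struct video_metadata` and its fields are illustrative, not an existing Zephyr API:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative metadata header stored at the start of the buffer */
struct video_metadata {
	uint32_t timestamp;
	uint32_t bytesframe;
	uint16_t frame_count;
};

static inline struct video_metadata *video_buf_metadata(uint8_t *buf)
{
	/* The header occupies the first bytes of the over-allocated
	 * buffer; the allocator must guarantee suitable alignment.
	 */
	return (struct video_metadata *)buf;
}

static inline uint8_t *video_buf_frames(struct video_metadata *meta,
					uint8_t *buf)
{
	(void)meta;
	/* Frame data follows immediately after the header */
	return buf + sizeof(struct video_metadata);
}
```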
I have worked quite a bit with cameras, to the point where we made our own "internal" camera API due to some shortcomings of the current implementation of the video API. It's good to see some work on this now to improve it. Some APIs that I've noticed are lacking in the current implementation are ways of setting the physical link rate and type, such as CPHY or DPHY, along with their rate in sps for CPHY and bps for DPHY. This would be a good feature that could be implemented somehow.
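As a sketch of how that could look through the existing control API, assuming hypothetical control IDs built on the `VIDEO_CID_PRIVATE_BASE` offset (the enum, CID names, and values below are illustrative, not present in the headers today):

```c
#include <zephyr/drivers/video.h>
#include <zephyr/drivers/video-controls.h>

enum video_phy_type { VIDEO_PHY_DPHY, VIDEO_PHY_CPHY };

/* Illustrative CIDs, not present in video-controls.h today */
#define VIDEO_CID_PHY_TYPE      (VIDEO_CID_PRIVATE_BASE + 0)
#define VIDEO_CID_PHY_LINK_RATE (VIDEO_CID_PRIVATE_BASE + 1)

int configure_phy(const struct device *dev)
{
	enum video_phy_type type = VIDEO_PHY_DPHY;
	uint64_t rate = 1500000000ULL; /* bps for D-PHY, sps for C-PHY */
	int ret;

	ret = video_set_ctrl(dev, VIDEO_CID_PHY_TYPE, &type);
	if (ret < 0) {
		return ret;
	}
	return video_set_ctrl(dev, VIDEO_CID_PHY_LINK_RATE, &rate);
}
```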
I think this PR is addressing this, leveraging

This PR would allow specifying endpoint numbers in addition to the generic

I will comment on each PR...
Sensing deals with multiplexed FIFO buffers today and could be used as a model to follow, perhaps. Or you could have multiple output streams, each one acting like an I/O device (struct rtio_iodev), by moving to that API.
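A sketch of the multiple-streams variant, assuming RTIO's iodev definition macro; the API struct and per-stream data are placeholders a real driver would provide:

```c
#include <zephyr/rtio/rtio.h>

extern const struct rtio_iodev_api my_video_iodev_api; /* hypothetical */
extern struct my_video_stream_data stream0_data, stream1_data;

/* One video device, two independently consumable output streams */
RTIO_IODEV_DEFINE(video_stream0, &my_video_iodev_api, &stream0_data);
RTIO_IODEV_DEFINE(video_stream1, &my_video_iodev_api, &stream1_data);
```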
I did not think about placing the metadata in the buffer! I think all elements are in place and I will be able to test now.
This could be integrated into the video allocation functions then.
Time for experimentation. Thank you.
Right, this is a general issue in Zephyr today because we don't have a way of doing device-specific behaviors akin to ioctl. Sensing sort of deals with this with its attributes API, but there are still oddities with it. Because each call to set an attribute is a partial reconfiguration of the device, and ordering can matter, there can be invalid configurations.

For example, many devices offer low-power modes which will toggle the MEMS device on/off at the cost of noise. Usually these modes are limited in sampling rate. Sometimes the sample rates overlap, but frequently some sample rates only work in a low-noise (always-on) mode. So now you have this quirk of ordering where you may wish to change both the power mode and sample rate at the same time, but the API has no way of allowing this. So drivers then have to work out what the implied meaning of a sample rate setting may be.

There's also the dai interface, which fully gave up on trying to provide structured configuration and takes a
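A short illustration of that ordering quirk using the existing sensor attribute API; the channel, the attribute values, and the failure mode in the comments are examples, not a statement about any specific driver:

```c
#include <zephyr/drivers/sensor.h>

int reconfigure(const struct device *dev)
{
	struct sensor_value mode = { .val1 = 1 };  /* e.g. low-power mode */
	struct sensor_value odr = { .val1 = 400 }; /* 400 Hz */
	int ret;

	ret = sensor_attr_set(dev, SENSOR_CHAN_ACCEL_XYZ,
			      SENSOR_ATTR_CONFIGURATION, &mode);
	if (ret < 0) {
		return ret;
	}

	/* May fail or be silently clamped if 400 Hz only exists in the
	 * low-noise mode: the driver has to guess the caller's intent,
	 * since both settings cannot be applied in one call.
	 */
	return sensor_attr_set(dev, SENSOR_CHAN_ACCEL_XYZ,
			       SENSOR_ATTR_SAMPLING_FREQUENCY, &odr);
}
```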
Introduction
Pursue the work on the video API past the initial "Video 4 Zephyr" support added in #17194.
Problem description
I would like to implement UVC and enable more complex video pipelines to be written without having to rewrite a new application from scratch every time the devicetree is updated.
Proposed change
Incrementally rework the devicetree bindings and the video driver API.
Detailed RFC
The existing API leaves some corners open for future specification.
The goal is:
Proposed change (Detailed)
- `remote-endpoint` is not used by Zephyr drivers. RTIO can be leveraged to replace the many FIFOs that are typically defined in each driver (1, 2, 3) and provide a way to directly connect drivers while also allowing the application to be interleaved in the pipeline. This would act as a splice(2) for Zephyr video drivers, triggered by the devicetree `remote-endpoint` property. [UPDATE: it might be possible to use RTIO on top of the current API instead of replacing it]
- `video_endpoint_id` has unclear semantics in some cases (doc fix only) and can be adjusted to allow positive numbers to refer to an individual endpoint number or address: `enum endpoint_id`, #73009. [UPDATE: a documentation fix, mostly] Following this, devicetree macros can be introduced to provide ways to refer to a particular endpoint without manually coding the endpoint number from the application.
- `video_get_caps()` fills a struct with all the video caps at once. This works well, but is less flexible than an enumerator-style API such as what #72254 proposes, which allows drivers to be a bit more generic: a device doing software video processing can filter the format capabilities of its sources this way (see the sketch after this list).
- `video_enqueue()`: "an addition to the video API to enable streaming partial frames within one video buffer" is missing. A new API is introduced as part of Arducam Mega sensor support.
- Introduce a directory for the video drivers, as image sensors, MIPI/DMA controllers [...] are being mixed in one same driver API. A sensor directory was the chosen way and will be merged once the ongoing PRs are completed.
- Add a `drivers/video_*_skeleton.c` in the same style as `drivers/usb/udc/udc_skeleton.c` that can speed up development and help with understanding the semantics, like `sensor_skeleton.c` to speed up contribution of simple sensors (#73867). [UPDATE: proposed as an emulated/fake driver instead]
- The devicetree can be flattened to enable more uniform processing, such as DT macros used to reduce boilerplate code and homogenize drivers. [UPDATE: no API change needed]
- `VIDEO_BUF_FRAG`: Add fragmentation support to allow partial/fragmented frames to be returned from video devices.
- `video-interfaces.yaml`: Introduce common devicetree bindings for all video devices.
- `<zephyr/drivers/video-controls.h>`: Add a flag to the video CIDs that specifies variants of current controls to query the kind of result to get: current value (default) or minimum/maximum/default value.
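A sketch of the enumerator-style capabilities API mentioned in the list above; `video_enum_caps()` is illustrative (the exact shape is what #72254 discusses), shown next to the current bulk `video_get_caps()`:

```c
#include <zephyr/drivers/video.h>

/* Hypothetical enumerator, illustrating the shape #72254 discusses */
extern int video_enum_caps(const struct device *dev,
			   enum video_endpoint_id ep,
			   int index, struct video_format_cap *cap);

void compare_caps_styles(const struct device *dev)
{
	/* Today: every capability is returned at once in one struct */
	struct video_caps caps;

	(void)video_get_caps(dev, VIDEO_EP_OUT, &caps);

	/* Enumerator style: step through entries one by one, so a
	 * software-processing device can filter each capability of its
	 * source on the fly instead of copying a whole table.
	 */
	struct video_format_cap fmt;

	for (int i = 0; video_enum_caps(dev, VIDEO_EP_OUT, i, &fmt) == 0; i++) {
		/* inspect fmt.pixelformat, fmt.width_min, ... */
	}
}
```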
Dependencies

Annotated inline above.
Concerns and Unresolved Questions
Alternatives
Leaving it up to the application developer to resolve these issues is the current approach, and the demos and samples submitted in #71463 show that it effectively works end-to-end with the current API.
For the case of RTIO, it is also possible to introduce a "genderless" interconnect between sources and sinks (so that the application or another driver can use it) by replacing `enqueue()`/`dequeue()` with a public FIFO integrated directly in the API.
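A minimal sketch of that alternative using a plain `k_fifo` as the public interconnect; the function names are illustrative (in-tree video drivers already queue `struct video_buffer` pointers on private k_fifo instances in the same way):

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/video.h>

K_FIFO_DEFINE(video_interconnect);

/* Source side (e.g. capture driver), once a frame is filled.
 * k_fifo reuses the first word of the queued item for its linked list,
 * as the drivers' existing private FIFOs already rely on.
 */
void source_frame_done(struct video_buffer *vbuf)
{
	k_fifo_put(&video_interconnect, vbuf);
}

/* Sink side: the application or the next driver in the pipeline */
struct video_buffer *sink_get_frame(void)
{
	return k_fifo_get(&video_interconnect, K_FOREVER);
}
```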