* [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-07-24 14:10 [PATCH v2 00/27] media: Add support for multi-context operations Jacopo Mondi
@ 2025-07-24 14:10 ` Jacopo Mondi
2025-09-25 10:55 ` Anthony McGivern
From: Jacopo Mondi @ 2025-07-24 14:10 UTC (permalink / raw)
To: Sakari Ailus, Laurent Pinchart, Tomi Valkeinen, Kieran Bingham,
Nicolas Dufresne, Mauro Carvalho Chehab, Tomasz Figa,
Marek Szyprowski, Raspberry Pi Kernel Maintenance,
Florian Fainelli, Broadcom internal kernel review list,
Hans Verkuil
Cc: linux-kernel, linux-media, linux-rpi-kernel, linux-arm-kernel,
Jacopo Mondi
Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
context. It extends 'struct media_entity_context' and is intended to be
extended by drivers that can store driver-specific information
in their derived types.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
---
drivers/media/v4l2-core/v4l2-subdev.c | 39 +++++++++++
include/media/v4l2-subdev.h | 126 ++++++++++++++++++++++++++++++++++
2 files changed, 165 insertions(+)
diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
index 4fd25fea3b58477056729665706ddbacc436379c..7307f57439499c8d5360c89f492944828ac23973 100644
--- a/drivers/media/v4l2-core/v4l2-subdev.c
+++ b/drivers/media/v4l2-core/v4l2-subdev.c
@@ -1577,6 +1577,45 @@ bool v4l2_subdev_has_pad_interdep(struct media_entity *entity,
}
EXPORT_SYMBOL_GPL(v4l2_subdev_has_pad_interdep);
+struct v4l2_subdev_context *
+v4l2_subdev_context_get(struct media_device_context *mdev_context,
+ struct v4l2_subdev *sd)
+{
+ struct media_entity *entity = &sd->entity;
+ struct media_entity_context *ctx =
+ media_device_get_entity_context(mdev_context, entity);
+
+ if (!ctx)
+ return NULL;
+
+ return container_of(ctx, struct v4l2_subdev_context, base);
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_context_get);
+
+void v4l2_subdev_context_put(struct v4l2_subdev_context *ctx)
+{
+ if (!ctx)
+ return;
+
+ media_entity_context_put(&ctx->base);
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_context_put);
+
+int v4l2_subdev_init_context(struct v4l2_subdev *sd,
+ struct v4l2_subdev_context *context)
+{
+ media_entity_init_context(&sd->entity, &context->base);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_init_context);
+
+void v4l2_subdev_cleanup_context(struct v4l2_subdev_context *context)
+{
+ media_entity_cleanup_context(&context->base);
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_cleanup_context);
+
struct v4l2_subdev_state *
__v4l2_subdev_state_alloc(struct v4l2_subdev *sd, const char *lock_name,
struct lock_class_key *lock_key)
diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 5dcf4065708f32e7d3b5da003771810d5f7973b8..9d257b859acafb11cfe6976e906e7baabd0206f6 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -757,6 +757,78 @@ struct v4l2_subdev_state {
struct v4l2_subdev_stream_configs stream_configs;
};
+/**
+ * struct v4l2_subdev_context - The v4l2 subdevice context
+ * @base: The media entity context base class member
+ * @state: The subdevice state associated with this context
+ *
+ * This structure represents an isolated execution context of a subdevice.
+ * This type derives from the base 'struct media_entity_context' type, which
+ * implements refcounting on our behalf and allows instances of this type to be
+ * linked in the media_device_context contexts list.
+ *
+ * The subdevice context stores the subdev state in a per-file-handle context.
+ * Userspace is allowed to multiplex the usage of a subdevice devnode by opening
+ * it multiple times and by associating each open file descriptor with a media
+ * device context. This operation is called 'binding' and is performed using
+ * the VIDIOC_SUBDEV_BIND_CONTEXT ioctl.
+ *
+ * A subdevice context is created and stored in the v4l2_fh file handle
+ * associated with an open file descriptor when a subdevice is 'bound' to a
+ * media device context. The binding operation establishes a permanent
+ * association that remains valid until the subdevice context is released.
+ *
+ * A subdevice can be bound to a given media device context only once.
+ * Trying to bind the same subdevice to the same media device context a
+ * second time, without first releasing the already established context by
+ * closing the bound file descriptor, will result in an error.
+ *
+ * To create a subdevice context userspace shall use the
+ * VIDIOC_SUBDEV_BIND_CONTEXT ioctl that creates the subdevice context and
+ * uniquely associates it with a media device file descriptor.
+ *
+ * Once a subdevice file descriptor has been bound to a media device context,
+ * all the operations performed on the subdevice file descriptor will be
+ * directed to the just created subdevice context. This means, for example,
+ * that the subdevice state and configuration are isolated from those
+ * associated with a different file descriptor obtained by opening the same
+ * subdevice devnode again but binding it to a different media device context.
+ *
+ * Drivers that implement multiplexing support have to provide a valid
+ * implementation of the context-related operations in the media entity
+ * operations.
+ *
+ * Drivers are allowed to sub-class the v4l2_subdev_context structure by
+ * defining a driver-specific type which embeds a struct v4l2_subdev_context
+ * instance as its first member, and by allocating the driver-specific
+ * structure in their implementation of the `alloc_context` operation.
+ *
+ * Subdevice contexts are ref-counted by embedding an instance of 'struct
+ * media_entity_context' and are freed once all the references to it are
+ * released.
+ *
+ * A subdevice context ref-count is increased when:
+ * - The context is created by binding a subdevice to a media device context
+ * - The media pipeline it is part of starts streaming
+ * A subdevice context ref-count is decreased when:
+ * - The associated file handle is closed
+ * - The media pipeline it is part of stops streaming
+ *
+ * The ref-count is also increased by a call to v4l2_subdev_context_get(), and
+ * it is the responsibility of the caller to decrease the reference count
+ * with a call to v4l2_subdev_context_put().
+ */
+struct v4l2_subdev_context {
+ struct media_entity_context base;
+ /*
+ * TODO: active_state should most likely be changed from a pointer to an
+ * embedded field. For the time being it's kept as a pointer to more
+ * easily catch uses of active_state in the cases where the driver
+ * doesn't support it.
+ */
+ struct v4l2_subdev_state *state;
+};
+
/**
* struct v4l2_subdev_pad_ops - v4l2-subdev pad level operations
*
@@ -1152,6 +1224,7 @@ struct v4l2_subdev_fh {
struct module *owner;
#if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
struct v4l2_subdev_state *state;
+ struct v4l2_subdev_context *context;
u64 client_caps;
#endif
};
@@ -1285,6 +1358,59 @@ int v4l2_subdev_link_validate(struct media_link *link);
bool v4l2_subdev_has_pad_interdep(struct media_entity *entity,
unsigned int pad0, unsigned int pad1);
+/**
+ * v4l2_subdev_context_get - Helper to get a v4l2 subdev context from a
+ * media device context
+ *
+ * @mdev_context: The media device context
+ * @sd: The V4L2 subdevice the context refers to
+ *
+ * Helper function that wraps media_device_get_entity_context() and returns
+ * the v4l2 subdevice context associated with a subdevice in a media device
+ * context.
+ *
+ * The reference count of the returned v4l2 subdevice context is increased.
+ * Callers of this function are required to decrease the reference count
+ * with a call to v4l2_subdev_context_put().
+ */
+struct v4l2_subdev_context *
+v4l2_subdev_context_get(struct media_device_context *mdev_context,
+ struct v4l2_subdev *sd);
+
+/**
+ * v4l2_subdev_context_put - Helper to decrease a v4l2 subdevice context
+ * reference count
+ *
+ * @ctx: The v4l2 subdevice context to put
+ */
+void v4l2_subdev_context_put(struct v4l2_subdev_context *ctx);
+
+/**
+ * v4l2_subdev_init_context - Initialize the v4l2 subdevice context
+ *
+ * @sd: The subdevice the context belongs to
+ * @ctx: The context to initialize
+ *
+ * Initialize the v4l2 subdevice context. The intended callers of this function
+ * are driver-specific implementations of the media_entity_ops.alloc_context()
+ * operation, which allocate their driver-specific types derived from
+ * struct v4l2_subdev_context.
+ */
+int v4l2_subdev_init_context(struct v4l2_subdev *sd,
+ struct v4l2_subdev_context *ctx);
+
+/**
+ * v4l2_subdev_cleanup_context - Cleanup the v4l2 subdevice context
+ *
+ * @ctx: The context to cleanup.
+ *
+ * Cleanup the v4l2 subdevice context. The intended callers of this function are
+ * driver-specific implementations of the media_entity_ops.destroy_context()
+ * operation, before they release the memory previously allocated by
+ * media_entity_ops.alloc_context().
+ */
+void v4l2_subdev_cleanup_context(struct v4l2_subdev_context *ctx);
+
/**
* __v4l2_subdev_state_alloc - allocate v4l2_subdev_state
*
--
2.49.0
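As an illustration of the sub-classing described in the kerneldoc above,
a driver implementation could look roughly like the sketch below. The
foo_* names are hypothetical, and the alloc_context/destroy_context
entity operation signatures are assumed from the documentation in this
patch rather than taken from it:

struct foo_context {
	struct v4l2_subdev_context base;	/* must be the first member */
	u32 sequence;				/* driver-specific data */
};

static struct media_entity_context *
foo_alloc_context(struct media_entity *entity)
{
	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
	struct foo_context *ctx;
	int ret;

	/* Allocate the driver-specific structure size, as documented. */
	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return ERR_PTR(-ENOMEM);

	ret = v4l2_subdev_init_context(sd, &ctx->base);
	if (ret) {
		kfree(ctx);
		return ERR_PTR(ret);
	}

	return &ctx->base.base;
}

static void foo_destroy_context(struct media_entity_context *mctx)
{
	struct foo_context *ctx =
		container_of(mctx, struct foo_context, base.base);

	v4l2_subdev_cleanup_context(&ctx->base);
	kfree(ctx);
}

static void foo_use_context(struct media_device_context *mdev_context,
			    struct v4l2_subdev *sd)
{
	struct v4l2_subdev_context *ctx =
		v4l2_subdev_context_get(mdev_context, sd);
	struct foo_context *fctx;

	if (!ctx)
		return;

	fctx = container_of(ctx, struct foo_context, base);
	/* ... operate on fctx ... */

	v4l2_subdev_context_put(ctx);	/* drop the reference taken above */
}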
* [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-07-24 14:10 ` [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context Jacopo Mondi
@ 2025-09-25 10:55 ` Anthony McGivern
From: Anthony McGivern @ 2025-09-25 10:55 UTC (permalink / raw)
To: jacopo.mondi
Cc: bcm-kernel-feedback-list, florian.fainelli, hverkuil, kernel-list,
kieran.bingham, laurent.pinchart, linux-arm-kernel, linux-kernel,
linux-media, linux-rpi-kernel, m.szyprowski, mchehab,
nicolas.dufresne, sakari.ailus, tfiga, tomi.valkeinen
Hi Jacopo,
On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> context. It extends 'struct media_entity_context' and is intended to be
> extended by drivers that can store driver-specific information
> in their derived types.
>
> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
I am interested in how the sub-device context will handle the Streams API? Looking at the commits the v4l2_subdev_enable/disable_streams functions still appear to operate on the main sub-device only. I take it we would have additional context-aware functions here that can fetch the subdev state from the sub-device context, though I imagine some fields will have to be moved into the context such as s_stream_enabled, or even enabled_pads for non stream-aware drivers?
Anthony
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
[not found] <DU0PR08MB8836559555E586FCD5AE1CBA811FA@DU0PR08MB8836.eurprd08.prod.outlook.com>
@ 2025-09-30 9:53 ` Jacopo Mondi
2025-09-30 10:16 ` Laurent Pinchart
From: Jacopo Mondi @ 2025-09-30 9:53 UTC (permalink / raw)
To: Anthony McGivern
Cc: jacopo.mondi@ideasonboard.com,
bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
laurent.pinchart@ideasonboard.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, nicolas.dufresne@collabora.com,
sakari.ailus@linux.intel.com, tfiga@chromium.org,
tomi.valkeinen@ideasonboard.com
Hi Anthony
On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>
> Hi Jacopo,
>
> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> > Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> > contex. It extends 'struct media_entity_context' and is intended to be
> > extended by drivers that can store driver-specific information
> > in their derived types.
> >
> > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
>
> I am interested in how the sub-device context will handle the Streams API? Looking at the commits the v4l2_subdev_enable/disable_streams functions still appear to operate on the main sub-device only. I take it we would have additional context-aware functions here that can fetch the subdev state from the sub-device context, though I imagine some fields will have to be moved into the context such as s_stream_enabled, or even enabled_pads for non stream-aware drivers?
>
mmm good question, I admit I might have not considered that part yet.
Streams API should go in as soon as Sakari's long awaited series hits
mainline, and I will certainly need to rebase soon, so I'll probably
get back to this.
Have you any idea about how this should be designed ?
Thanks
j
> Anthony
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-09-30 9:53 ` [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context Jacopo Mondi
@ 2025-09-30 10:16 ` Laurent Pinchart
2025-09-30 12:58 ` Nicolas Dufresne
From: Laurent Pinchart @ 2025-09-30 10:16 UTC (permalink / raw)
To: Jacopo Mondi
Cc: Anthony McGivern, bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, nicolas.dufresne@collabora.com,
sakari.ailus@linux.intel.com, tfiga@chromium.org,
tomi.valkeinen@ideasonboard.com
On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
> > On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> > > Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> > > context. It extends 'struct media_entity_context' and is intended to be
> > > extended by drivers that can store driver-specific information
> > > in their derived types.
> > >
> > > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> >
> > I am interested in how the sub-device context will handle the
> > Streams API? Looking at the commits the
> > v4l2_subdev_enable/disable_streams functions still appear to operate
> > on the main sub-device only. I take it we would have additional
> > context-aware functions here that can fetch the subdev state from
> > the sub-device context, though I imagine some fields will have to be
> > moved into the context such as s_stream_enabled, or even
> > enabled_pads for non stream-aware drivers?
>
> mmm good question, I admit I might have not considered that part yet.
>
> Streams API should go in as soon as Sakari's long awaited series hits
> mainline, and I will certainly need to rebase soon, so I'll probably
> get back to this.
>
> Have you any idea about how this should be designed ?
Multi-context is designed for memory to memory pipelines, as inline
pipelines can't be time-multiplexed (at least not without very specific
hardware designs that I haven't encountered in SoCs so far). In a
memory-to-memory pipeline I expect the .enable/disable_streams()
operation to not do much, as the entities in the pipeline operate based
on buffers being queued on the input and output video devices. We may
still need to support this in the multi-context framework, depending on
the needs of drivers.
Anthony, could you perhaps share some information about the pipeline
you're envisioning and the type of subdev that you think would cause
concerns ?
--
Regards,
Laurent Pinchart
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-09-30 10:16 ` Laurent Pinchart
@ 2025-09-30 12:58 ` Nicolas Dufresne
2025-10-02 7:42 ` Anthony McGivern
From: Nicolas Dufresne @ 2025-09-30 12:58 UTC (permalink / raw)
To: Laurent Pinchart, Jacopo Mondi
Cc: Anthony McGivern, bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, sakari.ailus@linux.intel.com,
tfiga@chromium.org, tomi.valkeinen@ideasonboard.com
Hi Laurent,
On Tue, Sep 30, 2025 at 13:16 +0300, Laurent Pinchart wrote:
> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
> > On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
> > > On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> > > > Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> > > > context. It extends 'struct media_entity_context' and is intended to be
> > > > extended by drivers that can store driver-specific information
> > > > in their derived types.
> > > >
> > > > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> > >
> > > I am interested in how the sub-device context will handle the
> > > Streams API? Looking at the commits the
> > > v4l2_subdev_enable/disable_streams functions still appear to operate
> > > on the main sub-device only. I take it we would have additional
> > > context-aware functions here that can fetch the subdev state from
> > > the sub-device context, though I imagine some fields will have to be
> > > moved into the context such as s_stream_enabled, or even
> > > enabled_pads for non stream-aware drivers?
> >
> > mmm good question, I admit I might have not considered that part yet.
> >
> > Streams API should go in as soon as Sakari's long awaited series hits
> > mainline, and I will certainly need to rebase soon, so I'll probably
> > get back to this.
> >
> > Have you any idea about how this should be designed ?
>
> Multi-context is designed for memory to memory pipelines, as inline
> pipelines can't be time-multiplexed (at least not without very specific
> hardware designs that I haven't encountered in SoCs so far). In a
I probably don't understand what you mean here, since I know you are well aware
of the ISP design on RK3588. It has two cores, which allow handling up to 2
sensors inline, but once you need more streams, you should have a way to
reconfigure the pipeline and use one or both cores in an m2m (multi-context)
fashion to extend its capability (balancing the resolutions and rate as usual).
Perhaps you mean this specific case is already covered by the stream API
combined with other floating proposal ? I think most of us are missing the big
picture and just see organic proposals toward goals documented as un-related,
but that actually looks related.
Nicolas
> memory-to-memory pipeline I expect the .enable/disable_streams()
> operation to not do much, as the entities in the pipeline operate based
> on buffers being queued on the input and output video devices. We may
> still need to support this in the multi-context framework, depending on
> the needs of drivers.
>
> Anthony, could you perhaps share some information about the pipeline
> you're envisioning and the type of subdev that you think would cause
> concerns ?
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-09-30 12:58 ` Nicolas Dufresne
@ 2025-10-02 7:42 ` Anthony McGivern
2025-10-02 8:06 ` Michael Riesch
2025-10-02 13:28 ` Jacopo Mondi
From: Anthony McGivern @ 2025-10-02 7:42 UTC (permalink / raw)
To: Nicolas Dufresne, Laurent Pinchart, Jacopo Mondi
Cc: bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, sakari.ailus@linux.intel.com,
tfiga@chromium.org, tomi.valkeinen@ideasonboard.com
Hi all,
On 30/09/2025 13:58, Nicolas Dufresne wrote:
> Hi Laurent,
>
> On Tue, Sep 30, 2025 at 13:16 +0300, Laurent Pinchart wrote:
>> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
>>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
>>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
>>>>> context. It extends 'struct media_entity_context' and is intended to be
>>>>> extended by drivers that can store driver-specific information
>>>>> in their derived types.
>>>>>
>>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
>>>>
>>>> I am interested in how the sub-device context will handle the
>>>> Streams API? Looking at the commits the
>>>> v4l2_subdev_enable/disable_streams functions still appear to operate
>>>> on the main sub-device only. I take it we would have additional
>>>> context-aware functions here that can fetch the subdev state from
>>>> the sub-device context, though I imagine some fields will have to be
>>>> moved into the context such as s_stream_enabled, or even
>>>> enabled_pads for non stream-aware drivers?
>>>
>>> mmm good question, I admit I might have not considered that part yet.
>>>
>>> Streams API should go in as soon as Sakari's long awaited series hits
>>> mainline, and I will certainly need to rebase soon, so I'll probably
>>> get back to this.
>>>
>>> Have you any idea about how this should be designed ?
Hmm, while I haven't thought of a full implementation I did some testing
where I added a v4l2_subdev_context_enable_streams and its respective
disable_streams. These would provide the v4l2_subdev_context so that
when the subdev state was fetched it would retrieve it from the context.
I think this would work with the streams API, however for drivers that don't
support this it will not since the fields such as enabled_pads are located
in the v4l2_subdev struct itself. Assuming these fields are only used in the
V4L2 core (haven't checked this fully) potentially they could be moved into
subdev state?
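For reference, the experiment looked roughly like the sketch below. It is
untested and assumes the state pointer the context carries in this series;
the per-context bookkeeping is only a comment, since the equivalent fields
live in struct v4l2_subdev today:

int v4l2_subdev_context_enable_streams(struct v4l2_subdev *sd,
				       struct v4l2_subdev_context *ctx,
				       u32 pad, u64 streams_mask)
{
	/* Operate on the state embedded in the context, not on the
	 * subdev active state. */
	struct v4l2_subdev_state *state = ctx->state;
	int ret;

	v4l2_subdev_lock_state(state);

	/*
	 * Per-context tracking of enabled streams (the equivalent of
	 * enabled_pads / s_stream_enabled) would go here, once those
	 * fields are moved out of struct v4l2_subdev.
	 */
	ret = v4l2_subdev_call(sd, pad, enable_streams, state, pad,
			       streams_mask);

	v4l2_subdev_unlock_state(state);

	return ret;
}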
There were some other areas that I found when trying to implement this
in the driver we are working on, for example media_pad_remote_pad_unique()
only uses the media_pad struct, meaning multi-context would not work here,
at least in the way I expected. Perhaps this is where we have some differing
thoughts on how it would be used. See some details below about the driver we
are working on.
>>
>> Multi-context is designed for memory to memory pipelines, as inline
>> pipelines can't be time-multiplexed (at least not without very specific
>> hardware designs that I haven't encountered in SoCs so far). In a
>
> I probably don't understand what you mean here, since I know you are well aware
> of the ISP design on RK3588. It has two cores, which allow handling up to 2
> sensors inline, but once you need more streams, you should have a way to
> reconfigure the pipeline and use one or both cores in an m2m (multi-context)
> fashion to extend its capability (balancing the resolutions and rate as usual).
>
> Perhaps you mean this specific case is already covered by the stream API
> combined with other floating proposal ? I think most of us are missing the big
> picture and just see organic proposals toward goals documented as un-related,
> but that actually looks related.
>
> Nicolas
>
>> memory-to-memory pipeline I expect the .enable/disable_streams()
>> operation to not do much, as the entities in the pipeline operate based
>> on buffers being queued on the input and output video devices. We may
>> still need to support this in the multi-context framework, depending on
>> the needs of drivers.
>>
>> Anthony, could you perhaps share some information about the pipeline
>> you're envisioning and the type of subdev that you think would cause
>> concerns ?
I am currently working on a driver for the Mali-C720 ISP. See the link
below for the developer page relating to this for some details:
https://developer.arm.com/Processors/Mali-C720AE
To summarize, it is capable of supporting up to 16 sensors, either through
streaming inputs or memory-to-memory modes, and uses a hardware context manager
to schedule each context to be processed. There are four video inputs, each
supporting four virtual channels. On the processing side, there are two parallel
processing pipelines, one optimized for human vision and the other for computer
vision. These feed into numerous output pipelines, including four crop+scaler
pipes which can each independently select whether to use the HV or CV pipe as
its input.
As such, our driver has a multi-layer topology to facilitate this configurability.
With some small changes to Libcamera I have all of the output pipelines implemented
and the media graph is correctly configured, but we would like to update the driver
to support multi-context.
My understanding initially was each context could have its own topology configured
while using the same sub-devices. For example, context 0 may link our crop+scaler
pipes to human vision, whereas context 1 uses computer vision. Similarly, our input
sub-device uses internal routing to route from the desired sensor to its context.
It would be my thought that the input sub-device here would be shared across every
context but could route the sensor data to the necessary contexts. With the current
implementation, we make large use of the streams API and have many links to configure
based on the usecase so in our case any multi-context integration would also need
to support this.
Anthony
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-10-02 7:42 ` Anthony McGivern
@ 2025-10-02 8:06 ` Michael Riesch
2025-10-02 13:28 ` Jacopo Mondi
From: Michael Riesch @ 2025-10-02 8:06 UTC (permalink / raw)
To: Anthony McGivern, Nicolas Dufresne, Laurent Pinchart,
Jacopo Mondi
Cc: bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, sakari.ailus@linux.intel.com,
tfiga@chromium.org, tomi.valkeinen@ideasonboard.com
Hi Anthony, hi all,
On 10/2/25 09:42, Anthony McGivern wrote:
>
> Hi all,
>
> On 30/09/2025 13:58, Nicolas Dufresne wrote:
>> Hi Laurent,
>>
>> On Tue, Sep 30, 2025 at 13:16 +0300, Laurent Pinchart wrote:
>>> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
>>>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
>>>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
>>>>>> context. It extends 'struct media_entity_context' and is intended to be
>>>>>> extended by drivers that can store driver-specific information
>>>>>> in their derived types.
>>>>>>
>>>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
>>>>>
>>>>> I am interested in how the sub-device context will handle the
>>>>> Streams API? Looking at the commits the
>>>>> v4l2_subdev_enable/disable_streams functions still appear to operate
>>>>> on the main sub-device only. I take it we would have additional
>>>>> context-aware functions here that can fetch the subdev state from
>>>>> the sub-device context, though I imagine some fields will have to be
>>>>> moved into the context such as s_stream_enabled, or even
>>>>> enabled_pads for non stream-aware drivers?
>>>>
>>>> mmm good question, I admit I might have not considered that part yet.
>>>>
>>>> Streams API should go in as soon as Sakari's long awaited series hits
>>>> mainline, and I will certainly need to rebase soon, so I'll probably
>>>> get back to this.
>>>>
>>>> Have you any idea about how this should be designed ?
>
> Hmm, while I haven't thought of a full implementation I did some testing
> where I added a v4l2_subdev_context_enable_streams and its respective
> disable_streams. These would provide the v4l2_subdev_context so that
> when the subdev state was fetched it would retrieve it from the context.
> I think this would work with the streams API, however for drivers that don't
> support this it will not work, since the fields such as enabled_pads are located
> in the v4l2_subdev struct itself. Assuming these fields are only used in the
> V4L2 core (haven't checked this fully) potentially they could be moved into
> subdev state?
>
> There were some other areas that I found when trying to implement this
> in the driver we are working on, for example media_pad_remote_pad_unique()
> only uses the media_pad struct, meaning multi-context would not work here,
> at least in the way I expected. Perhaps this is where we have some differing
> thoughts on how it would be used. See some details below about the driver we
> are working on.
>
>>>
>>> Multi-context is designed for memory to memory pipelines, as inline
>>> pipelines can't be time-multiplexed (at least not without very specific
>>> hardware designs that I haven't encountered in SoCs so far). In a
>>
>> I probably don't understand what you mean here, since I know you are well aware
>> of the ISP design on RK3588. It has two cores, which allow handling up to 2
>> sensors inline, but once you need more streams, you should have a way to
>> reconfigure the pipeline and use one or both cores in an m2m (multi-context)
>> fashion to extend its capability (balancing the resolutions and rate as usual).
>>
>> Perhaps you mean this specific case is already covered by the stream API
>> combined with other floating proposal ? I think most of us are missing the big
>> picture and just see organic proposals toward goals documented as un-related,
>> but that actually looks related.
>>
>> Nicolas
>>
>>> memory-to-memory pipeline I expect the .enable/disable_streams()
>>> operation to not do much, as the entities in the pipeline operate based
>>> on buffers being queued on the input and output video devices. We may
>>> still need to support this in the multi-context framework, depending on
>>> the needs of drivers.
>>>
>>> Anthony, could you perhaps share some information about the pipeline
>>> you're envisioning and the type of subdev that you think would cause
>>> concerns ?
>
> I am currently working on a driver for the Mali-C720 ISP. See the link
> below for the developer page relating to this for some details:
>
> https://developer.arm.com/Processors/Mali-C720AE
>
> To summarize, it is capable of supporting up to 16 sensors, either through
> streaming inputs or memory-to-memory modes, and uses a hardware context manager
> to schedule each context to be processed. There are four video inputs, each
> supporting four virtual channels. On the processing side, there are two parallel
> processing pipelines, one optimized for human vision and the other for computer
> vision. These feed into numerous output pipelines, including four crop+scaler
> pipes which can each independently select whether to use the HV or CV pipe as
> its input.
>
> As such, our driver has a multi-layer topology to facilitate this configurability.
> With some small changes to Libcamera I have all of the output pipelines implemented
> and the media graph is correctly configured, but we would like to update the driver
> to support multi-context.
>
> My understanding initially was each context could have its own topology configured
> while using the same sub-devices. For example, context 0 may link our crop+scaler
+1
I agree that having a notion of topology/graph partition/graph
configuration in the context will facilitate the support for the vast
amount of possible configurations and use cases that modern image
processing pipelines offer.
I had a chat with Jacopo yesterday and (I think) we found that the media
graph configuration (routing, linking, ...) could be embedded in the
media context (maybe in the form of a media state).
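Purely as a strawman, something like the below is the shape I have in
mind; every name here is invented and nothing of this exists in the
series:

/* Strawman only: a per-context snapshot of the graph configuration,
 * carried by the media device context. All names are invented. */
struct media_graph_state {
	/* enable/disable state of every mutable link, one bit per link */
	unsigned long *link_states;
	/* one routing table per routing-aware entity */
	struct v4l2_subdev_krouting *routing;
	unsigned int num_routing_tables;
};

A media device context would then carry one of these, and the core
would apply it whenever the context is scheduled.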
Best regards,
Michael
> pipes to human vision, whereas context 1 uses computer vision. Similarly, our input
> sub-device uses internal routing to route from the desired sensor to it's context.
> It would by my thoughts that the input sub-device here would be shared across every
> context but could route the sensor data to the necessary contexts. With the current
> implementation, we make large use of the streams API and have many links to configure
> based on the usecase so in our case any multi-context integration would also need
> to support this.
>
> Anthony
>
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-10-02 7:42 ` Anthony McGivern
2025-10-02 8:06 ` Michael Riesch
@ 2025-10-02 13:28 ` Jacopo Mondi
2025-10-03 12:21 ` Anthony McGivern
From: Jacopo Mondi @ 2025-10-02 13:28 UTC (permalink / raw)
To: Anthony McGivern
Cc: Nicolas Dufresne, Laurent Pinchart, Jacopo Mondi,
bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, sakari.ailus@linux.intel.com,
tfiga@chromium.org, tomi.valkeinen@ideasonboard.com
Hi Anthony
thanks for the details
On Thu, Oct 02, 2025 at 08:42:56AM +0100, Anthony McGivern wrote:
>
> Hi all,
>
> On 30/09/2025 13:58, Nicolas Dufresne wrote:
> > Hi Laurent,
> >
> > On Tue, Sep 30, 2025 at 13:16 +0300, Laurent Pinchart wrote:
> >> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
> >>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
> >>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
> >>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
> >>>>> context. It extends 'struct media_entity_context' and is intended to be
> >>>>> extended by drivers that can store driver-specific information
> >>>>> in their derived types.
> >>>>>
> >>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> >>>>
> >>>> I am interested in how the sub-device context will handle the
> >>>> Streams API? Looking at the commits the
> >>>> v4l2_subdev_enable/disable_streams functions still appear to operate
> >>>> on the main sub-device only. I take it we would have additional
> >>>> context-aware functions here that can fetch the subdev state from
> >>>> the sub-device context, though I imagine some fields will have to be
> >>>> moved into the context such as s_stream_enabled, or even
> >>>> enabled_pads for non stream-aware drivers?
> >>>
> >>> mmm good question, I admit I might have not considered that part yet.
> >>>
> >>> Streams API should go in as soon as Sakari's long awaited series hits
> >>> mainline, and I will certainly need to rebase soon, so I'll probably
> >>> get back to this.
> >>>
> >>> Have you any idea about how this should be designed ?
>
> Hmm, while I haven't thought of a full implementation I did some testing
> where I added a v4l2_subdev_context_enable_streams and its respective
> disable_streams. These would provide the v4l2_subdev_context so that
> when the subdev state was fetched it would retrieve it from the context.
> I think this would work with the streams API, however for drivers that don't
> support this it will not work, since the fields such as enabled_pads are located
> in the v4l2_subdev struct itself. Assuming these fields are only used in the
> V4L2 core (haven't checked this fully) potentially they could be moved into
> subdev state?
>
> There were some other areas that I found when trying to implement this
> in the driver we are working on, for example media_pad_remote_pad_unique()
> only uses the media_pad struct, meaning multi-context would not work here,
> at least in the way I expected. Perhaps this is where we have some differing
> thoughts on how it would be used. See some details below about the driver we
> are working on.
>
> >>
> >> Multi-context is designed for memory to memory pipelines, as inline
> >> pipelines can't be time-multiplexed (at least not without very specific
> >> hardware designs that I haven't encountered in SoCs so far). In a
> >
> > I probably don't understand what you mean here, since I know you are well aware
> > of the ISP design on RK3588. It has two cores, which allow handling up to 2
> > sensors inline, but once you need more streams, you should have a way to
> > reconfigure the pipeline and use one or both cores in an m2m (multi-context)
> > fashion to extend its capability (balancing the resolutions and rate as usual).
> >
> > Perhaps you mean this specific case is already covered by the stream API
> > combined with other floating proposal ? I think most of us are missing the big
> > picture and just see organic proposals toward goals documented as un-related,
> > but that actually looks related.
> >
> > Nicolas
> >
> >> memory-to-memory pipeline I expect the .enable/disable_streams()
> >> operation to not do much, as the entities in the pipeline operate based
> >> on buffers being queued on the input and output video devices. We may
> >> still need to support this in the multi-context framework, depending on
> >> the needs of drivers.
> >>
> >> Anthony, could you perhaps share some information about the pipeline
> >> you're envisioning and the type of subdev that you think would cause
> >> concerns ?
>
> I am currently working on a driver for the Mali-C720 ISP. See the link
> below for the developer page relating to this for some details:
>
> https://developer.arm.com/Processors/Mali-C720AE
>
> To summarize, it is capable of supporting up to 16 sensors, either through
> streaming inputs or memory-to-memory modes, and uses a hardware context manager
Could you help me better grasp this part ? Can the device work in m2m and inline
mode at the same time ? IOW can you assign some of the input ports to
the streaming part and reserve other input ports for m2m ? I'm
interested in understanding which parts of the system are capable of
reading from memory and which part is instead fed from the CSI-2
receiver pipeline
> to schedule each context to be processed. There are four video inputs, each
> supporting four virtual channels. On the processing side, there are two parallel
Similar in spirit to the previous question: "each input supports 4 virtual
channels": do the 4 streams get demuxed to memory ? Or do they get
demuxed to an internal bus connected to the processing pipes ?
> processing pipelines, one optimized for human vision and the other for computer
> vision. These feed into numerous output pipelines, including four crop+scaler
> pipes which can each independently select whether to use the HV or CV pipe as
> its input.
>
> As such, our driver has a multi-layer topology to facilitate this configurability.
What do you mean by multi-layer ? :)
> With some small changes to Libcamera I have all of the output pipelines implemented
> and the media graph is correctly configured, but we would like to update the driver
> to support multi-context.
Care to share a .dot representation of the media graph ?
>
> My understanding initially was each context could have its own topology configured
> while using the same sub-devices. For example, context 0 may link our crop+scaler
> pipes to human vision, whereas context 1 uses computer vision. Similarly, our input
> sub-device uses internal routing to route from the desired sensor to its context.
> It would be my thought that the input sub-device here would be shared across every
> context but could route the sensor data to the necessary contexts. With the current
> implementation, we make large use of the streams API and have many links to configure
> based on the usecase so in our case any multi-context integration would also need
> to support this.
>
Media link state and routing I think make sense in the perspective of
contexts. I still feel like for ISP pipelines we could do with just
media links, but routing can be used as well (and in fact we already
do in the C55 iirc). At this time there is no support in this series
for this simply because it's not a feature I need.
As Laurent said, the stream API are mostly designed to represent data
streams multiplexed on the same physical bus, with CSI-2 being the
main use case for now, and I admit I'm still not sure if and how they
have to be considered when operated with contexts.
My general rule of thumb to decide if a point in the pipeline should
be context aware or not is: "can its configuration change on a
per-frame basis ?". If yes, then it means it is designed to be
time-multiplexed between different contexts. If not, maybe I'm
oversimplifying here, then there is no need to alternate its usage on
a per-context basis and a properly designed link/routing setup should
do.
I've discussed yesterday with Michael if contexts could also be used
for partitioning a graph (making sure two non-overlapping partitions
of the pipeline can be used at the same time by two different
applications). I guess you could, but that's not the primary target,
as if the pipeline is properly designed you should be able to properly
partition it using media links and routing.
Happy to discuss your use case in more detail though to make sure
that, even if not all the required features are there in this first
version, we're not designing something that makes it impossible to
support them in future.
> Anthony
* Re: [PATCH v2 12/27] media: v4l2-subdev: Introduce v4l2 subdev context
2025-10-02 13:28 ` Jacopo Mondi
@ 2025-10-03 12:21 ` Anthony McGivern
From: Anthony McGivern @ 2025-10-03 12:21 UTC (permalink / raw)
To: Jacopo Mondi
Cc: Nicolas Dufresne, Laurent Pinchart,
bcm-kernel-feedback-list@broadcom.com,
florian.fainelli@broadcom.com, hverkuil@kernel.org,
kernel-list@raspberrypi.com,
Kieran Bingham (kieran.bingham@ideasonboard.com),
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-rpi-kernel@lists.infradead.org, m.szyprowski@samsung.com,
mchehab@kernel.org, sakari.ailus@linux.intel.com,
tfiga@chromium.org, tomi.valkeinen@ideasonboard.com
Hi Jacopo,
On 02/10/2025 14:28, Jacopo Mondi wrote:
> Hi Anthony
> thanks for the details
>
> On Thu, Oct 02, 2025 at 08:42:56AM +0100, Anthony McGivern wrote:
>>
>> Hi all,
>>
>> On 30/09/2025 13:58, Nicolas Dufresne wrote:
>>> Hi Laurent,
>>>
>>> On Tue, Sep 30, 2025 at 13:16 +0300, Laurent Pinchart wrote:
>>>> On Tue, Sep 30, 2025 at 11:53:39AM +0200, Jacopo Mondi wrote:
>>>>> On Thu, Sep 25, 2025 at 09:26:56AM +0000, Anthony McGivern wrote:
>>>>>> On Thu, Jul 24, 2025 at 16:10:19 +0200, Jacopo Mondi wrote:
>>>>>>> Introduce a new type in v4l2 subdev that represents a v4l2 subdevice
>>>>>>> context. It extends 'struct media_entity_context' and is intended to be
>>>>>>> extended by drivers that can store driver-specific information
>>>>>>> in their derived types.
>>>>>>>
>>>>>>> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
>>>>>>
>>>>>> I am interested in how the sub-device context will handle the
>>>>>> Streams API? Looking at the commits the
>>>>>> v4l2_subdev_enable/disable_streams functions still appear to operate
>>>>>> on the main sub-device only. I take it we would have additional
>>>>>> context-aware functions here that can fetch the subdev state from
>>>>>> the sub-device context, though I imagine some fields will have to be
>>>>>> moved into the context such as s_stream_enabled, or even
>>>>>> enabled_pads for non stream-aware drivers?
>>>>>
>>>>> mmm good question, I admit I might have not considered that part yet.
>>>>>
>>>>> Streams API should go in as soon as Sakari's long awaited series hits
>>>>> mainline, and I will certainly need to rebase soon, so I'll probably
>>>>> get back to this.
>>>>>
>>>>> Have you any idea about how this should be designed ?
>>
>> Hmm, while I haven't thought of a full implementation I did some testing
>> where I added a v4l2_subdev_context_enable_streams and its respective
>> disable_streams. These would provide the v4l2_subdev_context so that
>> when the subdev state was fetched it would retrieve it from the context.
>> I think this would work with the streams API, however for drivers that don't
>> support this it will not work, since the fields such as enabled_pads are located
>> in the v4l2_subdev struct itself. Assuming these fields are only used in the
>> V4L2 core (haven't checked this fully) potentially they could be moved into
>> subdev state?
>>
>> There were some other areas that I found when trying to implement this
>> in the driver we are working on, for example media_pad_remote_pad_unique()
>> only uses the media_pad struct, meaning multi-context would not work here,
>> at least in the way I expected. Perhaps this is where we have some differing
>> thoughts on how it would be used. See some details below about the driver we
>> are working on.
>>
>>>>
>>>> Multi-context is designed for memory to memory pipelines, as inline
>>>> pipelines can't be time-multiplexed (at least not without very specific
>>>> hardware designs that I haven't encountered in SoCs so far). In a
>>>
>>> I probably don't understand what you mean here, since I know you are well aware
>>> of the ISP design on RK3588. It has two cores, which allow handling up to 2
>>> sensors inline, but once you need more streams, you should have a way to
>>> reconfigure the pipeline and use one or both cores in an m2m (multi-context)
>>> fashion to extend its capability (balancing the resolutions and rate as usual).
>>>
>>> Perhaps you mean this specific case is already covered by the stream API
>>> combined with other floating proposal ? I think most of us are missing the big
>>> picture and just see organic proposals toward goals documented as un-related,
>>> but that actually looks related.
>>>
>>> Nicolas
>>>
>>>> memory-to-memory pipeline I expect the .enable/disable_streams()
>>>> operation to not do much, as the entities in the pipeline operate based
>>>> on buffers being queued on the input and output video devices. We may
>>>> still need to support this in the multi-context framework, depending on
>>>> the needs of drivers.
>>>>
>>>> Anthony, could you perhaps share some information about the pipeline
>>>> you're envisioning and the type of subdev that you think would cause
>>>> concerns ?
>>
>> I am currently working on a driver for the Mali-C720 ISP. See the link
>> below for the developer page relating to this for some details:
>>
>> https://developer.arm.com/Processors/Mali-C720AE
>>
>> To summarize, it is capable of supporting up to 16 sensors, either through
>> streaming inputs or memory-to-memory modes, and uses a hardware context manager
>
> Could you help me better grasp this part ? Can the device work in m2m and inline
> mode at the same time ? IOW can you assign some of the input ports to
> the streaming part and reserve other input ports for m2m ? I'm
> interested in understanding which parts of the system are capable of
> reading from memory and which part is instead fed from the CSI-2
> receiver pipeline
Each context can run in either inline mode as you'd call it, or in m2m mode.
It would be perfectly valid to have one context connected to a sensor while
another simply takes frames from buffers.
The hardware has numerous raw/out buffer descriptors that we can reserve for
our contexts. The driver handles reserving descriptors to each context, at
which point we must configure them with fields such as data format,
resolution, etc. We also assign their addresses, which may come from buffers
allocated internally by the driver for inline sensors, or our vb2 queue from
our memory input v4l2 output device.
A context must be assigned an input, which in inline mode is our desired video
input id, or a raw buffer descriptor id in m2m mode.
For inline mode, we configure our video input with the appropriate data format
and resolution, and assign it a raw buffer descriptor (the one we reserved for
our context). The hardware will then write frames that arrive on this input to
those buffers, at which point the hardware will know this context is ready to
be scheduled.
For m2m, the driver writes the VB2 buffer address to the raw buffer descriptor,
then triggers the context to be ready for scheduling.
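In code, the m2m input path boils down to something like this vb2
buf_queue sketch (the c720_* names are invented here; the hardware
helpers stand in for the descriptor programming described above):

static void c720_mem_input_buf_queue(struct vb2_buffer *vb)
{
	struct c720_context *ctx = vb2_get_drv_priv(vb->vb2_queue);
	dma_addr_t addr = vb2_dma_contig_plane_dma_addr(vb, 0);

	/* Point the context's raw buffer descriptor at the queued buffer. */
	c720_hw_set_raw_descriptor(ctx, addr);

	/* Tell the hardware context manager this context can be scheduled. */
	c720_hw_mark_context_ready(ctx);
}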
The hardware is then responsible for actually scheduling these contexts.
If desired a user can configure specific scheduling modes, though by default
we are using a first-come, first-served approach.
Once scheduled, the hardware automatically reads from the context's assigned
raw buffer and injects it into the pipeline. At which point, each output
writes to their assigned output buffer descriptor, whose addresses are provided
by each capture device's vb2 queue.
>
>> to schedule each context to be processed. There are four video inputs, each
>> supporting four virtual channels. On the processing side, there are two parallel
>
> Similar in spirit to the previous question: "each input supports 4 virtual
> channels": do the 4 streams get demuxed to memory ? Or do they get
> demuxed to an internal bus connected to the processing pipes ?
>
Yes, the hardware treats every stream as a virtual input, so 16
virtual inputs in total. Each is configured with its own raw buffer descriptor
and thus images are written to separate buffers.
>> processing pipelines, one optimized for human vision and the other for computer
>> vision. These feed into numerous output pipelines, including four crop+scaler
>> pipes which can each independently select whether to use the HV or CV pipe as
>> its input.
>>
>> As such, our driver has a multi-layer topology to facilitate this configurability.
>
> What do you mean by multi-layer ? :)
Perhaps my terminology is wrong here xD But the general idea is this:
    Input
     pipe
      /\
     /  \
    HV  CV
     \  /
    Outputs
The input pipe (not to be confused with the video inputs) is the first stage
of the processing pipeline. From here, the image can flow to both HV and CV in parallel.
At which point, the output pipelines can choose whether to use the image from the
human vision or computer vision pipe (mutually exclusive), and each output pipe can
choose independently (i.e. output 0 uses HV, while output 1 chooses CV). So I guess
I meant to say there are multiple layers in the media graph where links can be configured.
>
>> With some small changes to Libcamera I have all of the output pipelines implemented
>> and the media graph is correctly configured, but we would like to update the driver
>> to support multi-context.
>
> Care to share a .dot representation of the media graph ?
>
Sure, I can attach what we have in the current state. Of course this doesn't show the
internal routes, one point being in the input sub-device which can route streams
from the 4 sink pads to the 16 possible source pads. An example here might be two
sensors sharing the same sink pad on different VCs, routing one to context 0 and
the other to context 1.
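To make that VC example concrete, the routing table on the input
sub-device would look something along these lines (pad and stream
numbers purely illustrative):

/* Two sensors on sink pad 0, VC0 and VC1, routed to the source pads
 * feeding context 0 and context 1. */
static const struct v4l2_subdev_route input_routes[] = {
	{
		.sink_pad = 0,
		.sink_stream = 0,	/* sensor A on VC0 */
		.source_pad = 4,	/* source pad feeding context 0 */
		.source_stream = 0,
		.flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
	}, {
		.sink_pad = 0,
		.sink_stream = 1,	/* sensor B on VC1 */
		.source_pad = 5,	/* source pad feeding context 1 */
		.source_stream = 0,
		.flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
	},
};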
We do also make use of streams within the isp sub-device to handle some hardware
muxes that control the flow of data through the input pipeline.
Perhaps this is not the best approach; I elected to use this over controls as different
routes actually affect the format of the image data. All the routes on the isp sub-device
are immutable with downstream sub-devices selecting which of these mutually exclusive
routes they wish to use.
Just to point out, the isp sub-device represents the input pipeline. I called it this
to try to avoid confusion with the video inputs, and also as it acts as the main point
of controlling the context (i.e. stopping/starting the HW). Data will always flow
through this pipeline, whereas HV and CV may not always be in use.
digraph board {
rankdir=TB
n00000001 [
label="{{} | mali-c720 tpg 0\n/dev/v4l-subdev0 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000001:port0 -> n00000009:port0 [style=dashed]
n00000001:port0 -> n00000009:port1 [style=dashed]
n00000001:port0 -> n00000009:port2 [style=dashed]
n00000001:port0 -> n00000009:port3 [style=dashed]
n00000003 [
label="{{} | mali-c720 tpg 1\n/dev/v4l-subdev1 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000003:port0 -> n00000009:port0 [style=dashed]
n00000003:port0 -> n00000009:port1 [style=dashed]
n00000003:port0 -> n00000009:port2 [style=dashed]
n00000003:port0 -> n00000009:port3 [style=dashed]
n00000005 [
label="mali-c720 mem-input\n/dev/video1",
shape=box,
style=filled,
fillcolor=yellow
]
n00000005 -> n0000001e:port0 [style=dashed]
n00000009 [
label="{{<port0> 0 | <port1> 1 | <port2> 2 | <port3> 3} |
mali-c720 input\n/dev/v4l-subdev2 |
{<port4> 4 | <port5> 5 | <port6> 6 | <port7> 7 |
<port8> 8 | <port9> 9 | <port10> 10 | <port11> 11 |
<port12> 12 | <port13> 13 | <port14> 14 | <port15> 15 |
<port16> 16 | <port17> 17 | <port18> 18 | <port19> 19}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000009:port4 -> n0000001e:port0 [style=dashed]
n00000009:port5 -> n0000001e:port0 [style=dashed]
n00000009:port6 -> n0000001e:port0 [style=dashed]
n00000009:port7 -> n0000001e:port0 [style=dashed]
n00000009:port8 -> n0000001e:port0 [style=dashed]
n00000009:port9 -> n0000001e:port0 [style=dashed]
n00000009:port10 -> n0000001e:port0 [style=dashed]
n00000009:port11 -> n0000001e:port0 [style=dashed]
n00000009:port12 -> n0000001e:port0 [style=dashed]
n00000009:port13 -> n0000001e:port0 [style=dashed]
n00000009:port14 -> n0000001e:port0 [style=dashed]
n00000009:port15 -> n0000001e:port0 [style=dashed]
n00000009:port16 -> n0000001e:port0 [style=dashed]
n00000009:port17 -> n0000001e:port0 [style=dashed]
n00000009:port18 -> n0000001e:port0 [style=dashed]
n00000009:port19 -> n0000001e:port0 [style=dashed]
n0000001e [
label="{{<port0> 0 | <port1> 1} |
mali-c720 isp\n/dev/v4l-subdev3 |
{<port2> 2 | <port3> 3 | <port4> 4 |
<port5> 5 | <port6> 6}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000001e:port3 -> n00000026:port0
n0000001e:port4 -> n0000002a:port0
n0000001e:port5 -> n0000003e:port0
n0000001e:port6 -> n0000003e:port0 [style=dashed]
n0000001e:port6 -> n00000047:port0 [style=dashed]
n0000001e:port2 -> n0000007e
n00000026 [
label="{{<port0> 0} |
mali-c720 hv pipe\n/dev/v4l-subdev4 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000026:port1 -> n0000002e:port0
n00000026:port1 -> n00000032:port0
n00000026:port1 -> n00000036:port0
n00000026:port1 -> n0000003a:port0
n00000026:port2 -> n0000003e:port0 [style=dashed]
n00000026:port1 -> n00000047:port0
n0000002a [
label="{{<port0> 0} |
mali-c720 cv pipe\n/dev/v4l-subdev5 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000002a:port1 -> n0000002e:port0 [style=dashed]
n0000002a:port1 -> n00000032:port0 [style=dashed]
n0000002a:port1 -> n00000036:port0 [style=dashed]
n0000002a:port1 -> n0000003a:port0 [style=dashed]
n0000002a:port2 -> n0000003e:port0 [style=dashed]
n0000002a:port1 -> n00000047:port0 [style=dashed]
n0000002e [
label="{{<port0> 0} |
mali-c720 fr 0 pipe\n/dev/v4l-subdev6 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000002e:port1 -> n0000004a
n0000002e:port2 -> n0000004e
n0000002e:port1 -> n00000042:port0 [style=dashed]
n0000002e:port2 -> n00000042:port0 [style=dashed]
n00000032 [
label="{{<port0> 0} |
mali-c720 fr 1 pipe\n/dev/v4l-subdev7 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000032:port1 -> n00000052
n00000032:port2 -> n00000056
n00000032:port1 -> n00000042:port0 [style=dashed]
n00000032:port2 -> n00000042:port0 [style=dashed]
n00000036 [
label="{{<port0> 0} |
mali-c720 fr 2 pipe\n/dev/v4l-subdev8 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000036:port1 -> n0000005a
n00000036:port2 -> n0000005e
n00000036:port1 -> n00000042:port0 [style=dashed]
n00000036:port2 -> n00000042:port0 [style=dashed]
n0000003a [
label="{{<port0> 0} |
mali-c720 fr 3 pipe\n/dev/v4l-subdev9 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000003a:port1 -> n00000062
n0000003a:port2 -> n00000066
n0000003a:port1 -> n00000042:port0 [style=dashed]
n0000003a:port2 -> n00000042:port0 [style=dashed]
n0000003e [
label="{{<port0> 0} |
mali-c720 raw pipe\n/dev/v4l-subdev10 |
{<port1> 1 | <port2> 2}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000003e:port1 -> n00000076
n0000003e:port2 -> n00000042:port0 [style=dashed]
n00000042 [
label="{{<port0> 0} |
mali-c720 foveated pipe\n/dev/v4l-subdev11 |
{<port1> 1 | <port2> 2 | <port3> 3}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000042:port1 -> n0000006a
n00000042:port2 -> n0000006e
n00000042:port3 -> n00000072
n00000047 [
label="{{<port0> 0} |
mali-c720 pyramid pipe\n/dev/v4l-subdev12 |
{<port1> 1}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n00000047:port1 -> n0000007a
n0000004a [
label="mali-c720 fr0-rgb\n/dev/video2",
shape=box,
style=filled,
fillcolor=yellow
]
n0000004e [
label="mali-c720 fr0-yuv\n/dev/video3",
shape=box,
style=filled,
fillcolor=yellow
]
n00000052 [
label="mali-c720 fr1-rgb\n/dev/video4",
shape=box,
style=filled,
fillcolor=yellow
]
n00000056 [
label="mali-c720 fr1-yuv\n/dev/video5",
shape=box,
style=filled,
fillcolor=yellow
]
n0000005a [
label="mali-c720 fr2-rgb\n/dev/video6",
shape=box,
style=filled,
fillcolor=yellow
]
n0000005e [
label="mali-c720 fr2-yuv\n/dev/video7",
shape=box,
style=filled,
fillcolor=yellow
]
n00000062 [
label="mali-c720 fr3-rgb\n/dev/video8",
shape=box,
style=filled,
fillcolor=yellow
]
n00000066 [
label="mali-c720 fr3-yuv\n/dev/video9",
shape=box,
style=filled,
fillcolor=yellow
]
n0000006a [
label="mali-c720 fov-0\n/dev/video10",
shape=box,
style=filled,
fillcolor=yellow
]
n0000006e [
label="mali-c720 fov-1\n/dev/video11",
shape=box,
style=filled,
fillcolor=yellow
]
n00000072 [
label="mali-c720 fov-2\n/dev/video12",
shape=box,
style=filled,
fillcolor=yellow
]
n00000076 [
label="mali-c720 raw\n/dev/video13",
shape=box,
style=filled,
fillcolor=yellow
]
n0000007a [
label="mali-c720 pyramid\n/dev/video14",
shape=box,
style=filled,
fillcolor=yellow
]
n0000007e [
label="mali-c720 3a stats\n/dev/video15",
shape=box,
style=filled,
fillcolor=yellow
]
n00000082 [
label="mali-c720 3a params\n/dev/video16",
shape=box,
style=filled,
fillcolor=yellow
]
n00000082 -> n0000001e:port1
n0000010a [
label="{{<port0> 0} |
lte-csi2-rx\n/dev/v4l-subdev13 |
{<port1> 1}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000010a:port1 -> n00000009:port0
n0000010f [
label="{{} | ar0231 0-0010\n/dev/v4l-subdev14 | {<port0> 0}}",
shape=Mrecord,
style=filled,
fillcolor=green
]
n0000010f:port0 -> n0000010a:port0 [style=bold]
}
>>
>> My understanding initially was each context could have its own topology configured
>> while using the same sub-devices. For example, context 0 may link our crop+scaler
>> pipes to human vision, whereas context 1 uses computer vision. Similarly, our input
>> sub-device uses internal routing to route from the desired sensor to its context.
>> It would be my thought that the input sub-device here would be shared across every
>> context but could route the sensor data to the necessary contexts. With the current
>> implementation, we make large use of the streams API and have many links to configure
>> based on the usecase so in our case any multi-context integration would also need
>> to support this.
>>
>
> Media link state and routing I think make sense in the perspective of
> contexts. I still feel like for ISP pipelines we could do with just
> media links, but routing can be used as well (and in fact we already
> do in the C55 iirc). At this time there is no support in this series
> for this simply because it's not a feature I need.
>
> As Laurent said, the streams API is mostly designed to represent data
> streams multiplexed on the same physical bus, with CSI-2 being the
> main use case for now, and I admit I'm still not sure if and how they
> have to be considered when operated with contexts.
>
Technically the streams could only be used for our driver on the video inputs,
handling routing sensor inputs to the appropriate contexts. Though I haven't
thought of a better way to deal with the internal routing within the pipeline
for the hardware muxes, especially since they affect the data format and in some
cases the resolution.
> My general rule of thumb to decide if a point in the pipeline should
> be context aware or not is: "can its configuration change on a
> per-frame basis ?". If yes, then it means it is designed to be
> time-multiplexed between different contexts. If not, maybe I'm
> oversimplifying here, then there is no need to alternate its usage on
> a per-context basis and a properly designed link/routing setup should
> do.
>
In our case we can completely change the configuration of the ISP on
every frame including internal muxes, which outputs are in use, etc.
But of course it makes sense that not every ISP may support this
functionality.
Thanks,
Anthony
> I've discussed yesterday with Michael if contexts could also be used
> for partitioning a graph (making sure two non-overlapping partitions
> of the pipeline can be used at the same time by two different
> applications). I guess you could, but that's not the primary target,
> as if the pipeline is properly designed you should be able to properly
> partition it using media links and routing.
>
> Happy to discuss your use case in more detail though to make sure
> that, even if not all the required features are there in this first
> version, we're not designing something that makes it impossible to
> support them in future.
>
>> Anthony